WO2021107264A1 - Deep learning-based sparse color sensor image processing device, image processing method and computer-readable medium


Info

Publication number
WO2021107264A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
merged
luminance
pixels
Application number
PCT/KR2020/000795
Other languages
French (fr)
Korean (ko)
Inventor
정용주
Original Assignee
가천대학교 산학협력단
Application filed by 가천대학교 산학협력단
Publication of WO2021107264A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015: Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]

Definitions

  • The present invention relates to a deep learning-based sparse color sensor image processing apparatus, an image processing method, and a computer-readable medium, and more particularly to a deep learning-based sparse color sensor image processing apparatus, image processing method, and computer-readable medium capable of obtaining a high-quality image by restoring color, through an artificial neural network, from an image obtained by an image sensor whose color filter array contains only a very small proportion of color pixels.
  • An image sensor is a semiconductor device that converts an optical image into an electrical signal; charge-coupled device (CCD) image sensors and complementary metal-oxide semiconductor (CMOS) image sensors are widely used.
  • Since the photo-sensing element used in such an image sensor generally measures only the intensity of an optical signal and cannot detect its spectral characteristics, a color filter is placed over each photo-sensing element so that only optical signals in a certain frequency band pass, for a predetermined number of frequency bands. Each pair of color filter and photo-sensing element thus obtains the intensity of the optical signal for one frequency band, and color image data (R, G, B data) is obtained therefrom.
  • a color filter array CFA in which color filters such as red, green, and blue are regularly arranged is used.
  • a typical image sensor is composed of a color filter array 20 and a sensor array 30 in which a light sensing element for detecting light passing through the color filter array 20 is arranged as shown in FIG. 1 .
  • a red filter (R), a green filter (G), and a blue filter (B) are regularly arranged in the color filter array 20 .
  • One such color filter 20.1 corresponds to one photo-sensing element 30.1: the photo-sensing element 30.1 detects the intensity of the light passing through the color filter 20.1, i.e., the intensity of the light of that color.
  • The minimum repeating unit of the Bayer pattern shown in FIG. 1 spans two rows and two columns, that is, a 2 × 2 array structure, in which the red filter (R), green filter (G), and blue filter (B) occur in a ratio of 1:2:1.
  • a red filter (R) and a blue filter (B) are arranged in a diagonal direction, and two green filters (G) are arranged in a diagonal direction that intersects the same.
  • The light passing through each color filter is detected by the corresponding photo-sensing element, and an image is formed based on the intensity of the detected light.
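As a quick illustration of the structure described above (not code from the patent), the 2 × 2 Bayer repeating unit can be tiled over a sensor, which makes the 1:2:1 R:G:B filter ratio explicit:

```python
import numpy as np

# 2 x 2 Bayer repeating unit: R and B on one diagonal, two G on the other.
bayer_unit = np.array([["R", "G"],
                       ["G", "B"]])
cfa = np.tile(bayer_unit, (2, 2))      # each entry marks the filter over one photo-sensing element
print((cfa == "G").sum() / cfa.size)   # 0.5, i.e. R:G:B = 1:2:1
```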
  • Since each photo-sensing element obtains the intensity of only one color, a demosaicing process is required that restores the missing colors of each pixel based on information obtained through adjacent filters of other colors.
  • Meanwhile, part of the light reaching the color filter array is lost in the color filters. The resulting weak signal must therefore be amplified, especially when an image is sensed in a low-light environment, which increases the influence of noise and degrades the quality of the acquired image.
  • An object of the present invention is to provide a deep learning-based sparse color sensor image processing apparatus, an image processing method, and a computer-readable medium capable of obtaining images of excellent quality by restoring colors through an artificial neural network from an image obtained through an image sensor whose color filter array contains only a very small proportion of color pixels.
  • To achieve the above object, an aspect of the present invention provides an image processing apparatus having one or more processors and one or more memories, which receives image data from an electrically connected image sensor and outputs a merged image, the apparatus comprising: a first image input unit configured to receive a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input unit for receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging unit including one or more learned artificial neural networks and generating a merged image based on the first image and the second image, wherein the image merging unit includes: a luminance restoration unit for generating a luminance restored image by deriving the luminance of the regions in which the color pixels are located based on the first image; and a color restoration unit for generating a merged image based on data including the second image and the luminance restored image.
  • In an embodiment of the present invention, the first image includes white pixel information for 90% or more of the total number of pixels of the merged image, and the second image includes color pixel information for less than 10% of the total number of pixels of the merged image.
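The first and second images above can be thought of as a complementary split of the raw sensor readout. A minimal sketch (illustrative names, not the patent's code), assuming a channel map derived from the CFA pattern with -1 for white pixels and 0/1/2 for R/G/B pixels:

```python
import numpy as np

def split_images(raw, channel_map):
    """Split a single-channel sensor readout into the mono 'first image'
    (white pixels, with holes at color-pixel sites) and the sparse
    3-channel 'second image' (color pixels only)."""
    h, w = raw.shape
    first = np.where(channel_map < 0, raw, 0.0)        # brightness; color sites left as holes
    second = np.zeros((h, w, 3), dtype=raw.dtype)      # sparse color information
    ys, xs = np.nonzero(channel_map >= 0)
    second[ys, xs, channel_map[ys, xs]] = raw[ys, xs]  # each sample goes to its R/G/B channel
    return first, second
```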
  • the luminance restoration unit and the color restoration unit may generate the luminance restoration image and the merged image based on each learned artificial neural network.
  • In an embodiment of the present invention, the image merging unit further includes a boundary extraction unit for generating a boundary image including color boundary information of the luminance restored image generated through the luminance restoration unit, and the color restoration unit may generate the merged image based on the second image, the luminance restored image, and the boundary image.
  • the color restoration unit may be configured with a generative adversarial network (GAN) to generate a merged image.
  • the color restoration unit includes: a color restoration module for generating a merged image based on data including the second image and the luminance restoration image; and an evaluation module for evaluating the merged image generated by the color restoration module.
  • the color restoration module and the evaluation module may perform mutual feedback.
  • In an embodiment of the present invention, the evaluation module learns by receiving a plurality of pre-stored learning merged images, evaluating the authenticity of those learning merged images, and performing feedback based on the evaluation results. The color restoration module learns by receiving a plurality of pre-stored sets of learning second images, learning luminance restored images, and learning boundary images to generate a learning merged image, and performing feedback based on the evaluation result the evaluation module produces for the generated merged image.
  • In an embodiment of the present invention, the color restoration module may be configured as a high-density U-net (dense U-Net), and the evaluation module may be configured as a stacked convolutional neural network.
  • the luminance restoration unit may be configured as a stacked convolutional neural network.
  • the artificial neural network of each of the luminance restoration unit and the color restoration unit may be an artificial neural network in which learning is performed independently.
  • Another aspect of the present invention provides an image processing method performed in a computing device having one or more processors and one or more memories, the method receiving image data from an electrically connected image sensor and outputting a merged image, and comprising: a first image acquisition step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image acquisition step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by using one or more learned artificial neural networks.
  • The image merging step comprises: a luminance restoration step of generating a luminance restored image by deriving the luminance of the areas in which the color pixels are located based on the first image; and a color restoration step of generating a merged image based on the second image and the luminance restored image.
  • Another aspect of the present invention provides a computer-readable medium for implementing an image processing method that receives image data from an electrically connected image sensor and outputs a merged image. The computer-readable medium stores instructions for causing a computing device to perform the following steps: a first image acquisition step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image acquisition step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by using one or more learned artificial neural networks, wherein the image merging step comprises: a luminance restoration step of generating a luminance restored image by deriving the luminance of the areas in which the color pixels are located based on the first image; and a color restoration step of generating a merged image based on the second image and the luminance restored image.
  • Another aspect of the present invention provides an image processing apparatus having one or more processors and one or more memories, which receives image data from an electrically connected image sensor and outputs a merged image, the apparatus comprising: a first image input unit for receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input unit for receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging unit for generating a merged image based on the first image and the second image by a learned artificial neural network.
  • the first image includes white pixel information of 70% or more of the total number of pixels of the merged image
  • the second image includes color pixel information of 30% or less of the total number of pixels of the merged image.
  • In an embodiment of the present invention, each of the plurality of color pixels is any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and in the second image the plurality of color pixels may be arranged according to a preset pattern.
  • the preset pattern may include a plurality of color pixel groups each including one or more of the R, G, and B pixels and arranged adjacently.
  • the plurality of color pixel groups may be arranged periodically according to a predetermined interval.
  • the learned artificial neural network may be learned based on the structural similarity (SSIM) of the image.
  • In an embodiment of the present invention, the learned artificial neural network includes a convolutional neural network; the first image is 1-channel data including only brightness information, and the second image is 3-channel data including color information; and the convolutional neural network may use the data of the second image as a color hint.
  • the output of the first convolution block of the second image may be input to the first convolution block of the first image.
  • Another aspect of the present invention provides an image processing method performed in a computing device having one or more processors and one or more memories, the method receiving image data from an electrically connected image sensor and outputting a merged image.
  • the image processing method may include: a first image input step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by the learned artificial neural network.
  • the first image includes white pixel information of 70% or more of the total number of pixels of the merged image
  • The second image includes color pixel information of 30% or less of the total number of pixels of the merged image.
  • In an embodiment of the present invention, each of the plurality of color pixels is any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and in the second image the plurality of color pixels may be arranged according to a preset pattern.
  • the preset pattern may include a plurality of color pixel groups each including one or more of the R, G, and B pixels and arranged adjacently.
  • Another aspect of the present invention provides a computer-readable medium for implementing an image processing method of receiving image data from an electrically connected image sensor and outputting a merged image, the computer-readable medium storing instructions for causing a computing device to perform: a first image input step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by a learned artificial neural network.
  • In an embodiment of the present invention, each of the plurality of color pixels is any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and in the second image the plurality of color pixels may be arranged according to a preset pattern.
  • the preset pattern may include a plurality of color pixel groups each including one or more of the R, G, and B pixels and arranged adjacently.
  • Another aspect of the present invention provides an image sensing device comprising: an image sensor including a color filter array including filters for a plurality of pixels, a sensor array in which a plurality of sensors for sensing light are arranged corresponding to the plurality of pixels of the color filter array, and an image sensor substrate for generating image data based on the data sensed from the sensor array; and an image processing apparatus electrically connected to the image sensor, receiving the image data and outputting a merged image, wherein the number of white pixels in the color filter array is 70% or more of the total number of pixels, and the image processing apparatus includes: a first image input unit for receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input unit for receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging unit that generates a merged image based on the first image and the second image by means of a learned artificial neural network.
  • According to an embodiment of the present invention, an artificial neural network can perform high-quality color restoration based on color information obtained through only a very small proportion of color pixels.
  • FIG. 1 is a diagram schematically illustrating the structure of a color filter array and a sensor array of a general image sensor.
  • FIG. 2 is a diagram schematically illustrating a structure of an image sensing device according to an embodiment of the present invention.
  • FIG. 3 is a diagram schematically illustrating an internal structure of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 4 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
  • FIG. 5 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
  • FIG. 6 is a diagram schematically illustrating operations of an image merging unit and a merged image output unit according to an embodiment of the present invention.
  • FIG. 7 is a diagram schematically illustrating an operation of an artificial neural network of an image merging unit according to an embodiment of the present invention.
  • FIG. 8 is a diagram schematically illustrating the structure of a convolutional neural network of an image merging unit according to an embodiment of the present invention.
  • FIG. 9 is a diagram schematically illustrating each step of an image sensing method according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating an image obtained according to an embodiment of the present invention.
  • FIG. 11 is a diagram illustrating an image obtained according to an embodiment of the present invention.
  • FIG. 12 is a ground truth image for the images of FIGS. 10 and 11 .
  • FIG. 13 is a diagram schematically illustrating an internal configuration of an image merging unit according to an embodiment of the present invention.
  • FIG. 14 is a diagram schematically illustrating the structure of an artificial neural network of a luminance restoration unit, a color restoration module, and an evaluation module according to an embodiment of the present invention.
  • FIG. 15 is a diagram exemplarily showing a first image, a second image, a luminance restored image, and a boundary image according to an embodiment of the present invention.
  • FIG. 16 is a diagram schematically illustrating a learning step of an image merging unit according to an embodiment of the present invention.
  • FIG. 17 is a diagram schematically illustrating each step of an image processing method according to an embodiment of the present invention.
  • FIG. 18 is a diagram schematically illustrating detailed steps of an image merging step according to an embodiment of the present invention.
  • FIG. 19 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
  • FIG. 20 is a block diagram illustrating an example of an internal configuration of a computing device according to an embodiment of the present invention.
  • Terms such as first, second, etc. may be used to describe various elements, but the elements are not limited by these terms; the terms are used only for the purpose of distinguishing one component from another.
  • For example, a first component may be referred to as a second component, and similarly a second component may be referred to as a first component. The term "and/or" includes a combination of a plurality of related listed items or any one of the plurality of related listed items.
  • FIG. 2 is a diagram schematically illustrating a structure of an image sensing device according to an embodiment of the present invention.
  • Referring to FIG. 2, an image sensing apparatus includes: an image sensor 5000 including a color filter array 2000 including filters for a plurality of pixels, a sensor array 3000 in which a plurality of sensors for sensing light are arranged corresponding to the plurality of pixels of the color filter array, and an image sensor substrate 4000 for generating image data based on the data sensed from the sensor array; and an image processing apparatus 1000 electrically connected to the image sensor 5000, receiving the image data, and outputting a merged image.
  • the color filter array 2000 includes filters for a plurality of pixels, and the filters pass light so that each sensor of the sensor array 3000 can detect light.
  • the color filter array may include a plurality of micro lenses to collect and transmit light to a corresponding sensor.
  • the sensor array 3000 has a plurality of sensors for detecting light corresponding to the filter of the color filter array 2000, and detects the light passing through the filter.
  • the sensor may be configured as CCD or CMOS to detect light.
  • the image sensor substrate 4000 generates image data from information detected by each sensor of the sensor array 3000 .
  • the image data includes information on the light detected by each sensor of the sensor array 3000 and information on the position of the sensor, so that a two-dimensional image can be generated.
  • Each pixel is either a white pixel that transmits light of all colors or a color pixel that passes only light of a preset color.
  • A white pixel transmits light of all colors, and the sensor corresponding to the white pixel detects the intensity of the light and provides brightness information; a color pixel transmits light of a preset color (e.g., red, green, or blue), and the sensor corresponding to the color pixel senses the intensity of light of that preset color and provides color information.
  • the image processing apparatus 1000 generates a final output image based on the brightness information and the color information.
  • the number of white pixels is 90% or more of the total number of pixels. In this embodiment, the number of color pixels is 10% or less of the total number of pixels.
  • the number of white pixels is 95% or more of the total number of pixels. In this embodiment, the number of color pixels is 5% or less of the total number of pixels.
  • the number of the white pixels is 99% or more of the total number of pixels.
  • the number of color pixels is less than or equal to 1% of the total number of pixels.
  • the image processing apparatus 1000 may perform a process of restoring color through an artificial neural network.
  • FIG. 3 is a diagram schematically illustrating an internal structure of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating internal components of the image processing apparatus 1000 according to an embodiment of the present invention.
  • the image processing apparatus 1000 may include a processor 100 , a bus 200 , a network interface 300 , and a memory 400 .
  • the memory may include an operating system 410 , an image merging routine 420 , and sensor information 430 .
  • the processor 100 may include a first image input unit 110 , a second image input unit 120 , an image merging unit 130 , and a merged image output unit 140 .
  • the image processing apparatus 1000 may include more components than those of FIG. 3 .
  • the memory is a computer-readable recording medium and may include a random access memory (RAM), a read only memory (ROM), and a permanent mass storage device such as a disk drive.
  • program codes for the operating system 410 , the image merging routine 420 , and the sensor information 430 may be stored in the memory.
  • These software components may be loaded from a computer-readable recording medium separate from the memory using a drive mechanism (not shown).
  • the separate computer-readable recording medium may include a computer-readable recording medium (not shown) such as a floppy drive, a disk, a tape, a DVD/CD-ROM drive, and a memory card.
  • the software components may be loaded into the memory through the network interface unit 300 instead of a computer-readable recording medium.
  • the bus 200 may enable communication and data transmission between components of a computing device that controls the image processing apparatus 1000 .
  • the bus may be configured using a high-speed serial bus, a parallel bus, a storage area network (SAN), and/or other suitable communication technology.
  • the network interface unit 300 may be a computer hardware component for connecting a computing device controlling the image processing apparatus 1000 to a computer network.
  • the network interface 300 may connect a computing device controlling the image processing apparatus 1000 to a computer network through a wireless or wired connection.
  • a computing device that controls the image processing apparatus 1000 through the network interface unit 300 may be wirelessly or wiredly connected to the image processing apparatus 1000 .
  • the processor may be configured to process instructions of a computer program by performing basic arithmetic, logic, and input/output operations of the computing device controlling the image processing apparatus 1000 .
  • The instructions may be provided to the processor 100 by the memory or the network interface unit 300 via the bus 200.
  • the processor may be configured to execute program codes for the first image input unit 110 , the second image input unit 120 , the image merging unit 130 , and the merged image output unit 140 .
  • Such program code may be stored in a recording device such as a memory.
  • The first image input unit 110, the second image input unit 120, the image merging unit 130, and the merged image output unit 140 may be configured to perform the operations of the image processing apparatus 1000 described below.
  • Depending on the method of controlling the image processing apparatus 1000, some components of the processor may be omitted, additional components not shown may be included, or two or more components may be combined.
  • the image processing apparatus 1000 includes: a first image input unit 110 for receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input unit 120 for receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; an image merging unit 130 for generating a merged image based on the first image and the second image by the learned artificial neural network; and a merged image output unit 140 for outputting the merged image to the outside.
  • the first image input unit 110 receives the first image obtained from the sensor of the sensor array 3000 corresponding to the white pixel.
  • the first image may be generated from information sensed by a sensor of the sensor array 3000 corresponding to the white pixel.
  • the first image may include white pixel information of 95% or more of the total number of pixels of the merged image.
  • The first image may be generated by receiving only the information sensed by the sensors of the sensor array 3000 corresponding to the white pixels, or by taking an image obtained from all sensors of the sensor array 3000 and removing the information of the pixels corresponding to the positions of the color-pixel sensors, leaving only the pixels corresponding to the positions of the white-pixel sensors.
  • the first image as described above is composed of information obtained by the sensor corresponding to the white pixel and may be generated as a mono image without color information.
  • the second image input unit 120 receives the second image obtained from the sensor of the sensor array 3000 corresponding to the color pixel.
  • the second image may be generated from information sensed by a sensor of the sensor array 3000 corresponding to the color pixel.
  • the second image may include color pixel information of 5% or less of the total number of pixels of the merged image.
  • The second image may be generated by receiving only the information sensed by the sensors of the sensor array 3000 corresponding to the color pixels, or by taking an image obtained from all sensors of the sensor array 3000 and removing the information of the pixels corresponding to the positions of the white-pixel sensors, leaving only the pixels corresponding to the positions of the color-pixel sensors.
  • the second image may be generated as a color image by being composed of information obtained by the sensor corresponding to the color pixel.
  • the image merging unit 130 receives the first image and the second image from the first image input unit 110 and the second image input unit 120 , and based on the first image and the second image to create a merged image.
  • Since the first image is a mono image including only brightness information and the second image is a color image including color information, the image merging unit 130 merges the first image and the second image to generate a merged image fully equipped with both brightness information and color information.
  • the image merging unit 130 generates a merged image by merging the first image and the second image through a learned artificial neural network.
  • the image merging unit 130 may merge the first image and the second image through two-step processing. In the first step, a luminance restored image may be generated by restoring the luminance of the missing pixel of the first image, and in the second step, a merged image may be generated based on the luminance restored image and the second image.
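A minimal sketch of this two-step processing (illustrative names, not the patent's code); `luminance_net` and `color_net` stand for the learned artificial neural networks of the two steps:

```python
def merge(first_image, second_image, luminance_net, color_net):
    # Step 1: restore the luminance of the pixels where the color pixels sit
    # (the holes of the first image).
    luminance_restored = luminance_net(first_image)
    # Step 2: generate the merged image from the luminance restored image
    # and the sparse color information of the second image.
    return color_net(luminance_restored, second_image)
```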
  • the merged image output unit 140 outputs the merged image generated by the image merging unit 130 to the outside.
  • That is, the merged image output unit 140 transmits the merged image, generated by sensing light through the image sensing device and processing the sensed light information, to a connected device so that a user or the like can use it.
  • the merged image output unit 140 may transmit the merged image to an external computing device or the like through the network interface 300 .
  • FIG. 4 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
  • The color filter array 2000 includes a plurality of pixels, wherein each pixel is either a white pixel that passes light of all colors or a color pixel that passes only light of a preset color; each color pixel is any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light; and the color pixels are arranged on the color filter array according to a preset pattern.
  • As shown in FIG. 4, the color filter array 2000 includes three types of color pixels (R, G, B) and white pixels (W), and the color pixels are arranged according to a pattern preset in the color filter array 2000. The preset pattern shown in FIG. 4 is only one embodiment of the present invention, and the color pixels of the present invention may be arranged in other patterns not shown in FIG. 4.
  • The preset pattern according to an embodiment of the present invention is a repeating pattern having a size of 10 × 10 pixels. When the n-th pixel from the left and the m-th pixel from the top in the 10 × 10 pattern is referred to as the (n,m) pixel, the (1,1) pixel is a B pixel, the (10,1) pixel is a G pixel, the (5,5) pixel is an R pixel, the (6,5) and (5,6) pixels are G pixels, the (6,6) pixel is a B pixel, the (1,10) pixel is a G pixel, the (10,10) pixel is an R pixel, and all other pixels are white pixels 2200.
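For concreteness, a sketch of this 10 × 10 repeating unit as a channel map (illustrative encoding: -1 = white, 0 = R, 1 = G, 2 = B; indices are 0-based, so the (n,m) pixel above is tile[m-1, n-1]):

```python
import numpy as np

tile = -np.ones((10, 10), dtype=int)                         # start with all white pixels
tile[0, 0], tile[0, 9], tile[9, 0], tile[9, 9] = 2, 1, 1, 0  # corner group 2100.b: B, G, G, R
tile[4, 4], tile[4, 5], tile[5, 4], tile[5, 5] = 0, 1, 1, 2  # center group 2100.a: R, G, G, B
cfa = np.tile(tile, (4, 4))   # repeat the pattern over the sensor
print((tile >= 0).mean())     # 0.08 -> 8% color pixels, 92% white pixels
```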
  • the sensors of the sensor array 3000 corresponding to the pixels can acquire brightness information and color information, respectively.
  • The preset pattern includes color pixel groups, each including one or more of the R pixel, the G pixel, and the B pixel arranged adjacently, and the color pixel groups are arranged regularly within the preset pattern.
  • This group of color pixels includes R pixels ((5,5) pixels), G pixels ((6,5) pixels, (5,6) pixels), and B pixels ((6,6) pixels).
  • The (1,1), (10,1), (1,10), and (10,10) pixels are located at the corners of the 10 × 10 pattern; as the pattern repeats, they become adjacent to one another and form color pixel group 2100.b.
  • Since the preset pattern includes color pixel groups in which at least one R pixel, G pixel, and B pixel are arranged adjacently, it can provide the accurate color information required for color restoration and thereby enable higher-quality color restoration.
  • Information on such a preset pattern may be stored in the sensor information 430 of the memory 400 and utilized in the image processing apparatus 1000 .
  • The first image input unit 110 may acquire a first image from the image data received from the image sensor 5000 based on the information on the preset pattern.
  • FIG. 5 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
  • FIG. 5 shows a pattern of a color filter array having an arrangement different from that shown in FIG. 4 .
  • As shown in FIG. 5, the color filter array 2000 includes three types of color pixels (R, G, B) and white pixels (W), and the color pixels are arranged according to a pattern preset in the color filter array 2000. The preset pattern shown in FIG. 5 is only one embodiment of the present invention, and the color pixels of the present invention may be arranged in other patterns not shown in FIG. 5.
  • The preset pattern according to an embodiment of the present invention is a repeating pattern having a size of 18 × 18 pixels. The n-th pixel from the left and the m-th pixel from the top in the 18 × 18 pattern is referred to as the (n,m) pixel.
  • The (9,9) pixel is an R pixel, the (10,9) and (9,10) pixels are G pixels, and the (10,10) pixel is a B pixel; these form the color pixels 2100, and all remaining pixels are white pixels 2200. In this pattern, 4 of the 324 pixels are color pixels, so the ratio of color pixels is approximately 1.2% and the ratio of white pixels is approximately 98.8%, a pattern with a very low proportion of color pixels.
  • the sensors of the sensor array 3000 corresponding to the pixels can acquire brightness information and color information, respectively.
  • FIG. 6 is a diagram schematically illustrating operations of an image merging unit and a merged image output unit according to an embodiment of the present invention.
  • The image merging unit 130 receives the first image 1 and the second image 2 from the first image input unit 110 and the second image input unit 120, respectively.
  • the first image 1 and the second image 2 shown in FIG. 6 are images obtained by the pattern of the color filter array 2000 as shown in FIG. 4 .
  • The first image 1 is obtained from the sensors of the sensor array corresponding to the white pixels 2200, i.e., excluding the color pixels 2100 of the color filter array 2000; in the area other than the color pixels 2100 (shown in white), it includes brightness information obtained by sensing the light intensity at each pixel.
  • As shown in FIG. 6, the first image 1 includes brightness information for all areas except those of the color pixel groups 2100.a and 2100.b arranged by the preset pattern, which remain as holes.
  • The second image 2 is obtained from the sensors of the sensor array corresponding to the color pixels 2100 of the color filter array 2000; in the area of the color pixels 2100 (shown in white), it includes color information obtained by sensing the intensity of light of the color set for each pixel. As shown in FIG. 6, the second image 2 includes color information for the areas of the color pixel groups 2100.a and 2100.b.
  • the image merging unit 130 may generate a merged image by merging the first image 1 and the second image 2 based on the learned artificial neural network.
  • the artificial neural network is a convolutional neural network.
  • A convolutional neural network is an artificial neural network combined with filtering techniques, optimized for processing two-dimensional images.
  • the merged image generated through the image merging unit 130 is output to an external device or the like through the merged image output unit 140 .
  • The merged image output unit 140 may store the merged image in a memory card connected through the network interface 300, or transmit the merged image to a cloud service connected through the Internet for storage.
  • The image merging described below operates on images acquired through the color filter array 2000 shown in FIG. 4.
  • FIG. 7 is a diagram schematically illustrating an operation of an artificial neural network of an image merging unit according to an embodiment of the present invention.
  • As shown in FIG. 7(a), the artificial neural network 135 receives first learning data corresponding to a first image obtainable according to an embodiment of the present invention and second learning data corresponding to a second image, and merges them to generate a learning merged image.
  • Like the first image, the first learning data is a mono image including brightness information for the region excluding the color pixels 2100; like the second image, the second learning data is a color image including color information for the region of the color pixels 2100. The first and second learning data constitute a training data set capable of supervised learning of the artificial neural network 135.
  • the artificial neural network 135 also receives the third learning data.
  • The third learning data is the color image from which the first learning data and the second learning data were generated, and serves as the ground truth image for the learning merged image generated by merging the first learning data and the second learning data.
  • The third learning data, together with the first learning data and the second learning data, constitutes the training data set for supervised learning of the artificial neural network 135.
  • After receiving the third learning data as shown in FIG. 7(a), the artificial neural network 135 compares the learning merged image with the third learning data and performs learning so that the learning merged image generated from the first and second learning data becomes closer to the third learning data. This may be performed by changing the internal parameters of the artificial neural network 135 through a backpropagation algorithm or the like.
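A minimal sketch of one such supervised learning step (PyTorch, illustrative; `net`, the optimizer, and the L1 distance are assumptions, not details from the patent):

```python
import torch.nn.functional as F

def train_step(net, optimizer, first, second, ground_truth):
    optimizer.zero_grad()
    merged = net(first, second)             # learning merged image
    loss = F.l1_loss(merged, ground_truth)  # distance to the third learning data
    loss.backward()                         # backpropagation
    optimizer.step()                        # update the network's internal parameters
    return loss.item()
```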
  • FIG. 7B illustrates a process in which the learned artificial neural network 135 generates a merged image.
  • the artificial neural network 135 learned through the same process as in FIG. 7A receives the first image and the second image and generates a merged image.
  • That is, the artificial neural network 135, which has performed learning through the training data set composed of the first learning data, the second learning data, and the third learning data, receives the first image and the second image as input and generates a merged image.
  • FIG. 8 is a diagram schematically illustrating the structure of a convolutional neural network of an image merging unit according to an embodiment of the present invention.
  • the first image is 1-channel data including only brightness information
  • the second image is 3-channel data including color information.
  • The first image and the second image may each be input to the convolutional neural network.
  • the convolutional neural network may be configured as a cross-channel autoencoder using the second image as a color hint.
  • the convolutional neural network includes ten convolution blocks from a first convolution block to a tenth convolution block.
  • Each convolution block includes two to three pairs of convolution and activation functions. The first and tenth convolution blocks have a size of height H, width W, and 64 channels; the second and ninth convolution blocks have a size of height H/2, width W/2, and 128 channels; and the third and eighth convolution blocks have a size of height H/4, width W/4, and 256 channels.
  • the fourth to seventh convolution blocks have a height of H/8, a width of W/8, and a size of 512 channels.
  • the activation function is a Rectified Linear Unit (ReLU).
  • The spatial resolution is reduced by downsampling from the first convolution block to the second, from the second to the third, and from the third to the fourth; it is increased by upsampling from the seventh convolution block to the eighth, from the eighth to the ninth, and from the ninth to the tenth.
  • the output of the first convolutional block of the second image is connected to the first convolutional block of the first image.
  • the output of the first convolution block is connected to the tenth convolution block
  • the output of the second convolution block is connected to the ninth convolution block
  • the output of the third convolution block is connected to the eighth convolution block.
  • The last layer of the tenth convolution block uses a 1 × 1 kernel and generates a 3-channel output color.
  • the output color of the three channels may be an RGB color value of an RGB color space.
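A compact PyTorch sketch of a cross-channel autoencoder with the block sizes, skip connections, and color-hint connection described above. This is an illustration under stated assumptions (max pooling for downsampling, nearest-neighbor upsampling, two convolution pairs per block), not the patent's exact network; H and W are assumed divisible by 8:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, pairs=2):
    # "two to three pairs of convolution and activation functions" per block
    layers = []
    for i in range(pairs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

class CrossChannelAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.hint = conv_block(3, 64)      # first convolution block of the 3-channel second image
        self.b1 = conv_block(1 + 64, 64)   # first block of the 1-channel first image + color hint
        self.b2, self.b3 = conv_block(64, 128), conv_block(128, 256)
        self.b4, self.b5 = conv_block(256, 512), conv_block(512, 512)
        self.b6, self.b7 = conv_block(512, 512), conv_block(512, 512)
        self.b8 = conv_block(512 + 256, 256)   # skip connection from block 3
        self.b9 = conv_block(256 + 128, 128)   # skip connection from block 2
        self.b10 = conv_block(128 + 64, 64)    # skip connection from block 1
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2)
        self.out = nn.Conv2d(64, 3, kernel_size=1)  # 1 x 1 kernel, 3-channel output color

    def forward(self, first, second):
        x1 = self.b1(torch.cat([first, self.hint(second)], dim=1))  # H,   64
        x2 = self.b2(self.down(x1))                                 # H/2, 128
        x3 = self.b3(self.down(x2))                                 # H/4, 256
        x = self.b7(self.b6(self.b5(self.b4(self.down(x3)))))       # H/8, 512 (blocks 4-7)
        x = self.b8(torch.cat([self.up(x), x3], dim=1))             # H/4, 256
        x = self.b9(torch.cat([self.up(x), x2], dim=1))             # H/2, 128
        x = self.b10(torch.cat([self.up(x), x1], dim=1))            # H,   64
        return self.out(x)
```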
  • Such a convolutional neural network is trained in the following way.
  • The mapping function of the convolutional neural network may be expressed as Y' = F(M, C; θ), where M is the first image, C is the second image, and θ denotes the parameters of the convolutional neural network.
  • The loss function representing the degree of correspondence between the output value Y' and the actual data Y may be expressed as L(Y', Y), and the convolutional neural network is trained by deriving the parameters θ that satisfy θ* = argmin_θ L(F(M, C; θ), Y).
  • In an embodiment of the present invention, the convolutional neural network is trained using the structural similarity (SSIM) of the image in order to prevent color bleeding in the color restoration process.
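A simplified SSIM-based loss in PyTorch for reference (an assumption-laden sketch: a uniform window via average pooling instead of the Gaussian window most SSIM implementations use, and inputs scaled to [0, 1]):

```python
import torch.nn.functional as F

def ssim_loss(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)    # local means
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2  # local variances
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y  # local covariance
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim.mean()  # minimizing 1 - SSIM maximizes structural similarity
```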
  • FIG. 9 is a diagram schematically illustrating each step of an image processing method according to an embodiment of the present invention.
  • Referring to FIG. 9, an image processing method according to an embodiment of the present invention receives image data from an electrically connected image sensor and outputs a merged image, and includes: a first image acquisition step (S110) of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image acquisition step (S120) of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; an image merging step (S200) of generating a merged image based on the first image and the second image; and a merged image output step (S300) of outputting the merged image to the outside.
  • the first image acquisition step S110 performs the same operation as that of the first image input unit 110 described above.
  • In the first image acquisition step (S110), a first image is obtained by receiving information sensed by the sensors corresponding to the white pixels 2200 among the sensors of the connected sensor array 3000.
  • Such a first image may be configured by receiving only the information sensed by the sensors corresponding to the white pixels 2200, or by receiving the information sensed by all sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the white pixels 2200.
  • the second image acquisition step S120 performs the same operation as that of the second image input unit 120 described above.
  • In the second image acquisition step (S120), a second image is obtained by receiving information sensed by the sensors corresponding to the color pixels 2100 among the sensors of the connected sensor array 3000.
  • Such a second image may be configured by receiving only the information sensed by the sensors corresponding to the color pixels 2100, or by receiving the information sensed by all sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the color pixels 2100.
  • the first image acquisition step S110 and the second image acquisition step S120 may be performed simultaneously.
  • the image merging step (S200) generates a merged image based on the first image and the second image.
  • In the image merging step (S200), the first image and the second image obtained in the first image acquisition step (S110) and the second image acquisition step (S120) are received, and a merged image is generated by merging the first image and the second image.
  • Since the first image is a mono image including only brightness information and the second image is a color image including color information, merging the first image and the second image generates a merged image fully equipped with both brightness information and color information.
  • the first image and the second image are merged through a learned artificial neural network to generate a merged image.
  • the merged image output step (S300) outputs the merged image to the outside.
  • In the merged image output step (S300), the merged image, generated by sensing light through the image sensing device and processing the sensed light information, is transmitted to a connected device so that the user can use it.
  • the merged image may be transmitted to an external computing device or the like through the network interface 300 .
  • FIGS. 10 and 11 are diagrams illustrating images obtained according to an embodiment of the present invention.
  • The first image is a monochromatic image obtained from the white pixels, excluding the color pixel groups of the color filter array 2000; as shown in the upper part of FIG. 10, it is a black-and-white image in which the image information at the positions of the color pixel groups is missing.
  • FIG. 11 is a merged image generated by an image sensing device according to an embodiment of the present invention.
  • a color image including color information may be obtained as shown in FIG. 11 .
  • Although the merged image is color-restored through an artificial neural network based on information obtained through the color filter array 2000 containing a very low ratio of color pixels, it reproduces colors very well.
  • FIG. 12 is a ground truth image for the images of FIGS. 10 and 11 .
  • The image merging described below operates on images acquired through the color filter array 2000 shown in FIG. 5.
  • FIG. 13 is a diagram schematically illustrating an internal configuration of an image merging unit according to an embodiment of the present invention.
  • the image merging unit 130 merges the first image 1 and the second image 2 through a two-step process to form a merged image 3 .
  • As shown in FIG. 13, the image merging unit 130 may include: a luminance restoration unit 131 for generating a luminance restored image 4 by deriving the luminance of the regions in which the color pixels 2100 are located, based on the first image 1; a boundary extraction unit 132 for generating a boundary image 5 including color boundary information of the luminance restored image 4 generated through the luminance restoration unit; and a color restoration unit 133 for generating a merged image 3 based on the second image 2, the luminance restored image 4, and the boundary image 5.
  • In other words, the luminance restoration unit 131 and the boundary extraction unit 132 first process the first image 1 to generate the luminance restored image 4 and the boundary image 5, and the color restoration unit 133 then processes the second image 2, the luminance restored image 4, and the boundary image 5 to generate the merged image 3.
  • The first image 1 includes brightness information for the remaining area, with the pixels where the color pixels 2100 arranged in the preset pattern are located left as holes.
  • the luminance restoration unit 131 may generate the luminance restoration image 4 by receiving the first image 1 and deriving and filling the brightness information on the pixel in which the color pixel 2100 is located.
  • The color restoration unit 133 may generate the merged image 3 by merging the luminance information of the luminance restored image 4 with the color information of the second image 2.
  • the luminance restoration unit 131 and the color restoration unit 133 generate the luminance restoration image 4 and the merged image 3 based on each artificial neural network.
  • the luminance restoration unit 131 includes an artificial neural network for generating the luminance restoration image 4
  • the color restoration unit 133 uses an artificial neural network for generating the merged image 3 .
  • the boundary extractor 132 may generate the boundary image 5 based on edge information extracted during the process in which the artificial neural network of the luminance restoration unit 131 generates the luminance restoration image 4 .
  • The boundary image 5 generated in this way is used as information for generating the merged image 3 by the color restoration unit 133, thereby reducing the color bleeding that may occur in the process of generating the merged image 3.
  • Alternatively, while the color restoration unit 133 generates the merged image 3 by an artificial neural network, the luminance restoration unit 131 may generate the luminance restored image 4 in a way that does not use an artificial neural network, such as a known pixel interpolation algorithm.
  • the color restoration unit 133 may be configured with a generative adversarial network (GAN) to generate the merged image 3 .
  • the generative adversarial neural network is an artificial neural network structure that includes two neural networks, a generator and a discriminator, and improves the performance of the generator by competitively learning the generator and the discriminator.
  • In a generative adversarial network, the generator generates data, and the discriminator determines whether input data is real data or generated false data. Through learning, the discriminator improves its ability to discriminate real data, and the generator improves its ability to generate false data so close to the real data that the discriminator cannot discriminate it.
  • the color restoration unit 133 includes a color restoration module for generating a merged image based on the second image 2, the luminance restoration image 4, and the boundary image 5 ( 133a); and an evaluation module (133b) for evaluating the merged image generated by the color restoration module; Including, the color restoration module and the evaluation module may perform mutual feedback.
  • the color restoration module 133a serves as a generator of the generative adversarial neural network
  • the evaluation module 133b acts as a discriminator of the generative adversarial neural network.
  • FIG. 14 is a diagram schematically illustrating the structure of an artificial neural network of the luminance restoration unit 131, the color restoration module 133a, and the evaluation module 133b according to an embodiment of the present invention.
  • FIG. 14 (a) shows the structure of the artificial neural network of the luminance restoration unit 131 according to an embodiment of the present invention.
  • the luminance restoration unit 131 may be configured as a stacked convolutional neural network.
  • it may be composed of a stacked 5-layer convolutional neural network.
  • the first three layers of the convolutional neural network have a depth of 64, and the next two layers have a depth of 128.
  • Each layer has a kernel size of 3 x 3, and ReLU can be used as an activation function.
  • the final layer of the convolutional neural network may have an output of H x W x 1.
  • H and W mean the height and width of the first image 1, respectively.
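A sketch of this five-layer stacked CNN (PyTorch, illustrative): three layers of depth 64 followed by two of depth 128, 3 × 3 kernels, ReLU activations, and a final projection to the H × W × 1 luminance restored image (the projection layer is an assumption about how the stated output shape is produced):

```python
import torch.nn as nn

luminance_net = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 1, 3, padding=1),   # H x W x 1 luminance restored image
)
```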
  • FIG. 14(b) shows the structure of the artificial neural network of the color restoration module 133a according to an embodiment of the present invention.
  • the color restoration module 133a may be configured as a high-density U-net.
  • A high-density U-net (dense U-Net) is a combination of a densely connected convolutional neural network and a U-net: it maintains the overall architecture of the U-net while using high-density blocks, whose densely connected layers extract and propagate features.
  • the high-density U-net takes the second image 2 and the luminance restored image 4 as inputs.
  • the second image 2 and the luminance restored image 4 are processed through a first convolutional layer Conv1 having a kernel size of 3 x 3, a depth of 15, and using ReLU as an activation function.
  • the boundary image 5 is connected to the output of the first convolutional layer.
  • Each high-density block (Dense1-3) has a plurality of convolutional layers with a kernel size of 3 x 3 and a stride of 1.
  • Each convolutional layer is normalized by batch normalization, and ReLU is used as the activation function.
  • the output sizes of the three high-density blocks are 64, 128 and 256, respectively.
  • The output of each high-density block passes through a corresponding conversion block.
  • Each conversion block Dense1-3 is composed of a convolution layer and a downsampling layer.
  • the kernel size of the convolutional layer of the conversion block (Dense1-3) is 3 x 3, the stride is 1, and ReLU is used as the activation function.
  • the downsampling layer performs a convolution operation with a stride of 2 at the same kernel size.
  • the third conversion block Dense3 is followed by three convolution blocks Conv2-4.
  • a convolution block has a single convolutional layer.
  • Each layer of the three convolution blocks Conv2-4 has a kernel size of 3 x 3, a stride of 1, and a depth of 512. Also, this layer is normalized by batch normalization and uses ReLU as the activation function.
  • the fourth convolution block (Conv4) is followed by three upsampling blocks (Conv5-7).
  • Each upsampling block Conv5 - 7 is composed of a 2x2 upsampling layer and a convolutional layer having a kernel size of 3x3 following the upsampling layer.
  • the upsampling block (Conv5-7) is also normalized by batch normalization and uses ReLU as an activation function.
  • the output sizes of the three upsampling blocks Conv5-7 are 256, 128, and 64, respectively.
  • Each of the three upsampling blocks Conv5-7 has a separate skip connection with the high-density block Dense1-3, respectively. By such a skip connection, the network can propagate low-level features, thereby contributing to the reconstruction of the final image.
  • after the last upsampling block (Conv7), the eighth convolutional layer (Conv8) is positioned.
  • the output size of the eighth convolutional layer Conv8 is H x W x 3.
  • the layer producing the final merged image uses tanh as its activation function.
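The decoder tail described above can be sketched as follows; concatenation is assumed for the skip connections (the text says only "skip connection"), and nearest-neighbor upsampling is assumed for the 2 x 2 upsampling layer.

```python
# Hedged sketch of the decoder tail of FIG. 14 (b), assuming PyTorch.
import torch
import torch.nn as nn

class UpsamplingBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')  # 2 x 2 upsampling layer
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)
        # skip comes from the matching high-density block; concatenation assumed
        return self.conv(torch.cat([x, skip], dim=1))

# Final layer: Conv8 maps the last 64 feature channels to the H x W x 3
# merged image, with tanh as the activation function.
final_conv = nn.Sequential(nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh())
```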
  • FIG. 14 (c) shows the structure of the artificial neural network of the evaluation module 133b according to an embodiment of the present invention.
  • the evaluation module 133b may be configured as a stacked convolutional neural network.
  • the stacked convolutional neural network receives a second image, a luminance restored image, a boundary image, and a merged image.
  • the first three convolutional layers of the stacked convolutional neural network have a depth of 64, and the next convolutional layer has a depth of 128.
  • the kernel of each convolutional layer has a size of 3 x 3 and a stride of 1. All convolutional layers are normalized by batch normalization, and ReLU is used as the activation function.
  • the last layer of the stacked convolutional neural network performs flattening and uses a sigmoid function as an activation function.
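A sketch of the evaluation module under the same assumptions is shown below; the channel layout of the stacked inputs and the dense layer before the sigmoid are assumptions, since the text specifies only flattening and a sigmoid activation.

```python
# Hedged PyTorch sketch of the evaluation module (discriminator) of FIG. 14 (c).
import torch
import torch.nn as nn

class EvaluationModule(nn.Module):
    def __init__(self, in_ch=8):  # e.g. 3 + 1 + 1 + 3 channels; layout assumed
        super().__init__()
        layers, ch = [], in_ch
        for depth in (64, 64, 64, 128):  # first three layers of depth 64, next of depth 128
            layers += [nn.Conv2d(ch, depth, kernel_size=3, stride=1, padding=1),
                       nn.BatchNorm2d(depth),
                       nn.ReLU(inplace=True)]
            ch = depth
        self.features = nn.Sequential(*layers)
        # the text says the last layer flattens and applies a sigmoid;
        # the single linear unit in between is an assumption
        self.classify = nn.Sequential(nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid())

    def forward(self, second, lum, boundary, merged):
        x = torch.cat([second, lum, boundary, merged], dim=1)
        return self.classify(self.features(x))  # probability that the merged image is genuine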
  • each artificial neural network of the luminance restoration unit 131 and the color restoration unit 133 is independently trained. As described above, the artificial neural network of the luminance restoration unit and the artificial neural network of the color restoration unit are each trained, so that learning can be efficiently performed using learning data suitable for each artificial neural network.
  • FIG. 15 is a view exemplarily showing a first image, a second image, a luminance restored image, and a boundary image according to an embodiment of the present invention.
  • FIG. 15 (a) shows a first image 1 according to an embodiment of the present invention.
  • the first image 1 includes only brightness information and no color information; the pixels corresponding to the color pixels 2200 have no brightness information and therefore appear as black holes.
  • FIG. 15 (b) shows a second image 2 according to an embodiment of the present invention.
  • in the second image 2, color information is present only in the pixels corresponding to the color pixels 2200; the pixels corresponding to the white pixels 2100 carry no color information and are therefore expressed as black.
  • FIG. 15 (c) shows a luminance restored image 4 according to an embodiment of the present invention.
  • the luminance restored image 4 can be created by the luminance restoration unit 131 deriving the brightness information of the pixels corresponding to the color pixels 2200 of the first image 1 shown in FIG. 15 (a) and filling it in.
  • the boundary image 5 may be generated by processing the luminance restored image 4 in the boundary extraction unit 132.
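The patent does not name the filter used by the boundary extraction unit 132 (the boundary extraction step later states only "filter processing"); a Sobel gradient-magnitude filter is assumed in the following sketch purely for illustration.

```python
# Hedged sketch of boundary extraction from the luminance restored image.
import torch
import torch.nn.functional as F

def extract_boundary(lum: torch.Tensor) -> torch.Tensor:
    """lum: (N, 1, H, W) luminance restored image -> (N, 1, H, W) edge map."""
    kx = torch.tensor([[[[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]]], device=lum.device)  # Sobel-x (assumed filter)
    ky = kx.transpose(2, 3)                                    # Sobel-y
    gx = F.conv2d(lum, kx, padding=1)  # horizontal gradients
    gy = F.conv2d(lum, ky, padding=1)  # vertical gradients
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)  # gradient magnitude as the boundary image
```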
  • FIG. 16 is a diagram schematically illustrating a learning step of an image merging unit according to an embodiment of the present invention.
  • FIG. 16A illustrates a process in which learning of the evaluation module 133b is performed.
  • the evaluation module 133b according to an embodiment of the present invention receives a plurality of pre-stored learning merged images, evaluates the authenticity of those images, and is trained through feedback based on the evaluation results.
  • the learning merged images may include true merged images for learning and false merged images for learning.
  • the evaluation module 133b receives such a learning merged image, evaluates its authenticity, derives an evaluation result, and feeds that result back to train the artificial neural network of the evaluation module 133b.
  • through this, the evaluation capability of the evaluation module 133b may be improved.
  • the color restoration module 133a receives a plurality of pre-stored sets of learning second images, learning luminance restored images, and learning boundary images, and generates learning merged images.
  • the evaluation module 133b evaluates each generated learning merged image, and feedback based on the evaluation result drives the learning. That is, the color restoration module 133a receives an image set for learning and generates a learning merged image, and the evaluation module 133b feeds back the result of evaluating whether that merged image is genuine or artificial, so that the generation capability of the color restoration module 133a may be improved.
  • in other words, the color restoration module 133a learns in the direction in which its merged images are judged by the evaluation module 133b to be true merged images, and its ability to generate merged images close to genuine ones may thereby be improved.
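This mutual feedback corresponds to the usual alternating update of a generative adversarial network. The sketch below assumes a standard binary cross-entropy adversarial loss; the actual loss terms, optimizers, and schedules are not specified in the text.

```python
# Hedged sketch of one training step for the scheme of FIG. 16.
import torch
import torch.nn.functional as F

def train_step(gen, disc, g_opt, d_opt, second, lum, boundary, real_merged):
    # (a) evaluation module (discriminator): learn to label true merged
    # images as genuine and generated ones as false
    d_opt.zero_grad()
    fake = gen(second, lum, boundary).detach()
    d_real = disc(second, lum, boundary, real_merged)
    d_fake = disc(second, lum, boundary, fake)
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()

    # (b) color restoration module (generator): learn so that its merged
    # images are judged true by the evaluation module
    g_opt.zero_grad()
    d_gen = disc(second, lum, boundary, gen(second, lum, boundary))
    g_loss = F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```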
  • FIG. 17 is a diagram schematically illustrating each step of an image processing method according to an embodiment of the present invention.
  • the image processing method according to an embodiment of the present invention receives image data from an electrically connected image sensor and outputs a merged image, and includes a first image acquisition step (S110) based on information sensed by a plurality of white pixels of the image sensor, a second image acquisition step (S120) based on information sensed by a plurality of color pixels, an image merging step (S200), and a merged image output step (S300).
  • the first image acquisition step S110 performs the same operation as that of the first image input unit 110 described above.
  • a first image is obtained by receiving information sensed from a sensor corresponding to the white pixel 2100 among the sensors of the connected sensor array 3000 .
  • such a first image may be configured by receiving only the information sensed by the sensors corresponding to the white pixels 2100, or by receiving the information sensed by all sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the white pixels 2100.
  • the second image acquisition step S120 performs the same operation as that of the second image input unit 120 described above.
  • a second image is obtained by receiving information sensed from a sensor corresponding to the color pixel 2200 among the sensors of the connected sensor array 3000 .
  • such a second image may be configured by receiving only the information sensed by the sensors corresponding to the color pixels 2200, or by receiving the information sensed by all sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the color pixels 2200.
  • the first image acquisition step S110 and the second image acquisition step S120 may be performed simultaneously.
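As an illustration of steps S110 and S120, the sketch below splits one raw sensor frame into the first and second images, assuming a per-pixel CFA map; the data layout and names are assumptions, since the text only requires selecting the white-pixel and color-pixel readings.

```python
# Illustrative sketch of the first and second image acquisition steps.
import numpy as np

def acquire_images(raw, cfa):
    """raw: (H, W) sensed intensities; cfa: (H, W) array of characters with
    'W' at white pixels and 'R'/'G'/'B' at color pixels (assumed layout)."""
    # S110: brightness-only first image, with holes at the color pixels
    first = np.where(cfa == 'W', raw, 0.0)
    # S120: sparse 3-channel second image; a color pixel's sensor reading is
    # placed in the channel matching its filter color
    second = np.zeros(raw.shape + (3,))
    for ch, name in enumerate('RGB'):
        second[..., ch] = np.where(cfa == name, raw, 0.0)
    return first, second
```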
  • the image merging step (S200) generates a merged image based on the first image and the second image.
  • in the image merging step (S200), the first image and the second image obtained in the first image acquisition step (S110) and the second image acquisition step (S120) are received, and a merged image is generated by merging them.
  • the first image is a mono image including only brightness information, and the second image is a color image including color information.
  • by merging the first image and the second image, a merged image fully equipped with both brightness information and color information is generated.
  • the first image and the second image are merged through a learned artificial neural network to generate a merged image.
  • the merged image output step (S300) outputs the merged image to the outside.
  • in the merged image output step (S300), the merged image, generated by processing the light information sensed through the image sensing device, is transmitted to a connected device so that the user can use it.
  • the merged image may be transmitted to an external computing device or the like through the network interface 300 .
  • FIG. 18 is a diagram schematically illustrating detailed steps of an image merging step according to an embodiment of the present invention.
  • the image merging step (S200) may include: a luminance restoration step (S210) of generating a luminance restored image by deriving the luminance of the areas in which the color pixels are located, based on the first image; a boundary extraction step (S220) of generating a boundary image including color boundary information of the luminance restored image produced through the luminance restoration step; a color restoration step (S230) of generating a merged image based on the second image, the luminance restored image, and the boundary image; and an evaluation step (S240) of evaluating the merged image generated in the color restoration step.
  • in the luminance restoration step (S210), a luminance restored image is generated by deriving the luminance of the regions in which the color pixels are located, based on the first image.
  • specifically, the luminance restored image 4 may be generated by receiving the first image 1, deriving the brightness information for the pixels in which the color pixels 2200 are located, and filling that information in.
  • the luminance restoration image is generated based on an artificial neural network.
  • in the boundary extraction step (S220), a boundary image including color boundary information of the luminance restored image generated through the luminance restoration step (S210) is generated.
  • Such a boundary image may be obtained through filter processing on the luminance restored image.
  • in the color restoration step (S230), a merged image is generated based on the second image, the luminance restored image, and the boundary image.
  • the merged image may be generated by restoring color information to the luminance restored image based on the brightness information of the luminance restored image and the color information of the second image. In this case, the color can be more accurately restored by using the boundary image as a guide for restoring color information.
  • the merged image is generated based on an artificial neural network.
  • the evaluation step (S240) evaluates the merged image generated in the color restoration step (S230).
  • the evaluation step (S240) evaluates the merged image to determine whether the merged image is an actual image or a generated false image.
  • the evaluation result is fed back to the color restoration step S230 to generate a merged image closer to the actual image in the color restoration step S230.
  • the merged image is evaluated based on the artificial neural network.
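Putting the sub-steps together, the image merging step S200 can be sketched as the following pipeline over the components sketched earlier; everything beyond the step ordering stated in the text is an assumption.

```python
# Hedged end-to-end sketch of the image merging step S200.
def merge_images(first, second, restorer, extract_boundary, gen, disc):
    lum = restorer(first)                        # S210: luminance restoration
    boundary = extract_boundary(lum)             # S220: boundary extraction
    merged = gen(second, lum, boundary)          # S230: color restoration
    score = disc(second, lum, boundary, merged)  # S240: authenticity evaluation, fed back in training
    return merged, score
```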
  • FIG. 19 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
  • the image merging unit 130 suppresses color bleeding through the boundary image 5 and restores the color through a generative adversarial neural network, thereby exhibiting very good color reproduction in the merged image 3 .
  • the color filter array 2000 for obtaining the first image and the second image according to an embodiment of the present invention may have a very low ratio of color pixels 2200 .
  • FIG. 19 shows an example of a color filter array 2000 having such a very low ratio of color pixels 2200 .
  • pixels are denoted in the same way as in FIG. 5. In the embodiment shown in FIG. 19, the (4,4) pixel is an R pixel, the (17,6) and (5,15) pixels are G pixels, and the (14,19) pixel is a B pixel; these four are the color pixels 2200, and all other pixels are white pixels 2100. That is, the color filter array 2000 shown in FIG. 19 includes four color pixels in an array of 20 x 20 pixels, so that the ratio of color pixels is 1% and the ratio of white pixels reaches 99%.
  • the R pixels, G pixels, and B pixels are not arranged in a predetermined pattern but are randomly arranged in a color filter array 2000 having a size of 20 x 20 pixels.
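For illustration, a random sparse color filter array like that of FIG. 19 can be drawn as follows; the figure's fixed positions are one sample of such a random placement, and the helper below is hypothetical.

```python
# Illustrative sketch of a 1%-color random CFA (1 R, 2 G, 1 B in 20 x 20).
import numpy as np

def random_sparse_cfa(h=20, w=20, colors=('R', 'G', 'G', 'B'), seed=None):
    rng = np.random.default_rng(seed)
    cfa = np.full((h, w), 'W')                              # all-white baseline
    flat = rng.choice(h * w, size=len(colors), replace=False)  # random distinct positions
    for pos, c in zip(flat, colors):
        cfa[pos // w, pos % w] = c
    return cfa  # 4 color pixels out of 400 -> 1% color, 99% white
```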
  • even with the color pixels 2200 at a ratio of 1% and randomly arranged as described above, a merged image 3 of excellent quality can be created from the first image 1 and the second image 2 obtained by the color filter array 2000.
  • FIG. 20 is a block diagram illustrating an example of an internal configuration of a computing device according to an embodiment of the present invention.
  • the computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output subsystem (I/O subsystem) 11400, a power circuit 11500, and a communication circuit 11600.
  • the computing device 11000 may correspond to a user terminal or a consultation matching service providing system.
  • the memory 11200 may include, for example, a high-speed random access memory, a magnetic disk, an SRAM, a DRAM, a ROM, a flash memory, or a non-volatile memory.
  • the memory 11200 may include a software module, an instruction set, or other various data necessary for the operation of the computing device 11000 .
  • access to the memory 11200 from other components such as the processor 11100 or the peripheral interface 11300 may be controlled by the processor 11100 .
  • Peripheral interface 11300 may couple input and/or output peripherals of computing device 11000 to processor 11100 and memory 11200 .
  • the processor 11100 may execute a software module or an instruction set stored in the memory 11200 to perform various functions for the computing device 11000 and process data.
  • the input/output subsystem 11400 may couple various input/output peripherals to the peripheral interface 11300 .
  • the input/output subsystem 11400 may include a controller for coupling peripheral devices such as a monitor, keyboard, mouse, printer, or a touch screen or sensor as needed to the peripheral interface 11300 .
  • input/output peripherals may be coupled to peripheral interface 11300 without going through input/output subsystem 11400 .
  • the power circuit 11500 may supply power to all or some of the components of the terminal.
  • the power circuit 11500 may include a power management system, one or more power sources such as batteries or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for the creation, management, and distribution of power.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • the communication circuit 11600 may include an RF circuit to transmit and receive an RF signal, also known as an electromagnetic signal, to enable communication with other computing devices.
  • FIG. 20 is only an example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 20, may further include additional components not shown in FIG. 20, or may have a configuration or arrangement combining two or more components.
  • a computing device for a communication terminal in a mobile environment may further include a touch screen or a sensor in addition to the components shown in FIG. 20, and the communication circuit 11600 may support various communication methods such as WiFi (wireless fidelity), 3G (third generation), LTE (Long Term Evolution), Bluetooth, NFC (near field communication), and Zigbee.
  • Components that may be included in the computing device 11000 may be implemented in hardware, software, or a combination of both hardware and software including an integrated circuit specialized for one or more signal processing or applications.
  • Methods according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded in a computer-readable medium.
  • the program according to the present embodiment may be configured as a PC-based program or an application dedicated to a mobile terminal.
  • the application to which the present invention is applied may be installed in the user terminal through a file provided by the file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file according to a request of the user terminal.
  • according to an embodiment of the present invention, restoring colors through an artificial neural network based on the color information obtained through a very small proportion of color pixels can achieve high-quality color restoration.


Abstract

The present invention relates to a deep learning-based sparse color sensor image processing device, an image processing method, and a computer-readable medium, and more specifically to a device, method, and medium that acquire images of superior quality by restoring colors, through an artificial neural network, from images acquired through an image sensor including a color filter array that comprises a very small percentage of color pixels.

Description

Since the photo-sensing elements used in image sensors generally measure only the intensity of an optical signal and cannot detect its spectral characteristics, a color filter is used to pass only the optical signal of a given frequency band. A color filter and a photo-sensing element are provided for each of a predetermined number of frequency bands, the intensity of the optical signal in each band is obtained, and color image data (R, G, B data) is derived therefrom. Such color filters are used in the form of a color filter array (CFA) in which red, green, and blue filters are regularly arranged.

FIG. 1 schematically illustrates a commonly used image sensor. As shown in FIG. 1, a typical image sensor is composed of a color filter array 20 and a sensor array 30 in which photo-sensing elements that detect the light passing through the color filter array 20 are arranged.

Red filters (R), green filters (G), and blue filters (B) are regularly arranged in the color filter array 20. Each color filter 20.1 corresponds to one photo-sensing element 30.1; the photo-sensing element 30.1 detects the intensity of the light passing through the color filter 20.1, thereby sensing the intensity of light of that color at that position.

The minimum repeating unit of the Bayer pattern shown in FIG. 1 consists of two rows and two columns, that is, a 2 x 2 array structure.

In the Bayer pattern, the red filter (R), green filter (G), and blue filter (B) are present in a ratio of 1:2:1. The red filter (R) and the blue filter (B) are placed along one diagonal, and the two green filters (G) are placed along the crossing diagonal.

With this configuration, the light passing through each color filter is detected by the corresponding photo-sensing element, and an image is formed based on the detected light intensities. However, only the intensity of light of a single color can be read through each color filter, so a demosaicing process is required that restores the colors based on the information obtained through adjacent filters of other colors.

Meanwhile, the light passing through a color filter of the color filter array suffers loss at the filter. In particular, when an image is sensed in a low-light environment, the influence of noise increases while the weak light signal is amplified, lowering the quality of the acquired image.

To improve this, a method has been proposed in which white filters that pass light of all frequencies, rather than color filters, are placed in the color filter array, thereby reducing the influence of noise and allowing a high-quality image to be acquired even in a low-light environment.

In such a method, using the white filters reduces the light loss caused by the color filters even in a low-light environment, but no color information can be obtained from the white filters, which is disadvantageous for color restoration in the demosaicing process. For this reason, current color filter arrays keep the ratio of white filters very low in order to obtain color information, and when the proportion of white filters is increased, conventional methods run into problems in color restoration. It is therefore necessary to develop an apparatus and an image processing algorithm that can acquire high-quality images in low-light environments by a new technique.
An object of the present invention is to provide a deep learning-based sparse color sensor image processing apparatus, an image processing method, and a computer-readable medium capable of acquiring images of excellent quality by restoring colors, through an artificial neural network, from an image acquired through an image sensor including a color filter array that contains a very small proportion of color pixels.
In order to solve the above problems, one embodiment of the present invention provides an image processing apparatus having one or more processors and one or more memories, which receives image data from an electrically connected image sensor and outputs a merged image, the image processing apparatus including: a first image input unit that receives a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input unit that receives a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging unit that includes one or more learned artificial neural networks and generates a merged image based on the first image and the second image, wherein the image merging unit includes: a luminance restoration unit that generates a luminance restored image by deriving the luminance of the regions in which the color pixels are located, based on the first image; and a color restoration unit that generates a merged image based on data including the second image and the luminance restored image.

In one embodiment of the present invention, the first image may include white pixel information amounting to 90% or more of the total number of pixels of the merged image, and the second image may include color pixel information amounting to 10% or less of the total number of pixels of the merged image.

In one embodiment of the present invention, the luminance restoration unit and the color restoration unit may generate the luminance restored image and the merged image based on respective learned artificial neural networks.

In one embodiment of the present invention, the image merging unit may further include a boundary extraction unit that generates a boundary image including color boundary information of the luminance restored image produced through the luminance restoration unit, and the color restoration unit may generate the merged image based on the second image, the luminance restored image, and the boundary image.

In one embodiment of the present invention, the color restoration unit may be configured as a generative adversarial network (GAN) to generate the merged image.

In one embodiment of the present invention, the color restoration unit may include: a color restoration module that generates a merged image based on data including the second image and the luminance restored image; and an evaluation module that evaluates the merged image generated by the color restoration module, wherein the color restoration module and the evaluation module may perform mutual feedback.

In one embodiment of the present invention, the evaluation module may be trained by receiving a plurality of pre-stored learning merged images, evaluating the authenticity of the learning merged images, and performing feedback based on the evaluation results.

In one embodiment of the present invention, the color restoration module may be trained by receiving a plurality of pre-stored sets of learning second images, learning luminance restored images, and learning boundary images, generating learning merged images, and performing feedback based on the evaluation results produced by the evaluation module for the generated learning merged images.

In one embodiment of the present invention, the color restoration module may be configured as a high-density U-net, and the evaluation module may be configured as a stacked convolutional neural network.

In one embodiment of the present invention, the luminance restoration unit may be configured as a stacked convolutional neural network.

In one embodiment of the present invention, the artificial neural networks of the luminance restoration unit and the color restoration unit may each be an artificial neural network trained independently.

In order to solve the above problems, one embodiment of the present invention provides an image processing method performed in a computing device having one or more processors and one or more memories, which receives image data from an electrically connected image sensor and outputs a merged image, the image processing method including: a first image acquisition step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image acquisition step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by one or more learned artificial neural networks, wherein the image merging step includes: a luminance restoration step of generating a luminance restored image by deriving the luminance of the regions in which the color pixels are located, based on the first image; and a color restoration step of generating a merged image based on the second image and the luminance restored image.

One embodiment of the present invention provides a computer-readable medium for implementing an image processing method that receives image data from an electrically connected image sensor and outputs a merged image, the computer-readable medium storing instructions that cause a computing device to perform the following steps: a first image acquisition step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image acquisition step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by one or more learned artificial neural networks, wherein the image merging step includes: a luminance restoration step of generating a luminance restored image by deriving the luminance of the regions in which the color pixels are located, based on the first image; and a color restoration step of generating a merged image based on the second image and the luminance restored image.
In order to solve the above problems, the present invention also provides an image processing apparatus having one or more processors and one or more memories, which receives image data from an electrically connected image sensor and outputs a merged image, the image processing apparatus including: a first image input unit that receives a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input unit that receives a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging unit that generates a merged image based on the first image and the second image by a learned artificial neural network, wherein the first image includes white pixel information amounting to 70% or more of the total number of pixels of the merged image, and the second image includes color pixel information amounting to 30% or less of the total number of pixels of the merged image.

In the present invention, each of the plurality of color pixels may be any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and in the second image the plurality of color pixels may be arranged according to a preset pattern.

In the present invention, the preset pattern may include a plurality of color pixel groups, each including at least one R pixel, one G pixel, and one B pixel arranged adjacently.

In the present invention, the plurality of color pixel groups may be arranged periodically at preset intervals.

In the present invention, the learned artificial neural network may be trained based on the structural similarity (SSIM) of images.

In the present invention, the learned artificial neural network may include a convolutional neural network, the first image may be 1-channel data including only brightness information, the second image may be 3-channel data including color information, and the convolutional neural network may use the data of the second image as a color hint.

In the present invention, the output of the first convolution block for the second image may be input to the first convolution block for the first image.
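A minimal sketch of this color-hint connection, assuming PyTorch and illustrative channel depths, is shown below; the patent fixes only that the output of the second image's first convolution block feeds the first image's first convolution block.

```python
# Hedged sketch of the color-hint fusion between the two input branches.
import torch
import torch.nn as nn

class ColorHintFusion(nn.Module):
    def __init__(self, ch=64):  # channel depth is an assumption
        super().__init__()
        # first convolution block of the second (3-channel color) image
        self.hint_block = nn.Sequential(nn.Conv2d(3, ch, kernel_size=3, padding=1),
                                        nn.ReLU(inplace=True))
        # first convolution block of the first (1-channel mono) image,
        # which also receives the color-hint features
        self.mono_block = nn.Sequential(nn.Conv2d(1 + ch, ch, kernel_size=3, padding=1),
                                        nn.ReLU(inplace=True))

    def forward(self, first_img, second_img):
        hint = self.hint_block(second_img)  # features of the sparse color image
        return self.mono_block(torch.cat([first_img, hint], dim=1))  # hint injected into the mono branch
```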
In order to solve the above problems, the present invention also provides an image processing method performed in a computing device having one or more processors and one or more memories, which receives image data from an electrically connected image sensor and outputs a merged image, the image processing method including: a first image input step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by a learned artificial neural network, wherein the first image includes white pixel information amounting to 70% or more of the total number of pixels of the merged image, and the second image includes color pixel information amounting to 30% or less of the total number of pixels of the merged image.

In the present invention, each of the plurality of color pixels may be any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and in the second image the plurality of color pixels may be arranged according to a preset pattern.

In the present invention, the preset pattern may include a plurality of color pixel groups, each including at least one R pixel, one G pixel, and one B pixel arranged adjacently.

In order to solve the above problems, the present invention also provides a computer-readable medium for implementing an image processing method that receives image data from an electrically connected image sensor and outputs a merged image, the computer-readable medium storing instructions that cause a computing device to perform the following steps: a first image input step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image input step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and an image merging step of generating a merged image based on the first image and the second image by a learned artificial neural network, wherein the first image includes white pixel information amounting to 70% or more of the total number of pixels of the merged image, and the second image includes color pixel information amounting to 30% or less of the total number of pixels of the merged image.

In the present invention, each of the plurality of color pixels may be any one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and in the second image the plurality of color pixels may be arranged according to a preset pattern.

In the present invention, the preset pattern may include a plurality of color pixel groups, each including at least one R pixel, one G pixel, and one B pixel arranged adjacently.

In order to solve the above problems, the present invention also provides an image sensing device including: an image sensor including a color filter array including filters for a plurality of pixels, a sensor array in which a plurality of sensors that sense light are arranged corresponding to the plurality of pixels of the color filter array, and an image sensor substrate that generates image data based on the data sensed by the sensor array; and an image processing apparatus electrically connected to the image sensor, which receives the image data and outputs a merged image, wherein the number of white pixels in the color filter array is 70% or more of the total number of pixels, and the image processing apparatus includes: a first image input unit that receives a first image generated based on information sensed by the plurality of white pixels of the image sensor; a second image input unit that receives a second image generated based on information sensed by the plurality of color pixels of the image sensor; and an image merging unit that generates a merged image based on the first image and the second image by a learned artificial neural network, wherein the first image includes white pixel information amounting to 70% or more of the total number of pixels of the merged image, and the second image includes color pixel information amounting to 30% or less of the total number of pixels of the merged image.
According to an embodiment of the present invention, by sensing light using a color filter array with a very high ratio of white pixels, an image of excellent quality can be acquired in a low-light environment.

According to an embodiment of the present invention, restoring colors through an artificial neural network based on the color information obtained through a very small proportion of color pixels can achieve high-quality color restoration.

According to an embodiment of the present invention, by generating a boundary image and using it in color restoration, color bleeding during the color restoration process can be reduced.

According to an embodiment of the present invention, restoring colors through a generative adversarial network can prevent false colors from being reproduced.
FIG. 1 is a diagram schematically illustrating the structure of a color filter array and a sensor array of a typical image sensor.

FIG. 2 is a diagram schematically illustrating the structure of an image sensing device according to an embodiment of the present invention.

FIG. 3 is a diagram schematically illustrating the internal structure of an image processing apparatus according to an embodiment of the present invention.

FIG. 4 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.

FIG. 5 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.

FIG. 6 is a diagram schematically illustrating the operations of an image merging unit and a merged image output unit according to an embodiment of the present invention.

FIG. 7 is a diagram schematically illustrating the operation of an artificial neural network of an image merging unit according to an embodiment of the present invention.

FIG. 8 is a diagram schematically illustrating the structure of a convolutional neural network of an image merging unit according to an embodiment of the present invention.

FIG. 9 is a diagram schematically illustrating each step of an image sensing method according to an embodiment of the present invention.

FIG. 10 is a diagram illustrating an image acquired according to an embodiment of the present invention.

FIG. 11 is a diagram illustrating an image acquired according to an embodiment of the present invention.

FIG. 12 is a ground truth image for the images of FIGS. 10 and 11.

FIG. 13 is a diagram schematically illustrating the internal configuration of an image merging unit according to an embodiment of the present invention.

FIG. 14 is a diagram schematically illustrating the structure of the artificial neural networks of a luminance restoration unit, a color restoration module, and an evaluation module according to an embodiment of the present invention.

FIG. 15 is a view exemplarily showing a first image, a second image, a luminance restored image, and a boundary image according to an embodiment of the present invention.

FIG. 16 is a diagram schematically illustrating a learning step of an image merging unit according to an embodiment of the present invention.

FIG. 17 is a diagram schematically illustrating each step of an image processing method according to an embodiment of the present invention.

FIG. 18 is a diagram schematically illustrating detailed steps of an image merging step according to an embodiment of the present invention.

FIG. 19 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.

FIG. 20 is a block diagram illustrating an example of the internal configuration of a computing device according to an embodiment of the present invention.
Hereinafter, various embodiments and/or aspects are disclosed with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth to aid the overall understanding of one or more aspects. However, it will also be recognized by those of ordinary skill in the art that such aspect(s) may be practiced without these specific details. The following description and the accompanying drawings set forth in detail certain illustrative aspects of the one or more aspects. These aspects are, however, exemplary; some of the various methods within the principles of the various aspects may be employed, and the descriptions set forth are intended to include all such aspects and their equivalents.

In addition, various aspects and features will be presented by a system that may include a number of devices, components, and/or modules, and the like. It should also be understood and appreciated that various systems may include additional devices, components, and/or modules, and/or may not include all of the devices, components, modules, and the like discussed in connection with the drawings.

As used herein, the terms "embodiment", "example", "aspect", "illustration", and the like may not be construed to mean that any aspect or design described is better than or advantageous over other aspects or designs. The terms 'unit', 'component', 'module', 'system', 'interface', and the like used below generally refer to a computer-related entity and may mean, for example, hardware, a combination of hardware and software, or software.

Also, the terms "comprises" and/or "comprising" mean that the corresponding feature and/or element is present, but do not exclude the presence or addition of one or more other features, elements, and/or groups thereof.

In addition, terms including ordinal numbers such as first and second may be used to describe various elements, but the elements are not limited by these terms. These terms are used only for the purpose of distinguishing one element from another. For example, without departing from the scope of the present invention, a first element may be referred to as a second element, and similarly, a second element may also be referred to as a first element. The term "and/or" includes a combination of a plurality of related listed items or any one of the plurality of related listed items.

Also, in the embodiments of the present invention, unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. Terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related art, and unless explicitly defined in the embodiments of the present invention, they are not to be interpreted in an ideal or excessively formal sense.
도 2는 본 발명의 일 실시예에 따른 이미지 센싱장치의 구조를 개략적으로 도시하는 도면이다.2 is a diagram schematically illustrating a structure of an image sensing device according to an embodiment of the present invention.
도 2를 참조하면 본 발명의 일 실시예에 따른 이미지 센싱장치는 복수의 픽셀에 대한 필터를 포함하는 컬러필터어레이(2000); 상기 컬러필터어레이의 복수의 픽셀에 대응되어 빛을 감지하는 복수의 센서가 배열된 센서어레이(3000); 및 상기 센서어레이로부터 센싱 된 데이터에 기초하여 이미지데이터를 생성하는, 이미지센서기판(4000); 을 포함하는 이미지센서(5000); 및 상기 이미지센서(5000)에 전기적으로 접속되고, 상기 이미지데이터를 입력 받아 병합이미지를 출력하는 이미지처리장치(1000)를 포함한다.Referring to FIG. 2 , an image sensing apparatus according to an embodiment of the present invention includes: a color filter array 2000 including filters for a plurality of pixels; a sensor array 3000 in which a plurality of sensors for sensing light are arranged corresponding to the plurality of pixels of the color filter array; and an image sensor substrate 4000 for generating image data based on the data sensed from the sensor array; An image sensor including a (5000); and an image processing apparatus 1000 electrically connected to the image sensor 5000, receiving the image data, and outputting a merged image.
상기 컬러필터어레이(2000)는 복수의 픽셀에 대한 필터를 포함하고, 상기 필터는 빛을 통과시켜 센서어레이(3000)의 각각의 센서가 빛을 감지할 수 있도록 한다. 상기 컬러필터어레이는 대응하는 센서에 빛을 모아 전달할 수 있도록 하는 복수의 마이크로 렌즈를 포함할 수 있다.The color filter array 2000 includes filters for a plurality of pixels, and the filters pass light so that each sensor of the sensor array 3000 can detect light. The color filter array may include a plurality of micro lenses to collect and transmit light to a corresponding sensor.
상기 센서어레이(3000)는 빛을 감지하는 복수의 센서가 상기 컬러필터어레이(2000)의 필터에 대응하여, 상기 필터를 통과한 빛을 감지한다. 본 발명의 일 실시예에서 상기 센서는 CCD 혹은 CMOS로 구성되어 빛을 감지할 수 있다.The sensor array 3000 has a plurality of sensors for detecting light corresponding to the filter of the color filter array 2000, and detects the light passing through the filter. In an embodiment of the present invention, the sensor may be configured as CCD or CMOS to detect light.
상기 이미지센서기판(4000)은 상기 센서어레이(3000)의 각각의 센서가 감지한 정보로부터 이미지데이터를 생성한다. 상기 이미지데이터는 상기 센서어레이(3000)의 각각의 센서가 감지한 빛의 정보 및 상기 센서의 위치에 대한 정보를 포함하여, 2차원 이미지를 생성할 수 있도록 한다.The image sensor substrate 4000 generates image data from information detected by each sensor of the sensor array 3000 . The image data includes information on the light detected by each sensor of the sensor array 3000 and information on the position of the sensor, so that a two-dimensional image can be generated.
상기 픽셀은 모든 색상의 빛을 통과시키는 화이트픽셀; 혹은 기설정된 색상의 빛만을 통과시키는 컬러픽셀; 이다. 상기 화이트픽셀은 모든 색상의 빛을 통과시켜, 상기 화이트픽셀에 대응되는 센서에서는 빛의 세기를 감지하여 밝기 정보를 제공하게 되고, 상기 컬러픽셀은 기설정된 색상의 빛(예를 들어 적색, 녹색 혹은 청색)만을 통과시켜, 상기 컬러픽셀에 대응되는 센서에서는 상기 기설정된 색상의 빛의 세기를 감지하여 색상 정보를 제공하게 된다. 상기 이미지처리장치(1000)에서는 상기 밝기 정보 및 상기 색상 정보에 기초하여 최종적인 출력이미지를 생성하게 된다.The pixel includes a white pixel that transmits light of all colors; or color pixels that pass only light of a preset color; to be. The white pixel transmits light of all colors, and a sensor corresponding to the white pixel detects the intensity of light and provides brightness information, and the color pixel transmits light of a preset color (eg, red, green or blue), the sensor corresponding to the color pixel senses the intensity of light of the preset color and provides color information. The image processing apparatus 1000 generates a final output image based on the brightness information and the color information.
In an embodiment of the present invention, the number of white pixels in the color filter array is 90% or more of the total number of pixels. In such an embodiment, the number of color pixels is 10% or less of the total number of pixels.
In an embodiment of the present invention, the number of white pixels in the color filter array is 95% or more of the total number of pixels. In such an embodiment, the number of color pixels is 5% or less of the total number of pixels.
Preferably, the number of white pixels is 99% or more of the total number of pixels. In this case, the number of color pixels is 1% or less of the total number of pixels.
When the number of color pixels is this small, a process of restoring color over the entire image based on the color information obtained through the color pixels is required, and the image processing apparatus 1000 performs this operation. In an embodiment of the present invention, the image processing apparatus 1000 may perform this color restoration process through an artificial neural network.
FIG. 3 is a diagram schematically illustrating the internal structure of an image processing apparatus according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating the internal components of the image processing apparatus 1000 according to an embodiment of the present invention.
The image processing apparatus 1000 according to the present embodiment may include a processor 100, a bus 200, a network interface 300, and a memory 400. The memory may include an operating system 410, an image merging routine 420, and sensor information 430. The processor 100 may include a first image input unit 110, a second image input unit 120, an image merging unit 130, and a merged image output unit 140. In other embodiments, the image processing apparatus 1000 may include more components than those shown in FIG. 3.
The memory, as a computer-readable recording medium, may include random access memory (RAM), read-only memory (ROM), and a permanent mass storage device such as a disk drive. Program code for the operating system 410, the image merging routine 420, and the sensor information 430 may be stored in the memory. These software components may be loaded from a computer-readable recording medium separate from the memory using a drive mechanism (not shown). Such a separate computer-readable recording medium may include a floppy drive, a disk, a tape, a DVD/CD-ROM drive, a memory card, and the like (not shown). In another embodiment, the software components may be loaded into the memory through the network interface unit 300 rather than from a computer-readable recording medium.
The bus 200 may enable communication and data transfer between the components of the computing device that controls the image processing apparatus 1000. The bus may be implemented using a high-speed serial bus, a parallel bus, a storage area network (SAN), and/or another suitable communication technology.
The network interface unit 300 may be a computer hardware component for connecting the computing device that controls the image processing apparatus 1000 to a computer network. The network interface 300 may connect this computing device to a computer network through a wireless or wired connection. Through the network interface unit 300, the computing device that controls the image processing apparatus 1000 may be connected to the image processing apparatus 1000 wirelessly or by wire.
The processor may be configured to process instructions of a computer program by performing basic arithmetic, logic, and input/output operations of the computing device that controls the image processing apparatus 1000. Instructions may be provided to the processor 100 by the memory or the network interface unit 300, via the bus 200. The processor may be configured to execute program code for the first image input unit 110, the second image input unit 120, the image merging unit 130, and the merged image output unit 140. Such program code may be stored in a recording device such as the memory.
The first image input unit 110, the second image input unit 120, the image merging unit 130, and the merged image output unit 140 may be configured to perform the operations of the image processing apparatus 1000 described below.
Depending on the method of controlling the image processing apparatus 1000, some components of the processor may be omitted, additional components not shown may be further included, or two or more components may be combined.
The image processing apparatus 1000 according to an embodiment of the present invention includes: a first image input unit 110 that receives a first image generated based on information sensed by the plurality of white pixels of the image sensor; a second image input unit 120 that receives a second image generated based on information sensed by the plurality of color pixels of the image sensor; an image merging unit 130 that generates a merged image based on the first image and the second image using a trained artificial neural network; and a merged image output unit 140 that outputs the merged image to the outside.
The first image input unit 110 receives the first image obtained from the sensors of the sensor array 3000 corresponding to the white pixels. The first image may be generated from the information sensed by those sensors. In an embodiment of the present invention, the first image may include white pixel information for 95% or more of the total number of pixels of the merged image.
In an embodiment of the present invention, the first image may be generated by receiving only the information sensed by the sensors of the sensor array 3000 corresponding to the white pixels, or it may be generated by taking the image obtained from all sensors of the sensor array 3000 and removing the information of the pixels at the positions of the sensors corresponding to the color pixels, keeping only the pixels at the positions of the sensors corresponding to the white pixels.
The first image thus consists of information obtained by the sensors corresponding to the white pixels and may be generated as a mono image without color information.
The second image input unit 120 receives the second image obtained from the sensors of the sensor array 3000 corresponding to the color pixels. The second image may be generated from the information sensed by those sensors. In an embodiment of the present invention, the second image may include color pixel information for 5% or less of the total number of pixels of the merged image.
In an embodiment of the present invention, the second image may be generated by receiving only the information sensed by the sensors of the sensor array 3000 corresponding to the color pixels, or it may be generated by taking the image obtained from all sensors of the sensor array 3000 and removing the information of the pixels at the positions of the sensors corresponding to the white pixels, keeping only the pixels at the positions of the sensors corresponding to the color pixels.
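As a rough sketch of the second option described above, separating a full sensor readout into the two inputs given the known pattern might look as follows; `color_mask` is a hypothetical boolean array marking the color-pixel positions, and the zero-fill convention for removed pixels is an assumption made here:

```python
import numpy as np

def split_sensor_image(raw, color_mask):
    """Split a full sensor readout into the white-pixel (first) image and
    the color-pixel (second) image using the known CFA pattern.

    raw        : (H, W) array of sensor intensities
    color_mask : (H, W) boolean array, True where a color pixel sits
    """
    first = raw.astype(float).copy()
    first[color_mask] = 0.0    # remove color-pixel positions -> mono image with holes
    second = raw.astype(float).copy()
    second[~color_mask] = 0.0  # remove white-pixel positions -> sparse color samples
    return first, second
```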
The second image thus consists of information obtained by the sensors corresponding to the color pixels and may be generated as a color image.
The image merging unit 130 receives the first image and the second image from the first image input unit 110 and the second image input unit 120, and generates a merged image based on them. According to an embodiment of the present invention, the first image is a mono image containing only brightness information and the second image is a color image containing color information; by merging the two, the image merging unit 130 generates a merged image fully equipped with both brightness information and color information. Preferably, the image merging unit 130 generates the merged image by merging the first image and the second image through a trained artificial neural network. Preferably, the image merging unit 130 according to an embodiment of the present invention may merge the first image and the second image through two stages of processing: in the first stage, the luminance of the missing pixels of the first image is restored to generate a luminance-restored image, and in the second stage, a merged image is generated based on the luminance-restored image and the second image.
The merged image output unit 140 outputs the merged image generated by the image merging unit 130 to the outside. The merged image output unit 140 transmits the merged image, generated by sensing light through the image sensing apparatus and processing the sensed light information, to a connected device so that a user or the like can use it. The merged image output unit 140 may transmit the merged image to an external computing device or the like through the network interface 300.
FIG. 4 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
Referring to FIG. 4, the color filter array 2000 according to an embodiment of the present invention includes a plurality of pixels, each of which is either a white pixel that passes light of all colors or a color pixel that passes only light of a preset color. Each color pixel is one of an R pixel that transmits red light, a G pixel that transmits green light, and a B pixel that transmits blue light, and the color pixels are arranged in the color filter array according to a preset pattern.
Referring to FIG. 4, the color filter array 2000 includes three kinds of color pixels (R, G, B) and white pixels (W), and the color pixels are arranged in the color filter array 2000 according to a preset pattern. The preset pattern shown in FIG. 4 is only one embodiment of the present invention, and the color pixels of the present invention may be arranged in another pattern not shown in FIG. 4.
Referring to FIG. 4, the preset pattern according to an embodiment of the present invention is a repeating pattern of 10 x 10 pixels. For convenience of description, the pixel in the n-th column from the left and the m-th row from the top of the 10 x 10 pattern is referred to as pixel (n,m).
In the embodiment shown in FIG. 4, pixel (1,1) is a B pixel, pixel (10,1) is a G pixel, pixel (5,5) is an R pixel, pixels (6,5) and (5,6) are G pixels, pixel (6,6) is a B pixel, pixel (1,10) is a G pixel, pixel (10,10) is an R pixel, and all remaining pixels are white pixels 2200.
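As an illustration only (the patent provides no code), the FIG. 4 repeating unit can be written out with numpy using the (n,m) = (column, row) convention defined above:

```python
import numpy as np

# Build the 10 x 10 repeating unit of FIG. 4 ('W' = white pixel).
pattern = np.full((10, 10), 'W', dtype='<U1')

def set_pixel(n, m, color):
    # (n, m) = n-th column from the left, m-th row from the top (1-indexed)
    pattern[m - 1, n - 1] = color

set_pixel(1, 1, 'B');  set_pixel(10, 1, 'G')
set_pixel(5, 5, 'R');  set_pixel(6, 5, 'G')
set_pixel(5, 6, 'G');  set_pixel(6, 6, 'B')
set_pixel(1, 10, 'G'); set_pixel(10, 10, 'R')

print((pattern != 'W').sum())  # 8 color pixels per 100 -> 8% color, 92% white
```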
With the white pixels 2200 and color pixels 2100 arranged in this way, the sensors of the sensor array 3000 corresponding to these pixels can acquire brightness information and color information, respectively.
In an embodiment of the present invention, the preset pattern includes color pixel groups in which at least one each of the R, G, and B pixels are arranged adjacently, and the color pixel groups are arranged regularly within the preset pattern. Preferably, each color pixel group includes at least one each of the R, G, and B pixels arranged contiguously adjacent to one another.
In the embodiment shown in FIG. 4, the four pixels (5,5), (6,5), (5,6), and (6,6) are arranged adjacently to form a color pixel group 2100.a. This color pixel group includes an R pixel (pixel (5,5)), G pixels (pixels (6,5) and (5,6)), and a B pixel (pixel (6,6)), forming a Bayer pattern.
In addition, pixels (1,1), (10,1), (1,10), and (10,10) are arranged adjacently at the corners of the 10 x 10 pattern (these corner pixels become adjacent when the pattern repeats) to form a color pixel group 2100.b.
By forming color pixel groups each including at least one R, G, and B pixel in this way, the intensity information of each of red, green, and blue can be obtained within the color pixel group area, and based on the intensity information of these three colors, the color information of the color pixel group area can be obtained.
As such, in an embodiment of the present invention, since the preset pattern includes color pixel groups in which at least one each of the R, G, and B pixels are arranged adjacently, accurate color information required for color restoration is provided, so that higher-quality color restoration can be performed.
Information on this preset pattern may be stored in the sensor information 430 of the memory 400 and utilized by the image processing apparatus 1000. For example, the first image input unit 110 may obtain the first image from the image data received from the image sensor 5000 based on the information on the preset pattern.
FIG. 5 is a diagram schematically illustrating a pattern of a color filter array according to an embodiment of the present invention.
FIG. 5 shows a color filter array pattern having an arrangement different from that shown in FIG. 4.
Referring to FIG. 5, the color filter array 2000 includes three kinds of color pixels (R, G, B) and white pixels (W), and the color pixels are arranged in the color filter array 2000 according to a preset pattern. The preset pattern shown in FIG. 5 is only one embodiment of the present invention, and the color pixels of the present invention may be arranged in another pattern not shown in FIG. 5.
Referring to FIG. 5, the preset pattern according to an embodiment of the present invention is a repeating pattern of 18 x 18 pixels. For convenience of description, the pixel in the n-th column from the left and the m-th row from the top of the 18 x 18 pattern is referred to as pixel (n,m).
In the embodiment shown in FIG. 5, pixel (9,9) is an R pixel, pixels (10,9) and (9,10) are G pixels, and pixel (10,10) is a B pixel; these are the color pixels 2100, and all remaining pixels are white pixels 2200. Since the 18 x 18 pattern contains four color pixels, the proportion of color pixels is approximately 1.23% (4 out of 324) and the proportion of white pixels is approximately 98.77%, an extremely low proportion of color pixels.
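These proportions follow directly from the geometry of the repeating unit; a quick illustrative check:

```python
# FIG. 5 repeating unit: 18 x 18 pixels with a single 2 x 2 Bayer-style
# color pixel group at (9,9)-(10,10); everything else is white.
total = 18 * 18                 # 324 pixels per repeating unit
color = 4                       # R, G, G, B
print(color / total)            # ~0.0123 -> about 1.23% color pixels
print((total - color) / total)  # ~0.9877 -> about 98.77% white pixels
```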
With the white pixels 2200 and color pixels 2100 arranged in this way, the sensors of the sensor array 3000 corresponding to these pixels can acquire brightness information and color information, respectively.
FIG. 6 is a diagram schematically illustrating the operation of the image merging unit and the merged image output unit according to an embodiment of the present invention.
Referring to FIG. 6, the image merging unit 130 according to an embodiment of the present invention receives a first image 1 and a second image 2 from the first image input unit 110 and the second image input unit 120, respectively.
The first image 1 and the second image 2 shown in FIG. 6 are images obtained with the color filter array 2000 pattern shown in FIG. 4. The first image 1 is obtained from the sensors of the sensor array corresponding to the white pixels 2200, excluding the color pixels 2100 of the color filter array 2000, and contains brightness information obtained by detecting the light intensity of each pixel in the area excluding the color pixels 2100 (shown in white). As shown in FIG. 6, the first image 1 contains brightness information for the entire area except the areas of the color pixel groups 2100.a and 2100.b arranged according to the preset pattern, which remain as holes.
The second image 2 is obtained from the sensors of the sensor array corresponding to the color pixels 2100 of the color filter array 2000, and contains color information obtained by detecting, at each pixel in the color pixel area 2100 (shown in white), the intensity of light of the color set for that pixel. As shown in FIG. 6, the second image 2 contains color information for the areas of the color pixel groups 2100.a and 2100.b.
In an embodiment of the present invention, the image merging unit 130 may generate a merged image by merging the first image 1 and the second image 2 based on a trained artificial neural network. Preferably, the artificial neural network is a convolutional neural network. A convolutional neural network is an algorithm that incorporates filtering into an artificial neural network and is optimized so that the network can learn two-dimensional images well.
By merging the first image and the second image through such an artificial neural network, a merged image of higher quality can be generated.
The merged image generated by the image merging unit 130 is output to an external device or the like through the merged image output unit 140. For example, the merged image output unit 140 may store the merged image in a memory card connected through the network interface 300, or transmit the merged image for storage to a cloud service or the like connected through the Internet.
Hereinafter, specific processes of merging images obtained through the color filter array 2000 as described above will be described through embodiments.
Merging of images obtained through a color filter array including color pixel groups
The image merging described below merges images obtained through the color filter array 2000 as shown in FIG. 4.
FIG. 7 is a diagram schematically illustrating the operation of the artificial neural network of the image merging unit according to an embodiment of the present invention.
FIG. 7(a) illustrates the process by which the artificial neural network 135 of the image merging unit 130 learns from training data according to an embodiment of the present invention. Referring to FIG. 7(a), the artificial neural network 135 receives first training data, corresponding to a first image obtainable according to an embodiment of the present invention, and second training data, corresponding to a second image obtainable according to an embodiment of the present invention, and merges them to generate a training merged image. The first training data, like the first image, is a mono image containing brightness information for the area excluding the color pixels 2100; the second training data, like the second image, is a color image containing color information for the area of the color pixels 2100. The first and second training data form part of a training data set on which supervised learning of the artificial neural network 135 can be performed.
The artificial neural network 135 also receives third training data. The third training data is a color image associated with the first and second training data: it is the ground-truth image for the training merged image generated by merging the first and second training data. Together with the first and second training data, the third training data constitutes the training data set for supervised learning of the artificial neural network 135.
The artificial neural network 135 receives the third training data as shown in FIG. 7(a). The artificial neural network 135 then compares the training merged image with the third training data and learns so that the training merged image it generates from the first and second training data approaches the third training data. This may be performed by updating the internal parameters of the artificial neural network 135 through a backpropagation algorithm or the like.
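A minimal sketch of this supervised training scheme, assuming a PyTorch `model` that maps a (first image, second image) pair to a merged image and a data loader yielding (first, second, ground truth) triples; the L1 loss here is only a placeholder for the loss function L discussed below:

```python
import torch

def train_epoch(model, loader, optimizer, loss_fn=torch.nn.L1Loss()):
    model.train()
    for first, second, truth in loader:   # first/second/third training data
        merged = model(first, second)     # training merged image
        loss = loss_fn(merged, truth)     # compare against the ground-truth image
        optimizer.zero_grad()
        loss.backward()                   # backpropagation updates the parameters
        optimizer.step()
```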
FIG. 7(b) illustrates the process by which the trained artificial neural network 135 generates a merged image. The artificial neural network 135, trained through the process of FIG. 7(a), receives a first image and a second image and generates a merged image. Having been trained on the training data set consisting of the first, second, and third training data, the artificial neural network 135 receives the first image and the second image and generates the merged image.
FIG. 8 is a diagram schematically illustrating the structure of the convolutional neural network of the image merging unit according to an embodiment of the present invention.
Referring to FIG. 8, the first image is 1-channel data containing only brightness information, and the second image is 3-channel data containing color information. The first image and the second image may each be input to the convolutional neural network.
In an embodiment of the present invention, the convolutional neural network may be configured as a cross-channel autoencoder that uses the second image as a color hint.
More specifically, referring to FIG. 8, the convolutional neural network includes ten convolution blocks, from a first convolution block to a tenth convolution block. Each convolution block includes two to three convolution/activation-function pairs. The first and tenth convolution blocks have 64 channels at height H and width W; the second and ninth convolution blocks have 128 channels at height H/2 and width W/2; the third and eighth convolution blocks have 256 channels at height H/4 and width W/4; and the fourth to seventh convolution blocks have 512 channels at height H/8 and width W/8. Preferably, the activation function is a rectified linear unit (ReLU).
Where the spatial resolution decreases, as from the first to the second convolution block, from the second to the third, and from the third to the fourth, the spatial resolution is reduced by downsampling; where the spatial resolution increases, as from the seventh to the eighth convolution block, from the eighth to the ninth, and from the ninth to the tenth, the spatial resolution is increased by upsampling.
The output of the first convolution block of the second image is connected to the first convolution block of the first image.
In addition, the output of the first convolution block is connected to the tenth convolution block, the output of the second convolution block to the ninth convolution block, and the output of the third convolution block to the eighth convolution block, respectively.
The last layer of the tenth convolution block is a 1 x 1 kernel that produces a 3-channel output color. In an embodiment of the present invention, the 3-channel output color may be RGB color values in the RGB color space.
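Since the patent gives no code, the following PyTorch sketch is one possible reading of this block schedule; the number of convolutions per block, max-pooling for downsampling, bilinear upsampling, and concatenation for the skip and color-hint connections are all assumptions made here for concreteness:

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch, n_convs=2):
    """A convolution block: n_convs (3x3 convolution + ReLU) pairs."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class CrossChannelAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.color_b1 = block(3, 64)      # first block of the color-hint branch
        self.b1 = block(1 + 64, 64)       # block 1: mono input + color features
        self.b2, self.b3 = block(64, 128), block(128, 256)
        self.b4 = block(256, 512)
        self.b5, self.b6, self.b7 = block(512, 512), block(512, 512), block(512, 512)
        self.b8 = block(512 + 256, 256)   # skip connection from block 3
        self.b9 = block(256 + 128, 128)   # skip connection from block 2
        self.b10 = block(128 + 64, 64)    # skip connection from block 1
        self.out = nn.Conv2d(64, 3, kernel_size=1)  # 1x1 kernel -> 3-channel color
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, mono, color):       # mono: (N,1,H,W), color: (N,3,H,W)
        c1 = self.color_b1(color)
        f1 = self.b1(torch.cat([mono, c1], dim=1))   # H x W x 64
        f2 = self.b2(self.down(f1))                  # H/2 x W/2 x 128
        f3 = self.b3(self.down(f2))                  # H/4 x W/4 x 256
        f = self.b4(self.down(f3))                   # H/8 x W/8 x 512
        f = self.b7(self.b6(self.b5(f)))             # blocks 5-7 keep H/8 x W/8
        f8 = self.b8(torch.cat([self.up(f), f3], dim=1))
        f9 = self.b9(torch.cat([self.up(f8), f2], dim=1))
        f10 = self.b10(torch.cat([self.up(f9), f1], dim=1))
        return self.out(f10)                         # RGB output
```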
Such a convolutional neural network is trained in the following manner.
The mapping function of the convolutional neural network can be expressed as:
F(M, C; θ)
where M is the first image, C is the second image, and θ denotes the parameters of the convolutional neural network.
The loss function, which measures the agreement between the output Y' and the ground-truth data Y, can be expressed as:
L(Y', Y)
The convolutional neural network is then trained by deriving the parameters θ that satisfy the following equation:
$$\theta^{*} = \underset{\theta}{\arg\min}\; L\big(F(M, C; \theta),\, Y\big)$$
In an embodiment of the present invention, in order to prevent color bleeding in the color restoration process performed through the convolutional neural network, the convolutional neural network is trained using the structural similarity (SSIM) of the images.
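For reference, the standard structural similarity index between two image patches x and y (the patent does not write out the formula; this is the commonly used definition) is

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$$

where $\mu_x, \mu_y$ are local means, $\sigma_x^2, \sigma_y^2$ local variances, $\sigma_{xy}$ the covariance, and $C_1, C_2$ small stabilizing constants; a training loss can then be formed, for example, as $1 - \mathrm{SSIM}(Y', Y)$.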
FIG. 9 is a diagram schematically illustrating each step of an image processing method according to an embodiment of the present invention.
Referring to FIG. 9, an image processing method according to an embodiment of the present invention receives image data from an electrically connected image sensor and outputs a merged image, and includes: a first image input step (S110) of receiving a first image generated based on information sensed by the plurality of white pixels of the image sensor; a second image input step (S120) of receiving a second image generated based on information sensed by the plurality of color pixels of the image sensor; an image merging step (S200) of generating a merged image based on the first image and the second image using a trained artificial neural network; and a merged image output step (S300) of outputting the merged image to the outside.
The first image input step (S110) performs the same operation as that of the first image input unit 110 described above. A first image is obtained by receiving the information sensed by the sensors corresponding to the white pixels 2200 among the sensors of the connected sensor array 3000. Such a first image may be composed by receiving only the information sensed by the sensors corresponding to the white pixels 2200, or by receiving the information sensed by all sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the white pixels 2200.
The second image input step (S120) performs the same operation as that of the second image input unit 120 described above. A second image is obtained by receiving the information sensed by the sensors corresponding to the color pixels 2100 among the sensors of the connected sensor array 3000. Such a second image may be composed by receiving only the information sensed by the sensors corresponding to the color pixels 2100, or by receiving the information sensed by all sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the color pixels 2100.
In an embodiment of the present invention, the first image input step (S110) and the second image input step (S120) may be performed simultaneously.
The image merging step (S200) generates a merged image based on the first image and the second image. In an embodiment of the present invention, in the image merging step (S200), the first image and the second image obtained in steps S110 and S120 are received and merged to generate a merged image. According to an embodiment of the present invention, the first image is a mono image containing only brightness information and the second image is a color image containing color information; by merging the two, the image merging step (S200) generates a merged image fully equipped with both brightness information and color information. Preferably, in the image merging step (S200), the first image and the second image are merged through a trained artificial neural network to generate the merged image.
The merged image output step (S300) outputs the merged image to the outside. In the merged image output step (S300), the merged image, generated by sensing light through the image sensing apparatus and processing the sensed light information, is transmitted to a connected device so that a user or the like can use it. In the merged image output step (S300), the merged image may be transmitted to an external computing device or the like through the network interface 300.
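Putting the four steps together, the method reduces to a simple per-frame loop; the sketch below is illustrative glue only (the `sensor`, `model`, and `output_device` objects and their methods are invented here, and `split_sensor_image` is the sketch shown earlier):

```python
def process_frame(sensor, pattern_mask, model, output_device):
    raw = sensor.read()                                    # image data from the image sensor
    first, second = split_sensor_image(raw, pattern_mask)  # S110 + S120 (may run together)
    merged = model(first, second)                          # S200: neural image merging
    output_device.send(merged)                             # S300: output the merged image
```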
FIGS. 10 and 11 are diagrams illustrating images obtained according to an embodiment of the present invention.
FIG. 10 shows a first image obtained from the image sensing apparatus according to an embodiment of the present invention. The first image is a monochrome image obtained from the white pixels of the color filter array 2000, excluding the color pixel groups; as shown at the top of FIG. 10, it appears as a black-and-white image in which the image information at the color pixel group positions is rendered black.
FIG. 11 shows a merged image generated by the image sensing apparatus according to an embodiment of the present invention. By merging, through the artificial neural network, the first image and the second image containing the color information obtained from the color pixel groups, a color image containing color information can be obtained, as shown in FIG. 11.
As shown in FIG. 11, the merged image produced through the artificial neural network according to an embodiment of the present invention reproduces color very well, even though the color was restored based on information obtained through a color filter array 2000 containing a very low proportion of color pixels.
FIG. 12 is the ground-truth image for the images of FIGS. 10 and 11.
Comparing FIG. 12 with FIG. 11, it can be seen that the merged image produced through the artificial neural network according to an embodiment of the present invention faithfully reproduces the ground-truth image.
Merging of images obtained through a color filter array including sparse color pixels
The image merging described below merges images obtained through the color filter array 2000 as shown in FIG. 5.
FIG. 13 is a diagram schematically illustrating the internal configuration of the image merging unit according to an embodiment of the present invention.
As described above, the image merging unit 130 according to an embodiment of the present invention may generate a merged image 3 by merging the first image 1 and the second image 2 through two stages of processing. To this end, the image merging unit 130 may include: a luminance restoration unit 131 that generates a luminance-restored image 4 by deriving, based on the first image 1, the luminance of the areas where the color pixels 2100 are located; a boundary extraction unit 132 that generates a boundary image 5 containing the color boundary information of the luminance-restored image 4 generated by the luminance restoration unit; and a color restoration unit 133 that generates the merged image 3 based on the second image 2, the luminance-restored image 4, and the boundary image 5.
Thus, according to an embodiment of the present invention, the luminance restoration unit 131 and the boundary extraction unit 132 first process the first image 1 to generate the luminance-restored image 4 and the boundary image 5, and the color restoration unit 133 then processes the second image 2, the luminance-restored image 4, and the boundary image 5 to generate the merged image 3.
As shown in FIG. 6, the first image 1 contains brightness information for the entire area except the pixels where the color pixels 2100 are located according to the preset pattern, which remain as holes. The luminance restoration unit 131 may generate the luminance-restored image 4 by receiving the first image 1 and deriving and filling in the brightness information of the pixels where the color pixels 2100 are located.
Meanwhile, the color restoration unit 133 may generate the merged image 3 by merging the color information of the second image 2 into the luminance information of the luminance-restored image 4.
In an embodiment of the present invention, the luminance restoration unit 131 and the color restoration unit 133 generate the luminance-restored image 4 and the merged image 3 based on respective artificial neural networks. That is, the luminance restoration unit 131 includes an artificial neural network for generating the luminance-restored image 4, and the color restoration unit 133 includes an artificial neural network for generating the merged image 3. The boundary extraction unit 132 may generate the boundary image 5 based on edge information extracted while the artificial neural network of the luminance restoration unit 131 generates the luminance-restored image 4. The boundary image 5 generated in this way is used by the color restoration unit 133 as information for generating the merged image 3, and can reduce the color bleeding that may occur in the process of generating the merged image 3.
In another embodiment of the present invention, the color restoration unit 133 generates the merged image 3 through an artificial neural network, while the luminance restoration unit 131 may generate the luminance-restored image 4 in a manner that does not use an artificial neural network, such as a known pixel interpolation algorithm.
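A minimal sketch of such a non-neural alternative, filling each hole with the average of its valid 4-neighbors; this generic averaging scheme is an example, not the patent's specific interpolation algorithm:

```python
import numpy as np

def interpolate_luminance(first, color_mask):
    """Fill luminance at color-pixel holes with the mean of valid 4-neighbors."""
    restored = first.astype(float).copy()
    H, W = restored.shape
    for y, x in zip(*np.nonzero(color_mask)):
        vals = [restored[ny, nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < H and 0 <= nx < W and not color_mask[ny, nx]]
        if vals:  # every pixel of a 2x2 color group has at least two white neighbors
            restored[y, x] = sum(vals) / len(vals)
    return restored
```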
In an embodiment of the present invention, the color restoration unit 133 may be configured as a generative adversarial network (GAN) to generate the merged image 3. A generative adversarial network is an artificial neural network architecture that includes two neural networks, a generator and a discriminator, and improves the performance of the generator by training the generator and the discriminator competitively.
In a generative adversarial network, the generator generates data, and the discriminator determines whether the input data is real data or generated fake data. Through training, the discriminator improves its ability to identify real data, and the generator improves its ability to generate fake data so close to the real data that the discriminator cannot distinguish it.
In an embodiment of the present invention, the color restoration unit 133 includes: a color restoration module 133a that generates a merged image based on the second image 2, the luminance-restored image 4, and the boundary image 5; and an evaluation module 133b that evaluates the merged image generated by the color restoration module; the color restoration module and the evaluation module may provide feedback to each other. In such an embodiment, the color restoration module 133a serves as the generator of the generative adversarial network, and the evaluation module 133b serves as the discriminator of the generative adversarial network.
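An adversarial training step for this generator/discriminator pairing might be sketched as follows, assuming PyTorch and a binary cross-entropy objective; the function signatures are invented here for illustration:

```python
import torch
import torch.nn.functional as F

def gan_step(gen, disc, g_opt, d_opt, second, luminance, edges, truth):
    # Discriminator (evaluation module) step: tell real merged images from generated ones.
    fake = gen(second, luminance, edges).detach()
    d_real = disc(second, luminance, edges, truth)
    d_fake = disc(second, luminance, edges, fake)
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator (color restoration module) step: produce merged images
    # that the discriminator scores as real.
    pred = disc(second, luminance, edges, gen(second, luminance, edges))
    g_loss = F.binary_cross_entropy(pred, torch.ones_like(pred))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```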
FIG. 14 is a diagram schematically illustrating the structures of the artificial neural networks of the luminance restoration unit 131, the color restoration module 133a, and the evaluation module 133b according to an embodiment of the present invention.
FIG. 14(a) shows the structure of the artificial neural network of the luminance restoration unit 131 according to an embodiment of the present invention.
In an embodiment of the present invention, the luminance restoration unit 131 may be configured as a stacked convolutional neural network, more specifically, a stacked five-layer convolutional neural network. The first three layers of the convolutional neural network have a depth of 64, and the next two layers have a depth of 128. Each layer has a 3 x 3 kernel and may use ReLU as the activation function. The final layer of the convolutional neural network may have an output of H x W x 1, where H and W are the height and width of the first image 1, respectively.
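This five-layer stack translates directly into PyTorch; the padding and the treatment of the final single-channel layer are unstated in the text and chosen here for convenience:

```python
import torch.nn as nn

# Stacked CNN: depths 64, 64, 64, 128, 128; 3x3 kernels; ReLU activations;
# a final projection to a single-channel H x W x 1 luminance output.
luminance_net = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1),   nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),  nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),  nn.ReLU(inplace=True),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(128, 1, 3, padding=1),  # H x W x 1 luminance output
)
```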
FIG. 14(b) shows the structure of the artificial neural network of the color restoration module 133a according to an embodiment of the present invention.
In an embodiment of the present invention, the color restoration module 133a may be configured as a dense U-net. A dense U-net is a combination of a densely connected convolutional neural network and a U-net; it keeps the overall U-net architecture while using dense blocks to extract and propagate features through densely connected layers. The dense U-net takes the second image 2 and the luminance-restored image 4 as inputs. These inputs are processed by a first convolutional layer (Conv1) with a 3 x 3 kernel and a depth of 15, using ReLU as the activation function. The boundary image 5 is concatenated with the output of the first convolutional layer.
After the first convolutional layer, three dense blocks (Dense1-3) and three transition blocks (Tran1-3) alternate. The output of the first convolutional layer (Conv1) is fed forward through the successive dense blocks (Dense1-3), and each dense block has a plurality of convolutional layers with a 3 x 3 kernel and a stride of 1. Each convolutional layer is normalized with batch normalization and uses ReLU as the activation function. The output sizes of the three dense blocks are 64, 128, and 256, respectively. The output of each dense block passes through the corresponding transition block (Tran1-3). Each transition block consists of a convolutional layer and a downsampling layer: the convolutional layer has a 3 x 3 kernel and a stride of 1 and uses ReLU as the activation function, while the downsampling layer performs a convolution with the same kernel size and a stride of 2.
The third transition block (Tran3) is followed by three convolution blocks (Conv2-4). The second convolution block (Conv2) has two further convolutional layers; the third convolution block (Conv3) consists of five dilated convolutional layers (dilation = 2) for high-level feature extraction; and the fourth convolution block (Conv4) has a single convolutional layer. Each layer of these three convolution blocks (Conv2-4) has a 3 x 3 kernel, a stride of 1, and a depth of 512, and is normalized with batch normalization, using ReLU as the activation function. The fourth convolution block (Conv4) is followed by three upsampling blocks (Conv5-7). Each upsampling block consists of a 2 x 2 upsampling layer followed by a convolutional layer with a 3 x 3 kernel; the upsampling blocks are likewise normalized with batch normalization and use ReLU as the activation function. The output sizes of the three upsampling blocks (Conv5-7) are 256, 128, and 64, respectively. Each of the three upsampling blocks (Conv5-7) has an individual skip connection to the corresponding dense block (Dense1-3); these skip connections allow the network to propagate low-level features, contributing to the reconstruction of the final image. Finally comes an eighth convolutional layer (Conv8), whose output size is H x W x 3. The final merged image uses tanh as the activation function.
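The dense-block/transition-block pattern at the core of this network can be sketched compactly; the PyTorch version below uses an arbitrary growth rate and layer count and is meant to illustrate the dense connectivity and the stride-2 downsampling transition, not to reproduce the full FIG. 14(b) network:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous feature maps."""
    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, 3, stride=1, padding=1),
                nn.BatchNorm2d(growth), nn.ReLU(inplace=True)))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)  # densely accumulated features

class TransitionBlock(nn.Module):
    """3x3 stride-1 convolution followed by a stride-2 downsampling convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1))  # halves the resolution

    def forward(self, x):
        return self.body(x)
```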
FIG. 14(c) shows the structure of the artificial neural network of the evaluation module 133b according to an embodiment of the present invention.
In an embodiment of the present invention, the evaluation module 133b may be configured as a stacked convolutional neural network. More specifically, the stacked convolutional neural network receives the second image, the luminance-restored image, the boundary image, and the merged image as inputs. The first three convolutional layers have a depth of 64, and the next convolutional layer has a depth of 128. The kernel of each convolutional layer has a size of 3 x 3 and a stride of 1. All convolutional layers are normalized with batch normalization and use ReLU as the activation function. The last layer of the stacked convolutional neural network performs flattening and uses a sigmoid function as the activation function.
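This description maps almost line-for-line onto PyTorch; the sketch below assumes the four inputs are concatenated along the channel axis (3 + 1 + 1 + 3 = 8 channels), which the text does not state explicitly:

```python
import torch
import torch.nn as nn

class EvaluationModule(nn.Module):
    def __init__(self, in_ch=8):  # second image (3) + luminance (1) + edges (1) + merged (3)
        super().__init__()
        def conv(i, o):
            return nn.Sequential(nn.Conv2d(i, o, 3, stride=1, padding=1),
                                 nn.BatchNorm2d(o), nn.ReLU(inplace=True))
        self.features = nn.Sequential(conv(in_ch, 64), conv(64, 64),
                                      conv(64, 64), conv(64, 128))
        self.head = nn.LazyLinear(1)  # flatten -> single real/fake score

    def forward(self, second, luminance, edges, merged):
        x = torch.cat([second, luminance, edges, merged], dim=1)
        x = torch.flatten(self.features(x), start_dim=1)
        return torch.sigmoid(self.head(x))
```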
In an embodiment of the present invention, the artificial neural networks of the luminance restoration unit 131 and the color restoration unit 133 are each trained independently. Because the network of the luminance restoration unit and the network of the color restoration unit are trained separately, training can be carried out efficiently using training data suited to each network.
FIG. 15 illustrates, by way of example, a first image, a second image, a luminance-restored image, and a boundary image according to an embodiment of the present invention.
FIG. 15(a) shows a first image 1 according to an embodiment of the present invention. The first image 1 contains only brightness information and no color information; the pixels corresponding to the color pixels 2200 carry no brightness information and therefore appear as black holes.
FIG. 15(b) shows a second image 2 according to an embodiment of the present invention. The second image 2 contains color information only at the pixels corresponding to the color pixels 2200; the pixels corresponding to the white pixels 2100 carry no color information and therefore appear black.
FIG. 15(c) shows a luminance-restored image 4 according to an embodiment of the present invention. The luminance-restored image 4 can be generated by the luminance restoration unit 131, which derives and fills in the brightness information of the pixels corresponding to the color pixels 2200 in the first image 1 shown in FIG. 15(a).
FIG. 15(d) shows a boundary image 5 according to an embodiment of the present invention. The boundary image 5 can be generated by the boundary extraction unit 132 processing the luminance-restored image 4.
FIG. 16 schematically illustrates the training steps of the image merging unit according to an embodiment of the present invention.
FIG. 16(a) shows how the evaluation module 133b is trained. Referring to FIG. 16(a), the evaluation module 133b according to an embodiment of the present invention is trained by receiving a plurality of pre-stored training merged images, evaluating whether each training merged image is genuine, and performing feedback based on the evaluation result. The training merged images may include true training merged images and false training merged images. The evaluation module 133b receives such training merged images, evaluates their authenticity to derive an evaluation result, and feeds the evaluation result back to train its artificial neural network, thereby improving its evaluation capability.
FIG. 16(b) shows how the color restoration module 133a is trained. Referring to FIG. 16(b), the color restoration module 133a according to an embodiment of the present invention is trained by receiving a plurality of pre-stored sets of training second images, training luminance-restored images, and training boundary images, generating a training merged image, and performing feedback based on the evaluation result that the evaluation module 133b produces for the generated training merged image. In other words, the color restoration module 133a receives an image set for training and generates a training merged image, and the evaluation result of the evaluation module 133b as to whether the training merged image is artificial is fed back to improve the generation capability of the color restoration module 133a. In doing so, the color restoration module 133a learns in the direction that makes its training merged images be judged as true merged images by the evaluation module 133b, improving its ability to generate merged images close to true merged images.
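Taken together, the two figures describe standard adversarial training, with the color restoration module as generator and the evaluation module as discriminator. A schematic training loop might look as follows, assuming the DenseUNet and StackedCNN sketches above as G and D, a hypothetical data loader of (second image, luminance, boundary, ground-truth merged) batches, and a pure adversarial loss; practical systems often add a reconstruction term, which is not asserted here as part of the disclosure.

```python
import torch
import torch.nn.functional as F

def pack(*imgs):
    # Stack the conditioning images and a merged image along the channel
    # axis, matching the four inputs of the evaluation module.
    return torch.cat(imgs, dim=1)

G, D = DenseUNet(), StackedCNN()          # sketches from above
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

for color2, luma, edge, real in loader:   # hypothetical training loader
    fake = G(color2, luma, edge)          # training merged image
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Fig. 16(a): the evaluation module learns to tell true from false.
    d_opt.zero_grad()
    d_loss = (F.binary_cross_entropy(D(pack(color2, luma, edge, real)), ones)
              + F.binary_cross_entropy(
                    D(pack(color2, luma, edge, fake.detach())), zeros))
    d_loss.backward()
    d_opt.step()

    # Fig. 16(b): the color restoration module learns to be judged "true".
    g_opt.zero_grad()
    g_loss = F.binary_cross_entropy(D(pack(color2, luma, edge, fake)), ones)
    g_loss.backward()
    g_opt.step()
```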
FIG. 17 schematically illustrates the steps of an image processing method according to an embodiment of the present invention.
Referring to FIG. 17, an image processing method according to an embodiment of the present invention receives image data from an electrically connected image sensor and outputs a merged image, the method including: a first image acquisition step (S110) of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor; a second image acquisition step (S120) of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; an image merging step (S200) of generating a merged image based on the first image and the second image by means of a trained artificial neural network; and a merged image output step (S300) of outputting the merged image to the outside.
The first image acquisition step (S110) performs the same operation as the first image input unit 110 described above. It obtains a first image by receiving the information sensed by the sensors of the connected sensor array 3000 that correspond to the white pixels 2100. Such a first image may be constructed by receiving only the information sensed by the sensors corresponding to the white pixels 2100, or by receiving the information sensed by all the sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the white pixels 2100.
The second image acquisition step (S120) performs the same operation as the second image input unit 120 described above. It obtains a second image by receiving the information sensed by the sensors of the connected sensor array 3000 that correspond to the color pixels 2200. Such a second image may be constructed by receiving only the information sensed by the sensors corresponding to the color pixels 2200, or by receiving the information sensed by all the sensors of the sensor array 3000 and then selecting the information sensed by the sensors corresponding to the color pixels 2200.
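Both acquisition steps amount to masking one sensor readout by the color filter layout. A small NumPy illustration, with a hypothetical code map (0 = white, 1 = R, 2 = G, 3 = B) standing in for the color filter array:

```python
import numpy as np

def split_images(raw, cfa):
    """Form the first (white-pixel) and second (color-pixel) images from a
    full sensor readout. `raw`: H x W intensities; `cfa`: H x W code map,
    0 = white, 1 = R, 2 = G, 3 = B (hypothetical encoding)."""
    # First image: brightness only, with holes at the color-pixel sites.
    first = np.where(cfa == 0, raw, 0.0)
    # Second image: a sparse RGB image, non-zero only at color-pixel sites.
    second = np.zeros(raw.shape + (3,), dtype=raw.dtype)
    for ch, code in enumerate((1, 2, 3)):
        second[..., ch] = np.where(cfa == code, raw, 0.0)
    return first, second
```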
In an embodiment of the present invention, the first image acquisition step (S110) and the second image acquisition step (S120) may be performed simultaneously.
The image merging step (S200) generates a merged image based on the first image and the second image. In an embodiment of the present invention, the image merging step (S200) receives the first image and the second image obtained in the first image acquisition step (S110) and the second image acquisition step (S120), and merges them to generate a merged image. According to an embodiment of the present invention, the first image is a mono image containing only brightness information and the second image is a color image containing color information; by merging the two, the image merging step (S200) produces a merged image fully equipped with both brightness information and color information. Preferably, the image merging step (S200) merges the first image and the second image through a trained artificial neural network to generate the merged image.
The merged image output step (S300) outputs the merged image to the outside. In the merged image output step (S300), the merged image, generated by sensing light through the image sensing device and processing the sensed light information, is transmitted to a connected device so that a user or the like can make use of it. In the merged image output step (S300), the merged image may be transmitted to an external computing device or the like through the network interface 300.
FIG. 18 schematically illustrates the detailed substeps of the image merging step according to an embodiment of the present invention.
Referring to FIG. 18, the image merging step (S200) according to an embodiment of the present invention may include: a luminance restoration step (S210) of generating a luminance-restored image by deriving, based on the first image, the luminance of the regions where the color pixels are located; a boundary extraction step (S220) of generating a boundary image containing the color boundary information of the luminance-restored image produced by the luminance restoration step; a color restoration step (S230) of generating a merged image based on the second image, the luminance-restored image, and the boundary image; and an evaluation step (S240) of evaluating the merged image generated in the color restoration step.
The luminance restoration step (S210) generates a luminance-restored image by deriving, based on the first image, the luminance of the regions where the color pixels are located. In the luminance restoration step (S210), the first image 1 is received, and the brightness information of the pixels where the color pixels 2200 are located is derived and filled in, thereby generating the luminance-restored image 4. Preferably, the luminance restoration step (S210) generates the luminance-restored image based on an artificial neural network.
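The luminance restoration itself can be read as single-channel inpainting at the color-pixel sites. A minimal sketch, assuming a small stack of 3 x 3 convolutions (the depths, layer count, and mask input are illustrative, not disclosed values):

```python
import torch
import torch.nn as nn

class LuminanceNet(nn.Module):
    """Illustrative stacked CNN for S210: takes the first image (holes at
    the color-pixel sites) plus the hole mask, and predicts a full
    luminance map."""
    def __init__(self):
        super().__init__()
        layers, chans = [], [2, 64, 64, 64, 1]  # assumed depths
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers[:-1])  # no ReLU on the output
    def forward(self, first, hole_mask):
        x = torch.cat([first, hole_mask], dim=1)
        return self.body(x)
```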
The boundary extraction step (S220) generates a boundary image containing the color boundary information of the luminance-restored image generated in the luminance restoration step (S210). Such a boundary image can be obtained by applying filter processing to the luminance-restored image.
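The text specifies only "filter processing"; one plausible concrete choice is a gradient-magnitude filter over the restored luminance, sketched here with SciPy's Sobel operator (an assumption, not the disclosed filter):

```python
import numpy as np
from scipy import ndimage

def boundary_image(luma):
    """One possible filter-based boundary extraction for S220:
    normalized Sobel gradient magnitude of the restored luminance."""
    gx = ndimage.sobel(luma, axis=1)  # horizontal gradient
    gy = ndimage.sobel(luma, axis=0)  # vertical gradient
    edge = np.hypot(gx, gy)
    return edge / edge.max() if edge.max() > 0 else edge
```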
The color restoration step (S230) generates a merged image based on the second image, the luminance-restored image, and the boundary image. Preferably, in the color restoration step (S230), the merged image can be generated by restoring color information to the luminance-restored image on the basis of the brightness information of the luminance-restored image and the color information of the second image. Here, the boundary image serves as a guide for restoring the color information, allowing the colors to be restored more accurately. Preferably, the color restoration step (S230) generates the merged image based on an artificial neural network.
The evaluation step (S240) evaluates the merged image generated in the color restoration step (S230). The evaluation step (S240) assesses the merged image to determine whether it is a real image or a generated false image. This evaluation result is fed back to the color restoration step (S230) so that the color restoration step (S230) can generate merged images closer to real images. Preferably, the evaluation step (S240) evaluates the merged image based on an artificial neural network.
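Putting the substeps together, the inference-time flow of S200 reduces to a few calls. In the sketch below, `luminance_net` and `color_net` are hypothetical names for the trained networks, and the evaluation step S240 acts only as the training-time critic described above, so it is omitted here:

```python
def merge_images(first, second):
    """Sketch of the image merging step S200 at inference time."""
    luma = luminance_net(first)             # S210: fill in color-pixel sites
    edge = boundary_image(luma)             # S220: filter-based boundary image
    merged = color_net(second, luma, edge)  # S230: boundary-guided restoration
    return merged                           # S240 is used only during training
```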
FIG. 19 schematically illustrates a pattern of a color filter array according to an embodiment of the present invention.
In an embodiment of the present invention, the image merging unit 130 suppresses color bleeding by means of the boundary image 5 and restores colors by means of a generative adversarial network, so that the merged image 3 exhibits very good color reproduction. Accordingly, the color filter array 2000 used to obtain the first image and the second image according to an embodiment of the present invention may have a very low proportion of color pixels 2200.
FIG. 19 shows an example of a color filter array 2000 with such a very low proportion of color pixels 2200.
Denoting pixels in the same way as in FIG. 5, in the embodiment shown in FIG. 19 the (4,4) pixel is an R pixel, the (17,6) and (5,15) pixels are G pixels, and the (14,19) pixel is a B pixel; these are the color pixels 2200, and all remaining pixels are white pixels 2100. In other words, the color filter array 2000 shown in FIG. 19 is a 20 x 20 pixel array containing four color pixels, so the proportion of color pixels is 1% and the proportion of white pixels reaches 99%.
Notably, in the embodiment shown in FIG. 19, the R, G, and B pixels are not arranged in a predetermined pattern but are placed at random within the 20 x 20 pixel color filter array 2000.
In an embodiment of the present invention, a merged image 3 of excellent quality can be generated even from a first image 1 and a second image 2 obtained with such a color filter array 2000 in which color pixels 2200 at a proportion of 1% are placed at random.
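For reference, such a random sparse layout is easy to generate. The sketch below reproduces the 99%/1% split of FIG. 19 for a 20 x 20 array; the particular sites listed above ((4,4) R, (17,6) and (5,15) G, (14,19) B) correspond to one such random draw, and the one-R/two-G/one-B split is taken from that example rather than stated as a rule:

```python
import numpy as np

def random_sparse_cfa(h=20, w=20, n_color=4, seed=None):
    """Build a CFA code map with color pixels placed at random
    (0 = white, 1 = R, 2 = G, 3 = B). With h = w = 20 and n_color = 4
    this gives the 99% white / 1% color split of FIG. 19."""
    rng = np.random.default_rng(seed)
    cfa = np.zeros((h, w), dtype=np.uint8)
    sites = rng.choice(h * w, size=n_color, replace=False)
    colors = [1, 2, 2, 3]  # one R, two G, one B, as in the FIG. 19 example
    for s, c in zip(sites, colors):
        cfa[np.unravel_index(s, (h, w))] = c
    return cfa
```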
FIG. 20 is a block diagram illustrating an example of the internal configuration of a computing device according to an embodiment of the present invention.
As shown in FIG. 20, a computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600. Here, the computing device 11000 may correspond to a user terminal or to a service providing system.
The memory 11200 may include, for example, high-speed random access memory, magnetic disks, SRAM, DRAM, ROM, flash memory, or non-volatile memory. The memory 11200 may contain software modules, instruction sets, or other various data necessary for the operation of the computing device 11000.
Access to the memory 11200 by other components such as the processor 11100 or the peripheral interface 11300 may be controlled by the processor 11100.
The peripheral interface 11300 may couple the input and/or output peripherals of the computing device 11000 to the processor 11100 and the memory 11200. The processor 11100 may execute software modules or instruction sets stored in the memory 11200 to perform various functions for the computing device 11000 and to process data.
The I/O subsystem 11400 may couple various input/output peripherals to the peripheral interface 11300. For example, the I/O subsystem 11400 may include a controller for coupling peripherals such as a monitor, keyboard, mouse, or printer, or, as needed, a touch screen or sensor, to the peripheral interface 11300. According to another aspect, the input/output peripherals may be coupled to the peripheral interface 11300 without passing through the I/O subsystem 11400.
The power circuit 11500 may supply power to all or some of the components of the terminal. For example, the power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for generating, managing, and distributing power.
The communication circuit 11600 may enable communication with other computing devices using at least one external port.
Alternatively, as described above, the communication circuit 11600 may, where necessary, include an RF circuit that transmits and receives RF signals, also known as electromagnetic signals, thereby enabling communication with other computing devices.
The embodiment of FIG. 20 is merely one example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 20, further include additional components not shown in FIG. 20, or have a configuration or arrangement that combines two or more components. For example, a computing device for a communication terminal in a mobile environment may further include a touch screen, sensors, and the like in addition to the components shown in FIG. 20, and the communication circuit 11600 may include circuitry for RF communication in various communication schemes (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, and so on). The components that can be included in the computing device 11000 may be implemented in hardware, software, or a combination of both, including integrated circuits specialized for one or more signal processing tasks or applications.
The methods according to embodiments of the present invention may be implemented in the form of program instructions executable by various computing devices and recorded on a computer-readable medium. In particular, a program according to the present embodiment may be configured as a PC-based program or as an application dedicated to a mobile terminal. The application to which the present invention applies may be installed on a user terminal through a file provided by a file distribution system. As one example, the file distribution system may include a file transmission unit (not shown) that transmits the file in response to a request from the user terminal.
According to an embodiment of the present invention, by sensing light with a color filter array having a very high proportion of white pixels, an image of excellent quality can be obtained in low-light environments.
According to an embodiment of the present invention, high-quality color restoration can be achieved by restoring colors through an artificial neural network based on the color information obtained from a very small proportion of color pixels.
According to an embodiment of the present invention, generating a boundary image and using it in the color restoration can reduce color bleeding during the color restoration process.
According to an embodiment of the present invention, restoring colors through a generative adversarial network can prevent false colors from being reproduced.
Although the embodiments have been described above with reference to limited embodiments and drawings, those of ordinary skill in the art will be able to make various modifications and variations from the above description. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or the described components of systems, structures, devices, circuits, and the like are combined in forms different from the described method, or are replaced or substituted by other components or equivalents. Therefore, other implementations, other embodiments, and equivalents of the claims also fall within the scope of the claims that follow.

Claims (14)

1. An image processing apparatus having one or more processors and one or more memories, the apparatus receiving image data from an electrically connected image sensor and outputting a merged image, the image processing apparatus comprising:
a first image input unit that receives a first image generated based on information sensed by a plurality of white pixels of the image sensor;
a second image input unit that receives a second image generated based on information sensed by a plurality of color pixels of the image sensor; and
an image merging unit that includes one or more trained artificial neural networks and generates a merged image based on the first image and the second image,
wherein the image merging unit includes:
a luminance restoration unit that generates a luminance-restored image by deriving, based on the first image, the luminance of the regions where the color pixels are located; and
a color restoration unit that generates the merged image based on data including the second image and the luminance-restored image.
2. The image processing apparatus of claim 1, wherein the first image includes white pixel information for 90% or more of the total number of pixels of the merged image, and
the second image includes color pixel information for 10% or less of the total number of pixels of the merged image.
3. The image processing apparatus of claim 1, wherein the color restoration unit generates the merged image based on a trained artificial neural network.
4. The image processing apparatus of claim 1, wherein the luminance restoration unit and the color restoration unit generate the luminance-restored image and the merged image, respectively, each based on its own trained artificial neural network.
5. The image processing apparatus of claim 1, wherein the image merging unit further includes:
a boundary extraction unit that generates a boundary image containing the color boundary information of the luminance-restored image produced by the luminance restoration unit,
and wherein the color restoration unit generates the merged image based on the second image, the luminance-restored image, and the boundary image.
6. The image processing apparatus of claim 1, wherein the color restoration unit is configured as a generative adversarial network (GAN) to generate the merged image.
7. The image processing apparatus of claim 1, wherein the color restoration unit includes:
a color restoration module that generates a merged image based on data including the second image and the luminance-restored image; and
an evaluation module that evaluates the merged image generated by the color restoration module,
wherein the color restoration module and the evaluation module perform mutual feedback.
8. The image processing apparatus of claim 7, wherein the evaluation module is trained by receiving a plurality of pre-stored training merged images, evaluating the authenticity of the training merged images, and performing feedback based on the evaluation results.
9. The image processing apparatus of claim 7, wherein the color restoration module is trained by receiving a plurality of pre-stored sets of training second images, training luminance-restored images, and training boundary images, generating a training merged image, and performing feedback based on the evaluation result that the evaluation module produces for the generated training merged image.
10. The image processing apparatus of claim 7, wherein the color restoration module is configured as a dense U-net, and
the evaluation module is configured as a stacked convolutional neural network.
11. The image processing apparatus of claim 1, wherein the luminance restoration unit is configured as a stacked convolutional neural network.
12. The image processing apparatus of claim 1, wherein the artificial neural networks of the luminance restoration unit and the color restoration unit are artificial neural networks that have been trained independently of each other.
13. An image processing method performed on a computing device having one or more processors and one or more memories, the method receiving image data from an electrically connected image sensor and outputting a merged image, the image processing method comprising:
a first image acquisition step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor;
a second image acquisition step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and
an image merging step of generating a merged image based on the first image and the second image by one or more trained artificial neural networks,
wherein the image merging step includes:
a luminance restoration step of generating a luminance-restored image by deriving, based on the first image, the luminance of the regions where the color pixels are located; and
a color restoration step of generating a merged image based on the second image and the luminance-restored image.
14. A computer-readable medium for implementing an image processing method that receives image data from an electrically connected image sensor and outputs a merged image, the computer-readable medium storing instructions that cause a computing device to perform steps comprising:
a first image acquisition step of receiving a first image generated based on information sensed by a plurality of white pixels of the image sensor;
a second image acquisition step of receiving a second image generated based on information sensed by a plurality of color pixels of the image sensor; and
an image merging step of generating a merged image based on the first image and the second image by one or more trained artificial neural networks,
wherein the image merging step includes:
a luminance restoration step of generating a luminance-restored image by deriving, based on the first image, the luminance of the regions where the color pixels are located; and
a color restoration step of generating a merged image based on the second image and the luminance-restored image.
PCT/KR2020/000795 2019-11-26 2020-01-16 Deep learning-based sparse color sensor image processing device, image processing method and computer-readable medium WO2021107264A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0153450 2019-11-26
KR1020190153450A KR102230609B1 (en) 2019-11-26 2019-11-26 Deep Learning-Based Sparse Color Sensor Image Processing Apparatus, Image Processing Method and Computer-readable Medium

Publications (1)

Publication Number Publication Date
WO2021107264A1 (en) 2021-06-03

Family

ID=75262046

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/000795 WO2021107264A1 (en) 2019-11-26 2020-01-16 Deep learning-based sparse color sensor image processing device, image processing method and computer-readable medium

Country Status (2)

Country Link
KR (1) KR102230609B1 (en)
WO (1) WO2021107264A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140066684A (en) * 2012-03-19 2014-06-02 앱티나 이미징 코포레이션 Imaging systems with clear filter pixels
US20140362250A1 (en) * 2012-12-28 2014-12-11 Visera Technologies Company Limited Method for correcting pixel information of color pixels on a color filter array of an image sensor
US20160163249A1 (en) * 2014-12-03 2016-06-09 Japan Display Inc. Image display device
US20180006078A1 (en) * 2014-12-22 2018-01-04 Teledyne E2V Semiconductors Sas Colour image sensor with white pixels and colour pixels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHARIF S. M. A., JUNG YONG JU: "Deep color reconstruction for a sparse color sensor", OPTICS EXPRESS, vol. 27, no. 17, 19 August 2019 (2019-08-19), pages 23661 - 23681, XP055832582, Retrieved from the Internet <URL:https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-27-17-23661&id=416497> *

Also Published As

Publication number Publication date
KR102230609B1 (en) 2021-03-19

Similar Documents

Publication Publication Date Title
WO2020050499A1 (en) Method for acquiring object information and apparatus for performing same
WO2016056787A1 (en) Display device and method of controlling the same
WO2020080765A1 (en) Apparatuses and methods for performing artificial intelligence encoding and artificial intelligence decoding on image
WO2016013902A1 (en) Image photographing apparatus and image photographing method
WO2010039005A2 (en) Picture quality control method and image display using same
WO2014058086A1 (en) Image processing device and image processing method
WO2020130496A1 (en) Display apparatus and control method thereof
WO2020246753A1 (en) Electronic apparatus for object recognition and control method thereof
WO2020085873A1 (en) Camera and terminal having same
WO2019172685A1 (en) Electronic apparatus and control method thereof
WO2016017906A1 (en) Display device, display correction device, display correction system, and display correction method
WO2020017936A1 (en) Electronic device and method for correcting image on basis of image transmission state
WO2021158058A1 (en) Method for providing filter and electronic device supporting the same
WO2021107264A1 (en) Deep learning-based sparse color sensor image processing device, image processing method and computer-readable medium
WO2017196023A1 (en) Camera module and auto focusing method thereof
WO2020111441A1 (en) Processor and control method therefor
WO2020045858A1 (en) Electronic apparatus and method of controlling the same
EP3811617A1 (en) Apparatuses and methods for performing artificial intelligence encoding and artificial intelligence decoding on image
WO2022097981A1 (en) Electronic device including camera module and operation method of same electronic device
WO2021132889A1 (en) Electronic device and control method thereof
WO2021034015A1 (en) Display apparatus and method of controlling the same
WO2020171576A1 (en) Method for applying image effect, and electronic device supporting same
WO2023008983A1 (en) Method for controlling image signal processor, and control device for performing same
WO2024111922A1 (en) Touch sensing device and method
WO2022005126A1 (en) Electronic device and controlling method of electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20892268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20892268

Country of ref document: EP

Kind code of ref document: A1