WO2016113983A1 - Image processing apparatus, image processing method, program, and system - Google Patents
- Publication number
- WO2016113983A1 (PCT/JP2015/080965)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- visible light
- infrared
- image processing
- region
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/12—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/20—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present disclosure relates to an image processing apparatus, an image processing method, a program, and a system.
- the technology according to the present disclosure aims to realize a mechanism capable of generating a color image having improved image quality.
- An image processing apparatus is provided that includes an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image showing a common subject, and a generation unit that generates a color image by filtering a filter tap including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- An image processing method is provided that includes generating a color image by filtering a filter tap that includes pixels of a far-infrared image, a near-infrared image, and a visible light image showing a common subject.
- A program is provided that causes a computer controlling the image processing apparatus to function as an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image showing a common subject, and as a generation unit that generates a color image by filtering a filter tap including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- An image processing system is provided that includes a camera module that images a subject in the far-infrared region, the near-infrared region, and the visible light region and outputs a corresponding far-infrared image, near-infrared image, and visible light image, and an image processing module that generates a color image by filtering a filter tap including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- According to the technology of the present disclosure, a color image having improved image quality can be generated.
- The above effects are not necessarily limiting; any of the effects shown in the present specification, or other effects that can be understood from the present specification, may be achieved together with or in place of the above effects.
- FIG. 6 is a flowchart illustrating an example flow of the color image generation processing according to the first embodiment. The subsequent drawings include a flowchart showing a first example of the flow of the filter configuration setting processing, a flowchart showing a second example of that flow, a block diagram showing an example configuration of the logical functions of the image processing apparatus according to the second embodiment, and an explanatory diagram for describing an example of a region…
- FIG. 1 is an explanatory diagram for explaining various uses of infrared images depending on wavelengths.
- the horizontal direction in FIG. 1 corresponds to the wavelength of infrared rays, and the wavelength increases from left to right.
- Light having a wavelength of 0.7 ⁇ m or less is visible light, and human vision senses this visible light.
- the wavelength region adjacent to the visible light region is a near infrared (NIR) region, and infrared rays belonging to the NIR region are referred to as near infrared rays.
- Although the upper limit of the wavelength of the NIR region varies depending on the definition, it is often taken to be between 2.5 μm and 4.0 μm.
- the relatively long wavelength portion of the NIR region is sometimes called the short wavelength infrared (SWIR) region.
- Near-infrared light can be used, for example, for night vision, fluoroscopy, optical communication and ranging.
- A camera that captures an NIR image typically first irradiates its surroundings with near-infrared light and then captures the reflected light.
- a wavelength region adjacent to the NIR region on the long wavelength side is a far infrared (FIR) region, and infrared rays belonging to the FIR region are called far infrared rays.
- Far infrared can be utilized for night vision, thermography and heating. Infrared rays emitted by black body radiation from an object correspond to far infrared rays.
- a night vision apparatus using far-infrared rays can generate an FIR image by capturing blackbody radiation from an object without irradiating infrared rays.
- The portion of the FIR region having relatively short wavelengths may be referred to as the mid-wavelength infrared (MWIR) region. Since substance-specific absorption spectra appear in the mid-wavelength infrared range, mid-wavelength infrared can be used to identify substances.
- Patent Document 3 proposes to colorize an infrared image using color information from a visible light image. In general, it is desirable that the color image provided to the user or application has the best possible image quality.
- FIG. 2 is an explanatory diagram showing an example of a visible light image, a near infrared (NIR) image, and a far infrared (FIR) image.
- the visible light image Im01 is shown on the left
- the NIR image Im02 is shown in the center
- the FIR image Im03 is shown on the right.
- These images show the same person.
- In the visible light image Im01, the person's face is represented best, but the boundary between the subject and the background is ambiguous around the periphery of the subject, where it is not sufficiently exposed to ambient light.
- In the NIR image Im02, the boundary between the subject and the background is clear, but the clothing, which reflected near-infrared light more strongly than the person's face, appears brightest.
- Moreover, in a situation where an object that strongly reflects near-infrared light exists in the background, the subject may be buried in the background.
- In the FIR image Im03, the boundary between the subject and the background is clear. The details of the person's face are not expressed in the FIR image, but the face appears brighter than the clothing, so it can be understood that the FIR image is better suited to detecting a living body than the visible light image or the NIR image.
- Color information about the subject's color is usually included only in the visible light image. However, when deciding on the color image to be finally output, consideration should be given to which regions in the image should be expressed more clearly and in which regions color details are required. Information from this viewpoint is contained in the FIR image and the NIR image rather than in the visible light image. Therefore, this specification describes several embodiments for generating a color image with improved image quality by taking the FIR image and the NIR image into account in addition to the visible light image.
- an image processing apparatus 100 as an in-vehicle apparatus will be described.
- the image processing apparatus 100 has a configuration partially specialized for mounting on a vehicle, the application of the technology according to the present disclosure is not limited to such an example.
- The technology according to the present disclosure is applicable to color image generation in any type of device, for example a security device such as a surveillance camera, a medical or diagnostic device, an inspection device, or an information device such as a smartphone or a tablet PC (Personal Computer).
- FIG. 3 is a block diagram illustrating an example of a hardware configuration of the image processing apparatus 100 according to the first embodiment.
- The image processing apparatus 100 includes a camera module 102, a sensor module 104, an input interface 106, a memory 108, a display 110, a communication interface 112, a vehicle network (NW) interface 113, a storage 114, a bus 116, and a processor 118.
- the camera module 102 is a module that images a subject in the FIR region, the NIR region, and the visible light region.
- The camera module 102 typically includes an array of imaging elements that sense far-infrared light with wavelengths belonging to the FIR region, an array of imaging elements that sense near-infrared light with wavelengths belonging to the NIR region, and an array of imaging elements that sense visible light. These arrays may be located on the same substrate or on different substrates.
- the camera module 102 may further include a light emitting element that emits near infrared rays.
- the camera module 102 captures an FIR image, an NIR image, and a visible light image, for example, in response to a trigger such as a user input or periodically. These images may be part of a series of frames constituting the video.
- the sensor module 104 is a module having a sensor group that may include a positioning sensor, an acceleration sensor, a depth sensor, an illuminance sensor, a temperature sensor, a humidity sensor, and the like.
- the positioning sensor measures the current position of the image processing apparatus 100 based on, for example, a GPS signal from a GPS (Global Positioning System) satellite or a wireless signal from a wireless access point.
- the acceleration sensor measures triaxial acceleration applied to the image processing apparatus 100.
- the depth sensor measures a distance (that is, depth) to a subject existing within the angle of view of the camera module 102.
- the illuminance sensor measures the illuminance of the environment where the image processing apparatus 100 is placed.
- the temperature sensor and the humidity sensor measure the temperature and humidity of the environment, respectively.
- the sensor data generated in the sensor module 104 can be used for purposes such as image calibration and determination of imaging conditions, which will be described later.
- the input interface 106 is used for a user to operate the image processing apparatus 100 or input information to the image processing apparatus 100.
- the input interface 106 may include an input device such as a touch sensor, a keypad, a button, or a switch, for example.
- the input interface 106 may include a microphone for voice input and a voice recognition module.
- the input interface 106 may also include a remote control module that receives commands selected by the user from the remote device.
- the memory 108 is a storage medium that can include a RAM (Random Access Memory) and a ROM (Read Only Memory).
- the memory 108 is coupled to the processor 118 and stores programs and data for processing executed by the processor 118.
- the display 110 is a display module having a screen for displaying an image.
- The display 110 may be, for example, an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or a CRT (Cathode Ray Tube).
- the communication interface 112 is a module that mediates communication between the image processing apparatus 100 and another apparatus.
- the communication interface 112 establishes a communication connection according to any wireless communication protocol or wired communication protocol.
- the vehicle NW interface 113 is a module that mediates communication with the vehicle network of the vehicle on which the image processing apparatus 100 is mounted.
- The vehicle NW interface 113 is connected to the vehicle network through a terminal (not shown), and acquires data generated on the vehicle side, such as vehicle speed data and steering angle data.
- the storage 114 is a storage device that stores image data and stores a database used in image processing executed by the image processing apparatus 100.
- the storage 114 contains a storage medium such as a semiconductor memory or a hard disk. Note that the program and data described in this specification may be acquired from a data source external to the image processing apparatus 100 (for example, a data server, a network storage, or an external memory).
- The bus 116 connects the camera module 102, the sensor module 104, the input interface 106, the memory 108, the display 110, the communication interface 112, the vehicle NW interface 113, the storage 114, and the processor 118 to each other.
- the processor 118 is a processing module such as a CPU (Central Processing Unit) or a DSP (Digital Signal Processor).
- The processor 118 realizes the functions, described later, for generating a color image with improved image quality by executing programs stored in the memory 108 or another storage medium.
- FIG. 4 is an explanatory diagram showing some examples of the arrangement of cameras and displays in the vehicle.
- a simplified plan view of the vehicle 1 as an example is shown by a solid line.
- the camera 102 a is disposed in the center of the vehicle body front portion of the vehicle 1 and is directed to the front of the vehicle 1.
- the camera 102b is disposed in the center of the rear part of the vehicle body and is directed to the rear of the vehicle 1.
- a plurality of cameras 102 c are arranged on both sides of the vehicle 1 and directed toward the side of the vehicle 1.
- the camera module 102 shown in FIG. 3 may have any combination of these cameras 102a, 102b and 102c, and other cameras having other arrangements.
- the display 110a is arranged on or near the dashboard, and is typically shared with a navigation device.
- the display 110b is disposed on the rearview mirror and displays an image behind the vehicle body that is captured by the camera 102b, for example.
- the display 110c is a wearable device (for example, a head mounted display) worn by the driver.
- the display 110 shown in FIG. 3 may have any combination of these displays 110a, 110b and 110c, and other displays having other arrangements.
- the camera module 102, the display 110, and some other components illustrated in FIG. 3 may exist outside the image processing apparatus 100 and may be connected to the image processing apparatus 100 through signal lines.
- FIG. 5 is a block diagram illustrating an example of a configuration of logical functions realized by linking the components of the image processing apparatus 100 illustrated in FIG. 3 to each other.
- the image processing apparatus 100 includes an image acquisition unit 120, a data acquisition unit 130, a determination unit 140, a generation unit 160, and a database 170.
- The image acquisition unit 120 acquires a far-infrared (FIR) image, a near-infrared (NIR) image, and a visible light image showing a common subject from the camera module 102.
- The images acquired by the image acquisition unit 120 may already have undergone basic processing such as signal amplification, demosaicing, and noise removal. The image acquisition unit 120 also performs preliminary processing for color image generation, such as image calibration, wavelength component separation, and viewpoint merging, as necessary.
- In this specification, an image whose quality has been improved by taking the FIR and NIR images into account is referred to as a color image, and the image captured by the visible light camera before the quality improvement is referred to as a visible light image (although the visible light image also has color).
- The camera module 102 includes an image sensor 102a-1 that captures a visible light image, an image sensor 102a-2 that captures an NIR image, and an image sensor 102a-3 that captures an FIR image. Since these image sensors are on different substrates, the visible light image, the NIR image, and the FIR image as originally captured include a shift in the angle of view. Therefore, the image acquisition unit 120 executes a calibration 121 that compensates for the angle-of-view shift between the images.
- The calibration 121 may include not only compensation for the angle-of-view shift but also processing for eliminating differences in spatial resolution, differences in temporal resolution, and aberrations.
- Spatial resolution differences can be eliminated by interpolation of pixel values in lower resolution images or by pixel decimation in higher resolution images.
- Differences in temporal resolution (that is, in frame rate) can likewise be eliminated.
- Aberrations (for example, chromatic aberration and monochromatic aberration) may be reduced by pixel calculation in the image acquisition unit 120, or may be corrected in the optical system.
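- The resolution equalization described above can be pictured as follows. This is an illustrative sketch only (it assumes NumPy and integer scale factors), not the patent's actual calibration procedure; a real implementation would use sub-pixel interpolation.

```python
import numpy as np

def equalize_resolution(lo_res, hi_res):
    """Match a low-resolution image (e.g. FIR) to a high-resolution one
    (e.g. visible light), either by replicating pixels of the low-res
    image (interpolation) or by decimating the high-res image.
    Integer scale factors are assumed for simplicity."""
    fy = hi_res.shape[0] // lo_res.shape[0]
    fx = hi_res.shape[1] // lo_res.shape[1]
    upsampled = np.kron(lo_res, np.ones((fy, fx), dtype=lo_res.dtype))
    decimated = hi_res[::fy, ::fx]
    return upsampled, decimated

fir = np.arange(4).reshape(2, 2)    # low-resolution FIR image
rgb = np.arange(16).reshape(4, 4)   # high-resolution visible light image
up, down = equalize_resolution(fir, rgb)
print(up.shape, down.shape)         # (4, 4) (2, 2)
```

Either direction leaves the two images on a common pixel grid, which is what the subsequent filtering across images requires.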
- the camera module 102 includes a single-plate image sensor 102a-4 that captures a visible light image and an NIR image, and an image sensor 102a-3 that captures an FIR image.
- the calibration 122 includes compensation for the field angle deviation associated with the FIR image and the other processing described above.
- In the captured image, color mixing can occur because of the correlation between wavelengths in the visible light region and the NIR region; for example, the R (red) component can affect the NIR pixel value.
- the image acquisition unit 120 can execute a component separation process 123 (for example, a filter operation for component separation) for separating wavelength components mixed between the visible light image and the NIR image.
- the camera module 102 includes a single-plate image sensor 102a-5 that captures a visible light image, an NIR image, and an FIR image.
- The image acquisition unit 120 may execute a component separation process 124 for separating the individual wavelength components.
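- One way to picture such a component separation filter operation is as per-pixel linear unmixing. The 2x2 crosstalk matrix below is purely hypothetical; the patent does not specify its values.

```python
import numpy as np

# Hypothetical crosstalk matrix: each observed channel mixes the
# true R and NIR components (values are illustrative only).
M = np.array([[0.9, 0.1],    # observed R   = 0.9*R + 0.1*NIR
              [0.2, 0.8]])   # observed NIR = 0.2*R + 0.8*NIR
M_inv = np.linalg.inv(M)

def separate(obs_r, obs_nir):
    # Apply the inverse mixing matrix at every pixel to recover
    # the pure R and NIR components.
    mixed = np.stack([obs_r, obs_nir])            # shape (2, H, W)
    pure_r, pure_nir = np.tensordot(M_inv, mixed, axes=1)
    return pure_r, pure_nir

# Synthesize observed images from known true values, then unmix.
obs = M @ np.array([100.0, 50.0])                 # true R = 100, true NIR = 50
r, nir = separate(np.full((2, 2), obs[0]), np.full((2, 2), obs[1]))
```

With a known mixing matrix the separation is exact; in practice the matrix would come from sensor characterization.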
- The camera module 102 includes image sensors 102c-1 and 102c-4 that capture visible light images at adjacent (possibly overlapping) angles of view, image sensors 102c-2 and 102c-5 that capture NIR images at adjacent angles of view, and image sensors 102c-3 and 102c-6 that capture FIR images at adjacent (possibly overlapping) angles of view.
- The image acquisition unit 120 can generate a single visible light image of larger size by executing, for example, a viewpoint merging process 125a that joins the visible light images from the image sensors 102c-1 and 102c-4 at the boundary of their angles of view.
- the image acquisition unit 120 can generate a single NIR image through the viewpoint merging process 125b and a single FIR image through the viewpoint merging process 125c.
- The image acquisition unit 120 also executes a calibration 126 on these visible light, NIR, and FIR images to eliminate angle-of-view shifts, differences in spatial resolution, differences in temporal resolution, and aberrations.
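- Joining two views at an angle-of-view boundary can be sketched as a cross-fade over the overlapping columns. The linear blending scheme is an assumption for illustration; the patent only states that the images are combined at the boundary.

```python
import numpy as np

def merge_views(left, right, overlap):
    """Join two same-height views whose angles of view overlap by
    `overlap` columns, cross-fading linearly inside the overlap."""
    w = np.linspace(1.0, 0.0, overlap)                     # blend weights
    blended = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

a = np.full((2, 6), 10.0)   # view from one sensor
b = np.full((2, 6), 20.0)   # adjacent view from the paired sensor
merged = merge_views(a, b, overlap=2)
print(merged.shape)         # (2, 10)
```

The same merge would be applied per wavelength band (125a, 125b, 125c) before the shared calibration step.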
- the data acquisition unit 130 acquires various data other than an image used for generating a color image in the image processing apparatus 100.
- the data acquisition unit 130 may acquire positioning data indicating the geographical position of the image processing apparatus 100 from the sensor module 104 and weather data from an external data server via the communication interface 112.
- the positioning data and the weather data are used for determining the weather at the current location when determining the imaging condition by the determination unit 140 described later.
- the weather data may be input by the user via the input interface 106.
- the data acquisition unit 130 may acquire illuminance data, temperature data, and humidity data from the sensor module 104. These data can also be used for determination of the imaging condition by the determination unit 140.
- the data acquisition unit 130 may acquire driving data including vehicle speed data and steering angle data from the vehicle network via the vehicle NW interface 113.
- the driving data can be used for motion prediction or motion blur correction at the time of frame rate conversion by the image acquisition unit 120, for example.
- The determination unit 140 determines the imaging conditions under which the FIR image, the NIR image, and the visible light image were captured.
- the imaging condition includes one or more of time zone, weather, and environmental illuminance.
- the determination unit 140 determines a time zone to which the current time belongs.
- The time zones may be divided in any way, for example into the two types “daytime” and “nighttime”, or into the four types “morning”, “daytime”, “evening”, and “night”.
- the determination unit 140 can determine the weather at the current location indicated by the positioning data by referring to the weather data acquired from the external server or input by the user.
- the definition of the weather may be any definition.
- the determination unit 140 may classify the weather at that time into one of “sunny”, “cloudy”, “rain”, “snow”, and “mist”. Instead of determining the weather from the weather data, the determination unit 140 may estimate the current weather from the temperature data and the humidity data. Further, the determination unit 140 can determine the environmental illuminance based on the illuminance data from the sensor module 104. The determination unit 140 outputs imaging condition information indicating the results of these determinations to the generation unit 160.
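- The determinations above can be sketched as a simple classifier over clock and sensor data. The two-way day/night split, the illuminance threshold, and the humidity-based weather guess are all illustrative assumptions; the patent leaves the exact divisions open.

```python
def determine_conditions(hour, lux, temperature_c=None, humidity_pct=None):
    """Classify the imaging condition from clock and sensor data.
    All thresholds here are hypothetical examples."""
    time_zone = "daytime" if 6 <= hour < 18 else "night"
    if temperature_c is not None and humidity_pct is not None:
        # Crude estimate, used only when no weather data is available.
        weather = "rain" if humidity_pct > 90 else "sunny"
    else:
        weather = "unknown"
    illuminance = "low" if lux < 50 else "high"
    return {"time_zone": time_zone, "weather": weather, "illuminance": illuminance}

print(determine_conditions(hour=14, lux=12000, temperature_c=25, humidity_pct=40))
# {'time_zone': 'daytime', 'weather': 'sunny', 'illuminance': 'high'}
```

The resulting dictionary plays the role of the imaging condition information that the determination unit 140 outputs to the generation unit 160.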
- the generation unit 160 generates a color image by filtering filter taps including pixels of the FIR image, the NIR image, and the visible light image.
- The pixels selected from the FIR image contribute, for example, to identifying the subject in situations where ambient light is scarce, and in particular to enhancing the color of a living body region.
- The pixels selected from the NIR image likewise contribute to identifying the subject when ambient light is scarce, and also help sharpen the subject's details thanks to near-infrared irradiation. Infrared rays, which are scattered less than visible light, also contribute to generating highly visible images under rain or fog. Pixels selected from the visible light image provide color information directly to the color image.
- the generation unit 160 executes filtering for generating a color image with different filter configurations depending on the imaging conditions determined by the determination unit 140.
- FIG. 7 is an explanatory diagram for describing a first example of imaging conditions.
- an exemplary visible light image Im11 is shown.
- The weather when the visible light image Im11 was captured is “sunny”, and the time zone is “daytime”. That is, the imaging condition C1 associated with the visible light image Im11 represents the combination of “sunny” and “daytime”.
- FIG. 8 is an explanatory diagram for describing an example of a filter configuration corresponding to the imaging condition C1. Referring to FIG. 8, a one-dimensional wavelength axis and two-dimensional spatial axes are shown, and the visible light (RGB) image, the NIR image, and the FIR image are partially depicted at their corresponding wavelength positions.
- the visible light image actually has three wavelength components, but here, they are collected at one wavelength position for the sake of simplicity.
- the visible light image is not limited to the illustrated example, and may be expressed in a color system other than RGB.
- The grid of each image in FIG. 8 represents the pixel array, and the bold rectangle indicates the position of one pixel of interest. The darkness of the shading assigned to each pixel represents the filter coefficient (that is, the weight in the filter operation) assigned to that pixel.
- Because the imaging condition C1 represents the combination of “sunny” and “daytime”, more pixels are selected as filter taps from the visible light image, and the pixels of the visible light image are given larger filter coefficients.
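- The condition-dependent tap selection and weighting can be sketched as follows. The 3x3 tap neighbourhoods and all coefficient values are illustrative assumptions, not the learned configurations of the patent; they merely reflect the tendencies described here (visible light dominant under C1, NIR dominant under C2).

```python
import numpy as np

# Hypothetical per-condition filter configurations (values illustrative).
FILTER_CONFIGS = {
    "C1": {"visible": 0.80, "nir": 0.15, "fir": 0.05},   # sunny daytime
    "C2": {"visible": 0.30, "nir": 0.55, "fir": 0.15},   # sunny night
}

def filter_pixel(visible, nir, fir, y, x, condition):
    """One output pixel as a weighted sum of 3x3 filter taps taken from
    each of the three registered images (uniform weights within a tap)."""
    cfg = FILTER_CONFIGS[condition]
    out = 0.0
    for img, w in ((visible, cfg["visible"]), (nir, cfg["nir"]), (fir, cfg["fir"])):
        out += w * img[y-1:y+2, x-1:x+2].mean()
    return out

vis = np.full((5, 5), 100.0)
nir = np.full((5, 5), 60.0)
fir = np.full((5, 5), 20.0)
print(filter_pixel(vis, nir, fir, 2, 2, "C1"))   # 0.8*100 + 0.15*60 + 0.05*20 = 90.0
```

Scanning this operation over every pixel position yields the output color image; switching the condition key switches the whole filter configuration.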
- FIG. 9 is an explanatory diagram for describing a second example of imaging conditions.
- an exemplary visible light image Im12 is shown.
- the weather when the visible light image Im12 is captured is “sunny”, and the time zone is “night”. That is, the imaging condition C2 associated with the visible light image Im12 represents a combination of “sunny” and “night”.
- FIG. 10 is an explanatory diagram for describing an example of a filter configuration corresponding to the imaging condition C2.
- Under the imaging condition C2, pixels within a wider range of the visible light image are selected as filter taps in order to reduce the influence of noise that tends to appear in visible light images captured at night; in addition, more pixels are selected from the NIR image and the FIR image than under the imaging condition C1. The largest filter coefficient is given to the pixel of interest in the NIR image.
- FIG. 11 is an explanatory diagram for describing a third example of imaging conditions.
- an exemplary visible light image Im13 is shown.
- the weather when the visible light image Im13 is captured is “mist”, and the time zone is “daytime”. That is, the imaging condition C3 associated with the visible light image Im13 represents a combination of “mist” and “daytime”.
- FIG. 12 is an explanatory diagram for describing an example of a filter configuration corresponding to the imaging condition C3.
- pixels within a wider range of each image are selected as filter taps in consideration of the effect of light scattering, and a large filter coefficient is given to the target pixel of the FIR image in order to utilize far-infrared rays, which travel in relatively straight lines through mist.
- the filter configuration described above is merely an example for explanation.
- the number and arrangement of filter taps for each image and the filter coefficients for each filter tap may be configured in any manner.
- for example, no filter tap may be selected from one or two of the FIR image, the NIR image, and the visible light image.
- the generation unit 160 may perform filtering with a filter configuration determined in advance through a learning process.
- in the learning process, many combinations of a set of FIR, NIR, and visible light images showing a certain subject (calibrated as necessary) and a color image showing the same subject captured under good imaging conditions (that is, with sufficiently good image quality) are prepared. These images correspond to the student images and teacher images of supervised learning.
- a filter configuration for generating a color image with good image quality from the set of FIR image, NIR image, and visible light image is determined.
- a filter configuration optimized for each imaging condition can be determined separately.
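The patent does not specify the learning algorithm. One common way to fit fixed filter coefficients from student/teacher pairs of this kind is linear least squares; the sketch below is an assumption along those lines, with illustrative names throughout.

```python
import numpy as np

def learn_filter_coefficients(student_taps, teacher_pixels):
    """Fit one coefficient per filter tap by linear least squares.

    Each row of `student_taps` stacks the tap values drawn from the FIR,
    NIR, and visible-light student images at one pixel position, and
    `teacher_pixels` holds the corresponding pixel of the good-quality
    teacher color image (one channel). A separate fit could be run per
    imaging condition to obtain condition-specific configurations.
    """
    coeffs, _, _, _ = np.linalg.lstsq(student_taps, teacher_pixels, rcond=None)
    return coeffs

# Synthetic check: recover known mixing weights from noiseless data.
rng = np.random.default_rng(0)
X = rng.random((200, 3))            # 200 samples, 3 taps
true_w = np.array([0.2, 0.3, 0.5])  # hypothetical "ideal" coefficients
y = X @ true_w
learned = learn_filter_coefficients(X, y)
```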
- the database 170 stores filter configuration data indicating the filter configuration for each imaging condition learned in advance as described above.
- the generation unit 160 acquires, from the database 170, the filter configuration data corresponding to the imaging condition indicated by the imaging condition information input from the determination unit 140, and sets the filter taps and filter coefficients indicated by the acquired filter configuration data. Then, the generation unit 160 generates a color image by repeating the filtering with the set filter configuration while sequentially scanning the pixels of the images input from the image acquisition unit 120.
- the generation unit 160 may display the generated color image on the screen of the display 110.
- the generation unit 160 may output the generated color image to a subsequent application.
- the subsequent application may be a driving assistance application such as ADAS (Advanced Driver Assistance Systems).
- the driving support application can execute driving support processing such as pedestrian detection, collision warning notification, or parking support information presentation based on the color image generated by the generation unit 160, for example.
- the generation unit 160 adjusts the filter coefficient based on the difference between the imaging condition determined by the determination unit 140 and the imaging condition during learning, and executes the filtering using the adjusted filter coefficient.
- the imaging conditions are expressed by numerical values.
- the time zone may be expressed by a numerical value within a predetermined range in which the darkest night is the lower limit and the brightest noon is the upper limit.
- the weather may be expressed by a numerical value such as a cloud amount.
- An integrated numerical value based on a combination of time zone, weather, and environmental illumination may be calculated.
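As one illustrative way to compute such an integrated numeric value, the sketch below folds the three factors into a single score. The weighting and the log-scale illuminance mapping are assumptions for illustration, not values given in the text.

```python
import math

def imaging_condition_value(time_brightness, cloud_cover, illuminance_lux):
    """Fold time-of-day brightness (0 = darkest night, 1 = brightest noon),
    cloud cover (0 = clear, 1 = overcast), and measured illuminance into a
    single value in [0, 1]. Log-scaling the illuminance reflects the very
    wide range between night (~1 lux) and direct sunlight (~100,000 lux).
    """
    illum = min(math.log10(max(illuminance_lux, 1.0)) / 5.0, 1.0)
    return 0.4 * time_brightness + 0.2 * (1.0 - cloud_cover) + 0.4 * illum
```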
- an optimal filter configuration is determined for several representative (discrete) imaging condition values, and filter configuration data indicating the determined filter configuration is stored in the database 170.
- the generation unit 160 acquires from the database 170 the filter configuration data learned under the condition closest to the imaging condition determined by the determination unit 140, adjusts the filter coefficients indicated by the acquired data based on the difference in imaging conditions, and uses the adjusted coefficients for color image generation. For example, if the current time is darker than the learning time, the generation unit 160 may reduce the weights of the visible light image pixels and instead increase the weights of the NIR image and FIR image pixels. Such adjustment makes it possible to change the filter configuration for color image generation continuously, following continuous changes in the imaging conditions. Accordingly, it is possible to prevent an unnatural color image whose appearance changes suddenly and discontinuously from being provided to the user or a subsequent application.
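A hypothetical linear adjustment rule of this kind might look as follows. The aggregate per-image weights, the 0 (dark) to 1 (bright) condition scale, and the shift factor `k` are assumptions introduced for illustration.

```python
def adjust_coefficients(coeffs, learned_condition, current_condition, k=0.5):
    """Shift weight from visible-light taps toward NIR/FIR taps when the
    current scene is darker than the one the coefficients were learned
    under. `coeffs` holds one aggregate weight per source image.
    """
    delta = k * (learned_condition - current_condition)  # > 0: darker now
    shift = min(max(delta, 0.0), coeffs["vis"])  # never drive vis negative
    return {
        "vis": coeffs["vis"] - shift,
        "nir": coeffs["nir"] + shift / 2.0,
        "fir": coeffs["fir"] + shift / 2.0,
    }
```

Because the adjustment is continuous in the condition value, the output coefficients track gradual lighting changes, which matches the stated goal of avoiding sudden, discontinuous changes in the generated image.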
- the database 170 stores a plurality of sets of the above-described filter configuration data indicating filter configurations determined in advance for each imaging condition candidate.
- Each set of filter configuration data indicates the pixel position of the filter tap to be selected from each of the FIR image, NIR image and visible light image, and the filter coefficient to be applied to the respective filter tap.
- Each set of filter configuration data is associated with imaging condition information that identifies the corresponding imaging condition.
- FIG. 13 is a flowchart illustrating an example of the flow of color image generation processing according to the first embodiment.
- the color image generation process shown in FIG. 13 is typically repeated for each of a series of frames constituting a video.
- the camera module 102 captures an original image showing a subject in the FIR region, the NIR region, and the visible light region (step S100).
- the image acquisition unit 120 performs preliminary processing such as calibration on the original image captured by the camera module 102 as necessary, and acquires an FIR image, an NIR image, and a visible light image (step S105).
- the data acquisition unit 130 acquires auxiliary data used for generating a color image in the image processing apparatus 100 (step S110).
- the auxiliary data acquired here may include some of positioning data, weather data, illuminance data, temperature data, and humidity data.
- the determination unit 140 determines imaging conditions when an image is captured by the camera module 102 using, for example, auxiliary data input from the data acquisition unit 130 (step S120). Then, the determination unit 140 outputs imaging condition information indicating the determined imaging condition including one or more of time zone, weather, and environmental illuminance to the generation unit 160.
- the generation unit 160 acquires the filter configuration data corresponding to the imaging condition determined by the determination unit 140 from the database 170, and sets the filter configuration indicated by the acquired filter configuration data (step S140). Then, the generation unit 160 generates a color image by filtering the FIR image, the NIR image, and the visible light image input from the image acquisition unit 120 with the set filter configuration (step S150).
- when there is a subsequent application (step S160), the generation unit 160 outputs the generated color image to the application (for example, a driving support application) (step S165). Then, the generation unit 160 (or the subsequent application) displays the color image on the screen of the display 110 (step S170).
- FIG. 14A is a flowchart showing a first example of the flow of the filter configuration setting process that can be executed in step S140 of FIG.
- the generation unit 160 acquires filter configuration data corresponding to the imaging condition from the database 170, for example by a lookup using the imaging condition indicated by the imaging condition information input from the determination unit 140 (step S141).
- the generation unit 160 sets filter taps that can include pixels of the FIR image, the NIR image, and the visible light image according to the acquired filter configuration data (step S143).
- the generation unit 160 sets the filter coefficient indicated by the filter configuration data in the set filter tap (step S145).
- FIG. 14B is a flowchart showing a second example of the flow of the filter configuration setting process that can be executed in step S140 of FIG.
- the generation unit 160 acquires filter configuration data corresponding to the imaging conditions determined by the determination unit 140 from the database 170 (step S141).
- the generation unit 160 sets filter taps that can include pixels of the FIR image, the NIR image, and the visible light image according to the acquired filter configuration data (step S143).
- the generation unit 160 adjusts the filter coefficient indicated by the filter configuration data based on the difference between the imaging condition determined by the determination unit 140 and the imaging condition at the time of learning (step S147).
- the generation unit 160 sets the adjusted filter coefficient in the filter tap set in step S143 (step S149).
- Second Embodiment: In the first embodiment described in the previous section, one filter configuration is used to generate one color image. In the second embodiment, by contrast, the image is segmented into several partial regions, and an optimal filter configuration is used for each partial region. By switching or adaptively selecting among these filter configurations, a further improvement in color image quality can be expected.
- FIG. 15 is a block diagram illustrating an example of a logical function configuration of the image processing apparatus 200 according to the second embodiment.
- the image processing apparatus 200 includes an image acquisition unit 120, a data acquisition unit 230, a determination unit 140, a recognition unit 250, a generation unit 260, and a database 270.
- the data acquisition unit 230 acquires auxiliary data used for generating a color image in the image processing apparatus 200.
- for example, the data acquisition unit 230 may acquire positioning data from the sensor module 104, and may acquire weather data from an external data server (or from user input) via the communication interface 112.
- the data acquisition unit 230 may acquire illuminance data, temperature data, and humidity data from the sensor module 104.
- the data acquisition unit 230 may acquire driving data from the vehicle network via the vehicle NW interface 113.
- the data acquisition unit 230 may acquire depth data (also referred to as a depth map) indicating the distance to the subject measured by the depth sensor for each pixel from the sensor module 104.
- the depth data can be used for image segmentation in the recognition unit 250 (described later) or for filter configuration setting in the generation unit 260.
- the recognizing unit 250 segments an image into a plurality of partial regions in at least one of the FIR image, the NIR image, and the visible light image input from the image acquisition unit 120. Then, the recognizing unit 250 generates region information for specifying each segmented partial region, and outputs the generated region information to the generating unit 260.
- the region information here may be information indicating the position, size, and shape of each region, or may be a bitmap in which the bit values of the pixels belonging to each region are “1” and the bit values of the other pixels are “0”.
- the area information may include the type (biological area or object area) of each area and identification information (area ID, etc.).
- the recognizing unit 250 recognizes a living body region where a living body is reflected in the image.
- the living body here may be only a human body or may include an animal body in addition to the human body.
- the recognition unit 250 may recognize a human body region in a visible light image or an NIR image using any existing human body recognition technology (for example, a technology based on a known image feature amount of a human body).
- the recognizing unit 250 may recognize an area showing a relatively high gradation value in the FIR image as a living body area. When one or more biological areas are recognized in the image, the recognizing unit 250 generates biological area information that identifies each recognized biological area.
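A crude stand-in for that recognition step is to threshold the FIR gradation values and emit a 0/1 bitmap of the kind the region information may take. The threshold value and function name are assumptions; a real recognizer would additionally group the flagged pixels into connected regions.

```python
import numpy as np

def warm_region_bitmap(fir, threshold):
    """Mark pixels whose FIR gradation exceeds `threshold` as candidate
    living-body pixels ("1"), all others as "0" — mirroring the bitmap
    form of region information described above.
    """
    return (fir > threshold).astype(np.uint8)
```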
- the recognition unit 250 may recognize an object region in which an object defined in advance in an image is displayed using any existing object recognition technology.
- the object here may include, for example, a vehicle, a traffic light, a road sign, or the like.
- the recognition unit 250 generates object area information that identifies each of the object areas recognized in the image.
- the recognition unit 250 may use the depth data acquired by the data acquisition unit 230 to distinguish a certain living body or object shown in the image from another living body or object that, for example, appears to overlap it.
- FIG. 16 is an explanatory diagram for explaining an example of the area recognition process executed by the recognition unit 250.
- a visible light image Im21 is shown as an example on the left of FIG. 16. In the visible light image Im21, two persons and one vehicle are shown.
- the right side of FIG. 16 shows the result of the area recognition process performed by the recognition unit 250 on the visible light image Im21.
- the visible light image Im21 is segmented into four regions R0, R11, R12, and R2.
- Regions R11 and R12 are biological regions, each showing one person.
- the region R2 is an object region in which the vehicle is reflected, and is a non-biological region.
- the region R0 is a non-living region where no subject is shown.
- the generation unit 260 generates a color image by filtering filter taps including pixels of the FIR image, the NIR image, and the visible light image. More specifically, like the generation unit 160 according to the first embodiment, the generation unit 260 performs the filtering for color image generation with a filter configuration that differs depending on the imaging condition determined by the determination unit 140. Further, in the present embodiment, the generation unit 260 varies the filter configuration for color image generation depending on the region information input from the recognition unit 250. As an example, the generation unit 260 may perform the filtering for a biological region with a filter configuration different from that used for non-biological regions. The generation unit 260 may also perform the filtering for a biological region with a filter configuration that differs depending on the distance from the camera to the living body.
- FIG. 17A and FIG. 17B show examples of filter configurations that can be set for each region by the generation unit 260 based on the result of the region recognition processing illustrated in FIG.
- a filter configuration F11 for a biological region and a filter configuration F12 for a non-biological region are set as filter configurations corresponding to a certain imaging condition.
- Filter configuration F11 is applied to regions R11 and R12.
- the filter configuration F12 is applied to the regions R0 and R2.
- the filter configuration F11 for the living body region has, for example, a filter coefficient for improving the discrimination of the living body more than the filter configuration F12 for the non-living region.
- in the example of FIG. 17B, as the filter configurations corresponding to a certain imaging condition, a first filter configuration F21 for the biological region R11, a second filter configuration F22 for the biological region R12, a third filter configuration F23 for the object region R2, and a fourth filter configuration F24 for the non-biological region R0 are set.
- for example, the first filter configuration F21, for the biological region R11 that is estimated from the depth data to show a closer human body, may have filter coefficients that increase brightness or saturation compared with the other filter configurations.
- these filter configurations also differ as the imaging conditions change.
- an optimal filter configuration can be determined in advance through learning processing for each combination of imaging conditions and region types (or for each representative value of distance to the subject).
- for each region, the generation unit 260 acquires from the database 270 the filter configuration data corresponding to the combination of the region type and the imaging condition indicated by the imaging condition information. Then, the generation unit 260 generates a color image by repeating the filtering, with the filter configuration indicated by the acquired filter configuration data, while sequentially scanning the pixels of the images input from the image acquisition unit 120.
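The lookup could be keyed on (imaging condition, region type) pairs, for example as below. The fallback to the non-biological configuration when a pair is missing, the key strings, and the database contents are all assumptions, not something the text specifies.

```python
def select_filter_config(database, imaging_condition, region_type):
    """Return the filter configuration stored for the given
    (imaging condition, region type) combination, falling back to the
    non-biological configuration for that condition if no entry exists.
    """
    key = (imaging_condition, region_type)
    if key in database:
        return database[key]
    return database[(imaging_condition, "non_biological")]

# Hypothetical database contents: one aggregate weight set per entry.
db = {
    ("sunny_day", "biological"): {"vis": 0.5, "nir": 0.2, "fir": 0.3},
    ("sunny_day", "non_biological"): {"vis": 0.7, "nir": 0.2, "fir": 0.1},
}
```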
- the generation unit 260 may display the generated color image on the screen of the display 110.
- the generation unit 260 may output the generated color image to a subsequent application such as a driving support application. Further, the generation unit 260 may output the application support information to a subsequent application.
- the application support information here may include, for example, one or more of the following:
  a) region information including at least one of biological region information and object region information;
  b) likelihood information calculated for each region of a);
  c) image feature amounts that can be calculated in the course of living body recognition or object recognition;
  d) per-color probability distributions that can be calculated in the course of generating the color image.
  For example, the gradation value of each region in a nighttime FIR image has a strong correlation with the likelihood that a living body appears in that region.
- the application support information described above may be reused in subsequent applications to avoid redundant generation of overlapping information (for example, re-segmentation of images or recalculation of image feature amounts).
- the database 270 stores a plurality of sets of filter configuration data indicating filter configurations determined in advance for each combination of imaging condition candidates and region types. Each set of filter configuration data is associated with imaging condition information for identifying a corresponding imaging condition and a region type. Each set of filter configuration data may further be associated with a representative value of the distance from the camera to the subject. Further, the database 270 may store image feature amount data (for a human body, a living body, or an object) that can be used in the region recognition processing by the recognition unit 250.
- FIG. 18 is a flowchart illustrating an example of the flow of color image generation processing according to the second embodiment.
- the color image generation process shown in FIG. 18 is typically repeated for each of a series of frames constituting a video.
- the camera module 102 captures an original image showing a subject in the FIR region, the NIR region, and the visible light region (step S100).
- the image acquisition unit 120 performs preliminary processing such as calibration on the original image captured by the camera module 102 as necessary, and acquires an FIR image, an NIR image, and a visible light image (step S105).
- the data acquisition unit 230 acquires auxiliary data used for generating a color image in the image processing apparatus 200 (step S210).
- the auxiliary data acquired here may include depth data in addition to data used for determining the imaging condition.
- the determination unit 140 determines an imaging condition when an image is captured by the camera module 102 using, for example, auxiliary data input from the data acquisition unit 230 (step S120). Then, the determination unit 140 outputs imaging condition information indicating the determined imaging condition to the generation unit 260.
- the recognition unit 250 recognizes the living body region in the image by detecting the living body shown in the image (step S230).
- the generation unit 260 acquires the filter configuration data for biological regions corresponding to the determined imaging condition from the database 270, and sets the filter configuration indicated by the acquired data for the biological regions (step S240). Note that when no living body appears in the image, step S240 is skipped. Further, the generation unit 260 acquires the filter configuration data for non-biological regions corresponding to the determined imaging condition from the database 270, and sets the filter configuration indicated by the acquired data for the non-biological regions (step S245).
- the generation unit 260 generates a color image by filtering the FIR image, the NIR image, and the visible light image input from the image acquisition unit 120 with the set filter configurations, which differ for each region type (step S250).
- when there is a subsequent application (step S260), the generation unit 260 outputs the generated color image and the application support information to the application (for example, a driving support application) (step S265). Then, the generation unit 260 (or the subsequent application) displays the color image on the screen of the display 110 (step S170).
- the filter configuration may be switched for each pixel, for example.
- the switching of the filter configuration may be performed based on other arbitrary information.
- the filter configuration may be adaptively selected based on a local image feature amount such as edge strength, band, or activity of at least one of the FIR image, the NIR image, and the visible light image. Image feature quantities across different types of images, such as cross-correlation between FIR images and NIR images, may be used.
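One simple local feature of the kind mentioned here is edge strength. The 3x3 gradient-magnitude sketch below is an assumption about the concrete feature; a selector could threshold its output to pick between filter configurations per pixel.

```python
import numpy as np

def local_edge_strength(img, y, x):
    """Mean absolute horizontal plus vertical gradient over the 3x3
    window around (y, x) — a minimal local "activity" measure usable
    for adaptive filter-configuration selection.
    """
    h, w = img.shape
    y0, y1 = max(y - 1, 0), min(y + 2, h)
    x0, x1 = max(x - 1, 0), min(x + 2, w)
    win = img[y0:y1, x0:x1].astype(float)
    gy = np.abs(np.diff(win, axis=0)).mean() if win.shape[0] > 1 else 0.0
    gx = np.abs(np.diff(win, axis=1)).mean() if win.shape[1] > 1 else 0.0
    return gx + gy
```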
- the mapping between image feature amounts and the optimal filter configuration may be determined through a learning process, or may be modeled or tuned by a developer.
- FIG. 19 is an explanatory diagram for describing some application examples of the technology according to the present disclosure.
- the vehicle 1 as an example illustrated in FIG. 19 includes an in-vehicle system 10.
- the in-vehicle system 10 includes an image processing system 20, an application module 30, and one or more peripheral modules 40.
- the image processing system 20 includes a camera module 102 and an image processing module 100 or 200 connected to the camera module 102.
- the image processing module 100 or 200 may be composed of a single chip (or processor), or may be a collection of a plurality of chips.
- the application module 30 is connected to the image processing system 20 via a connection terminal and a signal line.
- the application module 30 receives a color image generated by the image processing module 100 or 200 and executes an application based on the received color image.
- the application module 30 may be implemented in the form of, for example, a CPU or a SoC (System-on-a-chip).
- the peripheral module 40 includes a display, for example, and a color image processed by the application module 30 is displayed on the screen of the display.
- as described above, a far-infrared image, a near-infrared image, and a visible light image showing a common subject are acquired, and a color image is generated by filtering filter taps that include pixels of the acquired far-infrared image, near-infrared image, and visible light image. According to such a configuration, it is possible to effectively improve the image quality of the generated color image by utilizing the properties of the far-infrared image, the near-infrared image, and the visible light image.
- the far-infrared image provides information on which region in the image should be expressed more clearly, especially in applications where the visibility of the living body is important, and contributes to enhancement of the color of the living body region.
- Near-infrared images contribute to sharpening of subject details in situations where ambient light is scarce.
- a visible light image provides color information directly to a color image.
- the filtering is executed with different filter configurations depending on the imaging conditions when the input image is captured. Therefore, it is possible to generate a high-quality color image more robustly by adaptively combining the three types of images with respect to changes in imaging conditions such as changes in time or weather.
- the filtering is executed with a filter configuration determined in advance through a learning process. Therefore, even in applications where real-time performance is important, the filter configuration can be set quickly or changed adaptively, and a high-quality color image can be generated stably without significant delay.
- a biological region in which a living body appears is recognized in one of the input images, and the filtering for the biological region is performed with a filter configuration different from that used for non-biological regions. Therefore, the living body can be highlighted in the color image without being buried in the background, and the reliability of living body recognition in subsequent processing can be improved.
- the above filtering is performed on an area in which the subject appears with different filter configurations depending on the distance from the camera to the subject. Therefore, for example, in a driving support application, it is possible to particularly emphasize an object such as a nearby pedestrian or an obstacle to be noted by the driver in the color image.
- a series of control processing by each device described in this specification may be realized using any of software, hardware, and a combination of software and hardware.
- the program constituting the software is stored in advance in a storage medium (non-transitory medium) provided inside or outside each device.
- Each program is read into a RAM (Random Access Memory) at the time of execution and executed by a processor such as a CPU (Central Processing Unit).
- processing described using the flowchart in this specification does not necessarily have to be executed in the order shown in the flowchart. Some processing steps may be performed in parallel. Further, additional processing steps may be employed, and some processing steps may be omitted.
- (1) An image processing apparatus including: an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image showing a common subject; and a generation unit that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- (2) The image processing apparatus according to (1), further including a determination unit that determines an imaging condition under which the far-infrared image, the near-infrared image, and the visible light image are captured, wherein the generation unit executes the filtering with a different filter configuration depending on the imaging condition determined by the determination unit.
- (6) The image processing apparatus according to any one of (1) to (5), further including a recognition unit that recognizes a biological region in which a living body appears in at least one of the far-infrared image, the near-infrared image, and the visible light image, wherein the generation unit performs the filtering for the biological region with a filter configuration different from the filter configuration used for non-biological regions.
- (7) The image processing apparatus according to (6), wherein the generation unit executes the filtering for the biological region with a different filter configuration depending on the distance from a camera to the living body.
- (8) The image processing apparatus according to (6) or (7), wherein the recognition unit generates biological region information that identifies the recognized biological region, and the generation unit outputs the biological region information together with the color image to a subsequent application.
- (9) The image processing apparatus according to any one of (1) to (7), wherein the image processing apparatus is mounted on a vehicle, and the generation unit outputs the color image to a driving support application.
- (10) An image processing method including: acquiring a far-infrared image, a near-infrared image, and a visible light image showing a common subject; and generating a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- (11) A program for causing a computer that controls an image processing apparatus to function as: an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image showing a common subject; and a generation unit that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- (12) An image processing system including: a camera module that images a subject in the far-infrared region, the near-infrared region, and the visible light region, and outputs a corresponding far-infrared image, near-infrared image, and visible light image; and an image processing module that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- (13) The image processing system according to (12), wherein the image processing system is mounted on a vehicle and further includes an application module that executes an application based on the color image generated by the image processing module.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
- Color Television Image Signal Generators (AREA)
Description
Note that the above effects are not necessarily limiting; together with or instead of the above effects, any of the effects described in this specification, or other effects that can be grasped from this specification, may be achieved.
1. Basic Principle
1-1. Various Uses of Infrared Rays
1-2. Properties of Each Image
2. First Embodiment
2-1. Hardware Configuration
2-2. Functional Configuration
2-3. Process Flow
3. Second Embodiment
3-1. Functional Configuration
3-2. Process Flow
4. Application Examples
5. Summary
[1-1. Various Uses of Infrared Rays]
FIG. 1 is an explanatory diagram for describing various wavelength-dependent uses of infrared images. The horizontal direction in FIG. 1 corresponds to the wavelength of infrared rays, with the wavelength increasing from left to right. Light having a wavelength of 0.7 μm or less is visible light, and human vision senses this visible light. The wavelength region adjacent to the visible light region is the near-infrared (NIR) region, and infrared rays belonging to the NIR region are called near-infrared rays. The upper limit of the NIR region differs depending on the definition, but is often taken to lie between 2.5 μm and 4.0 μm. The relatively long-wavelength portion of the NIR region is sometimes called the short-wavelength infrared (SWIR) region. Near-infrared rays can be used, for example, for night vision, see-through imaging, optical communication, and ranging. A camera that captures an NIR image usually first irradiates its surroundings with infrared rays and captures the reflected light. The wavelength region adjacent to the NIR region on the long-wavelength side is the far-infrared (FIR) region, and infrared rays belonging to the FIR region are called far-infrared rays. Far-infrared rays can be used for night vision, thermography, and heating. Infrared rays emitted by black-body radiation from an object correspond to far-infrared rays. Therefore, a night vision apparatus using far-infrared rays can generate an FIR image by capturing black-body radiation from an object without irradiating infrared rays. The relatively short-wavelength portion of the FIR region is sometimes called the mid-wavelength infrared (MWIR) region. Since substance-specific absorption spectra appear in the mid-wavelength infrared range, mid-wavelength infrared rays can be used to identify substances.
Since visible light images can intrinsically express color, they are already widely used for the purpose of allowing a user or some application to recognize a subject shown in an image. A drawback of visible light images is that their visibility deteriorates significantly in situations where ambient light is scarce (for example, at night or in bad weather). In addition, since visible light is sensed by human vision, creating ambient light by irradiating visible light (a so-called flash) is avoided in many situations. Infrared images can compensate for such drawbacks of visible light images. For example, in the technique proposed in Patent Literature 2, an NIR image with higher visibility is provided to the driver instead of a visible light image under poor conditions such as at night or in bad weather. Infrared images are usually colorless grayscale images, but Patent Literature 3 proposes colorizing an infrared image using color information from a visible light image. In general, it is desirable that a color image provided to a user or an application have image quality as good as possible.
In this section, an image processing apparatus 100 serving as an in-vehicle device is described as an example. Although the image processing apparatus 100 has a configuration partly specialized for mounting on a vehicle, the applications of the technology according to the present disclosure are not limited to this example. The technology according to the present disclosure is applicable to the generation of color images in any kind of apparatus, for example security equipment such as surveillance cameras, medical/diagnostic equipment, inspection equipment, or information devices such as smartphones and tablet PCs (Personal Computers).
FIG. 3 is a block diagram showing an example of the hardware configuration of the image processing apparatus 100 according to the first embodiment. Referring to FIG. 3, the image processing apparatus 100 includes a camera module 102, a sensor module 104, an input interface 106, a memory 108, a display 110, a communication interface 112, a vehicle network (NW) interface 113, a storage 114, a bus 116, and a processor 118.
The camera module 102 is a module that images a subject in the FIR region, the NIR region, and the visible light region. The camera module 102 typically includes an array of imaging elements that sense far-infrared rays with wavelengths belonging to the FIR region, an array of imaging elements that sense near-infrared rays with wavelengths belonging to the NIR region, and an array of imaging elements that sense visible light. These arrays may be arranged on the same substrate or on different substrates. The camera module 102 may further include a light-emitting element that irradiates near-infrared rays. The camera module 102 captures an FIR image, an NIR image, and a visible light image, for example, periodically or in response to a trigger such as user input. These images may be part of a series of frames constituting a video.
The sensor module 104 is a module having a group of sensors that may include a positioning sensor, an acceleration sensor, a depth sensor, an illuminance sensor, a temperature sensor, and a humidity sensor. The positioning sensor measures the current position of the image processing apparatus 100 based on, for example, GPS (Global Positioning System) signals from GPS satellites or wireless signals from wireless access points. The acceleration sensor measures the three-axis acceleration applied to the image processing apparatus 100. The depth sensor measures the distance (i.e., depth) to a subject within the angle of view of the camera module 102. The illuminance sensor measures the illuminance of the environment in which the image processing apparatus 100 is placed. The temperature sensor and the humidity sensor measure the temperature and humidity of the environment, respectively. The sensor data generated by the sensor module 104 can be used for purposes such as image calibration and determination of imaging conditions, described later.
The input interface 106 is used for a user to operate the image processing apparatus 100 or to input information into the image processing apparatus 100. The input interface 106 may include, for example, an input device such as a touch sensor, a keypad, a button, or a switch. The input interface 106 may also include a microphone for voice input and a speech recognition module. It may further include a remote control module that receives commands selected by the user from a remote device.
The memory 108 is a storage medium that may include RAM (Random Access Memory) and ROM (Read Only Memory). The memory 108 is coupled to the processor 118 and stores programs and data for the processing executed by the processor 118.
The display 110 is a display module having a screen that displays images. The display 110 may be, for example, an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or a CRT (Cathode Ray Tube).
The communication interface 112 is a module that mediates communication between the image processing apparatus 100 and other apparatuses. The communication interface 112 establishes a communication connection according to any wireless or wired communication protocol.
The vehicle NW interface 113 is a module that mediates communication with the vehicle network of the vehicle in which the image processing apparatus 100 is mounted. The vehicle NW interface 113 is connected to the vehicle network, for example, via a terminal (not shown), and acquires data generated on the vehicle side, such as vehicle speed data and steering angle data.
The storage 114 is a storage device that accumulates image data and stores a database used in the image processing executed by the image processing apparatus 100. The storage 114 contains a storage medium such as semiconductor memory or a hard disk. The programs and data described in this specification may instead be acquired from a data source external to the image processing apparatus 100 (for example, a data server, network storage, or external memory).
The bus 116 interconnects the camera module 102, the sensor module 104, the input interface 106, the memory 108, the display 110, the communication interface 112, the vehicle NW interface 113, the storage 114, and the processor 118.
The processor 118 is a processing module such as a CPU (Central Processing Unit) or a DSP (Digital Signal Processor). By executing a program stored in the memory 108 or another storage medium, the processor 118 operates the functions, described later, for generating a color image with improved image quality.
FIG. 5 is a block diagram showing an example of the configuration of logical functions realized by the components of the image processing apparatus 100 shown in FIG. 3 operating in cooperation with one another. Referring to FIG. 5, the image processing apparatus 100 includes an image acquisition unit 120, a data acquisition unit 130, a determination unit 140, a generation unit 160, and a database 170.
The image acquisition unit 120 acquires a far-infrared (FIR) image, a near-infrared (NIR) image, and a visible light image in which a common subject appears from the camera module 102, and outputs the acquired images to the determination unit 140 and the generation unit 160. The images acquired by the image acquisition unit 120 may be images that have undergone primitive processing such as signal amplification, demosaicing, and noise removal. The image acquisition unit 120 also performs, as necessary, preliminary processing for color image generation, such as image calibration, separation of wavelength components, and integration of viewpoints. Note that, in this specification, an image whose quality has been improved by also taking the FIR image and the NIR image into account is referred to as a color image, and the image captured by the visible light camera before quality improvement is referred to as a visible light image (a visible light image also has color).
The data acquisition unit 130 acquires various data other than images that are used for generating color images in the image processing apparatus 100. For example, the data acquisition unit 130 may acquire positioning data indicating the geographical position of the image processing apparatus 100 from the sensor module 104, and weather data from an external data server via the communication interface 112. The positioning data and weather data are used to determine the weather at the current location when the determination unit 140, described later, determines the imaging conditions. The weather data may instead be input by the user via the input interface 106. The data acquisition unit 130 may also acquire illuminance data, temperature data, and humidity data from the sensor module 104. These data can likewise be used by the determination unit 140 to determine the imaging conditions.
The determination unit 140 determines the imaging conditions under which the FIR image, the NIR image, and the visible light image were captured. In this embodiment, the imaging conditions include one or more of time band, weather, and ambient illuminance. For example, the determination unit 140 determines the time band to which the current time belongs. The time band may be divided in any way, for example into two types such as "daytime" and "nighttime", or four types such as "morning", "noon", "evening", and "night". The determination unit 140 may also determine the weather at the current location indicated by the positioning data by referring to weather data acquired from an external server or input by the user. The weather may likewise be defined in any way. As an example, the determination unit 140 may classify the weather at that point in time as one of "sunny", "cloudy", "rain", "snow", and "fog". Instead of determining the weather from weather data, the determination unit 140 may estimate the weather at the current location from temperature data and humidity data. The determination unit 140 may also determine the ambient illuminance based on the illuminance data from the sensor module 104. The determination unit 140 outputs imaging condition information indicating the results of these determinations to the generation unit 160.
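The determination logic described here might be sketched as follows. The function name, the thresholds, and the fallback estimation rule are all illustrative assumptions, not values given in the publication.

```python
def determine_imaging_conditions(hour: int, lux: float,
                                 temp_c: float = None,
                                 humidity_pct: float = None,
                                 weather: str = None) -> dict:
    """Sketch of the determination unit: classify time band, weather and
    ambient illuminance. All thresholds are illustrative assumptions."""
    # Two-way time-band split; a four-way split would work the same way.
    time_band = "daytime" if 6 <= hour < 18 else "nighttime"
    illuminance = "bright" if lux >= 50.0 else "dark"
    if weather is None and humidity_pct is not None:
        # Fallback: estimate weather from temperature/humidity data, as
        # the text allows when no weather data is available.
        if humidity_pct >= 90.0:
            weather = "snow" if (temp_c is not None and temp_c <= 0.0) else "rain"
        else:
            weather = "sunny"
    return {"time_band": time_band, "weather": weather,
            "illuminance": illuminance}

cond = determine_imaging_conditions(hour=22, lux=3.0,
                                    temp_c=15.0, humidity_pct=95.0)
print(cond)  # {'time_band': 'nighttime', 'weather': 'rain', 'illuminance': 'dark'}
```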
The generation unit 160 generates a color image by filtering filter taps that include pixels of the FIR image, the NIR image, and the visible light image. Pixels selected from the FIR image contribute, for example, to identifying subjects in situations with little ambient light, and in particular to enhancing the colors of living-body regions. Pixels selected from the NIR image also contribute to identifying subjects in situations with little ambient light and, owing in part to the effect of near-infrared irradiation, contribute in particular to sharpening the details of subjects. Moreover, infrared rays, which propagate in straighter lines than visible light, also contribute to generating images with high visibility under rain or fog conditions. Pixels selected from the visible light image directly provide color information to the color image.
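A minimal sketch of this filtering, assuming a single co-located tap per source image and fixed placeholder weights; a real implementation would use spatial neighbourhoods as taps and coefficients selected per imaging condition.

```python
import numpy as np

def generate_color_image(fir, nir, vis, w_fir=0.2, w_nir=0.3, w_vis=0.5):
    """Single-tap sketch of filter-tap filtering: each output pixel is a
    weighted combination of the co-located FIR, NIR and visible-light
    pixels. The weights are illustrative placeholders.

    fir, nir: (H, W) grayscale arrays in [0, 1]
    vis:      (H, W, 3) color array in [0, 1]
    """
    # Infrared contributions act as a luminance boost shared by all
    # three color channels (broadcast over the trailing RGB axis).
    luminance_boost = (w_fir * fir + w_nir * nir)[..., np.newaxis]
    out = w_vis * vis + luminance_boost
    return np.clip(out, 0.0, 1.0)

fir = np.zeros((2, 2))            # no thermal signal
nir = np.ones((2, 2))             # strong NIR reflection
vis = np.full((2, 2, 3), 0.5)     # dim visible-light image
print(generate_color_image(fir, nir, vis)[0, 0])  # [0.55 0.55 0.55]
```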
The database 170 stores a plurality of sets of the above-described filter configuration data, each indicating a filter configuration determined in advance for a candidate imaging condition. Each set of filter configuration data indicates the pixel positions of the filter taps to be selected from each of the FIR image, the NIR image, and the visible light image, and the filter coefficients to be applied to the respective filter taps. Each set of filter configuration data is associated with imaging condition information identifying the corresponding imaging condition.
(1) Color image generation process
FIG. 13 is a flowchart showing an example of the flow of the color image generation process according to the first embodiment. The color image generation process shown in FIG. 13 is typically repeated for each of a series of frames constituting a video.
FIG. 14A is a flowchart showing a first example of the flow of the filter configuration setting process that may be executed in step S140 of FIG. 13. Referring to FIG. 14A, the generation unit 160 first acquires from the database 170 the filter configuration data corresponding to the imaging condition, for example by a lookup using the imaging condition indicated by the imaging condition information input from the determination unit 140 (step S141). Next, the generation unit 160 sets filter taps, which may include pixels of the FIR image, the NIR image, and the visible light image, in accordance with the acquired filter configuration data (step S143). The generation unit 160 also sets, on the configured filter taps, the respective filter coefficients indicated by the filter configuration data (step S145).
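Steps S141 to S145 can be sketched as a dictionary lookup. The condition keys, tap offsets, and coefficients below are hypothetical placeholders, not values from the publication.

```python
# Hypothetical filter-configuration database keyed by imaging condition.
FILTER_DB = {
    ("nighttime", "fog"): {
        "taps": [(0, 0), (0, 1), (1, 0)],           # (dy, dx) pixel offsets
        "coeffs": {"fir": 0.35, "nir": 0.45, "vis": 0.20},
    },
    ("daytime", "sunny"): {
        "taps": [(0, 0)],
        "coeffs": {"fir": 0.00, "nir": 0.05, "vis": 0.95},
    },
}

def set_filter_configuration(condition):
    """Look up the configuration for the given imaging condition (S141)
    and return the filter taps (S143) and coefficients (S145) to apply."""
    config = FILTER_DB[condition]
    return config["taps"], config["coeffs"]

taps, coeffs = set_filter_configuration(("nighttime", "fog"))
print(coeffs["nir"])  # 0.45
```

At night in fog, the hypothetical configuration leans on the infrared taps; in clear daylight it relies almost entirely on the visible-light image.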
In the first embodiment described in the previous section, one type of filter configuration is used to generate one color image. In contrast, in the second embodiment, the image is segmented into several partial regions, and an optimal filter configuration is used for each partial region. Such switching, or adaptive selection, of filter configurations promises a further improvement in the image quality of the color image.
The hardware configuration of the image processing apparatus 200 according to the second embodiment may be similar to that of the image processing apparatus 100 described with reference to FIG. 3. FIG. 15 is a block diagram showing an example of the configuration of the logical functions of the image processing apparatus 200 according to the second embodiment. Referring to FIG. 15, the image processing apparatus 200 includes an image acquisition unit 120, a data acquisition unit 230, a determination unit 140, a recognition unit 250, a generation unit 260, and a database 270.
The data acquisition unit 230 acquires auxiliary data used for generating color images in the image processing apparatus 200. For example, like the data acquisition unit 130 according to the first embodiment, the data acquisition unit 230 may acquire positioning data from the sensor module 104, and weather data from an external data server via the communication interface 112 (or input by the user). The data acquisition unit 230 may also acquire illuminance data, temperature data, and humidity data from the sensor module 104, and may acquire driving data from the vehicle network via the vehicle NW interface 113. Furthermore, in this embodiment, the data acquisition unit 230 may acquire from the sensor module 104 depth data (also referred to as a depth map) indicating, for each pixel, the distance to the subject measured by the depth sensor. The depth data can be used for image segmentation in the recognition unit 250, described later, or for setting the filter configuration in the generation unit 260.
The recognition unit 250 segments at least one of the FIR image, the NIR image, and the visible light image input from the image acquisition unit 120 into a plurality of partial regions. The recognition unit 250 then generates region information identifying each segmented partial region and outputs the generated region information to the generation unit 260. The region information here may be information indicating the position, size, and shape of each region, or may be a bitmap in which the bit value of pixels belonging to a region is "1" and that of all other pixels is "0". The region information may also include the type of each region (living-body region or object region) and identification information (such as a region ID).
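The bitmap form of the region information might look like the following sketch, assuming rectangular regions for simplicity; real segmentation would of course produce arbitrarily shaped regions.

```python
import numpy as np

def region_bitmap(shape, regions):
    """Build region information as a bitmap: pixels inside a recognized
    partial region are 1, all others 0 (one of the representations
    described above). Regions are given as (top, left, height, width)
    rectangles purely for illustration."""
    mask = np.zeros(shape, dtype=np.uint8)
    for top, left, height, width in regions:
        mask[top:top + height, left:left + width] = 1
    return mask

mask = region_bitmap((4, 6), [(1, 1, 2, 3)])
print(int(mask.sum()))  # 6 pixels belong to the region
```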
The generation unit 260 generates a color image by filtering filter taps that include pixels of the FIR image, the NIR image, and the visible light image. More specifically, like the generation unit 160 according to the first embodiment, the generation unit 260 executes the filtering for color image generation with filter configurations that differ depending on the imaging conditions determined by the determination unit 140. Furthermore, in this embodiment, the generation unit 260 varies the filter configuration for color image generation depending on the region information input from the recognition unit 250. As an example, the generation unit 260 may execute the filtering for a living-body region with a filter configuration different from that used for non-living-body regions. The generation unit 260 may further execute the filtering for a living-body region with filter configurations that additionally differ depending on the distance from the camera to the living body.
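The per-region switching could be keyed as in this sketch; the 10 m near/far boundary and the key layout are illustrative assumptions rather than anything specified in the text.

```python
def select_region_filter(condition, region_type, distance_m=None):
    """Sketch of the second embodiment's adaptive selection: the filter
    configuration key combines the imaging condition, the region type
    and, for living-body regions, a distance class. A database like the
    one in the first embodiment would then be indexed by this key."""
    if region_type == "living_body" and distance_m is not None:
        # Quantize distance to a representative class; 10 m is an
        # assumed boundary for illustration only.
        distance_class = "near" if distance_m < 10.0 else "far"
        return (condition, region_type, distance_class)
    return (condition, region_type)

print(select_region_filter("nighttime", "living_body", distance_m=4.0))
# ('nighttime', 'living_body', 'near')
```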
a) Region information including at least one of living-body region information and object region information
b) Likelihood information calculated for each region in a)
c) Image feature quantities that may be calculated in connection with living-body recognition or object recognition
d) Per-color probability distributions that may be calculated in connection with color image generation
For example, the gradation value of each region in a nighttime FIR image has a strong correlation with the likelihood that a living body appears in that region. Therefore, providing the likelihood information b) generated using the FIR image to a driving assistance application together with the living-body region information a) can contribute to improving the accuracy of processes such as pedestrian detection in that application. The application support information described above may also be reused in order to avoid redundant generation of duplicate information in downstream applications (for example, segmenting the image again or recalculating image feature quantities).
The database 270 stores a plurality of sets of filter configuration data, each indicating a filter configuration determined in advance for a combination of candidate imaging condition and region type. Each set of filter configuration data is associated with imaging condition information identifying the corresponding imaging condition and with a region type. Each set may further be associated with a representative value of the distance from the camera to the subject. The database 270 may also store image feature quantity data (of human bodies, living bodies, or objects) that can be used in the region recognition process by the recognition unit 250.
FIG. 18 is a flowchart showing an example of the flow of the color image generation process according to the second embodiment. The color image generation process shown in FIG. 18 is typically repeated for each of a series of frames constituting a video.
The technology according to the present disclosure is applicable to products in various forms at different implementation levels. FIG. 19 is an explanatory diagram for describing some application examples of the technology according to the present disclosure. A vehicle 1 shown as an example in FIG. 19 includes an in-vehicle system 10. The in-vehicle system 10 has an image processing system 20, an application module 30, and one or more peripheral modules 40. The image processing system 20 includes a camera module 102 and an image processing module 100 or 200 connected to the camera module 102. The image processing module 100 or 200 may be composed of a single chip (or processor) or may be a collection of a plurality of chips. The application module 30 is connected to the image processing system 20 via connection terminals and signal lines. The application module 30 receives the color image generated by the image processing module 100 or 200 and executes an application based on the received color image. The application module 30 may be implemented, for example, in the form of a CPU or an SoC (System-on-a-Chip). The peripheral modules 40 include, for example, a display, and the color image processed by the application module 30 is displayed on the screen of the display.
Various embodiments of the technology according to the present disclosure have been described above in detail with reference to FIGS. 1 to 19. According to the embodiments described above, a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears are acquired, and a color image is generated by filtering filter taps that include pixels of the acquired far-infrared image, near-infrared image, and visible light image. With this configuration, the respective properties of the far-infrared image, the near-infrared image, and the visible light image can be exploited to effectively improve the image quality of the generated color image. For example, the far-infrared image provides information on which regions in the image should be rendered more clearly, particularly in applications where the visibility of living bodies matters, and contributes to enhancing the colors of living-body regions. The near-infrared image contributes to sharpening the details of subjects in situations with little ambient light. The visible light image directly provides color information to the color image. Such integrated exploitation of the properties of each image type makes it possible to provide color images with high image quality that existing techniques cannot achieve.
(1)
An image processing apparatus including:
an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears; and
a generation unit that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
(2)
The image processing apparatus according to (1), wherein
the image processing apparatus further includes a determination unit that determines an imaging condition under which the far-infrared image, the near-infrared image, and the visible light image were captured, and
the generation unit executes the filtering with a filter configuration that differs depending on the imaging condition determined by the determination unit.
(3)
The image processing apparatus according to (2), wherein the generation unit executes the filtering with the filter configuration determined in advance through a learning process.
(4)
The image processing apparatus according to (3), wherein the generation unit executes the filtering using filter coefficients adjusted based on a difference between the imaging condition determined by the determination unit and an imaging condition at the time of learning.
(5)
The image processing apparatus according to any one of (2) to (4), wherein the imaging condition includes one or more of time band, weather, and ambient illuminance.
(6)
The image processing apparatus according to any one of (1) to (5), wherein
the image processing apparatus further includes a recognition unit that recognizes a living-body region in which a living body appears in at least one of the far-infrared image, the near-infrared image, and the visible light image, and
the generation unit executes the filtering for the living-body region with a filter configuration different from the filter configuration used for non-living-body regions.
(7)
The image processing apparatus according to (6), wherein the generation unit executes the filtering for the living-body region with a filter configuration that differs depending on a distance from a camera to the living body.
(8)
The image processing apparatus according to (6) or (7), wherein
the recognition unit generates living-body region information identifying the recognized living-body region, and
the generation unit outputs the living-body region information together with the color image to a downstream application.
(9)
The image processing apparatus according to any one of (1) to (7), wherein
the image processing apparatus is mounted on a vehicle, and
the generation unit outputs the color image to a driving assistance application.
(10)
An image processing method including:
acquiring a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears; and
generating a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
(11)
A program for causing a computer that controls an image processing apparatus to function as:
an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears; and
a generation unit that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
(12)
An image processing system including:
a camera module that images a subject in a far-infrared region, a near-infrared region, and a visible light region, and outputs a corresponding far-infrared image, near-infrared image, and visible light image; and
an image processing module that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
(13)
The image processing system according to (12), wherein
the image processing system is mounted on a vehicle, and
the image processing system further includes an application module that executes an application based on the color image generated by the image processing module.
10 in-vehicle system
20 image processing system
30 application module
40 peripheral module
100, 200 image processing apparatus (image processing module)
102 camera module
110 display
120 image acquisition unit
130, 230 data acquisition unit
140 determination unit
250 recognition unit
160, 260 generation unit
170, 270 database
Claims (13)
- An image processing apparatus comprising:
an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears; and
a generation unit that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- The image processing apparatus according to claim 1, further comprising a determination unit that determines an imaging condition under which the far-infrared image, the near-infrared image, and the visible light image were captured,
wherein the generation unit executes the filtering with a filter configuration that differs depending on the imaging condition determined by the determination unit.
- The image processing apparatus according to claim 2, wherein the generation unit executes the filtering with the filter configuration determined in advance through a learning process.
- The image processing apparatus according to claim 3, wherein the generation unit executes the filtering using filter coefficients adjusted based on a difference between the imaging condition determined by the determination unit and an imaging condition at the time of learning.
- The image processing apparatus according to claim 2, wherein the imaging condition includes one or more of time band, weather, and ambient illuminance.
- The image processing apparatus according to claim 1, further comprising a recognition unit that recognizes a living-body region in which a living body appears in at least one of the far-infrared image, the near-infrared image, and the visible light image,
wherein the generation unit executes the filtering for the living-body region with a filter configuration different from the filter configuration used for non-living-body regions.
- The image processing apparatus according to claim 6, wherein the generation unit executes the filtering for the living-body region with a filter configuration that differs depending on a distance from a camera to the living body.
- The image processing apparatus according to claim 6, wherein the recognition unit generates living-body region information identifying the recognized living-body region, and
the generation unit outputs the living-body region information together with the color image to a downstream application.
- The image processing apparatus according to claim 1, wherein the image processing apparatus is mounted on a vehicle, and
the generation unit outputs the color image to a driving assistance application.
- An image processing method comprising:
acquiring a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears; and
generating a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- A program for causing a computer that controls an image processing apparatus to function as:
an image acquisition unit that acquires a far-infrared image, a near-infrared image, and a visible light image in which a common subject appears; and
a generation unit that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- An image processing system comprising:
a camera module that images a subject in a far-infrared region, a near-infrared region, and a visible light region, and outputs a corresponding far-infrared image, near-infrared image, and visible light image; and
an image processing module that generates a color image by filtering filter taps including pixels of the far-infrared image, the near-infrared image, and the visible light image.
- The image processing system according to claim 12, wherein the image processing system is mounted on a vehicle, and further comprising an application module that executes an application based on the color image generated by the image processing module.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15877935.5A EP3229468B1 (en) | 2015-01-13 | 2015-11-02 | Image processing device, image processing method, program, and system |
JP2016569234A JP6729394B2 (ja) | 2015-01-13 | 2015-11-02 | 画像処理装置、画像処理方法、プログラム及びシステム |
US15/539,815 US10176543B2 (en) | 2015-01-13 | 2015-11-02 | Image processing based on imaging condition to obtain color image |
CN201580072616.2A CN107113408B (zh) | 2015-01-13 | 2015-11-02 | 图像处理装置、图像处理方法、程序和系统 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015004121 | 2015-01-13 | ||
JP2015-004121 | 2015-01-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016113983A1 true WO2016113983A1 (ja) | 2016-07-21 |
Family
ID=56405540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/080965 WO2016113983A1 (ja) | 2015-01-13 | 2015-11-02 | 画像処理装置、画像処理方法、プログラム及びシステム |
Country Status (5)
Country | Link |
---|---|
US (1) | US10176543B2 (ja) |
EP (1) | EP3229468B1 (ja) |
JP (1) | JP6729394B2 (ja) |
CN (1) | CN107113408B (ja) |
WO (1) | WO2016113983A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018163725A1 (ja) * | 2017-03-08 | 2018-09-13 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
JP2019004348A (ja) * | 2017-06-16 | 2019-01-10 | ディーピーティー株式会社 | 移動体用映像表示装置およびその方法 |
JP2020028029A (ja) * | 2018-08-10 | 2020-02-20 | 株式会社リコー | 画像処理装置、画像処理システム、プログラムおよび画像処理方法 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102015223175A1 (de) * | 2015-11-24 | 2017-05-24 | Conti Temic Microelectronic Gmbh | Fahrerassistenzsystem mit adaptiver Umgebungsbilddatenverarbeitung |
JP6365598B2 (ja) * | 2016-06-17 | 2018-08-01 | トヨタ自動車株式会社 | 車両ヒータの制御装置 |
WO2018150685A1 (ja) * | 2017-02-20 | 2018-08-23 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
JP7020471B2 (ja) * | 2017-02-24 | 2022-02-16 | ソニーグループ株式会社 | 画像処理装置及び撮像装置 |
JP7075201B2 (ja) * | 2017-12-15 | 2022-05-25 | 東芝ライフスタイル株式会社 | 電気掃除機 |
US10785422B2 (en) * | 2018-05-29 | 2020-09-22 | Microsoft Technology Licensing, Llc | Face recognition using depth and multi-spectral camera |
JP7171254B2 (ja) * | 2018-06-13 | 2022-11-15 | キヤノン株式会社 | 画像処理装置、撮像装置、および画像処理方法 |
CN111488756B (zh) * | 2019-01-25 | 2023-10-03 | 杭州海康威视数字技术股份有限公司 | 基于面部识别的活体检测的方法、电子设备和存储介质 |
CN111429722A (zh) * | 2020-03-30 | 2020-07-17 | 淮阴工学院 | 基于实时路况的辅助车速报警控制系统 |
CN111460186B (zh) * | 2020-03-31 | 2022-04-08 | 河北工业大学 | 包含车辆可见光图像和红外图像数据库的建立方法 |
CN111507930B (zh) * | 2020-06-18 | 2023-10-10 | 杭州海康威视数字技术股份有限公司 | 图像融合方法、装置、存储介质和计算机设备 |
CN117146780B (zh) * | 2023-10-31 | 2024-01-09 | 季华实验室 | 成像方法、终端设备及介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002203240A (ja) * | 2000-10-31 | 2002-07-19 | Matsushita Electric Ind Co Ltd | 物体認識装置、物体を認識する方法、プログラムおよび記録媒体 |
JP2008289001A (ja) * | 2007-05-18 | 2008-11-27 | Sony Corp | 画像入力処理装置、および、その方法 |
JP2011055133A (ja) * | 2009-08-31 | 2011-03-17 | Fujifilm Corp | 画像処理システム、画像処理方法およびプログラム |
JP2012094946A (ja) * | 2010-10-25 | 2012-05-17 | Hitachi Ltd | 撮像装置 |
JP2013065280A (ja) * | 2011-09-01 | 2013-04-11 | Canon Inc | 画像処理方法、画像処理装置およびプログラム |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000115759A (ja) | 1998-10-05 | 2000-04-21 | Sony Corp | 撮像表示装置 |
EP1202214A3 (en) | 2000-10-31 | 2005-02-23 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for object recognition |
US7248297B2 (en) * | 2001-11-30 | 2007-07-24 | The Board Of Trustees Of The Leland Stanford Junior University | Integrated color pixel (ICP) |
US7929727B2 (en) * | 2002-07-26 | 2011-04-19 | Tenebraex Corporation | Methods for visually separating an object from its background, methods for detecting a camouflaged object against its background and detection apparatus embodying such methods |
US7570286B2 (en) * | 2005-05-27 | 2009-08-04 | Honda Motor Co., Ltd. | System and method for creating composite images |
CN2842571Y (zh) * | 2005-10-28 | 2006-11-29 | 沈阳理工大学 | 一种抗天气干扰图像采集装置 |
JP2007158820A (ja) | 2005-12-06 | 2007-06-21 | Fujitsu Ten Ltd | 撮影制御装置 |
CN100550996C (zh) * | 2006-07-31 | 2009-10-14 | 株式会社理光 | 图像处理装置,成像装置以及图像处理方法 |
JP5110356B2 (ja) | 2007-07-10 | 2012-12-26 | オムロン株式会社 | 検出装置および方法、並びに、プログラム |
CN101578885B (zh) * | 2007-08-07 | 2012-09-12 | 松下电器产业株式会社 | 摄像处理装置以及摄像装置、图像处理方法 |
KR101441589B1 (ko) * | 2008-10-07 | 2014-09-29 | 삼성전자 주식회사 | 가시광선 이미지와 원적외선 이미지를 광학적으로 융합하는장치 |
US9451183B2 (en) * | 2009-03-02 | 2016-09-20 | Flir Systems, Inc. | Time spaced infrared image enhancement |
JP5578965B2 (ja) * | 2010-07-02 | 2014-08-27 | 富士フイルム株式会社 | オブジェクト推定装置および方法ならびにプログラム |
JP5519083B2 (ja) * | 2011-09-29 | 2014-06-11 | 富士フイルム株式会社 | 画像処理装置、方法、プログラムおよび撮像装置 |
JP5409829B2 (ja) | 2012-02-17 | 2014-02-05 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法、および、プログラム |
CN102647449B (zh) | 2012-03-20 | 2016-01-27 | 西安联客信息技术有限公司 | 基于云服务的智能摄影方法、装置及移动终端 |
US9304301B2 (en) * | 2012-12-26 | 2016-04-05 | GM Global Technology Operations LLC | Camera hardware design for dynamic rearview mirror |
JP6015946B2 (ja) | 2013-03-29 | 2016-10-26 | マツダ株式会社 | 車両用撮像装置 |
JP2014241584A (ja) * | 2013-05-14 | 2014-12-25 | パナソニックIpマネジメント株式会社 | 画像処理方法、及び画像処理システム |
- 2015
- 2015-11-02 US US15/539,815 patent/US10176543B2/en active Active
- 2015-11-02 EP EP15877935.5A patent/EP3229468B1/en active Active
- 2015-11-02 WO PCT/JP2015/080965 patent/WO2016113983A1/ja active Application Filing
- 2015-11-02 CN CN201580072616.2A patent/CN107113408B/zh not_active Expired - Fee Related
- 2015-11-02 JP JP2016569234A patent/JP6729394B2/ja not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002203240A (ja) * | 2000-10-31 | 2002-07-19 | Matsushita Electric Ind Co Ltd | 物体認識装置、物体を認識する方法、プログラムおよび記録媒体 |
JP2008289001A (ja) * | 2007-05-18 | 2008-11-27 | Sony Corp | 画像入力処理装置、および、その方法 |
JP2011055133A (ja) * | 2009-08-31 | 2011-03-17 | Fujifilm Corp | 画像処理システム、画像処理方法およびプログラム |
JP2012094946A (ja) * | 2010-10-25 | 2012-05-17 | Hitachi Ltd | 撮像装置 |
JP2013065280A (ja) * | 2011-09-01 | 2013-04-11 | Canon Inc | 画像処理方法、画像処理装置およびプログラム |
Non-Patent Citations (2)
Title |
---|
See also references of EP3229468A4 * |
YUJI YAMAUCHI ET AL.: "Human Detection Based on Statistical Learning from Image", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J96-D, no. 9, 1 September 2013 (2013-09-01), pages 2017 - 2040, XP009503833, ISSN: 1880-4535 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018163725A1 (ja) * | 2017-03-08 | 2018-09-13 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
JPWO2018163725A1 (ja) * | 2017-03-08 | 2020-01-09 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
US10880498B2 (en) | 2017-03-08 | 2020-12-29 | Sony Corporation | Image processing apparatus and image processing method to improve quality of a low-quality image |
JP7014218B2 (ja) | 2017-03-08 | 2022-02-01 | ソニーグループ株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
JP2019004348A (ja) * | 2017-06-16 | 2019-01-10 | ディーピーティー株式会社 | 移動体用映像表示装置およびその方法 |
JP2020028029A (ja) * | 2018-08-10 | 2020-02-20 | 株式会社リコー | 画像処理装置、画像処理システム、プログラムおよび画像処理方法 |
JP7155737B2 (ja) | 2018-08-10 | 2022-10-19 | 株式会社リコー | 画像処理装置、画像処理システム、プログラムおよび画像処理方法 |
Also Published As
Publication number | Publication date |
---|---|
JP6729394B2 (ja) | 2020-07-22 |
EP3229468B1 (en) | 2020-03-18 |
EP3229468A4 (en) | 2018-06-13 |
US20170372444A1 (en) | 2017-12-28 |
CN107113408B (zh) | 2021-05-11 |
EP3229468A1 (en) | 2017-10-11 |
US10176543B2 (en) | 2019-01-08 |
CN107113408A (zh) | 2017-08-29 |
JPWO2016113983A1 (ja) | 2017-10-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15877935 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016569234 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2015877935 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15539815 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |