WO2018220993A1 - Signal processing device, signal processing method and computer program - Google Patents

Info

Publication number
WO2018220993A1
Authority: WO (WIPO, PCT)
Prior art keywords: synthesis, motion, images, signal processing, image
Application number: PCT/JP2018/014210
Other languages: French (fr), Japanese (ja)
Inventor: 毅 都築
Original Assignee: ソニーセミコンダクタソリューションズ株式会社 (Sony Semiconductor Solutions Corporation)
Application filed by ソニーセミコンダクタソリューションズ株式会社 (Sony Semiconductor Solutions Corporation)
Publication of WO2018220993A1

Classifications

    • G03B 15/00: Special procedures for taking photographs; apparatus therefor
    • G03B 17/00: Details of cameras or camera bodies; accessories therefor
    • G03B 7/28: Circuitry to measure or to take account of the object contrast
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/254: Analysis of motion involving subtraction of images
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to a signal processing device, a signal processing method, and a computer program.
  • JP 2013-152334 A; JP 2015-186062 A
  • In the existing technology, either the processing time for obtaining the HDR image is shortened but the image quality deteriorates, or the deterioration of the HDR image quality is suppressed but it is difficult to shorten the processing time. Further, the existing technology does not flexibly adjust the trade-off between image quality and processing time when obtaining an HDR image.
  • It is therefore desirable that the dynamic range performance not be degraded, that the processing time be shortened, and that the trade-off between image quality and processing time be adjustable flexibly.
  • a signal processing apparatus, a signal processing method, and a computer program are proposed.
  • According to the present disclosure, there is provided a signal processing device including: a composition processing unit that performs composition processing at least N-1 times on images of N frames (N being an integer of 3 or more) having different exposure times; and a motion adaptation processing unit that performs motion adaptation processing at the time of a composition that uses two of the images, wherein the number of motion adaptation processes performed by the motion adaptation processing unit when the composition processing unit performs the N-1 composition processes is set to N-2 or less.
  • According to the present disclosure, there is also provided a signal processing method in which a processor performs composition processing at least N-1 times on images of N frames (N being an integer of 3 or more) having different exposure times, performs motion adaptation processing at the time of a composition that uses two of the images, and sets the number of motion adaptation processes performed during the N-1 composition processes to N-2 or less.
  • According to the present disclosure, there is further provided a computer program that causes a computer to perform composition processing at least N-1 times on images of N frames (N being an integer of 3 or more) having different exposure times, to perform motion adaptation processing at the time of a composition that uses two of the images, and to set the number of motion adaptation processes performed during the N-1 composition processes to N-2 or less.
  • FIG. 6 and FIG. 7 are explanatory diagrams illustrating specific examples of the operation of the signal processing device according to the embodiment.
  • FIG. 8 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
  • A further figure shows an example of the installation position of the imaging unit 12031.
  • HDR (High Dynamic Range) is an imaging technique that combines image signals captured with a plurality of different exposures. When the size of the image sensor is small, the dynamic range is small, and so-called whiteout (blown highlights) or blackout (crushed shadows) tends to occur in captured images. An image in which whiteout or blackout has occurred differs from what is seen with the naked eye. HDR technology, which expands the dynamic range, is therefore useful for small image sensors.
  • the HDR image is obtained by combining a plurality of images with different exposures.
  • A problem arises when a moving object is included in the subject. Blur of the moving object occurs in images with long exposure times (long accumulation images), and with frame-sequential composition a displaced or doubled image of the moving object can appear. To reduce these artifacts, motion adaptation processing is performed: a motion region is detected and the blend ratio of the image with the short exposure time (short accumulation image) is increased in that region, or a motion vector is detected and the object is aligned.
  • At present, smartphones are the most common platform on which such composition processing is performed; in that case, the processing is performed by software on an AP (application processor).
  • However, the computing performance of smartphone APs is not yet sufficient, and the computing performance of APs mounted in mid-range and lower models is relatively low. A long processing time is therefore required: after shooting with the HDR function enabled, the user must wait a long time until the HDR composite image is output.
  • When the computing performance is insufficient, it is not easy to obtain a wide dynamic range in a reasonable processing time while maintaining consistent image quality for moving subjects.
  • Thus, in the existing technology, either the processing time for obtaining the HDR image is shortened at the cost of degraded image quality, or image quality degradation is suppressed but the processing time is difficult to shorten. Furthermore, the existing technology does not flexibly adjust the trade-off between image quality and processing time when obtaining an HDR image.
  • In view of the above circumstances, the present inventor devised a technology that, when obtaining an HDR image, can shorten the processing time without degrading dynamic range performance and can flexibly adjust the trade-off between image quality and processing time.
  • FIG. 1 is an explanatory diagram showing a state of generating an HDR image that is assumed by the signal processing apparatus according to the present embodiment.
  • FIG. 1 shows a state where an HDR image is generated using four image frames.
  • The four image frames are designated, in order of decreasing exposure time, as long accumulation, middle accumulation, short accumulation, and ultrashort accumulation. For example, the long accumulation image has an exposure time of 1/30 second, the middle accumulation image 1/120 second, the short accumulation image 1/480 second, and the ultrashort accumulation image 1/1920 second.
  • FIG. 1 shows a state in which an HDR image is generated using four images having different exposure times.
  • However, the number of images on which the HDR image is based is not limited to four; any two or more images suffice.
  • FIG. 1 shows composition proceeding from the longest exposure time, but when generating the HDR image, composition may instead proceed from the shortest exposure time.
  • After composition, gradation compression processing is performed so that the result fits within the normal dynamic range. After the gradation compression process, the same camera signal processing system that is used when HDR is not performed can be applied.
  • Camera signal processing includes white balance, demosaic, color matrix, edge enhancement, noise reduction, and the like.
  • FIG. 2 is an explanatory diagram showing an example of an image sensor in which differently exposed pixels are two-dimensionally arranged.
  • the image sensor 10 shown in FIG. 2 has a configuration in which long accumulation pixels 11, middle accumulation pixels 12, short accumulation pixels 13, and ultrashort accumulation pixels 14 are arranged in a matrix.
  • Instead of changing the shutter time, a method of changing the sensitivity may be used. If the shutter time is shortened, flicker countermeasures become necessary; by changing the sensitivity while leaving the shutter time unchanged, the flicker countermeasures can be omitted.
  • FIG. 3 is an explanatory diagram illustrating a functional configuration example of the signal processing device 100 according to the embodiment of the present disclosure.
  • FIG. 3 shows a functional configuration example of the signal processing device 100 that executes processing for generating an HDR image.
  • a functional configuration example of the signal processing device 100 according to the embodiment of the present disclosure will be described with reference to FIG.
  • As shown in FIG. 3, the signal processing apparatus 100 according to the embodiment of the present disclosure includes a saturation detection unit 102, a motion detection unit 104, a black crush detection unit 106, a blend rate determination unit 108, and a synthesis unit 110.
  • the saturation detection unit 102 detects the degree of saturation of an image having a longer exposure time among two images having different exposure times input to the signal processing apparatus 100.
  • The degree of saturation of an image is defined as a multi-valued quantity of the pixel value: it is 0 for pixel values at or below a certain threshold, and takes graded values for pixel values exceeding the threshold.
  • the saturation detection unit 102 sends information on the detected degree of saturation to the blend rate determination unit 108.
  • Information on the degree of saturation is used by the blend rate determination unit 108 to determine the blend rate (synthesis ratio) of two images.
  • the motion detection unit 104 detects a motion between images using two images input to the signal processing device 100.
  • For example, the motion detection unit 104 first normalizes brightness by applying an exposure ratio gain to the short accumulation frame, and then takes the difference between the pixel values of the long accumulation frame and the short accumulation frame. If the difference value is greater than a predetermined threshold, the motion detection unit 104 determines that there is motion between the images. As the motion detection process, the motion detection unit 104 may also calculate average values over a certain range and take the difference of those averages.
  • To detect motion more accurately, the motion detection unit 104 may detect changes by focusing on frequency or gradient. The motion detection unit 104 may also detect corresponding changes in position across a plurality of frames, or may detect motion by detecting a motion vector and using its value.
  • the motion detection unit 104 sends information on the area detected as motion to the blend rate determination unit 108.
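A minimal sketch of the difference-based detection described above, assuming frames given as flat lists of pixel values and an illustrative threshold; the function name and parameters are hypothetical, not from the patent:

```python
def detect_motion(long_frame, short_frame, exposure_ratio, diff_threshold):
    """Per-pixel motion mask (True = motion) for two equal-size frames
    given as flat lists of pixel values."""
    motion_mask = []
    for long_px, short_px in zip(long_frame, short_frame):
        # normalize brightness of the short accumulation pixel first
        normalized_short = short_px * exposure_ratio
        motion_mask.append(abs(long_px - normalized_short) > diff_threshold)
    return motion_mask

# A static pixel matches after normalization; a moving one does not.
long_frame = [200, 200, 40]
short_frame = [50, 10, 10]  # captured with 1/4 of the exposure
mask = detect_motion(long_frame, short_frame, exposure_ratio=4, diff_threshold=16)
```

An averaged-window variant would replace the per-pixel values with local means before taking the difference, as the text notes.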
  • the blend rate determination unit 108 increases the blend rate of an image with a shorter exposure time for the purpose of reducing blur and misalignment in an area detected as motion.
  • the black crushing detection unit 106 detects the degree of black crushing of an image having a shorter exposure time out of two images having different exposure times.
  • The degree of black crushing of an image is defined as a multi-valued quantity of the pixel value: it is 0 for pixel values at or above a certain threshold, and takes graded values for pixel values below the threshold.
  • the black crushing detection unit 106 sends information on the detected degree of black crushing to the blend rate determination unit 108.
  • Information on the degree of black crushing is used in the blend rate determination unit 108 to determine the blend rate of two images.
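The two per-pixel degree signals described above can be sketched as follows. The patent text only specifies that each degree is 0 on one side of a threshold and multi-valued on the other, so the linear ramps, the thresholds, and the 8-bit full scale below are illustrative assumptions:

```python
def saturation_degree(pixel, threshold=224, full_scale=255):
    """0 at or below the threshold, ramping to 1.0 toward full scale."""
    if pixel <= threshold:
        return 0.0
    return (pixel - threshold) / (full_scale - threshold)

def black_crush_degree(pixel, threshold=32):
    """0 at or above the threshold, ramping to 1.0 toward zero."""
    if pixel >= threshold:
        return 0.0
    return (threshold - pixel) / threshold

# A blown-out pixel in the longer exposure has a high saturation degree;
# a near-black pixel in the shorter exposure has a high black-crush degree.
```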
  • The blend rate determination unit 108 determines the blend rate of the two images input to the signal processing apparatus 100. In doing so, it uses the degree of saturation detected by the saturation detection unit 102, the degree of black crushing detected by the black crush detection unit 106, and the presence or absence of motion detected by the motion detection unit 104. For example, if the degree of saturation detected by the saturation detection unit 102 is equal to or greater than a predetermined threshold, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time.
  • Conversely, if the degree of black crushing is large, the blend rate determination unit 108 increases the blend rate of the image with the longer exposure time. Further, in a region detected as motion by the motion detection unit 104, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time in order to reduce blur and misalignment.
  • the blend rate determination unit 108 determines a final blend rate from these elements, and sends information on the determined blend rate to the synthesis unit 110.
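One way the three detector outputs might be folded into a final blend rate (here, the share given to the shorter-exposure image) is sketched below. The directions of the adjustments follow the text; the base rate, step size, and combination rule are assumptions:

```python
def determine_blend_rate(saturation, black_crush, is_motion,
                         base_rate=0.5, step=0.4):
    """Blend rate of the SHORTER-exposure image, clamped to [0, 1]."""
    rate = base_rate
    rate += step * saturation   # saturated long frame: favor the short one
    rate -= step * black_crush  # crushed short frame: favor the long one
    if is_motion:
        rate = max(rate, 0.9)   # motion region: strongly favor the short one
    return min(max(rate, 0.0), 1.0)
```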
  • the synthesizing unit 110 performs a process of synthesizing two images input to the signal processing device 100 based on the blend rate determined by the blend rate determining unit 108.
  • the image synthesized by the synthesis unit 110 is further synthesized with another image that is an HDR image generation target.
  • the signal processing apparatus 100 generates an HDR image by performing a series of processes on all HDR image generation targets.
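The series of pairwise compositions can be sketched as a fold over the frames: the running composite is blended with the next frame using the rate the blend rate determination unit would supply. The linear per-pixel mix and all names are illustrative:

```python
def blend(img_a, img_b, rate_b):
    """Linear per-pixel mix: (1 - rate_b) * a + rate_b * b."""
    return [(1 - rate_b) * a + rate_b * b for a, b in zip(img_a, img_b)]

def compose_hdr(frames, rates):
    """Fold N frames into one image with N - 1 pairwise compositions."""
    composite = frames[0]
    for frame, rate in zip(frames[1:], rates):
        composite = blend(composite, frame, rate)
    return composite

frames = [[100, 100], [120, 80], [90, 110]]  # 3 frames -> 2 compositions
hdr = compose_hdr(frames, rates=[0.5, 0.5])
```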
  • The amount of calculation for motion adaptation processing, which includes detecting subject motion between images, is relatively large. If motion adaptation processing is not performed at all, the amount of calculation is small and the composition processing completes in a relatively short time; in exchange, blurring of the moving object becomes noticeable and artifacts with multiple contours appear.
  • an object is to flexibly control the relationship between the image quality of the HDR image and the processing time required for generating the HDR image.
  • FIG. 4 is an explanatory diagram illustrating HDR image generation processing by the signal processing apparatus 100 according to the present embodiment.
  • FIG. 4 shows an example in which an HDR image is generated from four images having different exposure times.
  • In FIG. 4, the shutter time of each frame is 1/30, 1/120, 1/480, and 1/1920 second from the long accumulation side; the exposure ratio between adjacent frames is 4 times, and an overall exposure ratio of 64 times is shown. It is desirable that the exposure ratio between adjacent frames be at most about 16 times.
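A quick check of the quoted shutter times confirms the 4x step between adjacent frames and the 64x overall exposure ratio:

```python
shutters = [1 / 30, 1 / 120, 1 / 480, 1 / 1920]  # long -> ultrashort side
step_ratios = [shutters[i] / shutters[i + 1] for i in range(len(shutters) - 1)]
overall_ratio = shutters[0] / shutters[-1]
# step_ratios is [4.0, 4.0, 4.0] and overall_ratio is 64.0
```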
  • the shutter time of each frame can change according to the subject and the environment (brightness, etc.) at the time of imaging.
  • If motion adaptation processing were performed at every composition of the four frames, it would be executed a total of three times. In the example of FIG. 4, however, the signal processing apparatus 100 performs the motion adaptation process only once, on the long accumulation side.
  • By performing the motion adaptation process only once on the long accumulation side, the amount of calculation required for motion adaptation processing can be reduced while avoiding serious adverse effects such as fatal blur of the moving object or a significant S/N reduction.
  • In FIG. 4, the motion adaptation process is performed only once, but the number of motion adaptation processes is not limited to this example. A feature of the signal processing apparatus 100 according to the present embodiment is that the number of motion adaptation processes is kept below the number that would nominally be required.
  • The amount of blur contained in each frame is determined by many factors. If the moving speed of the moving object is high, the blur amount naturally increases. If the distance between the camera and the moving subject is long, or the angle of view of the lens is wide, the apparent moving speed is relatively slow and the blur amount is small. Furthermore, the allowable limit for blur is subjective and varies depending on who evaluates the image (for example, the photographer).
  • The actual shutter time of each frame depends on various factors: whether the scene to be photographed is bright or dark, the exposure ratio set between frames, how wide the overall exposure ratio (that is, the dynamic range) is, and the method and policy of AE (auto exposure).
  • Since the S/N of the moving object decreases as the shutter time becomes shorter, there is a trade-off between reducing blur and securing the S/N of the moving object.
  • The balance against securing the moving object's S/N is also subjective and again varies by evaluator. Considering the above factors, the appropriate number of motion adaptation processes is not easily determined, and it is better to control it flexibly according to the situation.
  • The signal processing apparatus 100 determines whether the shutter time of each image used to generate the HDR image is equal to or greater than a predetermined threshold. If it is, blur of the moving object may be conspicuous in the image, so the signal processing apparatus 100 performs motion adaptation processing as part of the image composition processing.
  • Here, the composition process refers to either the process of determining the composition ratio of two images in the blend rate determination unit 108, or the process of compositing the images based on that composition ratio in the synthesis unit 110.
  • FIG. 5 is a flowchart illustrating an operation example of the signal processing device 100 according to the embodiment of the present disclosure.
  • FIG. 5 shows an operation example of the signal processing apparatus 100 when performing HDR image generation processing.
  • Here, N is the number of frames to be combined, T_N is the shutter time of the Nth frame, and TH_Shut is a shutter time threshold defined in advance in consideration of blur and S/N. Smaller N corresponds to the long accumulation side, and larger N to the short accumulation side.
  • The signal processing apparatus 100 first initializes the value of N to 1 (step S101), and then determines whether T_N is equal to or greater than TH_Shut (step S102).
  • the determination in step S102 is executed by, for example, the motion detection unit 104.
  • If T_N is equal to or greater than TH_Shut in step S102 (step S102, Yes), the signal processing apparatus 100 performs the motion adaptation process using the two input images (the Nth and N+1th frames) (step S103).
  • the process of step S103 is performed by, for example, the motion detection unit 104.
  • If T_N is less than TH_Shut in step S102 (step S102, No), the signal processing apparatus 100 skips the motion adaptation process of step S103.
  • the signal processing apparatus 100 performs a process of combining the two input images (step S104).
  • the compositing unit 110 executes the image compositing process.
  • the signal processing apparatus 100 uses information on the degree of saturation, information on the degree of black crushing, information on movement between the two images, and the like when combining two images.
  • Next, the signal processing apparatus 100 determines whether the value of N is equal to the number of frames to be combined minus 1 (step S105). If it is (step S105, Yes), processing has completed for all frames to be combined, and the signal processing apparatus 100 ends the series of processes. Otherwise (step S105, No), the signal processing apparatus 100 increments N by 1 (step S106) and returns to step S102.
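The loop of FIG. 5 (steps S101 to S106) can be sketched as follows, with the motion adaptation and composition steps stubbed out; the sketch returns which of the N-1 compositions would run motion adaptation:

```python
def hdr_generate(shutter_times, th_shut):
    """Return, per composition, whether motion adaptation ran (per FIG. 5)."""
    motion_runs = []
    n = 0                                 # step S101: N = 1 (0-based here)
    while n < len(shutter_times) - 1:     # step S105: stop after N - 1 runs
        run_adaptation = shutter_times[n] >= th_shut  # step S102
        # step S103 (motion adaptation on frames n, n+1) if run_adaptation;
        # step S104 (composition of frames n, n+1) always runs; both stubbed
        motion_runs.append(run_adaptation)
        n += 1                            # step S106
    return motion_runs

# Shutters from FIG. 6 with TH_Shut = 1/60 s: only the first composition
# (long x middle accumulation) runs motion adaptation.
runs = hdr_generate([1 / 30, 1 / 120, 1 / 480, 1 / 1920], th_shut=1 / 60)
```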
  • FIG. 6 is an explanatory diagram illustrating a specific example of the operation of the signal processing device 100 according to the embodiment of the present disclosure. In FIG. 6, it is assumed that TH_Shut is 1/60 second.
  • Since the shutter time of the long accumulation frame (1/30 second) is equal to or greater than TH_Shut, the signal processing device 100 executes the motion adaptation process between the long accumulation frame and the middle accumulation frame.
  • Since the shutter time of the middle accumulation frame (1/120 second) is shorter than the threshold TH_Shut, the blur risk of the middle accumulation frame is small. Considering also the risk of insufficient S/N, the signal processing device 100 does not execute the motion adaptation process between the middle accumulation frame and the short accumulation frame.
  • Likewise, the signal processing apparatus 100 does not execute the motion adaptation process between the short accumulation frame and the ultrashort accumulation frame, whose shutter times are even shorter.
  • the signal processing apparatus 100 according to the present embodiment can reduce the number of motion adaptation processes while considering blur and S / N.
  • Whereas the motion adaptation process would be executed three times if the threshold TH_Shut were not taken into consideration, here it is reduced to once.
  • In this way, the signal processing apparatus 100 according to the present embodiment defines the shutter time threshold TH_Shut as an element reflecting the image quality of the composite HDR image, such as blur and S/N, and adjusts the number of executions of the motion adaptation process from its relation to the finally set shutter times.
  • the signal processing apparatus 100 may execute the HDR synthesis process after obtaining the number of executions of the motion adaptation process in advance.
  • Formulas (1) and (2) show how the number of executions is calculated. Let T_SUM be the processing time of the entire HDR process. In formula (1), T_SUM is obtained by multiplying the sum of the time T_HDR required for one HDR composition and the time T_MOVE required for one motion adaptation process by the number of composited frames N. Formula (2) then gives the number M of motion adaptation processes when it is desired to keep the total processing time T_SUM below a certain value.
  • By performing the motion adaptation process the number of times M obtained by formula (2), the signal processing apparatus 100 according to the present embodiment can improve the image quality of the moving object within a range that does not exceed a given processing time.
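Since formulas (1) and (2) themselves are not reproduced on this page, the sketch below is an assumed reconstruction consistent with the description: total time grows linearly with the number of compositions and with the number of motion adaptation passes M, and M is the largest count that fits a given budget:

```python
import math

def max_motion_passes(n_compositions, t_hdr, t_move, t_budget):
    """Largest M with n_compositions * t_hdr + M * t_move <= t_budget,
    clamped to [0, n_compositions]."""
    if t_budget < n_compositions * t_hdr:
        return 0  # even composition without motion adaptation overruns
    m = math.floor((t_budget - n_compositions * t_hdr) / t_move)
    return min(m, n_compositions)

# 3 compositions at 50 ms each, motion adaptation at 80 ms per pass,
# 250 ms budget: one motion adaptation pass fits.
m = max_motion_passes(3, t_hdr=0.05, t_move=0.08, t_budget=0.25)
```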
  • The signal processing apparatus 100 can also combine the method of deciding whether to perform the motion adaptation process based on the shutter time threshold with the method of obtaining the number of executions of the motion adaptation process in advance. In this case, the signal processing apparatus 100 may use the smaller of the upper limit on the number of motion adaptation processes considering blur and S/N and the number M obtained as described above.
  • the signal processing apparatus 100 may determine whether or not the motion adaptation process can be performed based on information on the amount of motion between images.
  • FIG. 7 is an explanatory diagram illustrating a specific example of the operation of the signal processing device 100 according to the embodiment of the present disclosure.
  • As shown in FIG. 7, the signal processing apparatus 100 first performs motion detection between the long accumulation frame and the middle accumulation frame and detects the magnitude of motion Motion12 between the images. If the detected motion is large, the moving speed of the moving object is fast relative to the shutter time, meaning there is a high risk of blur and double images. It may be assumed that the magnitude of the motion is basically proportional to the shutter time. Under this assumption, the signal processing apparatus 100 can also calculate the motion amounts Motion23 and Motion34 on the short accumulation side from Motion12.
  • The signal processing device 100 then performs the motion adaptation process only between frames whose motion amount is equal to or greater than a threshold TH_Motion.
  • the motion adaptation process is executed between the long accumulation frame and the intermediate accumulation frame, and the motion adaptation process is not performed between the other frames.
  • When whether to perform the motion adaptation process is decided from the magnitude of the motion amount, the signal processing apparatus 100 does not perform it if the motion amount between frames is zero or sufficiently small, so there is no image quality degradation, and the motion adaptation processing for subsequent frames can be turned off automatically. That is, the signal processing apparatus 100 according to the embodiment of the present disclosure has the effect that the processing time can be shortened according to the amount of motion in the shooting scene.
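The motion-amount based gating can be sketched as follows. Measuring Motion12 once and extrapolating Motion23 and Motion34 by the shutter-time ratio follows the proportionality assumption in the text; the exact scaling rule and the threshold value are illustrative:

```python
def motion_gates(shutters, motion12, th_motion):
    """Per composition pair, decide whether motion adaptation should run."""
    gates = []
    for n in range(len(shutters) - 1):
        # extrapolate Motion(n, n+1) from Motion12 by shutter-time ratio
        motion_n = motion12 * shutters[n] / shutters[0]
        gates.append(motion_n >= th_motion)
    return gates

# Shutters 1/30 .. 1/1920 s, measured Motion12 = 8 px, TH_Motion = 4 px:
# only the long x middle accumulation pair gets motion adaptation.
gates = motion_gates([1 / 30, 1 / 120, 1 / 480, 1 / 1920],
                     motion12=8.0, th_motion=4.0)
```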
  • The signal processing apparatus 100 can also use together the method of deciding whether to execute the motion adaptation process based on the shutter time threshold and the method of deciding based on information on the amount of motion between images. In this case, the signal processing apparatus 100 may use the smaller of the upper limit on the number of motion adaptation processes considering blur and S/N and the upper limit considering information on the amount of motion between images.
  • Similarly, the signal processing apparatus 100 can use together the method of deciding whether to execute the motion adaptation process based on information on the amount of motion between images and the method of obtaining the number of executions of the motion adaptation process in advance. In this case, the signal processing apparatus 100 may use the smaller of the upper limit on the number of motion adaptation processes considering information on the amount of motion between images and the number M obtained as described above.
  • The signal processing device 100 can also use together the method of deciding whether to execute the motion adaptation process based on the shutter time threshold, the method of deciding based on information on the amount of motion between images, and the method of obtaining the number of executions in advance.
  • In this case, the signal processing apparatus 100 may use the smallest of the upper limit on the number of motion adaptation processes considering blur and S/N, the upper limit considering information on the amount of motion between images, and the number M obtained as described above.
  • Instead of the entire image, the signal processing apparatus 100 may decide whether to execute the motion adaptation process based on the motion amount in a predetermined partial range, for example the central part of the image.
  • the technology according to the present disclosure can be applied to various products.
  • The technology according to the present disclosure may be realized as a device mounted on any type of mobile body, such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, or robot.
  • FIG. 8 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050.
  • As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
  • Radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • The body system control unit 12020 accepts the input of these radio waves or signals, and controls the door lock device, power window device, lamps, and the like of the vehicle.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted.
  • the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image.
  • The vehicle exterior information detection unit 12030 may perform, based on the received image, object detection processing for detecting a person, a car, an obstacle, a sign, a character on a road surface, or the like, or distance detection processing.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light.
  • the imaging unit 12031 can output an electrical signal as an image, or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
  • the vehicle interior information detection unit 12040 detects vehicle interior information.
  • a driver state detection unit 12041 that detects a driver's state is connected to the in-vehicle information detection unit 12040.
  • The driver state detection unit 12041 includes, for example, a camera that images the driver. Based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
  • The microcomputer 12051 can calculate a control target value of the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation of the vehicle, following traveling based on the inter-vehicle distance, vehicle-speed-maintaining traveling, collision warning of the vehicle, lane departure warning of the vehicle, and the like.
  • The microcomputer 12051 can also perform cooperative control for the purpose of automatic driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle outside information detection unit 12030.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as switching from high beam to low beam, by controlling the headlamp according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.
  • the sound image output unit 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or audibly notifying information to a vehicle occupant or the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
  • FIG. 9 is a diagram illustrating an example of an installation position of the imaging unit 12031.
  • the vehicle 12100 includes imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper part of a windshield in the vehicle interior of the vehicle 12100.
  • the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirror mainly acquire an image of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
  • the forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 9 shows an example of the shooting range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 as viewed from above is obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
  • Based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object that travels at a predetermined speed (for example, 0 km/h or higher) in substantially the same direction as the vehicle 12100.
  • The microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like.
  • In this way, cooperative control for the purpose of automatic driving, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
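The preceding-vehicle selection described above can be illustrated with a short sketch. This is not the patent's implementation: the object representation, field names, and selection criteria are simplified assumptions made for the example (nearest object moving in roughly the same direction at a non-negative speed).

```python
# Illustrative sketch of preceding-vehicle selection: among detected
# 3D objects, pick the nearest one that moves in substantially the same
# direction as the ego vehicle at a predetermined speed or higher.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    distance_m: float        # distance from the ego vehicle
    speed_kmh: float         # object's ground speed
    same_direction: bool     # heading roughly matches the ego vehicle

def select_preceding_vehicle(objects, min_speed_kmh=0.0):
    """Return the nearest object satisfying the direction/speed criteria, or None."""
    candidates = [o for o in objects
                  if o.same_direction and o.speed_kmh >= min_speed_kmh]
    return min(candidates, key=lambda o: o.distance_m, default=None)

objs = [DetectedObject(35.0, 40.0, True),
        DetectedObject(20.0, 50.0, True),
        DetectedObject(10.0, 30.0, False)]   # oncoming: excluded
lead = select_preceding_vehicle(objs)
print(lead.distance_m)  # 20.0
```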
  • Based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles.
  • For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
  • Then, the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle, and in a situation where the collision risk is equal to or higher than a set value and there is a possibility of collision, the microcomputer 12051 can perform driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • The microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured images of the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the captured images of the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not the object is a pedestrian.
  • When the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian.
  • The audio image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating a pedestrian is displayed at a desired position.
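The two-step recognition procedure mentioned above (feature-point extraction followed by pattern matching on the contour features) can be sketched in toy form. Everything here is a deliberately simplified assumption for illustration; real systems use trained detectors, and the feature extractor and similarity measure below are not from the present disclosure.

```python
# Toy sketch of the two-step pedestrian recognition procedure:
# (1) extract feature points from an image, (2) pattern-match the
# sequence of contour feature points against a template.

def extract_contour_features(image_rows):
    """Toy feature extractor: x-positions of intensity edges per row."""
    feats = []
    for row in image_rows:
        feats.extend(i for i in range(1, len(row))
                     if abs(row[i] - row[i - 1]) > 0.5)
    return feats

def matches_template(features, template, tolerance=1):
    """Toy pattern matching: same length and element-wise closeness."""
    return (len(features) == len(template) and
            all(abs(f - t) <= tolerance for f, t in zip(features, template)))

image = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 1.0, 1.0, 0.0]]
feats = extract_contour_features(image)
print(feats)                               # [2, 1, 3]
print(matches_template(feats, [2, 1, 3]))  # True
```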
  • Among the configurations described above, the technology according to the present disclosure can be applied to the microcomputer 12051.
  • Specifically, the signal processing apparatus 100 described above can be applied to the microcomputer 12051.
  • As described above, according to the embodiment of the present disclosure, there is provided the signal processing device 100 that sets the number of motion detections to a number smaller than the number of synthesis executions, thereby avoiding harmful effects such as fatal blur of moving objects and significant S/N reduction in the HDR image, while reducing the amount of calculation when generating the HDR image.
  • There is also provided the signal processing apparatus 100 in which the image quality of the HDR image is automatically adjusted.
  • Each step in the processing executed by each device in this specification does not necessarily have to be processed chronologically in the order described in the sequence diagrams or flowcharts.
  • For example, each step in the processing executed by each device may be processed in an order different from the order described in the flowcharts, or may be processed in parallel.
  • A signal processing device including: a synthesis processing unit that performs synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and a motion adaptation processing unit that performs motion adaptation processing at the time of one synthesis using two of the group of images in the synthesis processing unit, wherein the number of motion adaptation processes in the motion adaptation processing unit when the synthesis processing unit performs the N-1 synthesis processes is N-2 or less.
  • The signal processing device, wherein the motion adaptation processing unit performs the motion adaptation processing only on images whose exposure time is equal to or longer than a predetermined threshold among the images that are targets of the synthesis processing in the synthesis processing unit.
  • A computer program causing a computer to execute: performing synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and performing motion adaptation processing in one synthesis using two of the group of images, wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
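The claimed scheme (N-1 pairwise synthesis steps with at most N-2 motion adaptation passes, gated here by the exposure-time threshold of the dependent claim) can be sketched as follows. This is a hypothetical sketch: the scalar frame representation and the 50/50 blend stand in for the real HDR blend, and all names are assumptions.

```python
# Hypothetical sketch of the claimed scheme: N exposure frames are
# combined in N-1 pairwise synthesis steps, and motion adaptation runs
# in at most N-2 of those steps (here: only when the incoming frame's
# exposure time is at or above a threshold).

def hdr_compose(frames, exposures, motion_exposure_threshold):
    """frames: list of N images (scalars for brevity), longest exposure first."""
    n = len(frames)
    assert n >= 3
    result = frames[0]
    motion_passes = 0
    for i in range(1, n):                      # N-1 synthesis steps
        if exposures[i] >= motion_exposure_threshold:
            motion_passes += 1                 # motion adaptation would run here
        result = 0.5 * (result + frames[i])    # stand-in for the HDR blend
    assert motion_passes <= n - 2              # claimed upper bound
    return result, motion_passes

# 4 frames: long, middle, short, ultra-short exposure (seconds)
out, m = hdr_compose([1.0, 0.9, 0.8, 0.7],
                     [1/30, 1/120, 1/480, 1/1920],
                     motion_exposure_threshold=1/480)
print(m)  # 2 -> motion adaptation ran in 2 of the 3 synthesis steps
```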
  • 10: Image sensor, 11: Long accumulation pixel, 12: Middle accumulation pixel, 13: Short accumulation pixel, 14: Ultra-short accumulation pixel, 100: Signal processing device

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Exposure Control For Cameras (AREA)
  • Details Of Cameras Including Film Mechanisms (AREA)

Abstract

[Problem] To provide a signal processing device capable of shortening processing time without degrading dynamic range performance, and of flexibly adjusting the trade-off between image quality and processing time, when obtaining an HDR image. [Solution] Provided is a signal processing device provided with: a synthesis processing unit which synthesizes, at least N-1 times, images of N frames having mutually different exposure times; and a motion adaptation processing unit which performs a motion adaptation process in a one-time synthesis that uses two of the group of the images in the synthesis processing unit, wherein the number of motion adaptation processes in the motion adaptation processing unit is equal to or less than N-2 when the synthesis processing unit performs the synthesis N-1 times.

Description

Signal processing device, signal processing method and computer program
 The present disclosure relates to a signal processing device, a signal processing method, and a computer program.
 In recent years, as a technique for capturing HDR (High Dynamic Range) images, an imaging method that synthesizes image signals captured with a plurality of different exposures has been used.
 In recent years, smartphones have become the most common platform on which the synthesis processing is performed; in that case, the processing is performed by software on an AP (application processor). In environments where the computing performance of the AP is insufficient, it is required to obtain a wide dynamic range in a reasonable processing time while maintaining a certain image quality with respect to motion, and techniques aimed at shortening the time required to obtain an HDR image have been disclosed (see Patent Documents 1 and 2).
JP 2013-152334 A; JP 2015-186062 A
 However, with existing techniques, either the processing time for obtaining an HDR image is shortened but the image quality deteriorates, or the deterioration of the image quality of the HDR image is suppressed but it is difficult to shorten the processing time. In addition, existing techniques do not flexibly adjust the trade-off between image quality and processing time when obtaining an HDR image.
 Therefore, the present disclosure proposes a new and improved signal processing device, signal processing method, and computer program that, when obtaining an HDR image, can shorten the processing time without degrading dynamic range performance and can flexibly adjust the trade-off between image quality and processing time.
 According to the present disclosure, there is provided a signal processing device including: a synthesis processing unit that performs synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and a motion adaptation processing unit that performs motion adaptation processing at the time of one synthesis using two of the group of images in the synthesis processing unit, wherein the number of motion adaptation processes in the motion adaptation processing unit when the synthesis processing unit performs the N-1 synthesis processes is N-2 or less.
 According to the present disclosure, there is also provided a signal processing method including: performing, by a processor, synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and performing motion adaptation processing at the time of one synthesis using two of the group of images, wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
 According to the present disclosure, there is also provided a computer program causing a computer to execute: performing synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and performing motion adaptation processing at the time of one synthesis using two of the group of images, wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
 As described above, according to the present disclosure, it is possible to provide a new and improved signal processing device, signal processing method, and computer program that, when obtaining an HDR image, can shorten the processing time without degrading dynamic range performance and can flexibly adjust the trade-off between image quality and processing time.
 Note that the above effects are not necessarily limiting; together with or instead of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be exhibited.
FIG. 1 is an explanatory diagram showing how the HDR image generation presupposed by the signal processing device in the present embodiment proceeds.
FIG. 2 is an explanatory diagram showing an example of an image sensor in which pixels with different exposures are arranged two-dimensionally.
FIG. 3 is an explanatory diagram showing a functional configuration example of the signal processing device according to the embodiment.
FIG. 4 is an explanatory diagram describing HDR image generation processing by the signal processing device according to the embodiment.
FIG. 5 is a flowchart showing an operation example of the signal processing device according to the embodiment.
FIG. 6 is an explanatory diagram showing a specific example of the operation of the signal processing device according to the embodiment.
FIG. 7 is an explanatory diagram showing a specific example of the operation of the signal processing device according to the embodiment.
FIG. 8 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
FIG. 9 is a diagram showing an example of installation positions of the imaging unit 12031.
 Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description thereof is omitted.
 The description will be made in the following order.
 1. Embodiment of the present disclosure
  1.1. Background
  1.2. Configuration example
  1.3. Operation example
 2. Application examples to mobile bodies
 3. Summary
 <1. Embodiment of the present disclosure>
 [1.1. Background]
 First, before describing the embodiment of the present disclosure in detail, the background that led to the embodiment of the present disclosure will be described.
 As described above, in recent years, as a technique for capturing HDR (High Dynamic Range) images, an imaging method that synthesizes image signals captured with a plurality of different exposures has been used. When the size of the image sensor is small, the dynamic range is small, and so-called blown-out highlights and crushed blacks tend to occur in the captured image. An image in which blown-out highlights or crushed blacks have occurred differs from how the scene appears to the naked eye. HDR technology that expands the dynamic range is therefore useful as a technology for expanding the dynamic range of a small-size image sensor.
 An HDR image is obtained by synthesizing a plurality of images with different exposures. When generating an HDR image, it is necessary to consider the case where a moving object is included in the subject. To cope with motion, a motion adaptation process is sometimes performed: for the purpose of reducing the blur of a moving object that occurs in an image with a long exposure time (long-accumulation image), or, in the case of frame synthesis, of reducing the positional shift or double image of a moving object, a motion region is detected and the blend ratio of the image with a short exposure time (short-accumulation image) is increased, or a motion vector is detected to align objects. In recent years, there is also a tendency to increase the number of exposures to obtain a wider dynamic range.
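The idea of raising the short-exposure blend ratio in motion regions can be illustrated for a single pixel of one pairwise synthesis step. This is an illustrative sketch, not the implementation of the present disclosure: the weight values and function name are assumptions chosen for the example.

```python
# Illustrative sketch of motion-adaptive blending: in pixels flagged as
# moving, the weight of the short-exposure frame is raised to suppress
# blur and double images. Scalar pixels are used for brevity.

def blend_pixel(long_px, short_px, exposure_ratio, is_motion,
                base_short_weight=0.25, motion_short_weight=0.75):
    """Blend one pixel; the short frame is gain-corrected first."""
    short_norm = short_px * exposure_ratio       # normalize brightness
    w = motion_short_weight if is_motion else base_short_weight
    return (1.0 - w) * long_px + w * short_norm

print(blend_pixel(100.0, 25.0, 4.0, is_motion=False))  # 100.0 (static, consistent frames)
print(blend_pixel(100.0, 30.0, 4.0, is_motion=True))   # 115.0 (short frame dominates)
```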
 In recent years, smartphones have become the most common platform on which the synthesis processing is performed; in that case, the processing is performed by software on an AP (application processor). However, the computing performance of APs for smartphones is not yet sufficient; in particular, the computing performance of APs installed in middle-class and lower models is relatively low. In such an environment, if complicated arithmetic processing is performed after increasing the number of exposures to obtain an HDR image, a long processing time is required, and the user is kept waiting for a long time after shooting with the HDR function enabled until the HDR composite image is output. In an environment where the computing performance is not sufficient, it is not easy to obtain a wide dynamic range in a reasonable processing time while maintaining a certain image quality with respect to motion.
 In environments where the computing performance of the AP is insufficient, it is required to obtain a wide dynamic range in a reasonable processing time while maintaining a certain image quality with respect to motion, and techniques aimed at shortening the time required to obtain an HDR image have been disclosed.
 However, with existing techniques, either the processing time for obtaining an HDR image is shortened but the image quality deteriorates, or the deterioration of the image quality of the HDR image is suppressed but it is difficult to shorten the processing time. In addition, existing techniques do not flexibly adjust the trade-off between image quality and processing time when obtaining an HDR image.
 In view of the above points, the present inventor intensively studied techniques that, when obtaining an HDR image, can shorten the processing time without degrading dynamic range performance and can flexibly adjust the trade-off between image quality and processing time. As a result, as described below, the present inventor devised a technique that makes this possible.
 The background that led to the embodiment of the present disclosure has been described above. Next, the embodiment of the present disclosure will be described in detail.
 [1.2. Configuration example]
 In the present embodiment, a signal processing device is assumed that receives a plurality of image frames with different exposures as input and generates an HDR image using the plurality of image frames. FIG. 1 is an explanatory diagram showing how the HDR image generation presupposed by the signal processing device in the present embodiment proceeds. FIG. 1 shows an HDR image being generated from four image frames. The four image frames are referred to, in order of decreasing exposure time, as long accumulation, middle accumulation, short accumulation, and ultra-short accumulation. As an example, the long-accumulation image has an exposure time of 1/30 second, the middle-accumulation image 1/120 second, the short-accumulation image 1/480 second, and the ultra-short-accumulation image 1/1920 second. Of course, the exposure time of each image is not limited to this example. FIG. 1 shows an HDR image being generated from four images with different exposure times, but the number of images from which the HDR image is generated is not limited to four; two or more images suffice. FIG. 1 also shows synthesis starting from the image with the longest exposure time, but when generating an HDR image, synthesis may instead start from the image with the shortest exposure time.
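For the example exposure times of 1/30, 1/120, 1/480, and 1/1920 second, the exposure-ratio gains used to normalize the shorter-exposure frames against the longest one can be computed as below. This is an illustrative sketch under that assumption; the disclosure itself does not prescribe this code.

```python
# Sketch of exposure-ratio gains for the example exposure times above:
# each shorter-exposure frame is multiplied by the ratio of the longest
# exposure to its own exposure, which normalizes brightness.

from fractions import Fraction

exposures = [Fraction(1, 30), Fraction(1, 120),
             Fraction(1, 480), Fraction(1, 1920)]

gains = [exposures[0] / e for e in exposures]
print([int(g) for g in gains])  # [1, 4, 16, 64]
```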
 Since the dynamic range of the image obtained by synthesizing the four images is expanded, gradation compression processing is performed to fit it into a normal dynamic range. After the gradation compression processing, the image is passed to the camera signal processing system, which is also used when HDR is not performed. Camera signal processing includes white balance, demosaicing, color matrix, edge enhancement, noise reduction, and the like.
 As a method of obtaining a plurality of images with different exposures, images may be captured time-sequentially while changing the exposure time as described above, or pixels with different exposures may be arranged two-dimensionally on the image sensor so that each exposure plane is obtained in a single shot. These methods may also be combined. FIG. 2 is an explanatory diagram showing an example of an image sensor in which pixels with different exposures are arranged two-dimensionally. The image sensor 10 shown in FIG. 2 has a configuration in which long-accumulation pixels 11, middle-accumulation pixels 12, short-accumulation pixels 13, and ultra-short-accumulation pixels 14 are arranged in a matrix.
 As a method of obtaining a plurality of images with different exposures, the sensitivity may also be varied without changing the shutter time. Shortening the shutter time requires countermeasures against flicker, so by varying the sensitivity without changing the shutter time, flicker countermeasures can be omitted.
 Next, a functional configuration example of the signal processing device according to the embodiment of the present disclosure will be described. FIG. 3 is an explanatory diagram showing a functional configuration example of the signal processing device 100 according to the embodiment of the present disclosure. FIG. 3 shows a functional configuration example of the signal processing device 100 that executes processing for generating an HDR image. Hereinafter, the functional configuration example of the signal processing device 100 according to the embodiment of the present disclosure will be described with reference to FIG. 3.
 図3に示したように、本実施形態に係る信号処理装置100は、飽和検出部102と、動き検出部104と、黒潰れ検出部106と、ブレンド率決定部108と、合成部110と、を含んで構成される。 As shown in FIG. 3, the signal processing device 100 according to the present embodiment includes a saturation detection unit 102, a motion detection unit 104, a black crush detection unit 106, a blend rate determination unit 108, and a synthesis unit 110.
 飽和検出部102は、信号処理装置100に入力される、露光時間が異なる2つの画像のうち、露光時間が長い方の画像の飽和の度合いを検出する。画像の飽和の度合いとは、ある閾値以下は0、閾値を超えた画素値に対して多値で定義される情報である。飽和検出部102は、検出した飽和の度合いの情報をブレンド率決定部108に送る。飽和の度合いの情報は、ブレンド率決定部108における、2つの画像のブレンド率(合成比率)の決定に用いられる。 The saturation detection unit 102 detects the degree of saturation of the image with the longer exposure time out of the two images with different exposure times input to the signal processing device 100. The degree of saturation of an image is information defined to be 0 for pixel values at or below a certain threshold and to take multiple values for pixel values exceeding the threshold. The saturation detection unit 102 sends information on the detected degree of saturation to the blend rate determination unit 108, where it is used to determine the blend rate (synthesis ratio) of the two images.
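The multi-valued saturation degree described above can be sketched as a simple soft threshold. The threshold and full-scale values below are illustrative assumptions; the specification does not give concrete numbers.

```python
# Hypothetical sketch of the saturation degree: 0 at or below a threshold,
# rising to a maximum above it. Threshold and full-scale values are
# illustrative assumptions for a 12-bit sensor, not from the specification.

def saturation_degree(pixel, threshold=3500, full_scale=4095):
    """Return 0.0 at or below the threshold, ramping linearly to 1.0 at full scale."""
    if pixel <= threshold:
        return 0.0
    return min(1.0, (pixel - threshold) / (full_scale - threshold))
```

The black-crush degree described later is the mirror image of this function, rising as pixel values fall below a lower threshold.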
 動き検出部104は、信号処理装置100に入力される2つの画像を用いて画像間の動きを検出する。動き検出部104は、動きを検出する処理として、例えば、事前に短蓄フレームに露光比ゲインをかけて明るさを正規化した上で、長蓄フレームと短蓄フレームとの画素値の差分を取る。動き検出部104は、この差分値が所定の閾値より大きければ画像間に動きがあると判定する。また動き検出部104は、動きを検出する処理として、ある一定の範囲の平均値を算出してその差分を取ってもよい。また動き検出部104は、動きを検出する処理として、より正確に動きを検出するために周波数や勾配に注目してその変化を検出してもよい。また動き検出部104は、複数のフレーム間で対応する位置の変化を検出しても良いし、移動ベクトルを検出してその値を使用する方法で動きを検出しても良い。動き検出部104は、動きとして検出した領域の情報をブレンド率決定部108に送る。ブレンド率決定部108は、動きとして検出された領域では、ブラーや位置ずれの低減を目的として露光時間が短い方の画像のブレンド率を高める。 The motion detection unit 104 detects motion between the two images input to the signal processing device 100. As the motion detection process, for example, the motion detection unit 104 first normalizes brightness by applying the exposure-ratio gain to the short accumulation frame, and then takes the difference between the pixel values of the long accumulation frame and the short accumulation frame. If this difference value is greater than a predetermined threshold, the motion detection unit 104 determines that there is motion between the images. Alternatively, as the motion detection process, the motion detection unit 104 may calculate the average value over a certain range and take the difference of the averages. To detect motion more accurately, it may focus on frequency or gradient and detect changes in them. The motion detection unit 104 may also detect changes in corresponding positions across a plurality of frames, or may detect motion by detecting a motion vector and using its value. The motion detection unit 104 sends information on the regions detected as motion to the blend rate determination unit 108. In regions detected as motion, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time in order to reduce blur and misalignment.
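The first detection method above (gain normalization followed by a thresholded pixel difference) can be sketched as follows. The function name and the flat-list frame representation are assumptions for illustration.

```python
# Illustrative sketch of the pixel-difference motion detection described above:
# the short accumulation frame is normalized by the exposure-ratio gain, then
# the per-pixel absolute difference against the long accumulation frame is
# compared with a threshold. Names and data layout are assumptions.

def detect_motion(long_frame, short_frame, exposure_ratio, diff_threshold):
    """Return a per-pixel motion mask (True where motion is detected)."""
    motion_mask = []
    for long_px, short_px in zip(long_frame, short_frame):
        normalized = short_px * exposure_ratio  # equalize brightness first
        motion_mask.append(abs(long_px - normalized) > diff_threshold)
    return motion_mask
```

A block-averaged variant, as also mentioned in the text, would apply the same comparison to local mean values instead of individual pixels.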
 黒潰れ検出部106は、露光時間が異なる2つの画像のうち、露光時間が短い方の画像の黒潰れの度合いを検出する。画像の黒潰れの度合いとは、ある閾値以上は0、閾値を下回る画素値に対して多値で定義される情報である。黒潰れ検出部106は、検出した黒潰れの度合いの情報をブレンド率決定部108に送る。黒潰れの度合いの情報は、ブレンド率決定部108における、2つの画像のブレンド率の決定に用いられる。 The black crush detection unit 106 detects the degree of black crushing of the image with the shorter exposure time out of the two images with different exposure times. The degree of black crushing of an image is information defined to be 0 for pixel values at or above a certain threshold and to take multiple values for pixel values below the threshold. The black crush detection unit 106 sends information on the detected degree of black crushing to the blend rate determination unit 108, where it is used to determine the blend rate of the two images.
 ブレンド率決定部108は、信号処理装置100に入力される2つの画像のブレンド率を決定する。ブレンド率決定部108は、2つの画像のブレンド率を決定するにあたり、飽和検出部102が検出した画像の飽和の度合い、黒潰れ検出部106が検出した画像の黒潰れの度合い、動き検出部104が検出した動きの有無の情報を用いる。例えば、飽和検出部102が検出した画像の飽和の度合いが所定の閾値以上であれば、ブレンド率決定部108は、露光時間が短い方の画像のブレンド率を高める。また例えば、黒潰れ検出部106が検出した画像の黒潰れの度合いが所定の閾値以上であれば、ブレンド率決定部108は、露光時間が長い方の画像のブレンド率を高める。また例えば、動き検出部104が動きとして検出した領域では、ブレンド率決定部108は、ブラーや位置ずれの低減を目的として露光時間が短い方の画像のブレンド率を高める。ブレンド率決定部108は、これらの要素から最終的なブレンド率を決定し、決定したブレンド率の情報を合成部110に送る。 The blend rate determination unit 108 determines the blend rate of the two images input to the signal processing device 100. In determining the blend rate of the two images, the blend rate determination unit 108 uses the degree of saturation detected by the saturation detection unit 102, the degree of black crushing detected by the black crush detection unit 106, and the presence or absence of motion detected by the motion detection unit 104. For example, if the degree of saturation detected by the saturation detection unit 102 is equal to or greater than a predetermined threshold, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time. Likewise, if the degree of black crushing detected by the black crush detection unit 106 is equal to or greater than a predetermined threshold, the blend rate determination unit 108 increases the blend rate of the image with the longer exposure time. In regions detected as motion by the motion detection unit 104, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time in order to reduce blur and misalignment. The blend rate determination unit 108 determines the final blend rate from these elements and sends the determined blend rate to the synthesis unit 110.
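One way the three cues above could be combined into a single blend rate is sketched below. The combination rule itself is an assumption; the text only states the direction in which each cue shifts the blend.

```python
# Hypothetical combination of the three cues into a blend rate for the
# short-exposure image (1.0 = use only the short frame). Saturation and
# motion raise the short-frame weight; black crushing raises the
# long-frame weight. The exact arithmetic is an illustrative assumption.

def blend_rate_short(saturation, black_crush, is_motion):
    """All degree inputs are in [0, 1]; returns the short-frame blend rate."""
    rate = saturation                   # saturated highlights: prefer short frame
    if is_motion:
        rate = max(rate, 1.0)           # motion regions: short frame reduces blur
    rate = rate * (1.0 - black_crush)   # crushed shadows: pull back to long frame
    return min(max(rate, 0.0), 1.0)
```

The synthesis unit would then form each output pixel as `rate * short + (1 - rate) * long` after gain normalization.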
 合成部110は、ブレンド率決定部108が決定したブレンド率に基づき、信号処理装置100に入力される2つの画像を合成する処理を行う。合成部110により合成された画像は、さらに別の、HDR画像の生成対象になる画像と合成される。 The synthesizing unit 110 performs a process of synthesizing two images input to the signal processing device 100 based on the blend rate determined by the blend rate determining unit 108. The image synthesized by the synthesis unit 110 is further synthesized with another image that is an HDR image generation target.
 信号処理装置100は、全てのHDR画像の生成対象になる画像に対して一連の処理を行うことでHDR画像の生成を行う。 The signal processing apparatus 100 generates an HDR image by performing a series of processes on all HDR image generation targets.
 信号処理装置100におけるHDR画像の生成処理の中では、アルゴリズムにもよるが画像間での被写体の動き検出を含む動き適応処理の演算量が比較的大きい。もし動き適応処理を全く行わないならば、演算量は小さく、合成処理は比較的短時間で終了するが、代わりに動物体のブラーが目立ったり、多重輪郭が発生するアーティファクトが見えたりする。 In the HDR image generation processing in the signal processing device 100, the amount of calculation of the motion adaptation processing, which includes detecting subject motion between images, is relatively large, although this depends on the algorithm. If no motion adaptation processing were performed at all, the amount of calculation would be small and the synthesis processing would finish in a relatively short time, but in exchange, blur of moving objects would become noticeable and multiple-contour artifacts would appear.
 本実施形態では、このトレードオフに注目して、HDR画像の画質と、HDR画像の生成に掛かる処理時間の関係を柔軟に制御することを目的としている。 In the present embodiment, focusing on this trade-off, an object is to flexibly control the relationship between the image quality of the HDR image and the processing time required for generating the HDR image.
 図4は、本実施形態に係る信号処理装置100によるHDR画像の生成処理について説明する説明図である。図4に示したのは、露光時間が異なる4つの画像からHDR画像を生成する例である。図4には、各フレームのシャッタータイムが、長蓄側から1/30秒、1/120秒、1/480秒、1/1920秒であり、露光比間隔は4倍ずつ増えていき、全体として64倍のケースが示されている。なお、露光比の間隔は最大でも16倍程度とすることが望ましい。また、各フレームのシャッタータイムは、被写体や、撮像時の環境(明るさ等)に応じて変化しうる。 FIG. 4 is an explanatory diagram illustrating HDR image generation processing by the signal processing device 100 according to the present embodiment. FIG. 4 shows an example in which an HDR image is generated from four images with different exposure times. In FIG. 4, the shutter times of the frames are, from the long accumulation side, 1/30 second, 1/120 second, 1/480 second, and 1/1920 second; the exposure ratio increases by a factor of 4 at each interval, for a factor of 64 overall. The exposure ratio between adjacent frames is desirably at most about 16. The shutter time of each frame can also vary depending on the subject and the imaging environment (brightness, etc.).
 通常であれば、4つの画像からHDR画像を生成する際には動き適応処理を計3回実行するところ、図4に示した例では、長蓄側の1回だけとしている。シャッタータイムが最も長い長蓄フレームで発生する顕著なブラーを救済するが、さらに短蓄側への置き換えは行わない。激しいブラーが見られるのは長蓄画像のみで、長蓄画像を除けばブラー量は許容範囲に収まるケースが多いため、長蓄側の1回だけ動き適応処理を行えば十分な品質の合成画像を生成し得る。また動き検出を行って短蓄側への置換を進める場合でも、シャッタータイムが短くになるに従って短蓄側の動体のS/Nは低下していくため、短蓄画像を合成に使用したとしても画質の観点で使用に耐えないこともある。このような観点から、本実施形態に係る信号処理装置100は、動き適応処理を長蓄側の1回だけとしている。動き適応処理を長蓄側の1回だけとすることで、動体の致命的なブラーや著しいS/N低下といった弊害を排除した上で、動き適応処理にかかる演算量を減らすことができる。 Normally, generating an HDR image from four images would execute the motion adaptation processing a total of three times; in the example shown in FIG. 4, it is executed only once, on the long accumulation side. This relieves the noticeable blur that occurs in the long accumulation frame, which has the longest shutter time, while no further replacement toward the short accumulation side is performed. Severe blur appears only in the long accumulation image, and apart from the long accumulation image the amount of blur usually falls within an allowable range, so performing the motion adaptation processing only once on the long accumulation side can still produce a composite image of sufficient quality. Moreover, even if motion detection were performed and replacement toward the short accumulation side were continued, the S/N of moving objects on the short accumulation side decreases as the shutter time becomes shorter, so the short accumulation images might be unusable in terms of image quality even if used for synthesis. From this viewpoint, the signal processing device 100 according to the present embodiment performs the motion adaptation processing only once, on the long accumulation side. By doing so, the amount of calculation for the motion adaptation processing can be reduced while avoiding adverse effects such as fatal blur of moving objects and significant S/N degradation.
 図4では、動き適応処理を1回だけ行うとして説明したが、動き適応処理の回数は係る例に限定されるもので無い。本実施形態に係る信号処理装置100の特徴は、動き適応処理の回数を、本来必要な回数よりも少ない回数で固定する点にある。 In FIG. 4, the motion adaptation processing is described as being performed only once, but the number of motion adaptation processes is not limited to this example. A feature of the signal processing device 100 according to the present embodiment is that the number of motion adaptation processes is fixed at a number smaller than the number originally required.
 動き適応処理の回数を減らすと言っても、回数を固定にするのでは、ある程度の品質を担保したHDR画像の生成には十分とは言えないケースがある。各フレームに含まれるブラー量というのは多くの要因で決まる。動体の移動速度が大きければ当然ブラー量は大きくなる。またカメラと動被写体との距離が遠かったり、レンズ画角が広かったりすれば移動速度は相対的に遅くなりブラー量は小さくなる。またブラー量に対する許容限も主観的なものであって、画像の評価者(例えば撮像者)によって異なる。加えて、実際の各フレームのシャッタータイムは、撮影対象シーンが明るいか暗いか、フレーム間の露光比を何倍に設定するか、全体の露光比つまりダイナミックレンジをどれくらい広く取るか、AE(Auto Exposure;自動露出)制御の方法やポリシーなど、さまざまな要素によって決まる。さらに、先に述べたようにシャッタータイムが短くなっていくに従って動体のS/Nは低下するので、ブラー低減を取るか動体のS/N確保を取るかのトレードオフとなるが、ブラー低減と動体のS/N確保とのバランスも主観的なものであって、やはり評価者によって異なる。以上のような要素を考慮すれば、適切な動き適応処理の回数というのは容易に決められるものではなく、状況に応じて柔軟に制御した方がよいと言える。 Even if the number of motion adaptation processes is to be reduced, simply fixing the number may not be sufficient for generating an HDR image of a guaranteed level of quality. The amount of blur contained in each frame is determined by many factors. If the moving speed of a moving object is high, the amount of blur naturally increases. Conversely, if the distance between the camera and the moving subject is long, or if the angle of view of the lens is wide, the apparent moving speed is relatively slow and the amount of blur is small. The allowable limit for the amount of blur is also subjective and varies depending on the evaluator of the image (for example, the photographer). In addition, the actual shutter time of each frame is determined by various factors: whether the scene to be photographed is bright or dark, what exposure ratio is set between frames, how wide the overall exposure ratio, that is, the dynamic range, is made, and the AE (Auto Exposure) control method and policy. Furthermore, as described above, the S/N of moving objects decreases as the shutter time becomes shorter, so there is a trade-off between reducing blur and securing the S/N of moving objects, and the balance between the two is also subjective and again varies depending on the evaluator. Considering these factors, the appropriate number of motion adaptation processes cannot easily be determined in advance, and it is better to control it flexibly according to the situation.
 そこで、本実施形態に係る信号処理装置100は、HDR画像の生成の基になる画像のシャッタータイムが所定の閾値以上であるかどうかを判断する。シャッタータイムが所定の閾値以上であれば、信号処理装置100は、その画像には動物体のブラーが目立っている可能性があるので、動き適応処理を実施して画像の合成処理を行う。なお、本実施形態では、合成処理とは、ブレンド率決定部108における2つの画像の合成比率の決定処理のこと、または、合成部110による、2つの画像の合成比率に基づいて画像を合成する処理のこと、のいずれかを言うものとする。また本実施形態では、HDR画像の生成の際に、露光時間が短い画像から順に画像を合成するとして説明するが、本開示は係る例に限定されるものでは無く、露光時間が長い画像から順に画像を合成しても良い。 Therefore, the signal processing device 100 according to the present embodiment determines whether the shutter time of an image on which HDR image generation is based is equal to or greater than a predetermined threshold. If the shutter time is equal to or greater than the predetermined threshold, blur of moving objects may be conspicuous in that image, so the signal processing device 100 performs the motion adaptation processing and then performs the image synthesis processing. In the present embodiment, the synthesis processing refers to either the determination of the synthesis ratio of two images by the blend rate determination unit 108, or the synthesis of images by the synthesis unit 110 based on the synthesis ratio of the two images. Also, in the present embodiment, HDR image generation is described as synthesizing the images in order from the image with the shortest exposure time, but the present disclosure is not limited to this example, and the images may be synthesized in order from the image with the longest exposure time.
 [1.3.動作例]
 図5は、本開示の実施の形態に係る信号処理装置100の動作例を示す流れ図である。図5に示したのは、HDR画像の生成処理を行う際の信号処理装置100の動作例である。図5に示した流れ図において、Nは合成フレーム数、T_NはN番目のフレームのシャッタータイム、TH_ShutはブラーやS/Nを考慮して予め定義するシャッタータイムの閾値を表している。ここでは、Nが小さい方が長蓄側であり、大きい方が短蓄側であるとする。以下、図5を用いて本開示の実施の形態に係る信号処理装置100の動作例を説明する。
[1.3. Example of operation]
FIG. 5 is a flowchart illustrating an operation example of the signal processing device 100 according to the embodiment of the present disclosure, namely its operation when performing HDR image generation processing. In the flowchart shown in FIG. 5, N is the number of frames to be synthesized, T_N is the shutter time of the Nth frame, and TH_Shut is a shutter-time threshold defined in advance in consideration of blur and S/N. Here, a smaller N corresponds to the long accumulation side and a larger N to the short accumulation side. Hereinafter, an operation example of the signal processing device 100 according to the embodiment of the present disclosure will be described with reference to FIG. 5.
 信号処理装置100は、まずNの値を1に初期化し(ステップS101)、続いて、T_NがTH_Shut以上であるかどうか判断する(ステップS102)。このステップS102の判断は、例えば動き検出部104が実行する。 The signal processing device 100 first initializes the value of N to 1 (step S101), and then determines whether T_N is equal to or greater than TH_Shut (step S102). The determination in step S102 is executed by, for example, the motion detection unit 104.
 ステップS102の判断の結果、T_NがTH_Shut以上であれば(ステップS102、Yes)、信号処理装置100は、入力された2つの画像(N番目及びN+1番目のフレーム)を用いて動き適応処理を行う(ステップS103)。ステップS103の処理は、例えば動き検出部104が行う。 If, as a result of the determination in step S102, T_N is equal to or greater than TH_Shut (step S102, Yes), the signal processing device 100 performs the motion adaptation processing using the two input images (the Nth and (N+1)th frames) (step S103). The processing in step S103 is performed by, for example, the motion detection unit 104.
 一方、ステップS102の判断の結果、T_NがTH_Shut以上でなければ(ステップS102、No)、信号処理装置100は、ステップS103の動き適応処理をスキップする。 On the other hand, if, as a result of the determination in step S102, T_N is not equal to or greater than TH_Shut (step S102, No), the signal processing device 100 skips the motion adaptation processing in step S103.
 続いて、信号処理装置100は、入力された2つの画像の合成処理を行う(ステップS104)。画像の合成処理は、合成部110が実行する。信号処理装置100は、2つの画像の合成処理の際には、上述したように、飽和の度合いの情報や黒潰れの度合いの情報、2つの画像間の動きの情報等を用いる。 Subsequently, the signal processing apparatus 100 performs a process of combining the two input images (step S104). The compositing unit 110 executes the image compositing process. As described above, the signal processing apparatus 100 uses information on the degree of saturation, information on the degree of black crushing, information on movement between the two images, and the like when combining two images.
 続いて、信号処理装置100は、Nの値が合成対象のフレーム数より1少ない値と等しいかどうか判断する(ステップS105)。Nの値が合成対象のフレーム数より1少ない値と等しければ(ステップS105、Yes)、信号処理装置100は、合成対象のフレームについて全て処理が完了したとして一連の処理を終了する。一方、Nの値が合成対象のフレーム数より1少ない値と等しくなければ(ステップS105、No)、信号処理装置100は、Nの値を1つインクリメントし(ステップS106)、ステップS102の処理に戻る。 Subsequently, the signal processing apparatus 100 determines whether the value of N is equal to a value that is 1 less than the number of frames to be synthesized (step S105). If the value of N is equal to one less than the number of frames to be combined (step S105, Yes), the signal processing apparatus 100 ends the series of processes assuming that the processing has been completed for all the frames to be combined. On the other hand, if the value of N is not equal to a value that is one less than the number of frames to be combined (step S105, No), the signal processing apparatus 100 increments the value of N by 1 (step S106), and the process of step S102 is performed. Return.
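The flow of steps S101 to S106 above can be sketched as follows. Here `motion_adapt` and `blend` are hypothetical stand-ins for the motion adaptation processing and the synthesis processing, and the threshold value is only an example.

```python
# Sketch of the flow in FIG. 5 for N-frame synthesis, assuming frames are
# ordered from longest to shortest exposure. Only the control flow is taken
# from the text; function names and the threshold value are assumptions.

TH_SHUT = 1.0 / 60  # example shutter-time threshold (design-dependent)

def hdr_synthesize(frames, shutter_times, motion_adapt, blend):
    """frames[0] is the longest exposure; returns the running composite."""
    result = frames[0]
    for n in range(len(frames) - 1):              # loop over steps S102-S106
        if shutter_times[n] >= TH_SHUT:           # S102: blur risk is high
            motion_adapt(result, frames[n + 1])   # S103: motion adaptation
        result = blend(result, frames[n + 1])     # S104: synthesis
    return result
```

With the shutter times of FIG. 4 (1/30, 1/120, 1/480, 1/1920 second), only the first pair passes the S102 check, so the motion adaptation runs once.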
 図6は、本開示の実施の形態に係る信号処理装置100の動作の具体例を示す説明図である。図6では、THShutが1/60秒であるとしている。 FIG. 6 is an explanatory diagram illustrating a specific example of the operation of the signal processing device 100 according to the embodiment of the present disclosure. In FIG. 6, it is assumed that TH Shut is 1/60 second.
 長蓄フレームのシャッタータイム(1/30秒)は、閾値TH_Shutよりも長いので、長蓄フレームのブラー発生リスクは大きい。そして中蓄フレームのS/Nは十分であるとして、信号処理装置100は、長蓄フレームと中蓄フレームとの間で動き適応処理を実行する。 Since the shutter time of the long accumulation frame (1/30 second) is longer than the threshold TH_Shut, the risk of blur occurring in the long accumulation frame is large. Then, assuming that the S/N of the middle accumulation frame is sufficient, the signal processing device 100 executes the motion adaptation processing between the long accumulation frame and the middle accumulation frame.
 次に、中蓄フレームのシャッタータイム(1/120秒)は閾値TH_Shutよりも短いので、中蓄フレームのブラー発生リスクは小さい。また、短蓄フレームのS/Nが不足しているとして、信号処理装置100は、中蓄フレームと短蓄フレームとの間で動き適応処理は実行しない。 Next, since the shutter time of the middle accumulation frame (1/120 second) is shorter than the threshold TH_Shut, the risk of blur occurring in the middle accumulation frame is small. In addition, since the S/N of the short accumulation frame is insufficient, the signal processing device 100 does not execute the motion adaptation processing between the middle accumulation frame and the short accumulation frame.
 さらにシャッタータイムが短くなる短蓄フレームと超短蓄フレームの間の動き適応処理も、信号処理装置100は実行しない。本実施形態に係る信号処理装置100は、このようにして、ブラーやS/Nを考慮しながら動き適応処理の回数を減らすことができる。図6に示した例では、閾値TH_Shutを考慮しなければ動き適応処理を3回実行するところを1回に減らしている。本実施形態に係る信号処理装置100は、ブラーやS/Nといった、合成後のHDR画像の画質を反映する要素としてシャッタータイムの閾値TH_Shutを定義して、最終的に設定されたシャッタータイムとの関係から動き適応処理の実行回数を調整するものである。 The signal processing device 100 also does not execute the motion adaptation processing between the short accumulation frame and the ultra-short accumulation frame, whose shutter times are even shorter. In this way, the signal processing device 100 according to the present embodiment can reduce the number of motion adaptation processes while taking blur and S/N into account. In the example shown in FIG. 6, the motion adaptation processing, which would be executed three times if the threshold TH_Shut were not considered, is reduced to once. The signal processing device 100 according to the present embodiment defines the shutter-time threshold TH_Shut as an element that reflects factors such as blur and S/N in the image quality of the synthesized HDR image, and adjusts the number of executions of the motion adaptation processing from its relationship with the finally set shutter times.
 本実施形態に係る信号処理装置100は、動き適応処理の実行回数を予め求めたうえでHDR合成処理を実行してもよい。その実行回数の計算式を数式(1)、(2)に示す。 The signal processing apparatus 100 according to the present embodiment may execute the HDR synthesis process after obtaining the number of executions of the motion adaptation process in advance. Formulas (1) and (2) show the calculation formula for the number of executions.
T_SUM = N × (T_HDR + T_MOVE)   (1)
M = (T_SUM − N × T_HDR) / T_MOVE   (2)
 HDR処理全体の処理時間をT_SUMとすると、1回のHDR合成にかかる時間T_HDRと1回の動き適応処理にかかる時間T_MOVEの和を、合成フレーム数N倍したものがT_SUMになる(数式(1)参照)。この関係から、全体の処理時間T_SUMを一定以下に収めたい場合の動き適応処理の回数Mが数式(2)で表される。本実施形態に係る信号処理装置100は、数式(2)で求めた回数Mだけ動き適応処理を行うことで、一定の処理時間を超えない範囲で動物体の画質を向上させることができる。 Assuming that the processing time of the entire HDR processing is T_SUM, T_SUM is the sum of the time T_HDR required for one HDR synthesis and the time T_MOVE required for one motion adaptation process, multiplied by the number of frames N to be synthesized (see Equation (1)). From this relation, the number M of motion adaptation processes when the total processing time T_SUM is to be kept at or below a given value is expressed by Equation (2). By performing the motion adaptation processing only the M times obtained from Equation (2), the signal processing device 100 according to the present embodiment can improve the image quality of moving objects without exceeding a fixed processing time.
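The relation above can be evaluated numerically as follows, reading Equation (2) as bounding M by (T_SUM − N × T_HDR) / T_MOVE. The time values in the test are illustrative assumptions, not measured figures.

```python
# Sketch of Equation (2) as described in the text: with N blend steps each
# costing t_hdr and each motion adaptation pass costing t_move, the number
# of passes M fitting a total time budget t_sum satisfies
# n_frames * t_hdr + M * t_move <= t_sum.

import math

def max_motion_passes(t_sum, n_frames, t_hdr, t_move):
    """Largest integer M with n_frames*t_hdr + M*t_move <= t_sum (not below 0)."""
    return max(0, math.floor((t_sum - n_frames * t_hdr) / t_move))
```

For example, with a budget of 10 time units, 4 frames, 1 unit per blend, and 2 units per adaptation pass, up to 3 passes fit the budget.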
 信号処理装置100は、シャッタータイムの閾値に基づいて動き適応処理の実行可否を判断する手法と、動き適応処理の実行回数を予め求める手法とを併用することも可能である。この場合、信号処理装置100は、ブラーやS/Nを考慮した動き適応処理の回数の上限と、上述のように求めた回数Mのうち小さい回数の方を使用すればよい。 The signal processing device 100 can also use the method of determining whether to execute the motion adaptation processing based on the shutter-time threshold together with the method of obtaining the number of executions of the motion adaptation processing in advance. In this case, the signal processing device 100 may use the smaller of the upper limit on the number of motion adaptation processes derived from blur and S/N considerations and the number M obtained as described above.
 本実施形態に係る信号処理装置100は、画像間の動きの量の情報に基づいて動き適応処理の実行可否を判断しても良い。図7は、本開示の実施の形態に係る信号処理装置100の動作の具体例を示す説明図である。 The signal processing apparatus 100 according to the present embodiment may determine whether or not the motion adaptation process can be performed based on information on the amount of motion between images. FIG. 7 is an explanatory diagram illustrating a specific example of the operation of the signal processing device 100 according to the embodiment of the present disclosure.
 信号処理装置100は、まず長蓄フレームと中蓄フレームとで動き検出を行い、画像間の動きの大きさMotion12を検出する。検出された動きが大きければ動物体の移動速度がシャッタータイムに対して速いということであり、ブラーや二重像発生のリスクが高いということになる。動きの大きさとシャッタータイムは基本的には比例すると仮定してよく、ここで、信号処理装置100は、Motion12からより短蓄側の動き量Motion23、Motion34も算出することができる。 The signal processing device 100 first performs motion detection between the long accumulation frame and the middle accumulation frame, and detects the motion magnitude Motion12 between the images. If the detected motion is large, the moving speed of the moving object is fast relative to the shutter time, which means that the risk of blur and double images is high. It may be assumed that the magnitude of motion is basically proportional to the shutter time; accordingly, the signal processing device 100 can also calculate the motion amounts Motion23 and Motion34 on the shorter accumulation side from Motion12.
 そして、ブラーや二重像が目立たない動き量の大きさTH_Motionを定義しておいた上で、信号処理装置100は、動き量がTH_Motion以上のフレーム間に対してのみ動き適応処理を行う。図7に示した例では、長蓄フレームと中蓄フレームとの間で動き適応処理を実行し、それ以外のフレームの間では動き適応処理を行っていない。 Then, having defined TH_Motion as the magnitude of motion below which blur and double images are not conspicuous, the signal processing device 100 performs the motion adaptation processing only between frames whose motion amount is equal to or greater than TH_Motion. In the example shown in FIG. 7, the motion adaptation processing is executed between the long accumulation frame and the middle accumulation frame, and is not performed between the other frames.
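The selection above can be sketched as follows, assuming (as the text does) that motion magnitude scales roughly in proportion to shutter time. Extrapolating over each frame pair's summed shutter time, and the threshold value used in the test, are illustrative assumptions.

```python
# Sketch of the scheme above: measure Motion12 once between the two longest
# exposures, extrapolate the shorter pairs assuming motion scales with the
# summed shutter time of each pair, and adapt only pairs at or above the
# threshold. Scaling rule and TH_MOTION are illustrative assumptions.

TH_MOTION = 5.0  # example motion-amount threshold, in pixels

def pairs_needing_adaptation(motion12, shutter_times):
    """Return (n, n+1) frame-pair indices whose estimated motion >= TH_MOTION."""
    base = shutter_times[0] + shutter_times[1]   # span covered by Motion12
    selected = []
    for n in range(len(shutter_times) - 1):
        span = shutter_times[n] + shutter_times[n + 1]
        estimated = motion12 * span / base       # proportional extrapolation
        if estimated >= TH_MOTION:
            selected.append((n, n + 1))
    return selected
```

With the shutter times of FIG. 7 and a measured Motion12 of 8 pixels, only the long/middle pair is selected, matching the behavior described in the text.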
 このように動き量の大きさに基づいて動き適応処理の実行可否を判断することで、信号処理装置100は、フレーム間の動き量がゼロまたは十分小さければ、動き適応処理を実行しなくても画質の低下がなく、また、以降のフレームに対する動き適応処理を自動的にオフにできる。つまり、本開示の実施の形態に係る信号処理装置100は、撮影シーンにおける動き量に応じて処理時間を短縮することができるという効果を有する。 By determining whether to execute the motion adaptation processing based on the magnitude of the motion amount in this way, the signal processing device 100 suffers no loss of image quality from skipping the motion adaptation processing when the motion amount between frames is zero or sufficiently small, and can automatically turn off the motion adaptation processing for the subsequent frames. That is, the signal processing device 100 according to the embodiment of the present disclosure has the effect of shortening the processing time according to the amount of motion in the shooting scene.
 信号処理装置100は、シャッタータイムの閾値に基づいて動き適応処理の実行可否を判断する手法と、画像間の動きの量の情報に基づいて動き適応処理の実行可否を判断する手法とを併用することも可能である。この場合、信号処理装置100は、ブラーやS/Nを考慮した動き適応処理の回数の上限と、画像間の動きの量の情報を考慮した動き適応処理の回数の上限とのうち、小さい回数の方を使用すればよい。 The signal processing device 100 can also combine the method of determining whether to execute the motion adaptation processing based on the shutter-time threshold with the method of determining it based on the information on the amount of motion between images. In this case, the signal processing device 100 may use the smaller of the upper limit on the number of motion adaptation processes derived from blur and S/N considerations and the upper limit derived from the information on the amount of motion between images.
 また信号処理装置100は、画像間の動きの量の情報に基づいて動き適応処理の実行可否を判断する手法と、動き適応処理の実行回数を予め求める手法とを併用することも可能である。この場合、信号処理装置100は、画像間の動きの量の情報を考慮した動き適応処理の回数の上限と、上述のように求めた回数Mのうち小さい回数の方を使用すればよい。 The signal processing device 100 can also use the method of determining whether to execute the motion adaptation processing based on the information on the amount of motion between images together with the method of obtaining the number of executions of the motion adaptation processing in advance. In this case, the signal processing device 100 may use the smaller of the upper limit on the number of motion adaptation processes derived from the information on the amount of motion between images and the number M obtained as described above.
 また信号処理装置100は、シャッタータイムの閾値に基づいて動き適応処理の実行可否を判断する手法と、画像間の動きの量の情報に基づいて動き適応処理の実行可否を判断する手法と、動き適応処理の実行回数を予め求める手法と、を併用することも可能である。この場合、信号処理装置100は、ブラーやS/Nを考慮した動き適応処理の回数の上限と、画像間の動きの量の情報を考慮した動き適応処理の回数の上限と、上述のように求めた回数Mとのうち、最も小さい回数を使用すればよい。 The signal processing device 100 can also use all three methods together: determining whether to execute the motion adaptation processing based on the shutter-time threshold, determining it based on the information on the amount of motion between images, and obtaining the number of executions of the motion adaptation processing in advance. In this case, the signal processing device 100 may use the smallest of the upper limit derived from blur and S/N considerations, the upper limit derived from the information on the amount of motion between images, and the number M obtained as described above.
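The combination rule above reduces to taking the minimum of the three counts. The sketch below is trivial by design; the three arguments are hypothetical upper limits obtained from the shutter-time threshold check, the motion-amount check, and Equation (2), respectively.

```python
# Combining the three limits described above: the smallest bound wins.
# The argument names mark where each hypothetical limit would come from.

def combined_adaptation_count(limit_shutter, limit_motion, limit_budget_m):
    """Use the smallest of the three upper bounds on motion adaptation passes."""
    return min(limit_shutter, limit_motion, limit_budget_m)
```

Any pairwise combination mentioned in the text is the same rule restricted to two of the three bounds.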
 動き量に基づいて動き適応処理の実行可否を判断する場合、各フレームの画像全体で動き量を判断しても良いが、主要被写体は画像の中心部分に存在することが多い。よって信号処理装置100は、画像全体ではなく、一部分、例えば中心部分の所定の範囲における動き量に基づいて動き適応処理の実行可否を判断してもよい。 When determining whether to execute the motion adaptation process based on the amount of motion, the amount of motion may be determined for the entire image of each frame, but the main subject often exists in the center of the image. Therefore, the signal processing apparatus 100 may determine whether or not to execute the motion adaptation process based on a motion amount in a predetermined range of a part, for example, the central part, instead of the entire image.
 <2.移動体への応用例>
 本開示に係る技術(本技術)は、様々な製品へ応用することができる。例えば、本開示に係る技術は、自動車、電気自動車、ハイブリッド電気自動車、自動二輪車、自転車、パーソナルモビリティ、飛行機、ドローン、船舶、ロボット等のいずれかの種類の移動体に搭載される装置として実現されてもよい。
<2. Application example to mobile objects>
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, or robot.
 図8は、本開示に係る技術が適用され得る移動体制御システムの一例である車両制御システムの概略的な構成例を示すブロック図である。 FIG. 8 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
 車両制御システム12000は、通信ネットワーク12001を介して接続された複数の電子制御ユニットを備える。図8に示した例では、車両制御システム12000は、駆動系制御ユニット12010、ボディ系制御ユニット12020、車外情報検出ユニット12030、車内情報検出ユニット12040、及び統合制御ユニット12050を備える。また、統合制御ユニット12050の機能構成として、マイクロコンピュータ12051、音声画像出力部12052、及び車載ネットワークI/F(interface)12053が図示されている。 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 8, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are illustrated.
 駆動系制御ユニット12010は、各種プログラムにしたがって車両の駆動系に関連する装置の動作を制御する。例えば、駆動系制御ユニット12010は、内燃機関又は駆動用モータ等の車両の駆動力を発生させるための駆動力発生装置、駆動力を車輪に伝達するための駆動力伝達機構、車両の舵角を調節するステアリング機構、及び、車両の制動力を発生させる制動装置等の制御装置として機能する。 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like.
 ボディ系制御ユニット12020は、各種プログラムにしたがって車体に装備された各種装置の動作を制御する。例えば、ボディ系制御ユニット12020は、キーレスエントリシステム、スマートキーシステム、パワーウィンドウ装置、あるいは、ヘッドランプ、バックランプ、ブレーキランプ、ウィンカー又はフォグランプ等の各種ランプの制御装置として機能する。この場合、ボディ系制御ユニット12020には、鍵を代替する携帯機から発信される電波又は各種スイッチの信号が入力され得る。ボディ系制御ユニット12020は、これらの電波又は信号の入力を受け付け、車両のドアロック装置、パワーウィンドウ装置、ランプ等を制御する。 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, or fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform detection processing for objects such as persons, vehicles, obstacles, signs, or characters on the road surface, or may perform distance detection processing.
 The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light. The imaging unit 12031 can output the electrical signal as an image, or can output it as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the driver's degree of fatigue or concentration, or may determine whether the driver is dozing off.
 The microcomputer 12051 calculates control target values for the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation for the vehicle, following travel based on inter-vehicle distance, constant-speed travel, vehicle collision warning, and vehicle lane departure warning.
 The microcomputer 12051 can also perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without relying on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 The microcomputer 12051 can also output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to the occupants of the vehicle or to the outside of the vehicle. In the example of FIG. 8, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 FIG. 9 is a diagram illustrating an example of the installation positions of the imaging unit 12031.
 In FIG. 9, the vehicle 12100 includes imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided at the upper part of the windshield mainly acquire images ahead of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images behind the vehicle 12100. The forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
 FIG. 9 also shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above is obtained.
 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object on the traveling path of the vehicle 12100 that is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set an inter-vehicle distance to be maintained from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control aimed at automated driving, in which the vehicle travels autonomously without relying on the driver's operation, can be performed.
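The preceding-vehicle extraction described above can be sketched as follows. This is a hypothetical illustration, not the disclosed implementation: the object structure, the way the traveling path is represented, and the frame interval are all assumptions introduced for the example.

```python
# Hypothetical sketch of preceding-vehicle selection from per-object
# distance information and its temporal change (relative speed).
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float        # current distance from the ego vehicle
    prev_distance_m: float   # distance one frame earlier
    on_travel_path: bool     # whether the object lies on the traveling path

def relative_speed_mps(obj: TrackedObject, frame_dt_s: float) -> float:
    """Temporal change of distance (positive = pulling away)."""
    return (obj.distance_m - obj.prev_distance_m) / frame_dt_s

def select_preceding_vehicle(objects, ego_speed_mps, frame_dt_s):
    """Return the nearest on-path object moving in substantially the
    same direction at 0 km/h or more, or None if there is none."""
    candidates = []
    for obj in objects:
        if not obj.on_travel_path:
            continue
        # The object's own speed is the ego speed plus its relative speed.
        obj_speed = ego_speed_mps + relative_speed_mps(obj, frame_dt_s)
        if obj_speed >= 0.0:  # traveling in the same direction as the ego vehicle
            candidates.append(obj)
    return min(candidates, key=lambda o: o.distance_m, default=None)
```

In practice the distances would come from the stereo or phase-difference pixels of the imaging units, and path membership from lane detection; both are simplified to fields here.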
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify and extract three-dimensional object data concerning three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, and use the data for automatic obstacle avoidance. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of determining whether an object is a pedestrian by performing pattern matching on the series of feature points representing the contour of the object. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to the microcomputer 12051. Specifically, the signal processing device 100 can be applied to the microcomputer 12051. By applying the technology according to the present disclosure to the microcomputer 12051, it becomes possible, when obtaining an HDR image, to shorten the processing time without degrading dynamic range performance, and to flexibly adjust the trade-off between image quality and processing time.
 <3. Summary>
 As described above, according to the embodiment of the present disclosure, a signal processing device 100 is provided that can reduce the amount of computation when generating an HDR image by performing fewer motion detections than the full number of synthesis steps, while avoiding harmful effects in the HDR image such as fatal blur or a significant S/N drop in moving objects.
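The core idea summarized above, N-1 pairwise synthesis steps over N differently exposed frames with at most N-2 motion adaptation runs, can be sketched as follows. The merge and adaptation functions here are crude placeholders: the embodiment's actual synthesis-ratio generation and motion adaptation processing are not reproduced.

```python
# Illustrative sketch only: frames are lists of pixel values already
# normalized to a common radiance scale (an assumption of this example).

def motion_adapt(base, other):
    """Placeholder motion adaptation: where the two frames disagree
    strongly, keep the base frame's value (a crude ghost suppression)."""
    return [b if abs(b - o) > 0.5 else o for b, o in zip(base, other)]

def blend(a, b, ratio=0.5):
    """Placeholder synthesis using a fixed blend ratio."""
    return [ratio * x + (1.0 - ratio) * y for x, y in zip(a, b)]

def hdr_merge(frames, max_motion_adapt):
    """Merge N frames (longest exposure first) with N-1 pairwise
    synthesis steps, running motion adaptation at most
    `max_motion_adapt` (<= N-2) times."""
    n = len(frames)
    assert n >= 3 and max_motion_adapt <= n - 2
    result = frames[0]
    adapt_count = 0
    for frame in frames[1:]:
        if adapt_count < max_motion_adapt:
            frame = motion_adapt(result, frame)
            adapt_count += 1
        result = blend(result, frame)
    return result, adapt_count
```

The point of the sketch is purely structural: the loop performs N-1 merges, but the expensive adaptation step is skipped once the cap is reached.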
 Further, according to the embodiment of the present disclosure, the number of executions of motion adaptation processing can be controlled from the finally determined shutter time of each frame, providing a signal processing device 100 that can shorten the processing time while maintaining the image quality of the HDR image.
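A minimal sketch of this shutter-time-based control follows, under the assumptions that the cascade merges frames longest exposure first, that one decision is produced per synthesis step, and that the threshold value is illustrative:

```python
def plan_motion_adaptation(exposure_times_s, threshold_s):
    """Decide, per synthesis step, whether to run motion adaptation:
    only when the newly merged frame's exposure time is at or above
    the threshold, and never more than N-2 times in total."""
    ordered = sorted(exposure_times_s, reverse=True)  # longest first
    cap = len(exposure_times_s) - 2                   # N-2 cap
    flags, used = [], 0
    for t in ordered[1:]:                             # one flag per merge step
        run = t >= threshold_s and used < cap
        flags.append(run)
        used += run
    return flags
```

Short exposures contribute little motion blur, so skipping adaptation for them (as the flags do) is one plausible reading of the shutter-time criterion; the exact rule in the embodiment may differ.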
 Further, according to the embodiment of the present disclosure, the number of executions of motion adaptation processing can be controlled within a range in which the total time of the synthesis processing does not exceed a predetermined value, providing a signal processing device 100 in which the image quality of an HDR image containing moving objects is automatically adjusted while the predetermined processing time is observed.
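One way to realize this time-budget control can be sketched as below, assuming constant per-step costs for a synthesis step and a motion adaptation step (the real costs and their measurement are not disclosed here):

```python
def max_adaptations_within_budget(n_frames, t_merge_ms, t_adapt_ms, budget_ms):
    """Largest number of motion adaptation runs, capped at N-2, such
    that (N-1) synthesis steps plus those runs fit within the budget.
    All costs are treated as constants for this sketch."""
    base = (n_frames - 1) * t_merge_ms   # unavoidable cost of the merges
    if base > budget_ms:
        return 0                         # no room for any adaptation
    spare = budget_ms - base
    return min(n_frames - 2, spare // t_adapt_ms)
```

The cap of N-2 keeps the result consistent with the upper bound stated in the embodiment even when the budget is generous.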
 Further, according to the embodiment of the present disclosure, the number of executions of motion adaptation processing can be controlled from the result of motion detection, providing a signal processing device 100 that can shorten the processing time while maintaining the image quality of an HDR image containing moving objects.
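A hedged sketch of gating motion adaptation on the detected motion magnitude follows; the pixel-difference detector and both thresholds are stand-ins for the embodiment's motion detection processing:

```python
def detect_motion(frame_a, frame_b, diff_threshold):
    """Crude motion detection: the fraction of pixels whose
    difference between the two frames exceeds a threshold."""
    moving = sum(1 for a, b in zip(frame_a, frame_b)
                 if abs(a - b) > diff_threshold)
    return moving / len(frame_a)

def needs_motion_adaptation(frame_a, frame_b,
                            diff_threshold=0.1, area_threshold=0.05):
    """Run motion adaptation for this pair only if the detected
    motion covers enough of the frame to matter (thresholds are
    assumptions of this sketch)."""
    return detect_motion(frame_a, frame_b, diff_threshold) >= area_threshold
```

Pairs with negligible detected motion skip the costly adaptation step, which is how the detection result shortens processing time without visibly degrading moving objects.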
 Each step in the processing executed by each device in this specification does not necessarily have to be processed in time series in the order described in the sequence diagrams or flowcharts. For example, each step in the processing executed by each device may be processed in an order different from the order described in the flowcharts, or may be processed in parallel.
 It is also possible to create a computer program for causing hardware such as the CPU, ROM, and RAM built into each device to exhibit functions equivalent to the configuration of each device described above. A storage medium storing the computer program can also be provided. Furthermore, by implementing each functional block shown in the functional block diagrams in hardware, the series of processes can also be realized in hardware.
 The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure may exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 The following configurations also belong to the technical scope of the present disclosure.
(1)
 A signal processing device including:
 a synthesis processing unit that performs synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and
 a motion adaptation processing unit that performs motion adaptation processing during a single synthesis in which the synthesis processing unit uses two of the group of images,
 wherein the number of motion adaptation processes performed by the motion adaptation processing unit when the synthesis processing unit performs the N-1 synthesis processes is N-2 or less.
(2)
 The signal processing device according to (1), wherein the motion adaptation processing unit performs motion adaptation processing only on images whose exposure time, among the images subject to synthesis processing in the synthesis processing unit, is equal to or greater than a predetermined threshold.
(3)
 The signal processing device according to (1), wherein the motion adaptation processing unit performs motion adaptation processing the maximum number of times such that the execution time of the synthesis processing in the synthesis processing unit does not exceed a predetermined threshold.
(4)
 The signal processing device according to any one of (1) to (3), further including a motion detection processing unit that executes motion detection processing on the two images to be synthesized by the synthesis processing unit,
 wherein the motion adaptation processing unit determines whether to perform motion adaptation processing on the two images according to the magnitude of motion between the two images detected by the motion detection processing.
(5)
 The signal processing device according to any one of (1) to (4), wherein the synthesis processing unit performs, as the synthesis processing, processing for generating a synthesis ratio of the two images to be synthesized.
(6)
 The signal processing device according to any one of (1) to (4), wherein the synthesis processing unit performs, as the synthesis processing, synthesis of the two images based on the synthesis ratio of the two images to be synthesized.
(7)
 The signal processing device according to any one of (1) to (6), wherein the synthesis processing unit performs the at least N-1 synthesis processes in order starting from the image with the longest exposure time.
(8)
 A signal processing method including, performed by a processor:
 performing synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and
 performing motion adaptation processing during a single synthesis using two of the group of images,
 wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
(9)
 A computer program for causing a computer to execute:
 performing synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and
 performing motion adaptation processing during a single synthesis using two of the group of images,
 wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
10: Image sensor
11: Long accumulation pixel
12: Medium accumulation pixel
13: Short accumulation pixel
14: Ultra-short accumulation pixel
100: Signal processing device

Claims (9)

  1.  A signal processing device comprising:
      a synthesis processing unit that performs synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and
      a motion adaptation processing unit that performs motion adaptation processing during a single synthesis in which the synthesis processing unit uses two of the group of images,
      wherein the number of motion adaptation processes performed by the motion adaptation processing unit when the synthesis processing unit performs the N-1 synthesis processes is N-2 or less.
  2.  The signal processing device according to claim 1, wherein the motion adaptation processing unit performs motion adaptation processing only on images whose exposure time, among the images subject to synthesis processing in the synthesis processing unit, is equal to or greater than a predetermined threshold.
  3.  The signal processing device according to claim 1, wherein the motion adaptation processing unit performs motion adaptation processing the maximum number of times such that the execution time of the synthesis processing in the synthesis processing unit does not exceed a predetermined threshold.
  4.  The signal processing device according to claim 1, further comprising a motion detection processing unit that executes motion detection processing on the two images to be synthesized by the synthesis processing unit,
      wherein the motion adaptation processing unit determines whether to perform motion adaptation processing on the two images according to the magnitude of motion between the two images detected by the motion detection processing.
  5.  The signal processing device according to claim 1, wherein the synthesis processing unit performs, as the synthesis processing, processing for generating a synthesis ratio of the two images to be synthesized.
  6.  The signal processing device according to claim 1, wherein the synthesis processing unit performs, as the synthesis processing, synthesis of the two images based on the synthesis ratio of the two images to be synthesized.
  7.  The signal processing device according to claim 1, wherein the synthesis processing unit performs the at least N-1 synthesis processes in order starting from the image with the longest exposure time.
  8.  A signal processing method comprising, performed by a processor:
      performing synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and
      performing motion adaptation processing during a single synthesis using two of the group of images,
      wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
  9.  A computer program for causing a computer to execute:
      performing synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having mutually different exposure times; and
      performing motion adaptation processing during a single synthesis using two of the group of images,
      wherein the number of motion adaptation processes when performing the N-1 synthesis processes is N-2 or less.
PCT/JP2018/014210 2017-05-29 2018-04-03 Signal processing device, signal processing method and computer program WO2018220993A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017105680A JP2018201158A (en) 2017-05-29 2017-05-29 Signal processing apparatus, signal processing method, and computer program
JP2017-105680 2017-05-29

Publications (1)

Publication Number Publication Date
WO2018220993A1 true WO2018220993A1 (en) 2018-12-06

Family

ID=64454728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/014210 WO2018220993A1 (en) 2017-05-29 2018-04-03 Signal processing device, signal processing method and computer program

Country Status (2)

Country Link
JP (1) JP2018201158A (en)
WO (1) WO2018220993A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000050151A (en) * 1998-07-28 2000-02-18 Olympus Optical Co Ltd Image pickup device
JP2001333420A (en) * 2000-05-22 2001-11-30 Hitachi Ltd Image supervisory method and device
JP2009071408A (en) * 2007-09-11 2009-04-02 Mitsubishi Electric Corp Image processing apparatus, and image processing method
JP2010239610A (en) * 2009-03-13 2010-10-21 Omron Corp Image processing device and image processing method
JP2015142342A (en) * 2014-01-30 2015-08-03 オリンパス株式会社 Imaging apparatus, image generation method and image generation program


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110636227A (en) * 2019-09-24 2019-12-31 合肥富煌君达高科信息技术有限公司 High dynamic range HDR image synthesis method and high-speed camera integrating same
CN110636227B (en) * 2019-09-24 2021-09-10 合肥富煌君达高科信息技术有限公司 High dynamic range HDR image synthesis method and high-speed camera integrating same

Also Published As

Publication number Publication date
JP2018201158A (en) 2018-12-20

Similar Documents

Publication Publication Date Title
US10432847B2 (en) Signal processing apparatus and imaging apparatus
US11082626B2 (en) Image processing device, imaging device, and image processing method
WO2017175492A1 (en) Image processing device, image processing method, computer program and electronic apparatus
WO2020230660A1 (en) Image recognition device, solid-state imaging device, and image recognition method
US20220161654A1 (en) State detection device, state detection system, and state detection method
US11553117B2 (en) Image pickup control apparatus, image pickup apparatus, control method for image pickup control apparatus, and non-transitory computer readable medium
WO2017195459A1 (en) Imaging device and imaging method
WO2018008426A1 (en) Signal processing device and method, and imaging device
WO2017169233A1 (en) Imaging processing device, imaging processing method, computer program and electronic device
US11025828B2 (en) Imaging control apparatus, imaging control method, and electronic device
WO2021065494A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
WO2017149964A1 (en) Image processing device, image processing method, computer program, and electronic device
WO2018220993A1 (en) Signal processing device, signal processing method and computer program
WO2020209079A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
US20200402206A1 (en) Image processing device, image processing method, and program
WO2021065500A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
WO2021065495A1 (en) Ranging sensor, signal processing method, and ranging module
US20210217146A1 (en) Image processing apparatus and image processing method
US10999488B2 (en) Control device, imaging device, and control method
WO2018142969A1 (en) Display control device and method, and program
WO2022249562A1 (en) Signal processing device, method, and program
WO2022219874A1 (en) Signal processing device and method, and program
WO2020137503A1 (en) Image processing device
WO2023210197A1 (en) Imaging device and signal processing method
WO2021124921A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18810811

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18810811

Country of ref document: EP

Kind code of ref document: A1