WO2012137696A1 - Vehicle image processing apparatus - Google Patents
Vehicle image processing apparatus
- Publication number
- WO2012137696A1 (PCT/JP2012/058811; JP2012058811W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- exposure control
- imaging unit
- exposure
- captured
- Prior art date
Classifications
- H04N13/20 — Image signal generators
- H04N13/239 — Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/25 — Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
- H04N23/45 — Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
- H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
Definitions
- The present invention relates to a vehicular image processing apparatus that processes captured images to detect three-dimensional objects, road installations, and lights.
- The present invention has been made in view of the above points, and an object of the present invention is to provide a vehicle image processing apparatus that has a large image dynamic range and can reliably detect road installations, such as white lines, and lights.
- The image processing apparatus for a vehicle includes a first imaging unit, a second imaging unit, a switching unit that switches the exposure control of the first imaging unit and the second imaging unit between exposure control for road installation/light recognition and exposure control for three-dimensional object recognition, and a detection unit that detects the road installation, the light, or the three-dimensional object from images captured by the first imaging unit and the second imaging unit.
- In the exposure control for road installation/light recognition, the exposure of the first imaging unit and the exposure of the second imaging unit differ from each other.
- When the vehicle image processing apparatus of the present invention detects road installations and lights, the exposure control of both the first imaging unit and the second imaging unit is set to the road installation/light recognition exposure control.
- The exposure of the first imaging unit at that time differs from the exposure of the second imaging unit.
- Taken together, the image captured by the first imaging unit and the image captured by the second imaging unit therefore cover a larger dynamic range than an image captured by only one imaging unit.
- As a result, it is unlikely that road installations and lights cannot be detected because the dynamic range of the image is insufficient.
- At least a part of the dynamic range of the first imaging unit and the dynamic range of the second imaging unit preferably overlap. This prevents a band of brightness in between that neither imaging unit can capture.
- the upper limit of the dynamic range of the first imaging unit can be matched with the lower limit of the dynamic range of the second imaging unit.
- the lower limit of the dynamic range of the first imaging unit and the upper limit of the dynamic range of the second imaging unit can be matched.
- the dynamic range of the first imaging unit and the dynamic range of the second imaging unit may partially overlap.
- The detection unit can synthesize the images captured by the first imaging unit and the second imaging unit while the road installation/light recognition exposure control is being executed, and detect the road installation or the light from the synthesized image. Since the dynamic range of the synthesized image is larger than that of either image before synthesis (the image captured by the first imaging unit or the second imaging unit alone), using the synthesized image makes it unlikely that road installations and lights cannot be detected because the dynamic range is insufficient.
- Alternatively, the detection unit can select the image with the higher contrast from the image captured by the first imaging unit and the image captured by the second imaging unit while the road installation/light recognition exposure control is being executed, and detect the road installation or the light from the selected image. This likewise makes it unlikely that road installations and lights cannot be detected because the dynamic range of the image is insufficient.
- the road installation / light recognition exposure control may include two or more types of control with different exposure conditions.
- Examples of exposure control for road installation / light recognition include exposure control for lane (white line) detection, sign detection, signal detection, and light detection.
- FIG. 1 is a block diagram illustrating the configuration of the stereo image sensor 1.
- FIG. 2 is a flowchart illustrating the overall processing executed by the stereo image sensor 1.
- FIG. 3 is a flowchart showing the exposure control of the right camera 3.
- FIG. 4 is a flowchart showing the exposure control of the left camera 5.
- FIG. 5 is an explanatory drawing showing the transitions of the type of exposure control and of the brightness of the right camera 3 and the left camera 5.
- The stereo image sensor 1 is an in-vehicle device mounted on a vehicle and includes a right camera (first imaging unit) 3, a left camera (second imaging unit) 5, and a CPU (switching unit, detection unit) 7.
- Each of the right camera 3 and the left camera 5 includes a photoelectric conversion element (not shown) such as a CCD or a CMOS, and can capture the front of the vehicle.
- the right camera 3 and the left camera 5 can control exposure by changing the exposure time or the gain in the output signal of the photoelectric conversion element.
- The images captured by the right camera 3 and the left camera 5 are 8-bit data.
- The CPU 7 controls the right camera 3 and the left camera 5 (including exposure control). The CPU 7 also acquires the images captured by the right camera 3 and the left camera 5 and detects three-dimensional objects, road installations, and lights from those images. The processing executed by the CPU 7 is described later.
- The CPU 7 outputs the detection results for three-dimensional objects, road installations, and lights to the vehicle control device 9 and the alarm device 11 via CAN (an in-vehicle communication network).
- the vehicle control device 9 executes known processes such as collision avoidance and lane keeping based on the output. Further, the alarm device 11 issues a warning of collision or lane departure based on the output from the stereo image sensor 1.
- In step 10, exposure control of the right camera 3 and the left camera 5 is performed.
- The exposure control of the left camera 5 will be described based on the flowchart of FIG. 4.
- In step 110, the frame No. of the most recently captured image is acquired, and X, the remainder (0, 1, or 2) when the frame No. is divided by 3, is calculated.
- The frame No. is a number assigned to each image (frame) captured by the left camera 5.
- The frame No. starts at 1 and increases by 1. For example, when the left camera 5 has captured n images, the frame Nos. attached to the n images (frames) are 1, 2, 3, 4, 5, ..., n.
- The value of X is, for example, 1 when the frame No. of the most recently captured image is 1, 4, 7, ...; 2 when it is 2, 5, 8, ...; and 0 when it is 3, 6, 9, ....
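The same remainder X later drives the dispatch in step 30 of the main flow (0: three-dimensional object detection, 1: processing of the frames captured under monocular controls C/A, 2: processing of the frames captured under monocular controls D/B). A minimal sketch of this round-robin dispatch follows; the mode names are illustrative assumptions, not taken from the patent:

```python
def dispatch(frame_no: int) -> str:
    """Round-robin dispatch on the remainder of the most recent frame No.

    Mirrors step 30 of the first embodiment's flowchart; names are
    assumptions for illustration only.
    """
    x = frame_no % 3
    if x == 0:
        return "three_dimensional_object_detection"  # stereo pair, 3D exposure
    elif x == 1:
        return "lane_detection_from_C_and_A"         # right: C, left: A
    return "sign_detection_from_D_and_B"             # right: D, left: B

print([dispatch(n) for n in range(1, 7)])
```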
- The process then proceeds to step 120, and the three-dimensional object exposure control is set for the left camera 5.
- This three-dimensional object exposure control is exposure control suitable for a three-dimensional object detection process described later.
- In step 140, monocular exposure control (one type of road installation/light recognition exposure control) B is set for the left camera 5.
- This monocular exposure control B is control for setting the exposure of the left camera 5 to an exposure suitable for recognizing a sign.
- In monocular exposure control B, the brightness of the image is represented by β × 2^0. This β is a value different from α.
- In step 230, monocular exposure control (one type of road installation/light recognition exposure control) C is set for the right camera 3.
- This monocular exposure control C is a control for setting the exposure of the right camera 3 to an exposure suitable for recognizing a lane (white line) on the road.
- In monocular exposure control C, the brightness of the image is represented by α × 2^8, which is 256 times the brightness in monocular exposure control A (α × 2^0).
- In step 240, monocular exposure control (one type of road installation/light recognition exposure control) D is set for the right camera 3.
- This monocular exposure control D is a control for setting the exposure of the right camera 3 to an exposure suitable for recognizing a sign.
- In monocular exposure control D, the brightness of the image is represented by β × 2^8, which is 256 times the brightness in monocular exposure control B (β × 2^0).
- In step 30, it is determined whether X, calculated in the immediately preceding steps 110 and 210, is 0, 1, or 2.
- If X is 0, the process proceeds to step 40, and a three-dimensional object detection process is performed.
- The case where X is 0 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the three-dimensional object exposure control in steps 120 and 220, respectively, and imaging was performed under those conditions.
- In the three-dimensional object detection process, the distance to the imaged object is calculated by associating the same points of the object in the two images and obtaining the shift amount (parallax) of the associated (corresponding) points.
- Since the imaged object is in front of the cameras, it appears shifted in the horizontal direction when the image from the right camera 3 and the image from the left camera 5 are superimposed. The position where the imaged objects overlap most is found while shifting one image pixel by pixel; the number of pixels shifted at this point is n.
- With the focal length of the lens f, the distance between the optical axes m, and the pixel pitch d, the distance to the object is f × m / (n × d).
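As a worked example of this triangulation relation (the numeric values below are illustrative assumptions, not taken from the patent):

```python
def stereo_distance(f_mm: float, m_mm: float, n_px: int, d_mm: float) -> float:
    """Pinhole stereo range: distance = f * m / (n * d), in millimetres.

    f_mm: lens focal length, m_mm: distance between optical axes (baseline),
    n_px: disparity in pixels, d_mm: pixel pitch. All values are assumptions.
    """
    if n_px <= 0:
        raise ValueError("disparity must be positive")
    return (f_mm * m_mm) / (n_px * d_mm)

# e.g. f = 8 mm, baseline = 350 mm, disparity = 20 px, pitch = 0.006 mm:
# (8 * 350) / (20 * 0.006) = 23333 mm, i.e. about 23.3 m ahead.
print(stereo_distance(8.0, 350.0, 20, 0.006) / 1000.0, "m")
```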
- The case where it is determined in step 30 that X is 1 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls C and A in steps 230 and 130, respectively, and imaging was performed under those conditions.
- In step 60, the image captured by the right camera 3 (captured under monocular exposure control C) and the image captured by the left camera 5 (captured under monocular exposure control A) are synthesized to create a composite image P.
- The composite image P is obtained by summing, pixel by pixel, the pixel values of the image captured by the right camera 3 and the image captured by the left camera 5. That is, the pixel value at each pixel of the composite image P is the sum of the pixel values of the corresponding pixels in the two captured images.
- the image captured by the right camera 3 and the image captured by the left camera 5 are each 8-bit data.
- The brightness of the image captured by the right camera 3 is 256 times the brightness of the image captured by the left camera 5. Each pixel value of the image captured by the right camera 3 is therefore multiplied by 256 before the summation.
- The composite image P synthesized as described above becomes 16-bit data, and its dynamic range is 256 times that of the image captured by the right camera 3 or the left camera 5 alone.
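A minimal sketch of this synthesis, assuming the two 8-bit frames are already aligned pixel for pixel:

```python
import numpy as np

def compose_p(img_right: np.ndarray, img_left: np.ndarray) -> np.ndarray:
    """Composite image P: the brighter-exposure frame (right camera, 256x
    brightness) is scaled by 256 and summed with the left frame, giving
    16-bit data whose dynamic range is 256 times that of either input.
    Maximum sum is 255 * 256 + 255 = 65535, so it fits in uint16."""
    assert img_right.dtype == np.uint8 and img_left.dtype == np.uint8
    return img_right.astype(np.uint16) * 256 + img_left.astype(np.uint16)
```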
- After the completion of step 70, the process proceeds to step 50, and the frame No. is incremented by 1.
- The case where it is determined in step 30 that X is 2 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls D and B in steps 240 and 140, respectively, and imaging was performed under those conditions.
- In step 80, the image captured by the right camera 3 (captured under monocular exposure control D) and the image captured by the left camera 5 (captured under monocular exposure control B) are synthesized to create a composite image Q.
- The composite image Q is obtained by summing, pixel by pixel, the pixel values of the image captured by the right camera 3 and the image captured by the left camera 5. That is, the pixel value at each pixel of the composite image Q is the sum of the pixel values of the corresponding pixels in the two captured images.
- In step 90, processing for detecting a sign from the composite image Q synthesized in step 80 is executed. Specifically, points whose luminance change amount is equal to or greater than a predetermined value (edge points) are searched for in the composite image Q, and an image of the edge points (edge image) is created. In the edge image, a sign is then detected from the shape of the regions formed by the edge points by a known technique such as matching. Note that "monocular application 2" in step 90 of FIG. 2 refers to the sign detection application.
- After step 90, the process proceeds to step 50, and the frame No. is incremented by 1.
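A sketch of the edge-point extraction that precedes the matching step; the gradient operator and the threshold value are assumptions for illustration, standing in for whatever the patent's "known technique" actually uses:

```python
import numpy as np

def edge_image(img: np.ndarray, threshold: int = 32) -> np.ndarray:
    """Mark pixels whose local luminance change is at least `threshold`.

    A simple forward-difference gradient is used here as a stand-in
    edge operator; the result is a binary map of edge points.
    """
    g = img.astype(np.int32)
    gx = np.abs(np.diff(g, axis=1, prepend=0))
    gy = np.abs(np.diff(g, axis=0, prepend=0))
    return ((gx + gy) >= threshold).astype(np.uint8)  # 1 = edge point
```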
- FIG. 5 shows how the types of exposure control and the brightness of the right camera 3 and the left camera 5 change as the frame number increases.
- "bright 1", "bright 2", "dark 1", and "dark 2" correspond to α × 2^8, β × 2^8, α × 2^0, and β × 2^0, respectively.
- As described above, the stereo image sensor 1 synthesizes an image captured by the right camera 3 and an image captured by the left camera 5 to create composite images P and Q having a large dynamic range, and detects road installations (for example, lanes and signs) and lights (for example, vehicle headlights and taillights) from them. It is therefore unlikely that road installations and lights cannot be detected because the dynamic range of the image is insufficient.
- <Second Embodiment> The configuration of the stereo image sensor 1 is the same as that of the first embodiment. The stereo image sensor 1 repeatedly executes the processing shown in the flowchart of FIG. 6 every 33 msec.
- In step 310, exposure control of the right camera 3 and the left camera 5 is performed.
- the exposure control is the same as in the first embodiment.
- In step 320, the front of the vehicle is imaged by the right camera 3 and the left camera 5, and the images are acquired. Note that the right camera 3 and the left camera 5 capture images simultaneously.
- In step 330, the frame No. of the most recently captured image is acquired, and it is determined whether X, the remainder when the frame No. is divided by 3, is 0, 1, or 2. If X is 0, the process proceeds to step 340, and a three-dimensional object detection process is executed.
- The case where X is 0 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the three-dimensional object exposure control and imaging was performed under that condition.
- the contents of the three-dimensional object detection process are the same as those in the first embodiment.
- In step 350, the frame No. is incremented by 1.
- The case where it is determined in step 330 that X is 1 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls C and A, respectively, and imaging was performed under those conditions. In this case the process proceeds to step 360.
- In step 360, the image with the higher contrast is selected from the image captured by the right camera 3 (captured under monocular exposure control C) and the image captured by the left camera 5 (captured under monocular exposure control A).
- Specifically, the selection is performed as follows. In each of the two images, points whose luminance change amount is equal to or greater than a predetermined value (edge points) are searched for, and an image of the edge points (edge image) is created. The two edge images are then compared to determine which contains more edge points, and the image with more edge points is selected as the image with the higher contrast.
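The two-way selection rule, sketched under the same assumptions as the edge_image example above (the gradient operator and threshold are illustrative):

```python
import numpy as np

def edge_count(img: np.ndarray, threshold: int = 32) -> int:
    """Count pixels whose local luminance change meets the threshold."""
    g = img.astype(np.int32)
    grad = np.abs(np.diff(g, axis=1, prepend=0)) + np.abs(np.diff(g, axis=0, prepend=0))
    return int((grad >= threshold).sum())

def pick_higher_contrast(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Return the frame whose edge image contains more edge points."""
    return img_a if edge_count(img_a) >= edge_count(img_b) else img_b
```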
- In step 370, processing for detecting a lane (white line) from the image selected in step 360 is executed. Specifically, in the edge image of the selected image, a lane (white line) is detected from the shape of the regions formed by the edge points by a known technique such as matching.
- After step 370, the process proceeds to step 350, and the frame No. is incremented by 1.
- The case where it is determined in step 330 that X is 2 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls D and B, respectively, and imaging was performed under those conditions. In this case the process proceeds to step 380.
- In step 380, the image with the higher contrast is selected from the image captured by the right camera 3 (captured under monocular exposure control D) and the image captured by the left camera 5 (captured under monocular exposure control B). The selection is performed in the same manner as in step 360, by comparing the numbers of edge points in the two edge images.
- In step 390, processing for detecting a sign from the image selected in step 380 is executed. Specifically, in the edge image of the selected image, a sign is detected from the shape of the regions formed by the edge points by a known technique such as matching.
- After step 390, the process proceeds to step 350, and the frame No. is incremented by 1.
- As described above, the stereo image sensor 1 selects the image with the higher contrast (the image without so-called whiteout or blackout) from the image captured by the right camera 3 and the image captured by the left camera 5, and detects road installations and lights from the selected image. It is therefore unlikely that road installations and lights cannot be detected because the dynamic range of the image is insufficient.
- ⁇ Third Embodiment> 1. Configuration of Stereo Image Sensor 1
- the configuration of the stereo image sensor 1 is the same as that of the first embodiment.
- the stereo image sensor 1 repeatedly executes the processing shown in the flowchart of FIG. 7 every 33 msec.
- In step 410, exposure control of the right camera 3 and the left camera 5 is performed.
- The exposure control of the left camera 5 will be described based on the flowchart of FIG. 8.
- In step 510, the frame No. of the most recently captured image is acquired, and X, the remainder (0, 1, or 2) when the frame No. is divided by 3, is calculated.
- The meaning of the frame No. is the same as in the first embodiment.
- In step 520, the three-dimensional object exposure control is set for the left camera 5.
- the three-dimensional object exposure control is exposure control suitable for the three-dimensional object detection process.
- In step 530, monocular exposure control (one type of road installation/light recognition exposure control) E is set for the left camera 5.
- This monocular exposure control E is control for setting the exposure of the left camera 5 to an exposure suitable for recognizing a lane (white line) on the road.
- In monocular exposure control E, the brightness of the image is represented by α × 2^0.
- the process proceeds to step 540, and the monocular exposure control (one type of road installation / light recognition exposure control) F is set for the left camera 5.
- This monocular exposure control F is a control for setting the exposure of the left camera 5 to an exposure suitable for recognizing a lane (white line) on the road.
- In monocular exposure control F, the brightness of the image is represented by α × 2^16, which is 2^16 times the brightness (α × 2^0) in monocular exposure control E.
- In step 610, the frame No. of the most recently captured image is acquired, and X, the remainder (0, 1, or 2) when the frame No. is divided by 3, is calculated. Since the right camera 3 and the left camera 5 always capture images simultaneously, the frame No. of the image most recently captured by the right camera 3 is the same as the frame No. of the image most recently captured by the left camera 5.
- the process proceeds to step 620, and the three-dimensional object exposure control is set for the right camera 3.
- the three-dimensional object exposure control is exposure control suitable for the three-dimensional object detection process.
- In step 630, monocular exposure control (one type of road installation/light recognition exposure control) G is set for the right camera 3. This monocular exposure control G sets the exposure of the right camera 3 to an exposure suitable for recognizing a lane (white line) on the road.
- In monocular exposure control G, the brightness of the image is represented by α × 2^8, which is 2^8 times the brightness (α × 2^0) in monocular exposure control E.
- the process proceeds to step 640 where the monocular exposure control (one type of road installation / light recognition exposure control) H is set for the right camera 3.
- This monocular exposure control H is a control for setting the exposure of the right camera 3 to an exposure suitable for recognizing a lane (white line) on the road.
- In monocular exposure control H, the brightness of the image is represented by α × 2^24, which is 2^24 times the brightness (α × 2^0) in monocular exposure control E.
- In step 420, the right camera 3 and the left camera 5 capture the front of the vehicle, and the images are acquired. Note that the right camera 3 and the left camera 5 capture images simultaneously.
- In step 430, it is determined whether X, calculated in the immediately preceding steps 510 and 610, is 0, 1, or 2. If X is 0, the process proceeds to step 440, and a three-dimensional object detection process is executed. Note that the case where X is 0 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the three-dimensional object exposure control in steps 520 and 620 and imaging was performed under those conditions. The contents of the three-dimensional object detection process are the same as those in the first embodiment.
- If it is determined in step 430 that X is 1, the process proceeds to step 450, and the frame No. is incremented by 1.
- The case where it is determined in step 430 that X is 2 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls H and F, respectively, and imaging was performed under those conditions. In this case the process proceeds to step 460.
- In step 460, the following four images are combined to create a composite image R:
- the image captured by the right camera 3 when X was most recently 1 (captured under monocular exposure control G);
- the image captured by the left camera 5 when X was most recently 1 (captured under monocular exposure control E);
- the image captured by the right camera 3 when X is 2 (in the immediately preceding step 420) (captured under monocular exposure control H);
- the image captured by the left camera 5 when X is 2 (in the immediately preceding step 420) (captured under monocular exposure control F).
- The composite image R is obtained by summing, pixel by pixel, the pixel values of these four images. That is, the pixel value at each pixel of the composite image R is the sum of the pixel values of the corresponding pixels in the four images.
- Each of the four images is 8-bit data.
- Relative to the image captured under monocular exposure control E, the brightness of the image captured under monocular exposure control G is 2^8 times,
- the brightness of the image captured under monocular exposure control F is 2^16 times,
- and the brightness of the image captured under monocular exposure control H is 2^24 times. Therefore, the pixel values of the images captured under controls G, F, and H are multiplied by 2^8, 2^16, and 2^24, respectively, before the summation.
- The composite image R thus becomes 32-bit data, and its dynamic range is 2^24 times that of an image captured by the right camera 3 or the left camera 5 alone.
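A minimal sketch of this four-frame synthesis, assuming aligned 8-bit frames and the scale factors given above (the keys E/G/F/H are the exposure control names):

```python
import numpy as np

# Scale factor per exposure control, following the brightness ladder above.
SCALES = {"E": 2**0, "G": 2**8, "F": 2**16, "H": 2**24}

def compose_r(frames):
    """Composite image R: pixel-wise sum of the four 8-bit frames after
    scaling each by its exposure factor. The maximum possible sum is
    255 * (1 + 2^8 + 2^16 + 2^24) = 2^32 - 1, so R fits in 32-bit data."""
    acc = np.zeros(frames["E"].shape, dtype=np.uint64)
    for name, img in frames.items():
        assert img.dtype == np.uint8
        acc += img.astype(np.uint64) * SCALES[name]
    return acc.astype(np.uint32)

# Usage with dummy 2x2 frames:
dummy = {k: np.full((2, 2), 255, dtype=np.uint8) for k in SCALES}
print(compose_r(dummy))  # every pixel = 4294967295 (2^32 - 1)
```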
- In step 470, processing for detecting a lane (white line) from the composite image R synthesized in step 460 is executed. Specifically, points whose luminance change amount is equal to or greater than a predetermined value (edge points) are searched for in the composite image R, and an image of the edge points (edge image) is created. In the edge image, a lane (white line) is detected from the shape of the regions formed by the edge points by a known technique such as matching.
- After step 470, the process proceeds to step 450, and the frame No. is incremented by 1.
- As described above, the stereo image sensor 1 combines the two images captured by the right camera 3 and the two images captured by the left camera 5 to create a composite image R having a large dynamic range, and detects road installations and lights from the composite image R. It is therefore unlikely that road installations and lights cannot be detected because the dynamic range of the image is insufficient.
- ⁇ Fourth Embodiment> Configuration of Stereo Image Sensor 1
- the configuration of the stereo image sensor 1 is the same as that of the first embodiment.
- the stereo image sensor 1 repeatedly executes the process shown in the flowchart of FIG. 10 every 33 msec.
- In step 710, exposure control of the right camera 3 and the left camera 5 is performed.
- the exposure control is the same as in the third embodiment.
- In step 720, the front of the vehicle is imaged by the right camera 3 and the left camera 5, and the images are acquired. Note that the right camera 3 and the left camera 5 capture images simultaneously.
- In step 730, the frame No. of the most recently captured image is acquired, and it is determined whether X, the remainder when the frame No. is divided by 3, is 0, 1, or 2. If X is 0, the process proceeds to step 740, and a three-dimensional object detection process is executed.
- the contents of the three-dimensional object detection process are the same as those in the first embodiment.
- In step 750, the frame No. is incremented by 1.
- The case where it is determined in step 730 that X is 1 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls G and E, respectively, and imaging was performed under those conditions.
- The case where it is determined in step 730 that X is 2 is a case where the exposure control of the right camera 3 and the left camera 5 was set to the monocular exposure controls H and F, respectively, and imaging was performed under those conditions. In this case the process proceeds to step 760.
- In step 760, the image having the highest contrast is selected from the following four images:
- the image captured by the right camera 3 when X was most recently 1 (captured under monocular exposure control G);
- the image captured by the left camera 5 when X was most recently 1 (captured under monocular exposure control E);
- the image captured by the right camera 3 when X is 2 (captured under monocular exposure control H);
- the image captured by the left camera 5 when X is 2 (captured under monocular exposure control F).
- The selection of the image having the highest contrast is performed as follows. In each of the four images, points whose luminance change amount is equal to or greater than a predetermined value (edge points) are searched for, and an image of the edge points (edge image) is created. The four edge images are then compared to determine which has the most edge points, and the image with the most edge points is selected as the image with the highest contrast.
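The four-way version of the selection rule; edge_count is the same illustrative helper sketched for the second embodiment, repeated here so the example stands alone:

```python
import numpy as np

def edge_count(img: np.ndarray, threshold: int = 32) -> int:
    """Count pixels whose local luminance change meets the threshold."""
    g = img.astype(np.int32)
    grad = np.abs(np.diff(g, axis=1, prepend=0)) + np.abs(np.diff(g, axis=0, prepend=0))
    return int((grad >= threshold).sum())

def pick_highest_contrast(frames):
    """Return the frame (of any number, here four) with the most edge points."""
    return max(frames, key=edge_count)
```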
- Each of the four images is 8-bit data. Relative to the image captured under monocular exposure control E, the brightness of the image captured under monocular exposure control G is 2^8 times, that under monocular exposure control F is 2^16 times, and that under monocular exposure control H is 2^24 times.
- In step 770, processing for detecting a lane (white line) from the image selected in step 760 is executed. Specifically, in the selected image, points whose luminance change amount is equal to or greater than a predetermined value (edge points) are searched for, and an image of the edge points (edge image) is created. In the edge image, a lane (white line) is detected from the shape of the regions formed by the edge points by a known technique such as matching.
- After step 770 ends, the process proceeds to step 750, and the frame No. is incremented by 1.
- As described above, the stereo image sensor 1 selects the image having the highest contrast from the two images captured by the right camera 3 and the two images captured by the left camera 5, and detects road installations and lights from the selected image. It is therefore unlikely that road installations and lights cannot be detected because the dynamic range of the image is insufficient.
- As a modification, a first road installation or light may be detected from the image captured by the right camera 3 (captured under monocular exposure control C),
- a second road installation or light may be detected from the image captured by the left camera 5 (captured under monocular exposure control A),
- a third road installation or light may be detected from the image captured by the right camera 3 (captured under monocular exposure control D),
- and a fourth road installation or light may be detected from the image captured by the left camera 5 (captured under monocular exposure control B).
- The first to fourth road installations and lights can be set arbitrarily, for example, from among white lines, signs, signals, and the lights of other vehicles.
- The number of images to be combined is not limited to two or four; any number may be used (for example, 3, 5, 6, 7, 8, ...).
- Likewise, the image may be selected from a set of a number of images other than two or four (for example, 3, 5, 6, 7, 8, ...).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Claims (6)
- 1. A vehicle image processing apparatus comprising:
a first imaging unit;
a second imaging unit;
a switching unit that switches exposure control of the first imaging unit and the second imaging unit between exposure control for road installation/light recognition and exposure control for three-dimensional object recognition; and
a detection unit that detects the road installation, the light, or the three-dimensional object from images captured by the first imaging unit and the second imaging unit,
wherein, in the exposure control for road installation/light recognition, the exposure of the first imaging unit differs from the exposure of the second imaging unit.
- 2. The vehicle image processing apparatus according to claim 1, wherein the first imaging unit and the second imaging unit capture images simultaneously while the exposure control for road installation/light recognition is being executed.
- 3. The vehicle image processing apparatus according to claim 1 or 2, wherein, in the exposure control for road installation/light recognition, the dynamic range of the first imaging unit and the dynamic range of the second imaging unit at least partially overlap.
- 4. The vehicle image processing apparatus according to any one of claims 1 to 3, wherein the detection unit synthesizes the images captured by the first imaging unit and the second imaging unit while the exposure control for road installation/light recognition is being executed, and detects the road installation or the light from the synthesized image.
- 5. The vehicle image processing apparatus according to any one of claims 1 to 3, wherein the detection unit selects the image with the higher contrast from the image captured by the first imaging unit and the image captured by the second imaging unit while the exposure control for road installation/light recognition is being executed, and detects the road installation or the light from the selected image.
- 6. The vehicle image processing apparatus according to any one of claims 1 to 5, wherein the exposure control for road installation/light recognition includes two or more types of control with different exposure conditions.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112012001606.8T DE112012001606T5 (de) | 2011-04-06 | 2012-04-02 | Bildverarbeitungsvorrichtung für ein Fahrzeug |
US14/110,066 US20140055572A1 (en) | 2011-04-06 | 2012-04-02 | Image processing apparatus for a vehicle |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-084565 | 2011-04-06 | ||
JP2011084565A JP2012221103A (ja) | 2011-04-06 | 2011-04-06 | 車両用画像処理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012137696A1 true WO2012137696A1 (ja) | 2012-10-11 |
Family
ID=46969096
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/058811 WO2012137696A1 (ja) | 2011-04-06 | 2012-04-02 | 車両用画像処理装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20140055572A1 (ja) |
JP (1) | JP2012221103A (ja) |
DE (1) | DE112012001606T5 (ja) |
WO (1) | WO2012137696A1 (ja) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5587930B2 (ja) * | 2012-03-09 | 2014-09-10 | 日立オートモティブシステムズ株式会社 | 距離算出装置及び距離算出方法 |
KR101353052B1 (ko) * | 2013-07-31 | 2014-01-20 | 주식회사 피엘케이 테크놀로지 | 교통표지판 인식을 위한 차량용 영상인식시스템 |
US9465444B1 (en) * | 2014-06-30 | 2016-10-11 | Amazon Technologies, Inc. | Object recognition for gesture tracking |
JP6416654B2 (ja) * | 2015-02-17 | 2018-10-31 | トヨタ自動車株式会社 | 白線検出装置 |
EP3314572B1 (en) * | 2015-06-26 | 2019-08-07 | Koninklijke Philips N.V. | Edge detection on images with correlated noise |
JPWO2017154827A1 (ja) * | 2016-03-11 | 2019-02-14 | 富士フイルム株式会社 | 撮像装置 |
US10623634B2 (en) | 2017-04-17 | 2020-04-14 | Intel Corporation | Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching |
EP3637758B1 (en) * | 2017-06-07 | 2024-09-04 | Hitachi Astemo, Ltd. | Image processing device |
JP7427594B2 (ja) * | 2018-08-22 | 2024-02-05 | 日立Astemo株式会社 | 画像処理装置 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11175702A (ja) * | 1997-12-15 | 1999-07-02 | Toyota Motor Corp | 車両用ライン検出装置及び路上ライン検出方法並びにプログラムを記録した媒体 |
JP2007096684A (ja) * | 2005-09-28 | 2007-04-12 | Fuji Heavy Ind Ltd | 車外環境認識装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006106522A2 (en) * | 2005-04-07 | 2006-10-12 | Visionsense Ltd. | Method for reconstructing a three- dimensional surface of an object |
US20100091119A1 (en) * | 2008-10-10 | 2010-04-15 | Lee Kang-Eui | Method and apparatus for creating high dynamic range image |
US9142026B2 (en) * | 2010-02-26 | 2015-09-22 | Thomson Licensing | Confidence map, method for generating the same and method for refining a disparity map |
WO2012084277A1 (en) * | 2010-12-22 | 2012-06-28 | Thomson Licensing | Apparatus and method for determining a disparity estimate |
AU2013305770A1 (en) * | 2012-08-21 | 2015-02-26 | Pelican Imaging Corporation | Systems and methods for parallax detection and correction in images captured using array cameras |
-
2011
- 2011-04-06 JP JP2011084565A patent/JP2012221103A/ja active Pending
-
2012
- 2012-04-02 US US14/110,066 patent/US20140055572A1/en not_active Abandoned
- 2012-04-02 WO PCT/JP2012/058811 patent/WO2012137696A1/ja active Application Filing
- 2012-04-02 DE DE112012001606.8T patent/DE112012001606T5/de not_active Withdrawn
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11175702A (ja) * | 1997-12-15 | 1999-07-02 | Toyota Motor Corp | 車両用ライン検出装置及び路上ライン検出方法並びにプログラムを記録した媒体 |
JP2007096684A (ja) * | 2005-09-28 | 2007-04-12 | Fuji Heavy Ind Ltd | 車外環境認識装置 |
Also Published As
Publication number | Publication date |
---|---|
JP2012221103A (ja) | 2012-11-12 |
US20140055572A1 (en) | 2014-02-27 |
DE112012001606T5 (de) | 2014-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2012137696A1 (ja) | 車両用画像処理装置 | |
JP5846872B2 (ja) | 画像処理装置 | |
EP2437494B1 (en) | Device for monitoring area around vehicle | |
US11676394B2 (en) | Processing device for conversion of images | |
US7986812B2 (en) | On-vehicle camera with two or more angles of view | |
US10142595B2 (en) | Driving assistance device and method of detecting vehicle adjacent thereto | |
WO2017134982A1 (ja) | 撮像装置 | |
US11516451B2 (en) | Imaging apparatus, imaging processing method, image processing device and imaging processing system | |
JP6723079B2 (ja) | 物体距離検出装置 | |
US10455159B2 (en) | Imaging setting changing apparatus, imaging system, and imaging setting changing method | |
JP6701327B2 (ja) | グレア検出方法及び装置 | |
JP2006322795A (ja) | 画像処理装置、画像処理方法および画像処理プログラム | |
JP4224449B2 (ja) | 画像抽出装置 | |
JP6266022B2 (ja) | 画像処理装置、警報装置、および画像処理方法 | |
KR101709009B1 (ko) | 어라운드 뷰 왜곡 보정 시스템 및 방법 | |
JP4797441B2 (ja) | 車両用画像処理装置 | |
KR101501678B1 (ko) | 노출 조절을 이용한 차량용 영상 촬영장치 및 그 방법 | |
JP6405765B2 (ja) | 撮像装置及び判定方法 | |
JP5310162B2 (ja) | 車両灯火判定装置 | |
KR101030210B1 (ko) | 자동차용 장애물 인식 시스템 및 그 방법 | |
JP4539400B2 (ja) | ステレオカメラの補正方法、ステレオカメラ補正装置 | |
KR101982091B1 (ko) | 서라운드 뷰 모니터링 시스템 | |
JP5017921B2 (ja) | 車両用画像処理装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12768586 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1120120016068 Country of ref document: DE Ref document number: 112012001606 Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14110066 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12768586 Country of ref document: EP Kind code of ref document: A1 |