WO2022254795A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2022254795A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional object
object information
information
image processing
Prior art date
Application number
PCT/JP2022/004704
Other languages
English (en)
Japanese (ja)
Inventor
啓佑 岩崎
寛人 三苫
Original Assignee
日立Astemo株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立Astemo株式会社
Priority to DE112022001328.1T priority Critical patent/DE112022001328T5/de
Priority to JP2023525380A priority patent/JPWO2022254795A1/ja
Publication of WO2022254795A1 publication Critical patent/WO2022254795A1/fr

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 - Measuring distances in line of sight; Optical rangefinders
    • G01C3/10 - Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument
    • G01C3/14 - Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument with binocular observation at a single point, e.g. stereoscopic type
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/593 - Depth or shape recovery from multiple images from stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145 - Illumination specially adapted for pattern recognition, e.g. using gratings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/72 - Combination of two or more compensation controls
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9323 - Alternative operation using light waves
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10141 - Special mode during image acquisition
    • G06T2207/10152 - Varying illumination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to an image processing device and an image processing method for recognizing the environment outside a vehicle based on images captured outside the vehicle.
  • When a three-dimensional object having a high-intensity light source (for example, a brake lamp during braking) is imaged, flare or halation can occur, in which the surroundings of the light source become bright and blurry in the captured image.
  • A known image processing apparatus images such a target without causing halation using a single camera. In that apparatus, the vehicle is extracted from the images acquired by the camera, and a region whose brightness is to be corrected is calculated. Within the calculated region, the portions having brightness that causes halation, such as a headlight portion, a road-surface reflection of the headlight, and the flare part of the headlight, are identified. The identified portions are corrected to a brightness that does not cause halation by multiplying them by a predetermined reduction rate, and the corrected region is superimposed on the image acquired by the camera and output.
  • An object of the present invention is to provide an image processing device that can accurately calculate the parallax information around a high-brightness light source even if flare occurs when imaging a three-dimensional object having the high-brightness light source, and that can thereby make good estimations about the three-dimensional object.
  • To achieve this object, an image processing device of the present invention includes: a camera that captures a first image with a first exposure amount and captures a second image with a second exposure amount smaller than the first exposure amount; a three-dimensional object extraction unit that extracts, from the first image, a first region in which a three-dimensional object exists and extracts, from the second image, a second region in which the three-dimensional object exists; a three-dimensional object information detection unit that detects first three-dimensional object information from the first region and detects second three-dimensional object information from the second region; and a three-dimensional object information integration unit that integrates the first three-dimensional object information and the second three-dimensional object information.
  • According to the image processing device of the present invention, even if flare occurs when imaging a three-dimensional object having a high-intensity light source, the parallax information around the high-intensity light source can be accurately calculated, and the position, speed, shape, type, and the like of the three-dimensional object can be estimated with high accuracy.
  • FIG. 1 is a hardware configuration diagram of a stereo camera device according to one embodiment.
  • A functional block diagram of the stereo camera device of the embodiment.
  • A processing flowchart of the stereo camera device of the embodiment.
  • A flowchart of the three-dimensional object recognition processing of a comparative example.
  • FIG. 5A is a flowchart of the three-dimensional object recognition processing according to the present invention.
  • FIG. 5B shows an example of the procedure for calculating the amount of flare light.
  • A functional block diagram showing details of the object recognition unit.
  • FIG. 6B is a functional block diagram showing details of the normal-shutter three-dimensional object processing unit.
  • FIG. 6C is a functional block diagram showing details of the low-exposure-shutter three-dimensional object processing unit.
  • A stereo camera device, which is an embodiment of the image processing device of the present invention, will be described below with reference to the drawings.
  • FIG. 1 is a hardware configuration diagram showing an outline of a stereo camera device 10 of this embodiment.
  • The stereo camera device 10 is an in-vehicle device that recognizes the environment outside the vehicle based on images captured outside the vehicle. The stereo camera device 10 then determines control policies, such as acceleration/deceleration assistance and steering assistance of the own vehicle, according to the recognized environment outside the vehicle.
  • The stereo camera device 10 includes cameras 11 (a left camera 11L and a right camera 11R), an image input interface 12, an image processing unit 13, an arithmetic processing unit 14, a storage unit 15, a CAN interface 16, and an abnormality monitoring unit 17.
  • The components from the image input interface 12 to the abnormality monitoring unit 17 are configured as one or more computer units interconnected via an internal bus, and the various functions such as the image processing unit 13 and the arithmetic processing unit 14 are realized by these units executing programs. Such well-known techniques are omitted from the following description of the present invention.
  • The camera 11 is a stereo camera composed of a left camera 11L and a right camera 11R, and is installed on the vehicle so as to capture images in front of the vehicle.
  • The imaging element of the left camera 11L alternately captures a first left image P1L with a normal exposure amount (hereinafter referred to as the "normal shutter") and a second left image P2L with an exposure amount smaller than normal (hereinafter referred to as the "low-exposure shutter"); the imaging element of the right camera 11R operates in the same way.
  • the "exposure amount” in this embodiment is a physical quantity obtained by multiplying the exposure time of the imaging element by the gain for amplifying the output of the imaging element.
  • the exposure amount of the "normal shutter” is, for example, an exposure amount obtained by multiplying the exposure time of 20 msec by a gain 16 times the low-exposure shutter gain G described later, and the exposure amount of the "low-exposure shutter” is, for example, The amount of exposure is obtained by multiplying the exposure time of 0.1 msec by the predetermined low-exposure shutter gain G, but each amount of exposure is not limited to the example.
  • In this embodiment, each camera alternately captures the first image P1 (P1L, P1R) with the normal shutter and the second image P2 (P2L, P2R) with the low-exposure shutter. However, the imaging of the second image P2 with the low-exposure shutter may be omitted, and the second image P2 may be captured with the low-exposure shutter only when the lamp size W1 (see FIG. 5B) in the first image P1 is determined to be equal to or larger than a predetermined threshold value Wth.
  • The image input interface 12 is an interface that controls imaging by the camera 11 and captures the captured images P (P1L, P2L, P1R, P2R). An image P captured through this interface is transmitted to the image processing unit 13 and the like through the internal bus.
  • The image processing unit 13 compares the left images PL (P1L, P2L) captured by the left camera 11L with the right images PR (P1R, P2R) captured by the right camera 11R, corrects the device-specific deviation due to the imaging element of each image, and performs correction such as noise interpolation.
  • The image processing unit 13 also calculates mutually corresponding portions between the left and right images having the same exposure amount, calculates parallax information I (distance information for each point on the image), and stores it in the storage unit 15. Specifically, the first parallax information I1 is calculated by comparing the first left image P1L and the first right image P1R captured at the same timing, and the second parallax information I2 is calculated by comparing the second left image P2L and the second right image P2R captured at the same timing.
  • The arithmetic processing unit 14 uses the images P and the parallax information I stored in the storage unit 15 to recognize various objects necessary for perceiving the environment around the vehicle.
  • The various objects recognized here include people, other vehicles, other obstacles, traffic lights, signs, tail lamps and headlights of other vehicles, and the like. Some of these recognition results and intermediate calculation results are recorded in the storage unit 15. Furthermore, the arithmetic processing unit 14 uses the object recognition results to determine the control policy of the host vehicle.
  • The storage unit 15 is a storage device such as a semiconductor memory, and stores the corrected images P and the parallax information I output from the image processing unit 13, as well as the object recognition results and the vehicle control policy output from the arithmetic processing unit 14.
  • The CAN interface 16 is an interface for transmitting the object recognition results obtained by the arithmetic processing unit 14 and the control policy of the own vehicle to an in-vehicle network CAN (Controller Area Network).
  • A control system (ECU, etc.) for controlling the driving system, braking system, steering system, and the like of the own vehicle is connected to the in-vehicle network CAN, so that driving support such as automatic braking and steering avoidance can be executed according to the recognized environment outside the vehicle.
  • The abnormality monitoring unit 17 monitors whether each unit in the stereo camera device 10 is operating abnormally and whether an error has occurred during data transfer, and is designed to prevent abnormal operation.
  • The left camera 11L alternately captures the first left image P1L with the normal shutter and the second left image P2L with the low-exposure shutter, and the right camera 11R likewise alternately captures the first right image P1R with the normal shutter and the second right image P2R with the low-exposure shutter.
  • The image correction unit 13a of the image processing unit 13 performs image correction processing on each image P (P1L, P2L, P1R, P2R) to absorb peculiarities specific to each imaging element.
  • The corrected images P are stored in the image buffer 15a in the storage unit 15.
  • The parallax calculation unit 13b of the image processing unit 13 collates the left and right images captured at the same timing (the first left image P1L and the first right image P1R, or the second left image P2L and the second right image P2R), and calculates parallax information I for each exposure amount. This calculation makes clear which point on the left image PL corresponds to which point on the right image PR, so the distance to the object at each image point can be obtained according to the principle of triangulation. The parallax information I (I1, I2) for each exposure amount obtained here is stored in the parallax buffer 15b of the storage unit 15.
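  • The patent does not spell out the triangulation itself; as a minimal sketch of the principle referred to here, the distance can be recovered from the disparity given the focal length and the baseline between the left and right cameras. The numeric values below are assumptions for illustration, not parameters from the patent.

```python
def distance_from_disparity(disparity_px: float,
                            focal_length_px: float,
                            baseline_m: float) -> float:
    """Triangulation: Z = f * B / d, where d is the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed camera parameters:
# focal length 1400 px, baseline 0.35 m, disparity 10 px -> 49 m to the object.
print(distance_from_disparity(10.0, 1400.0, 0.35))
```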
  • The object recognition unit 14a of the arithmetic processing unit 14 uses the images P and the parallax information I stored in the storage unit 15 to perform object recognition processing for extracting areas in which three-dimensional objects exist.
  • Three-dimensional objects to be recognized include, for example, people, cars, other three-dimensional objects, signs, traffic lights, and tail lamps; the recognition dictionary 15c is referred to as necessary.
  • The vehicle control unit 14b of the arithmetic processing unit 14 determines the control policy of the own vehicle in consideration of the object recognition results output from the object recognition unit 14a and the state of the own vehicle (speed, steering angle, etc.). For example, if there is a possibility of a collision with the preceding vehicle, a warning is issued to the occupants to encourage actions that avoid the collision, or control that avoids the preceding vehicle by braking and steering of the own vehicle is generated and output to the control system (ECU, etc.) via the CAN interface 16 and the in-vehicle network CAN.
  • In step S1, the image correction unit 13a of the image processing unit 13 processes the right image PR. Specifically, the image correction unit 13a performs device-specific deviation correction, noise correction, and the like on the right image PR (P1R, P2R) captured by the right camera 11R, and stores the result in the image buffer 15a.
  • In step S2, the image correction unit 13a of the image processing unit 13 processes the left image PL. The image correction unit 13a applies device-specific deviation correction, noise correction, and the like to the left image PL (P1L, P2L) captured by the left camera 11L, and stores the result in the image buffer 15a. Note that the order of steps S1 and S2 may be reversed.
  • In step S3, the parallax calculation unit 13b of the image processing unit 13 compares the corrected left image PL and right image PR stored in the image buffer 15a to calculate the parallax, and stores the obtained parallax information I in the parallax buffer 15b of the storage unit 15.
  • In step S4, the object recognition unit 14a of the arithmetic processing unit 14 performs object recognition using either the left image PL or the right image PR together with the parallax information I. A three-dimensional object such as a preceding vehicle is also recognized in this step; the details of the three-dimensional object recognition method of this embodiment will be described later.
  • In step S5, the vehicle control unit 14b of the arithmetic processing unit 14 determines the vehicle control policy based on the object recognition results and outputs the result to the in-vehicle network CAN.
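  • Steps S1 to S5 can be summarized as the per-frame flow sketched below; the function bodies are hypothetical stand-ins for the processing blocks described above, not identifiers defined in the patent.

```python
# Hypothetical stand-ins for the processing blocks; real implementations are device-specific.
def correct_image(raw_image):                         # image correction unit 13a (steps S1/S2)
    return raw_image                                  # deviation/noise correction omitted here

def compute_parallax(left_image, right_image):        # parallax calculation unit 13b (step S3)
    return {}                                         # parallax information I: distance per image point

def recognize_objects(image, parallax, dictionary):   # object recognition unit 14a (step S4)
    return []                                         # people, vehicles, signs, lamps, ...

def decide_control_policy(objects):                   # vehicle control unit 14b (step S5)
    return "no_action"

def process_frame(left_raw, right_raw, dictionary, send_to_can):
    right = correct_image(right_raw)                  # S1: correct the right image
    left = correct_image(left_raw)                    # S2: correct the left image
    parallax = compute_parallax(left, right)          # S3: parallax from the corrected pair
    objects = recognize_objects(left, parallax, dictionary)   # S4: object recognition
    send_to_can(decide_control_policy(objects))       # S5: publish the control policy on the CAN

process_frame(left_raw=None, right_raw=None, dictionary={}, send_to_can=print)
```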
  • In step S41, a three-dimensional object in the first image P1 is extracted based on the parallax distribution in the first parallax information I1 calculated in step S3, and the shape of the three-dimensional object and the distance to it are estimated. The history of the distance to the three-dimensional object is also recorded, and the speed of the three-dimensional object is estimated from the history information.
  • In step S42, the recognition dictionary 15c is referenced to determine the type of the three-dimensional object detected in step S41.
  • In step S43, the three-dimensional object information (shape, distance, speed) estimated in step S41 and the three-dimensional object type determined in step S42 are output to the subsequent vehicle control unit 14b.
  • The image processing apparatus of the comparative example obtains the three-dimensional object information (shape, distance, speed) and the three-dimensional object type by the above method. However, when flare occurs, the parallax information becomes inaccurate around the high-brightness light source, and the three-dimensional object information around the high-brightness light source detected in step S41 also becomes inaccurate. As a result, in the image processing apparatus of the comparative example, the reliability of the three-dimensional object type around the high-brightness light source determined in step S42 is also low, and driving support control based on such low-reliability information may not be appropriate.
  • In the present embodiment, the object recognition unit 14a detects three-dimensional object information based on each of the parallax distributions in the two types of parallax information I calculated in step S3. Specifically, based on the parallax distribution in the first parallax information I1, the region of the three-dimensional object is extracted from the first image P1, and the shape of the three-dimensional object and the distance to it are estimated. The history of the distance to the three-dimensional object in the first image P1 is also recorded, and the speed of the three-dimensional object is estimated from the history information.
  • Similarly, based on the parallax distribution in the second parallax information I2, the region of the three-dimensional object in the second image P2 is extracted, and the shape of the three-dimensional object and the distance to it are estimated. The history of the distance to the three-dimensional object in the second image P2 is also recorded, and the speed of the three-dimensional object is estimated from the history information.
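  • The speed estimation from the distance history mentioned above can be pictured with the minimal sketch below: a simple finite difference over timestamped distance samples. An actual device would likely filter or smooth the history, and the sample values are assumed for illustration.

```python
from collections import deque

class DistanceHistory:
    """Keeps recent (timestamp, distance) samples for one three-dimensional object
    and estimates its relative speed from the history, as described for step S41."""

    def __init__(self, max_samples: int = 10):
        self.samples = deque(maxlen=max_samples)

    def add(self, timestamp_s: float, distance_m: float) -> None:
        self.samples.append((timestamp_s, distance_m))

    def relative_speed_mps(self) -> float:
        # Simple finite difference between the oldest and newest samples;
        # negative values mean the object is approaching.
        if len(self.samples) < 2:
            return 0.0
        (t0, d0), (t1, d1) = self.samples[0], self.samples[-1]
        return (d1 - d0) / (t1 - t0)

history = DistanceHistory()
history.add(0.00, 30.0)
history.add(0.05, 29.8)                 # assumed frame interval and distances
print(history.relative_speed_mps())     # -> -4.0 m/s (closing in)
```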
  • In step S41, the object recognition unit 14a also detects light spots in the images P.
  • FIG. 5B(a) is an example of the light spot group detected from the first image P1 with the normal shutter, and FIG. 5B(b) is an example of the light spot group detected from the second image P2 with the low-exposure shutter.
  • In step S4a, when a plurality of light spot groups have been extracted in step S41, the object recognition unit 14a determines from the distance and position of each light spot group whether they are a lamp pair of the same vehicle. When it is determined that they are a lamp pair of the same vehicle, the coordinate information of the lamp pair is output and the process proceeds to step S4b. On the other hand, if it is determined that they are not a lamp pair of the same vehicle, the process proceeds to step S42.
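  • As an illustration of the lamp-pair determination in step S4a, the sketch below pairs two light spots when they lie at roughly the same distance and image height and are separated horizontally; the rule and all threshold values are assumptions for illustration, not values from the patent.

```python
def is_lamp_pair(spot_a, spot_b,
                 max_distance_gap_m: float = 1.0,
                 max_height_gap_px: int = 10,
                 max_width_gap_px: int = 300) -> bool:
    """spot_a / spot_b: dicts with 'distance_m' and image position 'x_px', 'y_px'.
    Two light spots are treated as the lamp pair of one vehicle when they lie at
    roughly the same distance and image height, separated horizontally."""
    same_depth = abs(spot_a["distance_m"] - spot_b["distance_m"]) <= max_distance_gap_m
    same_height = abs(spot_a["y_px"] - spot_b["y_px"]) <= max_height_gap_px
    apart = 0 < abs(spot_a["x_px"] - spot_b["x_px"]) <= max_width_gap_px
    return same_depth and same_height and apart

left_lamp = {"distance_m": 20.3, "x_px": 540, "y_px": 310}
right_lamp = {"distance_m": 20.1, "x_px": 700, "y_px": 312}
print(is_lamp_pair(left_lamp, right_lamp))   # -> True
```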
  • In step S4b, the object recognition unit 14a uses the position information of the lamp pairs between the multiple shutters to link pieces of three-dimensional object information that are presumed to be the same object.
  • For example, the first three-dimensional object information of the lamp pair L1 illustrated in FIG. 5B(a) and the second three-dimensional object information of the lamp pair L2 illustrated in FIG. 5B(b) are linked.
  • In step S4c, the object recognition unit 14a determines whether the amount of flare light from the lamp pair is greater than or equal to a threshold. If the amount of flare light is equal to or greater than the threshold, the process proceeds to step S4d; if it is less than the threshold, the process proceeds to step S42.
  • The determination of the amount of flare light in step S4c is performed, for example, as follows. First, a luminance threshold for extracting a lamp is determined in advance for each exposure amount, and the region of pixels exceeding the luminance threshold is extracted as a lamp (see FIGS. 5B(a) and (b)). Next, the lamp sizes W1 and W2 are obtained from the width of each extracted lamp. Then, the difference between the lamp size W1 of the normal shutter (first image P1) and the lamp size W2 of the low-exposure shutter (second image P2) is calculated as the amount of flare light, and it is determined whether or not this amount of flare light is greater than or equal to the threshold. As a method of calculating the difference, as illustrated in FIG. 5B(c), the area obtained by subtracting the circular area of lamp size W2 from the circular area of lamp size W1 can be defined as the amount of flare light.
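  • A minimal sketch of this flare-light-amount calculation follows: the lamp width is measured as the extent of pixels above a per-exposure luminance threshold, and the flare amount is the circular area of W1 minus the circular area of W2. The luminance values, widths, and the flare threshold below are assumed for illustration.

```python
import math

def lamp_width(row_luminance, threshold):
    """Lamp size along one image row: number of pixels whose luminance exceeds
    the per-exposure luminance threshold (a simplification of region extraction)."""
    return sum(1 for v in row_luminance if v > threshold)

def flare_light_amount(w1_px: float, w2_px: float) -> float:
    """Circular area of the normal-shutter lamp (W1) minus the circular area of
    the low-exposure-shutter lamp (W2), as illustrated in FIG. 5B(c)."""
    return math.pi * (w1_px / 2.0) ** 2 - math.pi * (w2_px / 2.0) ** 2

row = [10, 12, 220, 240, 235, 230, 14, 11]      # toy luminance values along one row
print(lamp_width(row, threshold=200))           # -> 4 pixels exceed the threshold

# Assumed example: the lamp appears 40 px wide with the normal shutter but only
# 12 px wide with the low-exposure shutter, so most of W1 is flare.
amount = flare_light_amount(40, 12)
FLARE_THRESHOLD = 500.0                         # assumed value for the step S4c check
print(amount, amount >= FLARE_THRESHOLD)        # -> about 1143.5, True
```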
  • The description so far has assumed that either the left image PL or the right image PR is selected and the processing in FIG. 5A is performed on it. However, the processing in FIG. 5A may be performed on both the left and right images, and the amount of flare light may be calculated for each of them. In that case, even if one of the left and right cameras fails, the desired control can be continued using the captured images of the normally operating camera.
  • In step S4d, the object recognition unit 14a integrates the first three-dimensional object information of the normal shutter (first image P1) and the second three-dimensional object information of the low-exposure shutter (second image P2) linked in step S4b to calculate integrated three-dimensional object information. Specifically, a weighted average of the first three-dimensional object information and the second three-dimensional object information is obtained using an integration coefficient set individually for each piece of three-dimensional object information (position, speed, shape, etc.).
  • For example, with an integration coefficient of 0.3, the distance D1 to the three-dimensional object indicated by the first three-dimensional object information and the distance D2 indicated by the second three-dimensional object information are integrated by Formula 1 below.
  • Distance D after integration = D1 × (1 − 0.3) + D2 × 0.3 ... (Formula 1)
  • Since the first image P1 with the normal shutter and the second image P2 with the low-exposure shutter are captured alternately, there is a slight time difference between the two images. Therefore, it is more desirable to perform the integration after correcting each piece of information in consideration of the time difference between the imaging timings of the first image P1 and the second image P2 and the relative relationship between the target object (preceding vehicle) and the host vehicle.
  • The integration coefficient need not be a fixed value, and may be increased or decreased in proportion to the distance to the three-dimensional object, for example. Further, the integration coefficient may be set to a value corresponding to the amount of flare light by checking in advance the correlation between the amount of flare light and the accuracy of each piece of object information. Furthermore, when the amount of flare light is calculated using both the left and right cameras, the variation in the amount of flare light between them may also be reflected in the integration coefficient. When the amount of flare light is large, setting a larger weight for whichever of the normal-shutter and low-exposure-shutter results has the higher accuracy can be expected to improve the accuracy after integration.
  • The integration coefficient may also be set individually for each further subdivided piece of information (position, speed, shape, etc.); for example, for the shape, the height and the width may be given separate coefficients. When the amount of flare light is very large or very small, the integration coefficient may be set so that the result of one of the shutters is not used at all in the integration.
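  • A minimal sketch of the weighted integration in step S4d is shown below, using the coefficient 0.3 from Formula 1 for the distance; the coefficients for the other attributes, the attribute names, and the sample values are assumptions for illustration only.

```python
# Integration coefficients per attribute: the weight given to the low-exposure result.
# 0.3 for distance comes from Formula 1; the other values are assumed for illustration.
INTEGRATION_COEFF = {"distance_m": 0.3, "speed_mps": 0.3, "height_m": 0.5, "width_m": 0.5}

def integrate_object_info(first_info: dict, second_info: dict,
                          coeff: dict = INTEGRATION_COEFF) -> dict:
    """Weighted average of the first (normal shutter) and second (low-exposure shutter)
    three-dimensional object information: value = first * (1 - k) + second * k."""
    return {key: first_info[key] * (1.0 - k) + second_info[key] * k
            for key, k in coeff.items()}

first = {"distance_m": 30.0, "speed_mps": -4.0, "height_m": 1.6, "width_m": 2.0}
second = {"distance_m": 28.0, "speed_mps": -3.0, "height_m": 1.4, "width_m": 1.7}
print(integrate_object_info(first, second))
# distance: 30 * 0.7 + 28 * 0.3 = 29.4, matching Formula 1 with a coefficient of 0.3.
# When the flare amount is below the threshold (step S4c), the first information
# would simply be passed through unchanged instead of being integrated.
```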
  • In step S42, when the three-dimensional object information has been integrated in step S4d, the object recognition unit 14a determines the type of the three-dimensional object using the integrated three-dimensional object information; otherwise, the first three-dimensional object information estimated in step S41 is used to determine the type of the three-dimensional object.
  • In step S43, the object recognition unit 14a passes the first three-dimensional object information estimated in step S41 or the integrated three-dimensional object information obtained in step S4d, together with the three-dimensional object type determined in step S42, to the vehicle control unit 14b in the subsequent stage.
  • Instead of this integration, the first three-dimensional object information based on the first image P1, which is susceptible to flare, may simply be replaced with the second three-dimensional object information based on the second image P2, which is less susceptible to flare.
  • The storage unit 15 outputs the first image P1 and the first parallax information I1 of the normal shutter, and the second image P2 and the second parallax information I2 of the low-exposure shutter, to the object recognition unit 14a.
  • The first image P1 of the normal shutter and the first parallax information I1 are input to the normal-shutter three-dimensional object processing unit 21 in the object recognition unit 14a.
  • FIG. 6B is an example of the configuration of the normal shutter three-dimensional object processing unit 21.
  • The three-dimensional object detection of step S41 is performed using the first image P1 and the first parallax information I1, and the first three-dimensional object information is output.
  • The lamp detection unit 21e performs the lamp detection of step S41 on the first image P1 and outputs the first lamp information.
  • The low-exposure-shutter three-dimensional object processing unit 22 in the object recognition unit 14a receives the second image P2 of the low-exposure shutter and the second parallax information I2.
  • FIG. 6C is an example of the configuration of the low-exposure shutter three-dimensional object processing unit 22.
  • The three-dimensional object detection of step S41 is performed using the second image P2 and the second parallax information I2, and the second three-dimensional object information is output.
  • The lamp detection unit 22e performs the lamp detection of step S41 on the second image P2 and outputs the second lamp information.
  • The lamp pair detection unit 23 performs the lamp pair detection of step S4a on the first image P1 based on the first lamp information from the normal-shutter three-dimensional object processing unit 21 (see FIG. 5B(a)). Similarly, the lamp pair detection unit 24 performs the lamp pair detection of step S4a on the second image P2 based on the second lamp information from the low-exposure-shutter three-dimensional object processing unit 22 (see FIG. 5B(b)).
  • The lamp detection result linking unit 25 performs the linking processing of step S4b on the three-dimensional object information of the lamp pairs detected by the lamp pair detection unit 23 and the lamp pair detection unit 24.
  • The flare determination unit 26 calculates the amount of flare light by the method illustrated in FIG. 5B(c).
  • The three-dimensional object information integration unit 27 performs the processing of steps S4c and S4d: when the amount of flare light calculated by the flare determination unit 26 is equal to or greater than the predetermined threshold value, the first three-dimensional object information from the normal-shutter three-dimensional object processing unit 21 and the second three-dimensional object information from the low-exposure-shutter three-dimensional object processing unit 22 are integrated under a predetermined rule.
  • When the amount of flare light calculated by the flare determination unit 26 is less than the predetermined threshold value, the first three-dimensional object information from the normal-shutter three-dimensional object processing unit 21 is output as it is.
  • The type determination unit 28 performs the processing of step S42 and determines the type of the three-dimensional object based on the object information output by the three-dimensional object information integration unit 27 and the recognition dictionary 15c.
  • The detection result output unit 29 executes step S43, and outputs the three-dimensional object information (position, speed, shape, etc.) output from the three-dimensional object information integration unit 27 and the three-dimensional object type output from the type determination unit 28 to the vehicle control unit 14b.
  • According to the stereo camera device 10 of the present embodiment described above, even if flare occurs when imaging a three-dimensional object having a high-intensity light source, the parallax information around the high-intensity light source can be accurately calculated, and the position, speed, shape, type, and the like of the three-dimensional object can be estimated with high accuracy.
  • In the embodiment described above, the image processing device of the present invention is a stereo camera device; however, the image processing device of the present invention may instead be a monocular camera device that alternately captures images with the normal shutter and the low-exposure shutter. In that case as well, the three-dimensional object information integration method of the present invention described above can be used.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image processing device which, even when flare occurs while imaging a three-dimensional object having a high-luminance light source, can accurately calculate parallax information around the high-luminance light source and can accurately estimate the position, speed, type, and the like of the three-dimensional object. The image processing device comprises: a camera that captures a first image with a first exposure amount and captures a second image with a second exposure amount smaller than the first exposure amount; a three-dimensional object extraction unit that extracts, from the first image, a first region in which a three-dimensional object is present and extracts, from the second image, a second region in which the three-dimensional object is present; a three-dimensional object information detection unit that detects first three-dimensional object information from the first region and detects second three-dimensional object information from the second region; and a three-dimensional object information integration unit that integrates the first three-dimensional object information and the second three-dimensional object information.
PCT/JP2022/004704 2021-05-31 2022-02-07 Dispositif de traitement d'image et procédé de traitement d'image WO2022254795A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112022001328.1T DE112022001328T5 (de) 2021-05-31 2022-02-07 Bildverarbeitungsvorrichtung und bildverarbeitungsverfahren
JP2023525380A JPWO2022254795A1 (fr) 2021-05-31 2022-02-07

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021090994 2021-05-31
JP2021-090994 2021-05-31

Publications (1)

Publication Number Publication Date
WO2022254795A1 true WO2022254795A1 (fr) 2022-12-08

Family

ID=84324114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/004704 WO2022254795A1 (fr) 2021-05-31 2022-02-07 Dispositif de traitement d'image et procédé de traitement d'image

Country Status (3)

Country Link
JP (1) JPWO2022254795A1 (fr)
DE (1) DE112022001328T5 (fr)
WO (1) WO2022254795A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11201741A (ja) * 1998-01-07 1999-07-30 Omron Corp 画像処理方法およびその装置
JP2007096684A (ja) * 2005-09-28 2007-04-12 Fuji Heavy Ind Ltd 車外環境認識装置
JP2012026838A (ja) * 2010-07-22 2012-02-09 Ricoh Co Ltd 測距装置及び撮像装置
JP2021025833A (ja) * 2019-08-01 2021-02-22 株式会社ブルックマンテクノロジ 距離画像撮像装置、及び距離画像撮像方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010073009A (ja) 2008-09-19 2010-04-02 Denso Corp 画像処理装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11201741A (ja) * 1998-01-07 1999-07-30 Omron Corp 画像処理方法およびその装置
JP2007096684A (ja) * 2005-09-28 2007-04-12 Fuji Heavy Ind Ltd 車外環境認識装置
JP2012026838A (ja) * 2010-07-22 2012-02-09 Ricoh Co Ltd 測距装置及び撮像装置
JP2021025833A (ja) * 2019-08-01 2021-02-22 株式会社ブルックマンテクノロジ 距離画像撮像装置、及び距離画像撮像方法

Also Published As

Publication number Publication date
JPWO2022254795A1 (fr) 2022-12-08
DE112022001328T5 (de) 2024-01-04

Similar Documents

Publication Publication Date Title
US10286834B2 (en) Vehicle exterior environment recognition apparatus
US9704404B2 (en) Lane detection apparatus and operating method for the same
US8055017B2 (en) Headlamp monitoring apparatus for image exposure adjustment
JP4595833B2 (ja) 物体検出装置
US10000210B2 (en) Lane recognition apparatus
JP5863536B2 (ja) 車外監視装置
CN103403779B (zh) 车载摄像机和车载摄像机系统
JP3312729B2 (ja) フェールセーフ機能を有する車外監視装置
US9852502B2 (en) Image processing apparatus
JP4807733B2 (ja) 車外環境認識装置
JP3301995B2 (ja) ステレオ式車外監視装置
JP2016178523A (ja) 撮像装置、撮像方法、プログラム、車両制御システム、および車両
US10247551B2 (en) Vehicle image processing device for environment recognition
US9524645B2 (en) Filtering device and environment recognition system
JP2009239485A (ja) 車両用環境認識装置および先行車追従制御システム
WO2022254795A1 (fr) Dispositif de traitement d'image et procédé de traitement d'image
JP6891082B2 (ja) 物体距離検出装置
JP2001043377A (ja) フェールセーフ機能を有する車外監視装置
JP2020201876A (ja) 情報処理装置及び運転支援システム
JP3272701B2 (ja) 車外監視装置
US10417505B2 (en) Vehicle detection warning device and vehicle detection warning method
JP6808753B2 (ja) 画像補正装置および画像補正方法
WO2023112127A1 (fr) Dispositif de reconnaissance d'image et procédé de reconnaissance d'image
US20150254516A1 (en) Apparatus for Verified Detection of a Traffic Participant and Apparatus for a Vehicle for Verified Detection of a Traffic Participant
WO2024009605A1 (fr) Dispositif de traitement d'image et procédé de traitement d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22815554

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023525380

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 112022001328

Country of ref document: DE