WO2021049490A1 - Image registration device, image generation system, image registration method and image registration program - Google Patents

Image registration device, image generation system, image registration method and image registration program

Info

Publication number
WO2021049490A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
background light
light
reflected light
Prior art date
Application number
PCT/JP2020/033956
Other languages
French (fr)
Japanese (ja)
Inventor
塚田 明宏
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー
Priority to CN202080063314.XA (publication CN114365189A)
Publication of WO2021049490A1
Priority to US17/654,012 (publication US20220201164A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226Determination of depth image, e.g. for foreground/background separation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • The disclosure according to this specification relates to an image registration device, an image generation system, an image registration method, and an image registration program.
  • Patent Document 1 discloses a distance measuring sensor. This distance measuring sensor can generate a reflected light image including distance information by detecting, with a light receiving element, the reflected light returned from an object irradiated with light.
  • Patent Document 2 discloses a camera. The camera can generate a high-resolution camera image by detecting, with a camera element, incident light from the outside.
  • The reflected light image and the camera image can be processed by an application.
  • However, the reflected light image and the camera image may have a deviation Δt in detection timing.
  • If an object reflected in the reflected light image and the camera image moves during the deviation Δt, it becomes difficult to accurately associate the object in the reflected light image with the object in the camera image. Therefore, even if an application uses both the reflected light image and the camera image, the information in these images cannot be fully utilized, and the processing accuracy cannot be sufficiently improved.
  • One purpose of the disclosure of this specification is to provide an image registration device, an image generation system, an image registration method, and an image registration program that enhance the processing accuracy of such applications.
  • One disclosed aspect is an image registration device communicably connected to a distance measuring sensor and a camera. The distance measuring sensor generates a reflected light image including distance information, sensed by a light receiving element detecting the reflected light returned from an object irradiated with light, and a background light image having the same coordinate system as the reflected light image, sensed by the light receiving element detecting the background light relative to the reflected light. The camera generates, by a camera element detecting incident light from the outside, a camera image with a higher resolution than the reflected light image and the background light image.
  • The image registration device includes an image acquisition unit that acquires the reflected light image, the background light image, and the camera image, and an image processing unit that performs image registration between the reflected light image, which has the same coordinate system as the background light image, and the camera image by specifying the correspondence between the feature points of the background light image and the feature points of the camera image.
  • In this device, image registration between the acquired reflected light image and the camera image is performed using the background light image having the same coordinate system as the reflected light image. That is, by comparing the feature points of the background light image, whose properties are closer to those of the camera image than the reflected light image's, with the feature points of the camera image, it becomes easy to identify the correspondence between the feature points.
  • As a result, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately matched, so the processing accuracy of an application using both the reflected light image and the camera image can be remarkably improved.
  • Another disclosed aspect is an image generation system that generates an image to be processed by an application.
  • The system includes a distance measuring sensor whose light receiving element detects the reflected light returned from an object irradiated with light to generate a reflected light image including distance information, and detects the background light relative to the reflected light to generate a background light image having the same coordinate system as the reflected light image.
  • The system also includes a camera that generates a camera image with a higher resolution than the reflected light image and the background light image by a camera element detecting incident light from the outside.
  • The system further includes an image processing unit that performs image registration between the reflected light image, which has the same coordinate system as the background light image, and the camera image, and generates a composite image in which the distance information and the information of the camera image are integrated.
  • In this system, image registration between the reflected light image and the camera image is performed using the background light image having the same coordinate system as the reflected light image. That is, by comparing the feature points of the background light image, whose properties are closer to those of the camera image than the reflected light image's, with the feature points of the camera image, it is easy to identify the correspondence between the feature points. With such an association, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately matched. Then, the distance information and the camera image information, which come from the different image generation sources (the distance measuring sensor and the camera), can be provided in the form of a composite image that is easy for the application to process. Therefore, the processing accuracy of an application using both the reflected light image and the camera image can be remarkably improved.
  • Another disclosed aspect is an image registration method. The method includes preparing a reflected light image, generated by a distance measuring sensor, that includes distance information obtained by a light receiving element detecting the reflected light returned from an object irradiated with light, together with a background light image that has the same coordinate system as the reflected light image and is obtained by the light receiving element detecting the background light relative to the reflected light; and preparing a camera image, generated by a camera, that has a higher resolution than the reflected light image and the background light image and is obtained by a camera element detecting incident light from the outside.
  • The method further includes detecting the feature points of the background light image and the feature points of the camera image, identifying the correspondence between the detected feature points of the background light image and the feature points of the camera image, and making each pixel of one of the background light image and the camera image correspond to each pixel of the other based on the result of identifying the correspondence.
  • In this method, the feature points of the prepared background light image and the feature points of the camera image are detected.
  • The correspondence between the detected feature points of the background light image and the feature points of the camera image is identified.
  • Each pixel of one of the background light image and the camera image is made to correspond to each pixel of the other based on the identification result.
  • That is, the image registration between the reflected light image and the camera image is performed using the background light image, which has the same coordinate system as the reflected light image and whose properties are closer to those of the camera image than the reflected light image's, so the correspondence between the feature points is easy to identify.
  • As a result, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately matched, so the processing accuracy of an application using both images can be remarkably improved.
  • In addition, since the correspondence between the feature points is identified first and the coordinates of the individual pixels are then associated using that result, highly accurate image registration can be performed while suppressing the processing amount or improving the processing speed, compared with exhaustively identifying the correspondence of every pixel.
  • Another disclosed aspect is an image registration program that performs image registration between an image generated by a distance measuring sensor and an image generated by a camera.
  • The program causes at least one processing unit to execute: a process of acquiring a reflected light image, generated by the distance measuring sensor, that includes distance information sensed by a light receiving element detecting the reflected light returned from an object irradiated with light, a background light image that has the same coordinate system as the reflected light image and is sensed by the light receiving element detecting the background light relative to the reflected light, and a camera image generated by the camera;
  • a process of detecting the feature points of the background light image and the feature points of the camera image; a process of identifying the correspondence between the detected feature points of the background light image and the feature points of the camera image; and a process of making each pixel of one of the background light image and the camera image correspond to each pixel of the other based on the result of identifying the correspondence.
  • In this program, the feature points of the acquired background light image and the feature points of the camera image are detected.
  • The correspondence between the detected feature points of the background light image and the feature points of the camera image is identified.
  • Each pixel of one of the background light image and the camera image is made to correspond to each pixel of the other based on the identification result.
  • That is, the image registration between the reflected light image and the camera image is performed using the background light image, which has the same coordinate system as the reflected light image and whose properties are closer to those of the camera image than the reflected light image's, so the correspondence between the feature points is easy to identify.
  • As a result, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately matched, so the processing accuracy of an application using both images can be remarkably improved.
  • In addition, since the correspondence between the feature points is identified first and the coordinates of the individual pixels are then associated using that result, highly accurate image registration can be performed while suppressing the processing amount or improving the processing speed, compared with exhaustively identifying the correspondence of every pixel.
  • FIG. 1 shows an overall image of the image generation system and the driving support ECU of the first embodiment. FIG. 2 shows the mounting state of the distance measuring sensor and the external camera of the first embodiment in a vehicle. FIG. 3 is a block diagram showing the structure of the image processing ECU of the first embodiment. FIG. 4A illustrates the detection of feature points in the background light image of the first embodiment. FIG. 4B renders the background light image included in FIG. 4A as a line diagram. FIG. 5A illustrates the detection of feature points in the camera image of the first embodiment. FIG. 5B renders the camera image included in FIG. 5A as a line diagram.
  • As shown in FIG. 1, the image registration device according to the first embodiment of the present disclosure is an image processing ECU (Electronic Control Unit) 30 used in a vehicle 1 as a moving body and configured to be mounted on the vehicle 1.
  • The image processing ECU 30 constitutes an image generation system 100 together with a distance measuring sensor 10 and an external camera 20.
  • The image generation system 100 of the present embodiment can generate peripheral monitoring image information that integrates the measurement results of the distance measuring sensor 10 and the external camera 20, and provide it to the driving support ECU 50 and the like.
  • The image processing ECU 30 is communicably connected to the communication bus of the in-vehicle network mounted on the vehicle 1.
  • The image processing ECU 30 is one of a plurality of nodes provided in the in-vehicle network.
  • The driving support ECU 50 and the like are also connected to the communication bus of the in-vehicle network as nodes.
  • The driving support ECU 50 is mainly configured as a computer equipped with a processor, a RAM (Random Access Memory), a storage unit, an input/output interface, and a bus connecting these.
  • The driving support ECU 50 has at least one of a driving support function that assists the driver's driving operation in the vehicle 1 and a driving agent function that can perform driving operations on behalf of the driver.
  • By the processor executing a program stored in the storage unit, the driving support ECU 50 recognizes the surrounding environment of the vehicle 1 based on the peripheral monitoring image information acquired from the image generation system 100.
  • The driving support ECU 50 then realizes automatic driving or advanced driving support of the vehicle 1 according to the recognition result.
  • The distance measuring sensor 10 is, for example, a SPAD LiDAR (Single Photon Avalanche Diode Light Detection And Ranging) arranged at the front of the vehicle 1 or on the roof of the vehicle 1.
  • The distance measuring sensor 10 can measure at least the front measurement range MA1 in the periphery of the vehicle 1.
  • The distance measuring sensor 10 includes a light emitting unit 11, a light receiving unit 12, a control unit 13, and the like.
  • The light emitting unit 11 irradiates the light beam emitted from its light source toward the measurement range MA1 shown in FIG. 2 by scanning with a movable optical member (for example, a polygon mirror).
  • The light source is, for example, a semiconductor laser (laser diode), and emits a light beam in the near infrared region, invisible to the occupants and to humans in the outside world, in response to an electric signal from the control unit 13.
  • The light receiving unit 12 collects the reflected light returned from an object within the measurement range MA1, or the background light relative to the reflected light, with, for example, a condensing lens, and causes the collected light to enter the light receiving element 12a.
  • The light receiving element 12a is an element that converts light into an electric signal by photoelectric conversion, and here is a SPAD light receiving element that realizes high sensitivity by amplifying the detection voltage.
  • For the light receiving element 12a, a CMOS sensor whose sensitivity in the near infrared region is set higher than in the visible region is adopted. This sensitivity can also be adjusted by providing an optical filter in the light receiving unit 12.
  • The light receiving element 12a has a plurality of light receiving pixels arranged in an array in a one-dimensional or two-dimensional direction.
  • The control unit 13 controls the light emitting unit 11 and the light receiving unit 12.
  • The control unit 13 is arranged on a substrate common to, for example, the light receiving element 12a, and is mainly composed of a processor in the broad sense, such as a microcomputer or an FPGA (Field-Programmable Gate Array).
  • The control unit 13 realizes a scanning control function, a reflected light measurement function, and a background light measurement function.
  • The scanning control function controls the light beam scanning.
  • In the scanning control, the control unit 13 causes the light source to emit the light beam in pulses a plurality of times, at timings based on the operating clock of the clock oscillator provided in the distance measuring sensor 10, and operates the movable optical member accordingly.
  • The reflected light measurement function reads out the voltage value based on the reflected light received by each light receiving pixel, using, for example, a rolling shutter method, and measures the intensity of the reflected light in accordance with the timing of the light beam scanning.
  • The distance from the distance measuring sensor 10 to the object returning the reflected light can be measured by detecting the time difference between the emission timing of the light beam and the reception timing of the reflected light.
  • By these measurements, the control unit 13 can generate a reflected light image, which is image-like data in which the intensity of the reflected light and the distance information of the object returning it are associated with two-dimensional coordinates on the image plane corresponding to the measurement range MA1.
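  • As a minimal illustration of the time-of-flight principle described above, the sketch below (a hypothetical helper, not from the patent) converts the measured emission-to-reception delay into a distance in Python.

```python
# Minimal time-of-flight sketch: distance from the delay between light-beam
# emission and reflected-light reception. Helper name and units are assumed.
C = 299_792_458.0  # speed of light [m/s]

def tof_distance_m(delay_ns: float) -> float:
    """Distance [m] for a measured round-trip delay in nanoseconds."""
    return C * (delay_ns * 1e-9) / 2.0  # halve: light travels out and back

print(tof_distance_m(200.0))  # a 200 ns round trip is roughly 30 m
```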
  • The background light measurement function reads out the voltage value based on the background light received by each light receiving pixel at the timing immediately before the reflected light is measured, and measures the intensity of the background light.
  • Here, the background light means incident light that enters the light receiving element 12a from the measurement range MA1 in the outside world and contains substantially no reflected light.
  • The incident light includes natural light, display light incident from displays in the outside world, and the like.
  • By this measurement, the control unit 13 can generate a background light image ImL, which is image-like data in which the intensity of the background light is associated with two-dimensional coordinates on the image plane corresponding to the measurement range MA1.
  • The reflected light image and the background light image ImL are sensed by the common light receiving element 12a and acquired through the common optical system including it. Therefore, the coordinate system of the reflected light image and the coordinate system of the background light image ImL can be regarded as the same, mutually coincident coordinate system. Furthermore, the difference in measurement timing between the reflected light image and the background light image ImL is almost negligible (for example, less than 1 ns), so the reflected light image and the background light image ImL can be regarded as synchronized.
  • Integrated image data, in which three channels of data (the intensity of the reflected light, the distance to the object, and the intensity of the background light) are stored for each pixel, is sequentially output to the image processing ECU 30 as a sensor image.
  • The external camera 20 is, for example, a camera arranged on the vehicle interior side of the front windshield of the vehicle 1.
  • The external camera 20 can measure at least the front measurement range MA2 in the outside world of the vehicle 1 and, more specifically, a measurement range MA2 that overlaps at least a part of the measurement range MA1 of the distance measuring sensor 10.
  • The external camera 20 includes a light receiving unit 22 and a control unit 23.
  • The light receiving unit 22 collects the incident light (background light) entering from the measurement range MA2 outside the camera with, for example, a light receiving lens, and causes it to enter the camera element 22a.
  • The camera element 22a is an element that converts light into an electric signal by photoelectric conversion; for example, a CCD sensor or a CMOS sensor can be adopted.
  • The camera element 22a is set to have higher sensitivity in the visible region than in the near infrared region in order to efficiently receive natural light in the visible region.
  • The camera element 22a has a plurality of light receiving pixels (corresponding to so-called sub-pixels) arranged in an array in two dimensions. For example, red, green, and blue color filters are arranged on adjacent light receiving pixels, and each light receiving pixel receives visible light of the color corresponding to its filter.
  • Thus, the camera image ImC captured by the external camera 20 can be a color image of the visible region with a higher resolution than the reflected light image and the background light image ImL.
  • The control unit 23 controls the light receiving unit 22.
  • The control unit 23 is arranged on a substrate common to, for example, the camera element 22a, and is mainly composed of a processor in the broad sense, such as a microcomputer or an FPGA.
  • The control unit 23 realizes a shooting function.
  • The shooting function shoots the above-mentioned color image.
  • The control unit 23 reads out the voltage value based on the incident light received by each light receiving pixel, at timings based on the operating clock of the clock oscillator provided in the external camera 20, using, for example, a global shutter method, and thereby detects and measures the intensity of the incident light.
  • This clock oscillator is provided independently of the clock oscillator of the distance measuring sensor 10.
  • The control unit 23 can generate a camera image ImC, which is image-like data in which the intensity of the incident light is associated with two-dimensional coordinates on the image plane corresponding to the measurement range MA2. Such camera images ImC are sequentially output to the image processing ECU 30.
  • Since the distance measuring sensor 10 and the external camera 20 operate on different clock oscillators, their measurement timing cycles (that is, frame rates) are not always the same and often differ. Therefore, a deviation Δt in measurement timing may occur between the reflected light image and background light image ImL on one hand and the camera image ImC on the other. This Δt can be 1000 times or more the measurement timing deviation between the reflected light image and the background light image ImL.
  • The image processing ECU 30 is an electronic control device that performs combined image processing of the reflected light image, the background light image ImL, and the camera image ImC.
  • The image processing ECU 30 is mainly configured as a computer including a processing unit 31, a RAM 32, a storage unit 33, an input/output interface 34, and a bus connecting them.
  • The processing unit 31 is hardware for arithmetic processing, used together with the RAM 32.
  • The processing unit 31 includes at least one arithmetic core such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a RISC (Reduced Instruction Set Computer) processor.
  • The processing unit 31 may further include an FPGA and IP cores with other dedicated functions.
  • The RAM 32 may include a video RAM for image generation.
  • By accessing the RAM 32, the processing unit 31 executes various processes for realizing the functions of the functional units described later.
  • The storage unit 33 includes a non-volatile storage medium.
  • The storage unit 33 stores various programs executed by the processing unit 31 (the image registration program and the like).
  • The image processing ECU 30 constructs a plurality of functional units for performing image registration by the processing unit 31 executing the image registration program stored in the storage unit 33. Specifically, as shown in FIG. 3, functional units such as an image acquisition unit 41 and an image processing unit 42 are constructed in the image processing ECU 30.
  • The image acquisition unit 41 acquires the reflected light image and the background light image ImL from the distance measuring sensor 10, and acquires the camera image ImC from the external camera 20.
  • The image acquisition unit 41 sequentially provides the image processing unit 42 with the latest set of the reflected light image and background light image ImL, together with the latest camera image ImC.
  • The image processing unit 42 performs image registration between the reflected light image, which has the same coordinate system as the background light image ImL, and the camera image ImC.
  • From the sensor image, in which three channels of data (reflected light intensity, object distance, and background light intensity) are stored, and the camera image ImC, which is a high-resolution color image of the visible range, the image processing unit 42 of the present embodiment outputs a composite image in which four or more channels of data (reflected light intensity, object distance, background light intensity, and color information) are stored.
  • In the present embodiment, the composite image is an image in which six channels of data are stored.
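  • Once every channel shares one coordinate system, assembling the composite image is a per-pixel channel stack. The sketch below (an assumed layout in Python/NumPy, not the patent's implementation) stacks reflected intensity, distance, background-light intensity, and the three camera color channels into six channels.

```python
# Minimal sketch of the six-channel composite image, assuming all inputs are
# already registered to the same H x W pixel grid.
import numpy as np

def make_composite(reflect_i, distance, backlight, camera_rgb):
    """reflect_i, distance, backlight: (H, W); camera_rgb: (H, W, 3).
    Returns an (H, W, 6) array, one common coordinate system per channel."""
    assert reflect_i.shape == distance.shape == backlight.shape == camera_rgb.shape[:2]
    return np.dstack([reflect_i, distance, backlight, camera_rgb])
```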
  • The image processing unit 42 has a feature point detection function, a correspondence identification function, and a coordinate matching function.
  • The feature point detection function realizes the processing of the first phase, the correspondence identification function realizes the processing of the second phase following the first phase, and the coordinate matching function realizes the processing of the third phase following the second phase.
  • The feature point detection function detects the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC.
  • Corners can be adopted as the feature points FPa and FPb.
  • Various feature point detection methods using a feature point detector can be adopted; in this embodiment in particular, a Harris corner detection method using Harris corner detectors 43a and 43b is adopted.
  • The Harris corner detectors 43a and 43b express the weighted sum of squared intensity differences caused by shifting the pixel position within an evaluation target region as a structure tensor, using an approximation by Taylor expansion, and detect the feature points FPa and FPb from the eigenvalues of that tensor.
  • Whether the evaluation target region is a corner (corresponding to a feature point FPa or FPb), an edge, or a flat region can be determined by evaluating the determinant and the trace (the sum of the eigenvalues) of the structure tensor.
  • The Harris corner detector 43a detects a plurality of feature points FPa of the background light image ImL.
  • The Harris corner detector 43b detects a plurality of feature points FPb of the camera image ImC.
  • In the figures, the feature points FPa and FPb are schematically represented by cross marks; in reality, more feature points FPa and FPb are detected.
  • The Harris corner detectors 43a and 43b have one or more (two in this embodiment) parameters that affect the scale (in other words, parameters with low invariance to scale).
  • The first parameter is the size of the evaluation target region.
  • The second parameter is the kernel size of the gradient detection filter (e.g., a Sobel gradient detection filter).
  • The Harris corner detectors 43a and 43b usually detect different numbers of feature points FPa and FPb for the background light image ImL and the camera image ImC. Therefore, even where the measurement range MA1 of the distance measuring sensor 10 and the measurement range MA2 of the external camera 20 overlap, the same number of feature points FPa and FPb is not necessarily detected.
  • The Harris corner detectors 43a and 43b are shown in FIG. 3 as one for each of the background light image ImL and the camera image ImC for convenience, but they may be provided in common for both images (by a common program). Alternatively, only the highly versatile portions of the processing of the Harris corner detectors 43a and 43b may be provided in common for the background light image ImL and the camera image ImC.
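  • As a concrete sketch of this phase, Harris corners can be computed with OpenCV; the code below is a hypothetical stand-in for the detectors 43a and 43b, with block_size and ksize playing the roles of the two scale parameters named above (file names and threshold are assumptions).

```python
# Minimal Harris-corner sketch in Python/OpenCV; parameter values assumed.
import cv2
import numpy as np

def harris_corners(gray, block_size=2, ksize=3, k=0.04, thresh=0.01):
    """Return (row, col) corner coordinates from the Harris response map.
    block_size: evaluation-region size; ksize: Sobel gradient kernel size."""
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    ys, xs = np.where(response > thresh * response.max())
    return list(zip(ys, xs))

backlight = cv2.imread("background_light.png", cv2.IMREAD_GRAYSCALE)  # ImL
camera = cv2.imread("camera.png", cv2.IMREAD_GRAYSCALE)               # ImC
fpa = harris_corners(backlight)  # feature points FPa
fpb = harris_corners(camera)     # feature points FPb
```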
  • The correspondence identification function identifies the correspondence between the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC.
  • In addition to the numbers of detected feature points FPa and FPb not matching, the positional relationship among the feature points FPa in the background light image ImL may differ from the positional relationship among the feature points FPb in the camera image ImC, and feature points FPa and FPb with no counterpart may be included; these facts increase the difficulty of identifying the correspondence.
  • For example, the light receiving element 12a of the distance measuring sensor 10 and the camera element 22a of the external camera 20 are arranged at different positions in the vehicle 1, as shown in FIG. 2, and their orientations may also differ.
  • As a result, the positional relationship among the feature points FPa in the background light image ImL and the positional relationship among the feature points FPb in the camera image ImC differ.
  • Moreover, due to the measurement timing deviation Δt, the position of an object in the background light image ImL and its position in the camera image ImC can differ greatly, and an object may even appear in only one of the images. Therefore, differing positional relationships among the feature points FPa and FPb, and feature points with no counterpart, easily occur.
  • Therefore, the image processing unit 42 identifies the correspondence using feature quantities that are obtained from a peripheral region including each feature point FPa, FPb and that are highly invariant with respect to scale.
  • Examples of such scale-invariant feature quantities include information on edge direction, and the average value or ratio of some physical quantity over the peripheral region.
  • In the present embodiment, information on the extremum with respect to the degree of smoothing when a low-pass filter is applied to the peripheral region is adopted as the scale-invariant feature quantity.
  • Specifically, the image processing unit 42 of the present embodiment detects SIFT (Scale-Invariant Feature Transform) feature quantities (hereinafter, SIFT features) with SIFT feature detectors (hereinafter, feature detectors) 44a and 44b, and identifies the correspondence using the detected SIFT features.
  • The feature detectors 44a and 44b apply a Gaussian filter, as the above-mentioned low-pass filter, to the peripheral region including the feature points FPa and FPb detected by the Harris corner detectors 43a and 43b.
  • The feature detectors 44a and 44b vary the weighting coefficient σ, which corresponds to the standard deviation of the Gaussian filter, and search for local extrema in the peripheral region.
  • From among the values of σ at which a local extremum is found, the feature detectors 44a and 44b set at least some promising ones (excluding those corresponding to edges) as the SIFT features, which are highly invariant with respect to scale.
  • The feature detectors 44a and 44b are shown in FIG. 3 as one for each of the background light image ImL and the camera image ImC for convenience, but they may be provided in common for both images (by a common program). Alternatively, only the highly versatile portions of the processing of the feature detectors 44a and 44b may be provided in common for the background light image ImL and the camera image ImC.
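  • Continuing the earlier sketch, OpenCV's SIFT can serve as a hypothetical stand-in for the feature detectors 44a and 44b: describe each Harris corner with a scale-invariant descriptor, then match descriptors between the two images (the ratio test here is an added heuristic, not from the patent).

```python
# Minimal SIFT-at-corners sketch; patch_size and the 0.75 ratio are assumed.
import cv2

def sift_at_corners(gray, corners, patch_size=16.0):
    """Compute SIFT descriptors at the given (row, col) corner locations."""
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for (y, x) in corners]
    kps, desc = cv2.SIFT_create().compute(gray, kps)
    return kps, desc

kpa, da = sift_at_corners(backlight, fpa)  # describe FPa in ImL
kpb, db = sift_at_corners(camera, fpb)     # describe FPb in ImC
# Nearest-neighbour matching with a ratio test to drop ambiguous pairs
matches = [m for m, n in cv2.BFMatcher().knnMatch(da, db, k=2)
           if m.distance < 0.75 * n.distance]
```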
  • Furthermore, the image processing unit 42 identifies the correspondence while taking into consideration the difference in where a corresponding point appears on each image, which results from the relative positions of the distance measuring sensor 10 and the external camera 20. Based on epipolar geometry, the image processing unit 42 uses an epipolar line projector 45 to project the epipolar line EL corresponding to a feature point FPb of the camera image ImC onto the background light image ImL, as shown in FIGS. 6A and 6B.
  • The epipolar line EL is the line where the epipolar plane intersects the image plane.
  • The epipolar plane is the plane passing through the optical center of the distance measuring sensor 10, the optical center of the external camera 20, and the three-dimensional point of the subject corresponding to the feature point FPb of the camera image ImC.
  • The epipolar line projector 45 stores an E matrix (essential matrix) defined based on the position of the light receiving element 12a and the position of the camera element 22a.
  • The E matrix maps a point on the camera image ImC to a line (that is, an epipolar line EL) on the background light image ImL.
  • The feature point FPa of the background light image ImL that corresponds to a certain feature point FPb of the camera image ImC should lie on the epipolar line EL projected by the epipolar line projector 45.
  • Therefore, the image processing unit 42 narrows down the candidate feature points FPa using the determination region JA, a band-shaped region centered on the epipolar line EL and having a predetermined allowable width W.
  • The allowable width W is set according to the amount of deviation assumed between the measurement timing of the background light image ImL and the measurement timing of the camera image ImC.
  • The image processing unit 42 narrows the candidates for the point corresponding to the feature point FPb of the camera image ImC, which is the projection source of the epipolar line EL, down to the feature points FPa of the background light image ImL located inside the determination region JA.
  • The image processing unit 42 then identifies one-to-one individual correspondences between the feature points FPb of the camera image ImC and the feature points FPa of the background light image ImL. If the numbers of feature points FPa and FPb detected by the Harris corner detectors 43a and 43b do not match between the background light image ImL and the camera image ImC, feature points for which no corresponding point is found in the other image naturally appear; these feature points FPa and FPb are not used for image registration and are excluded from the subsequent processing.
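  • A possible implementation of this narrowing step is sketched below: project the epipolar line for one camera-image feature point FPb onto the background-light image and keep only the FPa candidates whose perpendicular distance to the line is within half the allowable width W. The matrix F here is assumed to map pixel coordinates to lines (derivable from the stored E matrix and the two devices' intrinsics); the value of W is also an assumption.

```python
# Minimal epipolar-band sketch: candidates inside the determination region JA.
import numpy as np

W = 8.0  # allowable width [px], assumed from the expected timing deviation

def epipolar_candidates(fpb_xy, fpa_xy, F):
    """fpb_xy: (x, y) of one camera-image feature point; fpa_xy: (N, 2) array
    of background-light feature points; F: 3x3 matrix giving the line
    a*x + b*y + c = 0 in the background-light image."""
    a, b, c = F @ np.array([fpb_xy[0], fpb_xy[1], 1.0])
    dist = np.abs(fpa_xy @ np.array([a, b]) + c) / np.hypot(a, b)
    return np.where(dist <= W / 2.0)[0]  # indices of FPa inside the band
```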
  • The coordinate matching function makes each pixel of the background light image ImL and of the camera image ImC correspond to each pixel of the other, based on the result of identifying the correspondence between the feature points FPa and FPb.
  • Specifically, by smoothly and non-linearly distorting at least one of the background light image ImL and the camera image ImC based on the positional relationships between the corresponding pairs of feature points FPa and FPb, the image processing unit 42 obtains the correspondence between the coordinate system of the background light image ImL and the coordinate system of the camera image ImC.
  • When matching coordinates, the image processing unit 42 performs TPS (Thin Plate Spline) interpolation using, for example, a TPS model.
  • The TPS is computed with the coordinates of the corresponding feature points FPa and FPb as covariates.
  • The TPS model then gives the correspondence between the background light image ImL and the camera image ImC for every pixel that does not coincide with a feature point FPa or FPb.
  • For example, suppose the measurement timing of the background light image ImL is Δt after that of the camera image ImC, and another vehicle ahead moves relative to the vehicle 1 during Δt.
  • Then, the ratio of the spacing between feature points on the other vehicle to the spacing between feature points on the landscape in the background light image ImL can be smaller than the corresponding ratio in the camera image ImC.
  • Even in such a case, through the smooth non-linear distortion described above, the coordinate system of the background light image ImL can be adjusted to the coordinate system of the camera image ImC.
  • In other words, the processing of the coordinate matching function corrects the measurement timing deviation Δt and makes it possible to treat the background light image ImL and the camera image ImC in the same manner as mutually synchronized data.
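  • One way to realize such a smooth non-linear warp is a thin-plate-spline interpolator fitted to the matched feature coordinates; the sketch below uses SciPy's thin-plate-spline RBF as a stand-in for the patent's TPS model (helper name and grid layout are assumptions).

```python
# Minimal TPS-warp sketch: learn the smooth map from ImL coordinates to ImC
# coordinates from matched feature points, then evaluate it at every pixel.
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp_field(fpa_xy, fpb_xy, height, width):
    """fpa_xy, fpb_xy: (N, 2) matched (x, y) feature coordinates in the
    background-light and camera images. Returns an (H, W, 2) array holding a
    camera-image coordinate for every background-light pixel."""
    warp = RBFInterpolator(fpa_xy, fpb_xy, kernel="thin_plate_spline")
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    return warp(grid).reshape(height, width, 2)
```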
  • As mentioned above, the background light image ImL can be regarded as having the same coordinate system as, and being synchronized with, the reflected light image.
  • Therefore, the image processing unit 42 makes it possible to handle the reflected light image, which includes the distance information, and the camera image ImC, which is a high-resolution color image, in the same manner as mutually synchronized data.
  • In this way, the background light image ImL functions like an adhesive that binds the two images together.
  • The image processing unit 42 can output the composite image, that is, the integrated image data described above, by converting the coordinates of each pixel of the background light image ImL into coordinates on the camera image ImC. Since this composite image has a common coordinate system across its channels, the processing of an application program (hereinafter, application) using the composite image is simplified, which reduces the calculation load and can improve the processing accuracy of the application.
  • The composite image output by the image processing unit 42 is provided to the driving support ECU 50 as the peripheral monitoring image information.
  • In the driving support ECU 50, object recognition using the composite image is performed by the processor executing an object recognition program as an application, in order to recognize the surrounding environment of the vehicle 1.
  • In the driving support ECU 50, an object recognition model 51 mainly composed of a neural network is constructed as one component of the object recognition program.
  • For the object recognition model 51, a structure called SegNet, in which an encoder and a decoder are combined, can be adopted.
  • In S11, the image acquisition unit 41 acquires the latest reflected light image and background light image ImL from the distance measuring sensor 10, and acquires the latest camera image ImC from the external camera 20.
  • The image acquisition unit 41 provides these images to the image processing unit 42. After the processing of S11, the process proceeds to S12.
  • In S12, the image processing unit 42 detects the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC. After the processing of S12, the process proceeds to S13.
  • In S13, the image processing unit 42 identifies the correspondence between the feature points FPa of the background light image ImL detected in S12 and the feature points FPb of the camera image ImC. After the processing of S13, the process proceeds to S14.
  • In S14, based on the positional relationships between the feature points FPa and FPb whose correspondence was identified in S13, the image processing unit 42 associates the coordinates of the pixels of the background light image ImL and the camera image ImC that do not coincide with the feature points FPa and FPb (coordinate matching). After the processing of S14, the process proceeds to S15.
  • In S15, the image processing unit 42 completes the image registration between the reflected light image and the camera image ImC by converting the coordinate system of the background light image ImL and the reflected light image into the coordinate system of the camera image ImC, or vice versa. The series of processes ends with S15.
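  • Tying the sketches together, a hypothetical driver for S12 through S15 might look as follows; match_pairs is an assumed helper combining the SIFT matching and epipolar narrowing shown earlier, and none of this is the patent's actual implementation.

```python
# Minimal end-to-end registration sketch under the assumptions of the earlier
# snippets: harris_corners, tps_warp_field, and the assumed match_pairs.
import numpy as np

def register(backlight, camera_gray, F):
    fpa = harris_corners(backlight)        # S12: feature points FPa (ImL)
    fpb = harris_corners(camera_gray)      # S12: feature points FPb (ImC)
    pairs = match_pairs(backlight, camera_gray, fpa, fpb, F)  # S13 (assumed)
    src = np.array([p[0] for p in pairs], dtype=float)  # matched FPa (x, y)
    dst = np.array([p[1] for p in pairs], dtype=float)  # matched FPb (x, y)
    h, w = backlight.shape
    return tps_warp_field(src, dst, h, w)  # S14/S15: ImL -> ImC coordinate map
```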
  • As described above, in the first embodiment, image registration between the acquired reflected light image and the camera image ImC is performed using the background light image ImL, which has the same coordinate system as the reflected light image.
  • That is, by comparing the feature points FPa of the background light image ImL, whose properties are closer to those of the camera image ImC than the reflected light image's, with the feature points FPb of the camera image ImC, it becomes easy to identify the correspondence between the feature points FPa and FPb.
  • Since such an association allows the coordinate system of the reflected light image and the coordinate system of the camera image ImC to be accurately matched, the processing accuracy of an application using both the reflected light image and the camera image ImC can be remarkably improved.
  • Also, in the first embodiment, the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC are detected.
  • The correspondence between the detected feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC is identified.
  • Each pixel of one of the background light image ImL and the camera image ImC is made to correspond to each pixel of the other based on the identification result. That is, since the correspondence between the feature points FPa and FPb is identified first and the coordinates of the individual pixels are then associated using that result, highly accurate image registration can be performed while suppressing the processing amount or improving the processing speed, compared with exhaustively identifying the correspondence of every pixel.
  • Also, in the first embodiment, the difference in where a corresponding point appears on each image, which results from the relative positions of the distance measuring sensor 10 and the external camera 20, is taken into consideration. Such consideration improves the accuracy of identifying the correspondence between the feature points FPa and FPb.
  • Specifically, the epipolar line EL corresponding to a feature point FPb of the projection source, of the background light image ImL and the camera image ImC, is projected onto the projection-destination image. Then, a feature point FPa of the projection destination located within the band-shaped determination region JA having the predetermined allowable width W along the epipolar line EL is determined to be a point corresponding to the feature point FPb of the projection source.
  • By providing the allowable width W, it is possible to identify the correspondence between the feature points FPa and FPb while absorbing errors such as the projection error between the background light image ImL and the camera image ImC, improving the identification accuracy.
  • Also, in the first embodiment, the allowable width W is set according to the amount of deviation assumed between the measurement timing of the background light image ImL and the measurement timing of the camera image ImC. Even if an object that appears in both images and constitutes feature points FPa and FPb moves during the measurement timing deviation Δt, the corresponding feature points FPa and FPb are still identified as long as the feature point FPa of the object lies within the determination region JA of allowable width W. Therefore, the accuracy in identifying the correspondence can be improved.
  • Also, in the first embodiment, the SIFT feature quantity is used as a feature quantity that is highly invariant with respect to scale.
  • By using scale-invariant SIFT features, erroneous determination of the correspondence can be suppressed even if a difference arises in the detection levels (detection sensitivities) of the feature points FPa and FPb because the camera image ImC has a higher resolution than the background light image ImL. Therefore, the accuracy in identifying the correspondence can be improved.
  • Also, in the image generation system 100 of the first embodiment, image registration of the reflected light image and the camera image ImC is performed using the background light image ImL of the same coordinate system as the reflected light image. That is, in the comparison between the feature points FPa of the background light image ImL, whose properties are closer to those of the camera image ImC than the reflected light image's, and the feature points FPb of the camera image ImC, it is easy to identify the correspondence between the feature points FPa and FPb. With such an association, the coordinate system of the reflected light image and the coordinate system of the camera image ImC can be accurately matched.
  • Then, the distance information and the camera image ImC information, which come from the different image generation sources (the distance measuring sensor 10 and the external camera 20), can be provided in the form of a composite image that is easy for an application to process. Therefore, the processing accuracy of an application using both the reflected light image and the camera image ImC can be remarkably enhanced.
  • Also, according to the image registration method of the first embodiment, the feature points FPa of the prepared background light image ImL and the feature points FPb of the camera image ImC are detected.
  • The correspondence between the detected feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC is identified.
  • Each pixel of one of the background light image ImL and the camera image ImC is made to correspond to each pixel of the other based on the identification result.
  • That is, the image registration between the reflected light image and the camera image ImC is performed using the background light image ImL, which has the same coordinate system as the reflected light image and whose properties are closer to those of the camera image ImC than the reflected light image's, so the correspondence between the feature points FPa and FPb is easy to identify.
  • Likewise, according to the image registration program of the first embodiment, the feature points FPa of the acquired background light image ImL and the feature points FPb of the camera image ImC are detected.
  • The correspondence between the detected feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC is identified.
  • Each pixel of one of the background light image ImL and the camera image ImC is made to correspond to each pixel of the other based on the identification result.
  • That is, the image registration between the reflected light image and the camera image ImC is performed using the background light image ImL, which has the same coordinate system as the reflected light image and whose properties are closer to those of the camera image ImC than the reflected light image's, so the correspondence between the feature points FPa and FPb is easy to identify.
  • The second embodiment is a modification of the first embodiment.
  • The second embodiment will be described focusing on the points that differ from the first embodiment.
  • In the second embodiment, the functions of the image processing ECU 30 and the driving support ECU 50 of the first embodiment are integrated into one ECU, configured as a driving support ECU 230. Therefore, in the second embodiment, the driving support ECU 230 corresponds to the image registration device. Furthermore, in the driving support ECU 230 of the second embodiment, the image registration function can be said to form part of a highly accurate surroundings recognition function, so the driving support ECU 230 also corresponds to a surrounding environment recognition device that recognizes the surrounding environment of the vehicle 1.
  • The driving support ECU 230 has a processing unit 31, a RAM 32, a storage unit 33, an input/output interface 34, and the like, similarly to the image processing ECU 30 of the first embodiment.
  • The driving support ECU 230 of the second embodiment constructs a plurality of functional units by the processing unit 31 executing the image registration program and the object recognition program stored in the storage unit 33. Specifically, as shown in FIG. 9, functional units such as an image acquisition unit 41, an image processing unit 242, and an object recognition unit 48 are constructed in the driving support ECU 230.
  • The image acquisition unit 41 is the same as in the first embodiment.
  • The object recognition unit 48 performs object recognition using semantic segmentation, with an object recognition model 48a similar to that of the first embodiment.
  • The image processing unit 242 of the second embodiment has a feature point detection function, a correspondence identification function, and a coordinate matching function, as in the first embodiment.
  • However, the second embodiment differs from the first in that the feature point detection function takes into account the ratio between the resolution of the background light image ImL and the resolution of the camera image ImC (hereinafter, the resolution ratio), and in that the SIFT feature quantity is not used in the correspondence identification function.
  • The storage unit 33 of the driving support ECU 230 is provided with a sensor database (hereinafter, sensor DB) 243c.
  • The sensor DB 243c stores information on the various sensors and cameras mounted on the vehicle 1. This information includes information on the specifications of the light receiving element 12a of the distance measuring sensor 10 and information on the specifications of the camera element 22a of the external camera 20.
  • The information on the specifications of the light receiving element 12a includes its resolution, and the information on the specifications of the camera element 22a includes its resolution. From these resolutions, the image processing unit 242 can grasp the resolution ratio.
  • the Harris corner detector 243a of the second embodiment uses, based on the resolution ratio, a scale parameter for detecting the feature points FPa of the background light image ImL that differs from the scale parameter for detecting the feature points FPb of the camera image ImC. Specifically, in the feature point detection of the camera image ImC, which has a higher resolution than the background light image ImL, at least one of the size of the evaluation target area and the kernel size of the gradient detection filter, both scale parameters, is reduced compared with the case of the background light image ImL. The detection levels of the feature points FPa and FPb can thereby be brought close to each other between the background light image ImL and the camera image ImC.
  • as a result, even without using SIFT in the correspondence identification function, it becomes easy to specify the correspondence between the detected feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC. In other words, the correspondence can be identified with high accuracy.
  • the Harris corner detectors 243a and 243b, as feature point detectors for detecting the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC, have scale parameters that affect the detection scale.
  • based on the ratio between the resolution of the background light image ImL and the resolution of the camera image ImC, the scale parameter used for detecting the feature points FPa of the background light image ImL is made different from the scale parameter used for detecting the feature points FPb of the camera image ImC.
  • accordingly, the detection levels of the feature points FPa and FPb can be brought close to each other between the background light image ImL and the camera image ImC. Since feature points FPa and FPb detected at close levels can be compared with each other, the accuracy in specifying the correspondence can be improved. A rough sketch of this idea follows.
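As an illustration of how such resolution-ratio-aware scale parameters might look in practice, the following minimal sketch uses OpenCV's cornerHarris, whose blockSize and ksize arguments correspond to the two scale parameters named above. The function layout, the parameter values, and the example resolutions are assumptions for illustration, not values taken from this disclosure.

```python
import cv2
import numpy as np

# Stand-in images (in practice: the background light image ImL and the
# camera image ImC as grayscale arrays).
bg_light_image = np.random.randint(0, 255, (240, 320), np.uint8)
camera_image = np.random.randint(0, 255, (960, 1280), np.uint8)

def harris_corners(gray, block_size, sobel_ksize, k=0.04, rel_thresh=0.01):
    # block_size = size of the evaluation target area; sobel_ksize = kernel
    # size of the gradient detection filter: the two scale parameters.
    resp = cv2.cornerHarris(np.float32(gray), block_size, sobel_ksize, k)
    return np.argwhere(resp > rel_thresh * resp.max())  # (row, col) corners

# Resolution ratio grasped from the sensor DB (hypothetical values).
ratio = camera_image.shape[1] // bg_light_image.shape[1]  # here: 4

# Per the description above, at least one scale parameter for the
# higher-resolution camera image is reduced relative to the parameter used
# for the background light image, so FPa and FPb come out at comparable
# detection levels.
fpa = harris_corners(bg_light_image, block_size=2 * ratio, sobel_ksize=7)
fpb = harris_corners(camera_image, block_size=2, sobel_ksize=3)
```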
  • the distance measuring sensor 10 and the external camera 20 may form an integrated sensor unit.
  • an image registration device such as the image processing ECU 30 of the first embodiment may be included as a component of this sensor unit.
  • the image processing ECU 30 may include an object recognition unit 48 as in the second embodiment and recognize the surrounding environment of the vehicle 1.
  • the analysis information obtained by the image processing ECU 30 recognizing the surrounding environment of the vehicle 1 may be provided to the driving support ECU 50 having a driving support function or the like.
  • the image processing unit 42 does not have to integrate the reflected light image, the background light image ImL, and the camera image ImC into a multi-channel composite image and output it.
  • the image processing unit 42 may output the reflected light image, the background light image ImL, and the camera image ImC as separate image data and, in addition to these image data, output coordinate correspondence data indicating the correspondence between the coordinates of each image.
  • the image processing unit 42 may output the image-registered reflected light image and the camera image ImC without outputting the background light image ImL.
  • the camera image ImC may be a grayscale image instead of a color image.
  • the object recognition using the image-registered reflected light image and the camera image ImC does not have to be the object recognition using semantic segmentation.
  • the object recognition may be, for example, object recognition using a bounding box.
  • the image-registered reflected light image and the camera image ImC may be used for applications other than object recognition in the vehicle 1.
  • the distance measuring sensor 10 and the camera 20 may be installed in a conference room, and the image-registered reflected light image and camera image ImC may be used in a communication application for video conferencing.
  • information on the orientation for imparting rotational invariance may be added to the detected SIFT feature amount.
  • the orientation information is useful, for example, in a situation where the inclination of the installation surface of the distance measuring sensor 10 differs from that of the installation surface of the camera 20.
  • the epipolar line EL corresponding to a feature point FPb of the camera image ImC may be projected onto the background light image ImL, or the epipolar line EL corresponding to a feature point FPa of the background light image ImL may be projected onto the camera image ImC.
  • an F matrix (Fundamental matrix) may be used in place of the E matrix to project the epipolar line EL.
  • the F matrix is useful in situations where the distance measuring sensor 10 and the camera 20 have not been calibrated; a rough sketch follows.
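A minimal sketch of this uncalibrated case, assuming OpenCV; the matched point pairs, the RANSAC choice, and all coordinates are illustrative, not from the disclosure. With the fundamental matrix F estimated from point matches, the epipolar line in the background light image for a camera-image feature point is l = F x.

```python
import cv2
import numpy as np

# Hypothetical previously matched pixel coordinates (camera image vs.
# background light image); at least 8 pairs for the 8-point algorithm.
pts_cam = np.float32([[100, 80], [220, 40], [320, 200], [50, 300],
                      [400, 120], [150, 150], [260, 260], [380, 310]])
pts_bgl = np.float32([[30, 25], [70, 12], [101, 63], [15, 95],
                      [126, 38], [47, 47], [82, 82], [119, 98]])

# Estimate F without any calibration of the sensor/camera pair.
F, mask = cv2.findFundamentalMat(pts_cam, pts_bgl, cv2.FM_RANSAC)

# Epipolar line in the background light image for one camera-image feature
# point FPb, in homogeneous form a*u + b*v + c = 0.
fpb = np.array([210.0, 95.0, 1.0])
a, b, c = F @ fpb
```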
  • the image processing unit 42 may perform image registration not only on an image generated by the distance measuring sensor 10 and an image generated by the camera 20, but also on an additional image generated by a millimeter-wave radar or the like.
  • each function provided by the image processing ECU 30 can be provided by software and hardware that executes it, by software alone, by hardware alone, or by a combination thereof. When such functions are provided by electronic circuits as hardware, each function can also be provided by a digital circuit including many logic circuits, or by an analog circuit.
  • the form of the storage medium that stores the image registration program and the like capable of realizing the above image registration method may be changed as appropriate.
  • the storage medium is not limited to one provided on a circuit board; it may be provided in the form of a memory card or the like, inserted into a slot portion, and electrically connected to the control circuit of the image processing ECU 30. Further, the storage medium may be an optical disk or a hard disk serving as the copy source of the program of the image processing ECU 30.
  • the control unit and the method thereof described in the present disclosure may be realized by a dedicated computer comprising a processor programmed to execute one or more functions embodied by computer programs.
  • the apparatus and method thereof described in the present disclosure may be realized by a dedicated hardware logic circuit.
  • the apparatus and method thereof described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits.
  • the computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions executed by a computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

This image registration device is communicably connected to a distance measurement sensor (10), which generates a reflected light image including distance information when a light receiving element detects light reflected by an object irradiated with light, and which generates a background light image with the same coordinate system as the reflected light image when said light receiving element detects background light corresponding to the reflected light; the image registration device is also communicably connected to a camera (20), which generates a camera image when a camera element detects incident external light. The image registration device is provided with an image acquisition unit (41), which acquires the reflected light image, the background light image, and the camera image, and an image processing unit (42), which performs image registration of the reflected light image, having the same coordinate system as the background light image, and the camera image by specifying the relation between feature points in the background light image and feature points in the camera image.

Description

Image registration device, image generation system, image registration method and image registration program

Cross-reference of related applications
 This application is based on Japanese Patent Application No. 2019-164860 filed in Japan on September 10, 2019, the contents of which are incorporated herein by reference in their entirety.
 The disclosure in this specification relates to an image registration device, an image generation system, an image registration method, and an image registration program.
 Patent Document 1 discloses a distance measuring sensor. This distance measuring sensor can generate a reflected light image including distance information when a light receiving element senses reflected light returned from an object irradiated with light. Patent Document 2 discloses a camera. The camera can generate a high-resolution camera image when a camera element detects incident light from the outside.
Patent Document 1: JP 2019-95452 A. Patent Document 2: JP 2018-69878 A.
 The reflected light image and the camera image can be processed by an application. However, a deviation Δt may occur in detection timing between the reflected light image and the camera image. If an object appearing in the reflected light image and the camera image moves during the deviation Δt, it becomes difficult to accurately associate the object appearing in the reflected light image with the object appearing in the camera image during processing. Therefore, even if an application uses both the reflected light image and the camera image, the information in these images cannot be fully utilized, and the processing accuracy cannot be sufficiently improved.
 One object of the disclosure in this specification is to provide an image registration device, an image generation system, an image registration method, and an image registration program that enhance the processing accuracy of an application.
 One of the aspects disclosed herein is an image registration device communicably connected to a distance measuring sensor, which generates a reflected light image including distance information when a light receiving element senses reflected light returned from an object irradiated with light, and which generates a background light image having the same coordinate system as the reflected light image when the light receiving element senses background light corresponding to the reflected light, and communicably connected to a camera, which generates a camera image of higher resolution than the reflected light image and the background light image when a camera element detects incident light from the outside, the image registration device comprising:
 an image acquisition unit that acquires the reflected light image, the background light image, and the camera image; and
 an image processing unit that performs image registration between the reflected light image, which has the same coordinate system as the background light image, and the camera image by specifying the correspondence between feature points of the background light image and feature points of the camera image.
 According to such an aspect, image registration between the acquired reflected light image and the camera image is performed using the background light image having the same coordinate system as the reflected light image. That is, when the feature points of the background light image, whose properties are closer to those of the camera image than those of the reflected light image, are compared with the feature points of the camera image, the correspondence between feature points becomes easy to identify. With such an association, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately aligned, so the processing accuracy of an application that uses both the reflected light image and the camera image can be remarkably improved.
 Another disclosed aspect is an image generation system that generates images to be processed by an application, comprising:
 a distance measuring sensor that generates a reflected light image including distance information when a light receiving element senses reflected light returned from an object irradiated with light, and that generates a background light image having the same coordinate system as the reflected light image when the light receiving element senses background light corresponding to the reflected light;
 a camera that generates a camera image of higher resolution than the reflected light image and the background light image when a camera element detects incident light from the outside; and
 an image processing unit that, by specifying the correspondence between feature points of the background light image and feature points of the camera image, performs image registration between the reflected light image, which has the same coordinate system as the background light image, and the camera image, and generates a composite image in which the distance information and the information of the camera image are integrated.
 According to such an aspect, image registration between the reflected light image and the camera image is performed using the background light image having the same coordinate system as the reflected light image. That is, when the feature points of the background light image, whose properties are closer to those of the camera image than those of the reflected light image, are compared with the feature points of the camera image, the correspondence between feature points becomes easy to identify. With such an association, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately aligned. Then, the distance information and the information of the camera image, which come from different image generation sources (the distance measuring sensor and the camera), can be provided in the form of a composite image that is easy for an application to process. Therefore, the processing accuracy of an application that uses both the reflected light image and the camera image can be remarkably improved.
 As another disclosed aspect, an image registration method includes:
 preparing a reflected light image generated by a distance measuring sensor, including distance information obtained when a light receiving element senses reflected light returned from an object irradiated with light, and a background light image having the same coordinate system as the reflected light image, obtained when the light receiving element senses background light corresponding to the reflected light;
 preparing a camera image generated by a camera, having a higher resolution than the reflected light image and the background light image, obtained when a camera element detects incident light from the outside;
 detecting feature points of the background light image and feature points of the camera image;
 identifying the correspondence between the detected feature points of the background light image and the feature points of the camera image; and
 making each pixel of one of the background light image and the camera image correspond to each pixel of the other based on the result of identifying the correspondence.
 According to such an aspect, the feature points of the prepared background light image and the feature points of the camera image are each detected. Then, the correspondence between the detected feature points of the background light image and the feature points of the camera image is identified. Based on the result of identifying the correspondence, each pixel of one of the background light image and the camera image is made to correspond to each pixel of the other. In this way, image registration between the reflected light image and the camera image is performed using the background light image, which has the same coordinate system as the reflected light image and whose properties are closer to those of the camera image than those of the reflected light image, so the correspondence between feature points becomes easy to identify. With such an association, the coordinate system of the reflected light image and the coordinate system of the camera image can be accurately aligned, so the processing accuracy of an application that uses both images can be remarkably improved. Moreover, since the coordinates of each pixel are matched using the previously identified feature point correspondences, image registration with high accuracy can be performed while suppressing the processing load or improving the processing speed, compared with blindly identifying a correspondence for every pixel.
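As a purely illustrative aside (the disclosure does not commit to any particular transform for this pixel-level matching), one generic way to realize such a coordinate correspondence is to fit a homography to the matched feature points and resample one image into the coordinate system of the other. The coordinates and image sizes below are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical matched coordinates: feature points FPa in the background
# light image and their FPb counterparts in the camera image (>= 4 pairs).
fpa = np.float32([[12, 20], [300, 18], [295, 230], [15, 225]])
fpb = np.float32([[48, 85], [1210, 70], [1195, 930], [55, 910]])

# H maps background-light-image coordinates to camera-image coordinates.
H, inliers = cv2.findHomography(fpa, fpb, cv2.RANSAC)

# Resample the camera image into the coordinate system of the background
# light image (which the reflected light image shares), pixel for pixel.
camera_image = np.zeros((960, 1280, 3), np.uint8)  # stand-in camera image
registered = cv2.warpPerspective(camera_image, H, (320, 240),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```

A single homography assumes an approximately planar or distant scene; for close, non-planar scenes a denser warp would be needed, which is one reason the claimed method specifies only the per-pixel correspondence, not the transform.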
 Another disclosed aspect is an image registration program that performs image registration between an image generated by a distance measuring sensor and an image generated by a camera, causing at least one processing unit to execute:
 a process of acquiring a reflected light image generated by the distance measuring sensor, including distance information obtained when a light receiving element senses reflected light returned from an object irradiated with light, and a background light image having the same coordinate system as the reflected light image, obtained when the light receiving element senses background light corresponding to the reflected light;
 a process of acquiring a camera image generated by the camera, having a higher resolution than the reflected light image and the background light image, obtained when a camera element detects incident light from the outside;
 a process of detecting feature points of the background light image and feature points of the camera image;
 a process of identifying the correspondence between the detected feature points of the background light image and the feature points of the camera image; and
 a process of making each pixel of one of the background light image and the camera image correspond to each pixel of the other based on the result of identifying the correspondence.
 According to such an aspect, the feature points of the acquired background light image and the feature points of the camera image are each detected. Then, the correspondence between the detected feature points of the background light image and the feature points of the camera image is identified, and based on the result, each pixel of one of the background light image and the camera image is made to correspond to each pixel of the other. In this way, image registration between the reflected light image and the camera image is performed using the background light image, which has the same coordinate system as the reflected light image and whose properties are closer to those of the camera image, so the correspondence between feature points is easy to identify, the two coordinate systems can be accurately aligned, and the processing accuracy of an application using both images can be remarkably improved. Moreover, since the coordinates of each pixel are matched using the previously identified feature point correspondences, image registration with high accuracy can be performed while suppressing the processing load or improving the processing speed, compared with blindly identifying a correspondence for every pixel.
 Note that the reference numerals in parentheses in the claims and the like exemplify the correspondence with parts of the embodiments described later, and are not intended to limit the technical scope.
Brief description of the drawings:
FIG. 1 shows the overall image generation system and driving support ECU of the first embodiment.
FIG. 2 shows the mounting state of the distance measuring sensor and the external camera of the first embodiment on the vehicle.
FIG. 3 is a configuration diagram showing the configuration of the image processing ECU of the first embodiment.
FIG. 4A illustrates the detection of feature points in the background light image of the first embodiment.
FIG. 4B shows the background light image included in FIG. 4A converted into a line drawing.
FIG. 5A illustrates the detection of feature points in the camera image of the first embodiment.
FIG. 5B shows the camera image included in FIG. 5A converted into a line drawing.
FIG. 6A illustrates the identification of the correspondence between feature points in the first embodiment.
FIG. 6B shows the camera image and background light image included in FIG. 6A converted into line drawings.
FIG. 7 illustrates the coordinate matching of the first embodiment.
FIG. 8 is a flowchart for explaining the processing of the image processing ECU of the first embodiment.
FIG. 9 is a diagram of the second embodiment corresponding to FIG. 3.
 Hereinafter, a plurality of embodiments will be described with reference to the drawings. In each embodiment, corresponding components are given the same reference numerals, and duplicate description may be omitted. When only a part of a configuration is described in an embodiment, the configuration of another previously described embodiment can be applied to the remaining parts of that configuration. Further, in addition to the combinations of configurations explicitly stated in the description of each embodiment, configurations of a plurality of embodiments can be partially combined even if not explicitly stated, as long as the combination poses no problem.
 (First Embodiment)
 As shown in FIG. 1, the image registration device according to the first embodiment of the present disclosure is an image processing ECU (Electronic Control Unit) 30 used in a vehicle 1, as a moving body, and configured to be mounted on the vehicle 1. The image processing ECU 30 constitutes an image generation system 100 together with a distance measuring sensor 10 and an external camera 20. The image generation system 100 of the present embodiment can generate peripheral monitoring image information integrating the measurement results of the distance measuring sensor 10 and the external camera 20, and provide it to a driving support ECU 50 and the like.
 The image processing ECU 30 is communicably connected to the communication bus of an in-vehicle network mounted on the vehicle 1. The image processing ECU 30 is one of a plurality of nodes provided in the in-vehicle network. The distance measuring sensor 10, the external camera 20, the driving support ECU 50, and the like are each connected as nodes to the communication bus of the in-vehicle network.
 The driving support ECU 50 is configured mainly of a computer including a processor, a RAM (Random Access Memory), a storage unit, an input/output interface, and a bus connecting these. The driving support ECU 50 has at least one of a driving support function that assists the driver's driving operation of the vehicle 1 and an automated driving function that can perform the driving operation in place of the driver. By executing programs stored in the storage unit with the processor, the driving support ECU 50 recognizes the surrounding environment of the vehicle 1 based on the peripheral monitoring image information acquired from the image generation system 100, and realizes automated driving or advanced driving support of the vehicle 1 according to the recognition result.
 Next, the distance measuring sensor 10, the external camera 20, and the image processing ECU 30 included in the image generation system 100 will each be described in detail, in that order.
 The distance measuring sensor 10 is, for example, a SPAD LiDAR (Single Photon Avalanche Diode Light Detection And Ranging) arranged at the front of the vehicle 1 or on the roof of the vehicle 1. The distance measuring sensor 10 can measure at least the measurement range MA1 in front of the vehicle 1.
 The distance measuring sensor 10 includes a light emitting unit 11, a light receiving unit 12, a control unit 13, and the like. The light emitting unit 11 irradiates the light beam emitted from a light source toward the measurement range MA1 shown in FIG. 2 by scanning it with a movable optical member (for example, a polygon mirror). The light source is, for example, a semiconductor laser diode and, in response to an electric signal from the control unit 13, emits a light beam in the near-infrared range that is invisible to occupants and to people in the outside world.
 The light receiving unit 12 collects, for example with a condensing lens, the reflected light returned from an object within the measurement range MA1 by the irradiated light beam, or the background light corresponding to the reflected light, and causes it to enter the light receiving element 12a.
 The light receiving element 12a is an element that converts light into an electric signal by photoelectric conversion; here it is a SPAD light receiving element that achieves high sensitivity by amplifying the detection voltage. For the light receiving element 12a, a CMOS sensor whose sensitivity in the near-infrared range is set higher than in the visible range is adopted, for example in order to detect near-infrared reflected light. This sensitivity can also be adjusted by providing an optical filter in the light receiving unit 12. The light receiving element 12a has a plurality of light receiving pixels arranged in an array in one or two dimensions.
 The control unit 13 controls the light emitting unit 11 and the light receiving unit 12. The control unit 13 is arranged, for example, on a substrate shared with the light receiving element 12a, and is configured mainly of a processor in the broad sense, such as a microcomputer or an FPGA (Field-Programmable Gate Array). The control unit 13 realizes a scanning control function, a reflected light measurement function, and a background light measurement function.
 The scanning control function controls light beam scanning. The control unit 13 causes the light source to emit the light beam in pulses a plurality of times, and operates the movable optical member, at timings based on the operating clock of a clock oscillator provided in the distance measuring sensor 10.
 The reflected light measurement function reads out, in step with the light beam scanning and using, for example, a rolling shutter method, the voltage value based on the reflected light received by each light receiving pixel, and measures the intensity of the reflected light. In measuring the reflected light, the distance from the distance measuring sensor 10 to the object that reflected the light can be measured by detecting the time difference between the emission timing of the light beam and the reception timing of the reflected light. By measuring the reflected light, the control unit 13 can generate a reflected light image: image-like data in which the intensity of the reflected light and the distance information of the reflecting object are associated with two-dimensional coordinates on an image plane corresponding to the measurement range MA1.
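As a side note on the distance computation mentioned here, the underlying relation is the standard time-of-flight formula (general physics, not a formula recited in this disclosure). With c the speed of light and Δt_flight the delay between the emission timing of the light beam and the reception timing of the reflected light:

```latex
d \;=\; \frac{c \cdot \Delta t_{\text{flight}}}{2}
```

The division by two accounts for the round trip of the light from the sensor to the object and back.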
 The background light measurement function reads out, at the timing immediately before measuring the reflected light, the voltage value based on the background light received by each light receiving pixel, and measures the intensity of the background light. Here, background light means incident light entering the light receiving element 12a from the measurement range MA1 in the outside world, containing substantially no reflected light. The incident light includes natural light, display light entering from displays in the outside world, and the like. By measuring the background light, the control unit 13 can generate a background light image ImL: image-like data in which the intensity of the background light is associated with two-dimensional coordinates on the image plane corresponding to the measurement range MA1.
 The reflected light image and the background light image ImL are sensed by the common light receiving element 12a and acquired through the common optical system including it. Therefore, the coordinate system of the reflected light image and the coordinate system of the background light image ImL can be regarded as the same, mutually coincident coordinate system. Furthermore, there is almost no difference in measurement timing between the reflected light image and the background light image ImL (for example, less than 1 ns), so the two images can also be regarded as synchronized.
 For example, in the present embodiment, integrated image data in which three channels of data (the intensity of the reflected light, the distance of the object, and the intensity of the background light) are stored for each pixel is sequentially output to the image processing ECU 30 as a sensor image.
 The external camera 20 is, for example, a camera arranged on the vehicle interior side of the front windshield of the vehicle 1. The external camera 20 can measure at least the measurement range MA2 in front of the vehicle 1, and more specifically a measurement range MA2 that overlaps the measurement range MA1 of the distance measuring sensor 10 at least partially.
 The external camera 20 includes a light receiving unit 22 and a control unit 23. The light receiving unit 22 collects the incident light (background light) entering from the measurement range MA2 outside the camera, for example with a light receiving lens, and causes it to enter the camera element 22a.
 The camera element 22a is an element that converts light into an electric signal by photoelectric conversion; for example, a CCD sensor or a CMOS sensor can be adopted. In the camera element 22a, the sensitivity in the visible range is set higher than in the near-infrared range in order to efficiently receive natural light in the visible range. The camera element 22a has a plurality of light receiving pixels (corresponding to so-called sub-pixels) arranged in a two-dimensional array. Red, green, and blue color filters, for example, are arranged on mutually adjacent light receiving pixels, and each light receiving pixel receives visible light of the color corresponding to its filter. By measuring the red, green, and blue intensities, the camera image ImC captured by the external camera 20 can be a color image of the visible range with a higher resolution than the reflected light image and the background light image ImL.
 The control unit 23 controls the light receiving unit 22. The control unit 23 is arranged, for example, on a substrate shared with the camera element 22a, and is configured mainly of a processor in the broad sense, such as a microcomputer or an FPGA. The control unit 23 realizes an imaging function.
 The imaging function captures the color image described above. The control unit 23 reads out the voltage value based on the incident light received by each light receiving pixel, using for example a global shutter method, at timings based on the operating clock of a clock oscillator provided in the external camera 20, and senses and measures the intensity of the incident light. This clock oscillator is provided independently of the clock oscillator of the distance measuring sensor 10. The control unit 23 can generate a camera image ImC: image-like data in which the intensity of the incident light is associated with two-dimensional coordinates on an image plane corresponding to the measurement range MA2. Such camera images ImC are sequentially output to the image processing ECU 30.
 The distance measuring sensor 10 and the external camera 20 operate on separate clock oscillators, and their measurement timing cycles (that is, frame rates) do not necessarily match and often differ. Therefore, a measurement timing deviation Δt can occur between the reflected light image and background light image ImL on the one hand and the camera image ImC on the other. Δt can be 1000 times or more the measurement timing deviation between the reflected light image and the background light image ImL.
 The image processing ECU 30 is an electronic control device that performs combined image processing of the reflected light image, the background light image ImL, and the camera image ImC. The image processing ECU 30 is configured mainly of a computer including a processing unit 31, a RAM 32, a storage unit 33, an input/output interface 34, and a bus connecting these. The processing unit 31 is hardware for arithmetic processing coupled with the RAM 32, and includes at least one arithmetic core such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a RISC (Reduced Instruction Set Computer) processor. The processing unit 31 may further include an FPGA, an IP core with other dedicated functions, and the like. The RAM 32 may include a video RAM for image generation. By accessing the RAM 32, the processing unit 31 executes various processes for realizing the functions of the functional units described later. The storage unit 33 includes a non-volatile storage medium, in which various programs executed by the processing unit 31 (the image registration program and the like) are stored.
 The image processing ECU 30 has a plurality of functional units for performing image registration, realized by the processing unit 31 executing the image registration program stored in the storage unit 33. Specifically, as shown in FIG. 3, functional units such as an image acquisition unit 41 and an image processing unit 42 are constructed in the image processing ECU 30.
 The image acquisition unit 41 acquires the reflected light image and the background light image ImL from the distance measuring sensor 10, and acquires the camera image ImC from the external camera 20. The image acquisition unit 41 sequentially provides the latest set of the reflected light image and background light image ImL, together with the latest camera image ImC, to the image processing unit 42.
 The image processing unit 42 performs image registration between the camera image ImC and the reflected light image, which has the same coordinate system as the background light image ImL, by specifying the correspondence between the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC. In particular, given the sensor image storing the three channels of reflected light intensity, object distance, and background light intensity, and the camera image ImC, a high-resolution color image of the visible range, the image processing unit 42 of the present embodiment outputs a composite image storing four or more channels of data: the reflected light intensity, the object distance, the background light intensity, and the color information. In the present embodiment, the color information consists of three channels (red, green, and blue intensity), so the composite image stores six channels of data, as sketched below.
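A minimal sketch of such channel packing, assuming NumPy arrays already registered to a common coordinate system; the shapes and variable names are illustrative, not from the disclosure.

```python
import numpy as np

h, w = 240, 320  # geometry of the background light image coordinate system

reflect_intensity = np.zeros((h, w), np.float32)    # channel 1
distance          = np.zeros((h, w), np.float32)    # channel 2
background_light  = np.zeros((h, w), np.float32)    # channel 3
camera_rgb        = np.zeros((h, w, 3), np.float32) # channels 4-6, the camera
                                                    # image already remapped

# Stack everything along the channel axis into one (h, w, 6) composite image.
composite = np.dstack([reflect_intensity, distance,
                       background_light, camera_rgb])
assert composite.shape == (h, w, 6)
```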
 The image processing unit 42 has a feature point detection function, a correspondence identification function, and a coordinate matching function. In the image registration, the feature point detection function realizes the first phase of processing, the correspondence identification function realizes the second phase following the first, and the coordinate matching function realizes the third phase following the second.
 The feature point detection function detects the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC. Corners, for example, can be adopted as the feature points FPa and FPb. Various feature point detection methods using feature point detectors can be adopted; in the present embodiment in particular, the Harris corner detection method with Harris corner detectors 43a and 43b is adopted.
 The Harris corner detectors 43a and 43b detect the feature points FPa and FPb from the eigenvalues of the structure tensor obtained when the weighted sum of squared intensity differences caused by shifting the position within the evaluation target region is approximated, via a Taylor expansion, by that tensor. By evaluating the determinant of the structure tensor and the sum of its eigenvalues (the trace), the Harris corner detectors 43a and 43b can discriminate whether the evaluation target region is a corner (corresponding to a feature point FPa or FPb), an edge, or flat.
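In standard notation (textbook Harris corner theory, consistent with the description above, with w the weighting over the evaluation target region W and I_x, I_y the image gradients), the structure tensor and the corner response can be written as:

```latex
M = \sum_{(x,y)\in W} w(x,y)
    \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix},
\qquad
R = \det M - k\,(\mathrm{tr}\, M)^2
```

Two large eigenvalues of M (R clearly positive) indicate a corner, one large eigenvalue indicates an edge, and two small eigenvalues indicate a flat region; using the determinant and the trace makes this evaluation possible without an explicit eigendecomposition.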
 As shown in FIGS. 4A and 4B, the Harris corner detector 43a detects a plurality of feature points FPa in the background light image ImL. As shown in FIGS. 5A and 5B, the Harris corner detector 43b detects a plurality of feature points FPb in the camera image ImC. In FIGS. 4A to 5B, the feature points FPa and FPb are schematically represented by cross marks; in practice, many more feature points FPa and FPb are detected.
 The Harris corner detectors 43a and 43b have one or more parameters (two in the present embodiment) that affect the scale, in other words, parameters with low invariance to scale. The first parameter is the size of the evaluation target region. The second is the kernel size of the gradient detection filter (for example, a Sobel gradient filter).
 Because the resolution of the background light image ImL and the resolution of the camera image ImC differ with respect to these scale-affecting parameters, the Harris corner detectors 43a and 43b usually detect different numbers of feature points FPa and FPb in the background light image ImL and the camera image ImC. Therefore, even if an overlapping range exists between the measurement range MA1 of the distance measuring sensor 10 and the measurement range MA2 of the external camera 20, the same number of feature points FPa and FPb is not necessarily detected within that overlapping range.
 Although the Harris corner detectors 43a and 43b are drawn in FIG. 3, for convenience, as one for the background light image ImL and one for the camera image ImC, they may be provided in common (by a common program) for the two images. Alternatively, only a highly versatile part of the processing of the Harris corner detectors 43a and 43b may be shared between the background light image ImL and the camera image ImC.
 The correspondence identification function identifies the correspondence between the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC. In the present embodiment, in addition to the detected feature points FPa and FPb not being equal in number, the positional relationship among the feature points FPa in the background light image ImL may differ from that among the feature points FPb in the camera image ImC, and feature points FPa and FPb with no counterpart may be included; all of this increases the difficulty of identifying the correspondence.
 That is, the light receiving element 12a of the distance measuring sensor 10 and the camera element 22a of the external camera 20 are arranged at mutually different positions on the vehicle 1, as shown in FIG. 2, and their orientations may also differ. As a result, as described above, the positional relationship among the feature points FPa in the background light image ImL differs from that among the feature points FPb in the camera image ImC.
 Moreover, when used on a vehicle 1 moving at high speed as in the present embodiment, the position of an object appearing in the background light image ImL and its position in the camera image ImC may differ greatly because of the measurement timing deviation Δt, and an object may even appear in only one of the images. Differing positional relationships between the feature points FPa and FPb, and feature points without counterparts, therefore arise easily.
 To cope with this difficulty in identifying the correspondence, first, the image processing unit 42 identifies the correspondence using feature amounts obtained from the peripheral region containing each feature point FPa, FPb that are highly invariant to scale. Examples of such scale-invariant feature amounts include information on edge directions and the average value or ratio of some physical quantity over the peripheral region. In the present embodiment, information on extrema with respect to the degree of smoothing when a low-pass filter is applied to the peripheral region is adopted as the scale-invariant feature amount.
 For example, the image processing unit 42 of the present embodiment detects SIFT (Scale-Invariant Feature Transform) feature amounts (hereinafter, SIFT feature amounts) with SIFT feature amount detectors (hereinafter, feature amount detectors) 44a and 44b, and identifies the correspondence using the detected SIFT feature amounts. The feature amount detectors 44a and 44b apply a Gaussian filter, as the low-pass filter mentioned above, to the peripheral region containing each feature point FPa, FPb detected by the Harris corner detectors 43a and 43b.
 The feature amount detectors 44a and 44b vary the weight coefficient σ, corresponding to the standard deviation of the Gaussian filter, and search for local extrema in the peripheral region. Of the values of σ at which local extrema are found, the feature amount detectors 44a and 44b take at least the promising ones (excluding edges) as SIFT feature amounts that are highly invariant to scale.
 In this way, comparing the SIFT feature amount associated with each feature point FPa of the background light image ImL against the SIFT feature amount associated with each feature point FPb of the camera image ImC improves the matching accuracy between the feature points FPa and FPb; a sketch follows.
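A minimal sketch of this combination, computing SIFT descriptors at externally supplied corner locations so that descriptors rather than raw patches are compared between the two images. It assumes OpenCV 4.4 or later, where SIFT lives in the main module; the patch size and the dummy data are illustrative, and the disclosure highlights only the scale-extremum part of SIFT, so this stands in for the idea rather than reproducing it exactly.

```python
import cv2
import numpy as np

def sift_at_corners(gray, corners, patch_size=16.0):
    # Wrap (row, col) Harris corners as cv2.KeyPoint so that SIFT only
    # computes descriptors there instead of running its own detector.
    kps = [cv2.KeyPoint(float(c), float(r), patch_size) for r, c in corners]
    sift = cv2.SIFT_create()
    kps, desc = sift.compute(gray, kps)
    return kps, desc  # desc: one 128-dimensional vector per keypoint

# Illustrative call on a stand-in image with two corner locations.
img = np.random.randint(0, 255, (240, 320), np.uint8)
kps, desc = sift_at_corners(img, [(120, 160), (60, 80)])
```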
 Although the feature amount detectors 44a and 44b are drawn in FIG. 3, for convenience, as one for the background light image ImL and one for the camera image ImC, they may be provided in common (by a common program) for the two images, or only a highly versatile part of their processing may be shared.
 Second, the image processing unit 42 identifies the correspondences while taking into account the difference in where corresponding points appear in the images, which arises from the relative position between the distance measuring sensor 10 and the external camera 20. Based on epipolar geometry, the image processing unit 42 uses the epipolar line projector 45 to project the epipolar line EL corresponding to a feature point FPb of the camera image ImC onto the background light image ImL, as shown in FIGS. 6A and 6B. The epipolar line EL is the line along which the epipolar plane intersects the image plane. The epipolar plane is the plane passing through the optical center of the distance measuring sensor 10, the optical center of the external camera 20, and the three-dimensional point of the subject corresponding to the feature point FPb of the camera image ImC.
 In practice, the epipolar line projector 45 stores an E matrix (essential matrix) defined based on the position of the light receiving element 12a and the position of the camera element 22a. The E matrix is a matrix for mapping a point on the camera image ImC to a line (that is, the epipolar line EL) on the background light image ImL.
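 As a sketch of this projection, the fragment below maps camera-image feature points to epipolar lines in the background light image. It works in pixel coordinates, so the stored E matrix is first converted to a fundamental matrix F via assumed intrinsic matrices K_lidar and K_cam; this conversion step and all names here are illustrative assumptions, not the embodiment's exact procedure.

```python
import numpy as np
import cv2

# Assumed intrinsic matrices of the two imagers and the stored E matrix
# (identity placeholders; real values come from calibration).
K_lidar = np.eye(3)
K_cam = np.eye(3)
E = np.eye(3)

# Standard relation between the essential and fundamental matrices: F maps
# a pixel in the camera image ImC to a line a*x + b*y + c = 0 in the
# background light image ImL.
F = np.linalg.inv(K_lidar).T @ E @ np.linalg.inv(K_cam)

pts_b = np.float32([kp.pt for kp in kp_b]).reshape(-1, 1, 2)
# whichImage=1: the points belong to the first image of the pair (ImC),
# so the returned lines live in the second image (ImL). Each line is
# normalized so that a**2 + b**2 == 1.
lines = cv2.computeCorrespondEpilines(pts_b, 1, F)  # shape (N, 1, 3)
```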
 If the background light image ImL and the camera image ImC were synchronized, the feature point FPa of the background light image ImL that corresponds to a given feature point FPb of the camera image ImC would lie on the epipolar line EL produced by the epipolar line projector 45. In the present embodiment, however, there is a measurement timing deviation Δt between the background light image ImL and the camera image ImC, and an object captured in both images may move during that deviation Δt.
 For this reason, the image processing unit 42 narrows down the candidate feature points FPa using a determination region JA, a band-shaped region centered on the epipolar line EL and having a predetermined allowable width W. The allowable width W is set according to the amount of deviation expected between the measurement timing of the background light image ImL and the measurement timing of the camera image ImC. Specifically, the image processing unit 42 narrows the candidates down to the feature points FPa of the background light image ImL located inside the determination region JA, as candidates for the point corresponding to the feature point FPb of the camera image ImC from which the epipolar line EL was projected. Among the narrowed-down feature points FPa, the one whose SIFT feature is closest is determined to be the corresponding point. In this way, the image processing unit 42 identifies one-to-one individual correspondences between the feature points FPb of the camera image ImC and the feature points FPa of the background light image ImL. When the numbers of feature points FPa, FPb detected by the Harris corner detectors 43a, 43b do not match between the background light image ImL and the camera image ImC, some feature points FPa, FPb naturally have no counterpart found in the other image; such feature points are, as a result, not used for image registration and are excluded from subsequent processing.
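 Continuing the sketch, the band-shaped determination region JA and the descriptor comparison might be expressed as follows; the value of W, and the reuse of kp_a, desc_a, desc_b, and lines from the previous fragments, are illustrative assumptions. A production implementation would also enforce the one-to-one constraint, which this loop omits.

```python
import numpy as np

W = 10.0  # allowable width of the determination region JA, in pixels

pts_a = np.float32([kp.pt for kp in kp_a])  # FPa coordinates in ImL

matches = []
for i, (a, b, c) in enumerate(lines.reshape(-1, 3)):
    # With a**2 + b**2 == 1, |a*x + b*y + c| is the point-to-line distance.
    dist = np.abs(a * pts_a[:, 0] + b * pts_a[:, 1] + c)
    candidates = np.where(dist <= W / 2.0)[0]  # FPa inside the band JA
    if candidates.size == 0:
        continue  # no counterpart: excluded from the registration
    # Among the candidates, take the FPa whose SIFT descriptor is closest.
    d = np.linalg.norm(desc_a[candidates] - desc_b[i], axis=1)
    matches.append((int(candidates[np.argmin(d)]), i))  # (FPa idx, FPb idx)
```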
 The coordinate matching function associates each pixel of one of the background light image ImL and the camera image ImC with the corresponding pixel of the other, based on the result of identifying the correspondences between the feature points FPa, FPb. Specifically, as shown in FIG. 7, the image processing unit 42 can obtain the correspondence between the coordinate system of the background light image ImL and that of the camera image ImC by non-linearly and smoothly warping at least one of the two images, based on the positional relationship between the matched pairs of feature points FPa, FPb.
 For the coordinate matching, the image processing unit 42 performs TPS (Thin Plate Spline) warping using, for example, a TPS model. The TPS model takes the coordinates of the matched feature points FPa, FPb as covariates and performs the TPS computation. The TPS model thereby identifies, between the background light image ImL and the camera image ImC, the correspondence of each pixel that is not itself a feature point FPa, FPb.
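 A minimal TPS sketch using the ThinPlateSplineShapeTransformer from opencv-contrib (not included in the base opencv-python package) follows; the pairing of source and destination points from the matches list above, and the warp direction, are assumptions made to illustrate the idea rather than the embodiment's exact procedure.

```python
import numpy as np
import cv2  # requires opencv-contrib-python for the shape module

src = np.float32([pts_a[ia] for ia, ib in matches]).reshape(1, -1, 2)    # ImL
dst = np.float32([kp_b[ib].pt for ia, ib in matches]).reshape(1, -1, 2)  # ImC

tps = cv2.createThinPlateSplineShapeTransformer()
dmatches = [cv2.DMatch(i, i, 0) for i in range(src.shape[1])]
# warpImage performs backward mapping, so the shapes are passed target
# first; this convention should be verified against the OpenCV docs.
tps.estimateTransformation(dst, src, dmatches)
warped_background = tps.warpImage(background_light)  # ImL in ImC coordinates
```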
 As a concrete example of what identifying per-pixel correspondence means, consider the case in which the background light image ImL is measured Δt after the camera image ImC, and another vehicle ahead of the vehicle 1 moves farther away during that deviation Δt. In this case, the ratio of the spacing between feature points on the other vehicle to the spacing between feature points on the scenery can be smaller in the background light image ImL than in the camera image ImC. Therefore, by non-linearly warping the background light image ImL so that the region showing the other vehicle is enlarged relative to the region showing the scenery, the coordinate system of the background light image ImL can be fitted to the coordinate system of the camera image ImC.
 In other words, the processing of the coordinate matching function corrects the measurement timing deviation Δt and makes it possible to treat the background light image ImL and the camera image ImC in the same way as mutually synchronized data. As described above, the background light image ImL can be regarded as sharing the same coordinate system as, and being synchronized with, the reflected light image. As a result, the image processing unit 42 can treat the reflected light image containing the distance information and the camera image ImC, a high-resolution color image, in the same way as mutually synchronized data. In this image registration between the reflected light image and the camera image ImC, the background light image ImL functions like an adhesive bonding the two images together.
 The image processing unit 42 can then output the composite image, the integrated image data described above, by converting the coordinates corresponding to each pixel of the background light image ImL into coordinates on the camera image ImC. Since this composite image has a coordinate system common to all channels, it simplifies the processing of application programs (hereinafter, applications) that use it, reducing the computational load while improving the processing accuracy of those applications.
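 The composite output might be assembled as a multi-channel array as sketched below; the channel layout (RGB plus distance plus background light) and the assumption that camera_rgb, warped_depth, and warped_background already share the camera image's coordinate system are illustrative.

```python
import numpy as np

# camera_rgb: (H, W, 3) color image; warped_depth and warped_background:
# (H, W) arrays already resampled into the camera image's coordinate system.
composite = np.dstack([
    camera_rgb,         # 3 channels: color
    warped_depth,       # 1 channel: distance information
    warped_background,  # 1 channel: background light intensity
]).astype(np.float32)   # (H, W, 5) with one common coordinate system
```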
 In the present embodiment, the composite image output by the image processing unit 42 is provided to the driving support ECU 50 as periphery monitoring image information. In the driving support ECU 50, the processor executes an object recognition program as an application for recognizing the surrounding environment of the vehicle 1, thereby performing object recognition using the composite image.
 In this embodiment, object recognition using semantic segmentation is performed. In the storage unit of the driving support ECU 50, an object recognition model 51 built mainly on a neural network is constructed as one component of the object recognition program. This neural network can adopt, for example, a structure called SegNet, which couples an encoder with a decoder.
 Next, the details of the image registration method that performs image registration between the reflected light image and the camera image ImC based on the image registration program will be described with reference to the flowchart of FIG. 8. The series of image processing steps in this flowchart is carried out, for example, at predetermined time intervals, or each time the distance measuring sensor 10 or the external camera 20 generates a new image.
 First, in S11, the image acquisition unit 41 acquires the latest reflected light image and background light image ImL from the distance measuring sensor 10, and acquires the latest camera image ImC from the external camera 20. The image acquisition unit 41 provides these images to the image processing unit 42. After S11, the flow proceeds to S12.
 In S12, the image processing unit 42 detects the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC. After S12, the flow proceeds to S13.
 In S13, the image processing unit 42 identifies the correspondence between the feature points FPa of the background light image ImL detected in S12 and the feature points FPb of the camera image ImC. After S13, the flow proceeds to S14.
 In S14, the image processing unit 42 associates the coordinates of the pixels that are not feature points FPa, FPb between the background light image ImL and the camera image ImC (coordinate matching), based on the positional relationship between the feature points FPa, FPb whose correspondence was identified in S13. After S14, the flow proceeds to S15.
 In S15, the image processing unit 42 completes the image registration between the reflected light image and the camera image ImC by converting the coordinate system of the background light image ImL and the reflected light image into the coordinate system of the camera image ImC, or vice versa. The series of processing ends with S15.
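 Pulled together, S11 through S15 form a pipeline whose overall shape might be sketched as follows; every function name here is a hypothetical placeholder for the processing described above, not an identifier from the embodiment.

```python
def register_images(ranging_sensor, external_camera):
    # S11: acquire the latest images.
    reflected, background = ranging_sensor.latest_images()
    camera = external_camera.latest_image()
    # S12: detect feature points FPa (background light) and FPb (camera).
    fpa, fpb = detect_features(background), detect_features(camera)
    # S13: identify FPa <-> FPb correspondences (epipolar band + descriptors).
    pairs = match_features(fpa, fpb)
    # S14: coordinate matching for the non-feature pixels (e.g., TPS).
    mapping = fit_coordinate_mapping(pairs)
    # S15: convert the ImL/reflected coordinates into the ImC system (or
    # vice versa) and emit the registered result.
    return apply_mapping(reflected, background, camera, mapping)
```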
 (Operation and Effects)
 The operation and effects of the first embodiment described above are summarized below.
 According to the image processing ECU 30 of the first embodiment, image registration between the acquired reflected light image and the camera image ImC is performed using the background light image ImL, which shares the reflected light image's coordinate system. Comparing the feature points FPa of the background light image ImL, whose characteristics are closer to those of the camera image ImC than the reflected light image's are, with the feature points FPb of the camera image ImC makes it easier to identify the correspondences between the feature points FPa, FPb. This association allows the coordinate system of the reflected light image and that of the camera image ImC to be aligned accurately, which can markedly improve the processing accuracy of applications that use both the reflected light image and the camera image ImC.
 Further, according to the first embodiment, in the image registration, the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC are first detected. The correspondence between the detected feature points FPa and FPb is then identified. Each pixel of one of the background light image ImL and the camera image ImC is subsequently associated with the corresponding pixel of the other, based on the result of that identification. That is, since the coordinates of the individual pixels are associated only after the correspondences between the feature points FPa, FPb have been identified, highly accurate image registration can be performed with less processing, or at higher speed, than if per-pixel correspondences were identified indiscriminately.
 Further, according to the first embodiment, the identification of the correspondences takes into account the difference in where corresponding points appear in the images, which arises from the relative position between the distance measuring sensor 10 and the external camera 20. This consideration improves the accuracy with which the correspondences between the feature points FPa, FPb are identified.
 Further, according to the first embodiment, in identifying the correspondences, the epipolar line EL corresponding to a projection-source feature point FPb in one of the background light image ImL and the camera image ImC is projected onto the other, projection-destination image. A projection-destination feature point FPa located within the band-shaped determination region JA, which has a predetermined allowable width W along the epipolar line EL, is then determined to be the point corresponding to the projection-source feature point FPb. By allowing this width W in the determination, errors such as the projection error between the background light image ImL and the camera image ImC are absorbed in identifying the correspondences between the feature points FPa, FPb, and the identification accuracy can be improved.
 Further, according to the first embodiment, the allowable width W is set according to the amount of deviation expected between the measurement timing of the background light image ImL and that of the camera image ImC. Even if an object captured in both images and forming feature points FPa, FPb moves during the measurement timing deviation Δt, the corresponding feature points FPa, FPb are still identified as long as the object's feature point FPa lies within the determination region JA whose allowable width W reflects that deviation. The accuracy of correspondence identification can therefore be improved.
 Further, according to the first embodiment, in identifying the correspondence between the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC, SIFT features are used as feature quantities that are obtained from the peripheral region including each feature point FPa, FPb and that are highly invariant to scale. Using scale-invariant SIFT features suppresses erroneous correspondence determinations even when the camera image ImC's higher resolution relative to the background light image ImL causes a difference in the detection level (detection sensitivity) of the feature points FPa, FPb. The accuracy of correspondence identification can therefore be improved.
 Further, according to the image generation system 100 of the first embodiment, the image registration between the reflected light image and the camera image ImC is performed using the background light image ImL, which shares the reflected light image's coordinate system. That is, comparing the feature points FPa of the background light image ImL, whose characteristics are closer to those of the camera image ImC than the reflected light image's are, with the feature points FPb of the camera image ImC makes it easier to identify the correspondences between the feature points FPa, FPb. This association allows the coordinate systems of the reflected light image and the camera image ImC to be aligned accurately. The distance information and the camera image ImC information, which come from different image generation sources, namely the distance measuring sensor 10 and the external camera 20, can then be provided in the form of a composite image that is easy for an application to process. The processing accuracy of applications that use both the reflected light image and the camera image ImC can thus be markedly improved.
 Further, according to the image registration method of the first embodiment, the feature points FPa of the prepared background light image ImL and the feature points FPb of the camera image ImC are detected, the correspondence between the detected feature points is identified, and each pixel of one of the background light image ImL and the camera image ImC is then associated with the corresponding pixel of the other based on the identification result. Because the image registration between the reflected light image and the camera image ImC is performed using the background light image ImL, which shares the reflected light image's coordinate system and whose characteristics are closer to the camera image ImC than the reflected light image's are, identifying the correspondences between the feature points FPa, FPb becomes easy. This association allows the coordinate systems of the reflected light image and the camera image ImC to be aligned accurately, markedly improving the processing accuracy of applications that use both images. And since the per-pixel coordinates are associated only after the feature point correspondences have been identified, highly accurate image registration can be performed with less processing, or at higher speed, than if per-pixel correspondences were identified indiscriminately.
 Further, according to the image registration program of the first embodiment, the feature points FPa of the acquired background light image ImL and the feature points FPb of the camera image ImC are detected, the correspondence between the detected feature points is identified, and each pixel of one of the background light image ImL and the camera image ImC is then associated with the corresponding pixel of the other based on the identification result. Because the image registration between the reflected light image and the camera image ImC is performed using the background light image ImL, which shares the reflected light image's coordinate system and whose characteristics are closer to the camera image ImC than the reflected light image's are, identifying the correspondences between the feature points FPa, FPb becomes easier. This association allows the coordinate systems of the reflected light image and the camera image ImC to be aligned accurately, markedly improving the processing accuracy of applications that use both images. And since the per-pixel coordinates are associated only after the feature point correspondences have been identified, highly accurate image registration can be performed with less processing, or at higher speed, than if per-pixel correspondences were identified indiscriminately.
 (Second Embodiment)
 As shown in FIG. 9, the second embodiment is a modification of the first embodiment. The second embodiment will be described focusing on the points that differ from the first embodiment.
 In the second embodiment, the functions of the image processing ECU 30 and the driving support ECU 50 of the first embodiment are integrated into a single ECU, constituting the driving support ECU 230. In the second embodiment, therefore, the driving support ECU 230 corresponds to the image registration device. Moreover, since the image registration function in the driving support ECU 230 of the second embodiment can be said to form part of the means for realizing a highly accurate periphery recognition function, the driving support ECU 230 also corresponds to a surrounding environment recognition device that recognizes the surrounding environment of the vehicle 1. Like the image processing ECU 30 of the first embodiment, the driving support ECU 230 includes a processing unit 31, a RAM 32, a storage unit 33, an input/output interface 34, and the like.
 Like the image processing ECU 30 of the first embodiment, the driving support ECU 230 of the second embodiment has a plurality of functional units realized by the processing unit 31 executing the image registration program and the object recognition program stored in the storage unit 33. Specifically, as shown in FIG. 9, functional units such as an image acquisition unit 41, an image processing unit 242, and an object recognition unit 48 are constructed in the driving support ECU 230.
 The image acquisition unit 41 is the same as in the first embodiment. The object recognition unit 48 performs object recognition using semantic segmentation, with an object recognition model 48a similar to that of the first embodiment.
 Like the first embodiment, the image processing unit 242 of the second embodiment has a feature point detection function, a correspondence identification function, and a coordinate matching function. It differs from the first embodiment, however, in that the feature point detection function takes into account the ratio between the resolution of the background light image ImL and the resolution of the camera image ImC (hereinafter, the resolution ratio), and in that the correspondence identification function does not use SIFT features.
 Specifically, the storage unit 33 of the driving support ECU 230 is provided with a sensor database (hereinafter, sensor DB) 243c. The sensor DB 243c stores information on the various sensors and cameras mounted on the vehicle 1. This information includes information on the specifications of the light receiving element 12a of the distance measuring sensor 10 and information on the specifications of the camera element 22a of the external camera 20. The information on the light receiving element 12a includes its resolution, and the information on the camera element 22a includes its resolution. From this resolution information, the image processing unit 242 can determine the resolution ratio.
 Based on the resolution ratio, the Harris corner detectors 243a, 243b of the second embodiment use different scale parameters when detecting the feature points FPa of the background light image ImL and when detecting the feature points FPb of the camera image ImC. Specifically, in detecting feature points in the camera image ImC, which has a higher resolution than the background light image ImL, at least one of the size of the evaluation target region and the kernel size of the gradient detection filter, both scale parameters, is made smaller than in the background light image ImL case. The detection levels of the feature points FPa, FPb can thereby be brought closer together between the background light image ImL and the camera image ImC.
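 A minimal sketch of this resolution-aware Harris detection follows; the specific parameter values and the rule for deriving them from the resolution ratio are illustrative assumptions, with the camera-image parameters kept smaller per the description above.

```python
import numpy as np
import cv2

# Resolution ratio as it might be read from the sensor DB 243c.
ratio = camera_image.shape[1] / background_light.shape[1]  # > 1 here

block_a, ksize_a = 4, 5                        # background light image ImL
block_b = max(2, int(round(block_a / ratio)))  # smaller region for ImC
ksize_b = 3                                    # Sobel aperture must be odd

resp_a = cv2.cornerHarris(np.float32(background_light), block_a, ksize_a, 0.04)
resp_b = cv2.cornerHarris(np.float32(camera_image), block_b, ksize_b, 0.04)

# Threshold the corner responses to obtain FPa / FPb candidates.
fpa_mask = resp_a > 0.01 * resp_a.max()
fpb_mask = resp_b > 0.01 * resp_b.max()
```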
 As a result, even without using SIFT in the correspondence identification function, the correspondence between the detected feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC becomes easy to identify. In other words, the correspondences can be identified with high accuracy.
 According to the second embodiment described above, the Harris corner detectors 243a, 243b, serving as feature point detectors that detect the feature points FPa of the background light image ImL and the feature points FPb of the camera image ImC, have scale parameters that affect the detection scale. In this configuration, based on the determined ratio between the resolution of the background light image ImL and that of the camera image ImC, the scale parameters used for detecting the feature points FPa of the background light image ImL are made different from those used for detecting the feature points FPb of the camera image ImC. In this way, even though the camera image ImC has a higher resolution than the background light image ImL, the detection levels of the feature points FPa, FPb can be brought closer together between the two images. Since feature points FPa, FPb detected at similar levels can then be compared, the accuracy of correspondence identification can be improved.
 (Other Embodiments)
 Although a plurality of embodiments have been described above, the present disclosure is not to be construed as limited to those embodiments, and can be applied to various embodiments and combinations within a scope that does not depart from the gist of the present disclosure.
 Specifically, as a first modification, the distance measuring sensor 10 and the external camera 20 may constitute an integrated sensor unit. Furthermore, an image registration device such as the image processing ECU 30 of the first embodiment may be included as a component of this sensor unit.
 As a second modification of the first embodiment, the image processing ECU 30 may include an object recognition unit 48 as in the second embodiment and recognize the surrounding environment of the vehicle 1. The analyzed information obtained by the image processing ECU 30 recognizing the surrounding environment of the vehicle 1 may be provided to the driving support ECU 50 that has a driving support function and the like.
 As a third modification, the image processing unit 42 need not integrate the reflected light image, the background light image ImL, and the camera image ImC into a multi-channel composite image for output. The image processing unit 42 may instead output the reflected light image, the background light image ImL, and the camera image ImC as separate image data and, in addition to these image data, output coordinate correspondence data indicating the correspondence between the coordinates of the images, as sketched below.
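 Such a separate-output form might look like the following structure; the field names and the representation of the coordinate correspondence data as a per-pixel map are assumptions for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RegistrationOutput:
    reflected: np.ndarray   # (H, W) distance information
    background: np.ndarray  # (H, W) background light image ImL
    camera: np.ndarray      # (Hc, Wc, 3) camera image ImC
    # coord_map[y, x] holds the (x, y) position in ImC that corresponds
    # to pixel (x, y) of the background light image.
    coord_map: np.ndarray   # (H, W, 2), float32
```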
 As a fourth modification, the image processing unit 42 only needs to output the registered reflected light image and camera image ImC, and need not output the background light image ImL.
 As a fifth modification, the camera image ImC may be a grayscale image instead of a color image.
 As a sixth modification, the object recognition using the registered reflected light image and camera image ImC need not be object recognition using semantic segmentation. The object recognition may be, for example, object recognition using bounding boxes.
 As a seventh modification, the registered reflected light image and camera image ImC may be used for applications other than object recognition in the vehicle 1. For example, the distance measuring sensor 10 and the camera 20 may be installed in a conference room, and the registered reflected light image and camera image ImC may be used in a communication application for video conferencing.
 As an eighth modification, orientation information for imparting rotational invariance may be added to the detected SIFT features. The orientation information is useful, for example, in a situation where the inclination of the mounting surface of the distance measuring sensor 10 differs from that of the camera 20.
 As a ninth modification, when projecting the epipolar line EL corresponding to a feature point FPb of the camera image ImC onto the background light image ImL, or the epipolar line EL corresponding to a feature point FPa of the background light image ImL onto the camera image ImC, an F matrix (fundamental matrix) may be used instead of the E matrix. The F matrix is useful in situations where the distance measuring sensor 10 and the camera 20 have not been calibrated.
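 In the uncalibrated case, the F matrix could be estimated directly from tentative point correspondences, for example as sketched below; the use of RANSAC and the reuse of pts_a, kp_b, and matches from the earlier fragments are illustrative assumptions.

```python
import numpy as np
import cv2

# Tentative ImC <-> ImL correspondences from descriptor matching.
pts_c = np.float32([kp_b[ib].pt for ia, ib in matches])  # camera image
pts_l = np.float32([pts_a[ia] for ia, ib in matches])    # background light

# Robust estimation; no intrinsic calibration is needed. The resulting F
# maps a camera-image pixel to an epipolar line in ImL, as before.
F, inliers = cv2.findFundamentalMat(pts_c, pts_l, cv2.FM_RANSAC, 3.0, 0.99)
```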
 As a tenth modification, the image processing unit 42 may perform image registration on an additional image generated by a millimeter wave radar or the like, in addition to the image generated by the distance measuring sensor 10 and the image generated by the camera 20.
 As an eleventh modification, each function provided by the image processing ECU 30 can be provided by software and the hardware that executes it, by software alone, by hardware alone, or by a composite combination thereof. Further, when such functions are provided by electronic circuits as hardware, each function can also be provided by a digital circuit including a large number of logic circuits, or by an analog circuit.
 As a twelfth modification, the form of the storage medium that stores the image registration program and the like capable of implementing the image registration method described above may also be changed as appropriate. For example, the storage medium is not limited to a configuration provided on a circuit board; it may be provided in the form of a memory card or the like, inserted into a slot portion, and electrically connected to the control circuit of the image processing ECU 30. Further, the storage medium may be an optical disk or a hard disk serving as the copy source of the program of the image processing ECU 30.
 The control unit and the method thereof described in the present disclosure may be realized by a dedicated computer constituting a processor programmed to execute one or more functions embodied by a computer program. Alternatively, the device and the method thereof described in the present disclosure may be realized by dedicated hardware logic circuitry. Alternatively, the device and the method thereof described in the present disclosure may be realized by one or more dedicated computers configured as a combination of a processor executing a computer program and one or more hardware logic circuits. The computer program may also be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by a computer.

Claims (10)

  1.  An image registration device communicably connected to a distance measuring sensor (10) that generates a reflected light image containing distance information, obtained by a light receiving element (12a) sensing reflected light returned from an object under light irradiation, and a background light image (ImL) sharing the same coordinate system as the reflected light image, obtained by the light receiving element sensing background light relative to the reflected light, and communicably connected to a camera (20) that generates a camera image (ImC) of higher resolution than the reflected light image and the background light image by a camera element (22a) detecting incident light from the outside, the image registration device comprising:
     an image acquisition unit (41) that acquires the reflected light image, the background light image, and the camera image; and
     an image processing unit (42) that performs image registration between the camera image and the reflected light image, which shares the background light image's coordinate system, by identifying a correspondence between feature points (FPa) of the background light image and feature points (FPb) of the camera image.
  2.  The image registration device according to claim 1, wherein the image processing unit:
     detects the feature points of the background light image and the feature points of the camera image;
     identifies the correspondence between the detected feature points of the background light image and the detected feature points of the camera image; and
     associates each pixel of one of the background light image and the camera image with the corresponding pixel of the other, based on a result of identifying the correspondence.
  3.  The image registration device according to claim 2, wherein the image processing unit identifies the correspondence taking into account a difference in positions at which corresponding points appear in the images, based on the relative position between the distance measuring sensor and the camera.
  4.  The image registration device according to claim 3, wherein the image processing unit projects an epipolar line (EL) corresponding to a projection-source feature point in one of the background light image and the camera image onto the projection-destination image, and determines that a projection-destination feature point located within a band-shaped determination region (JA) having a predetermined allowable width (W) along the epipolar line is the point corresponding to the projection-source feature point.
  5.  The image registration device according to claim 4, wherein the allowable width is set according to an amount of deviation assumed between a measurement timing of the background light image and a measurement timing of the camera image.
  6.  The image registration device according to any one of claims 2 to 5, wherein, in identifying the correspondence between the feature points of the background light image and the feature points of the camera image, the image processing unit uses feature quantities that are obtained from a peripheral region including each of the feature points and that are highly invariant to scale.
  7.  The image registration device according to any one of claims 2 to 5, wherein the image processing unit:
     detects the feature points of the background light image and the feature points of the camera image using feature point detectors having parameters that affect scale; and
     determines the ratio between the resolution of the background light image and the resolution of the camera image and, based on the ratio, makes the parameters used for detecting the feature points of the background light image different from the parameters used for detecting the feature points of the camera image.
  8.  An image generation system that generates an image to be processed by an application, the system comprising:
     a distance measuring sensor (10) that generates a reflected light image containing distance information, obtained by a light receiving element (12a) sensing reflected light returned from an object under light irradiation, and a background light image (ImL) sharing the same coordinate system as the reflected light image, obtained by the light receiving element sensing background light relative to the reflected light;
     a camera (20) that generates a camera image (ImC) of higher resolution than the reflected light image and the background light image by a camera element (22a) detecting incident light from the outside; and
     an image processing unit (42) that performs image registration between the camera image and the reflected light image, which shares the background light image's coordinate system, by identifying a correspondence between feature points (FPa) of the background light image and feature points (FPb) of the camera image, and generates a composite image in which the distance information and the information of the camera image are integrated.
  9.  An image registration method comprising:
     preparing, as images generated by a distance measuring sensor (10), a reflected light image containing distance information, obtained by a light receiving element (12a) sensing reflected light returned from an object under light irradiation, and a background light image sharing the same coordinate system as the reflected light image, obtained by the light receiving element sensing background light relative to the reflected light;
     preparing, as an image generated by a camera (20), a camera image (ImC) of higher resolution than the reflected light image and the background light image, obtained by a camera element (22a) detecting incident light from the outside;
     detecting feature points (FPa) of the background light image and feature points (FPb) of the camera image;
     identifying a correspondence between the detected feature points of the background light image and the detected feature points of the camera image; and
     associating each pixel of one of the background light image and the camera image with the corresponding pixel of the other, based on a result of identifying the correspondence.
  10.  An image registration program for performing image registration between an image generated by a distance measuring sensor (10) and an image generated by a camera (20), the program causing at least one processing unit (31) to execute:
     a process of acquiring, as images generated by the distance measuring sensor, a reflected light image containing distance information, obtained by a light receiving element (12a) sensing reflected light returned from an object under light irradiation, and a background light image (ImL) sharing the same coordinate system as the reflected light image, obtained by the light receiving element sensing background light relative to the reflected light;
     a process of acquiring, as an image generated by the camera, a camera image (ImC) of higher resolution than the reflected light image and the background light image, obtained by a camera element (22a) detecting incident light from the outside;
     a process of detecting feature points (FPa) of the background light image and feature points (FPb) of the camera image;
     a process of identifying a correspondence between the detected feature points of the background light image and the detected feature points of the camera image; and
     a process of associating each pixel of one of the background light image and the camera image with the corresponding pixel of the other, based on a result of identifying the correspondence.
PCT/JP2020/033956 2019-09-10 2020-09-08 Image registration device, image generation system, image registration method and image registration program WO2021049490A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080063314.XA CN114365189A (en) 2019-09-10 2020-09-08 Image registration device, image generation system, image registration method, and image registration program
US17/654,012 US20220201164A1 (en) 2019-09-10 2022-03-08 Image registration apparatus, image generation system, image registration method, and image registration program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019164860A JP7259660B2 (en) 2019-09-10 2019-09-10 Image registration device, image generation system and image registration program
JP2019-164860 2019-09-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/654,012 Continuation US20220201164A1 (en) 2019-09-10 2022-03-08 Image registration apparatus, image generation system, image registration method, and image registration program product

Publications (1)

Publication Number Publication Date
WO2021049490A1

Family

ID=74862393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/033956 WO2021049490A1 (en) 2019-09-10 2020-09-08 Image registration device, image generation system, image registration method and image registration program

Country Status (4)

Country Link
US (1) US20220201164A1 (en)
JP (1) JP7259660B2 (en)
CN (1) CN114365189A (en)
WO (1) WO2021049490A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7294302B2 (en) * 2020-10-29 2023-06-20 トヨタ自動車株式会社 object detector

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07332966A (en) * 1994-06-09 1995-12-22 Hitachi Ltd Distance-measuring apparatus for vehicle
JP2014207493A (en) * 2011-08-24 2014-10-30 パナソニック株式会社 Imaging apparatus
JP2018173346A (en) * 2017-03-31 2018-11-08 株式会社トプコン Laser scanner
JP2019074429A (en) * 2017-10-17 2019-05-16 株式会社ジェイ・エム・エス Ultrasonic flowmeter and blood purifying device
JP2019128350A (en) * 2018-01-23 2019-08-01 株式会社リコー Image processing method, image processing device, on-vehicle device, moving body and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6913598B2 (en) 2017-10-17 2021-08-04 スタンレー電気株式会社 Distance measuring device

Also Published As

Publication number Publication date
JP7259660B2 (en) 2023-04-18
JP2021043679A (en) 2021-03-18
US20220201164A1 (en) 2022-06-23
CN114365189A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN110799918B (en) Method, apparatus and computer-readable storage medium for vehicle, and vehicle
KR101766603B1 (en) Image processing apparatus, image processing system, image processing method, and computer program
US10643349B2 (en) Method of calibrating a camera and a laser scanner
JP4885584B2 (en) Rangefinder calibration method and apparatus
WO2012053521A1 (en) Optical information processing device, optical information processing method, optical information processing system, and optical information processing program
JP6667065B2 (en) Position estimation device and position estimation method
US20100315490A1 (en) Apparatus and method for generating depth information
CN113034612B (en) Calibration device, method and depth camera
JP2001524228A (en) Machine vision calibration target and method for determining position and orientation of target in image
US20180276844A1 (en) Position or orientation estimation apparatus, position or orientation estimation method, and driving assist device
US20190235063A1 (en) System and method for calibrating light intensity
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
US11467030B2 (en) Method and arrangements for providing intensity peak position in image data from light triangulation in a three-dimensional imaging system
WO2021049490A1 (en) Image registration device, image generation system, image registration method and image registration program
WO2022040140A1 (en) Enhanced multispectral sensor calibration
JP2016102755A (en) Information processing device, information processing method and program
JP2016052096A (en) Image processing program, information processing system, and image processing method
JP4546155B2 (en) Image processing method, image processing apparatus, and image processing program
JP2020085798A (en) Three-dimensional position detection device, three-dimensional position detection system and three-dimensional position detection method
JP7103324B2 (en) Anomaly detection device for object recognition and anomaly detection program for object recognition
KR102525568B1 (en) lidar position compensation system and method using a reflector
JP7140091B2 (en) Image processing device, image processing method, image processing program, and image processing system
Kostrin et al. Application of an Automated Optoelectronic System for Determining Position of an Object
Peppa Precision analysis of 3D camera
JP2002031511A (en) Three-dimensional digitizer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20862981

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20862981

Country of ref document: EP

Kind code of ref document: A1