WO2023120603A1 - Imaging device and image processing method - Google Patents

Imaging device and image processing method

Info

Publication number
WO2023120603A1
WO2023120603A1 (PCT/JP2022/047211)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
windshield
captured
spread function
Prior art date
Application number
PCT/JP2022/047211
Other languages
French (fr)
Japanese (ja)
Inventor
昭典 佐藤 (Akinori Sato)
薫 草深 (Kaoru Kusafuka)
Original Assignee
京セラ株式会社 (Kyocera Corporation)
Priority date
Filing date
Publication date
Application filed by 京セラ株式会社 (Kyocera Corporation)
Publication of WO2023120603A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Arrangement of adaptations of instruments
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04: Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B: APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00: Special procedures for taking photographs; Apparatus therefor
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N13/30: Image reproducers
    • H04N13/346: Image reproducers using prisms or semi-transparent mirrors
    • H04N13/363: Image reproducers using image projection screens
    • H04N13/366: Image reproducers using viewer tracking
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • The present disclosure relates to an imaging device and an image processing method.
  • A conventional imaging device is described in, for example, Patent Document 1.
  • An imaging device according to the present disclosure includes a windshield, a camera configured to capture an image of at least the eyes of a driver of a mobile object through the windshield, and a controller that controls the camera. The controller performs a transform that deforms the image captured by the camera into a first processed image in which the double images are positioned side by side in a predetermined direction, based on a first point spread function representing the effect of the windshield on the captured image; corrects the first processed image into a second processed image from which the double image has been removed, based on a second point spread function representing the features of the double image in the first processed image; and performs the inverse of the transform on the second processed image to generate a third processed image from which the effect of the first point spread function has been removed.
  • An image processing method according to the present disclosure prepares a windshield, a camera configured to capture an image of at least a driver's eyes through the windshield, and a controller that controls the camera. The controller performs a transform that deforms the image captured by the camera into a first processed image in which the double images are positioned side by side in a predetermined direction, based on a first point spread function representing the influence of the windshield; corrects the first processed image into a second processed image from which the double image has been removed, based on a second point spread function representing the features of the double image in the first processed image; and performs the inverse of the transform on the second processed image to generate a third processed image from which the influence of the first point spread function has been removed.
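  • The three steps above (transform, remove, inverse transform) can be summarized in code. The following is a minimal sketch in Python/NumPy, not an implementation from the disclosure: `warp` and `unwarp` stand in for the distortion processing derived from the first point spread function, and `g_1d` for the one-dimensional double-image point spread function described later; all names are illustrative.

```python
import numpy as np

def deconvolve_columns(img: np.ndarray, g_1d: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Divide out a 1-D double-image PSF along the y-axis of every column."""
    G = np.fft.fft(g_1d, n=img.shape[0])
    I = np.fft.fft(img, axis=0)
    # eps keeps the division stable where G is close to zero
    # (an added assumption, not part of the disclosure)
    return np.real(np.fft.ifft(I / (G[:, None] + eps), axis=0))

def remove_double_image(captured, warp, unwarp, g_1d):
    first = warp(captured)                    # 1st processed image: double images aligned in y
    second = deconvolve_columns(first, g_1d)  # 2nd processed image: double image removed
    return unwarp(second)                     # 3rd processed image: inverse transform
```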
  • FIG. 1 is a diagram schematically showing the overall configuration of a stereoscopic image display device including an imaging device according to an example of an embodiment of the present disclosure.
  • FIG. 2 is a diagram schematically showing the configuration of a three-dimensional projection device.
  • FIGS. 3A to 3D are diagrams showing the double-image removal procedure using distortion processing.
  • FIG. 4 is a diagram showing the configuration of the three-dimensional projection device.
  • FIG. 5 is a diagram showing a mobile object according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram showing the paths of light reflected by the windshield of the imaging device.
  • FIG. 7 is a diagram showing the relationship between the thickness of the windshield and the amount of deviation between double images.
  • FIG. 8 is a flowchart for explaining the operation of the imaging device.
  • FIG. 9 is a diagram showing the difference in the amount of deviation of double images depending on the reflection position on the windshield.
  • FIG. 10 is a diagram showing changes in the width of a double image due to the curvature of the windshield.
  • FIG. 11 is a diagram showing a photographing panel on which a measurement pattern is drawn.
  • FIG. 12 is a diagram showing an image to be processed of the photographing panel shown in FIG. 11.
  • For example, as described in Patent Document 1, an imaging device mounted on a vehicle is conventionally known that includes a camera for imaging the driver's eyes through the windshield of the vehicle.
  • An image captured through the windshield contains a double image and is therefore unclear, yet the driver's eyes must be detected with high accuracy.
  • FIG. 1 schematically shows the overall configuration of a stereoscopic image display device 2 including an imaging device 1 according to an example of an embodiment of the present disclosure.
  • The stereoscopic image display device 2 includes an imaging device 1 and a three-dimensional projection device 12.
  • The three-dimensional projection device 12 includes a display controller 107, an acquisition unit 103, a memory 108, an illuminator 4, a display panel 5, and a parallax barrier 6.
  • The stereoscopic image display device 2 may be mounted on a mobile object 10.
  • The imaging device 1 includes a windshield 25, a camera 11 configured to capture an image of at least the eyes 31 of a driver 13 of the mobile object 10 through the windshield 25, and a controller 7 that controls the camera 11.
  • The controller 7 is configured as a processor, for example, and may include one or more processors.
  • The processors may include a general-purpose processor configured to load a specific program and perform a specific function, and a dedicated processor specialized for specific processing.
  • A dedicated processor may include an application-specific integrated circuit (ASIC).
  • The processors may include a programmable logic device (PLD).
  • A PLD may include an FPGA (Field-Programmable Gate Array).
  • The controller 7 may be an SoC (System-on-a-Chip) or an SiP (System In a Package) in which one or more processors cooperate.
  • a "moving object" in the present disclosure may include, for example, a vehicle, a ship, an aircraft, and the like.
  • Vehicles may include, for example, automobiles, industrial vehicles, railroad vehicles, utility vehicles, fixed-wing aircraft that travel on runways, and the like.
  • Motor vehicles may include, for example, cars, trucks, buses, motorcycles, trolleybuses, and the like.
  • Industrial vehicles may include, for example, industrial vehicles for agriculture and construction, and the like.
  • Industrial vehicles may include, for example, forklifts, golf carts, and the like.
  • Industrial vehicles for agriculture may include, for example, tractors, tillers, transplanters, binders, combines, lawn mowers, and the like.
  • Industrial vehicles for construction may include, for example, bulldozers, scrapers, excavators, mobile cranes, tippers, road rollers, and the like. Vehicles may include those that are powered by humans. Vehicle classification is not limited to the above example. For example, automobiles may include road-drivable industrial vehicles. Multiple classifications may contain the same vehicle. Vessels may include, for example, marine jets, boats, tankers, and the like. Aircraft may include, for example, fixed-wing aircraft, rotary-wing aircraft, and the like.
  • In the following, the case where the mobile object 10 is a passenger car is described as an example.
  • The mobile object 10 is not limited to a passenger car and may be any of the above examples.
  • The camera 11 may be attached to the mobile object 10.
  • The camera 11 is configured to capture an image including the face of the driver 13 of the mobile object 10.
  • The mounting position of the camera 11 is arbitrary, inside or outside the mobile object 10.
  • For example, the camera 11 may be located within the dashboard of the mobile object 10.
  • The camera 11 may be a visible-light camera or an infrared camera, and may have the functions of both.
  • The camera 11 may include, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor.
  • The position of the eye 31 of the driver 13 may be the pupil position.
  • The target image is an image captured by the camera 11.
  • The captured image includes the eyes 31 of the driver 13, or a part of the face whose relative positional relationship with the eyes 31 of the driver 13 is specified.
  • The eyes 31 of the driver 13 included in the captured image may be both eyes, or only the right eye 31R or the left eye 31L.
  • The part of the face whose relative positional relationship with the eyes 31 of the driver 13 is specified may be, for example, the eyebrows or the nose.
  • The imaging device 1 is configured to capture an image of at least the eyes 31 of the driver 13 using light reflected via the windshield 25.
  • The imaging device 1 is configured to acquire an image of a subject and generate an image of the subject.
  • The camera 11 of the imaging device 1 includes an imaging element.
  • The imaging element may include, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
  • The imaging device 1 is arranged so that the face of the driver 13 is located on the subject side.
  • The imaging device 1 is configured to detect the position of at least one of the left eye 31L and the right eye 31R of the driver 13.
  • For example, the imaging device 1 may be configured to use a predetermined position as an origin and detect the direction and amount of displacement of the eye position from that origin.
  • The imaging device 1 may be configured to detect the position of at least one of the left eye 31L and the right eye 31R from the image captured by the camera 11.
  • The imaging device 1 may be configured to use two or more cameras 11 to detect the position of at least one of the left eye 31L and the right eye 31R as coordinates in three-dimensional space.
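  • The disclosure does not spell out the two-camera computation; the following is a hedged sketch assuming a rectified stereo pair with a known baseline and focal length (all parameters illustrative):

```python
import numpy as np

def eye_position_3d(u_left: float, u_right: float, v: float,
                    focal_px: float, baseline_m: float) -> np.ndarray:
    """Triangulate one eye from its pixel coordinates in two rectified cameras.
    The horizontal disparity between the two views encodes depth."""
    disparity = u_left - u_right              # pixels
    z = focal_px * baseline_m / disparity     # depth along the optical axis
    x = u_left * z / focal_px                 # assumes principal point at (0, 0)
    y = v * z / focal_px
    return np.array([x, y, z])
```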
  • The imaging device 1 does not have to include the camera 11.
  • The imaging device 1 may include an input terminal configured to receive signals from a camera 11 external to the device.
  • The external camera 11 may be connected to the input terminal directly, or indirectly via a shared network.
  • An imaging device 1 that does not include a camera 11 may include an input terminal to which the camera 11 inputs a video signal.
  • An imaging device 1 without the camera 11 may be configured to detect the position of at least one of the left eye 31L and the right eye 31R from the video signal input to the input terminal.
  • The imaging device 1 may include a sensor, which may be an ultrasonic sensor, an optical sensor, or the like.
  • The imaging device 1 may be configured to detect the position of the head of the driver 13 with the sensor and detect the position of at least one of the left eye 31L and the right eye 31R based on the head position.
  • The imaging device 1 may be configured to detect the position of at least one of the left eye 31L and the right eye 31R as coordinates in three-dimensional space using one or more sensors.
  • The imaging device 1 may be configured to detect the movement distance of the left eye 31L and the right eye 31R along the eyeball arrangement direction based on the detected position of at least one of them.
  • The imaging device 1 is configured to output position information indicating the positions of the eyes 31 of the driver 13 to the acquisition unit 103, and may output the position information via wired or wireless communication, or via a communication network such as CAN (Controller Area Network).
  • The camera 11 is configured to be able to capture a first captured image including an image of the area where the face of the driver 13 is assumed to be.
  • The driver 13 may be, for example, the driver of the mobile object 10, which is a passenger car.
  • The camera 11 may be attached to the mobile object 10, and its mounting position is arbitrary, inside or outside the mobile object 10.
  • A captured image captured by the camera 11 is input to the display controller 107 via the acquisition unit 3.
  • The controller 7 is configured to detect the position of at least one of the eyes 31L and 31R of the driver 13 based on the captured image.
  • The detection result of the imaging device 1 may be coordinate information indicating the pupil position of the left eye 31L or the right eye 31R of the driver 13.
  • The imaging device 1 is configured to output coordinate information on the detected pupil position of the left eye 31L or right eye 31R to the three-dimensional projection device 12, and the three-dimensional projection device 12 may be configured to control the projected image based on this coordinate information.
  • The controller 7 may be an external device. In that case, the camera 11 may be configured to output the captured image to the external controller 7, and the external controller 7 may be configured to detect the pupil position of the left eye 31L or right eye 31R of the driver 13 from the image output by the camera 11 and to output coordinate information on the detected pupil position to the stereoscopic image display device 2, which may control the projected image based on this coordinate information.
  • The camera 11 may output the captured image to the external controller 7 via wired or wireless communication, and the external controller 7 may output the coordinate information to the stereoscopic image display device 2 via wired or wireless communication. Wired communication may include CAN, for example.
  • FIG. 4 is a diagram showing the configuration of the three-dimensional projection device 12, and FIG. 5 is a diagram showing a mobile object according to one embodiment of the present disclosure.
  • The position of the three-dimensional projection device 12 is arbitrary, inside or outside the mobile object 10; for example, it may be located within the dashboard of the mobile object 10.
  • The three-dimensional projection device 12 is configured to emit image light toward the windshield 25.
  • The windshield 25 is a reflector configured to reflect the image light emitted from the three-dimensional projection device 12, and the image light reflected by the windshield 25 reaches the eyebox 16.
  • The eyebox 16 is a region in real space in which the eyes 31L and 31R of the driver 13 are assumed to exist, taking into account, for example, the physique, posture, and changes in posture of the driver 13.
  • The shape of the eyebox 16 is arbitrary; it may be a two-dimensional or three-dimensional region.
  • The dashed line shown in FIG. 2 indicates the path along which at least part of the image light emitted from the three-dimensional projection device 12 reaches the eyebox 16. The path traveled by the image light is also referred to as the optical path.
  • The image light emitted from the three-dimensional projection device 12 represents parallax images including a right-eye image and a left-eye image.
  • The driver 13 can visually recognize a virtual image 14 by the parallax image light reaching the eyebox 16.
  • The virtual image 14 is positioned on the path (indicated by a dashed line in FIG. 1) obtained by extending the path from the windshield 25 to the eyes 31L and 31R forward of the mobile object 10.
  • The three-dimensional projection device 12 can function as a head-up display by making the driver 13 visually recognize the virtual image 14.
  • The direction in which the eyes 31L and 31R of the driver 13 are aligned corresponds to the x-axis direction, and the vertical direction corresponds to the y-axis direction.
  • The imaging range of the camera 11 includes the eyebox 16.
  • At least part of the image light emitted from the three-dimensional projection device 12 reaches the windshield 25 via an optical member 110 (see FIG. 4), is reflected by the windshield 25, and reaches the eyes 31 of the driver 13.
  • The eyes 31 of the driver 13 can thereby visually recognize a first virtual image 14a located on the negative z-axis side of the windshield 25.
  • The first virtual image 14a corresponds to the image displayed by the three-dimensional projection device 12.
  • The opening regions 6b and the light-shielding surfaces 6a of the parallax barrier 6 form a second virtual image 14b in front of the windshield 25, on the windshield 25 side of the first virtual image 14a.
  • The driver 13 can visually recognize the image as if the display panel were present at the position of the first virtual image 14a and the parallax barrier 6 were present at the position of the second virtual image 14b.
  • The three-dimensional projection device 12 causes the image light reflected by the windshield 25 to reach the left eye 31L and right eye 31R of the driver 13; that is, the image light travels from the stereoscopic image display device 2 to the left eye 31L and right eye 31R of the driver 13 along the optical path 140 indicated by the dashed line in FIG. 2.
  • The driver 13 can visually recognize the image light arriving along the optical path 140 as the virtual image 14.
  • The stereoscopic image display device 2 can provide stereoscopic vision that follows the movement of the driver by controlling the display according to the positions of the left eye 31L and right eye 31R of the driver 13.
  • Part of the configuration of the three-dimensional projection device 12 may be shared with other devices or parts included in the mobile object 10; for example, the mobile object 10 also uses the windshield 25 as part of the configuration of the imaging device 1.
  • The display panel 5 is not limited to a transmissive display panel; other display panels, such as self-luminous display panels, can also be used.
  • Transmissive display panels include, in addition to liquid crystal panels, MEMS (Micro Electro Mechanical Systems) shutter display panels. Self-luminous display panels include organic EL (electro-luminescence) and inorganic EL display panels.
  • If a self-luminous display panel is used as the display panel 5, the illuminator 4 becomes unnecessary, and the parallax barrier 6 is positioned on the side of the display panel 5 from which the image light is emitted.
  • As shown in FIG. 1, the stereoscopic image display device 2 includes the imaging device 1 and the three-dimensional projection device 12.
  • The imaging device 1 may be configured to acquire captured images at regular imaging time intervals (e.g., 20 fps) from a camera 11 configured to image the space where the driver's eyes are expected to exist.
  • The imaging device 1 is configured to sequentially detect images of the left eye (first eye) 31L and the right eye (second eye) 31R from the captured images acquired from the camera 11.
  • The imaging device 1 is configured to detect the respective positions of the left eye 31L and the right eye 31R in real space based on their images in image space.
  • The imaging device 1 may be configured to detect the positions of the left eye 31L and the right eye 31R as coordinates in three-dimensional space from an image captured by one camera 11, or from images captured by two or more cameras 11.
  • The imaging device 1 may include a camera 11.
  • The stereoscopic image display device 2 includes the acquisition unit 103, the illuminator 4, the display panel 5, the parallax barrier 6 as an optical element, the memory 108, and the display controller 107.
  • Light is reflected by both the outer and inner surfaces of the windshield 25, so the image captured by the imaging device 1 contains a double image caused by these multiple reflections, and the position of the left eye 31L or right eye 31R cannot be detected accurately. The controller 7 of the imaging device 1 is therefore configured to remove double images.
  • FIG. 6 is a diagram showing the paths of light reflected by the windshield 25 of the imaging device 1. When the eye 31 of the driver 13 is imaged by the camera 11 through the windshield 25, light is reflected at the outer and inner surfaces of the windshield 25 as shown in FIG. 6.
  • Let i(x, y) be the image captured by the camera 11, s(x, y) the original image without the double image, and g(x, y) the point spread function that defines the double image. Writing the convolution operation as "*", the captured image is modeled by Equation 1 below:

    i(x, y) = s(x, y) * g(x, y)   (Equation 1)

  • An xy coordinate system is set with the coordinate origin (0, 0) at the upper-left corner of the image, the x-axis extending in the horizontal direction, and the y-axis extending in the vertical direction.
  • The point spread function may be determined using, for example, a pattern image for obtaining the point spread function, such as a captured image of a display panel on which a point image is displayed, or it may be calculated based on the shape of the windshield 25.
  • Denote the Fourier transform of a function by F, the inverse Fourier transform by F⁻¹, and the Fourier transforms of i(x, y) and g(x, y) by I(u, v) and G(u, v). The original image s(x, y) can then be obtained by Equation 2 below:

    s(x, y) = F⁻¹[I(u, v) / G(u, v)]   (Equation 2)

  • Equivalently, the original image s(x, y) can be obtained as a convolution with a kernel K, as in Equation 3 below:

    s(x, y) = i(x, y) * K(x, y)   (Equation 3)

  • The kernel K in Equation 3 above can be expressed by Equation 4 below:

    K(x, y) = F⁻¹[1 / G(u, v)]   (Equation 4)
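  • Equations 1 to 4 describe a standard deconvolution. A minimal sketch in Python/NumPy follows; the small `eps` term is an added stabilization (a Wiener-style variant of the plain inverse filter in Equation 2, which would amplify noise at frequencies where G(u, v) is near zero) and is an assumption, not part of the disclosure:

```python
import numpy as np

def restore(i_img: np.ndarray, g_psf: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Recover s from i = s * g (Equation 1) via S = I / G (Equation 2)."""
    I = np.fft.fft2(i_img)
    G = np.fft.fft2(g_psf, s=i_img.shape)        # zero-pad the PSF to image size
    S = I * np.conj(G) / (np.abs(G) ** 2 + eps)  # stabilized division I / G
    return np.real(np.fft.ifft2(S))
```

  • Equivalently, the kernel K of Equations 3 and 4 would be `np.real(np.fft.ifft2(1.0 / G))`, applied to the image by convolution.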
  • The controller 7 reads the point spread function stored in the memory 108 and executes the double-image removal processing. First, the controller 7 deforms the captured image by distortion processing so that the double images are aligned in the y-axis direction, which is the predetermined direction, converting the captured image into a first processed image based on the first point spread function. Then, based on the point spread function of the double image in the first processed image, the double image is removed, and the result is corrected by the inverse distortion processing to produce a restored image.
  • FIGS. 3A to 3D are diagrams explaining the procedure for removing double images by distortion processing.
  • A panel on which a plurality of point images, shown in FIG. 3D, are drawn is captured by the camera 11 through the windshield 25.
  • Due to the influence of the windshield 25, the captured image becomes a double image as shown in FIG. 3A, and each point image appears as two point images.
  • First, the deviation in the x-axis direction between each pair of point images is corrected.
  • Next, the interval in the y-axis direction between the two point images is made uniform across the image; this is done by scaling in the y-axis direction.
  • This distortion processing may be performed based on the point spread function representing the influence of the windshield 25 on the captured image. After the distortion processing, every pair of point images is separated by the same interval in the y-axis direction (FIG. 3B), so the same one-dimensional kernel K, which depends only on the y-coordinate, can be applied to the entire captured image. Processing by the controller 7 is simplified because the same one-dimensional kernel K can be used for all pixels. Removing the double images from the image of FIG. 3B yields the point images shown in FIG. 3C. Finally, applying the inverse of the deformation performed from FIG. 3A to FIG. 3B restores the image to the original image of FIG. 3D, from which the double image has been removed.
  • Because the kernel is applied as a one-dimensional integral, the amount of computation by the controller 7 is reduced and the processing speed is increased. If the alignment of the two point images caused by the windshield 25 deviated from the y-axis direction, the integration would have to be performed over a long and narrow two-dimensional area. By using the deformed image, the integration with respect to the x-coordinate is eliminated, reducing the arithmetic processing load of the controller 7.
  • The first processed image is deformed so that the distance in the y-axis direction between the double images is uniform. This allows the double images to be removed with a single kernel, reducing the computational effort, as the sketch below illustrates.
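  • A minimal sketch of this step (Python/NumPy, names illustrative): with a constant double-image offset along y, the removal reduces to one short one-dimensional convolution per column with a single kernel K.

```python
import numpy as np

def apply_1d_kernel(first_img: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Apply the same 1-D restoration kernel K to every column of the
    first processed image: a per-column 1-D integral instead of a
    per-pixel 2-D integral."""
    out = np.empty_like(first_img, dtype=float)
    for x in range(first_img.shape[1]):
        out[:, x] = np.convolve(first_img[:, x], K, mode="same")
    return out
```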
  • If the point spread function g(x, y) is known, the original image s(x, y), from which double images have been removed, can be restored from the captured image i(x, y). If g(x, y) is not known, the second point spread function can be estimated from the photographed image of FIG. 3D, and the original image s(x, y) can then be obtained.
  • The second point spread function described above is determined based on the first point spread function. It may also be determined based on the first point spread function together with the deformation applied by the distortion processing.
  • The imaging device 1 may include a light-emitting element (LED: Light Emitting Diode) that emits infrared light.
  • The captured image described above may be a difference image between a first image captured without illuminating the driver and a second image captured while illuminating the driver. This removes the influence of light transmitted through the windshield 25.
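  • A minimal sketch of the difference-image step (assuming 8-bit frames; names illustrative): the frame captured without the infrared illumination contains only light transmitted through the windshield, so subtracting it from the illuminated frame leaves the reflected image of the driver.

```python
import numpy as np

def illumination_difference(led_off: np.ndarray, led_on: np.ndarray) -> np.ndarray:
    """Subtract the unlit frame from the lit frame to cancel transmitted light."""
    diff = led_on.astype(np.int32) - led_off.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)
```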
  • The acquisition unit 3 is configured to acquire the eye-position data sequentially transmitted by the imaging device 1.
  • The illuminator 4 can be configured to illuminate the display panel 5 in a planar manner, and may include a light source, a light guide plate, a diffusion plate, a diffusion sheet, and the like.
  • The illuminator 4 emits illumination light from its light source, makes the light uniform in the surface direction of the display panel 5 by means of the light guide plate, diffusion plate, diffusion sheet, and the like, and may be configured to emit the homogenized light toward the display panel 5.
  • The display panel 5 may be a display panel such as a transmissive liquid crystal display panel.
  • The display panel 5 has a plurality of partitioned areas on a planar active area, and the active area is configured to display parallax images.
  • The parallax images include a left-eye image (first image) and a right-eye image (second image) having parallax with respect to the left-eye image.
  • The plurality of partitioned regions are regions partitioned in a first direction x and in a direction y perpendicular to the first direction within the plane of the active area.
  • The first direction x may be, for example, the horizontal direction, and the direction y orthogonal to it may be, for example, the vertical direction.
  • The direction orthogonal to the horizontal and vertical directions may be referred to as the depth direction.
  • The horizontal direction is represented as the x-axis direction, the vertical direction as the y-axis direction, and the depth direction as the z-axis direction.
  • The active area comprises a plurality of sub-pixels arranged in a grid along the horizontal and vertical directions.
  • Each of the plurality of sub-pixels corresponds to one of the colors R (red), G (green), and B (blue), and a set of three sub-pixels R, G, and B can constitute one pixel. One picture element may also be referred to as one pixel.
  • A plurality of sub-pixels forming one pixel may be arranged horizontally, and sub-pixels of the same color may be aligned vertically.
  • The horizontal length Hpx of each of the plurality of sub-pixels may be the same, and the vertical length Hpy of each of the plurality of sub-pixels may likewise be the same.
  • The display panel 5 is not limited to a transmissive liquid crystal panel; other display panels, such as organic EL panels, can also be used.
  • Transmissive display panels include, in addition to liquid crystal panels, MEMS (Micro Electro Mechanical Systems) shutter display panels; self-luminous display panels include organic EL (electro-luminescence) and inorganic EL display panels.
  • If the display panel 5 is a self-luminous display panel, the stereoscopic image display device 2 does not need to include the illuminator 4.
  • The parallax barrier 6 is configured to define the ray direction of the image light of the parallax images emitted from the display panel 5.
  • As shown in FIG. 2, the parallax barrier 6 has a plane along the active area and is separated from the active area by a predetermined distance (gap) g.
  • The parallax barrier 6 may be located on the opposite side of the illuminator 4 with respect to the display panel 5, or on the illuminator 4 side of the display panel 5.
  • The parallax barrier 6 is an optical panel configured to define the traveling direction of incident light.
  • When the parallax barrier 6 is positioned closer to the illuminator 4 than the display panel 5 is, the light emitted from the illuminator 4 is incident on the parallax barrier 6 and then on the display panel 5. In this case, the parallax barrier 6 blocks or attenuates part of the light emitted from the illuminator 4 and transmits the rest toward the display panel 5, and the display panel 5 emits the incident light traveling in the direction defined by the parallax barrier 6 as image light traveling in the same direction.
  • When the parallax barrier 6 is positioned on the emission side of the display panel 5, it is configured to block or attenuate part of the image light emitted from the display panel 5 and transmit the other part toward the left eye 31L or right eye 31R of the driver 13.
  • The display controller 107 is configured to store, in the memory 108, position data indicating the position (actually measured position) of the eye 31 acquired by the acquisition unit 103, together with the order in which the position data was acquired.
  • The memory 108 sequentially stores the measured positions of the eye 31 based on each of a plurality of captured images captured at a predetermined imaging time interval, and may also store the order in which the eye 31 was at each measured position.
  • The predetermined imaging time interval is the time between one captured image and the next, and can be set appropriately according to the performance and design of the camera 11.
  • The display controller 107 is configured as a processor, for example, and may include one or more processors.
  • The processors may include a general-purpose processor configured to load a specific program and perform a specific function, and a dedicated processor specialized for specific processing.
  • A dedicated processor may include an application-specific integrated circuit (ASIC), and the processors may include a programmable logic device (PLD); a PLD may include an FPGA (Field-Programmable Gate Array).
  • The display controller 107 may be an SoC (System-on-a-Chip) or an SiP (System In a Package) in which one or more processors cooperate.
  • The memory 108 is composed of an arbitrary storage device such as RAM (Random Access Memory) or ROM (Read Only Memory), and is configured to store information received by the input unit, information converted by the controller 7, and the like.
  • The memory 108 is configured to store position information of the eye 31 of the driver 13 obtained by the input unit.
  • FIG. 6 is a diagram showing the paths of light reflected by the windshield 25 of the imaging device 1.
  • The windshield 25 has a translucent glass layer 25a and an infrared reflective film 25b interposed as an intermediate film within the glass layer 25a. Light is reflected both at the surface of the glass layer 25a and at the interface between the glass layer 25a and the infrared reflective film 25b before entering the imaging device 1; the image of the driver received by the imaging device 1 is therefore a double image.
  • The controller 7 is configured to be able to remove such double images.
  • Since the spread function of the double image is not local, the kernel grows with the image size. However, when the two images of the double image differ in intensity, the effective kernel becomes small, so in practice the removal can be performed by a convolution integral of small size.
  • FIG. 7 is a diagram showing the relationship between the thickness T of the windshield 25 and the deviation amount D of the double image.
  • In the image captured by the camera 11, the directly reflected image and the image reflected at the second interface are shifted by D and superimposed, creating a double image.
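  • The double image can thus be modeled as an impulse plus a weaker impulse shifted by D. A sketch of building such a one-dimensional PSF follows (the brightness ratio of the second reflection is an illustrative parameter, not a value given in the disclosure):

```python
import numpy as np

def double_image_psf(shift_px: int, ratio: float, length: int) -> np.ndarray:
    """1-D PSF of a double image: a unit impulse plus a copy shifted by
    D = shift_px pixels with relative brightness ratio < 1."""
    g = np.zeros(length)
    g[0] = 1.0
    g[shift_px] = ratio
    return g / g.sum()   # normalize to preserve overall brightness
```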
  • FIG. 8 is a flowchart explaining the operation of the imaging device 1.
  • The number of pixels between the double images is proportional to the reciprocal of the distance from the camera 11 to the subject, so a kernel must be prepared for each subject distance. Furthermore, the image formed by transmitted light, which does not form a double image, may be degraded by this processing, which targets the driver's image formed as a double image.
  • In step S1, the light-emitting element is turned on and the imaging device 1 captures an image of the driver; in step S2, the light-emitting element is turned off and the driver is imaged again.
  • In step S3, the controller 7 takes in the two captured images from the imaging device 1, calculates the difference between them, and removes the image formed by transmitted light. Using a kernel for a reference subject distance, some of the double images are then removed.
  • In step S5, the pupil positions of the driver are detected from the processed captured image, and in step S6 the distance to the subject is calculated from the detected pupil separation using a standard interocular distance E.
  • The kernel is then modified according to the calculated distance, and the remaining double images are removed using the modified kernel.
  • The pupil positions are detected again, and the distance to the subject is obtained from the pupil separation.
  • By applying a kernel suited to the double image as determined by the subject distance, the double images are removed, and the resulting clear image allows accurate pupil-position detection results to be used in a three-dimensional head-up display (3D HUD), a driver monitoring system (DMS), or an augmented-reality head-up display (AR-HUD).
  • The image formed by transmitted light is erased by subtracting the images captured with the light-emitting element on and off, after which the double-image processing is performed.
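  • The distance computation in step S6 follows from a pinhole-camera relation: the pupil separation in the image shrinks in proportion to 1/distance. A hedged sketch (the 63 mm default for the standard interocular distance E is illustrative, not a value stated in the disclosure):

```python
def subject_distance(pupil_sep_px: float, focal_px: float,
                     interocular_m: float = 0.063) -> float:
    """Estimate camera-to-face distance from the detected pupil
    separation in pixels and a standard interocular distance E."""
    return focal_px * interocular_m / pupil_sep_px
```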
  • FIG. 9 is a diagram showing the difference in the amount of deviation of the double image depending on the reflection position on the windshield 25.
  • FIG. 10 is a diagram showing changes in the width of the double image due to the curvature of the windshield 25.
  • The width of the double image also changes due to deformation, called undulation, in which the windshield 25 has different curvatures in the x-, y-, and z-axis directions.
  • FIG. 11 is a diagram showing a photographing panel on which a measurement pattern is drawn. A panel on which a plurality of parallel lines are drawn as a photographing pattern is prepared, and an image of it is captured through the windshield 25 (FIG. 12). From this captured image, the distance and the brightness ratio between the double images are calculated at each location, and image processing that shifts the image along one axis yields a processed image in which the double images are aligned along a single axis. Once the distance and the brightness ratio between the double images are known in this way, a kernel for each location can be created.
  • The distribution of kernels over the entire processed image can then be obtained by interpolation processing. Applying these kernels to their corresponding locations reduces the amount of computation and yields a restored image from which double images have been removed.
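  • A sketch of the interpolation step (Python/NumPy; inverse-distance weighting is one plausible scheme, chosen here for illustration, since the disclosure only says "interpolation processing"): kernel parameters measured at the pattern locations are spread over the whole image, and a per-location kernel is then built from the interpolated shift and brightness ratio.

```python
import numpy as np

def interpolate_kernel_params(sample_xy: np.ndarray,    # (N, 2) measured locations
                              sample_shift: np.ndarray,  # (N,) double-image shifts
                              sample_ratio: np.ndarray,  # (N,) brightness ratios
                              query_xy: np.ndarray):     # (M, 2) pixel locations
    """Inverse-distance weighting of measured (shift, ratio) pairs."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d + 1e-6)                 # avoid division by zero at sample points
    w /= w.sum(axis=1, keepdims=True)
    return w @ sample_shift, w @ sample_ratio
```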
  • In this way, the driver's eyes can be detected with high accuracy.
  • The imaging device according to the present disclosure can be implemented in the following configuration (1).
  • (1) An imaging device comprising: a windshield; a camera configured to capture an image of at least the eyes of a driver of a mobile object through the windshield; and a controller that controls the camera, wherein the controller performs a transform that deforms the image captured by the camera into a first processed image in which double images are positioned side by side in a predetermined direction, based on a first point spread function representing the effect of the windshield on the image captured by the camera; corrects the first processed image into a second processed image from which the double image has been removed, based on a second point spread function representing the features of the double image in the first processed image; and performs the inverse of the transform on the second processed image to generate a third processed image from which the effects of the first point spread function have been removed.
  • The image processing method according to the present disclosure can be implemented in the following configurations (2) to (6).
  • (2) An image processing method comprising: providing a windshield, a camera configured to image at least the driver's eyes through the windshield, and a controller for controlling the camera; performing, by the controller, a transform that deforms the image captured by the camera into a first processed image in which double images are positioned side by side in a predetermined direction, based on a first point spread function representing the effect of the windshield on the image captured by the camera; correcting the first processed image into a second processed image from which the double image has been removed, based on a second point spread function representing the features of the double image in the first processed image; and performing the inverse of the transform on the second processed image to generate a third processed image from which the effects of the first point spread function have been removed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention comprises a windshield, a camera configured to capture an image of at least the eyes of a driver of a moving body through the windshield, and a controller which controls the camera. The controller performs a transform that deforms the image captured by the camera into a first processed image in which double images are positioned side by side in a prescribed direction, based on a first point spread function representing the effect of the windshield; corrects the first processed image into a second processed image from which the double images have been removed, based on a second point spread function representing features of the double images in the first processed image; and performs the inverse of the transform on the second processed image, thereby generating a third processed image from which the effects of the first point spread function have been eliminated.

Description

Imaging device and image processing method
The present disclosure relates to an imaging device and an image processing method.
A conventional imaging device is described in, for example, Patent Document 1.
JP 2021-138319 A
An imaging device according to the present disclosure includes a windshield, a camera configured to capture an image of at least the eyes of a driver of a mobile object through the windshield, and a controller that controls the camera. The controller performs a transform that deforms the image captured by the camera into a first processed image in which the double images are positioned side by side in a predetermined direction, based on a first point spread function representing the effect of the windshield on the captured image; corrects the first processed image into a second processed image from which the double image has been removed, based on a second point spread function representing the features of the double image in the first processed image; and performs the inverse of the transform on the second processed image to generate a third processed image from which the effect of the first point spread function has been removed.
An image processing method according to the present disclosure prepares a windshield, a camera configured to capture an image of at least a driver's eyes through the windshield, and a controller that controls the camera. The controller performs a transform that deforms the image captured by the camera into a first processed image in which the double images are positioned side by side in a predetermined direction, based on a first point spread function representing the influence of the windshield; corrects the first processed image into a second processed image from which the double image has been removed, based on a second point spread function representing the features of the double image in the first processed image; and performs the inverse of the transform on the second processed image to generate a third processed image from which the influence of the first point spread function has been removed.
Objects, features, and advantages of the present disclosure will become more apparent from the following detailed description and drawings.
FIG. 1 is a diagram schematically showing the overall configuration of a stereoscopic image display device including an imaging device according to an example of an embodiment of the present disclosure. FIG. 2 is a diagram schematically showing the configuration of a three-dimensional projection device. FIGS. 3A to 3D are diagrams showing the double-image removal procedure using distortion processing. FIG. 4 is a diagram showing the configuration of the three-dimensional projection device. FIG. 5 is a diagram showing a mobile object according to an embodiment of the present disclosure. FIG. 6 is a diagram showing the paths of light reflected by the windshield of the imaging device. FIG. 7 is a diagram showing the relationship between the thickness of the windshield and the amount of deviation between double images. FIG. 8 is a flowchart for explaining the operation of the imaging device. FIG. 9 is a diagram showing the difference in the amount of deviation of double images depending on the reflection position on the windshield. FIG. 10 is a diagram showing changes in the width of a double image due to the curvature of the windshield. FIG. 11 is a diagram showing a photographing panel on which a measurement pattern is drawn. FIG. 12 is a diagram showing an image to be processed of the photographing panel shown in FIG. 11.
For example, as described in Patent Document 1 above, an imaging device mounted on a vehicle is conventionally known that includes a camera for imaging the driver's eyes through the windshield of the vehicle.
An image captured through the windshield contains a double image and is therefore unclear, yet the driver's eyes must be detected with high accuracy.
Embodiments of the present disclosure are described below with reference to the drawings. The drawings used in the following description are schematic; dimensions, ratios, and the like in the drawings do not necessarily match the actual ones.
FIG. 1 is a diagram schematically showing the overall configuration of a stereoscopic image display device 2 including an imaging device 1 according to an example of an embodiment of the present disclosure. The stereoscopic image display device 2 includes an imaging device 1 and a three-dimensional projection device 12. The three-dimensional projection device 12 includes a display controller 107, an acquisition unit 103, a memory 108, an illuminator 4, a display panel 5, and a parallax barrier 6. The stereoscopic image display device 2 may be mounted on a mobile object 10. The imaging device 1 includes a windshield 25, a camera 11 configured to capture an image of at least the eyes 31 of a driver 13 of the mobile object 10 through the windshield 25, and a controller 7 that controls the camera 11.
The controller 7 is configured as a processor, for example, and may include one or more processors. The processors may include a general-purpose processor configured to load a specific program and perform a specific function, and a dedicated processor specialized for specific processing. A dedicated processor may include an application-specific integrated circuit (ASIC). The processors may include a programmable logic device (PLD), and a PLD may include an FPGA (Field-Programmable Gate Array). The controller 7 may be an SoC (System-on-a-Chip) or an SiP (System In a Package) in which one or more processors cooperate.
A "mobile object" in the present disclosure may include, for example, a vehicle, a ship, an aircraft, and the like. Vehicles may include, for example, automobiles, industrial vehicles, railroad vehicles, utility vehicles, fixed-wing aircraft that travel on runways, and the like. Automobiles may include, for example, passenger cars, trucks, buses, motorcycles, trolleybuses, and the like. Industrial vehicles may include, for example, industrial vehicles for agriculture and construction, such as forklifts and golf carts. Industrial vehicles for agriculture may include, for example, tractors, tillers, transplanters, binders, combines, lawn mowers, and the like. Industrial vehicles for construction may include, for example, bulldozers, scrapers, excavators, mobile cranes, dump trucks, road rollers, and the like. Vehicles may include those powered by humans. Vehicle classification is not limited to the above examples; for example, automobiles may include industrial vehicles that can travel on roads, and the same vehicle may fall under multiple classifications. Ships may include, for example, personal watercraft, boats, tankers, and the like. Aircraft may include, for example, fixed-wing aircraft, rotary-wing aircraft, and the like.
A case where the mobile object 10 is a passenger car is described below as an example. The mobile object 10 is not limited to a passenger car and may be any of the above examples. The camera 11 may be attached to the mobile object 10 and is configured to capture an image including the face of the driver 13 of the mobile object 10. The mounting position of the camera 11 is arbitrary, inside or outside the mobile object 10; for example, the camera 11 may be located within the dashboard of the mobile object 10.
The camera 11 may be a visible-light camera or an infrared camera, and may have the functions of both. The camera 11 may include, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor.
The position of the eye 31 of the driver 13 may be the pupil position. In the imaging device 1 of the present disclosure, the target image is an image captured by the camera 11. The captured image includes the eyes 31 of the driver 13, or a part of the face whose relative positional relationship with the eyes 31 of the driver 13 is specified. The eyes 31 of the driver 13 included in the captured image may be both eyes, or only the right eye 31R or the left eye 31L. The part of the face whose relative positional relationship with the eyes 31 of the driver 13 is specified may be, for example, the eyebrows or the nose.
The imaging device 1 is configured to capture an image of at least the eyes 31 of the driver 13 using light reflected via the windshield 25. The imaging device 1 is configured to acquire an image of a subject and generate an image of the subject. The camera 11 of the imaging device 1 includes an imaging element, which may include, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor. The imaging device 1 is arranged so that the face of the driver 13 is located on the subject side, and is configured to detect the position of at least one of the left eye 31L and the right eye 31R of the driver 13. For example, the imaging device 1 may be configured to use a predetermined position as an origin and detect the direction and amount of displacement of the eye position from that origin. The imaging device 1 may be configured to detect the position of at least one of the left eye 31L and the right eye 31R from the image captured by the camera 11, or to use two or more cameras 11 to detect that position as coordinates in three-dimensional space.
The imaging device 1 does not have to include the camera 11. The imaging device 1 may include an input terminal configured to receive signals from a camera 11 external to the device; the external camera 11 may be connected to the input terminal directly, or indirectly via a shared network. An imaging device 1 that does not include a camera 11 may include an input terminal to which the camera 11 inputs a video signal, and may be configured to detect the position of at least one of the left eye 31L and the right eye 31R from the video signal input to the input terminal.
The imaging device 1 may include a sensor, which may be an ultrasonic sensor, an optical sensor, or the like. The imaging device 1 may be configured to detect the position of the head of the driver 13 with the sensor and to detect the position of at least one of the left eye 31L and the right eye 31R based on the head position, or to detect that position as coordinates in three-dimensional space using one or more sensors.
The imaging device 1 may be configured to detect the movement distance of the left eye 31L and the right eye 31R along the eyeball arrangement direction based on the detected position of at least one of them.
The imaging device 1 is configured to output position information indicating the positions of the eyes 31 of the driver 13 to the acquisition unit 103, and may output the position information via wired or wireless communication, or via a communication network such as CAN (Controller Area Network).
 The camera 11 is configured to be capable of capturing a first captured image including an image of the area where the face of the driver 13 is assumed to be. In this embodiment, the driver 13 may be, for example, the driver of the movable body 10, which is a passenger car. The area where the driver's face is assumed to be may be, for example, near the upper part of the driver's seat. The camera 11 may be attached to the movable body 10. The mounting position of the camera 11 is arbitrary, inside or outside the movable body 10.
 A captured image captured by the camera 11 is input to the display controller 107 via the acquisition unit 3. The controller 7 is configured to be able to detect the position of at least one of the eyes 31L and 31R of the driver 13 based on the captured image. The detection result of the imaging device 1 may be coordinate information indicating the pupil position of the left eye 31L or the right eye 31R of the driver 13.
 The imaging device 1 is configured to output coordinate information on the detected pupil position of the left eye 31L or the right eye 31R to the three-dimensional projection device 12. Based on this coordinate information, the three-dimensional projection device 12 may be configured to control the image it projects.
 In the imaging device 1, the controller 7 may be an external device. In that case, the camera 11 may be configured to output the captured image to the external controller 7. The external controller 7 may be configured to detect the pupil position of the left eye 31L or the right eye 31R of the driver 13 from the image output by the camera 11, and to output coordinate information on the detected pupil position to the stereoscopic image display device 2. Based on this coordinate information, the stereoscopic image display device 2 may be configured to control the image to be projected. The camera 11 may output captured images to the external controller 7 via wired or wireless communication, and the external controller 7 may output the coordinate information to the stereoscopic image display device 2 via wired or wireless communication. Wired communication may include, for example, CAN.
 FIG. 4 is a diagram showing the configuration of the three-dimensional projection device 12, and FIG. 5 is a diagram showing a movable body according to an embodiment of the present disclosure. The position of the three-dimensional projection device 12 is arbitrary, inside or outside the movable body 10. For example, the three-dimensional projection device 12 may be located inside the dashboard of the movable body 10. The three-dimensional projection device 12 is configured to emit image light toward the windshield 25.
 The windshield 25 is a reflector configured to reflect the image light emitted from the three-dimensional projection device 12. The image light reflected by the windshield 25 reaches the eyebox 16. The eyebox 16 is a region in real space where the eyes 31L and 31R of the driver 13 are assumed to be able to exist, taking into account, for example, the physique, posture, and changes in posture of the driver 13. The shape of the eyebox 16 is arbitrary and may include a planar or three-dimensional region. The dashed line shown in FIG. 2 indicates the path along which at least part of the image light emitted from the three-dimensional projection device 12 reaches the eyebox 16. The path traveled by image light is also called the optical path. The image light emitted from the three-dimensional projection device 12 represents a parallax image including a right-eye image and a left-eye image. When the eyes 31L and 31R of the driver 13 are positioned within the eyebox 16, the driver 13 can visually recognize the virtual image 14 by the image light of the parallax image reaching the eyebox 16. The virtual image 14 is positioned on a path (indicated by a dashed line in FIG. 1) obtained by extending the path from the windshield 25 to the eyes 31L and 31R forward of the movable body 10. The three-dimensional projection device 12 can thus function as a head-up display by causing the driver 13 to visually recognize the virtual image 14. In FIG. 1, the direction in which the eyes 31L and 31R of the driver 13 are aligned corresponds to the x-axis direction, and the vertical direction corresponds to the y-axis direction. The imaging range of the camera 11 includes the eyebox 16.
 At least part of the image light emitted from the three-dimensional projection device 12 reaches the windshield 25 via the optical member 110 (see FIG. 4). The image light is reflected by the windshield 25 and reaches the eyes 31 of the driver 13. The eyes 31 of the driver 13 can visually recognize the first virtual image 14a located on the negative z-axis side of the windshield 25. The first virtual image 14a corresponds to the image displayed by the three-dimensional projection device 12. The opening regions 6b and the light-shielding surfaces 6a of the parallax barrier 6 form a second virtual image 14b in front of the windshield 25, on the windshield 25 side of the first virtual image 14a. The driver 13 can visually recognize the image as if the display panel were present at the position of the first virtual image 14a and the parallax barrier 6 were present at the position of the second virtual image 14b.
 The three-dimensional projection device 12 causes the image light reflected by the windshield 25 to reach the left eye 31L and the right eye 31R of the driver 13. That is, the three-dimensional projection device 12 causes the image light to travel from the stereoscopic image display device 2 to the left eye 31L and the right eye 31R of the driver 13 along the optical path 140 indicated by the dashed line in FIG. 4. The driver 13 can visually recognize the image light arriving along the optical path 140 as the virtual image 14. By controlling the display according to the positions of the left eye 31L and the right eye 31R of the driver 13, the stereoscopic image display device 2 can provide stereoscopic vision that follows the driver's movement.
 Part of the configuration of the three-dimensional projection device 12 may be shared with other devices or components of the movable body 10. For example, the movable body 10 may also use the windshield 25 as part of the configuration of the imaging device 1.
 The display panel 5 is not limited to a transmissive display panel; other display panels such as a self-luminous display panel can also be used. Transmissive display panels include, in addition to liquid crystal panels, MEMS (Micro Electro Mechanical Systems) shutter display panels. Self-luminous display panels include organic EL (electro-luminescence) and inorganic EL display panels. When a self-luminous display panel is used as the display panel 5, the illuminator 4 becomes unnecessary, and the parallax barrier 6 is positioned on the side of the display panel 5 from which the image light is emitted.
 A stereoscopic image display device 2 according to an embodiment of the present disclosure includes the imaging device 1 and the three-dimensional projection device 12, as shown in FIG. 1.
 The imaging device 1 may be configured to acquire captured images from the camera 11, which captures the space where the driver's eyes are expected to be at a fixed imaging interval (e.g., 20 fps). The imaging device 1 is configured to sequentially detect images of the left eye (first eye) 31L and the right eye (second eye) 31R from the captured images acquired from the camera 11, and to detect the positions of the left eye 31L and the right eye 31R in real space based on their images in image space. The imaging device 1 may be configured to detect the positions of the left eye 31L and the right eye 31R as coordinates in three-dimensional space from the image captured by a single camera 11, or from the images captured by two or more cameras 11. The imaging device 1 may include the camera 11.
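 As an illustration of this per-frame detection loop, the following is a minimal sketch in Python. The disclosure does not specify a detection algorithm, so OpenCV's stock Haar eye cascade, the camera index, and the frame-rate setting are all assumptions made here for illustration only.

```python
import cv2

# Minimal sketch of per-frame eye detection, assuming OpenCV's stock
# Haar cascade; the actual detection method is not specified above.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)        # hypothetical camera index
cap.set(cv2.CAP_PROP_FPS, 20)    # the ~20 fps interval mentioned above

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is (x, y, w, h) in image-space pixels; converting to
    # real-space coordinates needs a camera model or a second camera.
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in eyes:
        cx, cy = x + w // 2, y + h // 2   # rough pupil position in pixels
        print("eye at pixel", (cx, cy))

cap.release()
```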
 The stereoscopic image display device 2 includes the acquisition unit 103, the illuminator 4, the display panel 5, the parallax barrier 6 as an optical element, the memory 108, and the display controller 107. Light is reflected at both the outer surface and the inner surface of the windshield 25, so the image captured by the imaging device 1 appears, for example, as a double image due to these multiple reflections. Since an accurate position of the left eye 31L or the right eye 31R cannot be detected from such an image, the controller 7 of the imaging device 1 is configured to remove the double image.
 FIG. 6 is a diagram showing the paths of light reflected by the windshield 25 toward the imaging device 1. When the eye 31 of the driver 13 is imaged by the camera 11 via the windshield 25, light is reflected at the outer surface and the inner surface of the windshield 25, as shown in FIG. 5. Let s(x, y) be the original image without the double image, g(x, y) be the point spread function that defines the double image, i(x, y) be the captured image containing the double image, and "*" denote the convolution operation. Then the following equation (1) holds.
    i(x, y) = s(x, y) * g(x, y)  …(1)
 An xy coordinate system is set with the origin (0, 0) at the upper-left corner of the image, the x-axis extending horizontally and the y-axis extending vertically. The point spread function may be determined using, for example, a captured image of a pattern for obtaining the point spread function, such as a display panel on which a point image is displayed, or it may be calculated based on the shape of the windshield 25.
 Let F denote the Fourier transform of a function and F⁻¹ the inverse Fourier transform, and let I(u, v) and G(u, v) denote the Fourier transforms of i(x, y) and g(x, y), respectively. The original image s(x, y) can then be obtained by equation (2) below.
    s(x, y) = F⁻¹(I(u, v) / G(u, v))  …(2)
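 Equation (2) maps directly onto a frequency-domain deconvolution. A minimal numpy sketch follows; the small regularizer eps is an addition made here (a Wiener-style division), since the bare I/G of equation (2) amplifies noise wherever G(u, v) is near zero.

```python
import numpy as np

def remove_double_image_fft(i_img, g_psf, eps=1e-6):
    """Recover s(x, y) from i = s * g per equation (2).

    i_img: captured image containing the double image (2-D float array).
    g_psf: point spread function g(x, y), zero-padded to i_img's shape.
    eps:   small regularizer added here; the bare division I/G blows up
           wherever |G(u, v)| is close to zero.
    """
    I = np.fft.fft2(i_img)
    G = np.fft.fft2(g_psf)
    S = I * np.conj(G) / (np.abs(G) ** 2 + eps)  # regularized form of I / G
    return np.real(np.fft.ifft2(S))
```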
 Alternatively, the original image s(x, y) can be obtained by equation (3) below.
    s(x, y) = K(x, y) * i(x, y)  …(3)
 Here, the kernel K in equation (3) can be expressed by equation (4) below.
    K(x, y) = F⁻¹(1 / G(u, v))  …(4)
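 Equations (3) and (4) can be sketched the same way: build K once from the point spread function, then deconvolve by convolution in the image domain. scipy is assumed to be available, and eps is again a regularizer added here; in practice K would be cropped to a small support, in line with the remark on kernel size later in this description.

```python
import numpy as np
from scipy.signal import fftconvolve

def make_inverse_kernel(g_psf, eps=1e-6):
    # K(x, y) = F^-1(1 / G(u, v)), equation (4), with regularizer eps.
    G = np.fft.fft2(g_psf)
    K = np.fft.ifft2(np.conj(G) / (np.abs(G) ** 2 + eps))
    return np.real(K)

def remove_double_image_spatial(i_img, K):
    # s(x, y) = K(x, y) * i(x, y), equation (3).
    return fftconvolve(i_img, K, mode="same")
```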
 When the original image s(x, y) is obtained using equations (2) and (3) above, two-dimensional operations must be executed, which places a heavy load on the controller 7. To reduce this load, the following processing is executed. The controller 7 reads the point spread function stored in the memory 108 and executes the double-image removal processing. The controller 7 deforms the captured image by distortion processing, based on the first point spread function, so that the double images are aligned in a predetermined direction (the y-axis direction), thereby converting it into a first processed image; removes the double image based on the point spread function of the double image in the first processed image; and then corrects the result by the inverse of the earlier distortion processing to produce a restored image.
 To reduce the load on the controller 7, in this embodiment a measurement plate on which a plurality of point images are drawn is prepared and imaged by the camera 11, as shown in FIGS. 3A to 3D described below, and the result is used to determine the point spread function. As a result, even if the sizes of the point images differ, the overall distribution can be obtained by applying a one-dimensional kernel to the whole image and interpolating between the point images, so the integration range is small and the amount of computation can be reduced.
 FIGS. 3A to 3D are diagrams for explaining the procedure for removing the double image by distortion processing. In this embodiment, as an example, a panel on which the plurality of point images shown in FIG. 3D are drawn is imaged by the camera 11 through the windshield 25. The captured image becomes a double image under the influence of the windshield 25, as shown in FIG. 3A, with each point image becoming two point images. The x-direction shift of each such pair of point images is corrected by distortion processing and, as shown in FIG. 3B, the y-direction spacing of the two point images is made uniform. This is done by stretching the image in the y-axis direction. The distortion processing may be performed based on the point spread function representing the influence of the windshield 25 on the captured image. After the distortion processing, the two point images of each pair are positioned at the same spacing in the y-axis direction, so the same one-dimensional kernel K, which depends only on the y coordinate, can be applied to the entire captured image. Because the same one-dimensional kernel K can be used for all pixels, the processing by the controller 7 is simplified. Removing the double images from the image of FIG. 3B yields the point images shown in FIG. 3C. Then, by reversing the processing from FIG. 3A to FIG. 3B, that is, moving back by the amounts used for the alignment along the y-axis, the image can be restored to the original image shown in FIG. 3D, from which the double image has been removed.
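 The warp, one-dimensional deconvolution, and inverse warp of FIGS. 3A to 3D might be sketched as follows. The per-column vertical scaling map y_scale is a simplification of the distortion processing introduced here, and scipy's map_coordinates stands in for whatever resampling the implementation actually uses.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deconvolve_column(col, k1d):
    # The same 1-D kernel K(y) is applied to every column
    # (equation (3) reduced to one dimension).
    return np.convolve(col, k1d, mode="same")

def remove_double_image_warped(i_img, y_scale, k1d):
    """y_scale[x]: per-column vertical stretch factor that makes the
    double-image spacing uniform (our simplification of the distortion
    processing)."""
    h, w = i_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Forward warp: resample the captured image so the spacing is uniform.
    warped = map_coordinates(i_img, [ys / y_scale[xs], xs], order=1)
    # One shared 1-D kernel along y now removes the double image everywhere.
    clean = np.apply_along_axis(deconvolve_column, 0, warped, k1d)
    # Inverse warp back to the original geometry.
    return map_coordinates(clean, [ys * y_scale[xs], xs], order=1)
```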
 Because the controller 7 performs a one-dimensional integration, the amount of calculation is small and the operation can be sped up. If the alignment of the two point images caused by the windshield 25 deviated from the y-axis direction, the integration would have to be performed over an elongated two-dimensional region; by using an image processed as shown in FIG. 3B, as described above, the integration region in the x direction is kept small and the processing load on the controller 7 is reduced.
 In the controller 7, the first processed image is transformed so that the y-direction distance between the double images is the same everywhere. This allows the double image to be removed with a single kernel, reducing the amount of computation.
 If the point spread function g(x, y), the first point spread function representing the double image caused by the windshield 25, is known as described above, the original image s(x, y) with the double image removed can be restored by a convolution operation on the captured image i(x, y) obtained by the camera. If the point spread function g(x, y) is not known, the second point spread function g(x, y) may be estimated from an image of the pattern of FIG. 3D, and the original image s(x, y) obtained from it.
 The second point spread function described above is determined based on the first point spread function. The second point spread function may also be determined based on the first point spread function and the deformation applied by the distortion processing.
 The imaging device 1 may include a light-emitting diode (LED) that emits infrared light. The captured image described above is a difference image between a first image captured without illuminating the driver and a second image captured while illuminating the driver. This removes the influence of light transmitted through the windshield 25.
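 The difference-image step is straightforward. A sketch follows, assuming two already-aligned frames captured with the infrared LED off and on:

```python
import numpy as np

def illumination_difference(img_lit, img_unlit):
    # Second image (LED on) minus first image (LED off): light transmitted
    # through the windshield appears in both frames and cancels, leaving
    # mainly the LED-illuminated reflection of the driver.
    diff = img_lit.astype(np.int32) - img_unlit.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)
```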
 Next, the three-dimensional projection device 12 will be described.
 The acquisition unit 3 is configured to acquire the position data indicating the eye positions, transmitted sequentially by the imaging device 1.
 The illuminator 4 can be configured to illuminate the display panel 5 over its surface. The illuminator 4 may include a light source, a light guide plate, a diffusion plate, a diffusion sheet, and the like. The illuminator 4 emits illumination light from the light source, homogenizes it in the surface direction of the display panel 5 by means of the light guide plate, diffusion plate, diffusion sheet, and the like, and can be configured to emit the homogenized light toward the display panel 5.
 The display panel 5 may be, for example, a transmissive liquid crystal display panel. The display panel 5 has a plurality of partitioned regions on a planar active area. The active area is configured to display a parallax image. The parallax image includes a left-eye image (first image) and a right-eye image (second image) having parallax with respect to the left-eye image. The partitioned regions are regions partitioned in a first direction x and in a direction y orthogonal to the first direction within the plane of the active area. The first direction x may be, for example, the horizontal direction, and the direction y orthogonal to it may be, for example, the vertical direction. The direction orthogonal to both the horizontal and vertical directions may be called the depth direction. In the drawings, the horizontal direction is represented as the x-axis direction, the vertical direction as the y-axis direction, and the depth direction as the z-axis direction.
 One subpixel corresponds to each of the partitioned regions. The active area therefore comprises a plurality of subpixels arranged in a grid along the horizontal and vertical directions.
 Each subpixel can correspond to one of the colors R (red), G (green), and B (blue), and a set of three subpixels R, G, and B can constitute one pixel. One pixel may also be called a picture element. The subpixels constituting one pixel may be arranged horizontally, and subpixels of the same color may be arranged vertically. The horizontal length Hpx of the subpixels may be the same for all subpixels, as may the vertical length Hpy.
 The display panel 5 is not limited to a transmissive liquid crystal panel; other display panels such as organic EL can be used. Transmissive display panels include, in addition to liquid crystal panels, MEMS (Micro Electro Mechanical Systems) shutter display panels. Self-luminous display panels include organic EL (electro-luminescence) and inorganic EL display panels. When the display panel 5 is a self-luminous display panel, the stereoscopic image display device 2 need not include the illuminator 4.
 The parallax barrier 6 is configured to define the ray directions of the image light of the parallax image emitted from the display panel 5. The parallax barrier 6 has a plane along the active area, as shown in FIG. 1, and is separated from the active area by a predetermined distance (gap) g. The parallax barrier 6 may be located on the opposite side of the display panel 5 from the illuminator 4, or on the illuminator 4 side of the display panel 5. The parallax barrier 6 is an optical panel configured to define the traveling direction of incident light. As in the example of FIG. 1, when the parallax barrier 6 is located closer to the illuminator 4 than the display panel 5 is, the light emitted from the illuminator 4 enters the display panel 5 and then the parallax barrier 6. In this case, the parallax barrier 6 is configured to block or attenuate part of the light emitted from the illuminator 4 and transmitted through the display panel 5, and to transmit the rest toward the left eye 31L or the right eye 31R; the display panel 5 emits incident light traveling in the directions defined by the parallax barrier 6 as image light traveling in those same directions. When the display panel 5 is located closer to the illuminator 4 than the parallax barrier 6 is, the light emitted from the illuminator 4 enters the display panel 5 and then the parallax barrier 6. In this case, the parallax barrier 6 is configured to block or attenuate part of the image light emitted from the display panel 5 and to transmit the other part toward the eyes 31 of the driver 13.
 The display controller 107 is configured to store in the memory 108 the position data indicating the position (measured position) of the eye 31 acquired by the acquisition unit 103, together with the order in which the position data was acquired. The memory 108 sequentially stores the measured positions of the eye 31 based on each of a plurality of images captured at a predetermined imaging interval, and may also store the order in which the eye 31 occupied those measured positions. The predetermined imaging interval is the time between the capture of one image and the capture of the next, and can be set appropriately according to the performance and design of the camera 11.
 The display controller 107 is configured, for example, as a processor. The controller 7 may include one or more processors. The processors may include a general-purpose processor that loads a specific program to execute a specific function, and a dedicated processor specialized for specific processing. The dedicated processor may include an application-specific integrated circuit (ASIC). The processor may include a programmable logic device (PLD), and the PLD may include an FPGA (Field-Programmable Gate Array). The controller 7 may be either an SoC (System-on-a-Chip) or an SiP (System In a Package) in which one or more processors cooperate.
 The memory 108 is composed of an arbitrary storage device such as a RAM (Random Access Memory) or a ROM (Read Only Memory). The memory 108 is configured to store information received by the input unit, information converted by the controller 7, and the like. For example, the memory 108 is configured to store the position information of the eye 31 of the driver 13 acquired by the input unit.
 FIG. 6 is a diagram showing the paths of light reflected by the windshield 25 toward the imaging device 1. The windshield 25 has a translucent glass layer 25a and an infrared reflective film 25b interposed in the glass layer 25a as an intermediate film. Light is reflected at the interfaces between the glass layer 25a and the infrared reflective film 25b and enters the imaging device 1. The image of the driver incident on the imaging device 1 is therefore received as a double image. The controller 7 is configured to be able to remove this double image.
 Because the spread function of the double image is not local, the kernel grows as the image size grows. However, when there is an intensity difference between the two components of the double image, the kernel becomes small, so in practice the processing can be done with a small-sized convolution.
 FIG. 7 is a diagram showing the relationship between the thickness T of the windshield 25 and the shift amount D of the double image. In the image seen by the camera 11, content shifted by D is superimposed at the same position, producing a double image. With T the thickness of the windshield 25 and S the incidence angle of the light,
    D = 2 × T × cos(S)  …(5)
 As a verification, with T = 6 mm and S = 13°, a position shifted by D = 10 mm is imaged. Since this is roughly the size of the pupil, it affects pupil detection. Over the camera 11's ±10° field of view, the shift varies between 11.2 mm and 9.2 mm, a 2 mm difference in the x direction. Because of this difference, a single kernel cannot be used. A shift of 10 mm regardless of the subject distance means that, in the captured image, the farther the subject, the narrower the spacing of the double image. It is therefore necessary to measure the distance to the subject and create the optimum kernel.
 FIG. 8 is a flowchart for explaining the operation of the imaging device 1. The number of pixels between the double images is proportional to the reciprocal of the distance from the camera 11 to the subject, so a kernel must be prepared for each subject distance. Also, an image formed by transmitted light, which is not doubled, may conversely be turned into a double image by this processing and degrade the driver image that is to be captured.
 When the processing operation for avoiding such problems starts, in step S1 the illumination device is turned on and the driver is imaged by the imaging device 1; in step S2 the illumination device is turned off and the driver is imaged again by the imaging device 1. In step S3, the controller 7 takes in the two captured images from the imaging device 1, calculates their difference, and removes the transmitted-light image. Then, in step S4, some of the double images are removed with reference to the kernels stored in the memory.
 Next, in step S5, the pupil positions of the driver are detected from the processed captured image; in step S6, the distance to the subject is calculated using the standard interocular distance E; and in step S7, the remaining double images are removed using the kernel corresponding to the calculated distance.
 As a countermeasure, pupil-position detection is performed and the distance to the pupils is obtained from the pupil spacing. A kernel suited to the double image, as determined by the distance to the subject, is applied to remove the double image, and the resulting clear image is used to obtain accurate pupil-position results for a 3D HUD, or for a driver monitoring system (DMS) or an augmented-reality head-up display (AR-HUD). The transmitted-light image is first erased by subtracting the images captured with the light-emitting element on and off, after which the double-image processing is performed.
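 Recovering the distance from the detected pupil spacing can be sketched with a pinhole model. The interocular distance value, the focal length, and the per-distance kernel table below are all assumptions made for illustration, not values given in the disclosure:

```python
E_MM = 63.0        # assumed standard interocular distance E
FOCAL_PX = 1400.0  # hypothetical camera focal length in pixels

def subject_distance_mm(pupil_separation_px):
    # Pinhole model: the pixel separation between the pupils is inversely
    # proportional to distance, matching the 1/distance relation above.
    return FOCAL_PX * E_MM / pupil_separation_px

def pick_kernel(distance_mm, kernels_by_distance):
    # kernels_by_distance: {distance_mm: kernel}, one prepared per distance;
    # select the kernel whose distance is closest to the estimate.
    nearest = min(kernels_by_distance, key=lambda d: abs(d - distance_mm))
    return kernels_by_distance[nearest]
```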
 FIG. 9 is a diagram showing how the shift amount of the double image differs with the reflection position on the windshield 25. The smaller the angle S to the windshield 25, the larger the shift of the double image, so the upper and lower regions of the image need to be processed differently.
 FIG. 10 is a diagram showing the change in the width of the double image due to the curvature of the windshield 25. Deformation of the windshield 25, also called waviness, in which the curvature differs along the x, y, and z axes, also changes the width of the double image.
 FIG. 11 is a diagram showing an imaging panel on which a measurement pattern is drawn. A panel on which a plurality of parallel lines are drawn as an imaging pattern, as shown in FIG. 11, is prepared and imaged with the imaging device 1 at the subject distance, giving the captured image containing double images shown in FIG. 12. From this captured image, the distance and brightness ratio between the double images are calculated, and the shift is processed onto a single axis, yielding a processed image in which the double images line up along one axis only. Fundamentally, once the distance and brightness ratio between the double images are known in this way, a kernel for that location can be created.
 If kernels are obtained at five or six locations by such processing, the kernel distribution over the entire processed image can be obtained by interpolation. Applying these kernels at the corresponding locations reduces the amount of computation and yields a restored image from which the double images have been removed.
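 Interpolating between the handful of measured kernels might look like the following sketch. Parameterizing each kernel by the measured double-image separation and brightness ratio, and interpolating those two parameters linearly, are choices made here for illustration; the two-impulse PSF model matches the geometric-series behavior noted earlier for unequal intensities.

```python
import numpy as np

def kernel_from_params(sep_px, ratio, length=64):
    """1-D inverse kernel for a two-impulse PSF g(y) = d(y) + ratio*d(y - sep).

    Inverting 1 + ratio*z^(-sep) gives the series
    1 - ratio*z^(-sep) + ratio^2*z^(-2*sep) - ..., which converges when the
    ghost is dimmer than the direct image (ratio < 1), so the kernel stays short.
    """
    k = np.zeros(length)
    n = 0
    while int(round(n * sep_px)) < length:
        k[int(round(n * sep_px))] = (-ratio) ** n
        n += 1
    return k

def params_at(y, samples):
    # samples: sorted (y_position, sep_px, ratio) triples measured at the
    # five or six pattern locations; linearly interpolate in between.
    ys, seps, ratios = (np.array(t, dtype=float) for t in zip(*samples))
    return np.interp(y, ys, seps), np.interp(y, ys, ratios)
```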
 According to the imaging device and the image processing method of the present disclosure, the driver's eyes can be detected with high accuracy.
 The imaging device according to the present disclosure can be implemented in the following configuration (1).
(1) An imaging device comprising:
  a windshield;
  a camera configured to image at least the eyes of a driver of a movable body through the windshield; and
  a controller configured to control the camera,
  wherein the controller is configured to:
  transform the image captured by the camera, based on a first point spread function representing the influence of the windshield on the image captured by the camera, into a first processed image in which the double images are aligned in a predetermined direction;
  correct the first processed image, based on a second point spread function representing characteristics of the double image in the first processed image, into a second processed image from which the double image has been removed; and
  perform an inverse of the transformation on the second processed image to generate a third processed image from which the influence of the first point spread function has been removed.
 The image processing method according to the present disclosure can be implemented in the following configurations (2) to (6).
(2) An image processing method comprising:
  providing a windshield, a camera configured to image at least a driver's eyes through the windshield, and a controller configured to control the camera; and,
  by the controller:
  transforming the image captured by the camera, based on a first point spread function representing the influence of the windshield on the image captured by the camera, into a first processed image in which the double images are aligned in a predetermined direction;
  correcting the first processed image, based on a second point spread function representing characteristics of the double image in the first processed image, into a second processed image from which the double image has been removed; and
  performing an inverse of the transformation on the second processed image to generate a third processed image from which the influence of the first point spread function has been removed.
(3) The image processing method according to configuration (2), wherein the first point spread function is determined based on an image of a measurement plate on which a plurality of point images are drawn, captured by the camera.
(4) The image processing method according to configuration (3), wherein the second point spread function is determined based on the first point spread function.
(5) The image processing method according to any one of configurations (2) to (4), wherein the captured image is a difference image between a first image captured without illuminating the driver and a second image captured while illuminating the driver.
(6) The image processing method according to any one of configurations (2) to (5), wherein the first processed image is transformed so that the distance in the predetermined direction between the double images is the same.
 Although embodiments of the present disclosure have been described in detail above, the present invention is not limited to the above-described embodiments, and various modifications and improvements are possible without departing from the gist of the present invention. It goes without saying that all or part of each of the above embodiments can be combined as appropriate to the extent consistent.
REFERENCE SIGNS LIST
1 imaging device
2 stereoscopic image display device
3 acquisition unit
7 controller
10 movable body
12 three-dimensional projection device
25 windshield
13 driver
31 eye
31L left eye
31R right eye

Claims (6)

  1. An imaging device comprising:
     a windshield;
     a camera configured to image at least the eyes of a driver of a movable body through the windshield; and
     a controller configured to control the camera,
     wherein the controller is configured to:
     transform the image captured by the camera, based on a first point spread function representing the influence of the windshield on the image captured by the camera, into a first processed image in which the double images are aligned in a predetermined direction;
     correct the first processed image, based on a second point spread function representing characteristics of the double image in the first processed image, into a second processed image from which the double image has been removed; and
     perform an inverse of the transformation on the second processed image to generate a third processed image from which the influence of the first point spread function has been removed.
  2. An image processing method comprising:
     providing a windshield, a camera configured to image at least a driver's eyes through the windshield, and a controller configured to control the camera; and,
     by the controller:
     transforming the image captured by the camera, based on a first point spread function representing the influence of the windshield on the image captured by the camera, into a first processed image in which the double images are aligned in a predetermined direction;
     correcting the first processed image, based on a second point spread function representing characteristics of the double image in the first processed image, into a second processed image from which the double image has been removed; and
     performing an inverse of the transformation on the second processed image to generate a third processed image from which the influence of the first point spread function has been removed.
  3. The image processing method according to claim 2, wherein the first point spread function is determined based on an image of a measurement plate on which a plurality of point images are drawn, captured by the camera.
  4. The image processing method according to claim 3, wherein the second point spread function is determined based on the first point spread function.
  5. The image processing method according to any one of claims 2 to 4, wherein the captured image is a difference image between a first image captured without illuminating the driver and a second image captured while illuminating the driver.
  6. The image processing method according to any one of claims 2 to 5, wherein the first processed image is transformed so that the distance in the predetermined direction between the double images is the same.
PCT/JP2022/047211 2021-12-21 2022-12-21 Imaging device and image processing method WO2023120603A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-207573 2021-12-21
JP2021207573 2021-12-21

Publications (1)

Publication Number Publication Date
WO2023120603A1 true WO2023120603A1 (en) 2023-06-29

Family

ID=86902577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/047211 WO2023120603A1 (en) 2021-12-21 2022-12-21 Imaging device and image processing method

Country Status (1)

Country Link
WO (1) WO2023120603A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004026133A (en) * 2001-12-13 2004-01-29 Valeo Vision Image correction method for eye level image projector and device for carrying out the method
JP2010268520A (en) * 2010-08-16 2010-11-25 Sony Corp Vehicle mounted camera apparatus and vehicle

Similar Documents

Publication Publication Date Title
CN110073658B (en) Image projection apparatus, image display apparatus, and moving object
WO2019160160A1 (en) Head-up display, head-up display system, and moving body
WO2019009243A1 (en) Three-dimensional display device, three-dimensional display system, mobile body, and three-dimensional display method
WO2020095801A1 (en) Three-dimensional display device, head-up display system, mobile unit, and program
JP7325520B2 (en) 3D display device, 3D display system, head-up display, and moving object
WO2023120603A1 (en) Imaging device and image processing method
WO2019225400A1 (en) Image display device, image display system, head-up display, and mobile object
JP6668564B1 (en) Head-up display, head-up display system, and moving object
JP7274392B2 (en) Cameras, head-up display systems, and moving objects
US11874464B2 (en) Head-up display, head-up display system, moving object, and method of designing head-up display
JP7336782B2 (en) 3D display device, 3D display system, head-up display, and moving object
WO2020196052A1 (en) Image display module, image display system, moving body, image display method, and image display program
WO2020022288A1 (en) Display device and mobile body
WO2022255459A1 (en) Method for configuring three-dimensional image display system
WO2021090956A1 (en) Head-up display, head-up display system, and moving body
WO2020256154A1 (en) Three-dimensional display device, three-dimensional display system, and moving object
WO2022186189A1 (en) Imaging device and three-dimensional display device
WO2024070204A1 (en) Virtual image display device, movable body, virtual image display device driving method, and program
JP7475191B2 (en) Method for measuring interocular distance and calibration method
JP7173836B2 (en) Controller, position determination device, position determination system, display system, program, and recording medium
WO2022250164A1 (en) Method for configuring three-dimensional image display system
WO2021220833A1 (en) Image display system
WO2022149599A1 (en) Three-dimensional display device
CN118004035A (en) Auxiliary driving method and device based on vehicle-mounted projector and electronic equipment
JP2021056255A (en) Parallax barrier, three-dimensional display device, three-dimensional display system, head-up display, and movable body

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22911305

Country of ref document: EP

Kind code of ref document: A1