WO2022091864A1 - Detection device and image display system - Google Patents
Detection device and image display system
- Publication number
- WO2022091864A1 (PCT/JP2021/038574)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- eye
- detection
- template
- detection device
- Prior art date
Classifications
- H04N13/366 — Image reproducers using viewer tracking
- H04N13/383 — Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
- H04N13/302 — Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/31 — Autostereoscopic image reproducers using parallax barriers
- H04N13/312 — Parallax barriers placed behind the display panel, e.g. between backlight and spatial light modulator [SLM]
- G03B17/54 — Details of cameras or camera bodies adapted for combination with a projector
- G03B35/24 — Stereoscopic photography by simultaneous viewing using apertured or refractive resolving means on screens or between screen and eye
- G06T7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06V10/147 — Image acquisition: details of sensors, e.g. sensor lenses
- G06V40/166 — Human face detection; localisation; normalisation using acquisition arrangements
- G06T2207/10048 — Infrared image
- G06T2207/30041 — Eye; Retina; Ophthalmic
- G06T2207/30201 — Face
- G06T2207/30268 — Vehicle interior
Description
- This disclosure relates to a detection device and an image display system.
- In a conventional technique, position data indicating the position of the pupil is acquired using a captured image obtained by photographing the user's eyes with a camera.
- Based on the pupil position indicated by the position data, an image is displayed on a display so that images corresponding to the left eye and the right eye of the user can be visually recognized (see, for example, Patent Document 1).
- The detection device of the present disclosure includes an input unit and a controller.
- The input unit receives input of image information.
- The controller is configured to be able to execute a detection process for detecting the position of the user's eye.
- The controller is configured to be able to execute a first process and a second process as the detection process.
- In the first process, the controller detects a first position of the eye by template matching using a first template image, based on the input image information.
- In the second process, the controller detects the position of the face by template matching using a second template image different from the first template image, based on the input image information, and detects a second position of the eye based on the detected face position.
- The image display system of the present disclosure includes a display unit, a barrier unit, a camera, an input unit, a detection unit, and a display control unit.
- The display unit is configured to display a parallax image projected to both eyes of the user via an optical system.
- The barrier unit is configured to give parallax to both eyes by defining the traveling direction of the image light of the parallax image.
- The camera is configured to capture the user's face.
- The input unit receives input of the imaging information output from the camera.
- The detection unit is configured to be able to execute a detection process for detecting the positions of both eyes of the user.
- The display control unit synthesizes the parallax image according to the positions of both eyes detected by the detection unit and controls the display unit.
- The detection unit is configured to be able to execute a first process and a second process as the detection process.
- In the first process, the detection unit detects a first position of both eyes by template matching using a first template image, based on the input imaging information.
- In the second process, the detection unit detects the position of the face by template matching using a second template image different from the first template image, based on the input imaging information, and detects a second position of both eyes based on the detected face position.
- The detection device 50 may be mounted on the mobile body 10.
- The detection device 50 includes an input unit 30 and a detection unit 15.
- The mobile body 10 may include a three-dimensional projection system 100 as an image display system.
- The three-dimensional projection system 100 includes a camera 11, a detection device 50, and a three-dimensional projection device 12.
- the "mobile body” in the present disclosure may include, for example, a vehicle, a ship, an aircraft, and the like.
- Vehicles may include, for example, automobiles, industrial vehicles, railroad vehicles, living vehicles, fixed-wing aircraft traveling on runways, and the like.
- Automobiles may include, for example, passenger cars, trucks, buses, motorcycles, trolley buses and the like.
- Industrial vehicles may include, for example, industrial vehicles for agriculture and construction.
- Industrial vehicles may include, for example, forklifts, golf carts and the like.
- Industrial vehicles for agriculture may include, for example, tractors, tillers, porting machines, binders, combines, lawnmowers and the like.
- Industrial vehicles for construction may include, for example, bulldozers, scrapers, excavators, mobile cranes, dump trucks, road rollers and the like.
- the vehicle may include a vehicle that travels manually.
- the classification of vehicles is not limited to the above examples.
- an automobile may include an industrial vehicle capable of traveling on a road.
- the same vehicle may be included in multiple categories.
- Vessels may include, for example, marine jets, boats, tankers and the like.
- Aircraft may include, for example, fixed-wing aircraft, rotary-wing aircraft, and the like.
- the mobile body 10 is not limited to a passenger car, and may be any of the above examples.
- The camera 11 may be attached to the mobile body 10.
- The camera 11 is configured to capture an image including the face 5 of the driver of the mobile body 10 as the user 13.
- The camera 11 is configured to be capable of capturing the assumed area in which the face 5 of the driver of the mobile body 10 as the user 13 is assumed to be located.
- The mounting position of the camera 11 is arbitrary inside or outside the mobile body 10. For example, the camera 11 may be located within the dashboard of the mobile body 10.
- The camera 11 may be a visible light camera or an infrared camera.
- The camera 11 may have the functions of both a visible light camera and an infrared camera.
- The camera 11 may include, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- The camera 11 is configured to be able to output image information of the captured image to the detection device 50.
- The image information of the image captured by the camera 11 is output to the detection device 50.
- The input unit 30 of the detection device 50 is configured to be able to receive the input of the image information output from the camera 11.
- The detection unit 15 is configured to be able to execute a detection process for detecting the position of the eye 5a of the user 13.
- The camera 11 may output all frames to the detection device 50, and the detection device 50 may detect the position of the eye 5a in all frames.
- The detection process executable by the detection unit 15 may be, for example, a process of detecting the position of the eye 5a of the user 13 based on the image information input to the input unit 30.
- The detection unit 15 is configured to be able to execute a first process and a second process as the detection process.
- The first process detects a first position of the eye 5a by template matching based on the image information.
- The second process detects the position of the face 5 by template matching based on the image information, and detects a second position of the eye 5a based on the detected position of the face 5.
- The position of the eye 5a of the user 13 may be the pupil position.
- Template matching is, for example, image processing that searches a target image for the position that best fits a template image.
- In the present embodiment, the target image is the captured image 51 output from the camera 11.
- In the first process, the detection unit 15 detects the first position.
- The detection unit 15 may detect the position of the eye 5a from the captured image 51 as the first position by template matching.
- The first template image 52 used in the first process includes the eye 5a of the user 13, or a facial part whose positional relationship relative to the eye 5a of the user 13 is specified.
- The eyes 5a of the user 13 included in the first template image 52 may be both eyes, only the right eye, or only the left eye.
- A facial part whose positional relationship relative to the eyes 5a of the user 13 is specified may be, for example, the eyebrows or the nose.
- In the following description, the first template image 52 includes both eyes as the eyes 5a of the user 13.
- In the second process, the detection unit 15 detects the second position.
- The detection unit 15 may detect the position of the face 5 from the captured image 51 by template matching, and detect the position of the eye 5a as the second position based on the detected position of the face 5.
- The second template image 53 used in the second process differs from the first template image 52 used in the first process.
- The second template image 53 used in the second process covers a wider comparison range than the first template image 52 used in the first process.
- The second template image 53 used in the second process includes, for example, the face 5 of the user 13, as shown in the illustrated example. Since the position of the eye 5a on the face 5 does not change for the same user 13, the position of the eye 5a on the face 5 can be specified in advance.
- Based on the detected position of the face 5, the detection unit 15 can therefore detect the position (second position) of the eye 5a on the face 5 specified in advance.
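The search described here maps directly onto standard template-matching primitives. The following is a minimal sketch, not the patent's implementation, assuming Python with OpenCV; normalized cross-correlation is one plausible goodness-of-fit measure among several, and all names are our own.

```python
import cv2

def find_best_match(captured, template):
    """Search the captured image for the position that best fits the template.

    Returns the top-left corner of the best-matching region and its goodness
    of fit. TM_CCOEFF_NORMED is one plausible fit measure; the disclosure
    does not prescribe a specific one.
    """
    result = cv2.matchTemplate(captured, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

# First process: search with the eye template; second process: with the face template.
# eye_loc, eye_fit = find_best_match(captured_51, first_template_52)
# face_loc, face_fit = find_best_match(captured_51, second_template_53)
```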
- In pupil position detection, the captured image must include the user's pupil; if it does not, the pupil position cannot be detected. For example, the pupil position cannot be detected from an image taken while the user's eyes are closed due to blinking or the like. Since the template matching according to the present disclosure searches for the position that best fits the first and second template images 52 and 53, it can find the best-fitting position in the captured image 51 using features other than the pupil, even when the captured image 51 does not include a pupil.
- For a captured image 51 of the same size, the amount of calculation required for template matching is smaller than that required for pupil position detection.
- The detection unit 15 can therefore output detection results at a higher calculation speed than pupil position detection.
- The detection device 50 can output coordinate information regarding either the first position or the second position to the three-dimensional projection device 12.
- The shapes of the first and second template images 52 and 53 may correspond to the shape of the captured image 51. If the captured image 51 is rectangular, the first and second template images 52 and 53 may be rectangular. The shapes of the first and second template images 52 and 53 need not be similar to the shape of the captured image 51, but may be.
- In FIGS. 2 and 3, an example in which the captured image 51 and the first and second template images 52 and 53 are rectangular is described.
- The detection result of the detection device 50 may be coordinate information indicating the pupil position of the eye 5a of the user 13.
- By template matching, the coordinates of the position in the captured image 51 that best fits the first or second template image 52, 53 are determined.
- The coordinates of the matching position determined by template matching may be, for example, coordinates corresponding to a representative position of the first or second template image 52, 53.
- The representative position of the first and second template images 52 and 53 may be, for example, any one vertex position or the center position of the template image.
- For the first process, the relative coordinate relationship between the pupil position included in the first template image 52 of the eye 5a and, for example, the representative position of the first template image 52 is specified separately in advance (hereinafter, the "first relationship").
- From the coordinates determined by template matching and the first relationship, the detection unit 15 can determine the coordinate information of the pupil position in the captured image 51 (hereinafter, the "first position").
- The first relationship can be specified before the first process is executed.
- Similarly, for the second process, the relative coordinate relationship between the pupil position included in the second template image 53 of the face 5 and, for example, the representative position of the second template image 53 is specified separately in advance (hereinafter, the "second relationship").
- From the coordinates determined by template matching and the second relationship, the detection unit 15 can determine the coordinate information of the pupil position in the captured image 51 (hereinafter, the "second position"). For example, even when the user 13 has the eyes 5a closed and the captured image 51 does not include the pupils, the detection unit 15 can determine the coordinate information of the position at which the pupils would presumably have been detected if the eyes were open. Since the detection device 50 of the present disclosure can determine the coordinate information of the pupil position even when the user 13 closes the eyes 5a due to blinking or the like, it can output coordinate information continuously without interruption.
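Concretely, the first and second relationships can be held as pixel offsets from each template's representative position to the pupil, recorded when the templates are generated. A hedged sketch with names and values of our own choosing:

```python
# Offset from the template's representative position (its top-left corner
# here) to the pupil, recorded at template generation time. One offset per
# template: the "first relationship" (eye template) and the "second
# relationship" (face template).
first_relationship = (18, 12)   # illustrative values, not from the patent

def pupil_from_match(match_loc, relationship):
    """Convert a template-match location into pupil coordinates in the
    captured image by applying the pre-specified relative offset."""
    return (match_loc[0] + relationship[0], match_loc[1] + relationship[1])

# first_position = pupil_from_match(eye_loc, first_relationship)
# second_position = pupil_from_match(face_loc, second_relationship)
```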
- The detection device 50 may include, for example, a sensor.
- The sensor may be an ultrasonic sensor, an optical sensor, or the like.
- The detection device 50 may be configured to detect the position of the head of the user 13 with the sensor.
- The detection device 50 may be configured to detect the position of the eye 5a of the user 13 as coordinates in three-dimensional space using two or more sensors.
- The detection device 50 is configured to output coordinate information regarding the detected pupil position of the eye 5a to the three-dimensional projection device 12. Based on this coordinate information, the three-dimensional projection device 12 may be configured to control the image to be projected. The detection device 50 may be configured to output information indicating the pupil position of the eye 5a to the three-dimensional projection device 12 via wired or wireless communication. Wired communication may include, for example, CAN (Controller Area Network).
- The detection unit 15 can detect the position of the eye 5a of the user 13 as the first position by the first process using the first template image 52 of the eye 5a, and as the second position by the second process using the second template image 53 of the face 5.
- The coordinate information output from the detection device 50 to the three-dimensional projection device 12 may be the coordinate information of the first position.
- The coordinate information output from the detection device 50 to the three-dimensional projection device 12 may be the coordinate information of the second position.
- The detection device 50 may output the coordinate information of the first position to the three-dimensional projection device 12, and output the coordinate information of the second position, for example, when the first position cannot be detected.
- The detection device 50 may output the coordinate information of the second position to the three-dimensional projection device 12, and output the coordinate information of the first position, for example, when the second position cannot be detected.
- The detection unit 15 may be configured to be able to execute, as the detection process, a third process of detecting a third position of the eye 5a based on the result of comparing the first position and the second position.
- The first position and the second position to be compared may be the detection results of the first process and the second process for the same captured image 51.
- When the first position and the second position are the same position, it can be inferred that both were detected with high accuracy.
- The first position and the second position are regarded as the same position when the difference between the coordinate information of the first position and that of the second position is within a predetermined range (hereinafter, the "first range").
- When the first position and the second position are the same position, the detection unit 15 may, for example, detect the first position as the position of the eye 5a (hereinafter, the "third position").
- When the first position and the second position are different positions, the detection unit 15 may, for example, calculate an intermediate position between the first position and the second position and detect the calculated intermediate position as the third position.
- The first position and the second position are regarded as different positions when the difference between their coordinate information falls outside the first range.
- When the positions differ, the detection unit 15 may instead treat the result of the third process as a detection failure and newly perform the first process and the second process without outputting coordinate information.
- Alternatively, the detection unit 15 may detect, as the coordinate information of the third position, the same coordinate information as previously output. The third process may be omitted.
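The third process reduces to a distance check with fallbacks. A minimal sketch, assuming pixel coordinates and an illustrative first-range threshold of our own choosing:

```python
import math

FIRST_RANGE = 5.0  # illustrative threshold in pixels, not from the patent

def third_process(first_pos, second_pos):
    """Detect the third position from the first and second positions.

    Same position (difference within the first range): adopt the first
    position. Different positions: one option in the description is the
    midpoint; other options are to declare a detection failure and re-run
    the first and second processes, or to reuse the previous output.
    """
    diff = math.dist(first_pos, second_pos)
    if diff <= FIRST_RANGE:
        return first_pos
    return ((first_pos[0] + second_pos[0]) / 2,
            (first_pos[1] + second_pos[1]) / 2)
```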
- The detection unit 15 may be an external device.
- The input unit 30 may be configured to output the captured image 51 received from the camera 11 to the external detection unit 15.
- The external detection unit 15 may be configured to detect the pupil position of the eye 5a of the user 13 from the captured image 51 by template matching.
- The external detection unit 15 may be configured to output coordinate information regarding the detected pupil position of the eye 5a to the three-dimensional projection device 12. Based on this coordinate information, the three-dimensional projection device 12 may be configured to control the image to be projected.
- The input unit 30 may be configured to output the captured image 51 to the external detection unit 15 via wired or wireless communication.
- The external detection unit 15 may be configured to output coordinate information to the three-dimensional projection device 12 via wired or wireless communication. Wired communication may include, for example, CAN.
- The position of the three-dimensional projection device 12 is arbitrary inside or outside the mobile body 10.
- For example, the three-dimensional projection device 12 may be located within the dashboard of the mobile body 10.
- The three-dimensional projection device 12 is configured to emit image light toward the windshield 25.
- The windshield 25 is configured to reflect the image light emitted from the three-dimensional projection device 12.
- The image light reflected by the windshield 25 reaches the eye box 16.
- The eye box 16 is a region in real space in which the eyes 5a of the user 13 are assumed to exist, in consideration of, for example, the physique, posture, and changes in posture of the user 13.
- The shape of the eye box 16 is arbitrary.
- The eye box 16 may include a planar or three-dimensional region.
- The solid arrow shown in FIG. 1 indicates a path through which at least part of the image light emitted from the three-dimensional projection device 12 reaches the eye box 16.
- The path along which the image light travels is also called an optical path.
- The three-dimensional projection device 12 can function as a head-up display by allowing the user 13 to visually recognize the virtual image 14.
- The direction in which the eyes 5a of the user 13 are aligned corresponds to the x-axis direction.
- The vertical direction corresponds to the y-axis direction.
- The imaging range of the camera 11 includes the eye box 16.
- The three-dimensional projection device 12 includes a three-dimensional display device 17 and an optical element 18.
- The three-dimensional projection device 12 may also be referred to as an image display module.
- The three-dimensional display device 17 may include a backlight 19, a display unit 20 having a display surface 20a, a barrier unit 21, and a display control unit 24.
- The three-dimensional display device 17 may further include a communication unit 22.
- The three-dimensional display device 17 may further include a storage unit 23.
- The optical element 18 may include a first mirror 18a and a second mirror 18b. At least one of the first mirror 18a and the second mirror 18b may have optical power. In the present embodiment, the first mirror 18a is a concave mirror having optical power, and the second mirror 18b is a plane mirror.
- The optical element 18 may function as a magnifying optical system that enlarges the image displayed on the three-dimensional display device 17.
- The dashed arrow shown in FIG. 4 indicates the path along which at least part of the image light emitted from the three-dimensional display device 17 is reflected by the first mirror 18a and the second mirror 18b and exits the three-dimensional projection device 12.
- The image light emitted to the outside of the three-dimensional projection device 12 reaches the windshield 25, is reflected by the windshield 25, and reaches the eye 5a of the user 13. As a result, the user 13 can visually recognize the image displayed on the three-dimensional display device 17.
- The optical element 18 and the windshield 25 are configured so that the image light emitted from the three-dimensional display device 17 can reach the eye 5a of the user 13.
- The optical element 18 and the windshield 25 may form an optical system.
- The optical system is configured so that the image light emitted from the three-dimensional display device 17 reaches the eye 5a of the user 13 along the optical path indicated by the alternate long and short dash line.
- The optical system may be configured to control the traveling direction of the image light so that the image visually recognized by the user 13 is enlarged or reduced.
- The optical system may be configured to control the traveling direction of the image light so as to deform the shape of the image visually recognized by the user 13 based on a predetermined matrix.
- The optical element 18 is not limited to the illustrated configuration.
- The mirror may be a concave mirror, a convex mirror, or a plane mirror.
- The shape of the mirror may be at least partly spherical or at least partly aspherical.
- The number of elements constituting the optical element 18 is not limited to two, and may be one, or three or more.
- The optical element 18 is not limited to mirrors and may include a lens.
- The lens may be a concave lens or a convex lens.
- The shape of the lens may be at least partly spherical or at least partly aspherical.
- The backlight 19 is located farther from the user 13 than the display unit 20 and the barrier unit 21 along the optical path of the image light.
- The backlight 19 emits light toward the barrier unit 21 and the display unit 20. At least part of the light emitted by the backlight 19 travels along the optical path indicated by the alternate long and short dash line and reaches the eye 5a of the user 13.
- The backlight 19 may include a light emitting element such as an LED (Light Emitting Diode), or an organic EL or inorganic EL element.
- The backlight 19 may be configured so that its emission intensity and distribution can be controlled.
- The display unit 20 includes a display panel.
- The display unit 20 may be, for example, a liquid crystal device such as an LCD (Liquid Crystal Display).
- The display unit 20 includes a transmissive liquid crystal display panel.
- The display unit 20 is not limited to this example and may include various display panels.
- The display unit 20 has a plurality of pixels, controls the transmittance of the light incident from the backlight 19 at each pixel, and is configured to emit it as image light that reaches the eye 5a of the user 13.
- The user 13 visually recognizes an image composed of the image light emitted from each pixel of the display unit 20.
- The barrier unit 21 is configured to define the traveling direction of incident light. When the barrier unit 21 is located closer to the backlight 19 than the display unit 20, as shown in the example of FIG. 3, the light emitted from the backlight 19 is incident on the barrier unit 21 and then on the display unit 20. In this case, the barrier unit 21 is configured to block or attenuate part of the light emitted from the backlight 19 and transmit the other part toward the display unit 20.
- The display unit 20 emits the incident light traveling in the direction defined by the barrier unit 21 as image light traveling in that same direction. When the display unit 20 is located closer to the backlight 19 than the barrier unit 21, the light emitted from the backlight 19 is incident on the display unit 20 and then on the barrier unit 21. In this case, the barrier unit 21 is configured to block or attenuate part of the image light emitted from the display unit 20 and transmit the other part toward the eye 5a of the user 13.
- Either way, the barrier unit 21 is configured to be able to control the traveling direction of the image light.
- The barrier unit 21 causes part of the image light emitted from the display unit 20 to reach one of the left eye 5aL and the right eye 5aR (see FIG. 4) of the user 13, and causes the other part of the image light to reach the other of the left eye 5aL and the right eye 5aR. That is, the barrier unit 21 is configured to divide the traveling direction of at least part of the image light between the left eye 5aL and the right eye 5aR of the user 13.
- The left eye 5aL and the right eye 5aR are also referred to as the first eye and the second eye, respectively.
- In the present embodiment, the barrier unit 21 is located between the backlight 19 and the display unit 20. That is, the light emitted from the backlight 19 first enters the barrier unit 21 and then enters the display unit 20.
- On the display surface 20a, the display unit 20 includes a left eye viewing area 201L visually recognized by the left eye 5aL of the user 13 and a right eye viewing area 201R visually recognized by the right eye 5aR of the user 13.
- The display unit 20 is configured to display a parallax image including a left eye image visually recognized by the left eye 5aL of the user 13 and a right eye image visually recognized by the right eye 5aR of the user 13.
- The parallax image is an image projected to each of the left eye 5aL and the right eye 5aR of the user 13, and gives parallax to both eyes of the user 13.
- The display unit 20 is configured to display the left eye image in the left eye viewing area 201L and the right eye image in the right eye viewing area 201R. That is, the display unit 20 is configured to display the parallax image in the left eye viewing area 201L and the right eye viewing area 201R. The left eye viewing area 201L and the right eye viewing area 201R are assumed to be aligned in the u-axis direction, which represents the parallax direction.
- The left eye viewing area 201L and the right eye viewing area 201R may extend along the v-axis direction orthogonal to the parallax direction, or may extend in a direction inclined at a predetermined angle with respect to the v-axis direction.
- The left eye viewing area 201L and the right eye viewing area 201R may be arranged alternately along a predetermined direction including a component in the parallax direction.
- The pitch at which the left eye viewing area 201L and the right eye viewing area 201R are alternately arranged is also referred to as the parallax image pitch.
- The left eye viewing area 201L and the right eye viewing area 201R may be located at an interval from each other or may be adjacent to each other.
- The display unit 20 may further have a display area for displaying a planar image on the display surface 20a. The planar image is an image that gives no parallax to the eyes 5a of the user 13 and is not viewed stereoscopically.
- The barrier unit 21 has opening regions 21b and light-shielding surfaces 21a.
- The barrier unit 21 is configured to control the transmittance of the image light emitted from the display unit 20.
- The opening region 21b is configured to transmit the light incident on the barrier unit 21 from the display unit 20.
- The opening region 21b may transmit light with a transmittance equal to or higher than a first predetermined value.
- The first predetermined value may be, for example, 100% or a value close to 100%.
- The light-shielding surface 21a is configured to block the light incident on the barrier unit 21 from the display unit 20.
- The light-shielding surface 21a may transmit light with a transmittance equal to or lower than a second predetermined value.
- The second predetermined value may be, for example, 0% or a value close to 0%.
- The first predetermined value is larger than the second predetermined value.
- The opening regions 21b and the light-shielding surfaces 21a are alternately arranged in the u-axis direction, which represents the parallax direction.
- The boundary between the opening region 21b and the light-shielding surface 21a may extend along the v-axis direction orthogonal to the parallax direction, as illustrated in FIG. 4, or along a direction inclined at a predetermined angle with respect to the v-axis direction.
- The opening regions 21b and the light-shielding surfaces 21a may be arranged alternately along a predetermined direction including a component in the parallax direction.
- In the present embodiment, the barrier unit 21 is located farther from the user 13 than the display unit 20 on the optical path of the image light.
- The barrier unit 21 is configured to control the transmittance of the light incident on the display unit 20 from the backlight 19.
- The opening region 21b is configured to transmit the light incident on the display unit 20 from the backlight 19.
- The light-shielding surface 21a is configured to block the light incident on the display unit 20 from the backlight 19. In this way, the traveling direction of the light incident on the display unit 20 is limited to a predetermined direction.
- Part of the image light can be controlled by the barrier unit 21 to reach the left eye 5aL of the user 13.
- The other part of the image light can be controlled by the barrier unit 21 to reach the right eye 5aR of the user 13.
- The barrier unit 21 may be composed of a liquid crystal shutter.
- The liquid crystal shutter can control the transmittance of light according to the applied voltage.
- The liquid crystal shutter is composed of a plurality of pixels, and the transmittance of light may be controlled at each pixel.
- The liquid crystal shutter can form a region with high light transmittance or a region with low light transmittance in an arbitrary shape.
- The opening region 21b may have a transmittance equal to or higher than the first predetermined value.
- The light-shielding surface 21a may have a transmittance equal to or lower than the second predetermined value.
- The first predetermined value may be set to a value higher than the second predetermined value.
- The ratio of the second predetermined value to the first predetermined value may be set to 1/100 in one example.
- The ratio of the second predetermined value to the first predetermined value may be set to 1/1000 in another example.
- A barrier unit 21 in which the arrangement of the opening regions 21b and the light-shielding surfaces 21a can be changed is also referred to as an active barrier.
- The display control unit 24 is configured to control the display unit 20.
- The display control unit 24 may be configured to control the barrier unit 21.
- The display control unit 24 may be configured to control the backlight 19.
- The display control unit 24 is configured to acquire coordinate information regarding the pupil position of the eye 5a of the user 13 from the detection device 50 and to control the display unit 20 based on the coordinate information.
- The display control unit 24 may be configured to control at least one of the barrier unit 21 and the backlight 19 based on the coordinate information.
- The display control unit 24 may receive the image output from the camera 11 and detect the eye 5a of the user 13 from the received image by the template matching described above.
- The display control unit 24 has the same functions as the detection unit 15 (controller) and may be configured to also serve as the detection unit 15.
- The display control unit 24 may be configured to control the display unit 20 based on the detected pupil position of the eye 5a.
- The display control unit 24 may be configured to control at least one of the barrier unit 21 and the backlight 19 based on the detected pupil position of the eye 5a.
- The display control unit 24 and the detection unit 15 are configured as, for example, processors.
- The display control unit 24 and the detection unit 15 may include one or more processors.
- The processors may include a general-purpose processor that loads a specific program to perform a specific function, and a dedicated processor specialized for specific processing.
- The dedicated processor may include an ASIC (Application Specific Integrated Circuit).
- The processor may include a PLD (Programmable Logic Device).
- The PLD may include an FPGA (Field-Programmable Gate Array).
- The display control unit 24 and the detection unit 15 may be either a SoC (System-on-a-Chip) or a SiP (System In a Package) in which one or more processors cooperate.
- The communication unit 22 may include an interface capable of communicating with an external device.
- The external device may include, for example, the detection device 50.
- The external device may be configured to provide, for example, image information to be displayed on the display unit 20.
- The communication unit 22 may acquire various kinds of information from an external device such as the detection device 50 and output them to the display control unit 24.
- The "communicable interface" in the present disclosure may include, for example, physical connectors and wireless communication devices.
- The physical connectors may include an electrical connector for transmission by electric signals, an optical connector for transmission by optical signals, and an electromagnetic connector for transmission by electromagnetic waves.
- The electrical connector may include a connector conforming to IEC 60603, a connector conforming to the USB standard, or a connector corresponding to an RCA terminal.
- The electrical connector may include a connector corresponding to the S terminal specified in EIAJ CP-121aA or a connector corresponding to the D terminal specified in EIAJ RC-5237.
- The electrical connector may include a connector conforming to the HDMI (registered trademark) standard or a connector corresponding to a coaxial cable, including a BNC (British Naval Connector or Baby-series N Connector) connector.
- The optical connector may include various connectors conforming to IEC 61754.
- The wireless communication device may include wireless communication devices conforming to standards including Bluetooth (registered trademark) and IEEE 802.11a.
- The wireless communication device includes at least one antenna.
- The storage unit 23 may be configured to store various kinds of information, programs for operating each component of the three-dimensional display device 17, and the like.
- The storage unit 23 may be composed of, for example, a semiconductor memory or the like.
- The storage unit 23 may function as a work memory of the display control unit 24.
- The storage unit 23 may be included in the display control unit 24.
- The light emitted from the backlight 19 passes through the barrier unit 21 and the display unit 20 and reaches the eye 5a of the user 13.
- The path along which the light emitted from the backlight 19 reaches the eye 5a is represented by a broken line.
- The light that passes through the opening regions 21b of the barrier unit 21 and reaches the right eye 5aR passes through the right eye viewing area 201R of the display unit 20. That is, the right eye 5aR can visually recognize the right eye viewing area 201R by the light transmitted through the opening regions 21b.
- The light that passes through the opening regions 21b of the barrier unit 21 and reaches the left eye 5aL passes through the left eye viewing area 201L of the display unit 20. That is, the left eye 5aL can visually recognize the left eye viewing area 201L by the light transmitted through the opening regions 21b.
- The display unit 20 is configured to display the right eye image in the right eye viewing area 201R and the left eye image in the left eye viewing area 201L.
- The barrier unit 21 is thus configured so that the image light of the left eye image reaches the left eye 5aL and the image light of the right eye image reaches the right eye 5aR. That is, the opening regions 21b are configured so that the image light of the left eye image reaches the left eye 5aL of the user 13 and the image light of the right eye image reaches the right eye 5aR of the user 13.
- In this way, the three-dimensional display device 17 can project a parallax image to both eyes of the user 13. The user 13 can view the image stereoscopically by viewing the parallax image with the left eye 5aL and the right eye 5aR.
- At least part of the image light that passes through the opening regions 21b of the barrier unit 21 and is emitted from the display surface 20a of the display unit 20 reaches the windshield 25 via the optical element 18.
- The image light is reflected by the windshield 25 and reaches the eye 5a of the user 13.
- The second virtual image 14b corresponds to the image displayed on the display surface 20a.
- The opening regions 21b and the light-shielding surfaces 21a of the barrier unit 21 form a first virtual image 14a in front of the windshield 25, on the negative z-axis side of the second virtual image 14b.
- The user 13 can visually recognize the image as if the display unit 20 existed at the position of the second virtual image 14b and the barrier unit 21 existed at the position of the first virtual image 14a.
- The image light of the image displayed on the display surface 20a is emitted from the three-dimensional display device 17 in the direction defined by the barrier unit 21.
- The optical element 18 is configured to emit the image light toward the windshield 25.
- The optical element 18 may be configured to reflect or refract the image light.
- The windshield 25 is configured to reflect the image light and direct it toward the eye 5a of the user 13.
- When the image light is incident on the eye 5a of the user 13, the user 13 visually recognizes the parallax image as the virtual image 14.
- The user 13 can view the image stereoscopically by visually recognizing the virtual image 14.
- The image corresponding to the parallax image among the virtual images 14 is also referred to as a parallax virtual image.
- The parallax virtual image can be said to be a parallax image projected through the optical system.
- The image corresponding to the planar image among the virtual images 14 is also referred to as a planar virtual image. The planar virtual image can be said to be a planar image projected through the optical system.
- The detection device 50 is configured to generate the first and second template images 52 and 53 from a captured image 51 taken by the camera 11 before starting the search by template matching.
- The detection unit 15 may perform pupil detection on the entire captured image 51 and use a predetermined peripheral region including the detected pupil as the first template image 52.
- The predetermined peripheral region generated as the first template image 52 may be, for example, a region corresponding to the eye box 16 of the three-dimensional projection device 12.
- The detection unit 15 may perform face detection on the entire captured image 51 and use a predetermined peripheral region including the detected face as the second template image 53.
- A known pupil detection process can be used for the pupil detection in template image generation.
- The detection device 50 may use, as the known pupil detection process, for example, a pupil detection process that uses the difference in brightness between the corneal reflex and the pupil.
- A known face detection process can be used for the face detection in template image generation.
- The detection device 50 may use, for example, a face detection process that combines feature quantities and contour extraction of parts such as the eyes, nose, and mouth.
- The detection device 50 may execute, for example, the template image generation process shown in the flow chart of FIG.
- The detection device 50 may start the template image generation process, for example, when the three-dimensional projection system 100 is started (when the power is turned on).
- In step A1, the input unit 30 receives the input of the captured image 51 taken by the camera 11.
- The captured image 51 from the camera 11 includes, for example, the face of the user 13 seated in the seat of the mobile body 10.
- In step A2, the detection unit 15 cuts out a first region including the eye box 16 from the captured image 51. In step A3, the detection unit 15 performs face detection on the cut-out first region and determines whether the face 5 of the user 13 is detected. If the face 5 is detected, the detection device 50 proceeds to step A4; if not, it returns to step A1.
- In step A4, the detection unit 15 cuts out a second region including the detected face 5 from the first region and extracts it as the second template image 53 of the face 5.
- In step A5, the detection unit 15 performs pupil detection on the second region and determines whether a pupil is detected. If a pupil is detected, the detection device 50 proceeds to step A6; if not, it returns to step A1.
- In step A6, the detection unit 15 extracts a pupil peripheral region including the detected pupil as the first template image 52 of the eye 5a, and ends the template image generation process.
- The detection unit 15 may store the extracted first and second template images 52 and 53, for example, in a storage area of the detection unit 15 or in the storage unit 23.
- The detection unit 15 may extract, as the first template image 52, a region of the same size as the eye box 16 centered on the pupil.
- The detection unit 15 may store, together with the first and second template images 52 and 53, the relative coordinate relationship between the representative position and the pupil position of each template.
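The flow of steps A1 to A6, together with the storage of the relative coordinate relationships, might look as follows. This is a hedged sketch: detect_face and detect_pupil stand in for the "known" face and pupil detection processes, whose algorithms the disclosure does not fix, the half-size of the eye region is an assumed value, and all names are our own. Images are assumed to be NumPy arrays.

```python
def generate_templates(captured, eyebox_rect, detect_face, detect_pupil):
    """One pass of template image generation (steps A1-A6).

    Returns the eye template (first template image), the face template
    (second template image), and the pupil offsets from each template's
    top-left corner (the first and second relationships), or None if
    detection fails and the flow should return to step A1.
    """
    x, y, w, h = eyebox_rect                        # step A2: first region
    first_region = captured[y:y + h, x:x + w]
    face_rect = detect_face(first_region)           # step A3
    if face_rect is None:
        return None
    fx, fy, fw, fh = face_rect                      # step A4: second template
    second_template = first_region[fy:fy + fh, fx:fx + fw]
    pupil = detect_pupil(second_template)           # step A5
    if pupil is None:
        return None
    px, py = pupil                                  # step A6: first template
    r = 32                                          # assumed half-size of the eye region
    x1, y1 = max(px - r, 0), max(py - r, 0)
    first_template = second_template[y1:y1 + 2 * r, x1:x1 + 2 * r]
    first_relationship = (px - x1, py - y1)         # eye template -> pupil offset
    second_relationship = (px, py)                  # face template -> pupil offset
    return first_template, second_template, first_relationship, second_relationship
```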
- The first and second template images 52 and 53 may be temporarily stored in a storage area of the detection unit 15 or in the storage unit 23 while the three-dimensional projection system 100 is running.
- The first and second template images 52 and 53 may be stored in the storage unit 23 in association with the imaged user 13, for example.
- When the three-dimensional projection system 100 is started the next time or later, the detection unit 15 can read the first and second template images 52 and 53 from the storage unit 23, so that the template image generation process can be omitted.
- The detection unit 15 can update the first and second template images 52 and 53 stored in the storage unit 23 by executing the template image generation process again.
- The pupil detection process used in template image generation has higher detection accuracy than pupil position detection by template matching (the first process), but requires a longer processing time.
- Because of its short processing time, pupil position detection by template matching can be performed, for example, on every frame output from the camera 11. Because of its long processing time, the pupil detection process used in template image generation can be performed, for example, once every several frames output from the camera 11.
- The detection unit 15 may repeatedly execute the template image generation process every several frames to update the first and second template images 52 and 53 stored in the storage unit 23.
- The position of the eye 5a detected by the pupil detection process used in template image generation can be used to verify the validity of the position of the eye 5a detected by template matching.
- The detection unit 15 is configured to be able to execute the pupil detection process as a fourth process.
- The pupil detection process executed as the fourth process may be the same as that used in template image generation, or may be different.
- The position of the eye 5a detected in the fourth process is referred to as the fourth position.
- The detection unit 15 is configured to be able to execute the fourth process periodically.
- The detection unit 15 may execute the fourth process, for example, every several frames.
- The detection unit 15 is configured to be able to execute the first process and the fourth process in parallel. By executing the fourth process in parallel with the first process, the detection unit 15 can detect the position of the eye 5a of the user 13 even while the fourth process is running.
- The detection unit 15 is configured to be able to execute a fifth process of comparing the fourth position with any of the first position, the second position, and the third position detected by template matching. Each time the detection unit 15 executes the fourth process, it may execute the fifth process using the fourth position detected in that fourth process.
- The detection unit 15 may store the captured image 51 used for the fourth process and compare the first position, the second position, and the third position detected for the same captured image 51 with the fourth position.
- The detection accuracy of the fourth position is higher than that of the first position, the second position, and the third position.
- The detection unit 15 may verify validity by comparing the fourth position with each of the first position, the second position, and the third position.
- The detection unit 15 may calculate the difference value between the fourth position and each of the first position, the second position, and the third position, and determine whether each difference value is within a predetermined range. If a difference value is within the predetermined range, the corresponding position can be presumed to have been detected with accuracy as high as that of the fourth position. If a difference value is outside the predetermined range, the corresponding position can be presumed to be an erroneous detection.
- The detection unit 15 may calculate the difference values between the fourth position and the first position, the second position, and the third position, and output to the three-dimensional projection device 12 the coordinate information of a position whose difference value is within the predetermined range.
- Among such positions, the coordinate information of the position with the smallest difference value may be output to the three-dimensional projection device 12.
- If the detection unit 15 calculates the difference values between the fourth position and the first position, the second position, and the third position, and all the difference values are outside the predetermined range, it may output the coordinate information of the fourth position to the three-dimensional projection device 12.
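One plausible shape for the fifth process follows; the disclosure does not specify it to this level, and the tolerance and names are ours.

```python
import math

VALID_RANGE = 8.0  # assumed tolerance in pixels against the fourth position

def fifth_process(fourth_pos, candidates):
    """Compare template-matching results with the more accurate fourth position.

    candidates maps labels ("first", "second", "third") to positions.
    Returns the coordinate information to output: a candidate within the
    tolerance (the closest one, following the smallest-difference option),
    or the fourth position itself if every candidate falls outside it.
    """
    in_range = {label: pos for label, pos in candidates.items()
                if math.dist(pos, fourth_pos) <= VALID_RANGE}
    if not in_range:
        return fourth_pos
    return min(in_range.values(), key=lambda pos: math.dist(pos, fourth_pos))
```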
- the detection device 50 may execute the first process using the first template image 52 of the eye 5a and the second process using the second template image 53 of the face 5 as the template matching process.
- the detection device 50 may execute, for example, the first process shown in the flow chart of FIG. 7A and the second process shown in the flow chart of FIG. 7B.
- the detection device 50 may execute the template image generation process at the time of starting the three-dimensional projection system 100, and start the template matching process after the template image generation process is completed.
- when the template image generation process can be omitted, the detection device 50 may start the template matching process as soon as the three-dimensional projection system 100 is started.
- in step B1, the input unit 30 receives the input of the captured image 51 captured by the camera 11.
- in step B2, the detection unit 15 cuts out from the captured image 51, as the search range, the region around the position from which the first template image 52 of the eye 5a was extracted.
- the position coordinates from which the first template image 52 of the eye 5a is extracted may be stored in association with the first template image 52 of the eye 5a.
- in step B3, the detection unit 15 performs template matching on the search range using the first template image 52 of the eye 5a.
- by template matching, the detection unit 15 determines the position within the search range having the highest goodness of fit with the first template image 52 of the eye 5a, together with that goodness of fit.
- in step B4, the detection unit 15 determines whether or not the determined goodness of fit is equal to or greater than a threshold value. If it is equal to or greater than the threshold value, the process proceeds to step B5; if it is less than the threshold value, the process returns to step B1.
- in step B5, the detection unit 15 determines the coordinates of the pupil position (first position) in the captured image 51 based on the coordinates of the position having the highest goodness of fit with the first template image 52 of the eye 5a within the search range and the coordinate positional relationship (first relationship) specified in advance, and ends the template matching process.
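- a minimal sketch of steps B1 to B5 using OpenCV is given below; the normalized cross-correlation measure, the threshold value, the search margin, and the pupil offset standing in for the first relationship are all assumptions of the sketch. The second process described next would differ only in the template image and the offset.

```python
import cv2

# A sketch of the first process (steps B1-B5). MATCH_THRESHOLD and
# SEARCH_MARGIN are illustrative assumptions, not values from the disclosure.
MATCH_THRESHOLD = 0.8   # assumed goodness-of-fit threshold
SEARCH_MARGIN = 40      # assumed margin (px) around the stored template position

def first_process(captured, template, template_xy, pupil_offset):
    """captured: grayscale frame; template: first template image 52;
    template_xy: (x, y) where the template was extracted;
    pupil_offset: (dx, dy) from match position to pupil (first relationship)."""
    th, tw = template.shape[:2]
    x, y = template_xy
    # Step B2: cut out the search range around the stored template position.
    x0, y0 = max(0, x - SEARCH_MARGIN), max(0, y - SEARCH_MARGIN)
    x1 = min(captured.shape[1], x + tw + SEARCH_MARGIN)
    y1 = min(captured.shape[0], y + th + SEARCH_MARGIN)
    search = captured[y0:y1, x0:x1]
    # Step B3: template matching; normalized cross-correlation as goodness of fit.
    result = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    # Step B4: accept only if the goodness of fit reaches the threshold.
    if max_val < MATCH_THRESHOLD:
        return None  # go back to step B1 with the next frame
    # Step B5: derive the pupil position (first position) from the best match
    # position and the pre-specified first relationship.
    best_x, best_y = x0 + max_loc[0], y0 + max_loc[1]
    return (best_x + pupil_offset[0], best_y + pupil_offset[1])
```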
- in step B11, the input unit 30 receives the input of the captured image 51 captured by the camera 11.
- in step B12, the detection unit 15 cuts out from the captured image 51, as the search range, the region around the position from which the second template image 53 of the face 5 was extracted in the template image generation process.
- the position coordinates from which the second template image 53 of the face 5 is extracted may be stored in association with the second template image 53 of the face 5.
- in step B13, the detection unit 15 performs template matching on the search range using the second template image 53 of the face 5.
- by template matching, the detection unit 15 determines the position within the search range having the highest goodness of fit with the second template image 53 of the face 5, together with that goodness of fit.
- in step B14, the detection unit 15 determines whether or not the determined goodness of fit is equal to or greater than the threshold value. If it is equal to or greater than the threshold value, the process proceeds to step B15; if it is less than the threshold value, the process returns to step B11.
- in step B15, the detection unit 15 determines the coordinates of the pupil position (second position) in the captured image 51 based on the coordinates of the position having the highest goodness of fit with the second template image 53 of the face 5 within the search range and the coordinate positional relationship (second relationship) specified in advance, and ends the template matching process.
- the first process and the second process executed for the same captured image 51 may be asynchronous processes.
- the first process and the second process are independently executed, and the first position and the second position are determined.
- the determined coordinate information of the first position or the second position is output from the detection device 50 to the three-dimensional projection device 12.
- the detection unit 15 may determine which coordinate information of the first position or the second position is to be output.
- the display control unit 24 controls the parallax image displayed on the display unit 20 by using the coordinate information of the pupil position acquired from the detection device 50.
- the detection device 50 may further execute the third process.
- the detection device 50 may execute, for example, the third process shown in the flow chart of FIG. 7C.
- the detection device 50 may start the third process after determining the first position and the second position, for example.
- the detection unit 15 compares the first position with the second position.
- the first position and the second position to be compared may be the detection results of the first process and the second process for the same captured image 51.
- in step S2, the detection unit 15 determines whether or not the first position and the second position are the same position. For example, the difference between the coordinate information of the first position and the coordinate information of the second position may be calculated, and if the difference value is within a first range, it may be determined that the positions are the same.
- if the first position and the second position are the same position, in step S3 the detection unit 15 determines the coordinates of the first position as the third position, which is the pupil position in the captured image 51, and ends the third process. If the first position and the second position are not the same position, in step S4 the detection unit 15 determines whether or not the detection has failed. For example, if the difference value between the coordinate information of the first position and the coordinate information of the second position is outside a second range, it may be determined that the detection has failed. If the detection has failed, the detection unit 15 ends the third process without determining the coordinates of the third position.
- if the detection has not failed, in step S5 the detection unit 15 determines the coordinates of the intermediate position between the first position and the second position as the third position, which is the pupil position in the captured image 51, and ends the third process. For example, if the difference value between the coordinate information of the first position and the coordinate information of the second position is outside the first range but within the second range, it may be determined that the detection has not failed.
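- a sketch of the third process under assumed first and second ranges; the numeric thresholds and the Euclidean distance comparison are illustrative assumptions only.

```python
import numpy as np

# A sketch of the third process (steps S1-S5). FIRST_RANGE and SECOND_RANGE
# stand in for the "first range" and "second range"; values are assumed.
FIRST_RANGE = 3.0    # px: positions closer than this count as "the same position"
SECOND_RANGE = 30.0  # px: positions farther apart than this mean detection failed

def third_process(first_pos, second_pos):
    """Returns the third position (pupil position), or None if detection failed."""
    p1 = np.asarray(first_pos, dtype=float)
    p2 = np.asarray(second_pos, dtype=float)
    diff = float(np.linalg.norm(p1 - p2))  # step S1: compare the two positions
    if diff <= FIRST_RANGE:
        return tuple(p1)            # step S3: same position, adopt first position
    if diff > SECOND_RANGE:
        return None                 # step S4: detection failed
    return tuple((p1 + p2) / 2.0)   # step S5: adopt the intermediate position
```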
- the driver's seat of the mobile body 10 is configured to be movable in the front-rear direction, for example.
- the posture of the user may change during the operation of the moving body 10.
- the detection device 50 can execute the template matching process of another example as follows.
- when the front-rear position of the driver's seat or the posture of the user 13 changes, the face of the user 13 moves in the z direction.
- when the face of the user 13 moves in the positive z direction, the face of the user 13 appears smaller in the captured image 51 than before the movement.
- when the face of the user 13 moves in the negative z direction, the face of the user 13 appears larger in the captured image 51 than before the movement.
- the detection device 50 may perform scaling processing on the first and second template images 52 and 53, and perform template matching using the scaled first and second template images 52 and 53.
- the detection device 50 may perform template matching using, for example, a plurality of first and second template images 52 and 53 having different enlargement ratios.
- the detection device 50 may perform template matching using, for example, a plurality of first and second template images 52 and 53 having different reduction ratios.
- the template matching process of another example will be explained using a flow chart.
- the detection device 50 may execute, for example, the template matching process shown in the flow chart of FIG.
- the detection device 50 may execute the template image generation process at the time of starting the three-dimensional projection system 100, and start the template matching process after the template image generation process is completed.
- when the template image generation process can be omitted, the detection device 50 may start the template matching process as soon as the three-dimensional projection system 100 is started.
- in step C5, the detection unit 15 performs scaling processing on the first template image 52 of the eye 5a using a plurality of different scaling factors.
- the detection unit 15 performs template matching using the scaled first template images 52 of the eye 5a. Since the detection device 50 does not know how the posture of the user 13 has changed, both enlargement and reduction are performed as the scaling processing.
- since a plurality of scaled first template images 52 of the eye 5a are generated, the detection unit 15 performs template matching with each of the plurality of first template images 52 and determines the first template image 52 of the eye 5a having the highest goodness of fit. In step C6, the detection unit 15 estimates the position of the user 13 in the z direction based on the magnification of the first template image 52 of the eye 5a having the highest goodness of fit. In step C7, the detection unit 15 determines the coordinates of the pupil position in the captured image 51 in the same manner as in step B5, then corrects the coordinates of the pupil position based on the estimated z-direction position of the user 13, and ends the template matching process.
- the corrected coordinate information of the pupil position is output from the detection device 50 to the three-dimensional projection device 12.
- in the second process, steps C1 to C4 of FIG. 8 perform the same operations as steps B11 to B14 of FIG. 7B, and steps C5 to C7 are the same as in the first process described above except that the second template image 53 of the face 5 is used instead of the first template image 52 of the eye 5a, so their description is omitted.
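- a sketch of the multi-scale matching of steps C5 to C7; the set of scaling factors and the matching method are assumptions of the sketch, not values fixed by the disclosure.

```python
import cv2

# A sketch of steps C5-C7: template matching with multiple scaled templates.
# SCALE_FACTORS covers both reduction and enlargement; the values are assumed.
SCALE_FACTORS = (0.90, 0.95, 1.00, 1.05, 1.10)

def match_multiscale(search, template):
    """Returns (best_goodness_of_fit, best_location, best_scale)."""
    best = (-1.0, None, None)
    for s in SCALE_FACTORS:
        # Step C5: scaling processing with several magnifications.
        scaled = cv2.resize(template, None, fx=s, fy=s,
                            interpolation=cv2.INTER_LINEAR)
        if (scaled.shape[0] > search.shape[0]
                or scaled.shape[1] > search.shape[1]):
            continue  # the scaled template must fit inside the search range
        result = cv2.matchTemplate(search, scaled, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    # Step C6: the winning scale approximates how the apparent size of the
    # user's face changed, from which the z-direction position is estimated
    # and used in step C7 to correct the pupil coordinates.
    return best
```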
- the detection device 50A includes an input unit 30, a prediction unit 31, and a detection unit 15A. Since the configuration of the three-dimensional projection system 100A other than the detection device 50A is the same as that of the above-mentioned three-dimensional projection system 100, the same reference numerals are given and detailed description thereof will be omitted.
- the prediction unit 31 is configured to predict the position of the eye 5a at a time later than the current time, based on a plurality of positions of the eye 5a detected by the detection unit 15A before the current time in the first process.
- the position of the eye 5a of the present embodiment may be coordinate information indicating the position of the pupil of the eye 5a of the user 13 in the same manner as described above.
- the plurality of positions of the eye 5a include positions of the eye 5a detected at different times.
- the prediction unit 31 may be configured to predict the future position of the eye 5a by using a plurality of pieces of prediction data, each a combination of a detection time and coordinate information, and to output it as the predicted position.
- when the detection unit 15A detects the position of the eye 5a, it may sequentially store the coordinate information and the detection time as prediction data in, for example, a storage area of the detection unit 15A, a storage area of the prediction unit 31, or the storage unit 23.
- the future position of the eye 5a means a position at a time later than the times of the plurality of stored prediction data.
- the prediction unit 31 is configured to predict the position of the face 5 at a time later than the current time, based on a plurality of positions of the face 5 detected by the detection unit 15A before the current time in the second process.
- the position of the face 5 of the present embodiment may be coordinate information indicating the position of the pupil of the eye 5a of the user 13 in the same manner as described above.
- the plurality of positions of the face 5 include positions of the face 5 detected at different times.
- the prediction unit 31 may be configured to predict the future position of the face 5 by using a plurality of pieces of prediction data, each a combination of a detection time and coordinate information, and to output it as the predicted position.
- when the detection unit 15A detects the position of the face 5, it may sequentially store the coordinate information and the detection time as prediction data in, for example, a storage area of the detection unit 15A, a storage area of the prediction unit 31, or the storage unit 23.
- the future position of the face 5 means a position at a time later than the times of the plurality of stored prediction data.
- the method of predicting the positions of the eyes 5a and the face 5 by the prediction unit 31 may be, for example, a method using a prediction function.
- the prediction function is a function derived from a plurality of stored prediction data.
- a functional formula with coefficients determined in advance by experiment or the like may be stored in a storage area of the detection unit 15A, a storage area of the prediction unit 31, or the storage unit 23.
- the prediction function may be updated every time the prediction unit 31 predicts the positions of the eyes 5a and the face 5.
- the prediction unit 31 inputs the future time to be predicted into the prediction function and outputs the coordinate information of the position (predicted position) of the eye 5a or the face 5 at that time.
- the future time to be predicted may be the time when template matching is executed next, for example, the time when the next frame is input from the camera 11.
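- a sketch of one possible prediction function follows: a least-squares polynomial fitted to the stored prediction data. The least-squares derivation, the function name, and the default degree are assumptions of the sketch, not the derivation fixed by the disclosure.

```python
import numpy as np

# A sketch of a prediction function: fit a low-order polynomial to the stored
# prediction data (detection time, coordinates) and evaluate it at the next
# frame time. Degree 1 amounts to linear extrapolation of the recent motion.
def predict_position(times, xs, ys, t_future, degree=1):
    """times, xs, ys: histories of detection time and pupil coordinates.
    Returns the predicted (x, y) at t_future."""
    t = np.asarray(times, dtype=float)
    # Derive (update) the prediction function from the stored prediction data.
    fx = np.polynomial.Polynomial.fit(t, np.asarray(xs, dtype=float), degree)
    fy = np.polynomial.Polynomial.fit(t, np.asarray(ys, dtype=float), degree)
    # Input the future time and output the predicted position.
    return float(fx(t_future)), float(fy(t_future))
```

- with a camera delivering frames at a fixed rate, t_future would typically be the timestamp of the next incoming frame, matching the description above.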
- the detection unit 15A may use a part of the captured image 51 as a search range in template matching.
- the detection unit 15A is configured to use a region including the position of the eye 5a predicted by the prediction unit 31 as a search range in template matching in the first process.
- the detection unit 15A is configured to set a region including the position of the face 5 predicted by the prediction unit 31 as a search range in template matching in the second process.
- the detection unit 15A sets a region including the prediction position output by the prediction unit 31 as a prediction region, and sets the set prediction region as a search range in template matching.
- the prediction region including the predicted position may be a region smaller than the captured image 51 and larger than the first and second template images 52 and 53.
- the center coordinates of the prediction region may coincide with the coordinates of the predicted position.
- the search range in the template matching of the present embodiment may, for example, be geometrically similar in shape to the first and second template images 52 and 53.
- the detection unit 15A executes template matching with such a prediction area as a search range.
- the template matching of the present embodiment is the same as the above-mentioned template matching except that the search range is different.
- the position best matching the first or second template image 52 or 53 is searched for within the prediction region serving as the search range.
- the detection results of the first process and the second process may be coordinate information indicating the pupil position of the eye 5a of the user 13. Since the prediction region serving as the search range of the present embodiment includes the predicted position output by the prediction unit 31, the probability that the eye 5a or the face 5 is included in the search range remains high even if the search range is made smaller. Making the search range smaller reduces the amount of computation required for template matching. By reducing the amount of computation, the detection unit 15A can increase the computation speed for outputting the detection result.
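- a sketch of deriving the search range from the predicted position; the margin factor, the clamping behavior, and keeping the region similar in shape to the template are assumptions of the sketch.

```python
import numpy as np

# A sketch of setting the prediction region: a crop centered on the predicted
# position, larger than the template but smaller than the captured image.
def prediction_region(captured_shape, template_shape, predicted_xy, margin=1.5):
    h, w = captured_shape[:2]
    th, tw = template_shape[:2]
    # Region size: the template size enlarged by the margin, so the region
    # stays geometrically similar to the template.
    rh, rw = int(th * (1 + margin)), int(tw * (1 + margin))
    cx, cy = predicted_xy
    # Clamp so the region stays inside the captured image.
    x0 = int(np.clip(cx - rw // 2, 0, max(0, w - rw)))
    y0 = int(np.clip(cy - rh // 2, 0, max(0, h - rh)))
    return x0, y0, rw, rh  # crop as captured[y0:y0+rh, x0:x0+rw]
```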
- the prediction unit 31 may be configured to further calculate the rate of change in the position of the eye 5a based on the plurality of positions of the eye 5a detected by the detection unit 15A before the current time.
- since the coordinate information and the detection time are stored as prediction data, the change speed of the position of the eye 5a can be calculated from a plurality of pieces of prediction data.
- for example, from the distance between two detected positions and the difference between their detection times, the prediction unit 31 can calculate the change speed of the position of the eye 5a.
- the moving distance can be calculated separately for the x-axis direction component and the y-axis direction component, so the change speed of the position of the eye 5a can also be calculated for each of the x-axis and y-axis direction components.
- the detection unit 15A is configured to change the size of the search range in the template matching of the first process according to the change speed calculated by the prediction unit 31. If the change speed calculated by the prediction unit 31 is large, the moving distance of the position of the eye 5a is predicted to be large. For example, comparing the x-axis direction component and the y-axis direction component of the calculated change speed, the moving distance of the position of the eye 5a is predicted to be larger in the direction in which the component is larger.
- the search range in template matching can be made small by predicting the position of the eye 5a, but the position of the eye 5a is more likely to deviate from the predicted position in the direction in which the component of the change speed is large.
- the detection unit 15A may therefore widen the prediction region including the predicted position in the direction in which the component of the change speed is large, so that the position of the eye 5a does not fall outside the search range.
- the detection unit 15A executes template matching using such a widened area as a search range. Similar to the above, the detection unit 15A may change the size of the search range in the template matching of the face 5 in the second process according to the change speed calculated by the prediction unit 31.
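- a sketch of widening the prediction region per axis according to the change speed; the speed-to-pixels factor is an assumption of the sketch.

```python
# A sketch of widening the prediction region according to the change speed.
# PX_PER_SPEED maps speed to extra pixels; the value is an assumption.
PX_PER_SPEED = 0.5  # assumed widening (px) per unit of change speed (px/frame)

def widen_region(x0, y0, rw, rh, vx, vy, frame_w, frame_h):
    """(vx, vy): change-speed components estimated by the prediction unit.
    Widens the region more along the axis whose speed component is larger."""
    ex, ey = int(abs(vx) * PX_PER_SPEED), int(abs(vy) * PX_PER_SPEED)
    x0 = max(0, x0 - ex)
    y0 = max(0, y0 - ey)
    rw = min(frame_w - x0, rw + 2 * ex)
    rh = min(frame_h - y0, rh + 2 * ey)
    return x0, y0, rw, rh
```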
- the template matching process including the prediction of the pupil position will be explained using a flow chart.
- the detection device 50A may execute the template matching process shown in the flow chart of FIG. 10 in the first process.
- the detection device 50A may execute the template image generation process at the time of starting the three-dimensional projection system 100A, and may start the template matching process after the template image generation process is completed.
- when the template image generation process can be omitted, the detection device 50A may start the template matching process as soon as the three-dimensional projection system 100A is started.
- in step B21, the input unit 30 receives the input of the captured image 51 captured by the camera 11.
- in step B22, the detection unit 15A cuts out the search range from the captured image 51.
- the search range cut out in step B22 is the search range determined in step B27 described later. If step B27 has not been executed before step B22 and the search range has not been determined in advance, the region around the position from which the first template image 52 of the eye 5a was extracted in the template image generation process may be used as the search range.
- in step B23, the detection unit 15A performs template matching on the search range using the first template image 52 of the eye 5a. By template matching, the detection unit 15A determines the position within the search range having the highest goodness of fit with the first template image 52 of the eye 5a, together with that goodness of fit.
- in step B24, the detection unit 15A determines whether or not the determined goodness of fit is equal to or greater than the threshold value. If it is equal to or greater than the threshold value, the process proceeds to step B25; if it is less than the threshold value, the process returns to step B21.
- in step B25, the detection unit 15A determines the coordinates of the pupil position in the captured image 51 based on the coordinates of the position having the highest goodness of fit with the first template image 52 of the eye 5a within the search range and the relative coordinate positional relationship specified in advance.
- in step B26, the prediction unit 31 predicts the future pupil position and outputs it as the predicted position.
- the prediction unit 31 updates the prediction function based on, for example, the latest prediction data, which is the combination of the coordinate information of the pupil position determined in step B25 and its detection time, together with the stored past prediction data.
- the prediction unit 31 predicts the pupil position using the updated prediction function and outputs the predicted position.
- in step B27, the detection unit 15A determines a region including the predicted position output from the prediction unit 31 as the search range, and the process returns to step B21.
- the second template image 53 of the face 5 may be used instead of the first template image 52 of the eye 5a.
- the face 5 of the user 13 may move back and forth. Further, when the user 13 tilts his or her head, the face 5 of the user 13 tilts. When the face 5 moves back and forth, the face 5 in the captured image 51 appears larger or smaller, as if enlargement or reduction processing had been applied. When the face 5 tilts, the face 5 in the captured image 51 appears as if rotation processing had been applied.
- the detection unit 15A compares the predicted position with the immediately preceding pupil position, and if the inter-eye distance has changed, updates the first template image 52 of the eye 5a to a first template image 52 of the eye 5a scaled by a magnification corresponding to the inter-eye distance.
- for example, the detection unit 15A may create in advance, by scaling processing, a plurality of first template images 52 having different enlargement ratios and a plurality of first template images 52 having different reduction ratios, and select the first template image 52 of the eye 5a according to the inter-eye distance.
- the prediction unit 31 may predict the pupil position of the left eye and the pupil position of the right eye separately, and the detection unit 15A may detect a change in the inter-eye distance by comparing the immediately preceding pupil positions of the left eye and the right eye.
- similarly, in the second process, the detection unit 15A updates the second template image 53 of the face 5 to a second template image 53 of the face 5 scaled by a magnification corresponding to the inter-eye distance.
- for example, the detection unit 15A may create in advance, by scaling processing, a plurality of second template images 53 having different enlargement ratios and a plurality of second template images 53 having different reduction ratios, and select the second template image 53 of the face 5 according to the inter-eye distance.
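- a sketch of selecting a pre-scaled template from the change in inter-eye distance; the scale steps are assumptions, and the resize call stands in for looking up templates generated in advance.

```python
import cv2

# A sketch of picking a scaled template from the inter-eye distance change.
# SCALE_STEPS is an assumed set of pre-generated scale factors.
SCALE_STEPS = (0.8, 0.9, 1.0, 1.1, 1.2)

def select_scaled_template(template, d_prev, d_now):
    """d_prev, d_now: inter-eye distances before and after the comparison.
    Picks the pre-generated scale closest to the observed size ratio."""
    ratio = d_now / d_prev
    best = min(SCALE_STEPS, key=lambda s: abs(s - ratio))
    # In practice the scaled templates would be created once in advance;
    # resizing here stands in for that lookup.
    return cv2.resize(template, None, fx=best, fy=best,
                      interpolation=cv2.INTER_LINEAR)
```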
- if the detection unit 15A compares the predicted position with the immediately preceding pupil position and the pupil position has changed in a manner corresponding to a tilt of the face, the detection unit 15A updates the first template image 52 of the eye 5a to a first template image 52 of the eye 5a rotated by an angle corresponding to the change in tilt.
- for example, the detection unit 15A may create in advance, by rotation processing, a plurality of first template images 52 having different rotation angles, and select the first template image 52 according to the change in tilt.
- the prediction unit 31 may predict the pupil position of the left eye and the pupil position of the right eye separately, and the detection unit 15A may detect the change in tilt from the change in the y-axis direction positions of the immediately preceding pupil positions of the left eye and the right eye.
- when the face tilts, the positions (y-coordinates) of the pupil of the left eye and the pupil of the right eye in the y-axis direction change in different directions. For example, if the y-axis position of the pupil of the left eye moves upward while the y-axis position of the pupil of the right eye moves downward, a change in tilt has occurred.
- the detection unit 15A may calculate the rotation angle based on the magnitudes of the position changes of the left eye and the right eye in the y-axis direction. Similarly, in the second process, if the pupil position has changed according to a tilt, the detection unit 15A updates the second template image 53 of the face 5 to a second template image 53 rotated by an angle corresponding to the change in tilt. For example, the detection unit 15A may create in advance, by rotation processing, a plurality of second template images 53 having different rotation angles, and select the second template image 53 according to the change in tilt.
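- a sketch of estimating the tilt change from the two pupil positions and rotating the template accordingly; deriving the angle with atan2 of the eye-to-eye vector is an assumption of the sketch.

```python
import math
import cv2

# A sketch of estimating the tilt change from the y-direction movement of the
# two pupils and producing a rotated template.
def tilt_angle_deg(left_xy, right_xy):
    """Angle of the line through both pupils, in degrees."""
    dx = right_xy[0] - left_xy[0]
    dy = right_xy[1] - left_xy[1]
    return math.degrees(math.atan2(dy, dx))

def rotated_template(template, prev_left, prev_right, cur_left, cur_right):
    """Rotates the template by the change in face tilt between two detections."""
    angle = (tilt_angle_deg(cur_left, cur_right)
             - tilt_angle_deg(prev_left, prev_right))
    h, w = template.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(template, m, (w, h))
```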
- the detection device 50A may execute the template matching process shown in the flow chart of FIG. 11 in the first process.
- in step C11, the input unit 30 receives the input of the captured image 51 captured by the camera 11.
- in step C12, the detection unit 15A cuts out the search range from the captured image 51.
- the search range cut out in step C12 is the search range determined in step C18 described later. If step C18 has not been executed before step C12 and the search range has not been determined in advance, the region around the position from which the first template image 52 of the eye 5a was extracted in the template image generation process may be used as the search range.
- in step C13, the detection unit 15A performs template matching on the search range using the updated first template image 52 of the eye 5a.
- the first template image 52 of the eye 5a is the first template image 52 of the eye 5a updated in step C17 described later.
- by template matching, the detection unit 15A determines the position within the search range having the highest goodness of fit with the first template image 52 of the eye 5a, together with that goodness of fit.
- in step C14, the detection unit 15A determines whether or not the determined goodness of fit is equal to or greater than the threshold value. If it is equal to or greater than the threshold value, the process proceeds to step C15; if it is less than the threshold value, the process returns to step C11.
- in step C15, the detection unit 15A determines the coordinates of the pupil position in the captured image 51 based on the coordinates of the position having the highest goodness of fit with the first template image 52 of the eye 5a within the search range and the coordinate positional relationship specified in advance.
- in step C16, the prediction unit 31 predicts the future pupil position and outputs it as the predicted position.
- in step C17, the detection unit 15A updates the first template image 52 of the eye 5a.
- the detection unit 15A compares the predicted position with the immediately preceding pupil position, and updates the first template image 52 of the eye 5a to one that has been subjected to scaling processing, rotation processing, or both, according to the comparison result.
- in step C18, the detection unit 15A determines a region including the predicted position output from the prediction unit 31 as the search range, and the process returns to step C11.
- the second template image 53 of the face 5 may be used instead of the first template image 52 of the eye 5a.
- the detection units 15 and 15A may determine, as the search range, a region including the fourth position instead of the predicted position.
- the configuration according to the present disclosure is not limited to the embodiments described above, and can be modified or changed in many ways.
- the functions and the like included in each component and the like can be rearranged so as not to be logically inconsistent, and a plurality of components and the like can be combined or divided into one.
- in the present disclosure, descriptions such as "first", "second", and "third" are identifiers for distinguishing the configurations.
- the configurations distinguished by descriptions such as "first", "second", and "third" in the present disclosure can have the numbers in those descriptions exchanged.
- for example, the first process can exchange the identifiers "first" and "second" with the second process.
- the exchange of the identifiers is performed simultaneously, and the configurations remain distinguished after the exchange.
- the identifiers may also be deleted. Configurations from which the identifiers have been deleted are distinguished by reference signs. The description of identifiers such as "first" and "second" in the present disclosure alone shall not be used to interpret the order of the configurations or as grounds for the existence of an identifier with a smaller number.
- the x-axis, y-axis, and z-axis are provided for convenience of explanation and may be interchanged with each other.
- the configuration according to the present disclosure has been described using a Cartesian coordinate system composed of x-axis, y-axis, and z-axis.
- the positional relationship of each configuration according to the present disclosure is not limited to being orthogonal.
Description
5a eye (5aL: left eye, 5aR: right eye)
10 moving body
11 camera
12 three-dimensional projection device
13 user
14 virtual image (14a: first virtual image, 14b: second virtual image)
15, 15A detection unit
16 eye box
17 three-dimensional display device
18 optical element (18a: first mirror, 18b: second mirror)
19 backlight
20 display unit (20a: display surface)
201L left-eye viewing area
201R right-eye viewing area
21 barrier unit (21a: light-shielding surface, 21b: opening area)
22 communication unit
23 storage unit
24 display control unit
25 windshield
30 input unit
31 prediction unit
50, 50A detection device
51 captured image
52 first template image of the eye
53 second template image of the face
100, 100A three-dimensional projection system (image display system)
Claims (11)
1. A detection device comprising:
an input unit configured to receive input of image information; and
a controller configured to be able to execute a detection process of detecting a position of an eye of a user,
wherein the controller is configured to be able to execute, as the detection process:
a first process of detecting a first position of the eye by template matching using a first template image, based on the input image information; and
a second process of detecting a position of a face by template matching using a second template image different from the first template image, based on the input image information, and detecting a second position of the eye based on the detected position of the face.
2. The detection device according to claim 1, wherein the controller is configured to be able to execute, as the detection process, a third process of detecting a third position of the eye based on a result of comparing the first position and the second position.
3. The detection device according to claim 1 or 2, wherein the controller is configured to be able to execute a fourth process, different from the first process, of detecting a fourth position of the eye based on the input image information.
4. The detection device according to claim 3, wherein a processing time of the fourth process is longer than a processing time of the first process.
5. The detection device according to claim 3 or 4, wherein a detection accuracy of the fourth process is higher than a detection accuracy of the first process.
6. The detection device according to any one of claims 3 to 5, wherein the controller is configured to execute the fourth process periodically.
7. The detection device according to any one of claims 3 to 5, wherein the controller is configured to be able to execute the fourth process in parallel with the first process.
8. The detection device according to any one of claims 3 to 5, wherein the controller is configured to be able to execute a fifth process of comparing any one of the first position, the second position, and the third position with the fourth position.
9. The detection device according to claim 8, wherein the controller is configured to be able to execute, in at least one of the first process and the second process, template matching with a region including the fourth position as a search range, based on a result of the fifth process.
10. An image display system comprising:
a display unit configured to display a parallax image projected toward both eyes of a user via an optical system;
a barrier unit configured to give parallax to the both eyes by defining a traveling direction of image light of the parallax image;
a camera configured to capture an image of the user's face;
an input unit configured to receive input of imaging information output from the camera;
a detection unit configured to be able to execute a detection process of detecting a position of an eye of the user; and
a display control unit configured to control the display unit by synthesizing a parallax image corresponding to the position of the eye of the user detected by the detection unit,
wherein the detection unit is configured to be able to execute, as the detection process:
a first process of detecting a first position of the eye by template matching using a first template image, based on input image information; and
a second process of detecting a position of a face by template matching using a second template image different from the first template image, based on the input image information, and detecting a second position of the eye based on the detected position of the face.
11. The image display system according to claim 10, wherein the detection unit is configured to be able to execute, as the detection process, a third process of detecting a third position of the eye based on a result of comparing the first position and the second position, and the display control unit synthesizes the parallax image based on any one of the first position, the second position, and the third position.