WO2023003045A1 - Display control device, head-up display device, and display control method - Google Patents

Display control device, head-up display device, and display control method

Info

Publication number
WO2023003045A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
image
face
eye position
visibility
Prior art date
Application number
PCT/JP2022/028492
Other languages
French (fr)
Japanese (ja)
Inventor
一成 濱田
Original Assignee
日本精機株式会社 (Nippon Seiki Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本精機株式会社 (Nippon Seiki Co., Ltd.)
Publication of WO2023003045A1 publication Critical patent/WO2023003045A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G09G5/10: Intensity circuits
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory, with means for controlling the display position

Definitions

  • The present disclosure relates to a display control device, a head-up display device, a display control method, and the like that are used in a mobile object such as a vehicle and superimpose an image on the foreground of the mobile object (the actual view in the forward direction of the mobile object as seen from the occupant).
  • In Patent Document 1, display light projected onto a projection target portion such as the front windshield of a vehicle is reflected toward an occupant (observer) inside the vehicle, so that the observer can visually recognize a virtual image superimposed on the foreground of the vehicle; such a device is a head-up display device (an example of a virtual image display device).
  • The head-up display device described in Patent Document 1 virtually displays a display object at a predetermined position in the depth, vertical, and horizontal directions of the real space of the foreground (here, this position is referred to as a target position).
  • To the observer, the display object appears as if it were present at the target position in the foreground.
  • In Patent Document 2, the positions of the observer's right eye and left eye detected by a face detection unit such as a camera are tracked, and the display device is controlled so that right-eye display light representing the right-eye image is directed to the tracked right-eye position and left-eye display light representing the left-eye image is directed to the tracked left-eye position.
  • Binocular parallax is thereby given to the virtual object, and a head-up display device is disclosed that allows the observer to perceive, in a pseudo manner, that the virtual object is at the target position in the foreground (in the real scene).
  • Japanese Patent Laid-Open No. 2002-200030 discloses a head-up display device that emphasizes the position of a real object existing in the real scene by aligning the display position of an image (virtual image) with a position on the straight line from the observer's eye position, detected by a face detection unit such as a camera, to a specific position on the real object existing in the foreground (or to an area around the real object having a specific positional relationship with it).
  • In such devices, the face detection unit detects movement of the eye position, and the display position of the image (virtual image) can therefore be corrected according to the detected eye position.
  • However, because of system latency, the image corresponding to a new eye position is displayed with a delay, so an image that does not correspond to the current eye position may be viewed (for example, the observer views, from a second eye position, an image adapted to the first eye position), and it is assumed that the observer feels uncomfortable.
  • Furthermore, a face detection unit such as a camera detects the positions of the observer's eyes (left and right eye positions) from captured images using a complex algorithm. Depending on the situation, it is assumed that the correction of the display position of the image (virtual image) will not match the observer's eye position because of increased detection error (decreased detection accuracy) or erroneous detection.
  • The outline of the present disclosure relates to making it difficult for the observer to feel uncomfortable. More specifically, it relates to providing a display control device, a head-up display device, a display control method, and the like that make it difficult for the observer to visually recognize an image that does not match the user's eye position.
  • the display control device, head-up display device, display control method, etc. described in this specification employ the following means in order to solve the above problems.
  • The gist of the present embodiment is to reduce, based on eye-position-related information including at least one of the user's eye position, face position, and face orientation, or based on the detection operation of that information, the visibility of an AR virtual image that undergoes eye-following image correction processing, which corrects the position of the image displayed on the display device based at least on the eye-position-related information.
  • The display control device of the first embodiment of the present invention performs display control in a head-up display device that displays an image on a display and projects the light of the displayed image onto a projection target member, thereby superimposing a virtual image of the image on the foreground for the user of the vehicle. It comprises one or more processors, a memory, and one or more computer programs stored in the memory and configured to be executed by the one or more processors. The processor acquires eye-position-related information including at least one of the user's eye position, face position, and face orientation, and, in order to display an AR virtual image on the head-up display device, executes eye-following image correction processing that corrects the position of the image displayed on the display device based at least on the eye-position-related information; it further determines, based on the eye-position-related information, whether the eye-position-related information or its detection operation satisfies a predetermined determination condition and, when the determination condition is determined to be satisfied, executes visibility reduction processing that reduces the visibility of the AR virtual image.
  • The first embodiment of the present invention has the advantage of making it difficult to visually recognize images that do not match the eye position. That is, based on the eye-position-related information including at least one of the user's eye position, face position, and face orientation, or based on the detection operation of that information, a situation in which an image that does not match the eye position could be viewed is estimated, and the visibility of the AR virtual image on which the eye-following image correction processing is performed can be reduced.
  • The AR virtual image is adjusted to match the position, direction, shape, etc. of a real object in the foreground (real world) as viewed from the observer's eye position (or face position); in such a case it is also called a Maisanalog image.
  • However, the AR virtual image is not necessarily limited to a Maisanalog image that changes to match the real object; it may be any image on which eye-following image correction processing that changes the image position in accordance with the eye position (for example, motion parallax addition processing or superimposition processing) is performed.
  • The determination conditions include at least one of: a condition on the change speed of at least one of the eye position, face position, and face orientation; a condition on the coordinates of at least one of the eye position, face position, and face orientation; and a condition on the movement time of at least one of the eye position, face position, and face orientation.
  • Specifically, the determination conditions include at least one of: the change speed of at least one of the eye position, face position, and face orientation being fast; the coordinates of at least one of the eye position, face position, and face orientation being within a predetermined range; and at least one of the eye position, face position, and face orientation changing continuously.
  • For example, the visibility of the AR virtual image can be reduced on the condition that at least one of the eye position, face position, and face orientation changes quickly; for instance, if the change speed is faster than a predetermined threshold, the visibility of the AR virtual image is reduced.
  • The visibility of the AR virtual image can also be reduced on the condition that the coordinates of at least one of the eye position, face position, and face orientation are within a predetermined range; for example, the visibility is reduced in a predetermined range where eye position detection errors are likely to grow (detection accuracy is likely to fall) or erroneous detection is likely to occur.
  • The visibility of the AR virtual image can further be reduced on the condition that a continuous change in at least one of the eye position, face position, and face orientation is detected; for example, when the eye position is detected to change continuously in one direction, the visibility of the AR virtual image is reduced.
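  • Purely as an illustration (the specification prescribes no concrete code, and every threshold and range below is an assumption), such a determination-condition check over recent eye-position samples could look as follows:

        # Hypothetical determination-condition check on recent eye (or face) position samples.
        SPEED_THRESHOLD_MM_S = 150.0            # assumed change-speed threshold
        UNSTABLE_RANGE_X_MM = (-200.0, -120.0)  # assumed range where detection tends to degrade
        CONTINUOUS_SAMPLES = 5                  # this many one-way samples counts as "continuous change"

        def determination_condition_met(x_positions_mm, timestamps_s):
            if len(x_positions_mm) < 2:
                return False
            dt = timestamps_s[-1] - timestamps_s[-2]
            speed = abs(x_positions_mm[-1] - x_positions_mm[-2]) / dt if dt > 0 else 0.0
            fast_change = speed >= SPEED_THRESHOLD_MM_S
            in_unstable_range = UNSTABLE_RANGE_X_MM[0] <= x_positions_mm[-1] <= UNSTABLE_RANGE_X_MM[1]
            deltas = [b - a for a, b in zip(x_positions_mm, x_positions_mm[1:])]
            recent = deltas[-(CONTINUOUS_SAMPLES - 1):]
            continuous = (len(recent) >= CONTINUOUS_SAMPLES - 1
                          and (all(d > 0 for d in recent) or all(d < 0 for d in recent)))
            return fast_change or in_unstable_range or continuous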
  • The conditions on the detection operation of the eye-position-related information include at least one of: at least one of the eye position, face position, and face orientation cannot be detected; and a decrease in the detection accuracy of at least one of the eye position, face position, and face orientation is detected.
  • the visibility of the AR virtual image can be reduced on the condition that at least one of the eye position, face position, and face direction cannot be detected.
  • the visibility of the AR virtual image can be reduced on the condition that the detection accuracy of at least one of the eye position, the face position, and the face orientation is lowered.
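  • The conditions on the detection operation itself can be sketched in the same hedged spirit; how the face detection unit reports its accuracy is not specified, so the confidence value and threshold here are assumptions:

        # Hypothetical detection-operation check: loss of detection or degraded accuracy.
        MIN_DETECTION_CONFIDENCE = 0.6   # assumed accuracy threshold

        def detection_operation_condition_met(eye_detected, detection_confidence):
            lost = not eye_detected                                      # cannot be detected
            degraded = detection_confidence < MIN_DETECTION_CONFIDENCE   # detection accuracy decreased
            return lost or degraded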
  • In the visibility reduction process, the processor may lower the visibility differently depending on at least one of the eye position, face position, and face orientation.
  • For some eye positions, face positions, or face orientations, the AR virtual image is displayed with low visibility that prioritizes preventing the unmatched image from being visually recognized by the observer; for other eye positions, face positions, or face orientations, the AR virtual image can be displayed with moderate visibility that still suppresses viewing of the unmatched image while considering the ease of viewing the virtual image, which has the advantage of providing a flexible and highly convenient system.
  • Likewise, in the visibility reduction process, the processor may lower the visibility differently according to the change speed of at least one of the eye position, face position, and face orientation.
  • For some change speeds, the AR virtual image is displayed with low visibility that prioritizes preventing the unmatched image from being visually recognized by the observer; for other change speeds of the eye position, face position, or face orientation, the AR virtual image can be displayed with moderate visibility that still suppresses viewing of the unmatched image while considering the ease of viewing the virtual image, which has the advantage of providing a flexible and highly convenient system.
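  • One plausible way to "lower the visibility differently" according to the change speed, sketched under the assumption that visibility is adjusted through the opacity (or luminance) of the AR virtual image, with assumed speed bands:

        # Hypothetical mapping from eye/face change speed to an AR virtual image opacity factor.
        def ar_visibility_factor(change_speed_mm_s):
            if change_speed_mm_s < 150.0:   # slow movement: keep full visibility
                return 1.0
            if change_speed_mm_s < 400.0:   # moderate movement: moderate visibility
                return 0.5
            return 0.0                      # fast movement: effectively hide the AR virtual image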
  • In the visibility reduction process of some embodiments, the processor switches from a first eye-following image correction process, which corrects the position of the image displayed on the display device based at least on the eye position or the face position with a first correction amount for a given amount of change in the eye position or face position, to a second image correction process in which the second correction amount of the image position with respect to the amount of change in the eye position or face position is made smaller than the first correction amount, or in which the correction amount of the image position with respect to at least one of the vertical change and the horizontal change in the eye position or face position is set to zero.
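  • A minimal sketch of switching from the first eye-following image correction to the second image correction described above; the gain values are illustrative assumptions only:

        # First correction: full eye-following gain. Second correction: reduced gain,
        # or the correction for one axis (here the vertical one) set to zero.
        def corrected_image_offset(eye_delta_x_mm, eye_delta_y_mm, gain_x, gain_y):
            return gain_x * eye_delta_x_mm, gain_y * eye_delta_y_mm

        dx1, dy1 = corrected_image_offset(6.0, 2.0, gain_x=1.0, gain_y=1.0)  # first correction process
        dx2, dy2 = corrected_image_offset(6.0, 2.0, gain_x=0.3, gain_y=0.0)  # second correction process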
  • In some embodiments, the processor further determines, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined release condition, and when the release condition is determined to be satisfied, executes visibility increasing processing that raises the visibility of the AR virtual image that had been subjected to the visibility reduction processing. In this case, there is the advantage that an image that matches the eye position becomes easy to visually recognize again.
  • That is, based on the eye-position-related information including at least one of the user's eye position, face position, and face orientation, or based on the detection operation of that information, a situation in which an image that matches the eye position is likely to be viewed is estimated, and the visibility of the AR virtual image on which the eye-following image correction processing is executed can be raised.
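  • Giving the reduction condition and the release condition hysteresis keeps the display from flickering between the two states; the thresholds and hold time below are assumptions, not values from the specification:

        # Hypothetical hysteresis between visibility reduction and visibility increase.
        REDUCE_SPEED_MM_S = 150.0   # reduce visibility above this change speed
        RELEASE_SPEED_MM_S = 80.0   # allow raising visibility below this change speed
        RELEASE_HOLD_S = 0.5        # the speed must stay low this long before raising visibility

        class ArVisibilityState:
            def __init__(self):
                self.reduced = False
                self.calm_since = None

            def update(self, change_speed_mm_s, now_s):
                if change_speed_mm_s >= REDUCE_SPEED_MM_S:
                    self.reduced, self.calm_since = True, None        # determination condition met
                elif self.reduced and change_speed_mm_s <= RELEASE_SPEED_MM_S:
                    if self.calm_since is None:
                        self.calm_since = now_s
                    if now_s - self.calm_since >= RELEASE_HOLD_S:
                        self.reduced = False                          # release condition met
                else:
                    self.calm_since = None
                return self.reduced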
  • In the visibility increasing process, the processor may raise the visibility differently depending on at least one of the eye position, face position, and face orientation.
  • For some eye positions, face positions, or face orientations, the AR virtual image is displayed with visibility that prioritizes the ease of viewing the virtual image; for other eye positions, face positions, or face orientations, the AR virtual image can be displayed with moderate visibility that considers the ease of viewing while still suppressing viewing of an unmatched image, which has the advantage of providing a flexible and highly convenient system.
  • Likewise, in the visibility increasing process, the processor may raise the visibility differently according to the change speed of at least one of the eye position, face position, and face orientation.
  • For some change speeds of the eye position, face position, or face orientation, the AR virtual image is displayed with visibility that prioritizes the ease of viewing the virtual image; for other change speeds, it can be displayed with moderate visibility that considers the ease of viewing while still suppressing viewing of an unmatched image.
  • The head-up display device of an embodiment of the present invention displays an image on a display and projects the light of the displayed image onto a projection target member, thereby causing a virtual image of the image to be superimposed on the foreground and visually recognized by the user of the vehicle. It comprises one or more processors, a memory, and one or more computer programs stored in the memory and configured to be executed by the one or more processors. The processor acquires eye-position-related information including at least one of the user's eye position, face position, and face orientation; executes, in order to adjust the display position of the AR virtual image, eye-following image correction processing that corrects the position of the image displayed on the display device based at least on the eye-position-related information; determines, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined determination condition; and, when the determination condition is determined to be satisfied, executes visibility reduction processing that reduces the visibility of the AR virtual image.
  • The display control method of an embodiment of the present invention is a display control method in a head-up display device that displays an image on a display and projects the light of the displayed image onto a projection target member, thereby causing a virtual image of the image to be superimposed on the foreground and visually recognized by the user of the vehicle. The method includes: acquiring eye-position-related information including at least one of the user's eye position, face position, and face orientation; executing, in order to adjust the display position of the AR virtual image, eye-following image correction processing that corrects the position of the image displayed on the display device based at least on the eye-position-related information; determining, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined determination condition; and, when the determination condition is determined to be satisfied, executing visibility reduction processing that reduces the visibility of the AR virtual image. This provides the advantages described above. Other advantages and preferred features are mentioned in particular in the embodiments and the description.
  • FIG. 1 is a diagram showing an application example of a vehicle virtual image display system to a vehicle.
  • FIG. 2 is a diagram showing the configuration of the head-up display device.
  • FIG. 3 is a diagram showing an example of a foreground visually recognized by an observer and a virtual image displayed superimposed on the foreground while the host vehicle is running.
  • FIG. 4 is a diagram conceptually showing, in an embodiment in which the HUD device is a 3D-HUD device, the positional relationship between a left-viewpoint virtual image and a right-viewpoint virtual image displayed on the virtual image plane and the perceptual image perceived by the observer based on these left-viewpoint and right-viewpoint virtual images.
  • FIG. 5 is a diagram conceptually showing a virtual object placed at a target position in the real scene and an image displayed in the virtual image display area such that the virtual object is visually recognized at the target position in the real scene.
  • FIG. 6 is a diagram for explaining a method of motion parallax addition processing in this embodiment.
  • FIG. 7A is a comparative example showing a virtual image visually recognized from the position Px12 shown in FIG. 6 when the motion parallax adding process of this embodiment is not performed.
  • FIG. 7B is a diagram showing a virtual image visually recognized from the position Px12 shown in FIG. 6 when the motion parallax adding process of this embodiment is performed.
  • FIG. 8 is a diagram for explaining a method of motion parallax addition processing by moving eye positions (face positions) in the vertical direction according to the present embodiment.
  • FIG. 9 is a diagram showing an example of a foreground visually recognized by an observer and a virtual image displayed superimposed on the foreground while the own vehicle is running.
  • FIG. 10A is a diagram showing a real object in the foreground and a virtual image displayed by the HUD device as seen by the observer facing forward of the vehicle; the upper diagram is a comparative example without superimposition processing, and the lower diagram shows an example of this embodiment in which superimposition processing is executed.
  • FIG. 10B is a diagram showing a real object in the foreground and a virtual image displayed by the HUD device that the observer sees when facing forward of the vehicle.
  • FIG. 11 is a block diagram of a vehicle virtual image display system according to some embodiments.
  • FIG. 12A is a flow diagram showing a method of executing visibility reduction processing based on the detection result of the observer's eye position, face position, or face direction.
  • FIG. 12B is a flow diagram following FIG. 12A.
  • FIG. 13 is an image diagram showing the eye position (face position), the amount of change in the eye position (face position), the speed of change in the eye position (face position), and the like detected at each predetermined cycle time.
  • FIG. 14 is a diagram showing an example of the foreground visually recognized by the observer, the AR virtual image when the visibility reduction process is executed, and the AR-related virtual image while the host vehicle is running.
  • FIG. 15 is a diagram illustrating a HUD device in some embodiments in which the eyebox can be moved vertically by rotating the relay optics.
  • FIG. 16 is a flow diagram illustrating a method of executing the visibility increasing process while executing the visibility decreasing process.
  • FIG. 1 is a diagram showing an example of the configuration of a vehicle virtual image display system including a parallax 3D-HUD device.
  • the left-right direction of a vehicle (an example of a moving body) 1 (in other words, the width direction of the vehicle 1) is the X-axis (the positive direction of the X-axis is the left direction when the vehicle 1 is facing forward).
  • the vertical direction (in other words, the height direction of the vehicle 1), along a line segment orthogonal to the left-right direction and orthogonal to the ground or a surface corresponding to the ground (here, the road surface 6), is the Y-axis (the positive direction of the Y-axis is the upward direction), and the front-rear direction along a line segment perpendicular to both the left-right direction and the up-down direction is the Z-axis (the positive direction of the Z-axis is the straight-ahead direction of the vehicle 1).
  • A vehicle display system 10 provided in a vehicle (own vehicle) 1 includes a face detection unit 409 that detects the positions and line-of-sight directions of the pupils (or face) of the left eye 700L and right eye 700R of an observer (typically, the driver sitting in the driver's seat of the vehicle 1), a vehicle exterior sensor 411 configured by a camera (for example, a stereo camera) that images the front (in a broad sense, the surroundings) of the vehicle 1, a head-up display device (hereinafter, HUD device) 20, a display control device 30 that controls the HUD device 20, and the like.
  • FIG. 2 is a diagram showing one aspect of the configuration of the head-up display device.
  • the HUD device 20 is installed, for example, in a dashboard (reference numeral 5 in FIG. 1).
  • The HUD device 20 has a stereoscopic display device (an example of a display device) 40, a relay optical system 80, and a housing 22 that accommodates the display device 40 and the relay optical system 80 and that has a light exit window 21 through which the display light K from the display device 40 can be emitted from the inside toward the outside.
  • the display device 40 is a parallax 3D display device here.
  • This display device (parallax-type 3D display device) 40 is a naked-eye stereoscopic display device using a multi-viewpoint image display method that can control depth representation by having left-viewpoint images and right-viewpoint images visually recognized, and it includes a display 50 and a light source unit 60 functioning as a backlight.
  • The display 50 has a spatial light modulation element 51 that optically modulates the illumination light from the light source unit 60 to generate an image, and an optical layer (an example of a light-beam separating portion) 52, such as a lenticular lens or a parallax barrier, that separates the light emitted from the spatial light modulation element 51 into left-eye display light (K10 in FIG. 1) and right-eye display light (K20 in FIG. 1).
  • The optical layer 52 includes optical filters such as lenticular lenses, parallax barriers, lens arrays, and microlens arrays; however, these are examples and are not limiting.
  • Embodiments of the optical layer 52 are not limited to the above-described optical filters and include any form of optical layer placed on the front or back surface of the spatial light modulation element 51 that separates the light emitted from the spatial light modulation element 51 into left-eye display light (K10 in FIG. 1) and right-eye display light (K20 in FIG. 1).
  • Some embodiments of the optical layer 52, such as a liquid crystal lens, are electrically controlled so that left-eye display light and right-eye display light are separated from the light emitted from the spatial light modulation element 51; that is, embodiments of the optical layer 52 may include those that are electrically controlled and those that are not.
  • The display device 40 may also be configured by configuring the light source unit 60 as a directional backlight unit (an example of the light-beam separating portion), instead of or in addition to the optical layer (an example of the light-beam separating portion) 52, so that left-eye display light such as left-eye light rays K11, K12, and K13 (reference symbol K10 in FIG. 1) and right-eye display light such as right-eye light rays K21, K22, and K23 (reference symbol K20 in FIG. 1) are emitted.
  • Specifically, the display control device 30, which will be described later, causes the spatial light modulation element 51 to display the left-viewpoint image while the directional backlight unit emits illumination light toward the left eye 700L, so that left-eye display light K10 such as light rays K11, K12, and K13 is directed toward the viewer's left eye 700L, and causes the spatial light modulation element 51 to display the right-viewpoint image while the directional backlight unit emits illumination light toward the right eye 700R, so that right-eye display light K20 such as light rays K21, K22, and K23 is directed toward the viewer's right eye 700R.
  • the embodiment of the directional backlight unit described above is an example, and is not limited.
  • The display control device 30, which will be described later, performs, for example, image rendering processing (graphics processing), display device driving processing, and the like, so that the left-eye display light K10 of the left-viewpoint image V10 is directed to the observer's left eye 700L and the right-eye display light K20 of the right-viewpoint image V20 is directed to the right eye 700R, thereby controlling the aspect of the perceptual virtual image FU displayed by the HUD device 20.
  • In some embodiments, the display control device 30 controls the display (display device 50) so as to generate a light field that (approximately) reproduces the light rays output in various directions from points in a certain space.
  • the relay optical system 80 has curved mirrors (concave mirrors, etc.) 81 and 82 that reflect the light from the display device 40 and project the image display light K10 and K20 onto the windshield (projection target member) 2 .
  • it may further include other optical members (including refractive optical members such as lenses, diffractive optical members such as holograms, reflective optical members, or any combination thereof).
  • the display device 40 of the HUD device 20 displays an image (parallax image) with parallax for each of the left and right eyes.
  • Each parallax image is displayed as V10 and V20 formed on a virtual image display surface (virtual image formation surface) VS, as shown in FIG. 4.
  • the focus of each eye of the observer (person) is adjusted so as to match the position of the virtual image display area VS.
  • In the present embodiment, the position of the virtual image display area VS is referred to as the "adjustment position (or imaging position)", and the distance from a predetermined reference position (for example, the center 205 of the eyebox 200 of the HUD device 20, the observer's viewpoint position, or a specific position of the vehicle 1) to the virtual image display area VS (see symbol D10 in FIG. 4) is referred to as the "adjustment distance (imaging distance)".
  • Because the human brain fuses the two images (virtual images), the perceptual image FU (here, an arrowhead figure for navigation) is recognized as being displayed at a position farther back than the adjustment position (a position determined, for example, by the convergence angle between the left-viewpoint image V10 and the right-viewpoint image V20, and perceived as farther from the observer as the convergence angle becomes smaller).
  • the perceptual virtual image FU may be referred to as a "stereoscopic virtual image", and may also be referred to as a "stereoscopic image” when the "image" is taken in a broad sense to include virtual images.
  • the HUD device 20 can display the left-viewpoint image V10 and the right-viewpoint image V20 so that the perceived image FU can be viewed at a position on the front side of the adjustment position.
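  • The link between the set perceived distance and the required separation of the left- and right-viewpoint images follows from simple similar-triangle geometry; the sketch below states that geometry only (the interpupillary distance default is an assumed typical value, and the formula is not quoted from the specification):

        # A point perceived at distance D behind the eyes, rendered on a virtual image plane at
        # distance d (< D), needs its left- and right-viewpoint image points separated by
        # s = E * (D - d) / D, where E is the interpupillary distance.
        def image_separation_m(perceived_distance_m, imaging_distance_m, ipd_m=0.065):
            return ipd_m * (perceived_distance_m - imaging_distance_m) / perceived_distance_m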
  • FIG. 3 is a diagram showing an example of a foreground visually recognized by an observer and a perceptual image superimposed on the foreground and displayed while the vehicle 1 is running.
  • FIG. 4 is a diagram conceptually showing the positional relationship between the left-viewpoint virtual image and the right-viewpoint virtual image displayed on the virtual image plane and the perceptual image perceived by the observer from them.
  • In FIG. 3, the vehicle 1 is traveling on a straight road (road surface) 6.
  • the HUD device 20 is installed inside the dashboard 5 .
  • Display light K (K10, K20) is projected from the light exit window 21 of the HUD device 20 onto the projected portion (front windshield of the vehicle 1) 2.
  • In the example of FIG. 3, a first content image FU1 that is superimposed on the road surface 6 and indicates the route of the vehicle 1 (here, straight ahead), and a second content image FU2 that likewise indicates the route of the vehicle 1 (here, straight ahead) and is perceived farther away than the first content image FU1, are displayed.
  • Specifically, for the first content image FU1, the HUD device 20 (1) emits left-eye display light K10 to the projection target unit 2 at a position and angle such that it is reflected toward the left eye 700L detected by the face detection unit 409, forming a first left-viewpoint content image V11 at a predetermined position in the virtual image display area VS as seen from the left eye 700L, and (2) emits right-eye display light K20 to the projection target unit 2 at a position and angle such that it is reflected toward the right eye 700R, forming a first right-viewpoint content image V21 at a predetermined position in the virtual image display area VS as seen from the right eye 700R.
  • The first content image FU1 perceived from the first left-viewpoint content image V11 and the first right-viewpoint content image V21, which have parallax, is perceived at a position behind the virtual image display area VS by a distance D21 (a position separated from the reference position by a distance D31).
  • Similarly, for the second content image FU2, the HUD device 20 (1) emits left-eye display light K10 to the projection target unit 2 at a position and angle such that it is reflected toward the left eye 700L detected by the face detection unit 409, forming a second left-viewpoint content image V12 at a predetermined position in the virtual image display area VS as seen from the left eye 700L, and (2) emits right-eye display light K20 to the projection target unit 2 at a position and angle such that it is reflected toward the right eye 700R, forming a second right-viewpoint content image V22 at a predetermined position in the virtual image display area VS as seen from the right eye 700R.
  • The second content image FU2 perceived from the second left-viewpoint content image V12 and the second right-viewpoint content image V22, which have parallax, is perceived at a position behind the virtual image display area VS by a distance D22 (a position separated from the reference position by the second perceptual distance D32).
  • In a typical example, the distance from the reference position to the virtual image display area VS (imaging distance D10) is set to, for example, "4 m", the distance from the reference position to the first content image FU1 shown in the left diagram of FIG. 4 (first perceptual distance D31) is set to, for example, "7 m", and the distance from the reference position to the second content image FU2 shown in the right diagram of FIG. 4 (second perceptual distance D32) is set to, for example, "10 m".
  • this is an example and is not limited.
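  • As an arithmetic illustration only, with the example distances above and an assumed interpupillary distance of about 65 mm (a typical value, not one given in the specification), the similar-triangle geometry sketched earlier gives a left/right image separation on the 4 m virtual image plane of roughly 65 mm x (7 - 4) / 7, i.e. about 28 mm, for the first content image FU1, and 65 mm x (10 - 4) / 10 = 39 mm for the second content image FU2.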
  • FIG. 5 is a diagram conceptually showing a virtual object placed at a target position in the real scene and an image displayed in the virtual image display area such that the virtual object is visually recognized at the target position in the real scene.
  • The HUD device 20 shown in FIG. 5 is an example of performing 2D display instead of 3D display; that is, the display device 40 of the HUD device 20 shown in FIG. 5 is a 2D display device rather than a stereoscopic display device (although even a stereoscopic display device can perform 2D display).
  • As shown in FIG. 5, the depth direction is the Z-axis direction, the left-right direction (the width direction of the vehicle 1) is the X-axis direction, and the vertical direction (the vertical direction of the vehicle 1) is the Y-axis direction. The direction away from the viewer is the positive direction of the Z-axis, the leftward direction is the positive direction of the X-axis, and the upward direction is the positive direction of the Y-axis.
  • the viewer 700 perceives the virtual object FU at a predetermined target position PT in the real scene by visually recognizing the virtual image V formed (imaged) in the virtual image display area VS through the projection target section 2. .
  • a viewer visually recognizes the virtual image V of the image of the display light K reflected by the projection target section 2 .
  • the virtual image V is, for example, an arrow indicating a course
  • The arrow of the virtual image V is displayed in the virtual image display area VS so that the virtual object FU is perceived to be placed at the predetermined target position PT in the foreground of the vehicle 1.
  • Specifically, the HUD device 20 uses the center between the observer's left eye 700L and right eye 700R as the origin of a projective transformation, and renders the image to be displayed on the display device 40 such that the virtual object FU of a predetermined size and shape placed at the target position PT is projectively transformed and displayed in the virtual image display area VS as a virtual image V of the corresponding size and shape.
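  • A minimal sketch of that projective transformation, treating the midpoint between the observer's eyes as a pinhole at the origin and the virtual image display area VS as a plane at distance d_vs in front of it (a simplification: the actual rendering also has to account for the relay optics and the curved windshield):

        # Project a 3D point of the virtual object (given in a coordinate system centred between
        # the eyes, with Z pointing forward) onto the virtual image plane at distance d_vs.
        def project_to_virtual_image_plane(point_xyz_m, d_vs_m):
            x, y, z = point_xyz_m
            scale = d_vs_m / z                 # similar triangles through the eye-centre pinhole
            return (x * scale, y * scale)      # position within the virtual image display area VS

        # e.g. a corner of the virtual object FU placed 20 m ahead, 0.5 m to the left and 1 m below
        # eye level, rendered on a virtual image plane 4 m ahead:
        print(project_to_virtual_image_plane((0.5, -1.0, 20.0), 4.0))   # -> (0.1, -0.2)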
  • Further, even when the observer's eye position moves, the HUD device 20 changes the position (and, as necessary, the size) of the virtual image V in the virtual image display area VS so that the virtual object FU is perceived at the same target position PT as before the eye position moved.
  • Thus, even though the virtual object FU (virtual image V) is displayed at a position (the virtual image display area VS) away from the target position PT, it can be recognized as if it were at the target position PT (in other words, the HUD device 20 adds motion parallax to the virtual image (image) by image correction accompanying the movement of the eye position, making depth easier to perceive).
  • Hereinafter, image position correction that expresses motion parallax in accordance with such a change in eye position is referred to as motion parallax addition processing (an example of eye-following image correction processing).
  • the motion parallax adding process is not limited to image position correction that perfectly reproduces natural motion parallax, but may also include image position correction that approximates natural motion parallax.
  • The HUD device 20 (display control device 30) may also execute the motion parallax addition processing (an example of eye-following image correction processing) based on the face position rather than the eye position.
  • FIG. 6 is a diagram for explaining the method of motion parallax addition processing in this embodiment.
  • The display control device 30 (processor 33) of the present embodiment controls the HUD device 20 to display the virtual images V41, V42, and V43 formed (imaged) in the virtual image display area VS via the projection target section 2.
  • The virtual image V41 is set at the target position PT11 at a perceived distance D33 (a position a distance D23 behind the virtual image display area VS); the virtual image V42 is set at the target position PT12 at a perceived distance D34 longer than D33 (a position a distance D24 (> D23) behind the virtual image display area VS); and the virtual image V43 is set at the target position PT13 at a perceived distance D35 longer than the perceived distance D34 of the virtual image V42 (a position a distance D25 (> D24) behind the virtual image display area VS).
  • Note that since the amount of correction of the image on the display device 40 corresponds to the amount of correction of the virtual image in the virtual image display area VS, the same reference numerals C1, C2, and C3 are used in FIG. 6 for the correction amounts of the virtual images as well (the same applies to the reference numerals Cy11 (Cy) and Cy21 (Cy) in FIG. 8).
  • FIG. 7A is a comparative example showing the virtual images (V901, V902, and so on) viewed from the position Px12 shown in FIG. 6 when the motion parallax addition processing of this embodiment is not performed.
  • FIG. 7B is a diagram showing the virtual images V44, V45, and V46 viewed from the position Px12 shown in FIG. 6 when the motion parallax addition processing of this embodiment is performed. Note that in FIG. 7B, the differences in the positions of the virtual images V44, V45, and V46 are exaggerated so that the differences in correction amount can be easily understood. That is, the display control device 30 (processor 33) corrects the positions of the plurality of virtual images V41, V42, and V43 according to the movement of the eye position by amounts that differ according to the differences in their perceived distances D33, D34, and D35.
  • The observer can thereby perceive motion parallax among the virtual images V41 (V44), V42 (V45), and V43 (V46). More specifically, the display control device 30 (processor 33) increases the correction amount in the motion parallax addition processing as the set perceived distance D30 becomes longer, so that motion parallax is added among the plurality of virtual images V41 (V44), V42 (V45), and V43 (V46).
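  • The statement that the correction amount grows with the set perceived distance is consistent with the same pinhole geometry: to keep a point perceived at distance D apparently fixed while the eye moves, the image on the virtual image plane at distance d must move by the eye displacement scaled by (D - d) / D, in the same direction. The helper below is only a sketch of that relationship; the specification does not spell out the exact form of the correction:

        # Correction amount for an eye movement delta_eye, virtual image plane at d_vs,
        # and a virtual image whose perceived distance is d_perceived (> d_vs).
        def motion_parallax_correction(delta_eye_mm, d_vs_m, d_perceived_m):
            return delta_eye_mm * (d_perceived_m - d_vs_m) / d_perceived_m

        # With the plane at 4 m, a 10 mm eye movement yields a larger correction for images
        # perceived farther away (cf. C1 < C2 < C3 in FIG. 6):
        for d in (7.0, 10.0, 30.0):
            print(d, round(motion_parallax_correction(10.0, 4.0, d), 2))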
  • Embodiments of the eye-following image correction processing are not limited to the motion parallax addition processing described above and may include the superimposition processing described below. That is, the HUD device 20 (display control device 30) may execute superimposition processing (an example of eye-following image correction processing) in accordance with a change in the observer's eye position 700 (a change in face position).
  • FIG. 8 is a diagram for explaining a method of motion parallax addition processing when the eye position (face position) moves in the vertical direction in this embodiment.
  • As shown in FIG. 8, when the observer's eye position (face position) moves upward (Y-axis positive direction), the display control device 30 executes the motion parallax addition processing and corrects the position at which the virtual image V is displayed in the virtual image display area VS in the same direction (upward, Y-axis positive direction) by a correction amount Cy11 (the position of the virtual image V is changed from that of reference V48 to that of reference V47).
  • Conversely, when the eye position (face position) moves downward (Y-axis negative direction), the display control device 30 executes the motion parallax addition processing and moves the position at which the virtual image V is displayed in the virtual image display area VS in the same direction (downward, Y-axis negative direction) by a correction amount Cy21 (the position of the virtual image V is changed from V48 to V49).
  • As a result, even though the virtual object FU (virtual image V) is displayed at a position (the virtual image display area VS) distant from the target position PT, it can be recognized as if it were at the target position PT (the feeling that the virtual object FU (virtual image V) is at the target position PT can be enhanced).
  • FIG. 9 is a diagram showing a real object 300 existing in the foreground and a virtual image V displayed by the HUD device 20 of the present embodiment, which is visually recognized when an observer faces forward from the driver's seat of the vehicle 1.
  • The virtual images V include an AR virtual image V60, whose display position corresponds to the position of a real object 300 existing in the real scene, and a non-AR virtual image V70, whose displayed position, direction, and shape are set regardless of the position, direction, and shape of the real object 300.
  • the AR virtual image V60 is displayed at a position (target position PT) corresponding to the position of the real object 300 existing in the real scene.
  • the AR virtual image V60 is displayed, for example, at a position superimposed on the real object 300 or in the vicinity of the real object 300, and notifies the existence of the real object 300 with emphasis.
  • The "position corresponding to the position of the real object 300 (target position PT)" is not limited to a position that the observer sees superimposed on the real object 300; it may also be a position in the vicinity of the real object 300. It is preferable that the AR virtual image V60 does not interfere with the visual recognition of the real object 300, but this is not limiting.
  • The AR virtual images V60 shown in FIG. 9 include navigation virtual images V61 and V62 that indicate a guidance route, enhanced virtual images V63 and V64 that highlight and give notice of attention targets, and a POI virtual image V65 that indicates a target, a predetermined building, or the like.
  • The position (target position PT) corresponding to the position of the real object 300 is, for the navigation virtual images V61 and V62, the position of the road surface 311 (an example of the real object 300) on which they are superimposed; for the enhanced virtual image V64, the position near the other vehicle 314 (an example of the real object 300); and for the POI virtual image V65, the position of the building 315 (an example of the real object 300).
  • In the motion parallax addition processing, the display control device 30 increases the correction amount C associated with the movement of the observer's eye position as the perceived distance D30 set for the virtual image V increases. That is, for the virtual images V shown in FIG. 9, whose set perceived distances D30 increase in the order V61, V62, V63, V64, V65, the correction amounts C associated with the movement of the eye position are set as follows: correction amount of V65 > correction amount of V64 > correction amount of V63 > correction amount of V62 > correction amount of V61. Note that since the virtual image V62 and the virtual image V61 are of the same kind and are displayed close to each other, the same correction amount may be set for them.
  • For the non-AR virtual image V70, the display control device 30 may set the correction amount C associated with the movement of the observer's eye position to zero (that is, the non-AR virtual image V70 need not be corrected according to the movement of the eye position).
  • However, the display control device 30 may also correct the non-AR virtual image V70 according to the movement of the observer's eye position.
  • The non-AR virtual images V70 (V71, V72) are arranged in a lower part of the virtual image display area VS, and the area of the road surface 311 (the real object 300) that overlaps them is closer to the vehicle 1 than the area of the road surface 311 that overlaps the navigation virtual image V61 of FIG. 9.
  • Accordingly, the display control device 30 (processor 33) of some embodiments sets the perceived distance D30 of the non-AR virtual images V70 (V71, V72) shorter than the perceived distance D30 of the AR virtual image V60 (in a narrow sense, of the navigation virtual image V61 arranged at the lowest position among the AR virtual images V60), and may set their correction amount C smaller than the correction amount C of the AR virtual image V60 (in a narrow sense, smaller than the correction amount C of the navigation virtual image V61 positioned lowest among the AR virtual images V60).
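  • Applying the same idea across the virtual images of FIG. 9, per-image correction amounts might be assigned as sketched below; the perceived distances listed and the zero correction for the non-AR images are illustrative assumptions consistent with the ordering described above:

        # Hypothetical per-virtual-image correction for a 10 mm eye movement (plane at 4 m).
        D_VS_M = 4.0
        virtual_images = {   # name: (assumed perceived distance [m], is it an AR virtual image?)
            "V71": (5.0, False), "V72": (5.0, False),
            "V61": (7.0, True), "V62": (7.0, True),
            "V63": (15.0, True), "V64": (25.0, True), "V65": (40.0, True),
        }
        for name, (d_perceived, is_ar) in virtual_images.items():
            c = 10.0 * (d_perceived - D_VS_M) / d_perceived if is_ar else 0.0
            print(name, round(c, 2))   # non-AR images get zero; farther AR images get more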
  • FIGS. 10A and 10B are diagrams showing a real object in the foreground and a virtual image displayed by the HUD device as visually recognized when the observer faces the front of the vehicle; in each figure, the upper diagram is a comparative example in which superimposition processing is not performed, and the lower diagram shows an example of this embodiment in which superimposition processing is performed.
  • When the observer's eye position 700 moves rightward (negative direction of the X-axis), the virtual image V911 of the comparative example, to which superimposition processing is not applied, is seen shifted leftward with respect to the forward vehicle 301 (real object 300), as shown in the upper diagram of FIG. 10A.
  • On the other hand, the display control device 30 of the present embodiment executes the superimposition processing based on the change in the observer's eye position 700 (the movement to the right), so that the superimposition-processed AR virtual image V66 of the present embodiment is visually recognized while maintaining its positional relationship with the forward vehicle 301 (real object 300), as shown in the lower diagram of FIG. 10A.
  • That is, based on the observer's eye position 700 (or face position) detected by the face detection unit 409 and on the position, direction, and shape of the real object 300 detected by the vehicle exterior sensor 411, the display control device 30 of this embodiment changes the position of the virtual image V displayed in the virtual image display area VS so that the virtual image V and the real object 300 maintain the specific positional relationship stored in the memory 37.
  • the "specific positional relationship" is, for example, a position overlapping the real object 300, a vicinity of the real object 300, or a position set with the real object 300 as a reference.
  • Similarly, the display control device 30 of the present embodiment executes the superimposition processing based on the change in the observer's eye position 700 (the movement downward), so that the superimposition-processed AR virtual image V67 of the present embodiment is visually recognized while maintaining its positional relationship with the real object, as shown in the lower diagram of FIG. 10B.
  • That is, based on the observer's eye position 700 (or face position) detected by the face detection unit 409 and on the position, direction, and shape of the driving lane 302 (real object 300) detected by the vehicle exterior sensor 411 (road information database 403), the display control device 30 of the present embodiment changes the position, direction, and shape of the virtual image V displayed in the virtual image display area VS so as to match the position, direction, and shape of the real object 300. In other words, the AR virtual image V60 (V51, V52) on which superimposition processing is performed is adjusted to match the position, direction, and shape of the real object 300 in the foreground (real world) as viewed from the observer's eye position 700 (or face position); in such a case it is also called a Maisanalog image.
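  • The superimposition processing itself can be sketched as re-intersecting, every frame, the line from the newly detected eye position to the detected real object with the virtual image plane, and drawing the AR virtual image at that intersection. The coordinates, distances, and the planar approximation of the virtual image display area VS below are all illustrative assumptions:

        # Keep the AR virtual image overlapping a real object as the eye position moves.
        # eye and obj are (x, y, z) points in a vehicle coordinate system with z pointing forward;
        # the virtual image display area VS is approximated as the plane z = z_vs.
        def superimposed_position(eye, obj, z_vs):
            ex, ey, ez = eye
            ox, oy, oz = obj
            t = (z_vs - ez) / (oz - ez)          # where the eye-to-object line meets the VS plane
            return (ex + t * (ox - ex), ey + t * (oy - ey))

        # The forward vehicle 301 is assumed 30 m ahead and the VS plane 4 m ahead; when the eye
        # moves 30 mm to the right (negative X in this document's convention), the image position
        # on VS shifts so that the virtual image is still seen over the real object:
        print(superimposed_position((0.00, 1.2, 0.0), (0.0, 1.0, 30.0), 4.0))
        print(superimposed_position((-0.03, 1.2, 0.0), (0.0, 1.0, 30.0), 4.0))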
  • FIG. 11 is a block diagram of a vehicle virtual image display system according to some embodiments.
  • the display controller 30 comprises one or more I/O interfaces 31 , one or more processors 33 , one or more image processing circuits 35 and one or more memories 37 .
  • Various functional blocks illustrated in FIG. 11 may be implemented in hardware, software, or a combination of both.
  • FIG. 11 is only one embodiment and the illustrated components may be combined into fewer components or there may be additional components.
  • The image processing circuitry 35 (for example, a graphics processing unit) may be included in the one or more processors 33.
  • The processor 33 and the image processing circuit 35 are operatively coupled with the memory 37. More specifically, by executing a computer program stored in the memory 37, the processor 33 and the image processing circuit 35 can, for example, generate and/or transmit image data to control the vehicle display system 10 (display device 40).
  • Processor 33 and/or image processing circuitry 35 may include at least one general purpose microprocessor (e.g., central processing unit (CPU)), at least one application specific integrated circuit (ASIC), at least one field programmable gate array (FPGA). , or any combination thereof.
  • Memory 37 includes any type of magnetic media such as hard disks, any type of optical media such as CDs and DVDs, any type of semiconductor memory such as volatile memory, and non-volatile memory. Volatile memory may include DRAM and SRAM, and non-volatile memory may include ROM and NVRAM.
  • processor 33 is operatively coupled with I/O interface 31 .
  • The I/O interface 31 communicates with other electronic devices provided in the vehicle 1 via, for example, a CAN bus (CAN communication).
  • The communication standard adopted by the I/O interface 31 is not limited to CAN; it includes an in-vehicle communication (internal communication) interface such as a wired communication interface (for example, MOST (MOST is a registered trademark), UART, or USB) or a short-range wireless communication interface with a range of several tens of meters, such as a personal area network (PAN) like a Bluetooth network or a local area network (LAN) like an 802.11x Wi-Fi network.
  • The I/O interface 31 may also include an external communication interface to a wide-area communication network (for example, the Internet) conforming to a cellular communication standard such as a wireless wide area network (WWAN), IEEE 802.16-2004 (WiMAX: Worldwide Interoperability for Microwave Access), IEEE 802.16e-based Mobile WiMAX, 4G, 4G-LTE, LTE Advanced, or 5G.
  • The processor 33 is operatively coupled with the I/O interface 31, so that information can be exchanged with various other electronic devices and the like connected to the vehicle display system 10 (I/O interface 31).
  • To the I/O interface 31, for example, a vehicle ECU 401, a road information database 403, a vehicle position detection unit 405, an operation detection unit 407, a face detection unit 409, a vehicle exterior sensor 411, a brightness detection unit 413, an IMU 415, a mobile information terminal 417, an external communication device 419, and the like are operatively coupled.
  • the I/O interface 31 may include a function of processing (converting, calculating, and analyzing) information received from other electronic devices connected to the vehicle display system 10 .
  • the display device 40 is operatively connected to the processor 33 and the image processing circuitry 35 . Accordingly, the image displayed by spatial light modulating element 51 may be based on image data received from processor 33 and/or image processing circuitry 35 .
  • the processor 33 and image processing circuit 35 control the image displayed by the spatial light modulator 51 based on the information obtained from the I/O interface 31 .
  • The vehicle ECU 401 acquires, from sensors and switches provided in the vehicle 1, the state of the vehicle 1 (for example, the ON/OFF state of a start switch such as an accessory switch (ACC) or an ignition switch (IGN) (an example of start information), travel distance, vehicle speed, accelerator pedal opening, brake pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, various warning states, posture (including roll angle and/or pitching angle), and vehicle vibration (including the magnitude and/or frequency of the vibration)), collects and manages (and may also control) the state of the vehicle 1, and, as part of its functions, can output a signal indicating a numerical value of the state of the vehicle 1 (for example, the vehicle speed of the vehicle 1) to the processor 33 of the display control device 30.
  • Note that the vehicle ECU 401 may simply transmit numerical values detected by sensors or the like (for example, a pitching angle of 3 [degrees] in the forward-tilting direction) to the processor 33, or it may instead (or additionally) transmit to the processor 33 determination results based on one or more states of the vehicle 1 including those numerical values (for example, that the vehicle 1 satisfies a predetermined forward-lean condition) and/or analysis results (for example, that the vehicle is leaning forward due to braking, combined with the brake pedal opening information).
  • the vehicle ECU 401 may output to the display control device 30 a signal indicating a determination result that the vehicle 1 satisfies a predetermined determination condition stored in advance in a memory (not shown) of the vehicle ECU 401 .
  • the I/O interface 31 may acquire the above-described information from sensors and switches provided in the vehicle 1 without using the vehicle ECU 401 .
  • the vehicle ECU 401 may output to the display control device 30 an instruction signal that instructs an image to be displayed by the vehicle display system 10.
  • Necessity-related information that serves as a basis for determining the degree and/or the necessity of notification may be added to the instruction signal and transmitted.
  • the road information database 403 is included in a navigation device (not shown) provided in the vehicle 1 or an external server connected to the vehicle 1 via an external communication interface (I/O interface 31). Based on the position of the vehicle 1 acquired from the unit 405, road information (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, forks, traffic regulations, etc.), presence/absence, position (including distance to vehicle 1), direction, shape, type of feature information (buildings, bridges, rivers, etc.) , detailed information, etc. may be read and sent to the processor 33 . The road information database 403 may also calculate an appropriate route (navigation information) from the departure point to the destination, and output to the processor 33 a signal indicating the navigation information or image data indicating the route.
  • the vehicle position detection unit 405 consists of a GNSS (global navigation satellite system) or the like provided in the vehicle 1; it detects the current position and direction of the vehicle 1 and transmits a signal indicating the detection result to the processor 33, and/or, via the processor 33 or directly, to the road information database 403, a portable information terminal 417 (described later), and/or an external communication device 419.
  • the road information database 403, the portable information terminal 417 (described later), and/or the external communication device 419 may acquire the position information of the vehicle 1 from the vehicle position detection unit 405 continuously, intermittently, or at each predetermined event, select or generate information about the surroundings of the vehicle 1, and output it to the processor 33.
  • the operation detection unit 407 is, for example, a CID (Center Information Display) of the vehicle 1, a hardware switch provided on an instrument panel or the like, or a software switch combining an image and a touch sensor, and outputs to the processor 33 operation information based on operations by an occupant (the user sitting in the driver's seat and/or the user sitting in the front passenger seat). For example, the operation detection unit 407 outputs to the processor 33 display area setting information based on an operation for moving the virtual image display area VS, eyebox setting information based on an operation for moving the eyebox 200, and information based on an operation for setting the observer's eye position 700, each performed by the user.
  • the face detection unit 409 includes a camera such as an infrared camera that detects the eye position 700 (see FIG. 1) of the observer sitting in the driver's seat of the vehicle 1, and may output the captured image to the processor 33. .
  • the processor 33 may acquire a captured image (an example of information from which the eye position 700 can be estimated) from the face detection unit 409 and detect the observer's eye position 700 by analyzing the captured image with a technique such as pattern matching.
  • alternatively, the face detection unit 409 may analyze the captured image of the camera and output to the processor 33 a signal indicating the analysis result (for example, a signal indicating to which of a plurality of spatial regions corresponding to preset display parameters the observer's eye position 700 belongs).
  • the method of acquiring the eye position 700 of the observer of the vehicle 1, or information from which the eye position 700 can be estimated, is not limited to these; the eye position may be obtained using a known eye position detection (estimation) technique.
  • the face detection unit 409 may detect the change speed and/or movement direction of the observer's eye position 700 and output a signal indicating the change speed and/or movement direction of the observer's eye position 700 to the processor 33.
  • the face detection unit 409 may also determine that a predetermined determination condition is satisfied and output a signal indicating that state to the processor 33 when (11) the newly detected eye position 700 has moved, relative to the previously detected eye position 700, by at least the eye position movement distance threshold stored in advance in the memory 37, (12) the change speed of the eye position is equal to or greater than the eye position change speed threshold stored in advance in the memory 37, or (13) the observer's eye position 700 can no longer be detected after movement of the observer's eye position 700 was detected. A non-limiting sketch of these determinations is given below.
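  • As a non-limiting illustration of determinations (11) to (13) above, a minimal sketch in Python follows; the threshold values and function names are hypothetical and are not part of this specification.

```python
# Illustrative sketch of determinations (11)-(13); the thresholds stand in for the
# eye position movement distance threshold and eye position change speed threshold
# stored in the memory 37. All names and values are hypothetical.
MOVE_DIST_THRESHOLD = 30.0   # [mm], hypothetical value
SPEED_THRESHOLD = 200.0      # [mm/s], hypothetical value

def determination_condition_satisfied(prev_eye_pos, new_eye_pos, dt):
    """Return True when any of conditions (11)-(13) holds.

    prev_eye_pos / new_eye_pos are (x, y) tuples in mm, or None when the
    eye position could not be detected; dt is the time between samples in seconds.
    """
    # (13) eye position lost after it had previously been detected (i.e. after movement)
    if new_eye_pos is None and prev_eye_pos is not None:
        return True
    if prev_eye_pos is None or new_eye_pos is None:
        return False
    dx = new_eye_pos[0] - prev_eye_pos[0]
    dy = new_eye_pos[1] - prev_eye_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    # (11) movement distance threshold exceeded
    if distance >= MOVE_DIST_THRESHOLD:
        return True
    # (12) change speed threshold exceeded
    if dt > 0 and distance / dt >= SPEED_THRESHOLD:
        return True
    return False
```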
  • the face detection unit 409 may have a function as a line-of-sight direction detection unit.
  • the line-of-sight direction detection unit may include an infrared camera or a visible light camera that captures the face of an observer sitting in the driver's seat of the vehicle 1 , and may output the captured image to the processor 33 .
  • the processor 33 may acquire a captured image (an example of information from which the line-of-sight direction can be estimated) from the line-of-sight direction detection unit and identify the observer's line-of-sight direction (and/or gaze position) by analyzing the captured image.
  • the line-of-sight direction detection unit may analyze the captured image from the camera and output to the processor 33 a signal indicating the line-of-sight direction (and/or the gaze position) of the observer, which is the analysis result.
  • the method of acquiring information from which the line-of-sight direction of the observer of the vehicle 1 can be estimated is not limited to these; the line-of-sight direction may be obtained using other known gaze direction detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflection method, the scleral reflection method, the Purkinje image detection method, the search coil method, or the infrared fundus camera method.
  • the vehicle exterior sensor 411 detects real objects existing around the vehicle 1 (front, side, and rear).
  • Real objects detected by the sensor 411 outside the vehicle include, for example, obstacles (pedestrians, bicycles, motorcycles, other vehicles, etc.), road surfaces of driving lanes, lane markings, roadside objects, and/or features (buildings, etc.), which will be described later. and so on.
  • the vehicle exterior sensor 411 includes, for example, a detection unit composed of a radar sensor such as a millimeter-wave radar, an ultrasonic radar, or a laser radar, a camera, or any combination thereof, and a processing device that processes (fuses) the detection data from the one or more detection units. Conventional, well-known techniques are applied to object detection by these radar sensors and camera sensors.
  • by object detection using these sensors, the position of a real object (the relative distance from the vehicle 1, its horizontal (left-right) position and vertical position with respect to the traveling direction (front-back direction) of the vehicle 1, etc.), its size (in the horizontal (left-right) direction, the height (vertical) direction, etc.), its movement direction (horizontal (left-right) direction, depth (front-back) direction), its change speed (horizontal (left-right) direction, depth (front-back) direction), and/or the type of the real object may be detected.
  • the one or more vehicle exterior sensors 411 detect real objects in front of the vehicle 1 in each detection cycle of each sensor and can output real object information (an example of real-object-related information: the presence or absence of a real object and, if a real object exists, information such as the position, size, and/or type of each real object) to the processor 33.
  • the real object information may be transmitted to the processor 33 via another device (for example, the vehicle ECU 401).
  • a camera an infrared camera or a near-infrared camera is desirable so that a real object can be detected even when the surroundings are dark, such as at night.
  • a stereo camera that can acquire distance and the like by parallax is desirable.
  • the brightness detection unit 413 detects the illuminance or luminance of a predetermined range of the foreground in front of the passenger compartment of the vehicle 1 as the outside brightness (an example of brightness information), and/or detects the illuminance or luminance inside the passenger compartment as the vehicle interior brightness (an example of brightness information).
  • the brightness detection unit 413 is, for example, a phototransistor or a photodiode, and is mounted on the instrument panel, rearview mirror, HUD device 20, or the like of the vehicle 1 shown in FIG.
  • the IMU 415 can include a combination of one or more sensors (for example, accelerometers and gyroscopes) configured to sense the position and orientation of the vehicle 1, and changes thereof (change speed, change acceleration), based on inertial acceleration. The IMU 415 outputs the detected values (signals indicating the position and orientation of the vehicle 1 and changes thereof (change speed, change acceleration)) and the results of analyzing the detected values to the processor 33.
  • the analysis result is, for example, a signal indicating whether or not the detected value satisfies a predetermined determination condition, such as a signal indicating that the behavior (vibration) of the vehicle 1 is small.
  • the mobile information terminal 417 is a smart phone, a laptop computer, a smart watch, or other information equipment that can be carried by the observer (or other occupants of the vehicle 1).
  • the I/O interface 31 can communicate with the mobile information terminal 417 and acquires data recorded in the mobile information terminal 417 (or in a server accessed through the mobile information terminal).
  • the mobile information terminal 417 has, for example, the same functions as the road information database 403 and the own vehicle position detection unit 405 described above, acquires the road information (an example of real object related information), and transmits it to the processor 33.
  • the mobile information terminal 417 may also acquire commercial information (an example of real object related information) related to commercial facilities near the vehicle 1 and transmit it to the processor 33 .
  • the mobile information terminal 417 may transmit schedule information of the owner of the mobile information terminal 417 (for example, the observer), incoming call information at the mobile information terminal 417, mail reception information, and the like to the processor 33, and the processor 33 and the image processing circuit 35 may generate and/or transmit image data related to these.
  • the external communication device 419 is a communication device that exchanges information with the vehicle 1, and is, for example, another vehicle connected to the vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle-to-Vehicle), a pedestrian (a mobile information terminal carried by a pedestrian) connected by pedestrian-to-vehicle communication (V2P: Vehicle-to-Pedestrian), or a network communication device connected by road-to-vehicle communication (V2I: Vehicle-to-roadside-Infrastructure); in a broad sense, it includes everything connected by communication with the vehicle 1 (V2X: Vehicle-to-Everything).
  • the external communication device 419 acquires, for example, the positions of pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles, etc.), road surfaces, lane markings, roadside objects, and/or features (buildings, etc.), and sends them to the processor 33. can be output.
  • the external communication device 419 has the same function as the vehicle position detection unit 405 described above and may acquire the position information of the vehicle 1 and transmit it to the processor 33; it may further have the function of the road information database 403 described above and acquire the road information (an example of real-object-related information) and transmit it to the processor 33.
  • Information acquired from the external communication device 419 is not limited to the above.
  • the software components stored in the memory 37 include an eye position detection module 502, an eye position estimation module 504, an eye position prediction module 506, a face detection module 508, a determination module 510, a vehicle state determination module 512, a visibility control module 514, an eye-following image processing module 516, a graphics module 518, a light source driving module 520, an actuator driving module 522, and the like.
  • the display control device 30 acquires information indicating the observer's eye position 700, face position (not shown), or face orientation (not shown) (step S110).
  • in step S110 in some embodiments, the display control device 30 (processor 33) executes the eye position detection module 502 of FIG. 11 to acquire eye position information indicating the observer's eye position 700.
  • the eye position detection module 502 may detect coordinates indicating the observer's eye position 700 in the left-right and up-down directions (positions in the X- and Y-axis directions, an example of eye position information), a coordinate indicating the height of the observer's eyes (a position in the Y-axis direction, an example of eye position information), coordinates indicating the height and depth of the observer's eyes (positions in the Y- and Z-axis directions), or coordinates indicating the observer's eye position 700 in three dimensions (positions in the X-, Y-, and Z-axis directions, an example of eye position information).
  • the eye position 700 detected by the eye position detection module 502 may be the positions 700R and 700L of the right and left eyes, a predetermined one of the right eye position 700R and the left eye position 700L, whichever of the right eye position 700R and the left eye position 700L is detectable (easily detectable), or a position calculated from the right eye position 700R and the left eye position 700L (for example, the midpoint between the right eye position and the left eye position). For example, the eye position detection module 502 determines the eye position 700 based on the observation position acquired from the face detection unit 409 immediately before the timing at which the display settings are updated.
  • based on a plurality of observation positions of the observer's eyes with different detection timings obtained from the face detection unit 409, the movement direction and/or change speed of the observer's eye position 700 may be detected, and a signal indicating the movement direction and/or change speed of the observer's eye position 700 may be output to the processor 33.
  • the eye position estimation module 504 includes various software components for performing various operations related to estimating the observer's eye position 700 from information from which the eye position can be estimated, such as the captured image acquired from the face detection unit 409, the position of the driver's seat of the vehicle 1, the position of the observer's face, the sitting height, or the observation positions of the eyes of a plurality of observers. That is, the eye position estimation module 504 can include table data, arithmetic expressions, and the like for estimating the observer's eye position 700 from such information.
  • the display control device 30 may acquire information that can predict the eye positions 700 of the observer by executing the eye position prediction module 506 .
  • Information that can predict the observer's eye position 700 is, for example, the latest observation position obtained from the face detection unit 409, or one or more observation positions obtained in the past.
  • Eye position prediction module 506 includes various software components for performing various operations related to predicting eye positions 700 based on information capable of predicting eye positions 700 of an observer. Specifically, for example, the eye position prediction module 506 predicts the eye position 700 of the observer at the timing when the observer visually recognizes the image to which the new display settings are applied.
  • the eye position prediction module 506 may predict the next value from one or more past observation positions using, for example, the least squares method or a prediction algorithm such as a Kalman filter, an α-β filter, or a particle filter.
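  • The following is a minimal illustrative sketch (in Python) of such a predictor, here an α-β filter; the Kalman filter, least squares, or particle filter variants mentioned above would be organized similarly. The gain values, time step, and class name are assumptions, not part of this specification.

```python
class AlphaBetaPredictor:
    """Illustrative alpha-beta filter predicting the next eye position sample.

    Gains alpha/beta and the sample interval dt are hypothetical tuning values;
    x is one coordinate of the eye position 700 (run one filter per axis).
    """
    def __init__(self, alpha=0.85, beta=0.005, dt=1.0 / 60.0):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x, self.v = None, 0.0  # estimated position and velocity

    def update(self, measured_x):
        """Fold one observation position into the estimate and return it."""
        if self.x is None:            # first observation initialises the state
            self.x = measured_x
            return self.x
        predicted = self.x + self.v * self.dt
        residual = measured_x - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x

    def predict(self, lead_time):
        """Predict the position lead_time seconds ahead, e.g. at the timing when the
        observer visually recognizes the image to which new display settings apply."""
        if self.x is None:
            return None
        return self.x + self.v * lead_time
```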
  • the display control device 30 may acquire face position information indicating the face position and face orientation information indicating the face orientation by executing the face detection module 508.
  • the face detection module 508 acquires face area detection data (an example of face position information, an example of face orientation information) from the face detection unit 409, and detects facial feature points from the acquired face area detection data. Then, from the arrangement pattern of the detected feature points, face position information indicating the face position of the observer and face direction information indicating the face direction are detected.
  • the face detection module 508 may acquire detection data of facial feature points (an example of face position information, an example of face orientation information) detected by the feature point detection unit 126 and detect face position information and face orientation information using the acquired feature point detection data. Alternatively, the face detection module 508 may simply acquire the face position information and face orientation information detected by the face detection unit 409. The face orientation detection processing is based, for example, on a method of calculating a face orientation angle from the positional relationship of a plurality of face parts (for example, the eyes, nose, and mouth) or on a method using the results of machine learning (however, the face orientation detection processing is not limited to these).
  • the face position and face orientation are calculated as positions in three axis directions and angles about each axis: the coordinate on the X-axis along the left-right direction and the pitch angle indicating rotation about the X-axis, the coordinate on the Y-axis along the up-down direction and the yaw angle indicating rotation about the Y-axis, and the coordinate on the Z-axis along the depth direction and the roll angle indicating rotation about the Z-axis.
  • Step S120 Next, the display control device 30 (processor 33) determines whether a predetermined determination condition is satisfied by executing the determination module 510 (step S120).
  • (Step S130) In step S120 in some embodiments, the display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and, based on the information acquired in step S110, determines whether the eye position 700, face position, or face orientation satisfies a predetermined condition.
  • in the following, processing using the eye position 700 and the face position will mainly be described. The processing using the face orientation differs only in that the eye position 700 and the face position are expressed in a positional coordinate system while the face orientation is expressed in an angular coordinate system, and in that it uses the amount of change and change speed of the face orientation; its description is therefore omitted.
  • (Step S131) The display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and determines that the predetermined determination condition is satisfied if the change speed Vx (Vy) of the eye position 700 or the face position (or face orientation) is fast.
  • for example, the determination module 510 can compare the change speed Vx (Vy) of the eye position 700 or the face position (or face orientation) with a predetermined first threshold (not shown) stored in advance in the memory 37, and determine that the predetermined determination condition is satisfied if the change speed Vx (Vy) of the eye position 700 or the face position (or face orientation) is faster than the predetermined first threshold (however, the method of determining the change speed is not limited to this).
  • (Step S132) The display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and determines that the predetermined determination condition is satisfied if the eye position 700 or the face position (or face orientation) is within a preset first range (not shown).
  • for example, the determination module 510 may compare the eye position 700 or face position (or face orientation) Px (Py) with a predetermined first range (not shown) stored in advance in the memory 37 and determine that the predetermined determination condition is satisfied if the eye position 700 or face position (or face orientation) Px (Py) is within the first range (however, the method of determining the position coordinates or angle coordinates of the eye position 700 or the face position (or face orientation) is not limited to this).
  • the first range can be set as a range separated from a predetermined reference position (not shown) by predetermined coordinates. That is, the first range is set to any of a first left range shifted from the center 205 of the eyebox 200 (an example of the predetermined reference position) by a predetermined X coordinate in the left direction (X-axis negative direction), a first right range shifted by a predetermined X coordinate in the right direction (X-axis positive direction), a first upper range shifted by a predetermined Y coordinate in the upward direction (Y-axis positive direction), a first lower range shifted by a predetermined Y coordinate in the downward direction (Y-axis negative direction), or any combination thereof.
  • the first range can be set to the outer edge away from the center 205 of the eyebox 200 or the outside of the eyebox 200 .
  • for example, the determination module 510 may calculate the difference between the eye position 700 or face position (or face orientation) Px (Py) and a predetermined reference position (not shown) stored in advance in the memory 37, and, if that difference is longer than a predetermined second threshold stored in advance in the memory 37, determine that the eye position 700 or face position (or face orientation) Px (Py) is within a first range separated from the predetermined reference position by the second threshold or more and that the predetermined determination condition is satisfied.
  • the reference position can be set at the center 205 of the eyebox 200 .
  • the determination module 510 determines that the predetermined determination condition is satisfied if the eye position 700 or face position (or face orientation) Px (Py) is away from the center 205 of the eyebox 200 .
  • the first range can be changed as the eyebox 200 moves.
  • for example, when the display control device 30 moves the eyebox 200 by controlling the first actuator 28 (and/or the second actuator 29), the first range may be changed based on the control value of the first actuator 28 (and/or the second actuator 29). A non-limiting sketch of the step S132 determination is shown below.
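  • Purely as an illustration of the determination of step S132, the following sketch tests whether the position lies in a first range offset from the eyebox center 205; the offset values and names are assumptions, and the center coordinates would be updated when the eyebox 200 is moved by the actuators.

```python
# Illustrative sketch of step S132: decide whether the eye position (or face
# position) Px, Py lies in a "first range" offset from the eyebox center 205.
# FIRST_RANGE_OFFSET_X/Y are hypothetical values stored in the memory 37.
FIRST_RANGE_OFFSET_X = 40.0  # [mm]
FIRST_RANGE_OFFSET_Y = 25.0  # [mm]

def in_first_range(px, py, eyebox_center_x, eyebox_center_y):
    """Return True when the position is at least the predetermined offset away
    from the eyebox center (i.e. near the outer edge of the eyebox 200 or beyond)."""
    return (abs(px - eyebox_center_x) >= FIRST_RANGE_OFFSET_X or
            abs(py - eyebox_center_y) >= FIRST_RANGE_OFFSET_Y)
```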
  • (Step S133) The display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and may determine that the predetermined determination condition is satisfied if the eye position 700 or the face position (or face orientation) is detected within a second range (not shown) that is changed according to the eye position 700 or the face position (or face orientation).
  • the eye position estimation module 504 in FIG. 11 sequentially updates the second range based on the fact that the eye position 700 or face position (or face orientation) Px (Py) is in a stable state.
  • the second range can be set to a range separated by predetermined coordinates from a reference position that is changed according to the eye position 700 or the face position (or face orientation). For example, if the eye position 700 or the face position (or face orientation) Px (Py) remains at roughly the same position for one second or longer, the eye position estimation module 504 determines that the current state is stable, registers that eye position 700 or face position (or face orientation) Px (Py) in the memory 37 as the reference position, and sets a range separated from this reference position by predetermined coordinates as the second range.
  • alternatively, when the eye position estimation module 504 determines that the stable state exists because the eye position 700 or the face position (or face orientation) Px (Py) has remained at substantially the same position for one second or longer, an average value of a plurality of eye positions 700 or face positions (or face orientations) Px (Py) acquired in the past may be registered in the memory 37 as the reference position. For example, if 60 samples of the eye position 700 or the face position (or face orientation) Px (Py) acquired in one second are at approximately the same position, the eye position estimation module 504 determines that the state is stable, and may register in the memory 37, as the reference position, the average value of the eye positions 700 or face positions (or face orientations) Px (Py) of the latest 5 of the 30 samples acquired in the past 0.5 seconds.
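  • A minimal sketch of this stable-state determination and reference-position registration follows; the sample counts mirror the example above, while the tolerance value and all names are assumptions.

```python
from collections import deque

SAMPLE_RATE_HZ = 60
STABLE_WINDOW = SAMPLE_RATE_HZ          # 60 samples = 1 second of observation positions
STABLE_TOLERANCE = 5.0                  # [mm], hypothetical "approximately the same position"

history = deque(maxlen=STABLE_WINDOW)   # most recent eye/face positions as (x, y) tuples

def update_reference_position(new_pos, memory):
    """Register a new reference position in `memory` (a dict standing in for the
    memory 37) when the last second of samples stays within the tolerance."""
    history.append(new_pos)
    if len(history) < STABLE_WINDOW:
        return
    xs = [p[0] for p in history]
    ys = [p[1] for p in history]
    if max(xs) - min(xs) <= STABLE_TOLERANCE and max(ys) - min(ys) <= STABLE_TOLERANCE:
        latest5 = list(history)[-5:]    # average of the latest 5 samples
        memory["reference_position"] = (
            sum(p[0] for p in latest5) / 5.0,
            sum(p[1] for p in latest5) / 5.0,
        )
```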
  • (Step S134) The display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and may determine that the predetermined determination condition is satisfied if the eye position 700 or the face position (or face orientation) changes continuously in one direction. For example, the determination module 510 may determine that the predetermined determination condition is satisfied if it detects that the amount of change ΔPx in the eye position 700 or face position (or face orientation) in the left-right direction has changed continuously in one direction (here, the right direction) a predetermined number of times (for example, two times) or more.
  • Step S141 the determination module 510 of FIG. 11 determines whether the observer's eye position 700 (or face position) is in an unstable state, and determines whether the observer's eye position 700 (or face position) is When it is determined that the state is unstable, it may be determined that the predetermined determination condition is satisfied.
  • the determination module 510 includes various software components for performing various operations related to determining whether the stability of the observer's eye position is low (unstable); that is, the determination module 510 can include threshold values, table data, arithmetic expressions, and the like for this determination.
  • for example, the eye position detection module 502 calculates the variance of the position data of a plurality of observation positions acquired from the face detection unit 409 within a predetermined measurement time, and the determination module 510 may determine that the stability of the observer's eye position is low (unstable) if the variance calculated by the eye position detection module 502 is larger than a predetermined threshold stored in advance in the memory 37 (or set by the operation detection unit 407).
  • also, the eye position detection module 502 may calculate the deviation of the position data of a plurality of observation positions acquired from the face detection unit 409 within a predetermined measurement time, and the determination module 510 may determine that the stability of the observer's eye position is low (unstable) if the deviation calculated by the eye position detection module 502 is greater than a predetermined threshold stored in advance in the memory 37 (or set by the operation detection unit 407).
  • also, the eye position detection module 502 may divide the eyebox 200 into a plurality of partial viewing zones (for example, 25 regions divided into 5 vertically and 5 horizontally) and identify the partial viewing zone to which the eye position 700 belongs, and the stability of the observer's eye position may be determined to be low (unstable) when the number of partial viewing zones to which the eye position 700 has moved per predetermined unit time exceeds a predetermined threshold.
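  • The variance-based and partial-viewing-zone-based instability measures described above might be sketched as follows; the thresholds and the zone grid dimensions are hypothetical assumptions.

```python
import statistics

VARIANCE_THRESHOLD = 100.0     # hypothetical threshold stored in the memory 37
ZONE_COUNT_THRESHOLD = 6       # hypothetical number of partial viewing zones per unit time

def eye_position_unstable_by_variance(observed_positions_1d):
    """True when the variance of the positions observed within the predetermined
    measurement time exceeds the predetermined threshold (one axis at a time)."""
    if len(observed_positions_1d) < 2:
        return False
    return statistics.pvariance(observed_positions_1d) > VARIANCE_THRESHOLD

def eye_position_unstable_by_zones(observed_positions, zone_width, zone_height):
    """True when the eye position 700 visited more partial viewing zones
    (e.g. a 5 x 5 division of the eyebox 200) than the threshold in the unit time."""
    zones = {(int(x // zone_width), int(y // zone_height)) for x, y in observed_positions}
    return len(zones) > ZONE_COUNT_THRESHOLD
```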
  • the determination module 510 of FIG. 11 determines whether the detection operation of the observer's eye position 700 is in an unstable state, and if it is determined to be in an unstable state, the predetermined It is determined that the determination condition is satisfied.
  • for example, the determination module 510 (10) determines whether or not the observer's eye position 700 can be detected and determines that the state is unstable if the eye position 700 cannot be detected (an example of step S142), (20) determines whether it can be estimated that the detection accuracy of the observer's eye position 700 has decreased and determines that the state is unstable if such a decrease can be estimated (an example of step S142), (30) determines whether or not the observer's eye position 700 is outside the eyebox 200 and determines that the state is unstable if it is outside the eyebox 200 (an example of step S142), (40) determines whether or not the observer's eye position 700 can be estimated to be outside the eyebox 200 and determines that the state is unstable if so, and/or determines whether or not the observer's eye position 700 is predicted to be outside the eyebox 200 and determines that the state is unstable if it is predicted to be outside the eyebox 200.
  • the determination module 510 includes various software components for performing various operations related to step S142; that is, it can include threshold values, table data, arithmetic expressions, and the like for determining, from the detection information, estimation information, or prediction information of the eye position 700, whether or not the detection operation of the observer's eye position 700 is in an unstable state.
  • the method of determining whether or not the observer's eye position 700 can be detected includes (1) acquiring from the face detection unit 409 a signal indicating that the eye position 700 cannot be detected, (2) the eye position detection module 502 being unable to detect some or all of the observation positions of the observer's eyes acquired within a predetermined period (for example, more than a predetermined number of times), or any combination thereof, and on that basis determining that the observer's eye position 700 cannot be detected (that the detection of the observer's eye position 700 is in an unstable state) (however, the determination method is not limited to these).
  • the method of determining that the detection accuracy of the observer's eye position 700 has decreased includes (1) acquiring from the face detection unit 409 a signal indicating that the detection accuracy of the eye position 700 is estimated to have decreased, (2) some or all of the observation positions of the observer's eyes acquired from the face detection unit 409 within a predetermined period (for example, more than a predetermined number of times) being undetectable, (3) the eye position detection module 502 being unable to detect the observer's eye position 700 in normal operation, (4) the eye position estimation module 504 being unable to estimate the observer's eye position 700 in normal operation, (5) the eye position prediction module 506 being unable to predict the observer's eye position 700 in normal operation, (6) detecting a decrease in the contrast of the captured image of the observer due to external light such as sunlight, (7) detecting a hat or an accessory (including eyeglasses) worn by the observer, (8) part of the observer's face not being detected because of a hat or an accessory (including eyeglasses), or any combination thereof, and on that basis determining that the detection accuracy of the eye position 700 has decreased (however, the determination method is not limited to these).
  • the method of determining whether or not the observer's eye position 700 is outside the eyebox 200 includes (1) some of the observation positions of the observer's eyes acquired from the face detection unit 409 within a predetermined period (for example, more than a predetermined number of times) being undetectable, (2) the eye position detection module 502 detecting the observer's eye position 700 outside the eyebox 200, or any combination thereof, and on that basis determining that the observer's eye position 700 is outside the eyebox 200 (that the observer's eye position 700 is in an unstable state) (however, the determination method is not limited to these).
  • the method of determining whether the observer's eye position 700 can be estimated to be outside the eyebox 200 includes (1) the face detection unit 409 no longer detecting the observer's eye position 700 after detecting movement of the observer's eye position 700, (2) the eye position detection module 502 detecting the observer's eye position 700 near the boundary of the eyebox 200, (3) the eye position detection module 502 detecting only one of the observer's right eye position 700R and left eye position 700L, or any combination thereof, on the basis of which it can be estimated that the observer's eye position 700 is outside the eyebox 200 (that the observer's eye position 700 is in an unstable state) (however, the determination method is not limited to these).
  • the method of determining whether or not the observer's eye position 700 is predicted to be outside the eyebox 200 includes, for example, the eye position 700 newly detected by the eye position detection module 502 having moved, relative to the previously detected eye position 700, by at least the eye position movement distance threshold stored in advance in the memory 37 (or the change speed of the eye position 700 being equal to or greater than the eye position change speed threshold stored in advance in the memory 37), or any combination of such conditions, on the basis of which the observer's eye position 700 is predicted to be outside the eyebox 200 (however, the determination method is not limited to these).
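  • As a non-limiting sketch, the instability checks for the detection operation described above could be combined as follows; the class, flag names, and eyebox dimensions are illustrative assumptions.

```python
class Eyebox:
    """Minimal axis-aligned model of the eyebox 200 (all dimensions hypothetical)."""
    def __init__(self, cx, cy, half_w, half_h):
        self.cx, self.cy, self.half_w, self.half_h = cx, cy, half_w, half_h

    def contains(self, pos):
        return (abs(pos[0] - self.cx) <= self.half_w and
                abs(pos[1] - self.cy) <= self.half_h)

def detection_operation_unstable(eye_pos, eyebox, accuracy_degraded, predicted_pos):
    """Return True when the detection of the eye position 700 is judged unstable:
    the eye position cannot be detected, its detection accuracy is estimated to
    have decreased, it is outside the eyebox 200, or it is predicted to leave
    the eyebox 200 (each corresponding to one of the checks described above)."""
    if eye_pos is None:                                   # cannot be detected
        return True
    if accuracy_degraded:                                 # estimated accuracy drop
        return True
    if not eyebox.contains(eye_pos):                      # outside the eyebox 200
        return True
    if predicted_pos is not None and not eyebox.contains(predicted_pos):
        return True                                       # predicted to be outside
    return False

# Usage example (values are arbitrary):
# box = Eyebox(cx=0.0, cy=0.0, half_w=65.0, half_h=40.0)
# detection_operation_unstable((10.0, 5.0), box, False, (70.0, 5.0))  -> True
```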
  • Step S150 Reference is now made to FIG. 12B.
  • the display control device 30 updates the image displayed on the display device 40. When it is determined in step S120 that the predetermined determination condition is satisfied, the display control device 30 (processor 33) executes the visibility control module 514 to perform visibility reduction processing (S180) that reduces the visibility of the image that is displayed on the display device 40 and corresponds to the AR virtual image V60.
  • Step S160 The visibility control module 514 of FIG. 11 maintains the normal visibility of the AR virtual image V60 when it is determined in step S120 that the predetermined determination condition is not satisfied.
  • also in step S160, when it is determined in step S120 that the predetermined determination condition is not satisfied, the eye-following image processing module 516 of FIG. 11 executes a first image correction process in which the vertical position of the virtual image V is corrected by a first correction amount Cy1 corresponding to the amount of change ΔPy in the eye position in the vertical direction, and the horizontal position of the virtual image V is corrected by a first correction amount Cx1 corresponding to the amount of change ΔPx in the eye position in the horizontal direction.
  • the first correction amount Cy1 (the same applies to the second correction amount Cy2, which will be described later) is a parameter that gradually increases as the eye position change amount ⁇ Py in the vertical direction increases.
  • the first correction amount Cy1 (the same applies to the second correction amount Cy2 described later) is a parameter that gradually increases as the perceived distance D30 set to the virtual image V increases.
  • the first image correction process S160 includes correcting the image position so as to perfectly reproduce natural motion parallax, as if the virtual image V were fixed at the set target position PT even when viewed from each eye position Py in the vertical direction, and, in a broader sense, may also include correcting the image position so as to approximate natural motion parallax. That is, the first image correction process S160 aligns the display position of the virtual image V with (or brings it closer to) the intersection of the virtual image display area VS and the straight line connecting the target position PT set for the virtual image V and the observer's eye position 700.
  • (Step S170) When it is determined in step S120 that the predetermined determination condition is satisfied, the display control device 30 (processor 33) executes at least the visibility reduction process (step S180), and the eye-following image processing module 516 may additionally perform a second image correction process (step S190) described below.
  • Step S180 When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 performs visibility reduction processing (step S180) to reduce the visibility of the AR virtual image V60 from the normal visibility in step S160.
  • here, decreasing the visibility includes decreasing the luminance of the AR virtual image V60, increasing the transmittance of the AR virtual image V60 (bringing it closer to transparent), decreasing the lightness of the AR virtual image V60 (bringing it closer to black), decreasing the saturation of the AR virtual image V60 (bringing it closer to an achromatic color), and any combination thereof.
  • the display control device 30 controls the visibility of the image displayed by the display 50 through gradation control in the display 50 and local or overall illumination control in the light source unit 60, thereby controlling the visibility of the corresponding AR virtual image V60.
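  • A minimal sketch of such a visibility reduction, lowering luminance, lightness, and saturation and raising transmittance, is shown below; the parameter structure and the reduction coefficient are assumptions, not part of this specification.

```python
def reduce_visibility(params, factor=0.3):
    """Return display parameters for the image corresponding to the AR virtual image
    V60 with its visibility reduced from the normal visibility.

    `params` is a dict with 'luminance', 'transmittance' (0..1), 'lightness' and
    'saturation'; `factor` < 1 is a hypothetical reduction coefficient."""
    return {
        "luminance":     params["luminance"] * factor,                    # dimmer
        "transmittance": 1.0 - (1.0 - params["transmittance"]) * factor,  # closer to transparent
        "lightness":     params["lightness"] * factor,                    # closer to black
        "saturation":    params["saturation"] * factor,                   # closer to achromatic
    }

# Usage example (arbitrary values): factor=0.0 would correspond to non-display.
# reduce_visibility({"luminance": 400.0, "transmittance": 0.2,
#                    "lightness": 0.8, "saturation": 0.9}, factor=0.3)
```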
  • (Step S181) When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes a first visibility reduction process that abruptly reduces the visibility of the AR virtual image V60 from the normal visibility in step S160. More specifically, when it is determined that the predetermined determination condition is satisfied, the visibility control module 514 switches from the normal visibility to a desired visibility (lower than the normal visibility) stored in the memory 37.
  • the visibility control module 514 can rapidly reduce the visibility of the AR virtual image V60 immediately after it is determined in step S120 that the predetermined determination condition is satisfied.
  • the visibility control module 514 can rapidly reduce the visibility of the AR virtual image V60 after a predetermined period of time has elapsed after it is determined that the predetermined determination condition is satisfied.
  • (Step S182) When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes a second visibility reduction process that gradually reduces the visibility of the AR virtual image V60 over time from the normal visibility in step S160. More specifically, when it is determined that the predetermined determination condition is satisfied, the visibility control module 514 gradually switches from the normal visibility to a desired visibility (lower than the normal visibility) stored in the memory 37.
  • (Step S183) When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes a visibility reduction process that reduces the visibility of the AR virtual image V60 to a different visibility depending on the change speed of the eye position 700 or the face position (or face orientation). More specifically, the visibility control module 514 sets a visibility significantly lower than the normal visibility (including non-display) when the change speed of the eye position 700 or the face position (or face orientation) is fast, and sets a visibility only slightly lower than the normal visibility when that change speed is slow.
  • the visibility level lower than the normal visibility is not limited to two stages; it may have three or more stages according to the change speed of the eye position 700 or the face position (or face orientation). The visibility control module 514 may also reduce the visibility level substantially continuously as the change speed of the eye position 700 or the face position (or face orientation) increases.
  • the visibility control module 514 may also execute a visibility reduction process that reduces the visibility of the AR virtual image V60 to a different visibility depending on the eye position 700 or the face position when it is determined in step S120 that the predetermined determination condition is satisfied. More specifically, the visibility control module 514 sets a visibility significantly lower than the normal visibility (including non-display) when the eye position 700 or the face position is far from a predetermined reference position (for example, the center 205 of the eyebox 200), and sets a visibility only slightly lower than the normal visibility when the eye position 700 or the face position is only slightly away from the predetermined reference position. Note that the visibility level lower than the normal visibility is not limited to two stages and may have three or more stages according to the eye position 700 or the face position. The visibility control module 514 may also change the visibility level substantially continuously depending on the eye position 700 or the face position.
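  • The speed-dependent and position-dependent reductions described above could, for instance, be expressed as a continuous mapping such as the following sketch; the normalization constants and function name are assumptions.

```python
def visibility_level(change_speed, distance_from_reference,
                     speed_max=400.0, distance_max=60.0):
    """Return a visibility level in [0, 1]; 1.0 is the normal visibility and
    0.0 corresponds to non-display. Faster eye/face movement or a larger
    distance from the reference position (e.g. the eyebox center 205) yields
    a lower level. speed_max [mm/s] and distance_max [mm] are hypothetical
    normalization values."""
    speed_term = min(change_speed / speed_max, 1.0)
    distance_term = min(distance_from_reference / distance_max, 1.0)
    return max(0.0, 1.0 - max(speed_term, distance_term))

# Usage example: a slow movement near the center keeps visibility high.
# visibility_level(change_speed=50.0, distance_from_reference=10.0)  -> 0.875
```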
  • the visibility control module 514 reduces the visibility of the entire AR virtual image V60.
  • the visibility control module 514 reduces the visibility of part of the AR virtual image V60.
  • for example, the visibility control module 514 may reduce the visibility of an AR virtual image V60 whose set perceived distance D30 is longer than a predetermined threshold (not shown) (execute the visibility reduction process) and need not reduce the visibility of an AR virtual image V60 whose perceived distance D30 is shorter than the predetermined threshold (not execute the visibility reduction process). Also, the visibility control module 514 may set a visibility significantly lower than the normal visibility (including non-display) when the perceived distance D30 set for the AR virtual image V60 is long, and a visibility only slightly lower than the normal visibility when the perceived distance D30 set for the AR virtual image V60 is short.
  • FIG. 14 is a diagram showing an example of the foreground visually recognized by the observer, the AR virtual image when the visibility reduction process is executed, and the AR-related virtual image while the host vehicle is running.
  • in the example of FIG. 14, the HUD device 20 displays a distant virtual image V1 perceived at a first distance or more (for example, the virtual images V64 to V65 shown in FIG. 14) and a near virtual image V2 perceived at a position closer than the first distance (for example, the virtual images V61 to V63 shown in FIG. 14), and the processor 33 reduces the visibility of the distant virtual image V1 by executing the visibility reduction process S170 and does not execute the visibility reduction process S170 for the near virtual image V2. Also, in the example of FIG. 14, the processor 33 displays an AR-related virtual image V80 (for example, V81 shown in FIG. 14) related to a part (virtual image V64) of the distant virtual image V1 whose visibility has been reduced.
  • the eye-following image processing module 516 of FIG. 11 may switch between the first image correction process (step S160) and the second image correction process (step S190) based on the determination result in step S120.
  • (Step S190) If it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 executes a second image correction process in which the amount of positional correction of the image with respect to the amount of change in the eye position 700 (or the face position) is reduced.
  • in one example of step S190, if it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may reduce the correction amount only with respect to the amount of change in the eye position (or face position) in the vertical direction.
  • the eye-following image processing module 516 of FIG. 11 corrects the vertical position of the virtual image V by a second correction amount Cy2 corresponding to the amount of change ⁇ Py in the eye position in the vertical direction.
  • the horizontal position of the virtual image V is corrected by a second correction amount Cx2 corresponding to the eye position change amount ⁇ Px.
  • the eye-following image processing module 516 makes the second correction amount Cy2 smaller than the first correction amount Cy1 for the eye position change amount ⁇ Py in the vertical direction in the first image correction process (step S160).
  • the second correction amount Cx2 is set to be the same as the first correction amount Cx1 for the eye position change amount ΔPx in the horizontal direction in the first image correction process (step S160).
  • for example, when the first correction amount Cy1 for the eye position change amount ΔPy in the vertical direction is taken as 100%, the second correction amount Cy2 for the same eye position change amount ΔPy in the vertical direction is 25%, and when the first correction amount Cx1 for the eye position change amount ΔPx in the horizontal direction is 100%, the second correction amount Cx2 for the same eye position change amount ΔPx in the horizontal direction is also 100%. The second correction amount Cy2 only needs to be smaller than the first correction amount Cy1, and is therefore less than 100% of the first correction amount Cy1, preferably less than 60% of the first correction amount Cy1.
  • in another example of step S190, if it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may set the correction amount to zero only for the amount of change in the eye position (or face position) in the vertical direction; that is, the second correction amount Cy2 may be set to zero.
  • in this case, the eye-following image processing module 516 may correct the position of the virtual image V only in the horizontal direction, according to the amount of change ΔPx in the eye position in the horizontal direction.
  • in yet another example of step S190, if it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may also make the correction amount for the amount of change in the eye position (or face position) in the horizontal direction smaller.
  • the eye-following image processing module 516 of FIG. 11 corrects the vertical position of the virtual image V by a second correction amount Cy2 corresponding to the amount of change ⁇ Py in the eye position in the vertical direction.
  • the horizontal position of the virtual image V is corrected by a second correction amount Cx2 corresponding to the eye position change amount ⁇ Px.
  • the eye-following image processing module 516 makes the second correction amount Cy2 smaller than the first correction amount Cy1 for the eye position change amount ⁇ Py in the vertical direction in the first image correction process (step S160),
  • and makes the second correction amount Cx2 smaller than the first correction amount Cx1 for the eye position change amount ΔPx in the horizontal direction in the first image correction process (step S160).
  • for example, when the first correction amount Cy1 for the eye position change amount ΔPy in the vertical direction is 100%, the second correction amount Cy2 for the same eye position change amount ΔPy in the vertical direction is 25%, and when the first correction amount Cx1 for the eye position change amount ΔPx in the horizontal direction is 100%, the second correction amount Cx2 for the same eye position change amount ΔPx in the horizontal direction is also 25%.
  • that is, the eye-following image processing module 516 of FIG. 11 in some embodiments sets the image position correction amount Cx2 for the eye position change amount ΔPx in the horizontal direction in the second image correction process (step S190) lower than the image position correction amount Cx1 for the eye position change amount ΔPx in the horizontal direction in the first image correction process (step S160), while making the ratio of the second correction amount Cy2 to the first correction amount Cy1 for the eye position change amount ΔPy in the vertical direction smaller than the ratio of Cx2 to Cx1 (Cx2/Cx1 > Cy2/Cy1).
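  • A sketch of the correction-amount relationships described above (Cy2 smaller than Cy1, and Cx2/Cx1 > Cy2/Cy1) follows; the concrete gains echo the 100% / 25% examples and are otherwise arbitrary assumptions.

```python
# First image correction process (step S160): gains applied to the eye position
# change amounts d_px (horizontal) and d_py (vertical).
CX1, CY1 = 1.00, 1.00
# Second image correction process (step S190): the vertical gain is reduced more
# strongly than the horizontal gain, so that CX2 / CX1 > CY2 / CY1.
CX2, CY2 = 1.00, 0.25

def corrected_image_position(base_x, base_y, d_px, d_py, reduced):
    """Return the corrected position of the image on the display 50 for the eye
    position change amounts d_px and d_py; `reduced` selects the second image
    correction process (step S190) instead of the first (step S160)."""
    cx, cy = (CX2, CY2) if reduced else (CX1, CY1)
    return base_x + cx * d_px, base_y + cy * d_py

# Usage example: the same eye movement produces a smaller vertical correction
# in the second image correction process.
# corrected_image_position(0.0, 0.0, 4.0, 4.0, reduced=True)   -> (4.0, 1.0)
# corrected_image_position(0.0, 0.0, 4.0, 4.0, reduced=False)  -> (4.0, 4.0)
```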
  • FIG. 16 is a flow diagram showing a method S200 for executing the visibility increasing process while executing the visibility decreasing process.
  • the method S200 is performed in a HUD device 20 including a spatial light modulating element and a display controller 30 controlling this HUD device 20 .
  • the display control device 30 determines whether a predetermined cancellation condition is satisfied (step S210), and, when it is determined that the cancellation condition is satisfied, transitions from the visibility reduction process (step S180) to the visibility increasing process (step S220).
  • the predetermined cancellation condition includes that a predetermined period of time (for example, 20 seconds) has passed since the visibility reduction process (step S180) was started.
  • for example, the visibility control module 514 may start timekeeping after transitioning to the visibility reduction process (step S180) and determine that the cancellation condition is satisfied when the predetermined time stored in advance in the memory 37 (or set by the operation detection unit 407) has elapsed.
  • the predetermined cancellation condition may include that the predetermined determination condition is no longer satisfied in step S120; that is, the predetermined cancellation condition may be detecting that, in at least one of steps S131 to S134 and steps S141 to S143, the state has changed from one in which the predetermined determination condition is satisfied to one in which it is no longer satisfied. Further, the predetermined cancellation condition may include that a predetermined time (for example, 20 seconds) has elapsed since the predetermined determination condition stopped being satisfied in step S120.
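  • As an illustration, the time-based part of the cancellation condition could be implemented with a simple timer such as the following; the 20-second value mirrors the example above, and the class and method names are assumptions.

```python
import time

CANCEL_AFTER_SEC = 20.0   # the predetermined time from the example above

class CancellationTimer:
    """Tracks the elapsed time since the visibility reduction process (step S180)
    started, or since the predetermined determination condition stopped being
    satisfied, to evaluate the time-based cancellation condition."""
    def __init__(self):
        self.started_at = None

    def start(self):
        self.started_at = time.monotonic()

    def cancellation_condition_satisfied(self):
        return (self.started_at is not None and
                time.monotonic() - self.started_at >= CANCEL_AFTER_SEC)
```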
  • Step S220 The display control device 30 (processor 33) executes visibility increasing processing when it is determined in step S210 that the cancellation condition is satisfied.
  • when it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 may execute a first visibility increasing process that abruptly increases the visibility of the AR virtual image V60 from the visibility set in the visibility reduction process (step S170) to the normal visibility. More specifically, when it is determined that the cancellation condition is satisfied, the visibility control module 514 switches the visibility set in the visibility reduction process (step S170) to the normal visibility.
  • the visibility control module 514 can rapidly increase the visibility of the AR virtual image V60 immediately after it is determined in step S210 that the cancellation condition is satisfied.
  • the visibility control module 514 of another embodiment can rapidly increase the visibility of the AR virtual image V60 after a predetermined period of time has passed after it is determined that the cancellation condition is satisfied.
  • (Step S221) Alternatively, when it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 executes a visibility increasing process that gradually increases the visibility of the AR virtual image V60 over time from the visibility set in the visibility reduction process (step S170) to the normal visibility. More specifically, when it is determined that the cancellation condition is satisfied, the visibility control module 514 gradually switches the visibility set in the visibility reduction process (step S170) to the normal visibility.
  • (Step S223) When it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 executes a second visibility increasing process that increases the visibility of the AR virtual image V60 to a different visibility depending on the change speed of the eye position. More specifically, when the change speed of the eye position is fast, the visibility control module 514 sets the visibility only slightly higher than the visibility set in the visibility reduction process (step S170), and when the change speed of the eye position is slow, it sets the visibility significantly higher than the visibility set in the visibility reduction process (step S170) (slightly lower than or equal to the normal visibility).
  • note that the visibility level higher than the visibility set in the visibility reduction process (step S170) is not limited to two stages and may have three or more stages according to the change speed of the eye position. The visibility control module 514 may also increase the visibility level substantially continuously as the change speed of the eye position decreases.
  • (Step S225) The visibility control module 514 may execute a third visibility increasing process that increases the visibility of the AR virtual image V60 to a different visibility depending on the eye position. More specifically, when the eye position is far from a predetermined reference position (for example, the center 205 of the eyebox 200), the visibility control module 514 sets the visibility only slightly higher than the visibility set in the visibility reduction process (step S170), and when the eye position is only slightly away from the predetermined reference position, it sets the visibility significantly higher than that visibility (slightly lower than or equal to the normal visibility). Note that the visibility level higher than the visibility set in the visibility reduction process (step S170) is not limited to two stages and may have three or more stages depending on the eye position. The visibility control module 514 may also change the visibility level substantially continuously according to the eye position.
  • the visibility control module 514 increases the visibility of all the AR virtual images V60 whose visibility has been lowered in the visibility reduction process (step S170).
  • the visibility control module 514 sequentially increases the visibility of the plurality of AR virtual images V60 whose visibility has been reduced in the visibility reduction process (step S170).
  • the visibility control module 514 may increase the visibility of the distant virtual image V1 after a predetermined period of time has elapsed after increasing the visibility of the near virtual image V2.
  • (Step S227) The graphics module 518 of FIG. 11 hides the AR-related virtual images related to part or all of the AR virtual images V60 whose visibility has been increased in steps S221, S223, and S225.
  • if it is determined in step S210 that the cancellation condition is satisfied, the eye-following image processing module 516 of FIG. 11 switches from the second image correction process (step S190) to the first image correction process (step S160), in which the amount of positional correction of the image with respect to the amount of change in the eye position 700 (or the face position) is greater than in the second image correction process (step S190).
  • Graphics module 518 of FIG. 11 includes various known software components for performing image processing, such as rendering, to generate image data, and to drive display device 40 .
  • the graphics module 518 may also include various software components for changing the type (moving image, still image, shape), arrangement (position coordinates, angle), size, display distance (in the case of 3D), and visual effects (for example, luminance, transparency, saturation, contrast, or other visual properties) of the displayed image.
  • the graphics module 518 can set the type of image (an example of a display parameter), the position coordinates of the image (an example of a display parameter), the angle of the image (the pitching angle about the X direction, the yaw angle about the Y direction, the rolling angle about the Z direction, etc., which are examples of display parameters), the size of the image (an example of a display parameter), the color of the image (an example of a display parameter set by brightness, etc.), and the intensity of the perspective expression of the image (an example of a display parameter set by the position of a vanishing point, etc.).
  • the light source driving module 520 includes various known software components for driving the light source unit 24.
  • the light source driving module 520 can drive the light source unit 24 based on the set display parameters.
  • the actuator driving module 522 includes various known software components for driving the first actuator 28 and/or the second actuator 29; the first actuator 28 and the second actuator 29 can be driven based on the set display parameters.
  • FIG. 16 is a diagram illustrating the HUD device 20 according to some embodiments, in which the eyebox 200 can be vertically moved by rotating the relay optical system 80 (curved mirror 81).
  • the display control device 30 (processor 33) in some embodiments rotates the relay optical system 80 (curved mirror 81) by, for example, controlling the first actuator 28 to move the eyebox 200 up and down (Y-axis direction).
  • the position of the virtual image display area VS is the relatively upper position indicated by the symbol VS3.
  • the display control device 30 executes the eye-following image processing module 516 and, when the eyebox 200 is positioned above a predetermined height threshold (in other words, when the control value of the first actuator 28 exceeds an actuator control threshold such that the eyebox 200 is positioned above the predetermined height threshold), may reduce the correction amount Cy of the position of the image displayed on the display 50 with respect to the amount of change in the eye position (or face position).
  • The actuator driving module 522 may automatically change the height of the eyebox 200 according to the vertical position of the eye position 700 (or the face position), or the height of the eyebox 200 may be changed according to a user operation (for example, operation information from the operation detection unit 407).
  • That is, the eye-following image processing module 516 may include threshold values, table data, arithmetic expressions, and the like for switching the correction amount Cx (Cy) of the position of the image displayed on the display 50 with respect to the amount of change in the eye position (or face position), based on information on the height of the eyebox 200, information on the control values of the actuators, information on the vertical eye position 700 (or face position) used to automatically adjust the height of the eyebox 200, or operation information from the operation detection unit 407 for adjusting the height of the eyebox 200.
  • In some embodiments, the display control device 30 may reduce, stepwise or continuously, the correction amount Cx (Cy) of the position of the image displayed on the display 50 with respect to the amount of change in the eye position (or face position) as the eyebox 200 becomes higher (in other words, as the control value of the first actuator 28 becomes higher). That is, the eye-following image processing module 516 may include threshold values, table data, arithmetic expressions, and the like for adjusting the correction amount Cx (Cy) of the position of the image displayed on the display 50 with respect to the amount of change in the vertical eye position (or face position), based on information on the height of the eyebox 200, information on the control values of the actuators, information on the vertical eye position 700 (or face position) used to automatically adjust the height of the eyebox 200, or operation information from the operation detection unit 407 for adjusting the height of the eyebox 200.
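One way to realize the stepwise or continuous reduction described above is a small gain table keyed by eyebox height (or, equivalently, by the actuator control value). The thresholds and gains below are made-up table data used only to illustrate the idea.

```python
# (eyebox height above its nominal position [mm], gain on the correction amount Cx/Cy)
EYEBOX_GAIN_TABLE = [
    (0.0, 1.0),
    (20.0, 0.7),
    (40.0, 0.4),
]


def correction_gain(eyebox_height_mm: float) -> float:
    """Gain applied to the image-position correction as the eyebox 200 is raised."""
    gain = EYEBOX_GAIN_TABLE[0][1]
    for threshold_mm, g in EYEBOX_GAIN_TABLE:
        if eyebox_height_mm >= threshold_mm:
            gain = g
    return gain


def corrected_offset(delta_eye: float, base_gain: float, eyebox_height_mm: float) -> float:
    """Correction amount Cx (Cy) for an eye-position change, attenuated by eyebox height."""
    return delta_eye * base_gain * correction_gain(eyebox_height_mm)
```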
  • The display control device 30 of the present embodiment performs display control in a head-up display device 20 that includes at least the display device 40 that displays an image and the relay optical system 80 that projects the light of the image displayed by the display device 40 onto the projection target member. The processor can execute a first image correction process (step S160) that corrects the position of the image displayed on the display device 40 based on at least the eye position (or face position) Py in the vertical direction and the eye position (or face position) Px in the horizontal direction, and a second image correction process S170 in which either the image position is corrected with a second correction amount Cy2, with respect to the change amount ΔPy of the vertical eye position (or face position), that is smaller than the first correction amount Cy1 with respect to the same change amount ΔPy in the first image correction process (step S160), or the image position is corrected based on at least the horizontal eye position (or face position) Px with the correction amount with respect to the change amount ΔPy of the vertical eye position (or face position) set to zero.
  • The processor 33 may select the second image correction process S170 when at least one of the following is satisfied: (1) the eye position (or face position) Px in the horizontal direction has changed continuously in one direction; (2) a change in the eye position (or face position) Py in the vertical direction and a change in the eye position (and/or face position) Px in the horizontal direction have been detected; or (3) changes in the eye position (or face position) Py in the vertical direction and the eye position (or face position) Px in the horizontal direction have been detected and, at this time, the amount of change ΔPy in the vertical eye position (or face position) is less than a predetermined second threshold.
  • If one or more of the eye position Py in the vertical direction, the face position Py in the vertical direction, the eye position Px in the horizontal direction, and the face position Px in the horizontal direction had been detected but can no longer be detected, the processor 33 may proceed to the second image correction process S170.
  • After a predetermined time has elapsed in the second image correction process S170, the processor 33 may switch to a third image correction process S182 in which the position of the image displayed on the display device 40 is corrected based on at least the vertical eye position (or face position) Py and the horizontal eye position (or face position) Px, and the third correction amount Cy3 of the image position with respect to the change amount ΔPy of the vertical eye position (or face position) is smaller than the first correction amount Cy1 in the first image correction process (step S160) and larger than the second correction amount Cy2 in the second image correction process S170.
  • When the processor 33 detects, during the second image correction process S170, that the amount of change ΔPy in the vertical eye position (or face position) has become larger than a predetermined third threshold, the processor 33 may switch to the third image correction process S182, in which the position of the image displayed on the display device 40 is corrected based on at least the vertical eye position (or face position) Py and the horizontal eye position (or face position) Px, and the correction amount of the image position with respect to the change in the vertical eye position (or face position) is smaller than the first correction amount Cy1 in the first image correction process (step S160) and larger than the second correction amount Cy2 in the second image correction process S170.
  • The processor 33 may change the third correction amount Cy3 in the third image correction process S182 over time so that it approaches the first correction amount Cy1 in the first image correction process (step S160).
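The hand-off between the first, second, and third image correction processes can be summarized as a single gain schedule for the vertical correction. The numeric gains and ramp time below are assumptions; the patent only fixes the ordering Cy2 < Cy3 < Cy1 and the gradual approach of Cy3 toward Cy1.

```python
CY1 = 1.0          # first image correction process (step S160)
CY2 = 0.0          # second image correction process (S170)
CY3_START = 0.5    # third image correction process (S182) starts between Cy2 and Cy1
RAMP_TIME_S = 1.0  # assumed time for Cy3 to approach Cy1


def vertical_correction_gain(mode: str, time_in_mode_s: float) -> float:
    """Vertical correction gain Cy for the currently selected correction process."""
    if mode == "first":
        return CY1
    if mode == "second":
        return CY2
    if mode == "third":
        ratio = min(time_in_mode_s / RAMP_TIME_S, 1.0)
        return CY3_START + (CY1 - CY3_START) * ratio  # approaches Cy1 over time
    raise ValueError(f"unknown correction mode: {mode}")
```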
  • The HUD device 20 is configured to display a distant virtual image V1 (for example, the virtual images V64 to V65 shown in FIG. 9) perceived at a position a first distance away from a reference point set on the vehicle side, and a near virtual image V2 (for example, the virtual images V61 to V63 shown in FIG. 9) perceived at a position separated by a second distance shorter than the first distance. The processor 33 may display the distant virtual image V1 by switching between the first image correction process (step S160) and the second image correction process S170 according to whether the predetermined determination condition is satisfied, and may display the near virtual image V2 in the second image correction process S170 without such switching.
  • The determination module 510 may include thresholds, table data, arithmetic expressions, and the like for determining each virtual image V as a distant virtual image V1 or a near virtual image V2 from the position information of the real object 300 with which the virtual image V is associated, obtained from the vehicle exterior sensor 411, the information about the perceived distance D30 set for the virtual image V based on the position information of the real object 300, and the like.
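A minimal sketch of that determination, assuming a single distance threshold (the 8 m value is invented for illustration; the patent allows thresholds, table data, or arithmetic expressions):

```python
FAR_NEAR_THRESHOLD_M = 8.0  # assumed boundary between near (V2) and distant (V1) virtual images


def classify_virtual_image(perceived_distance_m: float) -> str:
    """Classify a virtual image from its perceived distance D30."""
    return "distant (V1)" if perceived_distance_m >= FAR_NEAR_THRESHOLD_M else "near (V2)"
```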
  • When the area in which the HUD device 20 can display the virtual image V is defined as the virtual image display area VS, the virtual images include, as shown in the figure, an upper virtual image V60 displayed in an upper area including the upper end of the virtual image display area VS and a lower virtual image V70 displayed in a lower area below the upper area and including the lower end VSb of the virtual image display area VS. The processor 33 may display the upper virtual image V60 by switching between the first image correction process (step S160) and the second image correction process S170 according to whether the predetermined determination condition is satisfied, and may display the lower virtual image V70 without positional correction based on the eye position or the face position.
  • The virtual images displayed by the HUD device 20 include an AR virtual image V60 whose display position is changed according to the position of a real object existing in the foreground of the vehicle, and a non-AR virtual image V70 whose display position is not changed according to the position of the real object. The processor 33 may display the AR virtual image V60 by switching between the first image correction process (step S160) and the second image correction process S170 according to whether the predetermined determination condition is satisfied, and may display the non-AR virtual image V70 without image position correction based on the eye position or the face position.
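A sketch of this routing, under the assumption that the correction is expressed as a simple per-image positional offset (the gains are placeholders):

```python
def image_offset(is_ar: bool, determination_condition_met: bool, delta_eye: float,
                 first_gain: float = 1.0, second_gain: float = 0.0) -> float:
    """Positional offset applied to one image for a given eye-position change.

    Non-AR virtual images V70 are left uncorrected; AR virtual images V60 use
    the first or second image correction gain depending on whether the
    determination condition is satisfied.
    """
    if not is_ar:
        return 0.0  # non-AR virtual image V70: no eye/face based correction
    gain = second_gain if determination_condition_met else first_gain
    return gain * delta_eye
```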
  • The vehicle display system 10 is optionally implemented in hardware, software, or a combination of hardware and software to carry out the principles of the various described embodiments. It will be understood by those skilled in the art that the functional blocks illustrated in FIG. 11 may optionally be combined or separated into two or more sub-blocks to implement the principles of the described embodiments. Accordingly, the description herein optionally supports any possible combination or division of the functional blocks described herein.
  • Reference Signs List: 1: vehicle, 2: projected part, 5: dashboard, 6: road surface, 10: vehicle display system, 20: HUD device (head-up display device), 21: light exit window, 22: housing, 24: light source unit, 28: first actuator, 29: second actuator, 30: display control device, 31: I/O interface, 33: processor, 35: image processing circuit, 37: memory, 40: display device, 50: display, 51: spatial light modulator, 52: optical layer, 80: relay optical system, 81: curved mirror, 205: center, 300: real object, 401: vehicle ECU, 403: road information database, 405: vehicle position detection unit, 407: operation detection unit, 409: face detection unit, 411: exterior sensor, 413: brightness detection unit, 417: portable information terminal, 419: external communication device, 502: eye position detection module, 504: eye position estimation module, 506: eye position prediction module, 508: face detection module, 510: determination module, 511: eye position followability image processing module, 512: vehicle state determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Instrument Panels (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention makes it less likely that a viewer will experience a feeling of discomfort. A processor acquires eye-position-related information including at least one of the eye position, face position, and face direction of a user, displays an AR virtual image V60, executes eye-tracking image correction processing for correcting the position of an image to be displayed on a display device 40 on the basis of at least the eye-position-related information in order to adjust the display position of the AR virtual image V60, and, when determining that the eye-position-related information or a detection operation for the eye-position-related information satisfies a predetermined determination condition, executes visibility reduction processing for reducing the visibility of the AR virtual image V60.

Description

Display control device, head-up display device, and display control method
 The present disclosure relates to a display control device, a head-up display device, a display control method, and the like that are used in a mobile object such as a vehicle and cause an image to be visually recognized superimposed on the foreground of the mobile object (the actual scene in the forward direction of the mobile object as seen from an occupant of the vehicle).
 Patent Document 1 describes a head-up display device (an example of a virtual image display device) in which display light projected onto a projection target portion such as the front windshield of a vehicle is reflected toward an occupant (observer) inside the vehicle, thereby allowing the observer to visually recognize a virtual image that overlaps the foreground of the vehicle. In particular, the head-up display device described in Patent Document 1 causes a display object (virtual image) to be perceived virtually at a predetermined position in the depth direction and in the vertical and horizontal directions of the real space of the foreground (here, this position is referred to as a target position), and controls the image displayed inside the head-up display device so that the display object appears to exist at the target position in the foreground even when the attitude of the vehicle changes or the observer's eye position changes. That is, such a head-up display device forms augmented reality in which a virtual object is added to the real scenery (foreground) and displayed. Even when the attitude of the vehicle changes (which also changes the observer's eye position relative to the real scene) or the observer's eye position changes within the vehicle, the device corrects the position and the like of the image displayed inside the head-up display device according to the change in the observer's eye position detected by a face detection unit such as a camera, thereby giving motion parallax to the virtual object and allowing the observer to perceive the virtual object as if it were at the target position in the foreground (in the real scene).
 Patent Document 2 discloses a head-up display device in which the observer's right-eye position and left-eye position detected by a face detection unit such as a camera are tracked, and the display device is controlled so that right-eye display light representing a right-eye image is directed to the tracked right-eye position and left-eye display light representing a left-eye image is directed to the tracked left-eye position, thereby giving binocular parallax to a virtual object and allowing the observer to perceive the virtual object as if it were at the target position in the foreground (in the real scene).
 Patent Document 3 discloses a head-up display device that emphasizes the position of a real object existing in the real scene by aligning the display position of an image (virtual image) with a position on a straight line from the observer's eye position detected by a face detection unit such as a camera to a specific position on the real object existing in the foreground (or to a position around the real object having a specific positional relationship with the real object).
Patent Document 1: JP 2010-156608 A; Patent Document 2: JP 2019-062532 A; Patent Document 3: WO 2019/097918
 When the observer intentionally moves his or her head, or when the head moves unintentionally due to vehicle vibration, the face detection unit detects the movement of the eye position, and the display position of the image (virtual image) can also be corrected according to the detected eye position. In such a case, the image corresponding to the eye position is displayed with a delay due to system latency: for example, by the time the image corresponding to a first eye position is displayed, the eye position has already moved to a second eye position different from the first eye position, so that an image that does not correspond to the current eye position is viewed (the observer views, from the second eye position, an image adapted to the first eye position). This is expected to give the observer a feeling of discomfort.
 In addition, a face detection unit such as a camera detects the observer's eye positions (left and right eye positions) from captured images or the like using complex algorithms. Depending on, for example, the way the head is moved and/or the detection environment, the correction of the display position of the image (virtual image) and the observer's eye position may become mismatched due to increased detection error (reduced detection accuracy), erroneous detection, or the like, and this is also expected to give the observer a feeling of discomfort.
 A summary of specific embodiments disclosed in this specification is provided below. It should be understood that these aspects are presented only to provide the reader with an overview of these specific embodiments and are not intended to limit the scope of this disclosure. Indeed, the present disclosure may encompass various aspects not described below.
 The outline of the present disclosure relates to making it less likely that the observer will feel a sense of discomfort. More specifically, it relates to providing a display control device, a head-up display device, a display control method, and the like that make it difficult to visually recognize an image that does not match the user's eye position.
 Therefore, the display control device, head-up display device, display control method, and the like described in this specification employ the following means in order to solve the above problems. The gist of the present embodiment is to reduce, based on eye-position-related information including at least one of the user's eye position, face position, and face orientation, or on the detection operation of the eye-position-related information, the visibility of an AR virtual image that undergoes eye-following image correction processing in which the position of the image displayed on the display device is corrected based at least on the eye-position-related information.
 Accordingly, the display control device of the first embodiment of the present invention is a display control device that executes display control in a head-up display device that includes a display device for displaying an image and projects light of the image displayed by the display device onto a projection target member, thereby allowing a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground. The display control device includes one or more processors, a memory, and one or more computer programs stored in the memory and configured to be executed by the one or more processors. The processor acquires eye-position-related information including at least one of the user's eye position, face position, and face orientation; displays an AR virtual image on the head-up display device; executes eye-following image correction processing for correcting the position of the image displayed on the display device based at least on the eye-position-related information in order to adjust the display position of the AR virtual image; determines, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined determination condition; and, when determining that the determination condition is satisfied, executes visibility reduction processing for reducing the visibility of the AR virtual image. The first embodiment of the present invention has the advantage of making it difficult to visually recognize an image that does not match the eye position. That is, based on the eye-position-related information including at least one of the user's eye position, face position, and face orientation, or on the detection operation of the eye-position-related information, a situation in which an image that does not match the eye position may be visually recognized is estimated, and the visibility of the AR virtual image on which the eye-following image correction processing is executed can be reduced. The AR virtual image is an image whose position, direction, shape, and the like are changed so as to match changes in the position, direction, shape, and the like of a real object in the foreground (real world) as seen from the observer's eye position (or face position); in this case, it is also called a Kontaktanalog image. Note that the AR virtual image is not necessarily limited to a Kontaktanalog image that changes so as to match the real object, and may be any image on which eye-following image correction processing (for example, motion parallax addition processing or superimposition processing) that changes its position, direction, shape, and the like according to the observer's eye position (or face position) is executed.
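A minimal per-frame sketch of this flow, with hypothetical helper names (`get_eye_position_info`, `determination_condition_met`, `render_ar_image`) and an assumed dimming factor; it illustrates the described behaviour rather than any actual firmware.

```python
def update_frame(get_eye_position_info, determination_condition_met,
                 render_ar_image, low_visibility: float = 0.2) -> None:
    info = get_eye_position_info()        # eye position, face position, face orientation
    offset = eye_following_offset(info)   # eye-following image correction
    visibility = low_visibility if determination_condition_met(info) else 1.0
    render_ar_image(offset=offset, visibility=visibility)


def eye_following_offset(info) -> tuple[float, float]:
    """Map a change in eye position to an image-position correction (Cx, Cy)."""
    gain_x, gain_y = 1.0, 1.0             # assumed correction gains
    return gain_x * info.delta_x, gain_y * info.delta_y
```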
 According to a particularly preferred second embodiment, the determination condition includes at least one of a condition on the change speed of at least one of the eye position, face position, and face orientation, a condition on the coordinates of at least one of the eye position, face position, and face orientation, and a condition on the movement time of at least one of the eye position, face position, and face orientation. In this case, a situation in which an image that does not match the eye position may be visually recognized can be estimated from the change-speed condition, the coordinate condition, or the movement-time condition of at least one of the eye position, face position, and face orientation, and the visibility of the AR virtual image can be lowered.
 According to a particularly preferred third embodiment, the determination condition includes at least one of: that the change speed of at least one of the eye position, face position, and face orientation is fast; that the coordinates of at least one of the eye position, face position, and face orientation are within a predetermined range; and that at least one of the eye position, face position, and face orientation changes continuously. In this case, the visibility of the AR virtual image can be reduced on the condition that the change speed of at least one of the eye position, face position, and face orientation is fast; for example, if the change speed is faster than a predetermined threshold, the visibility of the AR virtual image is reduced. The visibility can likewise be reduced on the condition that the coordinates of at least one of the eye position, face position, and face orientation are within a predetermined range, for example a range in which eye-position detection errors are likely to increase (detection accuracy is likely to decrease) or erroneous detection is likely to occur. The visibility can also be reduced on the condition that a continuous change in at least one of the eye position, face position, and face orientation is detected, for example when it is detected that the eye position has changed continuously in one direction.
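Evaluated together, the three example conditions above could look like the following sketch; every threshold is an invented placeholder, since the patent deliberately leaves the concrete values open.

```python
SPEED_THRESHOLD_MM_S = 200.0             # "change speed is fast"
ERROR_PRONE_RANGE_MM = (-150.0, 150.0)   # coordinate band where detection tends to degrade
CONTINUOUS_SAMPLES = 5                   # samples moving in the same direction


def determination_condition_met(speed_mm_s: float, position_mm: float,
                                recent_deltas_mm: list[float]) -> bool:
    fast = abs(speed_mm_s) > SPEED_THRESHOLD_MM_S
    in_range = ERROR_PRONE_RANGE_MM[0] <= position_mm <= ERROR_PRONE_RANGE_MM[1]
    continuous = (len(recent_deltas_mm) >= CONTINUOUS_SAMPLES and
                  (all(d > 0 for d in recent_deltas_mm) or
                   all(d < 0 for d in recent_deltas_mm)))
    return fast or in_range or continuous
```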
 According to a particularly preferred fourth embodiment, the condition on the detection operation of the eye-position-related information includes at least one of: that at least one of the eye position, face position, and face orientation cannot be detected; and that a decrease in the detection accuracy of at least one of the eye position, face position, and face orientation has been detected. In this case, the visibility of the AR virtual image can be reduced on the condition that at least one of the eye position, face position, and face orientation cannot be detected, or on the condition that a decrease in the detection accuracy of at least one of them has been detected.
 According to a particularly preferred fifth embodiment, in the visibility reduction processing, the processor reduces the visibility to different levels depending on at least one of the eye position, face position, and face orientation. In this case, at some eye positions, face positions, or face orientations, the AR virtual image is displayed with low visibility that gives priority to preventing an unmatched image from being visually recognized by the observer, while at other eye positions, face positions, or face orientations, the AR virtual image can be displayed with medium visibility that takes into account the ease of viewing the virtual image while still suppressing the visual recognition of an unmatched image, so that a flexible and highly convenient system can be provided.
 According to a particularly preferred sixth embodiment, in the visibility reduction processing, the processor reduces the visibility to different levels depending on the change speed of at least one of the eye position, face position, and face orientation. In this case, for some change speeds of the eye position, face position, or face orientation, the AR virtual image is displayed with low visibility that gives priority to preventing an unmatched image from being visually recognized by the observer, while for other change speeds, the AR virtual image can be displayed with medium visibility that takes into account the ease of viewing the virtual image while still suppressing the visual recognition of an unmatched image, so that a flexible and highly convenient system can be provided.
 According to a particularly preferred seventh embodiment, when the processor determines that the determination condition is not satisfied, it executes a first eye-following image correction process that corrects the position of the image displayed on the display device based at least on the eye position or the face position; and when the processor determines that the determination condition is satisfied, it executes a second image correction process that corrects the position of the image displayed on the display device based at least on the eye position or the face position, in which a second correction amount of the image position with respect to the amount of change in the eye position or face position is smaller than the first correction amount of the image position with respect to the same amount of change in the first eye-following image correction process, or in which the correction amount of the image position with respect to at least one of the amount of change in the vertical eye position or face position and the amount of change in the horizontal eye position or face position is set to zero. In this case, there is an advantage that the influence of erroneous eye-following image correction processing performed with measurement values resulting from erroneous detection of the eye position or face position can be suppressed.
 According to a particularly preferred eighth embodiment, the processor determines, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined release condition, and, when determining that the release condition is satisfied, further executes visibility increasing processing for increasing the visibility of the AR virtual image that had been subjected to the visibility reduction processing. In this case, there is an advantage that an image matching the eye position can be visually recognized more easily. That is, based on the eye-position-related information including at least one of the user's eye position, face position, and face orientation, or on the detection operation of the eye-position-related information, a situation in which an image that matches the eye position is likely to be visually recognized is estimated, and the visibility of the AR virtual image on which the eye-following image correction processing is executed can be increased.
 According to a ninth embodiment dependent on the particularly preferred eighth embodiment, in the visibility increasing processing, the processor increases the visibility to different levels depending on at least one of the eye position, face position, and face orientation. In this case, at some eye positions, face positions, or face orientations, the AR virtual image is displayed with visibility that gives priority to the ease of viewing the virtual image, while at other eye positions, face positions, or face orientations, the AR virtual image can be displayed with medium visibility that takes into account the ease of viewing the virtual image while suppressing the visual recognition of an unmatched image by the observer, so that a flexible and highly convenient system can be provided.
 According to a tenth embodiment dependent on the particularly preferred eighth embodiment, in the visibility increasing processing, the processor increases the visibility to different levels depending on the change speed of at least one of the eye position, face position, and face orientation. In this case, for some change speeds of the eye position, face position, or face orientation, the AR virtual image is displayed with visibility that gives priority to the ease of viewing the virtual image, while for other change speeds, the AR virtual image can be displayed with medium visibility that takes into account the ease of viewing the virtual image while suppressing the visual recognition of an unmatched image by the observer, so that a flexible and highly convenient system can be provided.
 According to a particularly preferred eleventh embodiment, a head-up display device includes a display device that displays an image and projects light of the image displayed by the display device onto a projection target member, thereby allowing a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground; one or more processors; a memory; and one or more computer programs stored in the memory and configured to be executed by the one or more processors. The processor acquires eye-position-related information including at least one of the user's eye position, face position, and face orientation; displays an AR virtual image; executes eye-following image correction processing for correcting the position of the image displayed on the display device based at least on the eye-position-related information in order to adjust the display position of the AR virtual image; determines, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined determination condition; and, when determining that the determination condition is satisfied, executes visibility reduction processing for reducing the visibility of the AR virtual image. This provides the advantages described above. Other advantages and preferred features are described in particular in the above embodiments and the above description.
 According to a further particularly preferred embodiment, there is provided a display control method for a head-up display device that includes a display device for displaying an image and projects light of the image displayed by the display device onto a projection target member, thereby allowing a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground. The method includes: acquiring eye-position-related information including at least one of the user's eye position, face position, and face orientation, and displaying an AR virtual image on the head-up display device; executing eye-following image correction processing for correcting the position of the image displayed on the display device based at least on the eye-position-related information in order to adjust the display position of the AR virtual image; determining, based on the eye-position-related information, whether the eye-position-related information or the detection operation of the eye-position-related information satisfies a predetermined determination condition; and, when it is determined that the determination condition is satisfied, executing visibility reduction processing for reducing the visibility of the AR virtual image. This provides the advantages described above. Other advantages and preferred features are described in particular in the above embodiments and the above description.
FIG. 1 is a diagram showing an application example of a vehicle virtual image display system in a vehicle. FIG. 2 is a diagram showing the configuration of the head-up display device. FIG. 3 is a diagram showing an example of a foreground visually recognized by an observer and a virtual image displayed superimposed on the foreground while the own vehicle is traveling. FIG. 4 is a diagram conceptually showing, in an embodiment in which the HUD device is a 3D-HUD device, the positional relationship between the left-viewpoint virtual image and the right-viewpoint virtual image displayed on the virtual image plane and the perceptual image perceived by the observer from these left-viewpoint and right-viewpoint virtual images. FIG. 5 is a diagram conceptually showing a virtual object placed at a target position in the real scene and an image displayed in the virtual image display area so that the virtual object is visually recognized at the target position in the real scene. FIG. 6 is a diagram for explaining a method of motion parallax addition processing in the present embodiment. FIG. 7A is a comparative example showing a virtual image visually recognized from the position Px12 shown in FIG. 6 when the motion parallax addition processing of the present embodiment is not performed. FIG. 7B is a diagram showing a virtual image visually recognized from the position Px12 shown in FIG. 6 when the motion parallax addition processing of the present embodiment is performed. FIG. 8 is a diagram for explaining a method of motion parallax addition processing by movement of the eye position (face position) in the vertical direction in the present embodiment. FIG. 9 is a diagram showing an example of a foreground visually recognized by an observer and a virtual image displayed superimposed on the foreground while the own vehicle is traveling. FIG. 10A is a diagram showing a real object in the foreground and a virtual image displayed by the HUD device, visually recognized by the observer when facing the front of the vehicle; the upper part shows a comparative example in which the superimposition processing is not executed, and the lower part shows an example of the present embodiment in which the superimposition processing is executed. FIG. 10B is a diagram showing a real object in the foreground and a virtual image displayed by the HUD device, visually recognized by the observer when facing the front of the vehicle; the upper part shows a comparative example in which the superimposition processing is not executed, and the lower part shows an example of the present embodiment in which the superimposition processing is executed. FIG. 11 is a block diagram of a vehicle virtual image display system according to some embodiments. FIG. 12A is a flow diagram showing a method of executing visibility reduction processing based on the detection result of the observer's eye position, face position, or face orientation.
FIG. 12B is a flow diagram following FIG. 12A. FIG. 13 is an image diagram showing the eye position (face position), the amount of change in the eye position (face position), the change speed of the eye position (face position), and the like detected at each predetermined cycle time. FIG. 14 is a diagram showing an example of the foreground visually recognized by the observer, the AR virtual image when the visibility reduction processing is executed, and the AR-related virtual image, while the own vehicle is traveling. FIG. 15 is a diagram illustrating a HUD device in some embodiments in which the eyebox can be moved vertically by rotating the relay optical system. FIG. 16 is a flow diagram showing a method of executing visibility increasing processing while the visibility reduction processing is being executed.
 FIGS. 1 to 16 below provide a description of the configuration and operation of an exemplary vehicle display system. The present invention is not limited by the following embodiments (including the contents of the drawings). Modifications (including deletion of constituent elements) can of course be made to the following embodiments. In the following description, explanations of known technical matters are omitted as appropriate in order to facilitate understanding of the present invention.
 Refer to FIG. 1. FIG. 1 is a diagram showing an example of the configuration of a vehicle virtual image display system including a parallax 3D-HUD device. In FIG. 1, the left-right direction of the vehicle (an example of a moving body) 1 (in other words, the width direction of the vehicle 1) is the X axis (the positive direction of the X axis is the left direction when facing the front of the vehicle 1); the up-down direction (in other words, the height direction of the vehicle 1), along a line segment orthogonal to the left-right direction and orthogonal to the ground or a surface corresponding to the ground (here, the road surface 6), is the Y axis (the positive direction of the Y axis is upward); and the front-rear direction, along a line segment orthogonal to each of the left-right and up-down directions, is the Z axis (the positive direction of the Z axis is the straight-ahead direction of the vehicle 1). The same applies to the other drawings.
 As illustrated, the vehicle display system 10 provided in the vehicle (own vehicle) 1 includes a face detection unit 409 for pupil (or face) detection that detects the positions and line-of-sight directions of the left eye 700L and right eye 700R of an observer (typically, a driver seated in the driver's seat of the vehicle 1), a vehicle exterior sensor 411 composed of a camera (for example, a stereo camera) that captures images of the area in front of the vehicle 1 (in a broad sense, its surroundings), a head-up display device (hereinafter also referred to as a HUD device) 20, a display control device 30 that controls the HUD device 20, and the like.
 FIG. 2 is a diagram showing one aspect of the configuration of the head-up display device. The HUD device 20 is installed, for example, in the dashboard (reference numeral 5 in FIG. 1). The HUD device 20 includes a stereoscopic display device (an example of a display device) 40, a relay optical system 80, and a housing 22 that houses the display device 40 and the relay optical system 80 and has a light exit window 21 through which the display light K from the display device 40 can be emitted from the inside to the outside.
 The display device 40 is here a parallax 3D display device. This display device (parallax 3D display device) 40 is composed of a display 50, which is an autostereoscopic display using a multi-viewpoint image display method capable of controlling depth expression by causing a left-viewpoint image and a right-viewpoint image to be visually recognized, and a light source unit 60 functioning as a backlight.
 The display 50 includes a spatial light modulation element 51 that modulates the illumination light from the light source unit 60 to generate an image, and an optical layer (an example of a light-beam separating section) 52 that has, for example, a lenticular lens or a parallax barrier and separates the light emitted from the spatial light modulation element 51 into left-eye display light (reference numeral K10 in FIG. 1), such as left-eye light rays K11, K12, and K13, and right-eye display light (reference numeral K20 in FIG. 1), such as right-eye light rays K21, K22, and K23. The optical layer 52 includes optical filters such as a lenticular lens, a parallax barrier, a lens array, and a microlens array. However, this is an example and is not a limitation. Embodiments of the optical layer 52 are not limited to the above-described optical filters and include all forms of optical layers arranged on the front or rear surface of the spatial light modulation element 51, as long as they generate the left-eye display light (K10 in FIG. 1) and the right-eye display light (K20 in FIG. 1) from the light emitted from the spatial light modulation element 51. Some embodiments of the optical layer 52 may be electrically controlled to generate the left-eye display light (K10 in FIG. 1) and the right-eye display light (K20 in FIG. 1) from the light emitted from the spatial light modulation element 51; examples include a liquid crystal lens. That is, embodiments of the optical layer 52 may include those that are electrically controlled and those that are not.
 In addition, instead of or in addition to the optical layer (an example of a light-beam separating section) 52, the display device 40 may be configured with the light source unit 60 as a directional backlight unit (an example of a light-beam separating section) to emit the left-eye display light (K10 in FIG. 1), such as the left-eye light rays K11, K12, and K13, and the right-eye display light (K20 in FIG. 1), such as the right-eye light rays K21, K22, and K23. Specifically, for example, the display control device 30 described later causes the spatial light modulation element 51 to display a left-viewpoint image when the directional backlight unit emits illumination light directed toward the left eye 700L, thereby directing the left-eye display light K10, such as the light rays K11, K12, and K13, toward the observer's left eye 700L, and causes the spatial light modulation element 51 to display a right-viewpoint image when the directional backlight unit emits illumination light directed toward the right eye 700R, thereby directing the right-eye display light K20, such as the light rays K21, K22, and K23, toward the observer's right eye 700R. However, the above embodiment of the directional backlight unit is an example and is not a limitation.
 The display control device 30 described later can control the aspect of the perceptual virtual image FU displayed by the HUD device 20 (perceived by the observer) by, for example, executing image rendering processing (graphics processing), display driving processing, and the like so as to direct the left-eye display light K10 of the left-viewpoint image V10 toward the observer's left eye 700L and the right-eye display light K20 of the right-viewpoint image V20 toward the right eye 700R, and by adjusting the left-viewpoint image V10 and the right-viewpoint image V20. The display control device 30 described later may also control the display (display 50) so as to generate a light field that (approximately) reproduces, as they are, light rays output in various directions from points existing in a certain space.
 The relay optical system 80 has curved mirrors (concave mirrors or the like) 81 and 82 that reflect the light from the display device 40 and project the image display light K10 and K20 onto the windshield (projection target member) 2. However, it may further include other optical members (including refractive optical members such as lenses, diffractive optical members such as holograms, reflective optical members, or any combination thereof).
 In FIG. 1, the display device 40 of the HUD device 20 displays images with parallax (parallax images) for the left and right eyes, respectively. As shown in FIG. 1, each parallax image is displayed as V10 and V20 formed on the virtual image display surface (virtual image plane) VS. The focus of each of the observer's eyes is adjusted to the position of the virtual image display area VS. The position of the virtual image display area VS is referred to as the "adjustment position (or imaging position)", and the distance (see reference numeral D10 in FIG. 4) from a predetermined reference position (for example, the center 205 of the eyebox 200 of the HUD device 20, the observer's viewpoint position, or a specific position of the vehicle 1) to the virtual image display area VS is referred to as the "adjustment distance (imaging distance)".
 However, in reality, since the human brain fuses the individual images (virtual images), the person recognizes the perceptual image (here, an arrowhead figure for navigation) FU as being displayed at a position farther back than the adjustment position (for example, a position determined by the convergence angle between the left-viewpoint image V10 and the right-viewpoint image V20; the smaller the convergence angle, the farther from the observer the position is perceived to be). The perceptual virtual image FU may be referred to as a "stereoscopic virtual image", and may also be referred to as a "stereoscopic image" when "image" is interpreted broadly to include virtual images. It may also be referred to as a "stereoscopic display", "3D display", or the like. Note that the HUD device 20 can also display the left-viewpoint image V10 and the right-viewpoint image V20 so that the perceptual image FU is visually recognized at a position nearer than the adjustment position.
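Although the patent does not give a formula, the relationship between the on-screen parallax and the perceived distance in this uncrossed-parallax case can be sketched with standard stereoscopic geometry; the symbols E and p below are not defined in the patent and are used only for illustration:

$$ D_{\mathrm{FU}} \;=\; \frac{E \, D_{10}}{E - p}, \qquad 0 \le p < E, $$

where \(E\) is the interpupillary distance, \(D_{10}\) is the distance from the reference position (approximately the eyes) to the virtual image display area VS, and \(p\) is the horizontal separation on VS between corresponding points of the left-viewpoint image V10 and the right-viewpoint image V20. As \(p\) approaches \(E\), the convergence angle decreases and the perceived distance \(D_{\mathrm{FU}}\) grows without bound; \(p = 0\) places the perceptual image FU on VS itself.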
 Next, refer to FIGS. 3 and 4. FIG. 3 is a diagram showing an example of the foreground visually recognized by the observer while the vehicle 1 is traveling and a perceptual image displayed superimposed on the foreground. FIG. 4 is a diagram conceptually showing the positional relationship between the left-viewpoint virtual image and the right-viewpoint virtual image displayed on the virtual image plane and the perceptual image perceived by the observer from these left-viewpoint and right-viewpoint virtual images.
 In FIG. 3, the vehicle 1 is traveling on a straight road (road surface) 6. The HUD device 20 is installed inside the dashboard 5. Display light K (K10, K20) is projected from the light exit window 21 of the HUD device 20 onto the projection target portion (the front windshield of the vehicle 1) 2. In the example of FIG. 3, a first content image FU1, which is superimposed on the road surface 6 and indicates the route of the vehicle 1 (here, going straight ahead), and a second content image FU2, which likewise indicates the route of the vehicle 1 (here, going straight ahead) and is perceived farther away than the first content image FU1, are displayed.
 As shown in the left diagram of FIG. 4, the HUD device 20 (1) emits the left-eye display light K10 toward the projection target portion 2 at a position and angle such that it is reflected by the projection target portion 2 toward the left eye 700L detected by the face detection unit 409, and forms a first left-viewpoint content image V11 at a predetermined position in the virtual image display area VS as seen from the left eye 700L; and (2) emits the right-eye display light K20 toward the projection target portion 2 at a position and angle such that it is reflected by the projection target portion 2 toward the right eye 700R, and forms a first right-viewpoint content image V21 at a predetermined position in the virtual image display area VS as seen from the right eye 700R. The first content image FU1, perceived from the first left-viewpoint content image V11 and the first right-viewpoint content image V21 that have parallax, is visually recognized at a position a distance D21 farther back than the virtual image display area VS (a position separated from the above reference position by a distance D31).
 Similarly, as shown in the right diagram of FIG. 4, the HUD device 20 (1) emits left-eye display light K10 toward the projection target 2 at a position and angle such that it is reflected by the projection target 2 toward the left eye 700L detected by the face detection unit 409, and forms a second left-viewpoint content image V12 at a predetermined position in the virtual image display area VS as seen from the left eye 700L, and (2) emits right-eye display light K20 toward the projection target 2 at a position and angle such that it is reflected toward the right eye 700R, and forms a second right-viewpoint content image V22 at a predetermined position in the virtual image display area VS as seen from the right eye 700R. The second content image FU2, perceived from the second left-viewpoint content image V12 and the second right-viewpoint content image V22 that have parallax, is visually recognized at a position a distance D22 behind the virtual image display area VS (a position separated from the above-described reference position by a distance D32).
 Specifically, the distance from the reference position to the virtual image display area VS (imaging distance D10) is set to, for example, 4 m; the distance from the reference position to the first content image FU1 shown in the left diagram of FIG. 4 (first perceived distance D31) is set to, for example, 7 m; and the distance from the reference position to the second content image FU2 shown in the right diagram of FIG. 4 (second perceived distance D32) is set to, for example, 10 m. However, these are merely examples and are not limiting.
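 The example distances above can be related to the lateral separation between the left-viewpoint and right-viewpoint images on the virtual image plane by similar triangles. The following sketch assumes a simple two-pinhole eye model and a nominal 65 mm interpupillary distance; the function name and numeric values are illustrative and are not taken from the embodiment.

def parallax_offset_on_image_plane(perceived_distance_m: float,
                                   imaging_distance_m: float,
                                   interpupillary_distance_m: float = 0.065) -> float:
    """Lateral separation between the left- and right-viewpoint images on the
    virtual image plane that makes the fused image appear at the requested
    perceived distance (both distances measured from the eyes).

    A positive value means uncrossed disparity (perceived behind the plane,
    as for FU1 and FU2); a negative value means crossed disparity
    (perceived in front of the plane).
    """
    dp, di, e = perceived_distance_m, imaging_distance_m, interpupillary_distance_m
    return e * (dp - di) / dp

# With the example values above (imaging distance 4 m):
print(parallax_offset_on_image_plane(7.0, 4.0))   # first content image FU1
print(parallax_offset_on_image_plane(10.0, 4.0))  # second content image FU2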
 FIG. 5 is a diagram conceptually showing a virtual object placed at a target position in the real scene and an image displayed in the virtual image display area such that the virtual object is visually recognized at that target position in the real scene. Note that the HUD device 20 shown in FIG. 5 is an example that performs 2D display rather than 3D display. That is, the display device 40 of the HUD device 20 shown in FIG. 5 is a 2D display device rather than a stereoscopic display device (a stereoscopic display device can also perform 2D display). As shown in FIG. 5, as seen from the viewer 700, the depth direction is the Z-axis direction, the left-right direction (the width direction of the vehicle 1) is the X-axis direction, and the up-down direction (the vertical direction of the vehicle 1) is the Y-axis direction. The direction away from the viewer is the positive Z-axis direction, the leftward direction as seen from the viewer is the positive X-axis direction, and the upward direction as seen from the viewer is the positive Y-axis direction.
 The viewer 700 perceives the virtual object FU at a predetermined target position PT in the real scene by visually recognizing, through the projection target 2, the virtual image V formed (imaged) in the virtual image display area VS. The viewer visually recognizes the virtual image V of the display light K reflected by the projection target 2. At this time, if the virtual image V is, for example, an arrow indicating a route, the arrow of the virtual image V is displayed in the virtual image display area VS such that the virtual object FU is visually recognized as being placed at the predetermined target position PT in the foreground of the vehicle 1. Specifically, the HUD device 20 (display control device 30) renders the image to be displayed on the display device 40 such that the virtual image display area VS shows a virtual image V of a predetermined size and shape obtained by projectively transforming, onto the virtual image display area VS, a virtual object FU of a predetermined size and shape placed at the target position PT, with the center between the observer's left eye 700L and right eye 700R as the origin of the projective transformation. Then, even when the observer's eye position moves, the HUD device 20 (display control device 30) changes the position of the virtual image V displayed in the virtual image display area VS such that the virtual object FU is perceived at the same target position PT as before the eye position moved; in this way, the virtual object FU (virtual image V) can be recognized as if it were at the target position PT even though it is displayed at a position (the virtual image display area VS) away from the target position PT. That is, the HUD device 20 (display control device 30) expresses natural motion parallax by changing the position of the image on the display device 40 (of the virtual image V in the virtual image display area VS) based on the movement of the eye position (a change in size or shape may be added to this); in other words, by adding motion parallax to the virtual image (image) through image correction that accompanies the movement of the eye position, the HUD device 20 makes depth easier to perceive. In the description of this embodiment, such correction of the image position that expresses motion parallax in accordance with a change in eye position is referred to as motion parallax addition processing (an example of eye-tracking image correction processing). The motion parallax addition processing is not limited to image position correction that completely reproduces natural motion parallax, and may also include image position correction that approximates natural motion parallax. Note that the HUD device 20 (display control device 30) may execute the motion parallax addition processing (an example of eye-tracking image correction processing) not only in accordance with a change in the eye position 700, but also based on the observer's face position instead of the eye position 700.
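 One way to picture the projective transformation described above is to intersect the line from the eye (or the midpoint of both eyes) through the target position PT with a plane that approximates the virtual image display area VS. The sketch below is a minimal illustration under that planar approximation, using the viewer coordinate system of FIG. 5; the names and numeric values are assumptions and not part of the embodiment.

from dataclasses import dataclass

@dataclass
class Point3D:
    x: float  # left-right (X axis, positive to the viewer's left)
    y: float  # up-down (Y axis, positive upward)
    z: float  # depth (Z axis, positive away from the viewer)

def project_to_image_plane(target: Point3D, eye: Point3D, plane_z: float) -> tuple[float, float]:
    """Intersect the line from the eye through the target with the plane
    z = plane_z (an approximation of the virtual image display area VS)
    and return the (x, y) position where the virtual image V must be
    drawn so that it is seen at the target position PT from that eye.
    """
    t = (plane_z - eye.z) / (target.z - eye.z)  # parameter along the eye-target ray
    x = eye.x + t * (target.x - eye.x)
    y = eye.y + t * (target.y - eye.y)
    return x, y

# Example: a target 20 m ahead on the road, image plane 4 m ahead, eye at the origin.
print(project_to_image_plane(Point3D(0.5, -1.2, 20.0), Point3D(0.0, 0.0, 0.0), 4.0))

 Re-evaluating this projection whenever the detected eye position changes is what keeps the virtual object FU anchored at the target position PT; this is the idea behind both the motion parallax addition processing and the superimposition processing described below.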
 FIG. 6 is a diagram for explaining the method of the motion parallax addition processing in this embodiment. The display control device 30 (processor 33) of this embodiment controls the HUD device 20 to display virtual images V41, V42, and V43 formed (imaged) in the virtual image display area VS through the projection target 2. The virtual image V41 is set at a target position PT11 at a perceived distance D33 (a position a distance D23 behind the virtual image display area VS); the virtual image V42 is set at a target position PT12 at a perceived distance D34 longer than the perceived distance D33 of the virtual image V41 (a position a distance D24 (>D23) behind the virtual image display area VS); and the virtual image V43 is set at a target position PT13 at a perceived distance D35 longer than the perceived distance D34 of the virtual image V42 (a position a distance D25 (>D24) behind the virtual image display area VS). Since the correction amount of the image on the display device 40 corresponds to the correction amount of the virtual image in the virtual image display area VS, FIG. 6 uses the same reference signs C1, C2, and C3 for the correction amounts of the virtual images corresponding to the correction amounts C1, C2, and C3 of the image on the display device 40 (the same applies to the signs Cy11 (Cy) and Cy21 (Cy) in FIG. 8).
 When the observer's face position (eye position 700) moves rightward (in the negative X-axis direction) by ΔPx10 from position Px11 to position Px12, the display control device 30 (processor 33) executes the motion parallax addition processing to correct the positions at which the virtual images V41, V42, and V43 are displayed in the virtual image display area VS by correction amounts C1, C2 (>C1), and C3 (>C2), respectively, in the same direction as the movement of the observer's face position (eye position 700). FIG. 7A is a comparative example showing a virtual image V901, which is the virtual image V41 seen from position Px11 in FIG. 6 as it appears from position Px12 in FIG. 6 without the motion parallax addition processing of this embodiment, a virtual image V902, which is the virtual image V42 seen from position Px12 without the processing, and a virtual image V903, which is the virtual image V43 seen from position Px12 without the processing; FIG. 7B is a diagram showing the virtual images V44, V45, and V46 seen from position Px12 in FIG. 6 when the motion parallax addition processing of this embodiment is performed. Note that in FIG. 7B the differences in the positions of the virtual images V44, V45, and V46 are exaggerated so that the differences in correction amount are easy to see. That is, by making the correction amounts of the positions of the plurality of virtual images V41, V42, and V43 accompanying the movement of the eye position differ according to the differences in their perceived distances D33, D34, and D35, the display control device 30 (processor 33) can make the observer perceive motion parallax even just among the plurality of virtual images V41 (V44), V42 (V45), and V43 (V46). More specifically, the display control device 30 (processor 33) adds motion parallax to the plurality of virtual images V41 (V44), V42 (V45), and V43 (V46) by increasing the correction amount in the motion parallax addition processing as the set perceived distance D30 becomes longer.
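 The rule that a longer set perceived distance requires a larger correction amount follows directly from the planar approximation above: when the eye moves laterally by Δ, the intersection of the eye-target line with the image plane moves by Δ·(Dp − Di)/Dp in the same direction, with both distances measured from the eye. The sketch below is illustrative only; the perceived distances used for C1, C2, and C3 are placeholder values, not values defined in the embodiment. The same relation applies to the vertical movements of FIG. 8 described below (correction amounts Cy11 and Cy21).

def motion_parallax_correction(eye_shift_m: float,
                               perceived_distance_m: float,
                               imaging_distance_m: float) -> float:
    """Amount by which a virtual image on the image plane must be shifted,
    in the same direction as the eye movement, so that the perceived image
    stays on its target position behind the plane.

    Derived from intersecting the eye-target line with the image plane:
    the correction approaches eye_shift_m as the perceived distance
    becomes much larger than the imaging distance.
    """
    dp, di = perceived_distance_m, imaging_distance_m
    return eye_shift_m * (dp - di) / dp

eye_shift = 0.04  # 40 mm lateral head movement (illustrative)
for name, dp in (("C1 (placeholder 7 m)", 7.0),
                 ("C2 (placeholder 10 m)", 10.0),
                 ("C3 (placeholder 15 m)", 15.0)):
    print(name, motion_parallax_correction(eye_shift, dp, 4.0))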
 Embodiments of the eye-tracking image correction processing are not limited to the motion parallax addition processing described above, and may include the superimposition processing described below. That is, the HUD device 20 (display control device 30) may execute superimposition processing (an example of eye-tracking image correction processing) in accordance with a change in the observer's eye position 700 (a change in face position).
 FIG. 8 is a diagram for explaining the method of the motion parallax addition processing when the eye position (face position) moves in the vertical direction in this embodiment. When the observer's face position (eye position 700) moves upward (in the positive Y-axis direction) from the position Py12, the display control device 30 (processor 33) executes the motion parallax addition processing to correct the position at which the virtual image V is displayed in the virtual image display area VS by a correction amount Cy11 in the same direction as the movement of the observer's face position (eye position 700), that is, upward (the positive Y-axis direction), as shown in FIG. 8(a) (the position of the virtual image V is changed from the position V48 to the position V47). Likewise, when the observer's face position (eye position 700) moves downward (in the negative Y-axis direction) from the position Py12, the display control device 30 (processor 33) executes the motion parallax addition processing to correct the position at which the virtual image V is displayed in the virtual image display area VS by a correction amount Cy21 in the same direction as the movement of the observer's face position (eye position 700), that is, downward (the negative Y-axis direction), as shown in FIG. 8(c) (the position of the virtual image V is changed from the position V48 to the position V49). This makes it possible to recognize the virtual object FU (virtual image V) as if it were at the target position PT even though it is displayed at a position (the virtual image display area VS) away from the target position PT (it can strengthen the sense that the virtual object FU (virtual image V) is at the target position PT).
 FIG. 9 is a diagram showing real objects 300 existing in the foreground and virtual images V displayed by the HUD device 20 of this embodiment, as visually recognized when the observer faces forward from the driver's seat of the vehicle 1. The virtual images V shown in FIG. 9 include AR (Augmented Reality) virtual images V60, whose displayed position, direction, and shape can be changed in accordance with the position, direction, and shape of a real object 300, and non-AR virtual images V70, whose displayed position, direction, and shape are set independently of the position, direction, and shape of a real object 300. The AR virtual image V60 is displayed at a position (target position PT) corresponding to the position of a real object 300 existing in the real scene. The AR virtual image V60 is displayed, for example, at a position superimposed on the real object 300 or in the vicinity of the real object 300, and emphasizes and notifies the presence of that real object 300. In other words, the "position corresponding to the position of the real object 300 (target position PT)" is not limited to a position visually recognized as superimposed on the real object 300 as seen from the observer, and may be a position in the vicinity of the real object 300. The AR virtual image V60 preferably does not hinder the visual recognition of the real object 300, but its form is arbitrary.
 The AR virtual images V60 shown in FIG. 9 include navigation virtual images V61 and V62 indicating a guidance route, emphasis virtual images V63 and V64 that emphasize and notify objects requiring attention, and a POI virtual image V65 that indicates a landmark, a predetermined building, or the like. The position corresponding to the position of the real object 300 (target position PT) is, for the navigation virtual images V61 and V62, the position of the road surface 311 (an example of the real object 300) on which they are superimposed; for the emphasis virtual image V63, a position around a person 313 (an example of the real object 300); for the emphasis virtual image V64, a position near another vehicle 314 (an example of the real object 300); and for the POI virtual image V65, a position around a building 315 (an example of the real object 300). As described above, the display control device 30 (processor 33) increases the correction amount C accompanying the movement of the observer's eye position in the motion parallax addition processing as the perceived distance D30 set for the virtual image V becomes longer. That is, if the perceived distances D30 set for the virtual images V shown in FIG. 9 are, in order from longest, V65 → V64 → V63 → V62 → V61, the display control device 30 (processor 33) sets the correction amount C accompanying the movement of the observer's eye position such that the correction amount of V65 > the correction amount of V64 > the correction amount of V63 > the correction amount of V62 > the correction amount of V61. Note that, since the virtual image V62 and the virtual image V61 are virtual images of the same kind and are displayed close to each other, the display control device 30 (processor 33) may set the correction amounts of V62 and V61 accompanying the movement of the observer's eye position to be the same.
 In some embodiments, the display control device 30 (processor 33) may set the correction amount C accompanying the movement of the observer's eye position to zero for the non-AR virtual images V70 (that is, they need not be corrected as the observer's eye position moves).
 In other embodiments, the display control device 30 (processor 33) may correct the non-AR virtual images V70 as the observer's eye position moves. In the example shown in FIG. 9, the non-AR virtual images V70 (V71, V72) are arranged in the lower portion of the virtual image display area VS, and the area of the road surface 311 (the real object 300) overlapping them is closer to the vehicle 1 than the area of the road surface 311 overlapped by the navigation virtual image V61 of FIG. 9. That is, in some embodiments, the display control device 30 (processor 33) may set the perceived distance D30 of the non-AR virtual images V70 (V71, V72) shorter than the perceived distance D30 of the AR virtual images V60 (more narrowly, of the navigation virtual image V61 arranged lowest among the AR virtual images V60), and may set the correction amount C of the non-AR virtual images V70 accompanying the movement of the observer's eye position smaller than the correction amount C of the AR virtual images V60 (more narrowly, of the navigation virtual image V61 arranged lowest among the AR virtual images V60).
 FIGS. 10A and 10B are diagrams showing a real object in the foreground and a virtual image displayed by the HUD device, as visually recognized when the observer faces the front of the vehicle; the upper diagram in each shows a comparative example in which the superimposition processing is not executed, and the lower diagram shows an example of this embodiment in which the superimposition processing is executed.
 In a situation where the HUD device 20 is displaying a rectangular virtual image visually recognized by the observer as surrounding a forward vehicle 301 (an example of the real object 300) existing in the foreground, if, for example, the observer's eye position 700 moves rightward (in the negative X-axis direction), the virtual image V911 of the comparative example without the superimposition processing is visually recognized as shifted leftward with respect to the forward vehicle 301 (real object 300), as shown in the upper diagram of FIG. 10A. By executing the superimposition processing based on the change in the observer's eye position 700 (the rightward movement), the display control device 30 of this embodiment produces the AR virtual image V66 with the superimposition processing, which is visually recognized without deviation (or with little deviation) from the forward vehicle 301 (real object 300), as shown in the lower diagram of FIG. 10A. Based on the observer's eye position 700 (or face position) detected by the face detection unit 409 and the position, direction, and shape of the real object 300 detected by the vehicle exterior sensor 411, the display control device 30 of this embodiment changes the position of the virtual image V displayed in the virtual image display area VS so that the virtual image V and the real object 300 are in a specific positional relationship stored in the memory 37. Here, the "specific positional relationship" is, for example, a position overlapping the real object 300, the vicinity of the real object 300, or a position set with reference to the real object 300.
 Similarly, in a situation where the HUD device 20 is displaying an arrow-shaped virtual image visually recognized by the observer as lying along a driving lane 302 (an example of the real object 300) existing in the foreground, if, for example, the observer's eye position 700 moves downward (in the negative Y-axis direction), the virtual image V912 of the comparative example without the superimposition processing is visually recognized as shifted upward with respect to the driving lane 302 (real object 300), as shown in the upper diagram of FIG. 10B. By executing the superimposition processing based on the change in the observer's eye position 700 (the downward movement), the display control device 30 of this embodiment produces the AR virtual image V67 with the superimposition processing, which is visually recognized without deviation (or with little deviation) from the driving lane 302 (real object 300), as shown in the lower diagram of FIG. 10B. Based on the observer's eye position 700 (or face position) detected by the face detection unit 409 and the position, direction, and shape of the driving lane 302 (real object 300) detected by the vehicle exterior sensor 411 (road information database 403), the display control device 30 of this embodiment changes the position, direction, and shape of the virtual image V displayed in the virtual image display area VS so as to match the position, direction, and shape of the real object 300. That is, the AR virtual images V60 (V51, V52) on which the superimposition processing is executed are images whose position, direction, shape, and the like are changed so as to match changes in the position, direction, shape, and the like of the real object 300 in the foreground (the real world) as seen from the observer's eye position 700 (or face position); in this case, they are also called contact-analog (Kontaktanalog) images.
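 As a concrete illustration of the superimposition processing, the drawn position of an AR virtual image can be re-derived each frame from the latest detected eye position and the detected real object position. The following is a minimal sketch under the same planar approximation of the virtual image display area VS; coordinates follow the axes of FIG. 5, and every numeric value is a placeholder rather than a value from the embodiment.

def superimpose(real_object_xyz: tuple[float, float, float],
                eye_xyz: tuple[float, float, float],
                plane_z: float) -> tuple[float, float]:
    """Re-anchor an AR virtual image each frame: given the latest eye
    position from the face detection unit and the real object position
    from the vehicle exterior sensor, return where on the (approximated
    planar) virtual image display area the image must be drawn so that
    it stays superimposed on the real object.
    """
    ox, oy, oz = real_object_xyz
    ex, ey, ez = eye_xyz
    t = (plane_z - ez) / (oz - ez)
    return ex + t * (ox - ex), ey + t * (oy - ey)

# The forward vehicle of FIG. 10A, seen first from the original eye position
# and then after the eye has moved 40 mm to the right (negative X): the drawn
# position moves in the same direction, cancelling the apparent leftward shift
# of the comparative example.
print(superimpose((0.0, 0.2, 30.0), (0.0, 0.0, 0.0), 4.0))
print(superimpose((0.0, 0.2, 30.0), (-0.04, 0.0, 0.0), 4.0))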
 FIG. 11 is a block diagram of a vehicular virtual image display system according to some embodiments. The display control device 30 includes one or more I/O interfaces 31, one or more processors 33, one or more image processing circuits 35, and one or more memories 37. The various functional blocks illustrated in FIG. 11 may be implemented in hardware, software, or a combination of both. FIG. 11 is only one embodiment; the illustrated components may be combined into fewer components, or there may be additional components. For example, the image processing circuit 35 (for example, a graphics processing unit) may be included in one or more of the processors 33.
 As illustrated, the processor 33 and the image processing circuit 35 are operatively coupled to the memory 37. More specifically, by executing a computer program stored in the memory 37, the processor 33 and the image processing circuit 35 can control the vehicular display system 10 (display device 40), for example by generating and/or transmitting image data. The processor 33 and/or the image processing circuit 35 may include at least one general-purpose microprocessor (for example, a central processing unit (CPU)), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array (FPGA), or any combination thereof. The memory 37 includes any type of magnetic medium such as a hard disk, any type of optical medium such as a CD or DVD, any type of semiconductor memory such as volatile memory, and non-volatile memory. Volatile memory may include DRAM and SRAM, and non-volatile memory may include ROM and NVRAM.
 As illustrated, the processor 33 is operatively coupled to the I/O interface 31. The I/O interface 31 communicates (also referred to as CAN communication) with, for example, a vehicle ECU 401 described later provided in the vehicle and/or other electronic devices (reference signs 403 to 419 described later) in accordance with the CAN (Controller Area Network) standard. The communication standard adopted by the I/O interface 31 is not limited to CAN, and includes in-vehicle communication (internal communication) interfaces such as wired communication interfaces (for example, CAN FD (CAN with Flexible Data Rate), LIN (Local Interconnect Network), Ethernet (registered trademark), MOST (Media Oriented Systems Transport; MOST is a registered trademark), UART, or USB) and short-range wireless communication interfaces within several tens of meters (for example, a personal area network (PAN) such as a Bluetooth (registered trademark) network, or a local area network (LAN) such as an 802.11x Wi-Fi (registered trademark) network). The I/O interface 31 may also include a vehicle-exterior communication (external communication) interface to a wide-area communication network (for example, the Internet) based on a cellular communication standard such as a wireless wide area network (WWAN, IEEE 802.16-2004 (WiMAX: Worldwide Interoperability for Microwave Access)), IEEE 802.16e-based (Mobile WiMAX), 4G, 4G-LTE, LTE Advanced, or 5G.
 As illustrated, the processor 33 is interoperably coupled to the I/O interface 31, whereby it can exchange information with various other electronic devices and the like connected to the vehicular display system 10 (I/O interface 31). To the I/O interface 31 are operatively coupled, for example, a vehicle ECU 401, a road information database 403, a host vehicle position detection unit 405, an operation detection unit 407, a face detection unit 409, a vehicle exterior sensor 411, a brightness detection unit 413, an IMU 415, a portable information terminal 417, an external communication device 419, and the like. The I/O interface 31 may include a function of processing (converting, computing, analyzing) information received from the other electronic devices or the like connected to the vehicular display system 10.
 The display device 40 is operatively coupled to the processor 33 and the image processing circuit 35. Accordingly, the image displayed by the spatial light modulation element 51 may be based on image data received from the processor 33 and/or the image processing circuit 35. The processor 33 and the image processing circuit 35 control the image displayed by the spatial light modulation element 51 based on information acquired from the I/O interface 31.
 The vehicle ECU 401 acquires, from sensors and switches provided in the vehicle 1, the state of the vehicle 1 (for example, the ON/OFF state of a start switch such as an accessory switch (ACC) or an ignition switch (IGN) (an example of start-up information), mileage, vehicle speed, accelerator pedal opening, brake pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, various warning states, attitude (including roll angle and/or pitching angle), and vehicle vibration (including the magnitude, frequency of occurrence, and/or frequency of the vibration)), and collects and manages (and may also control) that state of the vehicle 1; as part of its functions, it can output a signal indicating a numerical value of the state of the vehicle 1 (for example, the vehicle speed of the vehicle 1) to the processor 33 of the display control device 30. In addition to, or instead of, simply transmitting numerical values detected by sensors or the like to the processor 33 (for example, a pitching angle of 3 [degrees] in the forward-leaning direction), the vehicle ECU 401 may transmit to the processor 33 a determination result based on one or more states of the vehicle 1 including the numerical values detected by the sensors (for example, that the vehicle 1 satisfies a predetermined forward-leaning condition) and/or an analysis result (for example, combined with the brake pedal opening information, that the vehicle has leaned forward due to braking). For example, the vehicle ECU 401 may output to the display control device 30 a signal indicating a determination result that the vehicle 1 satisfies a predetermined determination condition stored in advance in a memory (not shown) of the vehicle ECU 401. Note that the I/O interface 31 may acquire the above-described information from the sensors and switches provided in the vehicle 1 without going through the vehicle ECU 401.
 The vehicle ECU 401 may also output to the display control device 30 an instruction signal designating an image to be displayed by the vehicular display system 10; in this case, the coordinates, size, type, and display mode of the image, the notification necessity degree of the image, and/or necessity-related information serving as a basis for determining the notification necessity degree may be added to the instruction signal and transmitted.
 The road information database 403 is included in a navigation device (not shown) provided in the vehicle 1, or in an external server connected to the vehicle 1 via the vehicle-exterior communication interface (I/O interface 31). Based on the position of the vehicle 1 acquired from the host vehicle position detection unit 405 described later, it may read out and transmit to the processor 33 information around the vehicle 1 (real-object-related information around the vehicle 1), namely information on the road on which the vehicle 1 travels (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, branch roads, traffic regulations, and the like) and feature information (buildings, bridges, rivers, and the like), including its presence or absence, position (including the distance to the vehicle 1), direction, shape, type, and detailed information. The road information database 403 may also calculate an appropriate route from the departure point to the destination (navigation information) and output to the processor 33 a signal indicating that navigation information or image data showing the route.
 The host vehicle position detection unit 405 is a GNSS (Global Navigation Satellite System) receiver or the like provided in the vehicle 1; it detects the current position and orientation of the vehicle 1 and outputs a signal indicating the detection result, via the processor 33 or directly, to the road information database 403, a portable information terminal 417 described later, and/or an external communication device 419. The road information database 403, the portable information terminal 417 described later, and/or the external communication device 419 may acquire the position information of the vehicle 1 from the host vehicle position detection unit 405 continuously, intermittently, or at each predetermined event, select and generate information about the surroundings of the vehicle 1, and output it to the processor 33.
 The operation detection unit 407 is, for example, a hardware switch provided on the CID (Center Information Display), the instrument panel, or the like of the vehicle 1, or a software switch combining an image with a touch sensor or the like, and outputs to the processor 33 operation information based on operations by an occupant of the vehicle 1 (the user seated in the driver's seat and/or the user seated in the front passenger seat). For example, in response to user operations, the operation detection unit 407 outputs to the processor 33 display area setting information based on an operation for moving the virtual image display area VS, eyebox setting information based on an operation for moving the eyebox 200, information based on an operation for setting the observer's eye position 700, and the like.
 The face detection unit 409 includes a camera, such as an infrared camera, that captures the eye position 700 (see FIG. 1) of the observer seated in the driver's seat of the vehicle 1, and may output the captured image to the processor 33. The processor 33 may acquire the captured image (an example of information from which the eye position 700 can be estimated) from the face detection unit 409 and detect the coordinates of the observer's eye position 700 by analyzing the captured image using a technique such as pattern matching; alternatively, the face detection unit 409 may detect those coordinates and output a signal indicating the detected coordinates of the eye position 700 to the processor 33.
 The face detection unit 409 may also output to the processor 33 an analysis result obtained by analyzing the image captured by the camera (for example, a signal indicating to which of the spatial regions corresponding to a plurality of preset display parameters the observer's eye position 700 belongs). Note that the method of acquiring the eye position 700 of the observer of the vehicle 1, or information from which the observer's eye position 700 can be estimated, is not limited to these; the information may be acquired using any known eye position detection (estimation) technique.
 The face detection unit 409 may also detect the change speed and/or movement direction of the observer's eye position 700 and output to the processor 33 a signal indicating the change speed and/or movement direction of the observer's eye position 700.
 Further, when the face detection unit 409 detects (11) that a newly detected eye position 700 is separated from a previously detected eye position 700 by at least an eye position movement distance threshold stored in advance in the memory 37 (that is, that the movement of the eye position within a predetermined unit time exceeds a prescribed range), (12) that the change speed of the eye position is equal to or higher than an eye position change speed threshold stored in advance in the memory 37, or (13) that the observer's eye position 700 can no longer be detected after movement of the observer's eye position 700 was detected, it may determine that a predetermined determination condition is satisfied and output a signal indicating that state to the processor 33.
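 The three determination conditions (11) to (13) can be summarized as a simple per-cycle check. The sketch below is illustrative; the threshold values stand in for the eye position movement distance threshold and the eye position change speed threshold stored in the memory 37 and are not values defined in the embodiment.

def eye_position_determination(prev_xy, new_xy, dt_s, movement_was_detected,
                               move_threshold_m=0.05, speed_threshold_m_s=0.5):
    """Per-cycle check of the determination conditions (11)-(13).

    prev_xy / new_xy are the previously and newly detected eye positions in
    metres (None when detection failed); dt_s is the time between the two
    detections.  The two thresholds are placeholders for the values stored
    in the memory 37.
    """
    if new_xy is None:
        # Condition (13): detection is lost right after movement was detected.
        return {"condition_met": movement_was_detected, "reason": "lost_after_movement"}
    if prev_xy is None:
        return {"condition_met": False, "reason": "no_previous_observation"}
    dx, dy = new_xy[0] - prev_xy[0], new_xy[1] - prev_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance >= move_threshold_m:
        # Condition (11): movement within the unit time exceeds the prescribed range.
        return {"condition_met": True, "reason": "moved_beyond_threshold"}
    if distance / dt_s >= speed_threshold_m_s:
        # Condition (12): the change speed of the eye position exceeds its threshold.
        return {"condition_met": True, "reason": "speed_beyond_threshold"}
    return {"condition_met": False, "reason": "within_thresholds"}

print(eye_position_determination((0.00, 1.20), (0.07, 1.21), dt_s=0.1, movement_was_detected=False))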
 The face detection unit 409 may also function as a gaze direction detection unit. The gaze direction detection unit may include an infrared camera or a visible light camera that captures the face of the observer seated in the driver's seat of the vehicle 1, and may output the captured image to the processor 33. The processor 33 can acquire the captured image (an example of information from which the gaze direction can be estimated) from the gaze direction detection unit and identify the observer's gaze direction (and/or the gaze position) by analyzing the captured image. Alternatively, the gaze direction detection unit may analyze the image captured by the camera and output to the processor 33 a signal indicating the observer's gaze direction (and/or the gaze position) as the analysis result. Note that the method of acquiring information from which the gaze direction of the observer of the vehicle 1 can be estimated is not limited to these; the information may be acquired using other known gaze direction detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflection method, the scleral reflection method, Purkinje image detection, the search coil method, or the infrared fundus camera method.
 The vehicle exterior sensor 411 detects real objects existing around the vehicle 1 (in front, to the sides, and behind). Real objects detected by the vehicle exterior sensor 411 may include, for example, obstacles (pedestrians, bicycles, motorcycles, other vehicles, and the like), the road surface of the driving lane described later, lane markings, roadside objects, and/or features (buildings and the like). The vehicle exterior sensor is composed of, for example, a detection unit consisting of a radar sensor such as a millimeter-wave radar, an ultrasonic radar, or a laser radar, a camera, or any combination of these, and a processing device that processes (performs data fusion on) the detection data from the one or more detection units. Conventional, well-known techniques are applied to object detection by these radar sensors and camera sensors. Through object detection by these sensors, the presence or absence of a real object in three-dimensional space and, if a real object exists, its position (the relative distance from the vehicle 1, the left-right position and up-down position with the traveling direction of the vehicle 1 taken as the front-rear direction, and so on), size (in the lateral (left-right) direction, the height (up-down) direction, and so on), movement direction (in the lateral (left-right) and depth (front-rear) directions), change speed (in the lateral (left-right) and depth (front-rear) directions), and/or type may be detected. The one or more vehicle exterior sensors 411 can detect real objects in front of the vehicle 1 at each detection cycle of each sensor and output to the processor 33 real object information (the presence or absence of a real object and, if a real object exists, information such as the position, size, and/or type of each real object). Note that this real object information may be transmitted to the processor 33 via another device (for example, the vehicle ECU 401). When a camera is used as the sensor, an infrared camera or a near-infrared camera is desirable so that real objects can be detected even when the surroundings are dark, such as at night; a stereo camera, which can also acquire distance and the like from parallax, is likewise desirable.
 The brightness detection unit 413 detects the illuminance or luminance of a predetermined range of the foreground in front of the passenger compartment of the vehicle 1 as the outside brightness (an example of brightness information), or the illuminance or luminance inside the passenger compartment as the in-vehicle brightness (an example of brightness information). The brightness detection unit 413 is, for example, a phototransistor or a photodiode, and is mounted on the instrument panel, the rearview mirror, the HUD device 20, or the like of the vehicle 1 shown in FIG. 1.
 The IMU 415 can include a combination of one or more sensors (for example, an accelerometer and a gyroscope) configured to detect the position and orientation of the vehicle 1, and changes in these (change speed and change acceleration), based on inertial acceleration. The IMU 415 may output to the processor 33 the detected values (including signals indicating the position and orientation of the vehicle 1 and changes in these (change speed and change acceleration)) and the results of analyzing the detected values. The analysis result is, for example, a signal indicating whether the detected values satisfy a predetermined determination condition; it may be, for example, a signal indicating that the behavior (vibration) of the vehicle 1 is small, derived from values relating to changes in the position or orientation of the vehicle 1 (change speed and change acceleration).
 The portable information terminal 417 is a smartphone, a laptop computer, a smartwatch, or another information device that can be carried by the observer (or another occupant of the vehicle 1). By pairing with the portable information terminal 417, the I/O interface 31 can communicate with the portable information terminal 417 and acquire data recorded in the portable information terminal 417 (or in a server accessed through the portable information terminal). The portable information terminal 417 may, for example, have the same functions as the road information database 403 and the host vehicle position detection unit 405 described above, acquire the road information (an example of real-object-related information), and transmit it to the processor 33. The portable information terminal 417 may also acquire commercial information related to commercial facilities in the vicinity of the vehicle 1 (an example of real-object-related information) and transmit it to the processor 33. In addition, the portable information terminal 417 may transmit schedule information of the owner of the portable information terminal 417 (for example, the observer), incoming call information on the portable information terminal 417, mail reception information, and the like to the processor 33, and the processor 33 and the image processing circuit 35 may generate and/or transmit image data relating to these.
 The external communication device 419 is a communication device that exchanges information with the vehicle 1: for example, another vehicle connected to the vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle To Vehicle), a pedestrian (a portable information terminal carried by the pedestrian) connected by vehicle-to-pedestrian communication (V2P: Vehicle To Pedestrian), or a network communication device connected by road-to-vehicle communication (V2I: Vehicle To roadside Infrastructure); in a broad sense, it includes everything connected by communication with the vehicle 1 (V2X: Vehicle To Everything). The external communication device 419 may acquire, for example, the positions of pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles and the like), road surfaces, lane markings, roadside objects, and/or features (buildings and the like), and output them to the processor 33. The external communication device 419 may also have the same function as the host vehicle position detection unit 405 described above, acquiring the position information of the vehicle 1 and transmitting it to the processor 33, and may further have the function of the road information database 403 described above, acquiring the road information (an example of real-object-related information) and transmitting it to the processor 33. Note that the information acquired from the external communication device 419 is not limited to the above.
 The software components stored in the memory 37 include an eye position detection module 502, an eye position estimation module 504, an eye position prediction module 506, a face detection module 508, a determination module 510, a vehicle state determination module 512, a visibility control module 514, an eye-tracking image processing module 516, a graphics module 518, a light source driving module 520, an actuator driving module 522, and the like.
 FIGS. 12A and 12B are flow diagrams showing a method S100 for executing visibility reduction processing based on the detection result of the observer's eye position, face position, or face orientation. The method S100 is executed in the HUD device 20 including a spatial light modulation element and in the display control device 30 that controls this HUD device 20. Some of the operations of the method S100 described below are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
 First, the display control device 30 (processor 33) acquires information indicating the observer's eye position 700, face position (not shown), or face orientation (not shown) (step S110).
(Example of step S110)
 In step S110 in some embodiments, the display control device 30 (processor 33) detects the observer's eye position 700 via the face detection unit 409 (acquires eye position information indicating the eye position 700) by executing the eye position detection module 502 of FIG. 11. The eye position detection module 502 includes various software components for performing various operations related to detecting the observer's eye position 700, such as detecting coordinates indicating the observer's eye position 700 (positions in the X- and Y-axis directions, an example of eye position information), detecting a coordinate indicating the height of the observer's eyes (a position in the Y-axis direction, an example of eye position information), detecting coordinates indicating the height and depth-direction position of the observer's eyes (positions in the Y- and Z-axis directions, an example of eye position information), and/or detecting coordinates indicating the observer's eye position 700 (positions in the X-, Y-, and Z-axis directions, an example of eye position information).
 The eye position 700 detected by the eye position detection module 502 may include the positions 700R and 700L of the right eye and the left eye, a predetermined one of the right eye position 700R and the left eye position 700L, whichever of the right eye position 700R and the left eye position 700L is detectable (easier to detect), or a position calculated from the right eye position 700R and the left eye position 700L (for example, the midpoint between the right eye position and the left eye position). For example, the eye position detection module 502 determines the eye position 700 based on the observation position acquired from the face detection unit 409 immediately before the timing at which the display settings are updated.
 In addition, the movement direction and/or change speed of the observer's eye position 700 may be detected based on a plurality of observation positions of the observer's eyes, acquired from the face detection unit 409 at different detection timings, and a signal indicating the movement direction and/or change speed of the observer's eye position 700 may be output to the processor 33.
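 A minimal sketch of deriving the movement direction and change speed from two timestamped observation positions is shown below; the data layout and names are assumptions made for illustration only.

def eye_velocity(observations):
    """Estimate the movement direction and change speed of the eye position
    from the two most recent timestamped observations.

    observations: list of (timestamp_s, x_m, y_m) tuples ordered in time,
    as they might be accumulated from the face detection unit 409.
    Returns (direction_unit_vector, speed_m_s), or None when fewer than
    two observations are available.
    """
    if len(observations) < 2:
        return None
    (t0, x0, y0), (t1, x1, y1) = observations[-2], observations[-1]
    dx, dy, dt = x1 - x0, y1 - y0, t1 - t0
    distance = (dx * dx + dy * dy) ** 0.5
    if dt <= 0 or distance == 0:
        return (0.0, 0.0), 0.0
    return (dx / distance, dy / distance), distance / dt

print(eye_velocity([(0.00, 0.000, 1.200), (0.05, 0.010, 1.202)]))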
(Example of step S110)
 In some embodiments, the display control device 30 (processor 33) may acquire information from which the eye position can be estimated (an example of eye position information) by executing the eye position estimation module 504. The information from which the eye position can be estimated is, for example, the captured image acquired from the face detection unit 409, the position of the driver's seat of the vehicle 1, the position of the observer's face, the observer's sitting height, or observation positions of the eyes of a plurality of observers. The eye position estimation module 504 estimates the eye position 700 of the observer of the vehicle 1 from one or more pieces of such information. The eye position estimation module 504 includes various software components for performing various operations related to estimating the observer's eye position 700, such as estimating the observer's eye position 700 from the captured image acquired from the face detection unit 409, the position of the driver's seat of the vehicle 1, the position of the observer's face, the sitting height, or observation positions of the eyes of a plurality of observers. That is, the eye position estimation module 504 may include table data, arithmetic expressions, and the like for estimating the observer's eye position 700 from information from which the eye position can be estimated.
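 As an illustration of the kind of table-based fallback the eye position estimation module 504 could rely on, the sketch below estimates an eye position from a detected face centre or, failing that, from a seat-position lookup table; every offset and table value here is a placeholder, not a value from the embodiment.

def estimate_eye_position(face_center_xy=None, seat_slide_mm=None,
                          eye_offset_from_face_center=(0.0, 0.06)):
    """Fallback estimation of the eye position (an example of eye position
    information) when direct detection is unavailable.

    Prefers the detected face centre plus a fixed face-to-eye offset;
    otherwise falls back to a seat-position lookup table.  All offsets and
    table values are placeholders.
    """
    if face_center_xy is not None:
        return (face_center_xy[0] + eye_offset_from_face_center[0],
                face_center_xy[1] + eye_offset_from_face_center[1])
    if seat_slide_mm is not None:
        # Illustrative lookup: a more rearward seat position is taken to mean a
        # taller occupant and therefore a slightly higher eye point.
        table = [(0, 1.18), (100, 1.21), (200, 1.24), (300, 1.27)]
        return (0.0, min(table, key=lambda row: abs(row[0] - seat_slide_mm))[1])
    return None

print(estimate_eye_position(face_center_xy=(0.02, 1.15)))
print(estimate_eye_position(seat_slide_mm=180))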
(Example of step S110)
 The display control device 30 (processor 33) of some embodiments may also acquire information from which the observer's eye position 700 can be predicted by executing the eye position prediction module 506. The information from which the observer's eye position 700 can be predicted is, for example, the latest observation position acquired from the face detection unit 409, or one or more observation positions acquired in the past. The eye position prediction module 506 includes various software components for performing various operations related to predicting the eye position 700 based on such information. Specifically, for example, the eye position prediction module 506 predicts the observer's eye position 700 at the timing when the image to which the new display settings have been applied is visually recognized by the observer. The eye position prediction module 506 may predict the next value from one or more past observation positions using, for example, the method of least squares or a prediction algorithm such as a Kalman filter, an α-β filter, or a particle filter.
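As a non-limiting illustration of one of the prediction algorithms named above, the following sketch shows an α-β filter predicting the eye position expected at the next display update from past observation positions. The gain values and frame interval are assumptions for illustration and are not taken from the specification.

```python
# Illustrative sketch: α-β filter predicting the next eye position coordinate.
class AlphaBetaPredictor:
    def __init__(self, alpha: float = 0.85, beta: float = 0.005, dt: float = 1 / 60):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.pos = None   # filtered position estimate
        self.vel = 0.0    # filtered velocity estimate

    def update(self, observed_pos: float) -> float:
        """Feed one observation; return the position predicted for the next frame."""
        if self.pos is None:
            self.pos = observed_pos
        # Predict forward by one frame interval.
        predicted = self.pos + self.vel * self.dt
        residual = observed_pos - predicted
        # Correct the estimates with the α and β gains.
        self.pos = predicted + self.alpha * residual
        self.vel = self.vel + (self.beta / self.dt) * residual
        # Prediction for the timing at which the updated image will be seen.
        return self.pos + self.vel * self.dt
```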
(Example of step S110)
 The display control device 30 (processor 33) of some embodiments may also acquire face position information indicating the face position and face orientation information indicating the face orientation by executing the face detection module 508. The face detection module 508 acquires, from the face detection unit 409, detection data of the face area (an example of face position information and of face orientation information), detects facial feature points from the acquired face area detection data, and detects, from the arrangement pattern of the detected feature points, face position information indicating the observer's face position and face orientation information indicating the face orientation. The face detection module 508 may instead acquire detection data of the facial feature points detected by the feature point detection unit 126 (an example of face position information and of face orientation information) and detect the face position information and the face orientation information using the acquired feature point detection data. The face detection module 508 may also simply acquire the face position information and the face orientation information detected by the face detection unit 409. The face orientation detection processing is based, for example, on a method of calculating a face orientation angle from the positional relationship of a plurality of facial parts (for example, the eyes, nose, and mouth), or on a method using the results of machine learning (the face orientation detection processing is not limited to these). Specifically, for example, the face position and face orientation are calculated as positions in three axial directions and angles about the respective axes, expressed by a coordinate on the X axis along the left-right direction and a pitch angle indicating rotation about the X axis, a coordinate on the Y axis along the up-down direction and a yaw angle indicating rotation about the Y axis, and a coordinate on the Z axis along the depth direction and a roll angle indicating rotation about the Z axis.
(Step S120)
 Next, the display control device 30 (processor 33) determines whether a predetermined determination condition is satisfied by executing the determination module 510 (step S120).
(Step S130)
 In step S120 in some embodiments, the display control device 30 (processor 33) executes the determination module 510 of FIG. 11 to determine, based on the eye position information, the face position information, or the face orientation information acquired in step S110, whether the eye position 700, the face position, or the face orientation satisfies a predetermined condition. In the following description, processing using the eye position 700 and the face position will mainly be described. The only difference is that the eye position 700 and the face position are expressed in a positional coordinate system whereas the face orientation is expressed in an angular coordinate system; the processing described below using the change amount and change speed of the eye position 700 (or the face position) is also applicable to processing using the change amount and change speed of the face orientation, and a separate description of the processing using the face orientation is therefore omitted.
 FIG. 13 is a table showing, for each predetermined cycle time t (t1, t2, t3, ... t10): (11) the eye position 700 or face position (or face orientation) in the up-down direction, Py (Y1, Y2, Y3, ... Y10); (12) the change amount ΔPy of the eye position 700 or face position (or face orientation) in the up-down direction, Py1 (=Y2-Y1), Py2 (=Y3-Y2), Py3 (=Y4-Y3), ... Py9 (=Y10-Y9); (13) the change speed Vy of the eye position 700 or face position (or face orientation) in the up-down direction, Vy1 (=Py1/Δt), Vy2 (=Py2/Δt), Vy3 (=Py3/Δt), ... Vy9 (=Py9/Δt); (21) the eye position 700 or face position (or face orientation) in the left-right direction, Px (X1, X2, X3, ... X10); (22) the change amount ΔPx of the eye position 700 or face position (or face orientation) in the left-right direction, Px1 (=X2-X1), Px2 (=X3-X2), Px3 (=X4-X3), ... Px9 (=X10-X9); and (23) the change speed Vx of the eye position 700 or face position (or face orientation) in the left-right direction, Vx1 (=Px1/Δt), Vx2 (=Px2/Δt), Vx3 (=Px3/Δt), ... Vx9 (=Px9/Δt).
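The following is an illustrative sketch, not part of the specification, of how the change amounts ΔP and change speeds V of FIG. 13 could be computed from positions sampled at every cycle time Δt. The numeric sample values are assumptions for illustration.

```python
# Illustrative sketch: change amounts and change speeds of FIG. 13.
def change_amounts(samples: list[float]) -> list[float]:
    """ΔP_i = P_{i+1} - P_i for positions sampled at each cycle time."""
    return [b - a for a, b in zip(samples, samples[1:])]

def change_speeds(samples: list[float], dt: float) -> list[float]:
    """V_i = ΔP_i / Δt."""
    return [dp / dt for dp in change_amounts(samples)]

# Example with vertical positions Y1..Y10 sampled at Δt = 0.1 s (assumed values):
Py = [0.0, 1.0, 2.5, 2.5, 3.0, 2.0, 1.0, 0.5, 0.5, 0.0]
dPy = change_amounts(Py)        # Py1..Py9
Vy = change_speeds(Py, 0.1)     # Vy1..Vy9
```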
(Step S131)
 In step S130 in some embodiments, the display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and determines that the predetermined determination condition is satisfied when the change speed Vx (Vy) of the eye position 700 or the face position (or the face orientation) is fast. For example, the determination module 510 may compare the change speed Vx (Vy) of the eye position 700 or the face position (or the face orientation) with a predetermined first threshold (not shown) stored in advance in the memory 37 (or set by the user via the operation detection unit 407), and may determine that the predetermined determination condition is satisfied when the change speed Vx (Vy) of the eye position 700 or the face position (or the face orientation) is faster than the predetermined first threshold (the method of determining the change speed is not limited to this).
(Step S132)
 In step S130 in some embodiments, the display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and determines that the predetermined determination condition is satisfied when the eye position 700 or the face position (or the face orientation) is within a preset first range (not shown). For example, the determination module 510 may compare the eye position 700 or face position (or face orientation) Px (Py) with a predetermined first range (not shown) stored in advance in the memory 37, and may determine that the predetermined determination condition is satisfied when the eye position 700 or face position (or face orientation) Px (Py) is within the first range (the method of determining the position coordinates or angle coordinates of the eye position 700 or the face position (or face orientation) is not limited to this). The first range can be set as a range separated from a predetermined reference position (not shown) by predetermined coordinates. That is, the first range is set to any of a first left range shifted from the center 205 of the eyebox 200 (an example of the predetermined reference position) by a predetermined X coordinate in the left direction (the X-axis negative direction), a first right range shifted by a predetermined X coordinate in the right direction (the X-axis positive direction), a first upper range shifted by a predetermined Y coordinate in the upward direction (the Y-axis positive direction), a first lower range shifted by a predetermined Y coordinate in the downward direction (the Y-axis negative direction), and any combination thereof. Accordingly, the first range can be set at the outer edge away from the center 205 of the eyebox 200 or outside the eyebox 200. In another embodiment, the determination module 510 may calculate the difference between the eye position 700 or face position (or face orientation) Px (Py) and a predetermined reference position (not shown) stored in advance in the memory 37, and, when this difference is larger than a predetermined second threshold stored in advance in the memory 37, may regard the eye position 700 or face position (or face orientation) Px (Py) as being within a first range separated from the predetermined reference position by the second threshold or more and determine that the predetermined determination condition is satisfied. Here, the reference position can be set at the center 205 of the eyebox 200. In this case, the determination module 510 determines that the predetermined determination condition is satisfied if the eye position 700 or face position (or face orientation) Px (Py) is away from the center 205 of the eyebox 200. Note that the first range can be changed as the eyebox 200 moves. For example, when moving the eyebox 200 by controlling the first actuator 28 and/or the second actuator 29, the display control device 30 (processor 33) may change the first range based on the control value of the first actuator 28 (and/or the second actuator 29).
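The following is an illustrative sketch, not part of the specification, of the second-threshold variant of step S132 described above, in which the determination condition is treated as satisfied when the eye or face position is separated from the reference position by the second threshold or more. The function name and the threshold value are assumptions for illustration.

```python
# Illustrative sketch: second-threshold variant of step S132.
def condition_satisfied_s132(p: float, reference: float = 0.0,
                             second_threshold: float = 40.0) -> bool:
    """True when the eye/face position Px (or Py) is farther from the reference
    position (e.g., the corresponding coordinate of the eyebox center 205) than
    the second threshold (assumed here to be 40 mm)."""
    return abs(p - reference) > second_threshold
```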
(Step S133)
 In step S130 in some embodiments, the display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and may determine that the predetermined determination condition is satisfied when the eye position 700 or the face position (or the face orientation) is detected within a second range (not shown) that is changed according to the eye position 700 or the face position (or the face orientation).
 In step S133, the eye position estimation module 504 of FIG. 11 may sequentially update the second range based on the eye position 700 or face position (or face orientation) Px (Py) being in a stable state. The second range can be set as a range separated by predetermined coordinates from the reference position, which is changed according to the eye position 700 or the face position (or the face orientation). For example, when the eye position 700 or face position (or face orientation) Px (Py) has remained at substantially the same position for one second or longer, the eye position estimation module 504 may determine that the state is stable, register the current eye position 700 or face position (or face orientation) Px (Py) in the memory 37 as the reference position, and set, as the second range, a range separated from the reference position by predetermined coordinates (not shown) stored in advance in the memory 37. In another embodiment, when the eye position 700 or face position (or face orientation) Px (Py) has remained at substantially the same position for one second or longer, the eye position estimation module 504 may determine that the state is stable and register in the memory 37, as the reference position, the average value of a plurality of eye positions 700 or face positions (or face orientations) Px (Py) acquired in the past. For example, if the sample rate of the eye position 700 or face position (or face orientation) Px (Py) is 60 samples/sec and the averaging period is 0.5 sec, then, when the 60 samples of the eye position 700 or face position (or face orientation) Px (Py) within one second are at substantially the same position, the eye position estimation module 504 may determine that the state is stable and register in the memory 37, as the reference position, the average value of the latest 5 samples among the 30 samples acquired during the past 0.5 sec. For example, the determination module 510 may calculate the difference between the eye position 700 or face position (or face orientation) Px (Py) and the predetermined reference position sequentially updated and stored in the memory 37, and may determine that the predetermined determination condition is satisfied when this difference is larger than a predetermined third threshold stored in advance in the memory 37.
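The following is an illustrative sketch, not part of the specification, of one way to implement the stable-state check above at a 60 samples/sec rate, registering the average of the latest 5 samples as the reference position. The tolerance used for "substantially the same position" and the variable names are assumptions for illustration.

```python
# Illustrative sketch: stable-state detection and reference position update (step S133).
from collections import deque

SAMPLE_RATE = 60          # samples per second (from the example above)
STABLE_TOLERANCE = 2.0    # assumed tolerance for "substantially the same position" [mm]

history = deque(maxlen=SAMPLE_RATE)  # last 1 second of positions
reference_position = None            # value registered in the memory 37 in the text

def on_new_sample(p: float) -> None:
    """Feed one position sample; update the reference position when stable."""
    global reference_position
    history.append(p)
    if len(history) == SAMPLE_RATE and max(history) - min(history) <= STABLE_TOLERANCE:
        latest5 = list(history)[-5:]          # latest 5 of the past 0.5 sec of samples
        reference_position = sum(latest5) / len(latest5)
```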
(Step S134)
 In step S130 in some embodiments, the display control device 30 (processor 33) executes the determination module 510 of FIG. 11 and may determine that the predetermined determination condition is satisfied when the eye position 700 or the face position (or the face orientation) changes continuously in one direction. For example, when it is detected that the change amount ΔPx of the eye position 700 or face position (or face orientation) in the left-right direction shown in FIG. 13 has changed continuously in one direction (here, the right direction) a predetermined number of times (for example, twice) or more, such as a rightward movement from Px2 to Px3 followed by a rightward movement from Px3 to Px4, the determination module 510 may determine that the predetermined determination condition is satisfied.
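The following is an illustrative sketch, not part of the specification, of one way the consecutive one-direction movement of step S134 could be detected from the change amounts ΔPx of FIG. 13. The function name and the interpretation of "a predetermined number of consecutive times" are assumptions for illustration.

```python
# Illustrative sketch: detecting consecutive same-direction changes (step S134).
def moved_one_direction(delta_px: list[float], times: int = 2) -> bool:
    """True if `times` or more consecutive change amounts share the same sign."""
    run = 0
    for prev, curr in zip(delta_px, delta_px[1:]):
        if curr != 0 and prev != 0 and (curr > 0) == (prev > 0):
            run += 1
            if run + 1 >= times:   # a run of equal-sign pairs implies run+1 moves
                return True
        else:
            run = 0
    return False
```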
(Step S141)
 In some embodiments, the determination module 510 of FIG. 11 determines whether the observer's eye position 700 (or face position) is in an unstable state, and may determine that the predetermined determination condition is satisfied when the observer's eye position 700 (or face position) is determined to be in an unstable state. The determination module 510 includes various software components for performing various operations related to determining whether the stability of the observer's eye position is low (unstable) and determining that the state is unstable when the stability of the observer's eye position is low (step S141). That is, the determination module 510 may include thresholds, table data, arithmetic expressions, and the like for determining, from the detection information, estimation information, or prediction information of the eye position 700, whether the observer's eye position 700 is in an unstable state.
(Example of step S141)
 The eye position detection module 502 may calculate the variance of the position data of each of a plurality of observation positions acquired from the face detection unit 409 within a predetermined measurement time, and the determination module 510 may determine that the stability of the observer's eye position is low (unstable) when the variance calculated by the eye position detection module 502 is larger than a predetermined threshold stored in advance in the memory 37 (or set by the operation detection unit 407).
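The following is an illustrative sketch, not part of the specification, of the variance-based instability check described above. The threshold value and function name are assumptions for illustration.

```python
# Illustrative sketch: variance-based low-stability determination (example of step S141).
def is_unstable_by_variance(positions: list[float],
                            threshold: float = 25.0) -> bool:
    """True when the variance of the observation positions acquired within the
    measurement time exceeds the threshold (assumed here to be 25 mm^2)."""
    n = len(positions)
    if n < 2:
        return False
    mean = sum(positions) / n
    variance = sum((p - mean) ** 2 for p in positions) / n
    return variance > threshold
```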
(Example of step S141)
 The eye position detection module 502 may calculate the deviation of the position data of each of a plurality of observation positions acquired from the face detection unit 409 within a predetermined measurement time, and the determination module 510 may determine that the stability of the observer's eye position is low (unstable) when the deviation calculated by the eye position detection module 502 is larger than a predetermined threshold stored in advance in the memory 37 (or set by the operation detection unit 407).
(Example of step S141)
 Alternatively, without using the variance or deviation of step S141, the eye position detection module 502 may be capable of distinguishing a plurality of partial viewing zones within the eyebox 200 (for example, 25 regions obtained by dividing the eyebox into 5 in the up-down direction and 5 in the left-right direction), and may determine that the stability of the observer's eye position is low (unstable) when the number of partial viewing zones across which the eye position 700 has moved per predetermined unit time exceeds a predetermined threshold.
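The following is an illustrative sketch, not part of the specification, of dividing the eyebox 200 into 5 x 5 partial viewing zones and flagging low stability when the eye position visits more zones per unit time than a threshold. The eyebox dimensions, the zone-count threshold, and the function names are assumptions for illustration.

```python
# Illustrative sketch: partial-viewing-zone count check (example of step S141).
EYEBOX_W, EYEBOX_H = 130.0, 50.0   # assumed eyebox size [mm]
GRID = 5                           # 5 x 5 = 25 partial viewing zones
ZONE_COUNT_THRESHOLD = 4           # assumed threshold (zones per unit time)

def zone_of(x: float, y: float) -> tuple[int, int]:
    """Map an eye position (origin at one eyebox corner) to its partial viewing zone."""
    col = min(GRID - 1, max(0, int(x / (EYEBOX_W / GRID))))
    row = min(GRID - 1, max(0, int(y / (EYEBOX_H / GRID))))
    return row, col

def is_unstable_by_zones(positions_in_unit_time: list[tuple[float, float]]) -> bool:
    """True when the eye position moved across more zones than the threshold."""
    zones = {zone_of(x, y) for x, y in positions_in_unit_time}
    return len(zones) > ZONE_COUNT_THRESHOLD
```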
(Example of step S141)
 The eye position detection module 502 may also determine that the stability of the observer's eye position is low (unstable) when the total movement distance of the eye position 700 per predetermined unit time (the sum of the distances between a plurality of observation positions acquired multiple times per unit time) exceeds a predetermined threshold.
(Step S142)
 In some embodiments, the determination module 510 of FIG. 11 determines whether the detection operation for the observer's eye position 700 is in an unstable state, and determines that the predetermined determination condition is satisfied when it is determined to be in an unstable state. The determination module 510 includes various software components for performing various operations related to: (10) determining whether the observer's eye position 700 can be detected, and determining that the state is unstable when the eye position 700 cannot be detected (an example of step S142); (20) determining whether it can be estimated that the detection accuracy of the observer's eye position 700 has decreased, and determining that the state is unstable when such a decrease can be estimated (an example of step S142); (30) determining whether the observer's eye position 700 is outside the eyebox 200, and determining that the state is unstable when it is outside the eyebox 200 (an example of step S142); (40) determining whether the observer's eye position 700 can be estimated to be outside the eyebox 200, and determining that the state is unstable when it can be so estimated (an example of step S142); or (50) determining whether the observer's eye position 700 is predicted to be outside the eyebox 200, and determining that the state is unstable when it is so predicted (an example of step S142). That is, the determination module 510 may include thresholds, table data, arithmetic expressions, and the like for determining, from the detection information, estimation information, or prediction information of the eye position 700, whether the detection operation for the observer's eye position 700 is in an unstable state.
(Example of step S142)
 Methods for determining whether the observer's eye position 700 can be detected include determining that the observer's eye position 700 cannot be detected (the detection of the observer's eye position 700 is in an unstable state) based on: (1) acquiring, from the face detection unit 409, a signal indicating that the eye position 700 cannot be detected; (2) some (for example, a predetermined number of times or more) or all of the observation positions of the observer's eyes to be acquired from the face detection unit 409 within a predetermined period not being detectable; (3) the eye position detection module 502 being unable to detect the observer's eye position 700 in normal operation; or any combination thereof (the determination method is not limited to these).
(Example of step S142)
 Methods for determining that the detection accuracy of the observer's eye position 700 has decreased include determining such a decrease based on: (1) acquiring, from the face detection unit 409, a signal indicating that the detection accuracy of the eye position 700 is estimated to have decreased; (2) some (for example, a predetermined number of times or more) or all of the observation positions of the observer's eyes to be acquired from the face detection unit 409 within a predetermined period not being detectable; (3) the eye position detection module 502 being unable to detect the observer's eye position 700 in normal operation; (4) the eye position estimation module 504 being unable to estimate the observer's eye position 700 in normal operation; (5) the eye position prediction module 506 being unable to predict the observer's eye position 700 in normal operation; (6) detecting a decrease in the contrast of the image capturing the observer caused by external light such as sunlight; (7) detecting a hat or an accessory (including eyeglasses); (8) part of the observer's face not being detected because of a hat, an accessory (including eyeglasses), or the like; or any combination thereof (the determination method is not limited to these).
(Example of step S142)
 Methods for determining whether the observer's eye position 700 is outside the eyebox 200 include determining that the observer's eye position 700 is outside the eyebox 200 (the observer's eye position 700 is in an unstable state) based on: (1) some (for example, a predetermined number of times or more) or all of the observation positions of the observer's eyes acquired from the face detection unit 409 within a predetermined period being acquired outside the eyebox 200; (2) the eye position detection module 502 detecting the observer's eye position 700 outside the eyebox 200; or any combination thereof (the determination method is not limited to these).
(Example of step S142)
 Methods for determining whether the observer's eye position 700 can be estimated to be outside the eyebox 200 include determining that the observer's eye position 700 can be estimated to be outside the eyebox 200 (the observer's eye position 700 is in an unstable state) based on: (1) the observer's eye position 700 becoming undetectable after the face detection unit 409 detected a movement of the observer's eye position 700; (2) the eye position detection module 502 detecting the observer's eye position 700 near the boundary of the eyebox 200; (3) the eye position detection module 502 detecting either the observer's right eye position 700R or left eye position 700L near the boundary of the eyebox 200; or any combination thereof (the determination method is not limited to these).
(Example of step S142)
 Methods for determining whether the observer's eye position 700 is predicted to be outside the eyebox 200 include determining that the observer's eye position 700 can be predicted to be outside the eyebox 200 (the observer's eye position 700 is in an unstable state) based on: (1) the eye position prediction module 506 predicting that the observer's eye position 700 will be outside the eyebox 200 after a predetermined time; (2) the eye position 700 newly detected by the eye position detection module 502 being separated from the previously detected eye position 700 by at least an eye position movement distance threshold stored in advance in the memory 37 (that is, the change speed of the eye position 700 being equal to or greater than an eye position change speed threshold stored in advance in the memory 37); or any combination thereof (the determination method is not limited to these).
(Step S150)
 Reference is now made to FIG. 12B. After it is determined in step S120 whether the predetermined determination condition is satisfied, the display control device 30 (processor 33) updates the image displayed on the display device 40. When it is determined in step S120 that the predetermined determination condition is satisfied, the display control device 30 (processor 33) executes the visibility control module 514 to perform visibility reduction processing (S180), which reduces the visibility of the image that is displayed on the display device 40 and that corresponds to the AR virtual image V60.
(Step S160)
 The visibility control module 514 of FIG. 11 maintains the AR virtual image V60 at the normal visibility when it is determined in step S120 that the predetermined determination condition is not satisfied.
 Also, in step S160, when it is determined in step S120 that the predetermined determination condition is not satisfied, the eye-following image processing module 516 of FIG. 11 corrects the position of the virtual image V in the up-down direction by a first correction amount Cy1 corresponding to the change amount ΔPy of the eye position in the up-down direction, and corrects the position of the virtual image V in the left-right direction according to the change amount ΔPx of the eye position in the left-right direction. The first correction amount Cy1 (the same applies to the second correction amount Cy2 described later) is a parameter that gradually increases as the change amount ΔPy of the eye position in the up-down direction increases. The first correction amount Cy1 (the same applies to the second correction amount Cy2 described later) is also a parameter that gradually increases as the perceived distance D30 set for the virtual image V increases. The first image correction processing S160 includes a correction of the image position that completely reproduces natural motion parallax, as if the virtual image V were fixed at the set target position PT even when viewed from each eye position Py in the up-down direction, and in a broad sense may also include a correction of the image position that approximates natural motion parallax. That is, the first image correction processing S160 aligns the display position of the virtual image V with (brings the display position of the virtual image V closer to) the position of the intersection between the virtual image display area VS and the straight line connecting the target position PT set for the virtual image V and the observer's eye position 700.
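The following is an illustrative sketch, not part of the specification, of the intersection described above, simplified to a two-dimensional side view (depth and height only). The function name, coordinate convention, and numeric values are assumptions for illustration.

```python
# Illustrative sketch: first image correction (step S160), placing the virtual image V
# where the line from the observer's eye position 700 through the target position PT
# crosses the virtual image display area VS.
def corrected_display_height(eye_y: float,
                             target_y: float, target_z: float,
                             display_z: float) -> float:
    """
    eye_y     : observer's eye height (eye assumed at depth z = 0)
    target_y  : height of the target position PT
    target_z  : depth (perceived distance) of the target position PT
    display_z : depth of the virtual image display area VS
    Returns the height at which the virtual image should be displayed on VS.
    """
    s = display_z / target_z               # fraction of the way from the eye to PT
    return eye_y + s * (target_y - eye_y)  # intersection of the eye-PT line with VS

# Example (assumed values): as the eye moves up, the display position shifts so that
# PT appears fixed in the scene.
# corrected_display_height(eye_y=1.25, target_y=0.0, target_z=20.0, display_z=3.0)
```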
(Step S170)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the display control device 30 (processor 33) executes at least the visibility reduction processing (step S180), and in addition may execute, by means of the eye-following image processing module 516, the second image correction processing (step S190) described later.
(Step S180)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes at least the visibility reduction processing (step S180), which lowers the visibility of the AR virtual image V60 below the normal visibility of step S160. Here, lowering the visibility includes at least one of lowering the luminance of the AR virtual image V60, increasing the transmittance of the AR virtual image V60 (bringing it closer to transparent), lowering the lightness of the AR virtual image V60 (bringing it closer to black), lowering the saturation of the AR virtual image V60 (bringing it closer to achromatic), and any combination thereof. The display control device 30 (processor 33) controls the visibility of the AR virtual image V60 corresponding to the image displayed by the display 50 by controlling the visibility of that image through gradation control in the display 50 and local or overall illumination control in the light source unit 60.
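The following is an illustrative sketch, not part of the specification, of how the visibility reduction of step S180 could be expressed on an RGBA image layer, combining lower luminance, higher transmittance, and lower saturation with a single factor. The representation and function name are assumptions for illustration.

```python
# Illustrative sketch: applying a visibility factor to one RGBA pixel (step S180).
def reduce_visibility(rgba: tuple[float, float, float, float],
                      factor: float) -> tuple[float, float, float, float]:
    """factor = 1.0 keeps normal visibility; factor = 0.0 makes the image invisible."""
    r, g, b, a = rgba
    gray = 0.299 * r + 0.587 * g + 0.114 * b       # luminance of the pixel
    # Move colors toward gray (lower saturation), dim them (lower luminance/lightness),
    # and reduce alpha (higher transmittance).
    mix = lambda c: (gray + (c - gray) * factor) * factor
    return mix(r), mix(g), mix(b), a * factor
```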
(Example of step S181)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes first visibility reduction processing, which abruptly lowers the visibility of the AR virtual image V60 below the normal visibility of step S160. More specifically, when it is determined that the predetermined determination condition is satisfied, the visibility control module 514 switches from the normal visibility to a desired visibility stored in the memory 37 (lower than the normal visibility).
 Note that the visibility control module 514 can abruptly lower the visibility of the AR virtual image V60 immediately after it is determined in step S120 that the predetermined determination condition is satisfied.
 Alternatively, the visibility control module 514 can abruptly lower the visibility of the AR virtual image V60 after a predetermined time has elapsed since it was determined that the predetermined determination condition is satisfied.
(Example of step S181)
 In another embodiment, when it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes second visibility reduction processing, which gradually lowers the visibility of the AR virtual image V60 over time from the normal visibility of step S160. More specifically, when it is determined that the predetermined determination condition is satisfied, the visibility control module 514 gradually switches from the normal visibility to a desired visibility stored in the memory 37 (lower than the normal visibility).
(Step S183)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes second visibility reduction processing, which lowers the visibility of the AR virtual image V60 to a visibility that differs according to the change speed of the eye position 700 or the face position (or the face orientation). More specifically, when the change speed of the eye position 700 or the face position (or the face orientation) is fast, the visibility control module 514 sets a visibility substantially lower than the normal visibility (including non-display), and when the change speed of the eye position 700 or the face position (or the face orientation) is slow, it sets a visibility only slightly lower than the normal visibility. Note that the levels of visibility lower than the normal visibility are not limited to two; there may be three or more levels according to the change speed of the eye position 700 or the face position (or the face orientation). The visibility control module 514 may also, in effect, lower the level of visibility continuously as the change speed of the eye position 700 or the face position (or the face orientation) increases.
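The following is an illustrative sketch, not part of the specification, of mapping the change speed of the eye or face position to a visibility level for step S183, either in steps or continuously. The speed thresholds and visibility levels are assumptions for illustration.

```python
# Illustrative sketch: speed-dependent visibility levels (step S183).
FAST_SPEED = 200.0   # assumed change speed [mm/s] treated as "fast"
SLOW_SPEED = 50.0    # assumed change speed [mm/s] treated as "slow"

def visibility_factor_stepwise(speed: float) -> float:
    """Two levels as in the text: much lower when fast, slightly lower when slow."""
    return 0.0 if speed >= FAST_SPEED else 0.7

def visibility_factor_continuous(speed: float) -> float:
    """Visibility falls continuously as the change speed increases."""
    t = min(max((speed - SLOW_SPEED) / (FAST_SPEED - SLOW_SPEED), 0.0), 1.0)
    return 0.9 * (1.0 - t)
```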
(Step S185)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the visibility control module 514 executes third visibility reduction processing, which lowers the visibility of the AR virtual image V60 to a visibility that differs according to the eye position 700 or the face position. More specifically, when the eye position 700 or the face position is far away from a predetermined reference position (for example, the center 205 of the eyebox 200), the visibility control module 514 sets a visibility substantially lower than the normal visibility (including non-display), and when the eye position 700 or the face position is only slightly away from the predetermined reference position, it sets a visibility only slightly lower than the normal visibility. Note that the levels of visibility lower than the normal visibility are not limited to two; there may be three or more levels according to the eye position 700 or the face position. The visibility control module 514 may also, in effect, change the level of visibility continuously according to the eye position 700 or the face position.
 Note that, in steps S181, S183, and S185 above, the visibility control module 514 lowers the visibility of the entire AR virtual image V60.
 In another example, in steps S181, S183, and S185 above, the visibility control module 514 lowers the visibility of part of the AR virtual image V60. For example, the visibility control module 514 may lower the visibility of (execute the visibility reduction processing on) an AR virtual image V60 for which the perceived distance D30 set for the AR virtual image V60 is longer than a predetermined threshold (not shown), and may leave unchanged the visibility of (not execute the visibility reduction processing on) an AR virtual image V60 for which the perceived distance D30 is shorter than the predetermined threshold. The visibility control module 514 may also set a visibility substantially lower than the normal visibility (including non-display) when the perceived distance D30 set for the AR virtual image V60 is long, and a visibility only slightly lower than the normal visibility when the perceived distance D30 set for the AR virtual image V60 is short.
(Step S187)
 The graphic module 518 of FIG. 11 displays an AR-related virtual image related to part or all of the AR virtual image V60 whose visibility was lowered in steps S181, S183, and S185. FIG. 14 is a diagram showing an example of the foreground visually recognized by the observer while the host vehicle is traveling, the AR virtual images when the visibility reduction processing has been executed, and AR-related virtual images. In the example of FIG. 14, the HUD device 20 displays a distant virtual image V1 perceived at a position farther than a first distance (not shown) from a reference point set on the vehicle side (for example, the virtual images V64 and V65 shown in FIG. 14) and a near virtual image V2 perceived at a position closer than the first distance (for example, the virtual images V61 to V63 shown in FIG. 14); the processor 33 lowers the visibility of the distant virtual image V1 by executing the visibility reduction processing S170 and does not execute the visibility reduction processing S170 on the near virtual image V2. Also, in the example of FIG. 14, the processor 33 displays an AR-related virtual image V80 (for example, V81 shown in FIG. 14) related to part (the virtual image V64) of the distant virtual image V1 whose visibility has been lowered.
 The eye-following image processing module 516 of FIG. 11 may switch between the first image correction processing (step S160) and the second image correction processing (step S190) based on the determination result of step S120.
(Step S190)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 makes the correction amount of the image with respect to the change amount of the eye position 700 (or the face position) smaller than in the first image correction processing (step S160).
(Example of step S190)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may reduce the correction amount only with respect to the change amount of the eye position 700 (or the face position) in either the up-down direction or the left-right direction.
 In some embodiments, the eye-following image processing module 516 of FIG. 11 corrects the position of the virtual image V in the up-down direction by a second correction amount Cy2 corresponding to the change amount ΔPy of the eye position in the up-down direction, and corrects the position of the virtual image V in the left-right direction by a second correction amount Cx2 corresponding to the change amount ΔPx of the eye position in the left-right direction. Here, the eye-following image processing module 516 makes the second correction amount Cy2 smaller than the first correction amount Cy1 with respect to the change amount ΔPy of the eye position in the up-down direction in the first image correction processing (step S160), and makes the second correction amount Cx2 the same as the first correction amount Cx1 with respect to the change amount ΔPx of the eye position in the left-right direction in the first image correction processing (step S160). Specifically, for example, if the first correction amount Cy1 with respect to the change amount ΔPy of the eye position in the up-down direction is 100%, the second correction amount Cy2 with respect to the same change amount ΔPy is 25%, and if the first correction amount Cx1 with respect to the change amount ΔPx of the eye position in the left-right direction is 100%, the second correction amount Cx2 with respect to the same change amount ΔPx is also 100%. In a broad sense, the second correction amount Cy2 only needs to be smaller than the first correction amount Cy1, so it may be less than 100% of the first correction amount Cy1, but it is preferably less than 60% of the first correction amount Cy1.
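The following is an illustrative sketch, not part of the specification, of the correction gains described above, using the 25%/100% figures from the example; it omits the dependence of the correction amount on the perceived distance D30, and the function and constant names are assumptions for illustration.

```python
# Illustrative sketch: first vs. second image correction gains (steps S160 and S190).
GAIN_Y_FIRST, GAIN_X_FIRST = 1.00, 1.00     # first image correction (step S160)
GAIN_Y_SECOND, GAIN_X_SECOND = 0.25, 1.00   # second image correction (step S190)

def image_correction(d_eye_x: float, d_eye_y: float,
                     second: bool) -> tuple[float, float]:
    """Return the image position correction (Cx, Cy) for a given eye position change."""
    gx = GAIN_X_SECOND if second else GAIN_X_FIRST
    gy = GAIN_Y_SECOND if second else GAIN_Y_FIRST
    return d_eye_x * gx, d_eye_y * gy
```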
(Example of step S190)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may set the correction amount to zero only with respect to the change amount of the eye position 700 (or the face position) in either the up-down direction or the left-right direction.
 In some embodiments, when it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may set the second correction amount Cy2 corresponding to the change amount ΔPy of the eye position in the up-down direction to zero. For example, the eye-position-following image processing module 511 may correct the position of the virtual image V in the left-right direction only according to the change amount ΔPx of the eye position in the left-right direction.
(Example of step S190)
 When it is determined in step S120 that the predetermined determination condition is satisfied, the eye-following image processing module 516 of FIG. 11 may reduce the correction amounts with respect to the change amounts of the eye position 700 (or the face position) in both the up-down direction and the left-right direction.
 In some embodiments, the eye-following image processing module 516 of FIG. 11 corrects the position of the virtual image V in the up-down direction by a second correction amount Cy2 corresponding to the change amount ΔPy of the eye position in the up-down direction, and corrects the position of the virtual image V in the left-right direction by a second correction amount Cx2 corresponding to the change amount ΔPx of the eye position in the left-right direction. Here, the eye-following image processing module 516 makes the second correction amount Cy2 smaller than the first correction amount Cy1 with respect to the change amount ΔPy of the eye position in the up-down direction in the first image correction processing (step S160), and makes the second correction amount Cx2 smaller than the first correction amount Cx1 with respect to the change amount ΔPx of the eye position in the left-right direction in the first image correction processing (step S160). Specifically, for example, if the first correction amount Cy1 with respect to the change amount ΔPy of the eye position in the up-down direction is 100%, the second correction amount Cy2 with respect to the same change amount ΔPy is 25%, and if the first correction amount Cx1 with respect to the change amount ΔPx of the eye position in the left-right direction is 100%, the second correction amount Cx2 with respect to the same change amount ΔPx is also 25%.
 In some embodiments, the eye-following image processing module 516 of FIG. 11 may also set the correction amount Cx2 of the image position with respect to the change amount ΔPx of the eye position in the left-right direction in the second image correction processing (step S190) lower than the correction amount Cx1 of the image position with respect to the change amount ΔPx in the first image correction processing (step S160), while setting the ratio Cx2/Cx1 higher than the ratio of the second correction amount Cy2 to the first correction amount Cy1 with respect to the change amount ΔPy of the eye position in the up-down direction (Cx2/Cx1 > Cy2/Cy1).
 FIG. 16 is a flow diagram showing a method S200 of executing visibility increasing processing while the visibility reduction processing is being executed. The method S200 is executed in the HUD device 20, which includes a spatial light modulation element, and the display control device 30, which controls the HUD device 20.
 In some embodiments, the display control device 30 (processor 33) determines whether a predetermined cancellation condition is satisfied (step S210), and, when it is determined that the cancellation condition is satisfied, transitions from the visibility reduction processing (step S180) to the visibility increasing processing (step S220).
 The predetermined cancellation condition includes the elapse of a predetermined time (for example, 20 seconds) since the transition to the visibility reduction processing (step S180). The visibility control module 514 may start timing upon transitioning to the visibility reduction processing (step S180), and may determine that the cancellation condition is satisfied when the predetermined time stored in advance in the memory 37 (or set by the operation detection unit 407) has elapsed.
 The predetermined cancellation condition may also include the predetermined determination condition no longer being satisfied in step S120. That is, the predetermined cancellation condition may include detecting that at least one of steps S131 to S134 and steps S141 to S143 has transitioned from a state in which the predetermined determination condition is satisfied to a state in which the predetermined determination condition is no longer satisfied. The predetermined cancellation condition may also include the elapse of a predetermined time (for example, 20 seconds) after the predetermined determination condition is no longer satisfied in step S120.
(Step S220)
 When it is determined in step S210 that the cancellation condition is satisfied, the display control device 30 (processor 33) executes the visibility increasing processing.
(Example of step S221)
 When it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 executes first visibility increasing processing, which abruptly raises the visibility of the AR virtual image V60 from the visibility set in the visibility reduction processing (step S170) to the normal visibility. More specifically, when it is determined that the cancellation condition is satisfied, the visibility control module 514 switches from the visibility set in the visibility reduction processing (step S170) to the normal visibility.
 Note that the visibility control module 514 can abruptly raise the visibility of the AR virtual image V60 immediately after it is determined in step S210 that the cancellation condition is satisfied.
 In another embodiment, the visibility control module 514 can abruptly raise the visibility of the AR virtual image V60 after a predetermined time has elapsed since it was determined that the cancellation condition is satisfied.
(Example of step S221)
 In another embodiment, when it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 executes visibility increasing processing that gradually raises the visibility of the AR virtual image V60 over time from the visibility set in the visibility reduction processing (step S170) to the normal visibility. More specifically, when it is determined that the cancellation condition is satisfied, the visibility control module 514 gradually switches from the visibility set in the visibility reduction processing (step S170) to the normal visibility.
(Step S223)
 When it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 executes second visibility increasing processing, which raises the visibility of the AR virtual image V60 to a visibility that differs according to the change speed of the eye position. More specifically, when the change speed of the eye position is fast, the visibility control module 514 sets a visibility only slightly higher than the visibility set in the visibility reduction processing (step S170), and when the change speed of the eye position is slow, it sets a visibility substantially higher than the visibility set in the visibility reduction processing (step S170) (slightly lower than or equal to the normal visibility). Note that the levels of visibility higher than the visibility set in the visibility reduction processing (step S170) are not limited to two; there may be three or more levels according to the change speed of the eye position. The visibility control module 514 may also, in effect, raise the level of visibility continuously as the change speed of the eye position decreases.
(Step S225)
 When it is determined in step S210 that the cancellation condition is satisfied, the visibility control module 514 executes a third visibility increasing process that raises the visibility of the AR virtual image V60 to a different level depending on the eye position. More specifically, when the eye position is far from a predetermined reference position (for example, the center 205 of the eyebox 200), the visibility control module 514 raises the visibility only slightly above the visibility set in the visibility reduction process (step S170); when the eye position is only slightly away from the predetermined reference position, it raises the visibility substantially above the visibility set in the visibility reduction process (step S170) (slightly lower than, or equal to, the normal visibility). Note that the visibility levels above the visibility set in the visibility reduction process (step S170) are not limited to two stages, and may be three or more stages according to the eye position. The visibility control module 514 may also, in effect, change the visibility level continuously according to the eye position.
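 The third visibility increasing process could likewise be pictured as a function of the distance between the detected eye position and the reference position (for example, the center 205 of the eyebox 200); the distance threshold and the step size below are illustrative assumptions only.

    def third_visibility_increase(eye_position, reference_position,
                                  reduced_visibility: float,
                                  normal_visibility: float = 1.0,
                                  far_threshold: float = 50.0) -> float:
        # Eye far from the reference position: small increase only;
        # eye close to the reference position: increase to (or near) normal.
        dx = eye_position[0] - reference_position[0]
        dy = eye_position[1] - reference_position[1]
        distance = (dx * dx + dy * dy) ** 0.5
        if distance > far_threshold:
            return min(reduced_visibility + 0.1, normal_visibility)
        return normal_visibility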
 Note that in steps S221, S223, and S225, the visibility control module 514 increases the visibility of all of the AR virtual images V60 whose visibility had been lowered in the visibility reduction process (step S170).
 In another example, in steps S221, S223, and S225, the visibility control module 514 sequentially increases the visibility of the plural AR virtual images V60 whose visibility had been lowered in the visibility reduction process (step S170). For example, the visibility control module 514 may increase the visibility of the near virtual image V2 and then, after a predetermined time has elapsed, increase the visibility of the distant virtual image V1.
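 The staggered restoration mentioned in the preceding example might be scheduled as in the brief sketch below; the callback-style interface and the delay value are assumptions made only for this illustration.

    import time

    def restore_sequentially(near_images, far_images, set_visibility, delay_s: float = 0.5):
        # Raise the near virtual images (V2) first, wait a predetermined time,
        # then raise the distant virtual images (V1).
        for image in near_images:
            set_visibility(image, 1.0)
        time.sleep(delay_s)
        for image in far_images:
            set_visibility(image, 1.0)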
(Step S227)
 The graphic module 518 of FIG. 11 hides the AR-related virtual images that are related to some or all of the AR virtual images V60 whose visibility was increased in steps S221, S223, and S225.
 If it is determined in step S210 that the cancellation condition is satisfied, the eye-following image processing module 516 of FIG. 11 switches from the second image correction process (step S190), in which the position correction amount of the image with respect to the amount of change in the eye position 700 (or the face position) is kept small, to the first image correction process (step S160), in which the position correction amount of the image with respect to the amount of change in the eye position 700 (or the face position) is larger than in the second image correction process (step S190).
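 Viewed abstractly, the switch back from the suppressed correction to the full correction amounts to changing the gain applied to the eye-position change, as in the following sketch; the gain values are invented here for illustration and are not taken from the embodiment.

    GAIN_FIRST_PROCESS = 1.0    # full position correction (step S160)
    GAIN_SECOND_PROCESS = 0.2   # suppressed position correction (step S190)

    def image_offset(delta_eye_position: float, cancellation_satisfied: bool) -> float:
        # The displayed image is shifted by (gain x eye-position change);
        # satisfying the cancellation condition restores the larger gain.
        gain = GAIN_FIRST_PROCESS if cancellation_satisfied else GAIN_SECOND_PROCESS
        return gain * delta_eye_position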
 Referring again to FIG. 11, the graphic module 518 of FIG. 11 includes various known software components for performing image processing such as rendering to generate image data and for driving the display device 40. The graphic module 518 may also include various known software components for changing the type (moving image, still image, shape), arrangement (position coordinates, angle), size, display distance (in the case of 3D), and visual effects (for example, luminance, transparency, saturation, contrast, or other visual characteristics) of the displayed image. The graphic module 518 can generate image data and drive the display 50 so that the image is visually recognized by the observer with a given image type (one example of a display parameter), image position coordinates (one example of a display parameter), image angle (the pitch angle about the X direction, the yaw angle about the Y direction, the roll angle about the Z direction, and the like; examples of display parameters), image size (one example of a display parameter), image color (a display parameter set by hue, saturation, brightness, and the like), and strength of the perspective expression of the image (a display parameter set by the position of the vanishing point and the like).
 The light source driving module 520 includes various known software components for driving the light source unit 24. The light source driving module 520 can drive the light source unit 24 based on the set display parameters.
 The actuator driving module 522 includes various known software components for driving the first actuator 28 and/or the second actuator 29. The actuator driving module 522 can drive the first actuator 28 and the second actuator 29 based on the set display parameters.
 FIG. 16 is a diagram illustrating the HUD device 20 of some embodiments, in which the eyebox 200 can be moved in the vertical direction by rotating the relay optical system 80 (curved mirror 81). The display control device 30 (processor 33) in some embodiments can, for example, control the first actuator 28 to rotate the relay optical system 80 (curved mirror 81) and move the eyebox 200 in the vertical direction (Y-axis direction). Typically, when the eyebox 200 is placed at the relatively upper eyebox 201 shown in FIG. 16, the virtual image display area VS is located at the relatively lower position indicated by reference sign VS1, and when the eyebox 200 is placed at the lower eyebox 203 shown in FIG. 16, the virtual image display area VS is located at the relatively upper position indicated by reference sign VS3. By executing the eye-following image processing module 516, the display control device 30 (processor 33) in some embodiments may reduce the correction amount Cy of the position of the image displayed on the display 50 with respect to the amount of change in the eye position (or face position) when the eyebox 200 is placed above a predetermined height threshold (in other words, when the control value of the first actuator 28 exceeds an actuator control threshold at which the eyebox 200 is placed above the predetermined height threshold). Note that the actuator driving module 522 may change the height of the eyebox 200 automatically according to the vertical position of the eye position 700 (or the face position), or may change the height of the eyebox 200 in response to a user operation detected by the operation detection unit 407. That is, the eye-following image processing module 516 may include thresholds, table data, arithmetic expressions, and the like for switching the correction amount Cx (Cy) of the position of the image displayed on the display 50 with respect to the amount of change in the eye position (or face position), based on information on the height of the eyebox 200, information on the control value of the actuator, information on the vertical position of the eye position 700 (or the face position) from which the height of the eyebox 200 can be adjusted automatically, or operation information from the operation detection unit 407 for adjusting the height of the eyebox 200.
 In addition, the display control device 30 (processor 33) in some embodiments may reduce, stepwise or continuously, the correction amount Cx (Cy) of the position of the image displayed on the display 50 with respect to the amount of change in the eye position (or face position) as the eyebox 200 becomes higher (in other words, as the control value of the first actuator 28 is changed so that the eyebox 200 becomes higher). That is, the eye-following image processing module 516 may include thresholds, table data, arithmetic expressions, and the like for adjusting the correction amount Cx (Cy) of the position of the image displayed on the display 50 with respect to the amount of change in the vertical eye position (or face position), based on information on the height of the eyebox 200, information on the control value of the actuator, information on the vertical position of the eye position 700 (or the face position) from which the height of the eyebox 200 can be adjusted automatically, or operation information from the operation detection unit 407 for adjusting the height of the eyebox 200.
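 A compact sketch of how the vertical correction amount Cy might be reduced with a height threshold, or shrunk continuously, as the eyebox 200 is raised; every numeric value here is an assumption for illustration only.

    def correction_gain_with_threshold(eyebox_height: float,
                                       height_threshold: float = 30.0,
                                       normal_gain: float = 1.0,
                                       reduced_gain: float = 0.5) -> float:
        # Above the height threshold, the correction amount Cy is made smaller.
        return reduced_gain if eyebox_height > height_threshold else normal_gain

    def correction_gain_continuous(eyebox_height: float,
                                   max_height: float = 60.0,
                                   normal_gain: float = 1.0,
                                   min_gain: float = 0.3) -> float:
        # The correction amount shrinks continuously as the eyebox is raised.
        ratio = min(max(eyebox_height / max_height, 0.0), 1.0)
        return normal_gain - (normal_gain - min_gain) * ratio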
 As described above, the display control device 30 of the present embodiment is a display control device 30 that executes display control in a HUD device 20 that includes at least a display device 40 that displays an image and a relay optical system 80 that projects the light of the image displayed by the display device 40 onto a projection target member, and that allows a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground. The display control device 30 includes one or more processors 33, a memory 37, and one or more computer programs stored in the memory 37 and configured to be executed by the one or more processors 33. The processor 33 acquires the user's eye position (and/or face position) Py in the vertical direction of the vehicle and the eye position (and/or face position) Px in the left-right direction of the vehicle, and switches between a first image correction process (step S160), which corrects the position of the image displayed on the display device 40 based on at least the vertical eye position (or face position) Py and the left-right eye position (or face position) Px, and a second image correction process S170, which either corrects the position of the image displayed on the display device 40 based on at least the vertical eye position (or face position) Py and the left-right eye position (or face position) Px such that a second correction amount Cy2 of the image position with respect to the change amount ΔPy of the vertical eye position (or face position) is smaller than a first correction amount Cy1 of the image position with respect to the change amount ΔPy of the vertical eye position (or face position) in the first image correction process (step S160), or corrects the position of the image displayed on the display device 40 based on at least the left-right eye position (or face position) Px with the correction amount of the image position with respect to the change amount ΔPy of the vertical eye position (or face position) set to zero.
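 The two correction processes summarized above can be pictured as two gain pairs applied to the eye-position (or face-position) changes; the concrete gain values below are illustrative assumptions, not values from the embodiment.

    def first_image_correction(delta_px: float, delta_py: float,
                               cx1: float = 1.0, cy1: float = 1.0):
        # Step S160: correct the image position using both the left-right and
        # the vertical eye (or face) position changes.
        return cx1 * delta_px, cy1 * delta_py

    def second_image_correction(delta_px: float, delta_py: float,
                                cx2: float = 1.0, cy2: float = 0.0):
        # Step S170: the vertical correction amount Cy2 is smaller than Cy1;
        # in this sketch it is set to zero, i.e. vertical changes are ignored.
        return cx2 * delta_px, cy2 * delta_py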
 In some embodiments, the processor 33 may select the second image correction process S170 when at least one of the following conditions is satisfied: (1) the left-right eye position (or face position) Px has changed continuously in one direction; (2) a change in the vertical eye position (and/or face position) and a change in the left-right eye position (and/or face position) are detected, and the ratio of the change amount ΔPy of the vertical eye position (or face position) to the change amount ΔPx of the left-right eye position (or face position) is less than a predetermined first threshold; and (3) a change in the vertical eye position (or face position) Py and a change in the left-right eye position (or face position) Px are detected, and the change amount ΔPy of the vertical eye position (or face position) is less than a predetermined second threshold. This reduces the discomfort given to the observer when the observer moves the eye position (face position) in the left-right direction and a vertical movement of the eye position (face position) of which the observer is not conscious is detected.
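 The three alternative conditions could be checked roughly as in the sketch below; the threshold values and the precomputed one-direction flag are assumptions introduced only for this illustration.

    def select_second_process(delta_px: float, delta_py: float,
                              px_moved_in_one_direction: bool,
                              first_threshold: float = 0.3,
                              second_threshold: float = 5.0) -> bool:
        # (1) the left-right eye/face position changed continuously in one direction
        if px_moved_in_one_direction:
            return True
        # (2) the vertical change is small relative to the left-right change
        if delta_px != 0.0 and abs(delta_py) / abs(delta_px) < first_threshold:
            return True
        # (3) the vertical change itself is below the second threshold
        if abs(delta_py) < second_threshold:
            return True
        return False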
 In some embodiments, the processor 33 may select the second image correction process S170 when a change in the vertical eye position (or face position) Py and a change in the left-right eye position (or face position) Px are detected after the vertical eye position (and/or face position) Py and/or the left-right eye position (and/or face position) Px could no longer be acquired. In other words, the processor 33 may shift to the second image correction process S170 when, during the first image correction process (step S160), one or more of the vertical eye position Py, the vertical face position Py, the left-right eye position Px, and the left-right face position Px changes from a state in which it could be detected to a state in which it cannot be detected.
 In some embodiments, the processor 33 may switch, after a predetermined time has elapsed in the second image correction process S170, to a third image correction process S182 that corrects the position of the image displayed on the display device 40 based on at least the vertical eye position (or face position) Py and the left-right eye position (or face position) Px, in which a third correction amount Cy3 of the image position with respect to the change amount ΔPy of the vertical eye position (or face position) is smaller than the first correction amount Cy1 in the first image correction process (step S160) and larger than the second correction amount Cy2 in the second image correction process S170.
 In some embodiments, the processor 33 may switch, when it is detected in the second image correction process S170 that the change amount ΔPy of the vertical eye position (or face position) has become larger than a predetermined third threshold, to the third image correction process S182 that corrects the position of the image displayed on the display device 40 based on at least the vertical eye position (or face position) Py and the left-right eye position (or face position) Px, in which the third correction amount Cy3 of the image position with respect to the change amount ΔPy of the vertical eye position (or face position) is smaller than the first correction amount Cy1 in the first image correction process (step S160) and larger than the second correction amount Cy2 in the second image correction process S170.
 In some embodiments, the processor 33 may change the third correction amount Cy3 over time in the third image correction process S182 so that it approaches the first correction amount Cy1 in the first image correction process (step S160).
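 The third image correction process S182 can be sketched as an intermediate vertical gain Cy3 that starts between Cy2 and Cy1 and approaches Cy1 over time; the starting point and the ramp duration below are assumptions for illustration only.

    def cy3_over_time(cy1: float, cy2: float, seconds_since_switch: float,
                      ramp_seconds: float = 2.0) -> float:
        # Cy3 begins between Cy2 and Cy1 and converges toward Cy1.
        start = (cy1 + cy2) / 2.0
        progress = min(seconds_since_switch / ramp_seconds, 1.0)
        return start + (cy1 - start) * progress

    # Example: with Cy1 = 1.0 and Cy2 = 0.0, Cy3 ramps from 0.5 toward 1.0.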
 In some embodiments, the HUD device 20 displays a distant virtual image V1 (for example, the virtual images V64 to V65 shown in FIG. 9) perceived at a position separated by a first distance from a reference point set on the vehicle side, and a near virtual image V2 (for example, the virtual images V61 to V63 shown in FIG. 9) perceived at a position separated by a second distance shorter than the first distance, and the processor 33 may display the distant virtual image V1 while switching between the first image correction process (step S160) and the second image correction process S170 according to whether a predetermined determination condition is satisfied, and display the near virtual image V2 with the second image correction process S170 regardless of whether the predetermined determination condition is satisfied. That is, the determination module 510 may include thresholds, table data, arithmetic expressions, and the like for determining whether each virtual image V is the distant virtual image V1 or the near virtual image V2, based on the position information of the real object 300 with which the virtual image V is associated, acquired from the vehicle exterior sensor 411, information on the perceived distance D30 set for the virtual image V based on the position information of the real object 300, and the like.
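 The classification performed by the determination module 510 could be reduced, for illustration, to a single perceived-distance threshold; the threshold value below is an assumption and not part of the disclosure.

    def classify_virtual_image(perceived_distance: float,
                               distance_threshold: float = 10.0) -> str:
        # Longer perceived distance D30 -> distant virtual image V1;
        # shorter perceived distance -> near virtual image V2.
        return "V1 (distant)" if perceived_distance >= distance_threshold else "V2 (near)"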
 In some embodiments, where the area in which the HUD device 20 can display the virtual image V is a virtual image display area VS, as shown in FIG. 9, the HUD device 20 displays an upper virtual image V60 displayed in an upper region VSα including the upper end VSu of the virtual image display area VS as viewed from the driver's seat of the vehicle, and a lower virtual image V70 displayed in a lower region VSβ that includes the lower end VSb of the virtual image display area VS and lies below the upper region VSα, and the processor 33 may display the upper virtual image V60 while switching between the first image correction process (step S160) and the second image correction process S170 according to whether a predetermined determination condition is satisfied, and display the lower virtual image V70 without correcting the image position based on the eye position or the face position.
 In some embodiments, as shown in FIG. 9, the HUD device 20 displays an AR virtual image V60 whose display position is changed according to the position of a real object existing in the foreground of the vehicle, and a non-AR virtual image V70 whose display position is not changed according to the position of the real object, and the processor 33 may display the AR virtual image V60 while switching between the first image correction process (step S160) and the second image correction process S170 according to whether a predetermined determination condition is satisfied, and display the non-AR virtual image V70 without correcting the image position based on the eye position or the face position.
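 Taking the last three paragraphs together, the per-virtual-image correction policy could be expressed as a small dispatch, as sketched below; the return strings and the boolean inputs are illustrative assumptions only.

    def correction_policy(is_ar_image: bool, determination_condition_met: bool) -> str:
        # AR virtual images (V60) switch between the first and second image
        # correction processes; non-AR virtual images (V70) are displayed
        # without any eye- or face-position-based correction.
        if not is_ar_image:
            return "no position correction"
        if determination_condition_met:
            return "second image correction process (S170)"
        return "first image correction process (S160)"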
 The operations of the processing described above can be implemented by executing one or more functional modules of an information processing device such as a general-purpose processor or an application-specific chip. These modules, combinations of these modules, and/or combinations with known hardware that can substitute for their functions are all included within the scope of protection of the present invention.
 The functional blocks of the vehicle display system 10 are optionally implemented by hardware, software, or a combination of hardware and software in order to carry out the principles of the various described embodiments. It will be understood by those skilled in the art that the functional blocks described in FIG. 11 may optionally be combined, or one functional block may be separated into two or more sub-blocks, in order to implement the principles of the described embodiments. Accordingly, the description herein optionally supports any possible combination or division of the functional blocks described herein.
Reference Signs List
1: vehicle
2: projection target part
5: dashboard
6: road surface
10: vehicle display system
20: HUD device (head-up display device)
21: light exit window
22: housing
24: light source unit
28: first actuator
29: second actuator
30: display control device
31: I/O interface
33: processor
35: image processing circuit
37: memory
40: display device
50: display
51: spatial light modulation element
52: optical layer
80: relay optical system
81: curved mirror
205: center
300: real object
401: vehicle ECU
403: road information database
405: vehicle position detection unit
407: operation detection unit
409: face detection unit
411: vehicle exterior sensor
413: brightness detection unit
417: portable information terminal
419: external communication device
502: eye position detection module
504: eye position estimation module
506: eye position prediction module
508: face detection module
510: determination module
511: eye position followability image processing module
512: vehicle state determination module
514: visibility control module
516: eye-following image processing module
518: graphic module
520: light source driving module
522: actuator driving module
PT: target position
Px: eye position (face position)
Py: eye position (face position)
V: virtual image
V1: distant virtual image
V2: near virtual image
V60: AR virtual image
V70: non-AR virtual image
V80: AR-related virtual image
VS: virtual image display area
Vx: change speed
Vy: change speed
ΔPx: change amount
ΔPy: change amount

Claims (12)

  1.  A display control device (30) that executes display control in a head-up display device (20) that includes a display (40) that displays an image and that projects light of the image displayed by the display (40) onto a projection target member so as to allow a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground, the display control device (30) comprising:
     one or more processors (33);
     a memory (37); and
     one or more computer programs stored in the memory (37) and configured to be executed by the one or more processors (33),
     wherein the processor (33):
      acquires eye position-related information including at least one of an eye position, a face position, and a face orientation of the user;
      displays an AR virtual image (V60) on the head-up display device (20);
      executes an eye-following image correction process that corrects the position of the image displayed on the display (40) based at least on the eye position-related information in order to adjust the display position of the AR virtual image (V60);
      determines, based on the eye position-related information, whether the eye position-related information or a detection operation of the eye position-related information satisfies a predetermined determination condition; and
      when it is determined that the determination condition is satisfied, executes a visibility reduction process that reduces the visibility of the AR virtual image (V60).
  2.  The display control device (30) according to claim 1, wherein the determination condition includes at least one of:
     a condition on a change speed of at least one of the eye position, the face position, and the face orientation;
     a condition on coordinates of at least one of the eye position, the face position, and the face orientation; and
     a condition on a movement time of at least one of the eye position, the face position, and the face orientation.
  3.  The display control device (30) according to claim 1, wherein the determination condition includes at least one of:
     that a change speed of at least one of the eye position, the face position, and the face orientation is fast;
     that coordinates of at least one of the eye position, the face position, and the face orientation are within a predetermined range; and
     that at least one of the eye position, the face position, and the face orientation changes continuously.
  4.  The display control device (30) according to claim 3, wherein a condition on the detection operation of the eye position-related information includes at least one of:
     that at least one of the eye position, the face position, and the face orientation cannot be detected; and
     that a decrease in detection accuracy of at least one of the eye position, the face position, and the face orientation has been detected.
  5.  The display control device (30) according to claim 1, wherein, in the visibility reduction process, the processor (33) reduces the visibility to a different level according to at least one of the eye position, the face position, and the face orientation.
  6.  The display control device (30) according to claim 1, wherein, in the visibility reduction process, the processor (33) reduces the visibility to a different level according to a change speed of at least one of the eye position, the face position, and the face orientation.
  7.  The display control device (30) according to claim 1, wherein the processor (33):
     when it is determined that the determination condition is not satisfied, executes a first eye-following image correction process that corrects the position of the image displayed on the display (40) based at least on the eye position or the face position; and
     when it is determined that the determination condition is satisfied, executes a second image correction process in which the position of the image displayed on the display (40) is corrected based at least on the eye position or the face position and a second correction amount of the position of the image with respect to an amount of change in the eye position or the face position is smaller than a first correction amount of the position of the image with respect to the same amount of change in the eye position or the face position in the first eye-following image correction process, or in which the correction amount of the position of the image with respect to at least one of the amount of change in the vertical eye position or face position and the amount of change in the left-right eye position or face position is set to zero.
  8.  The display control device (30) according to claim 1, wherein the processor (33):
     determines, based on the eye position-related information, whether the eye position-related information or the detection operation of the eye position-related information satisfies a predetermined cancellation condition; and
     when it is determined that the cancellation condition is satisfied, further executes a visibility increasing process that increases the visibility of the AR virtual image (V60) that had been subjected to the visibility reduction process.
  9.  The display control device (30) according to claim 8, wherein, in the visibility increasing process, the processor (33) increases the visibility to a different level according to at least one of the eye position, the face position, and the face orientation.
  10.  The display control device (30) according to claim 8, wherein, in the visibility increasing process, the processor (33) increases the visibility to a different level according to a change speed of at least one of the eye position, the face position, and the face orientation.
  11.  A head-up display device (20) that includes a display (40) that displays an image and that projects light of the image displayed by the display (40) onto a projection target member so as to allow a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground, the head-up display device (20) comprising:
     one or more processors (33);
     a memory (37); and
     one or more computer programs stored in the memory (37) and configured to be executed by the one or more processors (33),
     wherein the processor (33):
      acquires eye position-related information including at least one of an eye position, a face position, and a face orientation of the user;
      displays an AR virtual image (V60);
      executes an eye-following image correction process that corrects the position of the image displayed on the display (40) based at least on the eye position-related information in order to adjust the display position of the AR virtual image (V60);
      determines, based on the eye position-related information, whether the eye position-related information or a detection operation of the eye position-related information satisfies a predetermined determination condition; and
      when it is determined that the determination condition is satisfied, executes a visibility reduction process that reduces the visibility of the AR virtual image (V60).
  12.  A display control method for a head-up display device (20) that includes a display (40) that displays an image and that projects light of the image displayed by the display (40) onto a projection target member so as to allow a user of a vehicle to visually recognize a virtual image of the image superimposed on the foreground, the method comprising:
     acquiring eye position-related information including at least one of an eye position, a face position, and a face orientation of the user;
     displaying an AR virtual image (V60) on the head-up display device (20);
     executing an eye-following image correction process that corrects the position of the image displayed on the display (40) based at least on the eye position-related information in order to adjust the display position of the AR virtual image (V60);
     determining, based on the eye position-related information, whether the eye position-related information or a detection operation of the eye position-related information satisfies a predetermined determination condition; and
     when it is determined that the determination condition is satisfied, executing a visibility reduction process that reduces the visibility of the AR virtual image (V60).


PCT/JP2022/028492 2021-07-22 2022-07-22 Display control device, head-up display device, and display control method WO2023003045A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-121102 2021-07-22
JP2021121102 2021-07-22

Publications (1)

Publication Number Publication Date
WO2023003045A1 true WO2023003045A1 (en) 2023-01-26

Family

ID=84979338

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/028492 WO2023003045A1 (en) 2021-07-22 2022-07-22 Display control device, head-up display device, and display control method

Country Status (1)

Country Link
WO (1) WO2023003045A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014021673A (en) * 2012-07-17 2014-02-03 Toshiba Corp Image presentation apparatus and method
JP2014197052A (en) * 2013-03-29 2014-10-16 船井電機株式会社 Projector device and head-up display device
JP2016210212A (en) * 2015-04-30 2016-12-15 株式会社リコー Information providing device, information providing method and control program for information provision
JP2017206251A (en) * 2017-07-07 2017-11-24 日本精機株式会社 Vehicle information projection system
US20180204365A1 (en) * 2017-01-13 2018-07-19 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling the same

Similar Documents

Publication Publication Date Title
CN111433067B (en) Head-up display device and display control method thereof
JP6608146B2 (en) Virtually transparent instrument cluster with live video
KR20190028667A (en) Image generating apparatus, image generating method, and program
WO2020110580A1 (en) Head-up display, vehicle display system, and vehicle display method
US11525694B2 (en) Superimposed-image display device and computer program
US11803053B2 (en) Display control device and non-transitory tangible computer-readable medium therefor
JP2018077400A (en) Head-up display
JP2016109645A (en) Information providing device, information providing method, and control program for providing information
JP7255608B2 (en) DISPLAY CONTROLLER, METHOD, AND COMPUTER PROGRAM
WO2021132555A1 (en) Display control device, head-up display device, and method
WO2022230995A1 (en) Display control device, head-up display device, and display control method
US20210300183A1 (en) In-vehicle display apparatus, method for controlling in-vehicle display apparatus, and computer program
WO2023048213A1 (en) Display control device, head-up display device, and display control method
WO2021200914A1 (en) Display control device, head-up display device, and method
WO2023003045A1 (en) Display control device, head-up display device, and display control method
WO2020158601A1 (en) Display control device, method, and computer program
JP2022072954A (en) Display control device, head-up display device, and display control method
JP2022190724A (en) Display control device, head-up display device and display control method
JP2020121607A (en) Display control device, method and computer program
JP2021056358A (en) Head-up display device
WO2021200913A1 (en) Display control device, image display device, and method
JP2020117105A (en) Display control device, method and computer program
JP2022113292A (en) Display control device, head-up display device, and display control method
JP2020121704A (en) Display control device, head-up display device, method and computer program
JP2022077138A (en) Display controller, head-up display device, and display control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22845978

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE