WO2021132555A1 - Display control device, head-up display device, and method - Google Patents

Display control device, head-up display device, and method

Info

Publication number
WO2021132555A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
display
vehicle
real object
Prior art date
Application number
PCT/JP2020/048680
Other languages
French (fr)
Japanese (ja)
Inventor
Yuki Masuya
Hiroshi Hirasawa
Takashi Nakamura
Kazuo Morohashi
Original Assignee
Nippon Seiki Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Seiki Co., Ltd.
Priority to JP2021567664A (patent JP7459883B2)
Publication of WO2021132555A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement of adaptations of instruments
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09FDISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • The present disclosure relates to a display control device, a head-up display device, and a method used in a vehicle to superimpose an image on the foreground of the vehicle so that the image is visually recognized overlapping it.
  • Patent Document 1 describes a head-up display device that expresses, with a perspective image, a virtual object as if it actually existed in the foreground (actual view) of the own vehicle, thereby producing augmented reality (AR: Augmented Reality).
  • A head-up display device can typically display a virtual image of an image only in a limited area (virtual image display area) seen by the viewer. Even assuming the position of the virtual image display area is fixed when the eye height (eye position) of the driver of the own vehicle changes (strictly, the position of the virtual image display area also shifts when the eye height (eye position) changes), the area of the actual view outside the own vehicle that overlaps the virtual image display area seen by the viewer differs according to the driver's eye height (eye position).
  • For example, the real scene area that the virtual image display area overlaps when viewed from a position higher than the reference eye height is a region below the reference real scene area as seen by the viewer (in terms of distance, a real scene area nearer than the reference real scene area).
  • Conversely, the real scene area that the virtual image display area overlaps when viewed from a position lower than the reference eye height is a region above the reference real scene area as seen by the viewer (in terms of distance, a real scene area farther than the reference real scene area).
  • Because the actual view area overlapped by the virtual image display area thus differs with eye height (eye position), even if the positional relationship between the own vehicle and a real object is constant, the real object may be included in the virtual image display area for a viewer with a low eye height, so that the virtual object corresponding to the real object is displayed, whereas for a viewer with a high eye height the real object is not included in the virtual image display area, so that the virtual object corresponding to the real object is not displayed.
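  • As a rough illustration of this geometry (a sketch under assumed dimensions, not values from the disclosure), the following projects the upper and lower edges of a virtual image display area onto the road surface from a given eye height; raising the eye moves the overlapped road region closer to the vehicle, consistent with the above.

```python
def overlapped_road_region(eye_height_m, display_top_m, display_bottom_m,
                           display_distance_m):
    """Project the display area's top/bottom edges from the eye onto the
    road surface (y = 0) and return the (near, far) road distances that
    the virtual image display area overlaps, as seen from that eye height.

    Heights are above the road; the display area is modeled as a vertical
    segment display_distance_m ahead of the eye.
    """
    def ground_hit(edge_height_m):
        # Sight-line slope per metre of forward travel through this edge.
        slope = (edge_height_m - eye_height_m) / display_distance_m
        if slope >= 0:            # Ray at or above horizontal:
            return float("inf")   # it overlaps the sky, never the road.
        return -eye_height_m / slope  # Forward distance where it hits y = 0.

    return ground_hit(display_bottom_m), ground_hit(display_top_m)

# A higher eye sees the display area overlap a nearer patch of road.
print(overlapped_road_region(1.2, 1.0, 0.6, 2.5))  # ≈ (5.0 m, 15.0 m)
print(overlapped_road_region(1.5, 1.0, 0.6, 2.5))  # ≈ (4.2 m, 7.5 m)
```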
  • The outline of the present disclosure relates to making it easier to recognize information about a real object even if the position of the virtual image display area or the eye position changes. More specifically, it also relates to suppressing variation in the information presented even when the position of the virtual image display area or the eye position of the viewer differs.
  • The display control device described in the present specification controls an image display unit that displays a virtual image of an image in a display area that overlaps the foreground when viewed from an eye box in the vehicle.
  • One or more I/O interfaces acquire at least one of the position of a real object existing around the vehicle, the position of the display area, the observer's eye position in the eye box, the attitude of the vehicle, or information from which these can be estimated.
  • One or more processors, executing one or more computer programs stored in one or more memories, execute instructions to display a virtual image of an image of a first aspect corresponding to the real object when the position of the real object is within a first determination real scene area, to display a virtual image of an image of a second aspect corresponding to the real object when the position of the real object is within a second determination real scene area, and to expand the range of the second determination real scene area based on at least one of the position of the display area, the eye position, the attitude of the vehicle, or information from which these can be estimated. A sketch of this selection logic follows.
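  • As a minimal sketch of this control flow (the area boundaries, names, and expansion rule below are illustrative assumptions, not taken from the disclosure), the choice between the two image aspects and the expansion of the second determination area could look like this:

```python
from dataclasses import dataclass

@dataclass
class RealSceneArea:
    near_m: float  # Near edge, as forward distance on the road.
    far_m: float   # Far edge.

    def contains(self, distance_m: float) -> bool:
        return self.near_m <= distance_m <= self.far_m

def select_image_aspect(object_distance_m: float,
                        first_area: RealSceneArea,
                        second_area: RealSceneArea,
                        eye_height_offset_m: float) -> str:
    """Pick the image aspect for a real object, expanding the second
    determination real scene area when the eye position deviates from a
    reference (hypothetical rule: 50 m of growth per metre of offset)."""
    growth = 50.0 * abs(eye_height_offset_m)
    expanded = RealSceneArea(second_area.near_m - growth,
                             second_area.far_m + growth)
    if first_area.contains(object_distance_m):
        return "first aspect (AR image in the first display area)"
    if expanded.contains(object_distance_m):
        return "second aspect (image for objects outside the first area)"
    return "no display"

first = RealSceneArea(20.0, 60.0)
second = RealSceneArea(60.0, 100.0)
print(select_image_aspect(30.0, first, second, 0.0))   # first aspect
print(select_image_aspect(110.0, first, second, 0.0))  # no display
print(select_image_aspect(110.0, first, second, 0.3))  # second aspect
```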
  • FIG. 1 is a diagram showing an application example of a vehicle display system.
  • FIG. 2 is a diagram showing a configuration of an image display unit.
  • FIG. 3 is a diagram showing a foreground and a virtual image of the image of the first aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 4 is a diagram showing a foreground and a virtual image of the image of the first aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 5 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 6 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 7A is a diagram showing a real object and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 7B is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 7A, together with the real object and the virtual image of the image of the second aspect that are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 7C is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 7B.
  • FIG. 8A is a diagram showing a real object and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 8B is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 8A.
  • FIG. 8C is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 8B.
  • FIG. 9 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 10 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 11 is a block diagram of a vehicle display system.
  • FIG. 12A is a diagram showing, as viewed from the left-right direction (X-axis direction) of the vehicle, the positional relationship between the eye box, the first display area in which the virtual image of the image of the first aspect is displayed, the first determination actual scene area, and the second determination actual scene area.
  • FIG. 12B is a diagram showing, as viewed from the left-right direction (X-axis direction) of the vehicle, the positional relationship between the eye box, the first display area in which the virtual image of the image of the first aspect is displayed, the first determination actual scene area, and the second determination actual scene area.
  • FIG. 13A is a diagram showing the positional relationship between the first display area, the first determination actual scene area, and the second determination actual scene area when viewed from the left-right direction (X-axis direction) of the vehicle.
  • FIG. 13B is a diagram showing a situation in which the first display area is arranged below FIG. 13A.
  • FIG. 13C is a diagram showing a situation in which the first display area is arranged below FIG. 13B.
  • FIG. 14A is a diagram showing the positional relationship between the first display area, the first determination actual scene area, and the second determination actual scene area when viewed from the left-right direction (X-axis direction) of the vehicle.
  • FIG. 14B is a diagram showing a situation in which the eye position of the viewer is arranged on the upper side of FIG. 14A.
  • FIG. 14C is a diagram showing a situation in which the eye position of the viewer is arranged above FIG. 14B.
  • FIG. 15A is a diagram showing the positional relationship between the first display area, the first determination actual scene area, and the second determination actual scene area when viewed from the left-right direction (X-axis direction) of the vehicle.
  • FIG. 15B is a diagram showing a situation in which the posture of the vehicle is tilted forward as compared with FIG. 15A.
  • FIG. 16A is the same situation as FIG. 13B and, taking the position of the first display area in FIG. 13A as the reference display area, shows an expanded aspect of the second determination actual scene area when the first display area is arranged below the reference display area.
  • FIG. 16B shows an enlarged aspect of the second determination actual scene area.
  • FIG. 16C shows an enlarged aspect of the second determination actual scene area.
  • FIG. 16D shows an enlarged aspect of the second determination actual scene area.
  • FIG. 17A is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17B is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17C is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17D is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17E is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17F is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17G is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 18A is a flowchart showing a method of performing an operation of displaying a virtual image of an image of the first aspect or the second aspect with respect to a real object existing in a real view outside the vehicle according to some embodiments.
  • FIG. 18B is a flowchart following FIG. 18A.
  • FIGS. 1, 2, and 11 provide a description of the configuration of an exemplary vehicle display system.
  • FIGS. 3 to 10 provide display examples.
  • FIGS. 12A to 18B show exemplary operations.
  • the present invention is not limited to the following embodiments (including the contents of the drawings). Of course, changes (including deletion of components) can be made to the following embodiments. Further, in the following description, in order to facilitate understanding of the present invention, description of known technical matters will be omitted as appropriate.
  • the vehicle display system 10 of the present embodiment includes an image display unit 20, a display control device 30 that controls the image display unit 20, and electronic devices 401 to 417 connected to the display control device 30.
  • the image display unit 20 in the vehicle display system 10 is a head-up display (HUD: Head-Up Display) device provided in the dashboard 5 of the vehicle 1.
  • The image display unit 20 emits the display light 40 toward the front windshield 2 (an example of the projected unit), and the front windshield 2 reflects the display light 40 of the image M displayed by the image display unit 20 toward the eye box 200.
  • The viewer can thereby visually recognize the virtual image V of the image M displayed by the image display unit 20 at a position overlapping the foreground, which is the real space visually recognized through the front windshield 2.
  • In the following description, the left-right direction of the vehicle 1 is the X-axis direction (the left side when facing the front of the vehicle 1 is the X-axis positive direction), the vertical direction is the Y-axis direction (the upper side of a vehicle traveling on the road surface is the Y-axis positive direction), and the front-rear direction of the vehicle 1 is the Z-axis direction (the front of the vehicle 1 is the Z-axis positive direction).
  • the "eye box" used in the description of the present embodiment is (1) a region in which at least a part of the virtual image V of the image M is visible in the region, and a part of the virtual image V of the image M is not visible outside the region, (2). ) In the region, at least a part of the virtual image V of the image M can be visually recognized at a predetermined brightness or higher, and outside the region, the entire virtual image V of the image M is less than the predetermined brightness, or (3) the image display unit 20.
  • the image display unit 20 When can display a virtual image V that can be viewed stereoscopically, at least a part of the virtual image V can be viewed stereoscopically, and a part of the virtual image V is not stereoscopically viewed outside the region.
  • the predetermined brightness is, for example, about 1/50 of the brightness of the virtual image of the image M visually recognized at the center of the eye box.
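  • As a toy check of criterion (2) above (the luminance values are made up; only the 1/50 ratio comes from the text):

```python
def inside_eyebox_by_brightness(luminance_at_eye, luminance_at_center,
                                ratio=1.0 / 50.0):
    """Criterion (2): the position counts as inside the eye box if part of
    the virtual image is seen at or above the predetermined brightness."""
    return luminance_at_eye >= ratio * luminance_at_center

print(inside_eyebox_by_brightness(3.0, 100.0))  # True  (3 >= 100/50)
print(inside_eyebox_by_brightness(1.0, 100.0))  # False (1 <  100/50)
```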
  • the display area 100 is an area of a plane, a curved surface, or a partially curved surface in which the image M generated inside the image display unit 20 forms an image as a virtual image V, and is also called an image forming surface.
  • The display area 100 is the position at which the display surface (for example, the exit surface of a liquid crystal display panel) 21a of the display 21 of the image display unit 20, described later, is imaged as a virtual image; that is, the display area 100 corresponds to the display surface 21a of the display 21 (in other words, the display area 100 has a conjugate relationship with the display surface 21a of the display 21), and the virtual image visually recognized in the display area 100 corresponds to the image displayed on the display surface 21a.
  • For the display area 100, the angle it forms with the horizontal direction (XZ plane) about the left-right direction (X-axis direction) of the vehicle 1 is defined as the tilt angle θt in FIG. 1. Further, the angle formed between the line segment connecting the center 205 of the eye box 200 with the upper end 101 of the display area 100 and the line segment connecting the center 205 of the eye box 200 with the lower end 102 of the display area 100 is defined as the vertical angle of the display area 100, and the angle formed between the bisector of this vertical angle and the horizontal direction (XZ plane) is set as the vertical arrangement angle θv in FIG. 1.
  • The display area 100 of the present embodiment has a tilt angle θt of approximately 90 [degrees] so as to substantially face the front (Z-axis positive direction).
  • The tilt angle θt is not limited to this and can be changed within the range 0 ≤ θt ≤ 90 [degrees].
  • For example, the tilt angle θt may be set to 60 [degrees], and the display area 100 may be arranged so that its upper region is farther from the viewer than its lower region.
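  • For illustration (a sketch in an assumed vehicle-fixed section plane; the dimensions are not from the disclosure), the vertical arrangement angle θv can be computed from the eye box center and the upper and lower ends of the display area:

```python
import math

def vertical_arrangement_angle(eyebox_center, display_top, display_bottom):
    """Angle [degrees] between the horizontal (XZ plane) and the bisector
    of the vertical angle that the display area subtends at the eye box
    center. Points are (z_forward_m, y_up_m) pairs.
    """
    def elevation(p):
        dz, dy = p[0] - eyebox_center[0], p[1] - eyebox_center[1]
        return math.atan2(dy, dz)  # Elevation of the sight line to p.

    # The bisector's elevation is the mean of the two edge elevations.
    return math.degrees(0.5 * (elevation(display_top) +
                               elevation(display_bottom)))

# Eye box center 1.2 m high; display area 2.5 m ahead, 0.6-1.0 m high.
print(vertical_arrangement_angle((0.0, 1.2), (2.5, 1.0), (2.5, 0.6)))  # ≈ -9.0
```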
  • FIG. 2 is a diagram showing the configuration of the HUD device 20 of the present embodiment.
  • the HUD device 20 includes a display 21 having a display surface 21a for displaying the image M, and a relay optical system 25.
  • the display 21 of FIG. 2 is composed of a liquid crystal display panel 22 and a light source unit 24.
  • the display surface 21a is a surface on the visual side of the liquid crystal display panel 22, and emits the display light 40 of the image M.
  • By setting the angle of the display surface 21a with respect to the optical axis 40p of the display light 40, which travels from the center of the display surface 21a toward the eye box 200 (the center of the eye box 200) via the relay optical system 25 and the projected portion, the angle of the display area 100 (including the tilt angle θt) can be set.
  • The relay optical system 25 is arranged on the optical path of the display light 40 emitted from the display 21 (the light traveling from the display 21 toward the eye box 200) and is composed of one or more optical members that project the display light 40 from the display 21 onto the front windshield 2 outside the HUD device 20.
  • the relay optical system 25 of FIG. 2 includes one concave first mirror 26 and one flat second mirror 27.
  • the first mirror 26 has, for example, a free curved surface shape having positive optical power.
  • The first mirror 26 may have a curved surface shape whose optical power differs for each region; that is, the optical power added to the display light 40 may differ according to the region (optical path) through which the display light 40 passes.
  • That is, the first image light 41, the second image light 42, and the third image light 43 (see FIG. 2), which travel from different regions of the display surface 21a toward the eye box 200, may be given different optical powers by the relay optical system 25.
  • The second mirror 27 is, for example, a flat mirror, but is not limited to this and may be a curved surface having optical power. That is, by combining a plurality of mirrors (for example, the first mirror 26 and the second mirror 27 of the present embodiment), the relay optical system 25 may add different optical power according to the region (optical path) through which the display light 40 passes.
  • The second mirror 27 may be omitted; that is, the display light 40 emitted from the display 21 may be reflected by the first mirror 26 onto the projected portion (front windshield) 2.
  • In the present embodiment, the relay optical system 25 includes two mirrors, but the present invention is not limited to this; in addition to or instead of these, it may include one or more refractive optical members such as lenses, diffractive optical members such as holograms, reflective optical members, or combinations thereof.
  • The relay optical system 25 of the present embodiment has, by its curved surface shape (an example of optical power), a function of setting the distance to the display area 100 and a function of generating a virtual image in which the image displayed on the display surface 21a is enlarged; in addition to these, it may have a function of suppressing (correcting) distortion of the virtual image that can occur due to the curved shape of the front windshield 2.
  • The relay optical system 25 may be rotatable, with actuators 28 and 29 controlled by the display control device 30 attached to it; this will be described later.
  • the liquid crystal display panel 22 receives light from the light source unit 24 and emits the spatial light-modulated display light 40 toward the relay optical system 25 (second mirror 27).
  • the liquid crystal display panel 22 has, for example, a rectangular shape whose short side is the direction in which the pixels corresponding to the vertical direction (Y-axis direction) of the virtual image V seen from the viewer are arranged.
  • the viewer visually recognizes the transmitted light of the liquid crystal display panel 22 via the virtual image optical system 90.
  • the virtual image optical system 90 is a combination of the relay optical system 25 shown in FIG. 2 and the front windshield 2.
  • the light source unit 24 is composed of a light source (not shown) and an illumination optical system (not shown).
  • the light source (not shown) is, for example, a plurality of chip-type LEDs, and emits illumination light to a liquid crystal display panel (an example of a spatial light modulation element) 22.
  • the light source unit 24 is composed of, for example, four light sources, and is arranged in a row along the long side of the liquid crystal display panel 22.
  • the light source unit 24 emits illumination light toward the liquid crystal display panel 22 under the control of the display control device 30.
  • the configuration of the light source unit 24 and the arrangement of the light sources are not limited to this.
  • The illumination optical system is composed of, for example, one or a plurality of lenses (not shown) arranged in the emission direction of the illumination light of the light source unit 24, and a diffuser plate (not shown) arranged in the emission direction of the one or plurality of lenses.
  • the display 21 may be a self-luminous display or a projection type display that projects an image on a screen.
  • the display surface 21a is the screen of the projection type display.
  • An actuator (not shown) including a motor controlled by the display control device 30 may be attached to the display 21 so that the display surface 21a can be moved and/or rotated.
  • the relay optical system 25 has two rotation axes (first rotation axis AX1 and second rotation axis AX2) that move the eyebox 200 in the vertical direction (Y-axis direction).
  • Each of the first rotation axis AX1 and the second rotation axis AX2 is set, with the HUD device 20 attached to the vehicle 1, so as not to be perpendicular to the left-right direction (X-axis direction) of the vehicle 1 (in other words, not parallel to the YZ plane).
  • Specifically, the angles of the first rotation axis AX1 and the second rotation axis AX2 with respect to the left-right direction (X-axis direction) of the vehicle 1 are each set to less than 45 [degrees], and more preferably to less than 20 [degrees].
  • With rotation of the relay optical system 25 on the first rotation axis AX1, the amount of vertical movement of the display area 100 is relatively small and the amount of vertical movement of the eye box 200 is relatively large.
  • With rotation of the relay optical system 25 on the second rotation axis AX2, the amount of vertical movement of the display area 100 is relatively large and the amount of vertical movement of the eye box 200 is relatively small. That is, comparing the first rotation axis AX1 and the second rotation axis AX2, the ratio "vertical movement amount of the eye box 200 / vertical movement amount of the display area 100" produced by rotation on the first rotation axis AX1 differs from that produced by rotation on the second rotation axis AX2.
  • the HUD device 20 includes a first actuator 28 that rotates the first mirror 26 on the first rotation axis AX1 and a second actuator 29 that rotates the first mirror 26 on the second rotation axis AX2.
  • the HUD device 20 rotates one relay optical system 25 on two axes (first rotation axis AX1 and second rotation axis AX2).
  • the first actuator 28 and the second actuator 29 may be composed of one integrated two-axis actuator.
  • The HUD device 20 in another embodiment rotates two members of the relay optical system 25 on the two axes (the first rotation axis AX1 and the second rotation axis AX2).
  • For example, the HUD device 20 may include a first actuator 28 that rotates the first mirror 26 on the first rotation axis AX1 and a second actuator 29 that rotates the second mirror 27 on the second rotation axis AX2.
  • As long as rotation on the first rotation axis AX1 moves the eye box 200 relatively far in the vertical direction and rotation on the second rotation axis AX2 moves the display area 100 relatively far in the vertical direction, the arrangement of the first rotation axis AX1 and the second rotation axis AX2 is not limited to these. Further, the drive by the actuators may include translation in addition to or instead of rotation. A numerical sketch of the two-axis behavior follows.
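  • For illustration only (the sensitivity numbers are made-up assumptions; the disclosure states only that the two ratios differ), treating the two axes as a small-angle linear system shows how distinct sensitivities let the eye box be moved while the display area stays put:

```python
import numpy as np

# Assumed sensitivities [mm of vertical shift per degree of rotation]:
# rows = (eye box, display area), columns = (AX1, AX2).
# AX1 mainly moves the eye box; AX2 mainly moves the display area.
S = np.array([[10.0, 2.0],
              [1.5, 8.0]])

def solve_rotations(eyebox_shift_mm, display_shift_mm):
    """Rotations [deg] on (AX1, AX2) that produce the requested shifts."""
    return np.linalg.solve(S, np.array([eyebox_shift_mm, display_shift_mm]))

# Raise the eye box 20 mm while leaving the display area where it is.
ax1_deg, ax2_deg = solve_rotations(20.0, 0.0)
print(f"AX1: {ax1_deg:+.2f} deg, AX2: {ax2_deg:+.2f} deg")
```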
  • the HUD device 20 in another embodiment does not have to drive the relay optical system 25.
  • That is, the HUD device 20 may not have an actuator that moves and/or rotates the relay optical system 25.
  • In this case, the HUD device 20 of this embodiment may include a wide eye box 200 that covers the range of driver eye heights expected for use of the vehicle 1.
  • Based on control by the display control device 30 described later, the image display unit 20 displays an image at a position overlapping, or at a position set with reference to, a real object 300 existing in the foreground (the real space (actual view) visually recognized through the front windshield 2 of the vehicle 1), such as the road surface 310 of the traveling lane, a branch road 330, a road sign, an obstacle (a pedestrian 320, a bicycle, a motorcycle, another vehicle, or the like), or a feature (a building, a bridge, or the like).
  • In this way, visual augmented reality (AR) can be perceived by the viewer (typically, a viewer sitting in the driver's seat of the vehicle 1).
  • In the present specification, an image whose display position can change according to the position of the real object 300 existing in the real scene is defined as an AR image, and an image whose display position is set regardless of the position of the real object 300 is defined as a non-AR image. An example of an AR image will be described below.
  • FIGS. 3 and 4 are diagrams showing a foreground that is visually recognized when a viewer faces forward from inside the vehicle, and an AR image of the first aspect that is visually recognized overlapping the foreground.
  • the AR image of the first aspect is displayed with respect to the real object 300 which is visible inside the display area 100 when viewed from the viewer.
  • the "first aspect of the image” is an image displayed in the first display area 150 described later in the display area 100, and is a predetermined position in the eye box 200 (for example, at the center 205).
  • the present invention is not limited to this.) It is an aspect of the image when it is displayed with respect to a real object existing in the real scene area overlapping with the first display area 150.
  • the virtual image of the image of the first aspect can be expressed as overlapping with the real object, surrounding the real object, approaching the real object, and the like when viewed from the viewer.
  • the "second aspect of the image” described later with respect to the "first aspect of the image” refers to a real object existing outside the real scene area overlapping the first display area 150 described later when viewed from the viewer. This is the aspect of the image when it is displayed.
  • the real object (pedestrian) 320 exists in the real scene area that overlaps with the display area 100 (first display area 150) seen by the viewer.
  • The image display unit 20 of the present embodiment displays the virtual image V10 (V11, V12, V13) of the AR image of the first aspect for the pedestrian 320 existing in the actual view area overlapping the display area 100 seen by the viewer.
  • The virtual image V11 is a rectangular image arranged so as to surround the pedestrian 320 and indicate its position from the outside (an example of being arranged in the vicinity of the real object 300); the virtual image V12 indicates the type of the real object 300 (a pedestrian); and the third virtual image V13 has an arrow shape indicating the moving direction of the pedestrian 320 and is displayed at a position shifted toward the moving-direction side of the pedestrian 320 (an example of being arranged at a position set with reference to the real object 300).
  • Although the display area 100 is drawn as a rectangle in FIG. 3, as described above the display area 100 itself is so low in visibility that it is not actually visible, or is difficult to see, for the viewer.
  • That is, the virtual images V11, V12, and V13 of the image M displayed on the display surface 21a of the display 21 are clearly visible, while the virtual image of the display surface 21a itself (the virtual image of the area where the image M is not displayed) is not visible (or hard to see).
  • the real object (branch path) 330 exists in the real scene area that overlaps with the display area 100 as seen by the viewer.
  • the image display unit 20 of the present embodiment displays the virtual image V10 (V14) of the AR image of the first aspect with respect to the branch path 330 existing in the actual scene area overlapping the display area 100 seen by the viewer.
  • the virtual image V14 is arranged at a position where an arrow-shaped virtual object indicating a guide path is overlapped with the road surface 310 and the branch road 330 in the foreground of the vehicle 1 when viewed from the viewer.
  • the virtual image V14 is an image in which the arrangement (angle) is set so that the angle formed by the road surface 310 is visually recognized as 0 [degree] (in other words, parallel to the road surface 310).
  • The guidance route here indicates going straight and then turning right at the branch road 330: the portion up to the branch road overlaps the road surface 310 of the traveling lane of the vehicle 1 as seen by the viewer and indicates straight ahead toward the branch road 330 (Z-axis positive direction), while the portion indicating the guide path beyond the branch road 330 indicates the right direction (X-axis negative direction) so as to overlap the road surface 310 of the branch road in the right-turn direction as seen by the viewer.
  • FIGS. 5, 6, and 7 are diagrams showing a foreground that is visually recognized when a viewer faces forward from inside the vehicle, and a virtual image of an AR image of the second aspect that is visually recognized overlapping the foreground.
  • the virtual image of the AR image of the second aspect is displayed on the real object 300 that is visible outside the display area 100 (an example of the first display area 150 described later) when viewed from the viewer.
  • The image display unit 20 displays the virtual image V20 (V21), which is the AR image of the second aspect, in the wide area (outer edge area) 110 along the upper, lower, left, and right outer edges of the display area 100.
  • the display control device 30, which will be described later, arranges the virtual image V21 near the pedestrian 320 existing outside the display area 100 when viewed from the viewer.
  • the virtual image V21 is, for example, a ripple image based on the position of the pedestrian 320, and may be a still image or a moving image.
  • the virtual image V21 may have a shape or movement that indicates the direction of the pedestrian 320, but may not have the shape or movement.
  • the aspect of the virtual image V21 which is the AR image of the second aspect is not limited to this, and may be an arrow, a text, and / or a mark.
  • By displaying the virtual image V21, the AR image of the second aspect, in the part of the outer edge region 110 of the display area 100 that is close to the pedestrian 320, the display control device 30 can make it easier for the viewer to understand which real object the virtual image V21 is linked to.
  • The image display unit 20 displays the virtual image V20 (V22), which is the AR image of the second aspect, in a predetermined area (fixed area) 120 in the display area 100.
  • the fixed area 120 is set in the lower area of the center of the display area 100.
  • the display control device 30, which will be described later, arranges a virtual image V22 having a shape and / or a movement indicating the pedestrian 320 existing outside the display area 100 when viewed from the viewer in the fixed area 120.
  • the virtual image V22 is, for example, a ripple image based on the position of the pedestrian 320, and may be a still image or a moving image.
  • The aspect of the virtual image V22, the AR image of the second aspect, is not limited as long as it includes a shape and/or movement indicating the pedestrian 320 existing outside the display area 100; it may be composed of one or more arrows, texts, and/or marks, and the like.
  • In this way, by displaying, in the predetermined fixed area 120, the AR image of the second aspect including a shape and/or movement indicating the pedestrian 320 existing outside the display area 100, the display control device 30 can make the viewer aware of the pedestrian 320.
  • Note that the fixed area 120 is not completely fixed: it may be changed depending on the layout of the plurality of images displayed on the image display unit 20, on the state of the actual scene acquired from the I/O interface described later, or on the state of the vehicle 1.
  • FIGS. 7A, 7B, and 7C are diagrams showing a transition in which the size (an example of a display mode) of the virtual image V20 (V23), the AR image of the second aspect, changes according to the position of the real object 340 located outside the display area 100 as seen by the viewer.
  • As the vehicle 1 advances, the position of the real object 340 as seen by the viewer gradually moves to the left side (X-axis positive direction) and toward the near side (Z-axis negative direction), in the order of FIGS. 7A, 7B, and 7C.
  • The image display unit 20, described later, may gradually move the virtual image V23 to the left side (X-axis positive direction) so as to follow the movement of the real object 340 to the left side (X-axis positive direction). Further, the image display unit 20 may gradually increase the size of the virtual image V23 so as to follow the movement of the real object 340 toward the near side (Z-axis negative direction). That is, the image display unit 20 may change the position and/or size (examples of the display mode) of the virtual image V23, the AR image of the second aspect, according to the position of the real object 340.
  • FIGS. 8A, 8B, and 8C are diagrams showing a transition in which the brightness (an example of a display mode) of the virtual image V20 (V23), the AR image of the second aspect, changes according to the position of the real object 340 located outside the display area 100 as seen by the viewer.
  • As the vehicle 1 advances, the position of the real object 340 as seen by the viewer gradually moves to the left side (X-axis positive direction) and toward the near side (Z-axis negative direction), in the order of FIGS. 8A, 8B, and 8C.
  • The image display unit 20, described later, may gradually move the virtual image V23 to the left side (X-axis positive direction) so as to follow the movement of the real object 340 to the left side (X-axis positive direction). Further, the image display unit 20 may gradually reduce the brightness of the virtual image V23 so as to follow the movement of the real object 340 toward the near side (Z-axis negative direction); note that this description does not exclude gradually increasing the brightness instead. That is, the image display unit 20 may change the position and/or brightness (examples of the display mode) of the virtual image V23, the AR image of the second aspect, according to the position of the real object 340.
  • The image display unit 20, described later, may also change the display mode of the virtual image V23, the AR image of the second aspect, according to information such as information about the vehicle 1, information about the occupants of the vehicle 1, information other than the position of the real object for which the virtual image is displayed, and/or the position of a real object that is not the target of the virtual image.
  • The change in the display mode of the virtual image referred to here may include, in addition to those described above, a change in color, a change in brightness, switching between steady lighting and blinking, and/or switching between display and non-display. A sketch of a distance-dependent display mode follows.
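  • As a minimal sketch of such distance-dependent display modes (the ranges and the linear rules are illustrative assumptions), the size and brightness of the second-aspect image could be interpolated from the real object's distance:

```python
def second_aspect_display_mode(object_distance_m,
                               near_m=5.0, far_m=50.0,
                               min_scale=0.5, max_scale=1.5):
    """Return (scale, brightness) for the second-aspect image: nearer
    objects get a larger image (FIGS. 7A-7C) and, in the fading variant
    (FIGS. 8A-8C), a lower brightness."""
    t = (object_distance_m - near_m) / (far_m - near_m)
    t = min(max(t, 0.0), 1.0)                        # 0 = nearest, 1 = farthest.
    scale = max_scale - t * (max_scale - min_scale)  # Bigger when near.
    brightness = t                                   # Dimmer when near.
    return scale, brightness

for d in (45.0, 25.0, 8.0):  # A real object approaching the vehicle.
    print(d, second_aspect_display_mode(d))
```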
  • FIG. 9 is a diagram illustrating a virtual image of the non-AR image of the second aspect.
  • the real object (branch path) 330 exists outside the real scene area that overlaps with the display area 100 as seen by the viewer.
  • The image display unit 20 of the present embodiment displays, in the predetermined area (fixed area) 120 within the display area 100, the virtual image V30 (V31, V32) of the non-AR image of the second aspect for the branch road 330 existing in the actual view area that does not overlap the display area 100 seen by the viewer.
  • The display control device 30, described later, displays in the fixed area 120 a virtual image V31, which is a non-AR image showing the guide path (here, a right turn), and a virtual image V32, which is a non-AR image showing the distance to the branch road.
  • the "non-AR image” referred to here is an image that does not change the position of the image or the direction to be instructed according to the position of the real object existing in the real scene in the real space.
  • the virtual image V31 is an arrow image showing the right turn direction, but the displayed position and the direction to be indicated are determined according to the position of the branch road 330 (in other words, according to the positional relationship between the vehicle 1 and the branch road 330).
  • the non-AR image of the second aspect is not limited to this as long as it includes information about the pedestrian 320 existing outside the display area 100, and one or more texts and / Or may be composed of a mark or the like.
  • FIG. 10 is a diagram showing an example in which a virtual image V33 composed of a mark, a non-AR image of the second aspect, is displayed for a pedestrian 320 existing outside the actual scene area that overlaps the display area 100 seen by the viewer.
  • The image display unit 20 of the present embodiment displays, in the fixed area 120, the virtual image V30 (V33) of the non-AR image of the second aspect for the pedestrian 320 existing outside the actual view area overlapping the display area 100 seen by the viewer.
  • In this way, the display control device 30 notifies the viewer of the presence of a real object (the pedestrian 320 or the branch road 330) existing outside the display area 100 by displaying the virtual image V30 (V31, V32, V33) of the non-AR image of the second aspect in the predetermined fixed area 120.
  • FIG. 11 is a block diagram of the vehicle display system 10 according to some embodiments.
  • the display control device 30 includes one or more I / O interfaces 31, one or more processors 33, one or more image processing circuits 35, and one or more memories 37.
  • the various functional blocks shown in FIG. 11 may consist of hardware, software, or a combination of both.
  • FIG. 11 shows only one embodiment, and the illustrated components may be combined with a smaller number of components, or there may be additional components.
  • the image processing circuit 35 (for example, a graphic processing unit) may be included in one or more processors 33.
  • The processor 33 and the image processing circuit 35 are operably connected to the memory 37. More specifically, by executing a program stored in the memory 37, the processor 33 and the image processing circuit 35 can, for example, generate and/or transmit image data and thereby operate the vehicle display system 10 (image display unit 20).
  • The processor 33 and/or the image processing circuit 35 may include at least one general-purpose microprocessor (for example, a central processing unit (CPU)), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array (FPGA), or any combination thereof.
  • The memory 37 includes any type of magnetic medium such as a hard disk, any type of optical medium such as CDs and DVDs, and any type of semiconductor memory such as volatile memory and non-volatile memory.
  • the volatile memory may include DRAM and SRAM, and the non-volatile memory may include ROM and NVRAM.
  • the processor 33 is operably connected to the I / O interface 31.
  • The I/O interface 31 communicates with, for example, the vehicle ECU 401 described later and other electronic devices provided in the vehicle (reference numerals 403 to 417 described later) in accordance with the CAN (Controller Area Network) standard (so-called CAN communication).
  • The communication standard adopted by the I/O interface 31 is not limited to CAN; it includes in-vehicle communication (internal communication) interfaces such as wired communication interfaces, for example CANFD (CAN with Flexible Data Rate), LIN (Local Interconnect Network), Ethernet (registered trademark), MOST (Media Oriented Systems Transport; MOST is a registered trademark), UART, or USB, and short-range wireless communication interfaces with a range of several tens of meters, such as a personal area network (PAN) using Bluetooth (registered trademark) or a local area network (LAN) using 802.11x Wi-Fi (registered trademark).
  • The I/O interface 31 may also include an external communication (outside communication) interface connecting to a wide area network (for example, an Internet communication network) by a wireless wide area network (WWAN), IEEE 802.16-2004 (WiMAX: Worldwide Interoperability for Microwave Access), IEEE 802.16e-based (Mobile WiMAX), or a cellular communication standard such as 4G, 4G-LTE, LTE Advanced, or 5G.
  • The processor 33 is operably connected to the I/O interface 31 and can thereby exchange information with the various other electronic devices and the like connected to the vehicle display system 10 (I/O interface 31).
  • To the I/O interface 31 are operably connected, for example, the vehicle ECU 401, the road information database 403, the own vehicle position detection unit 405, the vehicle exterior sensor 407, the operation detection unit 409, the eye position detection unit 411, the line-of-sight direction detection unit 413, the mobile information terminal 415, the external communication device 417, and the like.
  • the I / O interface 31 may include a function of processing (converting, calculating, analyzing) information received from another electronic device or the like connected to the vehicle display system 10.
  • the display 21 is operably connected to the processor 33 and the image processing circuit 35. Therefore, the image displayed by the image display unit 20 may be based on the image data received from the processor 33 and / or the image processing circuit 35.
  • the processor 33 and the image processing circuit 35 control the image displayed by the image display unit 20 based on the information acquired from the I / O interface 31.
  • The vehicle ECU 401 acquires, from sensors and switches provided on the vehicle 1, the state of the vehicle 1 (for example, mileage, vehicle speed, accelerator pedal opening, brake pedal opening, engine throttle opening, injector fuel injection amount, engine rotation speed, motor rotation speed, steering angle, shift position, drive mode, various warning states, attitude (including roll angle and/or pitching angle), and vehicle vibration (including the magnitude and/or frequency of the vibration)), and collects and manages (and may also control) the state of the vehicle 1. As a part of its functions, it can output numerical values of the state of the vehicle 1 (for example, the vehicle speed of the vehicle 1) to the processor 33 of the display control device 30.
  • The vehicle ECU 401 may simply transmit numerical values detected by the sensors and the like (for example, a pitching angle of 3 [degrees] in the forward-tilt direction) to the processor 33; instead of or in addition to this, it may transmit to the processor 33 determination results based on one or more states of the vehicle 1 derived from the detected numerical values (for example, that the vehicle 1 satisfies a predetermined condition for the forward-leaning state) and/or analysis results (for example, that, combined with the brake pedal opening information, braking has caused the vehicle to lean forward).
  • the vehicle ECU 401 may output a signal indicating a determination result indicating that the vehicle 1 satisfies a predetermined condition stored in advance in a memory (not shown) of the vehicle ECU 401 to the display control device 30.
  • The I/O interface 31 may acquire the above-mentioned information directly from the sensors and switches provided in the vehicle 1, without going through the vehicle ECU 401.
  • The vehicle ECU 401 may output to the display control device 30 an instruction signal indicating an image to be displayed by the vehicle display system 10; at this time, the coordinates, size, type, and display mode of the image, the degree of notification necessity of the image, and/or necessity-related information serving as a basis for determining the notification necessity may be added to the instruction signal and transmitted.
  • The road information database 403 is included in a navigation device (not shown) provided in the vehicle 1 or in an external server connected to the vehicle 1 via the external communication interface (I/O interface 31). Based on the position of the vehicle 1 acquired from the own vehicle position detection unit 405 described later, it may read out information around the vehicle 1 (information related to real objects around the vehicle 1), such as road information for the road on which the vehicle 1 travels (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, branch roads, traffic regulations, etc.) and feature information (buildings, bridges, rivers, etc.), together with their presence or absence, positions (including distance to the vehicle 1), directions, shapes, types, and detailed information, and transmit it to the processor 33. Further, the road information database 403 may calculate an appropriate route (navigation information) from the departure point to the destination and output a signal indicating the navigation information, or image data indicating the route, to the processor 33.
  • The own vehicle position detection unit 405 is a GNSS (Global Navigation Satellite System) receiver or the like provided in the vehicle 1; it detects the current position and orientation of the vehicle 1 and outputs a signal indicating the detection result to the road information database 403, the mobile information terminal 415 described later, and/or the external communication device 417, either via the processor 33 or directly.
  • The road information database 403, the mobile information terminal 415 described later, and/or the external communication device 417 may acquire the position information of the vehicle 1 from the own vehicle position detection unit 405 continuously, intermittently, or on a predetermined event, and may select and generate information around the vehicle 1 and output it to the processor 33.
  • the vehicle exterior sensor 407 detects the real object 300 existing around the vehicle 1 (front, side, and rear).
  • The real object 300 detected by the vehicle exterior sensor 407 may include, for example, obstacles (pedestrians, bicycles, motorcycles, other vehicles, etc.), the road surface 310 of the traveling lane described later, lane markings, roadside objects, and/or features (buildings, etc.).
  • The vehicle exterior sensor 407 is composed of one or a plurality of detection units, each consisting of a radar sensor such as a millimeter-wave radar, an ultrasonic radar, or a laser radar, a camera, or a combination thereof, and a processing device that processes (performs data fusion on) the detection data from the one or plurality of detection units.
  • The one or more vehicle exterior sensors 407 detect real objects in front of the vehicle 1 in each detection cycle of each sensor and can output real object information (an example of real object-related information), such as the presence or absence of a real object and, when a real object exists, the position, size, and/or type of each real object, to the processor 33.
  • Note that this real object information may be transmitted to the processor 33 via another device (for example, the vehicle ECU 401).
  • As the camera, an infrared camera or a near-infrared camera is desirable so that real objects can be detected even when the surroundings are dark, such as at night.
  • Further, as the camera, a stereo camera capable of acquiring distance and the like from parallax is desirable.
  • The operation detection unit 409 is, for example, a CID (Center Information Display) of the vehicle 1, a hardware switch provided on the instrument panel, or a software switch combining an image and a touch sensor, and outputs operation information based on operations by an occupant to the processor 33.
  • For example, the operation detection unit 409 outputs to the processor 33 display area setting information based on an operation of moving the display area 100, eye box setting information based on an operation of moving the eye box 200, and information based on an operation of setting the viewer's eye position, each performed by the user.
  • the eye position detection unit 411 may include a camera such as an infrared camera that detects the position of the eyes of a viewer sitting in the driver's seat of the vehicle 1, and may output the captured image to the processor 33.
  • the processor 33 acquires an image (an example of information capable of estimating the eye position) from the eye position detection unit 411, and can identify the eye position of the viewer by analyzing the captured image.
  • the eye position detection unit 411 may analyze the image captured by the camera and output a signal indicating the position of the eyes of the viewer, which is the analysis result, to the processor 33.
  • The method of acquiring the eye position of the viewer of the vehicle 1, or information from which the eye position can be estimated, is not limited to these; it may be acquired using a known eye position detection (estimation) technique.
  • The processor 33 may adjust at least the position of the image based on the position of the viewer's eyes, so that the viewer whose eye position has been detected visually recognizes the image superimposed on a desired position in the foreground (a position having a specific positional relationship with a real object).
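  • As a rough sketch of such eye-position-based adjustment (a pinhole-style construction under assumed dimensions, not the disclosed method), the vertical draw position can be solved so that the virtual image stays on the sight line from the eye to the real object:

```python
def image_height_on_display(eye_y_m, target_y_m, target_z_m, display_z_m):
    """Height at which to draw on the virtual image plane so the image lies
    on the sight line from the eye to the real target. Assumes a vertical
    virtual image plane display_z_m ahead of the eye and a target at
    (target_z_m, target_y_m) in the same section plane."""
    t = display_z_m / target_z_m  # Fraction of the way to the target.
    return eye_y_m + t * (target_y_m - eye_y_m)

# A road point 20 m ahead, virtual image plane 2.5 m ahead: the draw
# height must track the detected eye height.
print(image_height_on_display(1.2, 0.0, 20.0, 2.5))  # ≈ 1.05 m
print(image_height_on_display(1.5, 0.0, 20.0, 2.5))  # ≈ 1.31 m
```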
  • the line-of-sight direction detection unit 413 may include an infrared camera or a visible light camera that captures the face of a viewer sitting in the driver's seat of the vehicle 1, and may output the captured image to the processor 33.
  • The processor 33 acquires a captured image (an example of information from which the line-of-sight direction can be estimated) from the line-of-sight direction detection unit 413 and can identify the viewer's line-of-sight direction (and/or gaze position) by analyzing the captured image.
  • the line-of-sight direction detection unit 413 may analyze the captured image from the camera and output a signal indicating the line-of-sight direction (and / or the gaze position) of the viewer, which is the analysis result, to the processor 33.
  • The method of acquiring information from which the line-of-sight direction of the viewer of the vehicle 1 can be estimated is not limited to these; the line-of-sight direction may be obtained using other known line-of-sight detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflex method, the scleral reflex method, the Purkinje image detection method, the search coil method, or the infrared fundus camera method.
  • the mobile information terminal 415 is a smartphone, a laptop computer, a smart watch, or other information device that can be carried by a viewer (or another occupant of the vehicle 1).
  • The I/O interface 31 can communicate with the mobile information terminal 415 by pairing with it, and acquires data recorded in the mobile information terminal 415 (or in a server accessed through the mobile information terminal).
  • The mobile information terminal 415 may have, for example, the same functions as the road information database 403 and the own vehicle position detection unit 405 described above, acquiring the road information (an example of real-object-related information) and transmitting it to the processor 33.
  • the mobile information terminal 415 may acquire commercial information (an example of information related to a real object) related to a commercial facility in the vicinity of the vehicle 1 and transmit it to the processor 33.
  • The mobile information terminal 415 may transmit schedule information of the owner of the mobile information terminal 415 (for example, the viewer), incoming call information on the mobile information terminal 415, mail reception information, and the like to the processor 33, and the processor 33 and the image processing circuit 35 may generate and/or transmit image data relating to these.
  • The external communication device 417 is a communication device that exchanges information with the vehicle 1; it includes, for example, another vehicle connected to the vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle To Vehicle), a pedestrian (a mobile information terminal carried by a pedestrian) connected by vehicle-to-pedestrian communication (V2P: Vehicle To Pedestrian), and a network communication device connected by road-to-vehicle communication (V2I: Vehicle To roadside Infrastructure), and, broadly, everything connected to the vehicle 1 by V2X (Vehicle To Everything) communication.
  • the external communication device 417 acquires the positions of, for example, pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles, etc.), road surfaces, lane markings, roadside objects, and / or features (buildings, etc.) and sends them to the processor 33.
  • The external communication device 417 may have the same function as the own vehicle position detection unit 405 described above, acquiring the position information of the vehicle 1 and transmitting it to the processor 33; it may further have the function of the road information database 403 described above, acquiring the road information (an example of real-object-related information) and transmitting it to the processor 33.
  • the information acquired from the external communication device 417 is not limited to the above.
  • The software components stored in the memory 37 include a real object information detection module 502, a real object position identification module 504, a notification necessity determination module 506, an eye position detection module 508, a vehicle attitude detection module 510, a display area setting module 512, a real object position determination module 514, an actual scene area division module 516, an image type setting module 518, an image arrangement setting module 520, an image size setting module 522, a line-of-sight direction determination module 524, a graphic module 526, and a drive module 528.
  • The real object information detection module 502 acquires information including at least the position of a real object 300 existing in front of the vehicle 1 (also referred to as real object information). For example, the real object information detection module 502 may acquire, from the vehicle exterior sensor 407, information (an example of real object information) including the position of the real object 300 existing in the foreground of the vehicle 1 (the position in the height direction (vertical direction) and the left-right direction (horizontal direction) as seen when the viewer in the driver's seat of the vehicle 1 looks in the traveling direction (forward) of the vehicle 1, to which the position (distance) in the depth direction (forward direction) may be added), the size of the real object 300 (the size in the height direction and in the left-right direction), and the relative speed of the real object 300 with respect to the vehicle 1 (including the relative moving direction).
  • The real object information detection module 502 may also acquire, via the external communication device 417, the position, relative speed, and type of a real object (for example, another vehicle), the lighting state of the direction indicator of the other vehicle, the state of its steering angle operation, and/or information indicating its planned route and progress schedule from a driving support system (an example of real-object-related information).
  • The real object information detection module 502 may also acquire the positions of the left and right lane markings 311 and 312 (see FIG. 3) and recognize the region between them (the road surface 310 of the traveling lane).
  • The real object information detection module 502 may detect information about a real object existing in the foreground of the vehicle 1 (real-object-related information) that serves as a source for determining the content (hereinafter also appropriately referred to as the "image type") of the virtual image V described later. The real-object-related information includes, for example, type information indicating the type of the real object, such as a pedestrian, a building, or another vehicle; movement direction information indicating the moving direction of the real object; distance/time information indicating the distance to the real object or the arrival time; or individual detailed information of the real object, such as the fee of a parking lot (an example of a real object) (but is not limited to these).
  • The real object information detection module 502 may acquire the type information, the distance/time information, and/or the individual detailed information from the road information database 403 or the mobile information terminal 415, may acquire the type information, the movement direction information, and/or the distance/time information from the vehicle exterior sensor 407, and/or may detect the type information, the movement direction information, the distance/time information, and/or the individual detailed information from the external communication device 417.
  • The real object position identification module 504 acquires, via the I/O interface 31, an observation position indicating the current position of the real object 300 from the vehicle exterior sensor 407 or the external communication device 417, or acquires an observation position of the real object obtained by data fusion of two or more of these observation positions, and sets the position of the real object 300 (also referred to as the specific position) based on the acquired observation position.
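  • For illustration, where two or more observation positions are fused, one common approach is an inverse-variance weighted average of the sensor readings. The following is a minimal sketch under assumed per-sensor variances, not the fusion algorithm of this disclosure:

```python
# Minimal data-fusion sketch: combine observation positions of the
# same real object from two detection units by inverse-variance
# weighting. Variances are assumed per-sensor figures of merit.
def fuse(observations):
    """observations: list of (position_m, variance_m2) pairs."""
    weights = [1.0 / var for _, var in observations]
    total = sum(weights)
    return sum(w * pos for (pos, _), w in zip(observations, weights)) / total

# Example: radar (low variance) and camera (higher variance) readings
# of the same object's longitudinal distance.
fused = fuse([(30.2, 0.25), (29.1, 1.0)])   # closer to the radar value
```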
  • the image arrangement setting module 520 which will be described later, determines the position of the image based on the specific position of the real object 300 set by the real object position specifying module 504.
  • The real object position identification module 504 may specify the position of the real object 300 based on the observation position of the real object 300 acquired immediately before, but is not limited to this; it may also specify (estimate) the position of the real object 300 based on a predicted position of the real object at a predetermined time, predicted from one or more past observation positions of the real object 300 including at least the observation position acquired immediately before. That is, by executing the real object position identification module 504 and the image arrangement setting module 520 described later, the processor 33 can set the position of the virtual image V based on the observation position of the real object 300 acquired immediately before, or based on the predicted position of the real object 300 at the display update timing of the virtual image V, predicted from one or more past observation positions of the real object 300 including at least the observation position acquired immediately before.
  • The real object position identification module 504 may predict the next position from one or more past observation positions using, for example, the least squares method or a prediction algorithm such as a Kalman filter, an α-β filter, or a particle filter, as sketched below.
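  • As a concrete illustration of the prediction step above, the following minimal sketch implements one of the named algorithms, an α-β filter, which predicts the next position of a tracked real object from its past observation positions. The class name, gains, and cycle time are illustrative assumptions:

```python
# Minimal alpha-beta filter sketch for predicting the next observed
# position of a tracked real object along one axis.
# Names and gain values are illustrative assumptions.
class AlphaBetaFilter:
    def __init__(self, alpha=0.85, beta=0.005, dt=1 / 60):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = 0.0   # estimated position
        self.v = 0.0   # estimated velocity

    def update(self, measured_x: float) -> float:
        # Predict forward one detection cycle.
        predicted_x = self.x + self.v * self.dt
        residual = measured_x - predicted_x
        # Correct position and velocity with fixed gains.
        self.x = predicted_x + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x + self.v * self.dt   # predicted next position

# Usage: feed each observation position in turn; the return value is
# the position predicted for the next display update timing.
f = AlphaBetaFilter()
for obs in (10.0, 9.6, 9.1, 8.7):
    next_pos = f.update(obs)
```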
  • The vehicle display system 10 only needs to be able to acquire the observation position and/or the predicted position of the real object, and does not have to have the function of setting (calculating) the predicted position of the real object itself; a part or all of the function of setting (calculating) the predicted position may be provided separately from the display control device 30 of the vehicle display system 10 (for example, in the vehicle ECU 401).
  • the notification necessity determination module 506 determines whether the content of each virtual image V displayed by the vehicle display system 10 should be notified to the viewer.
  • The notification necessity determination module 506 may acquire information from the various other electronic devices connected to the I/O interface 31 and calculate the notification necessity. Alternatively, an electronic device connected to the I/O interface 31 in FIG. 11 may transmit information to the vehicle ECU 401, and the notification necessity determination module 506 may detect (acquire) the notification necessity determined by the vehicle ECU 401 based on the received information.
  • the "notification necessity" is, for example, the degree of danger derived from the degree of seriousness that can occur, the degree of urgency derived from the length of the reaction time required to take a reaction action, the vehicle 1 or the viewer (or the vehicle).
  • the notification necessity determination module 506 may detect the necessity-related information that is the source for estimating the notification necessity, and may estimate the notification necessity from this.
  • The necessity-related information that serves as the basis for estimating the notification necessity of an image may be estimated from, for example, the position and type of a real object or traffic regulations (an example of road information), or may be estimated based on, or in combination with, other information input from the various other electronic devices connected to the I/O interface 31. That is, the notification necessity determination module 506 may determine whether to notify the viewer, and may choose not to display the image described later.
  • The vehicle display system 10 only needs to be able to acquire the notification necessity and does not have to have a function of estimating (calculating) it; some or all of the functions for estimating the notification necessity may be provided separately from the display control device 30 of the vehicle display system 10 (for example, in the vehicle ECU 401).
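  • As one hypothetical way the degrees named above could be combined into a single notification necessity, the sketch below weights danger and urgency and compares the result with a display threshold; the weights and threshold are assumptions for illustration only:

```python
# Hypothetical sketch: combine danger and urgency into a single
# notification-necessity score and decide whether to display.
# Weights and threshold are illustrative assumptions.
def notification_necessity(danger: float, urgency: float,
                           w_danger: float = 0.6,
                           w_urgency: float = 0.4) -> float:
    """danger and urgency are normalized to [0, 1]."""
    return w_danger * danger + w_urgency * urgency

def should_display(danger: float, urgency: float,
                   threshold: float = 0.3) -> bool:
    # The module may choose not to display an image at all when
    # the estimated necessity is low.
    return notification_necessity(danger, urgency) >= threshold
```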
  • the eye position detection module 508 detects the position of the eyes of the viewer of the vehicle 1.
  • The eye position detection module 508 includes various software components for performing operations such as determining in which of a plurality of height regions the height of the viewer's eyes falls, detecting the height of the viewer's eyes (position in the Y-axis direction), detecting the height and depth of the viewer's eyes (positions in the Y- and Z-axis directions), and/or detecting the viewer's eye position (position in the X-, Y-, and Z-axis directions).
  • The eye position detection module 508 can, for example, acquire the eye position of the viewer from the eye position detection unit 411, or receive from the eye position detection unit 411 information from which the eye position, including the eye height of the viewer, can be estimated, and estimate the eye position including the eye height of the viewer from that information.
  • The information from which the eye position can be estimated may be, for example, the position of the driver's seat of the vehicle 1, the position of the viewer's face, the sitting height of the viewer, an input value entered by the viewer at an operation unit (not shown), or the like.
  • the vehicle posture detection module 510 is mounted on the vehicle 1 and detects the posture of the vehicle 1.
  • The vehicle attitude detection module 510 includes various software components for performing operations such as determining in which of a plurality of attitude regions the attitude of the vehicle 1 falls, detecting the angles (pitching angle, rolling angle) of the vehicle 1 in the Earth coordinate system, detecting the angles (pitching angle, rolling angle) of the vehicle 1 with respect to the road surface, and/or detecting the height (position in the Y-axis direction) of the vehicle 1 with respect to the road surface.
  • The vehicle attitude detection module 510, for example, estimates the pitching angle (vehicle attitude) of the vehicle 1 with respect to a horizontal plane by analyzing the triaxial acceleration detected by a triaxial acceleration sensor (not shown) provided in the vehicle 1, and outputs vehicle attitude information including information on the pitching angle of the vehicle 1 to the processor 33.
  • In some embodiments, the vehicle attitude detection module 510 may use, in addition to the above-mentioned triaxial acceleration sensor, a height sensor (not shown) arranged in the vicinity of the suspension of the vehicle 1. In this case, the vehicle attitude detection module 510 estimates the pitching angle of the vehicle 1 as described above by analyzing the height of the vehicle 1 from the ground detected by the height sensor, and outputs vehicle attitude information including information on the pitching angle of the vehicle 1 to the processor 33.
  • the method by which the vehicle posture detection module 510 obtains the pitching angle of the vehicle 1 is not limited to the above-mentioned method, and the pitching angle of the vehicle 1 may be obtained by using a known sensor or analysis method.
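  • For illustration, when the vehicle is stationary (or moving at constant speed) the triaxial acceleration sensor reads essentially only gravity, and the pitching angle with respect to the horizontal plane can then be recovered as follows; the axis convention matches this document (X lateral, Y vertical, Z longitudinal), and the sign convention is an assumption:

```python
import math

# Minimal sketch: estimate the vehicle pitching angle from a triaxial
# acceleration sample, assuming the vehicle is stationary (or at
# constant speed) so the sensor reads only gravity.
def pitch_from_accel(ax: float, ay: float, az: float) -> float:
    """Return the pitching angle in degrees (positive = nose down,
    a sign convention assumed here for illustration)."""
    return math.degrees(math.atan2(az, math.hypot(ax, ay)))

# Example: gravity partly projected onto the longitudinal (Z) axis.
theta = pitch_from_accel(0.0, 9.7, 0.9)   # ~5.3 degrees
```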
  • The display area setting module 512 sets the rotation amount (angle) of the first actuator 28 and the rotation amount (angle) of the second actuator 29 based on input information such as the eye position 4 of the viewer and setting information. The position of the display area 100 is determined by the rotation amounts (angles) of the actuators; therefore, the rotation amount (angle) of an actuator is an example of information from which the position of the display area 100 can be estimated.
  • The display area setting module 512 includes various software components for performing operations related to setting the rotation amount (angle) of the first actuator 28 and the rotation amount (angle) of the second actuator 29 based on the eye position information detected by, or the eye position estimation information estimated by, the eye position detection module 508.
  • The display area setting module 512 may include table data, arithmetic expressions, and the like for setting, from the eye position or from information from which the eye position can be estimated, the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 (a sketch of such table data follows the items below).
  • The display area setting module 512 may also change the area used on the display surface 21a of the display 21 based on the input information of the eye position 4 of the viewer and the setting information. That is, the display area setting module 512 can change the position of the display area 100 used for displaying the virtual image V by changing the area used for displaying the image on the display surface 21a of the display 21; therefore, the information indicating the area used for displaying the image on the display surface 21a of the display 21 is also an example of information from which the position of the display area 100 can be estimated.
  • The display area setting module 512 may also set the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 based on an operation detected by the operation detection unit 409 or an instruction from the vehicle ECU 401.
  • The display area setting module 512 includes various software components for performing operations related to setting the rotation amount (angle) of the first actuator 28 about the first rotation axis AX1 and the rotation amount (angle) of the second actuator 29 about the second rotation axis AX2 based on (1) position information of the viewer's preferred eye box (an example of eye box setting information) and position information of the preferred display area (an example of display area setting information) acquired from a viewer identification unit (not shown), (2) display area setting information based on an operation of moving the display area 100 and eye box setting information based on an operation of moving the eye box 200, acquired from the operation detection unit 409 provided in the vehicle 1 and based on the user's operation, and/or (3) display area setting information indicating the position of the display area 100 and eye box setting information indicating the position of the eye box 200 determined by the vehicle ECU 401, acquired from the vehicle ECU 401.
  • When the display area setting module 512 acquires display area setting information for moving the display area 100 to a predetermined position, it can set (correct), in addition to the driving amount for moving the display area 100 to the predetermined position, the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 so as to maintain the position of the eye box 200 or keep the movement amount of the eye box 200 small. Conversely, when the display area setting module 512 acquires only eye box setting information for moving the eye box 200 to a predetermined position, it can set (correct) the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 so as to maintain the position of the display area 100 or keep the movement amount of the display area 100 small.
  • the display area setting module 512 may set the amount of movement of the relay optical system 25 by one or a plurality of actuators.
  • The display area setting module 512 may estimate the current position of the display area 100 by correcting the position of the display area 100 (and/or the first display area 150 described later) that is set according to the type of the vehicle 1 on which the vehicle display system is mounted and stored in advance in the memory 37, based on information from which the position of the display area 100 can be estimated, and may store the estimated position in the memory 37.
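  • As an illustration of the table data mentioned above, the mapping from eye height to actuator rotation amounts could be stored as a small lookup table and linearly interpolated between entries. A minimal sketch; the table values and function names are assumptions, not calibration data from this disclosure:

```python
import bisect

# Hypothetical calibration table: viewer eye height (mm, in the eye
# box) -> rotation angles (deg) of the first and second actuators.
EYE_HEIGHT_TO_ANGLES = [
    # (eye_height_mm, first_actuator_deg, second_actuator_deg)
    (1150.0, -2.0, 0.5),
    (1250.0,  0.0, 0.0),
    (1350.0,  2.1, -0.4),
]

def actuator_angles(eye_height_mm: float):
    """Linearly interpolate actuator angles for a detected eye height."""
    heights = [row[0] for row in EYE_HEIGHT_TO_ANGLES]
    i = bisect.bisect_left(heights, eye_height_mm)
    if i == 0:
        return EYE_HEIGHT_TO_ANGLES[0][1:]     # clamp below the table
    if i == len(heights):
        return EYE_HEIGHT_TO_ANGLES[-1][1:]    # clamp above the table
    (h0, a0, b0) = EYE_HEIGHT_TO_ANGLES[i - 1]
    (h1, a1, b1) = EYE_HEIGHT_TO_ANGLES[i]
    t = (eye_height_mm - h0) / (h1 - h0)
    return (a0 + t * (a1 - a0), b0 + t * (b1 - b0))

print(actuator_angles(1300.0))   # interpolated between the last rows
```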
  • The real object position determination module 514 determines whether or not the position of the real object 300 is within the first determination actual scene area R10 and whether or not it is within the second determination actual scene area R20. That is, the real object position determination module 514 may include determination values, table data, arithmetic expressions, and the like for determining, from the observation position and/or the predicted position of the real object, whether or not the real object enters the first determination actual scene area R10 and whether or not it enters the second determination actual scene area R20. For example, the real object position determination module 514 can include, for comparison with the observation position and/or the predicted position of the real object, determination values defining whether a position falls within the first determination actual scene area R10 (position in the left-right direction (X-axis direction), position in the up-down direction (Y-axis direction)) and determination values defining whether a position falls within the second determination actual scene area R20 (position in the left-right direction (X-axis direction), position in the up-down direction (Y-axis direction)).
  • The determination values of whether or not a position falls within the first determination actual scene area R10 and the determination values of whether or not a position falls within the second determination actual scene area R20 may be set (changed) by the actual scene area division module 516 described later.
  • The actual scene area division module 516 sets the range of determination values of whether or not the real object is within the first determination actual scene area R10 and the range of determination values of whether or not the real object is within the second determination actual scene area R20.
  • The first to fifth setting methods used by the actual scene area division module 516 and the real object position determination module 514 are described below; however, the present invention is not limited to these, as long as the range determined to be within the second determination actual scene area R20 is changed according to the eye position 4 of the viewer, the position of the display area 100 (first display area 150), the posture of the vehicle 1, or the like.
  • In the first setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the determination values stored in advance in the memory 37, whether or not the position of the real object 300 is within the first determination actual scene area R10 and whether or not it is within the second determination actual scene area R20.
  • FIGS. 12A and 12B are diagrams showing the positional relationship among the eye box 200, the first display area 150 displaying the virtual image V10 of the image of the first aspect, the first determination actual scene area R10, and the second determination actual scene area R20, as viewed from the left-right direction (X-axis direction) of the vehicle 1.
  • FIG. 12A shows a case where the real object 300 enters the first determination actual scene area R10
  • FIG. 12B shows a case where the real object 300 enters the second determination actual scene area R20.
  • The first determination actual scene area R10 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the center 205 of the eye box 200 (an example of a predetermined position inside the eye box 200; the area is not limited to this), and the line connecting the lower end 150b of the first display area 150 with the center 205 of the eye box 200. Further, the second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
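  • To make this geometry concrete, the sketch below computes the vertical angular band subtended by the first display area as seen from the eye box center and classifies a real object by the elevation angle of the line from the eye box center to the object. The coordinates and the fixed angular height assumed for the second determination area are illustrative:

```python
import math

def elevation_deg(from_y, from_z, to_y, to_z):
    """Elevation angle (deg) of the line from one point to another,
    in the vehicle's Y (up) / Z (forward) plane."""
    return math.degrees(math.atan2(to_y - from_y, to_z - from_z))

def classify(eye, disp_top, disp_bottom, obj, r20_band_deg=3.0):
    """Return 'R10', 'R20', or None for a real object position.
    eye, disp_top, disp_bottom, obj are (y, z) points; r20_band_deg is
    an assumed angular height of the second determination area."""
    lo = elevation_deg(*eye, *disp_bottom)   # line to lower end 150b
    hi = elevation_deg(*eye, *disp_top)      # line to upper end 150a
    a = elevation_deg(*eye, *obj)
    if lo <= a <= hi:
        return "R10"                         # first determination area
    if hi < a <= hi + r20_band_deg:
        return "R20"                         # band adjacent above R10
    return None

# Example: eye box center at 1.2 m height, display edges projected
# 10 m ahead, a real object 30 m ahead at road level.
print(classify((1.2, 0.0), (0.9, 10.0), (0.0, 10.0), (0.0, 30.0)))
```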
  • The first display area 150 for displaying the virtual image V10 of the image of the first aspect referred to here may be a predetermined area within the display area 100 that is smaller than the display area 100, or may coincide with the display area 100 (in the examples of FIGS. 3 to 10, the first display area 150 and the display area 100 coincide with each other).
  • In the first setting method, the eye box 200 and the first display area 150 are set according to the type of the vehicle 1 on which the vehicle display system 10 is mounted, so the first determination actual scene area R10 and the second determination actual scene area R20 are set in advance to constant values for each type of vehicle 1 and stored in the memory 37.
  • The first determination actual scene area R10 and the second determination actual scene area R20 may also be preset and stored in the memory 37 for each individual vehicle display system 10 by calibration that takes into account the individual difference of the vehicle 1, the individual difference of the HUD device 20 (including assembly error with respect to the vehicle 1), and the individual difference of the vehicle exterior sensor 407 provided in the vehicle 1 (including assembly error with respect to the vehicle 1).
  • As shown in FIG. 12A, when the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the first determination actual scene area R10, the real object position determination module 514 determines that the real object 300 is in the first determination actual scene area R10. As shown in FIG. 12B, when that straight line passes within the range of the second determination actual scene area R20, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R20.
  • the real object position determination module 514 may execute the following second setting method in addition to or in place of the first setting method described above.
  • In the second setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the position of the display area 100 (or information from which the position of the display area 100 can be estimated) acquired from the display area setting module 512, whether or not the real object 300 enters the first determination actual scene area R10 and whether or not it enters the second determination actual scene area R20 in which the virtual image V of the image of the second aspect is displayed.
  • In the second setting method, the actual scene area division module 516 changes the range of the first determination actual scene area R10 and the range of the second determination actual scene area R20 according to the position of the display area 100. That is, the actual scene area division module 516 may include table data, an arithmetic program, and the like for setting the first determination actual scene area R10 and the second determination actual scene area R20 from the position of the display area 100 (or information from which the position of the display area 100 can be estimated) acquired from the display area setting module 512. The table data is, for example, data associating the position of the display area 100 with the determination values of whether or not a position falls within the first determination actual scene area R10 (position in the left-right direction (X-axis direction), position in the up-down direction (Y-axis direction)).
  • FIGS. 13A, 13B, and 13C are diagrams showing the change in the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to a change in the position of the display area 100, as viewed from the left-right direction (X-axis direction) of the vehicle 1.
  • the display area 100 is gradually moved downward (Y-axis negative direction) in the order of FIGS. 13A, 13B, and 13C by rotating the first mirror 26 of the HUD device 20.
  • The actual scene area division module 516 changes the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to the position of the display area 100, and the real object position determination module 514 determines whether or not the real object 300 enters the first determination actual scene area R10 as changed by the actual scene area division module 516, and whether or not it enters the appropriately changed second determination actual scene area R20.
  • Here too, the first determination actual scene area R10 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the center 205 of the eye box 200, and the line connecting the lower end 150b of the first display area 150 with the center 205 of the eye box 200; the second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
  • As shown in FIG. 13B, when the first display area 151 is arranged below the first display area 150 shown in FIG. 13A, the actual scene area division module 516 also arranges the first determination actual scene area R12 below the first determination actual scene area R11. At this time, the actual scene area division module 516 sets the range of the second determination actual scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R12 by expanding it (R22 > R21). In other words, when the position of the display area 100 (first display area 150) deviates from the reference position, the actual scene area division module 516 expands the second determination actual scene area R20. As shown in FIG. 13B, when the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R22, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R22.
  • As shown in FIG. 13C, when the display area 100 is moved further downward, the actual scene area division module 516 further expands the range of the second determination actual scene area R23 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R13 (R23 > R22). When the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R23, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R23.
  • In the second setting method, when the first determination actual scene area R11 on which the first display area 150 (display area 100) overlaps in FIG. 13A is used as the reference, the second determination actual scene area R20 is expanded as the position of the display area 100 changes and the first determination actual scene area R10 on which the first display area 150 (display area 100) overlaps moves away from the first standard actual scene area R10s, as sketched below. According to this, since the area in which an image of the second aspect is displayed for the real object 300 is expanded, a real object 300 that has left the area in which images of the first aspect are displayed can easily be recognized by the viewer through an image of the second aspect. Further, even if the position of the display area 100 differs, the virtual images V20 and V30 of images of the second aspect can make it easier for the viewer to recognize a real object 300 existing in or near the specific first standard actual scene area R10s.
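  • One way to realize this relationship is to grow the height of the second determination area monotonically with the deviation of the overlapped actual scene area from the standard area R10s. A minimal sketch; the base height, gain, and upper limit are illustrative assumptions:

```python
# Hypothetical sketch of the expansion rule in the second setting
# method: the further the area overlapped by the display area moves
# away from the standard area R10s, the taller the second
# determination area R20 becomes. Gain/limit values are assumptions.
def r20_height_deg(r10_offset_deg: float,
                   base_height_deg: float = 1.5,
                   gain: float = 0.8,
                   max_height_deg: float = 6.0) -> float:
    """r10_offset_deg: |deviation| of the overlapped area R10 from
    the standard area R10s, as a vertical angle seen from the eye
    box."""
    expanded = base_height_deg + gain * abs(r10_offset_deg)
    return min(expanded, max_height_deg)

# No deviation (FIG. 13A) -> base height; larger deviations
# (FIGS. 13B, 13C) -> taller areas, matching R23 > R22 > R21.
for offset in (0.0, 2.0, 4.0):
    print(offset, r20_height_deg(offset))
```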
  • the real object position determination module 514 may execute the following third setting method in addition to or in place of the first setting method and / or the second setting method described above.
  • In the third setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the eye position 4 of the viewer (or information from which the eye position can be estimated) acquired from the eye position detection module 508, whether or not the real object 300 is within the first determination actual scene area R10 and whether or not it is within the second determination actual scene area R20 in which the virtual image V of the image of the second aspect is displayed.
  • In the third setting method, the range of the first determination actual scene area R10 and the range of the second determination actual scene area R20 change according to the eye position 4 of the viewer, and it is determined whether or not the real object 300 is within the appropriately changed first determination actual scene area R10 and whether or not it is within the appropriately changed second determination actual scene area R20. That is, the real object position determination module 514 may include table data, arithmetic programs, and the like for setting the first determination actual scene area R10 and the second determination actual scene area R20 from the eye position 4 of the viewer (or information from which the eye position can be estimated) acquired from the eye position detection module 508. The table data is, for example, data associating the eye position 4 of the viewer with the determination values of whether or not a position falls within the first determination actual scene area R10 (position in the left-right direction (X-axis direction), position in the up-down direction (Y-axis direction)).
  • FIGS. 14A, 14B, and 14C are diagrams showing the change in the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to a change in the eye position (eye height) 4 of the viewer, as viewed from the left-right direction (X-axis direction) of the vehicle 1. The eye position 4 of the viewer becomes gradually higher in the order of reference numeral 4a shown in FIG. 14A, reference numeral 4b shown in FIG. 14B, and reference numeral 4c shown in FIG. 14C.
  • The real object position determination module 514 changes the first determination actual scene area R10 and the second determination actual scene area R20 according to the eye position 4 of the viewer, and determines whether or not the real object 300 enters the appropriately changed first determination actual scene area R10 and whether or not it enters the appropriately changed second determination actual scene area R20.
  • In the third setting method, the first determination actual scene area R10 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the observed eye position 4a (an example of a predetermined position inside the eye box 200; the area is not limited to this), and the line connecting the lower end 150b of the first display area 150 with the observed eye position 4a (an example of a predetermined position inside the eye box 200; the area is not limited to this).
  • the second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
  • In FIG. 14B, the first determination actual scene area R12 is arranged below the first determination actual scene area R11 shown in FIG. 14A.
  • At this time, the real object position determination module 514 expands the range of the second determination actual scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R12 (R22 > R21). In other words, when the eye position 4 moves, the real object position determination module 514 expands the second determination actual scene area R20. As shown in FIG. 14B, when the straight line connecting the eye position 4b and the real object 300 passes within the range of the second determination actual scene area R22, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R22.
  • As shown in FIG. 14C, the real object position determination module 514 further expands the range of the second determination actual scene area R23 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R13 (R23 > R22). When the straight line connecting the eye position 4c and the real object 300 passes within the range of the second determination actual scene area R23, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R23.
  • That is, in the third setting method, as the eye position 4 changes, the second determination actual scene area R20 is expanded as the first determination actual scene area R10 on which the first display area 150 (display area 100) overlaps moves away from the first standard actual scene area R10s. According to this, a real object 300 that has left the area in which images of the first aspect are displayed can easily be recognized by the viewer through an image of the second aspect. Further, even if the position of the display area 100 differs, an image of the second aspect can make it easier for the viewer to recognize a real object 300 existing in or near the specific first standard actual scene area R10s.
  • the real object position determination module 514 may execute the following fourth setting method in addition to or in place of the first to third setting methods described above.
  • In the fourth setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the posture (for example, tilt angle) of the vehicle 1 acquired from the vehicle ECU 401, whether or not the real object 300 enters the first determination actual scene area R10 and whether or not it enters the second determination actual scene area R20 in which the virtual image V of the image of the second aspect is displayed.
  • In the fourth setting method, the range of the first determination actual scene area R10 and the range of the second determination actual scene area R20 change according to the posture of the vehicle 1, and it is determined whether or not the real object 300 is within the appropriately changed first determination actual scene area R10 and whether or not it is within the appropriately changed second determination actual scene area R20. That is, the real object position determination module 514 may include table data, arithmetic programs, and the like for setting the first determination actual scene area R10 and the second determination actual scene area R20 from the posture of the vehicle 1 (or information from which the posture of the vehicle 1 can be estimated) acquired from the vehicle ECU 401. The table data is, for example, data associating the posture of the vehicle 1 with the determination values of whether or not a position falls within the first determination actual scene area R10 (position in the left-right direction (X-axis direction), position in the up-down direction (Y-axis direction)).
  • FIGS. 15A and 15B are diagrams showing the change in the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to a change in the tilt angle θt of the vehicle 1, as viewed from the left-right direction (X-axis direction) of the vehicle 1. The tilt angle θt2 shown in FIG. 15B is tilted further forward than the tilt angle θt1 shown in FIG. 15A.
  • The real object position determination module 514 changes the first determination actual scene area R10 and the second determination actual scene area R20 according to the posture of the vehicle 1, and determines whether or not the real object 300 enters the appropriately changed first determination actual scene area R10 and whether or not it enters the appropriately changed second determination actual scene area R20.
  • Here too, the first determination actual scene area R10 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the center 205 of the eye box 200, and the line connecting the lower end 150b of the first display area 150 with the center 205 of the eye box 200; the second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
  • In FIG. 15B, the first determination actual scene area R12 is likewise arranged below the first determination actual scene area R11.
  • At this time, the real object position determination module 514 expands the range of the second determination actual scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R12 (R22 > R21). In other words, when the position of the display area 100 deviates from a predetermined position, the real object position determination module 514 expands the second determination actual scene area R20. As shown in FIG. 15B, when the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R22, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R22.
  • In the fourth setting method, when the first determination actual scene area R11 on which the first display area 150 (display area 100) overlaps in FIG. 15A is used as the reference, the second determination actual scene area R20 is expanded as the position of the display area 100 shifts, as shown in FIG. 15B, and the first determination actual scene area R10 on which the first display area 150 (display area 100) overlaps moves away from the first standard actual scene area R10s. According to this, a real object 300 that has left the area in which images of the first aspect are displayed can easily be recognized by the viewer through an image of the second aspect. Further, even if the position of the display area 100 differs, an image of the second aspect can make it easier for the viewer to recognize a real object 300 existing in or near the specific first standard actual scene area R10s.
  • FIGS. 16A, 16B, 16C, and 16D will be used to describe examples of the expansion setting of the second determination actual scene area R20 performed by the actual scene area division module 516.
  • FIG. 16A is the same as FIG. 13B and shows the second determination actual scene area R22 expanded when the first display area 151 is arranged below the reference display area, with the position of the first display area 150 in FIG. 13A taken as the reference display area.
  • FIG. 16B shows a mode in which the second determination actual scene area R20 is expanded further than in FIG. 16A in the same situation. Specifically, a part of the enlarged second determination actual scene area R22 overlaps a part of the second standard actual scene area R20s. That is, in one embodiment, the actual scene area division module 516 enlarges the second determination actual scene area R20 so as to overlap a part of the reference second determination actual scene area R21.
  • FIG. 16C shows a mode in which the second determination actual scene area R20 is expanded further than in FIG. 16B in the same situation. Specifically, the enlarged second determination actual scene area R23 includes the entire second standard actual scene area R20s. That is, in one embodiment, the actual scene area division module 516 enlarges the second determination actual scene area R20 so as to include the entire reference second determination actual scene area R21.
  • FIG. 16D shows a mode in which the second determination actual scene area R20 is expanded even further than in FIG. 16C. Specifically, the enlarged second determination actual scene area includes the entire second standard actual scene area R20s and a wider range. That is, in one embodiment, the actual scene area division module 516 enlarges the second determination actual scene area R20 so as to include the entire reference second determination actual scene area R21 and a wider range (the three modes are sketched below).
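  • The three enlargement modes of FIGS. 16B to 16D can be pictured as operations on the vertical extent of the area as seen from the eye box. A minimal sketch, with the areas represented as (bottom, top) angle intervals and all values assumed:

```python
# Hypothetical sketch: the second determination area as a vertical
# interval (bottom, top) in degrees, enlarged in the three modes of
# FIGS. 16B-16D relative to the standard area R20s.
def expand_r20(r20, r20s, mode):
    """r20, r20s: (bottom, top) intervals; returns the enlarged r20."""
    b, t = r20
    sb, st = r20s
    if mode == "overlap_part":      # FIG. 16B: overlap part of R20s
        return (b, max(t, sb + 0.5 * (st - sb)))
    if mode == "include_all":       # FIG. 16C: contain all of R20s
        return (min(b, sb), max(t, st))
    if mode == "include_all_plus":  # FIG. 16D: contain R20s and more
        margin = 1.0                # assumed extra range (deg)
        return (min(b, sb) - margin, max(t, st) + margin)
    raise ValueError(mode)

# Example: a lowered display area produced r20 below the standard r20s.
print(expand_r20((-3.0, -1.0), (0.0, 2.0), "include_all"))
```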
  • FIGS. 17A to 17F are diagrams schematically showing the positional relationship between the first determination actual scene area R10 and the second determination actual scene area R20 when facing forward from the eye box 200.
  • In FIG. 17A, the first display area 150 has the illustrated shape, but is not limited thereto.
  • In FIG. 17A, the second determination actual scene area R20 consists of an area adjacent to the left of the left end of the first determination actual scene area R10, an area adjacent to the right of the right end of the first determination actual scene area R10, and an area adjacent to the upper side of the upper end of the first determination actual scene area R10, forming a region recessed on its lower side. Although the second determination actual scene area R20 is shown narrow in the figure, it is preferably a wider range (the same applies to the other examples of FIGS. 17A to 17F).
  • The second determination actual scene area R20 may further include, in addition to the areas in FIG. 17A, an area adjacent to the lower end of the first determination actual scene area R10; that is, it may be a hollow region surrounding the first determination actual scene area R10.
  • The second determination actual scene area R20 also does not have to include the area adjacent to the left of the left end of the first determination actual scene area R10 or the area adjacent to the right of the right end of the first determination actual scene area R10.
  • the second determination actual scene area R20 may be composed of a plurality of separated areas.
  • In the examples described so far, the display area 100 and the first display area 150 for displaying the virtual image V10 of the image of the first aspect are shown coinciding with each other, but the present invention is not limited thereto. The first display area 150 can be smaller than the display area 100. In this case, the second determination actual scene area R20 can be set as a region recessed on its lower side, consisting of an area adjacent to the left of the left end of the first determination actual scene area R10, an area adjacent to the right of the right end of the first determination actual scene area R10, and an area adjacent to the upper side of the upper end of the first determination actual scene area R10; a part of the second determination actual scene area R20 adjacent to the first determination actual scene area R10 may then be arranged within the display area 100.
  • In the examples described so far, the first determination actual scene area R10 and the second determination actual scene area R20 are adjacent to each other, but the present invention is not limited to this. The first display area 150 can be smaller than the display area 100, and the second determination actual scene area R20 can be set as a region recessed on its lower side, consisting of an area not adjacent to the left of the left end of the first determination actual scene area R10, an area not adjacent to the right of the right end of the first determination actual scene area R10, and an area not adjacent to the upper side of the upper end of the first determination actual scene area R10.
  • Alternatively, the second determination actual scene area R20 may be set as a region recessed on its lower side, consisting of an area adjacent to the left of the left end of the first determination actual scene area R10, an area adjacent to the right of the right end of the first determination actual scene area R10, and an area not adjacent to the upper end of the first determination actual scene area R10. That is, the first determination actual scene area R10 and the second determination actual scene area R20 may be adjacent to each other only in part and not adjacent in other parts (an area that is neither the first determination actual scene area R10 nor the second determination actual scene area R20 may be included between them).
  • FIGS. 17F and 17G may be modified so that the display area 100 and the first display area 150 for displaying the virtual image V10 of the image of the first aspect are matched.
  • The image type setting module 518 sets an image of the first aspect for a real object determined by the real object position determination module 514 to enter the first determination actual scene area R10, and sets an image of the second aspect for a real object determined to enter the second determination actual scene area R20.
  • The image type setting module 518 may also determine (change) the type of image to be displayed for the real object based on, for example, the type and position of the real object detected by the real object information detection module 502, the type and number of pieces of real-object-related information detected by the real object information detection module 502, and/or the magnitude of the notification necessity detected (estimated) by the notification necessity determination module 506.
  • The image type setting module 518 may increase or decrease the types of images to be displayed depending on the determination result of the line-of-sight direction determination module 524 described later. Specifically, when the real object 300 is in a state where it is difficult for the viewer to visually recognize it, the number of image types displayed in the vicinity of the real object may be increased.
  • The image arrangement setting module 520 determines the coordinates of the virtual image V (including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) as seen when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1) based on the position (observation position or predicted position) of the real object 300 specified by the real object position identification module 504, so that the virtual image V is visually recognized in a specific positional relationship with the real object 300. In addition, the image arrangement setting module 520 may determine the display distance in the front-rear direction (Z-axis direction), as seen when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1, based on the determined position of the real object 300 set by the real object position identification module 504.
  • the image arrangement setting module 520 adjusts the position of the virtual image V based on the position of the eyes of the viewer detected by the eye position detection unit 411. For example, the image arrangement setting module 520 determines the positions of the virtual image V in the horizontal direction and the vertical direction so that the contents of the virtual image V can be visually recognized in the region (road surface 310) between the division lines 311, 312.
  • The image arrangement setting module 520 can also set the angle of the virtual image V (the pitching angle about the X direction, the yaw angle about the Y direction, and the rolling angle about the Z direction). In some embodiments, the angle of the virtual image V is a preset angle and can be set so that the virtual image V is parallel to the front-rear and left-right directions (XZ plane) of the vehicle 1. A sketch of this placement step follows.
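  • As a concrete picture of the placement step, the sketch below projects the specified position of a real object onto a virtual display plane seen from the detected eye position, using a simple pinhole model, and anchors the image slightly above the object. The plane distance and offset are illustrative assumptions, not the disclosed optics:

```python
# Hypothetical sketch: place the virtual image over a real object by
# projecting the object's specified 3D position (vehicle frame:
# X right, Y up, Z forward) onto a display plane as seen from the
# detected eye position. Plane distance and offset are assumptions.
def place_image(eye, obj, plane_z=2.5, offset_y=0.2):
    """eye, obj: (x, y, z) in meters. Returns (u, v) on the virtual
    display plane located plane_z ahead of the eye; offset_y lifts
    the image slightly above the object for visibility."""
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = plane_z / (oz - ez)            # ray parameter to the plane
    u = ex + t * (ox - ex)             # horizontal position on plane
    v = ey + t * ((oy + offset_y) - ey)
    return u, v

# Example: eye at 1.2 m height, object 30 m ahead on the road surface.
print(place_image((0.0, 1.2, 0.0), (1.5, 0.0, 30.0)))
```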
  • the image size setting module 522 may change the size of the virtual image V according to the position, shape, and / or size of the corresponding real object 300. For example, the image size setting module 522 can reduce the size of the virtual image V if the position of the corresponding real object 300 is far away. Further, the image size setting module 522 can increase the size of the virtual image V if the size of the corresponding real object 300 is large.
  • the image size setting module 522 can determine the size of the virtual image V based on the magnitude of the (estimated) notification necessity detected by the notification necessity determination module 506.
  • The image size setting module 522 may have a function of predictively calculating the display size of the content of the virtual image V to be displayed in the current display update cycle, based on the sizes of the real object 300 observed a predetermined number of times in the past.
  • For example, the image size setting module 522 may track the pixels of the real object 300 between two past images captured by a camera (an example of the vehicle exterior sensor 407), for example using the Lucas-Kanade method (sketched below), thereby predict the size of the real object 300 in the current display update cycle, and determine the size of the virtual image V according to the predicted size of the real object 300. Alternatively, the rate of change in the size of the real object 300 may be obtained based on the change in the size of the real object 300 between the two past captured images, and the size of the virtual image V may be determined according to that rate of change. The method of estimating the size change of the real object 300 from the viewpoint changing in time series is not limited to the above; a known method may be used, for example an optical flow estimation algorithm such as the Horn-Schunck method, the Buxton-Buxton method, or the Black-Jepson method.
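  • For illustration, sparse Lucas-Kanade tracking between two past frames is available in OpenCV; the sketch below tracks corner points inside the object's previous bounding box and takes the change in their spread as the size-change factor. The bounding-box bookkeeping and thresholds are assumptions added for the example:

```python
import cv2
import numpy as np

# Sketch: predict the on-screen size change of a real object by
# tracking its pixels between two past frames with pyramidal
# Lucas-Kanade optical flow (cv2.calcOpticalFlowPyrLK).
def predict_scale(prev_gray, curr_gray, bbox):
    x, y, w, h = bbox                      # object box in prev frame
    roi = prev_gray[y:y + h, x:x + w]
    pts = cv2.goodFeaturesToTrack(roi, maxCorners=50,
                                  qualityLevel=0.01, minDistance=3)
    if pts is None or len(pts) < 2:
        return 1.0                         # not enough texture to track
    pts = pts.reshape(-1, 2) + (x, y)      # ROI coords -> frame coords
    p0 = pts.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                             p0, None)
    good0 = p0[status.flatten() == 1].reshape(-1, 2)
    good1 = p1[status.flatten() == 1].reshape(-1, 2)
    if len(good0) < 2:
        return 1.0
    # Ratio of point spreads approximates the object's size change;
    # extrapolating it one cycle predicts the current size.
    spread0 = good0.std(axis=0).mean()
    spread1 = good1.std(axis=0).mean()
    return spread1 / max(spread0, 1e-6)

# scale > 1 -> the object appears larger; the virtual image V can be
# enlarged by the same factor for the current display update cycle.
```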
  • The line-of-sight direction determination module 524 determines whether the viewer of the vehicle 1 is looking at the virtual image V and/or at the real object with which the virtual image V is associated.
  • The line-of-sight direction determination module 524 may also detect what the viewer is visually recognizing other than the content of the virtual image V. For example, the line-of-sight direction determination module 524 may compare the position of the real object 300 existing in the foreground of the vehicle 1 detected by the real object information detection module 502 with the line-of-sight direction of the viewer acquired from the line-of-sight direction detection unit 413, thereby identify the real object 300 being gazed at, and transmit information identifying the visually recognized real object 300 to the processor 33.
  • the graphic module 526 includes various known software components for performing image processing such as rendering to generate image data and driving the display 21.
  • The graphic module 526 may also include various known software components for changing the type, arrangement (position coordinates, angle), size, display distance (in the case of 3D), and visual effect (for example, luminance, transparency, saturation, contrast, or other visual characteristics) of the displayed image.
  • The graphic module 526 generates image data so that the viewer visually recognizes the image with the type set by the image type setting module 518, at the position coordinates set by the image arrangement setting module 520 (the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) as seen when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1), at the angle set by the image arrangement setting module 520 (the pitching angle about the X direction, the yaw angle about the Y direction, and the rolling angle about the Z direction), and in the size set by the image size setting module 522, and displays the image data on the image display unit 20.
  • The drive module 528 includes various known software components for driving the display 21, driving the light source unit 24, and driving the first actuator 28 and/or the second actuator 29. The drive module 528 drives the liquid crystal display panel 22, the light source unit 24, and the first actuator 28 and the second actuator 29 based on the drive data generated by the display area setting module 512 and the graphic module 526.
  • A flow chart shows method S100 for performing, according to some embodiments, an operation of displaying a virtual image of an image of the first aspect or the second aspect for a real object existing in the real scene outside the vehicle. Method S100 is executed by the image display unit 20 including a display and by the display control device 30 that controls the image display unit 20. Some operations in method S100 are optionally combined, some steps are optionally modified, and some operations are optionally omitted. As described below, method S100 provides a method of presenting an image (virtual image) that enhances the viewer's recognition of a real object.
  • In block S110, the display control device 30 sets the range of the first determination actual scene area R10. For example, the processor 33 of the display control device 30 executes the actual scene area division module 516 and sets the first determination actual scene area R10 by reading out the area stored in advance in the memory 37 (S111). Further, in some embodiments, the processor 33 executes the display area setting module 512 and sets the range of the first determination actual scene area R10 from the state of the relay optical system (S113), the used area of the display (S115), the eye position of the viewer (S117), the posture of the vehicle 1 (S119), or a combination thereof.
In block S120, the display control device 30 detects that a predetermined condition for expanding the range of the second determination actual scene area R20 is satisfied. The predetermined condition is, for example, that the actual scene area overlapped by the display area 100 when viewed from a predetermined position of the eye box 200 (for example, the center 205) (or when viewed from the eye position 4 of the viewer) is estimated to deviate from the first standard actual scene area R10s. Specifically, the display control device 30 estimates this deviation from the state of the relay optical system (S122), the used area of the display (S124), the eye position of the viewer (S126), the posture of the vehicle 1 (S128), and the like.
In block S130, the display control device 30 expands the range of the second determination actual scene area R20 when the predetermined condition is satisfied in S120. Specifically, by the actual scene area division module 516, the processor 33 of the display control device 30 executes one of the following: expanding the range of the second determination actual scene area from the standard range (S132), expanding the second determination actual scene area R20 so as to overlap a part of the second standard actual scene area R20s (S134), expanding the second determination actual scene area R20 so as to include the entire second standard actual scene area R20s (S136), or expanding the second determination actual scene area R20 so as to include the entire second standard actual scene area R20s and a wider range (S138).
In some embodiments, the display control device 30 may vary the degree of expansion of the second determination actual scene area R20 for each type of real object 300 acquired by the real object information detection module 502. For example, in some embodiments, the display control device 30 may use different degrees of expansion for the traveling lane, an obstacle, and a feature (see the sketch below). In some embodiments, when the predetermined condition is not satisfied in S120, the display control device 30 sets the range of the second determination actual scene area R20 to the second standard actual scene area R20s stored in advance in the memory 37, with the first determination actual scene area R10 set in block S110 as a reference.
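Under the same illustrative (near, far) model, blocks S120 and S130 might be sketched as follows; the deviation flag and the per-type expansion margins (traveling lane, obstacle, feature) are invented example values, since the disclosure states only that the degree of expansion may differ per type.

```python
# Sketch of blocks S120/S130; the margins are illustrative values in meters.

EXPANSION_BY_TYPE = {"lane": 5.0, "obstacle": 15.0, "feature": 10.0}

def set_second_determination_area(r20_standard, deviation_detected, obj_type):
    near, far = r20_standard
    if not deviation_detected:            # condition of S120 not satisfied:
        return (near, far)                # keep the standard area R20s
    margin = EXPANSION_BY_TYPE.get(obj_type, 5.0)
    return (near - margin, far + margin)  # S132..S138: widen the range

print(set_second_determination_area((50.0, 100.0), True, "obstacle"))
# (35.0, 115.0): the obstacle margin widens the area on both sides
```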
In block S140, the display control device 30 acquires the position of the real object by executing the real object position specifying module 504.
Next, the display control device 30 determines whether or not the position of the real object acquired in block S140 falls within the first determination actual scene area R10 set in block S110 and whether or not it falls within the second determination actual scene area R20 set in block S130. Specifically, the processor 33 of the display control device 30 executes the real object position determination module 514 and determines whether the position of the real object acquired from the real object position specifying module 504 falls within the first determination actual scene area R10 and whether it falls within the second determination actual scene area R20. According to the determination result, the image type setting module 518 sets the image corresponding to the real object to the first mode or the second mode, and the image (virtual image) is displayed on the image display unit 20 (blocks S152, S154).
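Putting the determination and the mode selection together, a minimal sketch of these final steps could look like the following; `in_area`, `choose_image_mode`, and the returned mode labels are hypothetical stand-ins for the real object position determination module 514 and the image type setting module 518.

```python
# Minimal sketch of the final determination and of blocks S152/S154.

def in_area(position, area):
    near, far = area
    return near <= position <= far

def choose_image_mode(obj_position, r10, r20):
    if in_area(obj_position, r10):
        return "first mode"   # S152: AR image displayed against the object
    if in_area(obj_position, r20):
        return "second mode"  # S154: pointer image, e.g. in the outer edge
    return None               # outside both areas: no image for this object

print(choose_image_mode(30.0, r10=(10.0, 50.0), r20=(35.0, 115.0)))  # first mode
print(choose_image_mode(80.0, r10=(10.0, 50.0), r20=(35.0, 115.0)))  # second mode
```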
The operations of the processing described above can be performed by executing one or more functional modules of an information processing device such as a general-purpose processor or an application-specific chip. These modules, combinations of these modules, and/or combinations with known hardware capable of substituting for their functions are all within the scope of protection of the present invention.
The functional blocks of the vehicle display system 10 are optionally implemented by hardware, software, or a combination of hardware and software in order to carry out the principles of the various described embodiments.
It will be understood by those skilled in the art that the functional blocks described in FIG. 11 may optionally be combined, or one functional block may be separated into two or more sub-blocks, in order to implement the principles of the described embodiments. Accordingly, the description herein optionally supports any possible combination or division of the functional blocks described herein.
As described above, the display control device 30 of the present embodiment controls the image display unit 20, which displays a virtual image V of an image in the display area 100 that overlaps the foreground when viewed from the eye box 200 in the vehicle.
The display control device 30 comprises one or more I/O interfaces 31 capable of acquiring information, one or more processors 33, a memory 37, and one or more computer programs stored in the memory 37 and configured to be executed by the one or more processors 33. The one or more I/O interfaces 31 acquire the positions of real objects present around the vehicle and at least one of the position of the display area 100, the observer's eye position 4 in the eye box 200, the posture of the vehicle, or information from which these can be estimated. Based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated, the one or more processors 33 execute instructions that set, as the first determination actual scene area R10, the foreground area overlapping at least a part of the display area 100 when viewed from the eye box 200, and set the second determination actual scene area R20 so as to include the foreground area visible above the first determination actual scene area R10 when viewed from the eye box 200.
The one or more processors 33 also execute an instruction that sets a part of the first determination actual scene area R10 and a part of the second determination actual scene area R20 so as to be adjacent to each other.
The memory 37 stores a specific area of the foreground as the first standard actual scene area R10s, and the one or more processors 33 execute an instruction that expands the range of the second determination actual scene area R20 when it is estimated, based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated, that the foreground area overlapping at least a part of the display area 100 when viewed from the eye box 200 deviates from the first standard actual scene area R10s.
The memory 37 stores a specific area of the foreground as the first standard actual scene area R10s, and the one or more processors 33 set, as the first determination actual scene area R10, the foreground area that overlaps at least a part of the display area 100 when viewed from the eye box 200, based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated.
The one or more processors 33 execute an instruction that changes the enlargement width of the range of the second determination actual scene area R20 based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated.
The one or more processors 33 execute an instruction that displays the virtual image V20 (V30) of the second-mode image in the outer edge region 110 of the display area 100.
The position of the real object acquired by the one or more I/O interfaces 31 includes a position in the left-right direction as seen when facing the foreground from the eye box 200.
The one or more processors 33 execute an instruction that moves the left-right position of the virtual image V20 (V30) of the second-mode image as seen from the eye box 200 so as to follow the left-right position of the real object.
The memory 37 stores a specific area of the foreground as the second standard actual scene area R20s, and the one or more processors 33 execute an instruction that expands the range of the second determination actual scene area R20 so as to include at least a part of the second standard actual scene area R20s.
The memory 37 stores a specific area of the foreground as the second standard actual scene area R20s, and the one or more processors 33 execute an instruction that expands the range of the second determination actual scene area R20 so as to include the entire second standard actual scene area R20s.
Reference signs: 1: Vehicle, 2: Front windshield, 4: Eye position, 10: Vehicle display system, 20: HUD device (image display unit), 21: Display, 21a: Display surface, 22: Liquid crystal display panel, 23: Virtual image, 24: Light source unit, 25: Relay optical system, 26: First mirror, 27: Second mirror, 30: Display control device, 31: I/O interface, 33: Processor, 35: Image processing circuit, 37: Memory, 40: Display light, 40p: Optical axis, 41: First image light, 42: Second image light, 43: Third image light, 90: Virtual image optical system, 100: Display area, 101: Upper end, 102: Lower end, 110: Outer edge area, 120: Fixed area, 150: First display area, 150a: Upper end, 150b: Lower end, 151, 152: First display area, 200: Eye box, 205: Center, 300: Real object, 502: Real object information detection module, 504: Real object position identification module, 506: Notification necessity determination module, 508: Eye position detection module, 510: Vehicle attitude detection module, 512: Display area setting module, 514: Real object position determination module, 516: Actual scene area division module, 518: Image type setting module, 520: Image arrangement setting module

Abstract

The present invention makes it easy to recognize information pertaining to a real object even when the eye position or the position of the display area of a virtual image changes. In the present invention, a processor: causes a virtual image V10 of a first-mode image corresponding to a real object to be displayed when the position of the real object falls within a first determination actual scene area R10; causes a virtual image V20 (V30) of a second-mode image corresponding to the real object to be displayed when the position of the real object falls within a second determination actual scene area R20; and expands the range of the second determination actual scene area R20 based on at least one of the position of a display area 100, the eye position 4 of a viewer, the posture of the vehicle, or information from which these can be estimated.

Description

Display control device, head-up display device, and method
The present disclosure relates to a display control device, a head-up display device, and a method that are used in a vehicle and superimpose an image on the foreground of the vehicle for visual recognition.
Patent Document 1 describes a head-up display device that represents a virtual object with a sense of perspective as if it actually existed in the foreground (actual scene) of the own vehicle, thereby generating augmented reality (AR: Augmented Reality), and that improves the connection between the information indicated by the virtual object (image) and real objects present in the actual scene (roads, other vehicles, pedestrians, and the like), allowing the viewer (the driver of the own vehicle) to recognize the information while reducing his or her visual load.
International Publication No. 2018/216553
A head-up display device can typically display a virtual image of an image only in a limited area (virtual-image display area) as seen from the viewer. Suppose the position of the virtual-image display area is fixed even when the eye height (eye position) of the driver of the own vehicle changes (in practice, the position of the virtual-image display area changes somewhat as the eye height (eye position) changes). Then, depending on the driver's eye height (eye position), the area of the actual scene outside the own vehicle that overlaps the virtual-image display area as seen by the viewer differs. Specifically, if the actual scene area overlapped by the virtual-image display area at a reference eye height is taken as the reference actual scene area, the actual scene area overlapped by the virtual-image display area viewed from a position higher than the reference eye height lies below the reference actual scene area as seen by the viewer (in terms of distance, it is an actual scene area nearer than the reference actual scene area). Conversely, the actual scene area overlapped by the virtual-image display area viewed from a position lower than the reference eye height lies above the reference actual scene area as seen by the viewer (in terms of distance, it is an actual scene area farther than the reference actual scene area).
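The geometric relationship described above can be made concrete with a small calculation. In the sketch below, the virtual-image display area is approximated as a vertical segment at a fixed distance ahead of the driver, and a sight line from eye height h through a point of the segment at height y, located d meters ahead, reaches the road surface at distance d * h / (h - y); the distances and heights are invented example values, not figures from the disclosure.

```python
import math

def road_hit_distance(eye_h, point_h, dist):
    """Distance ahead at which the sight line through a display-area point
    at height point_h (dist meters ahead) reaches the road surface."""
    if eye_h <= point_h:
        return math.inf        # the sight line never descends to the road
    return dist * eye_h / (eye_h - point_h)

for eye_h in (1.1, 1.3, 1.5):  # low, reference, and high eye heights [m]
    near = road_hit_distance(eye_h, point_h=0.8, dist=2.5)  # lower end
    far = road_hit_distance(eye_h, point_h=1.1, dist=2.5)   # upper end
    print(f"eye {eye_h:.1f} m -> overlapped road span {near:.1f}..{far:.1f} m")
# The higher the eye position, the nearer (lower) the overlapped span,
# consistent with the relationship described above.
```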
Because the actual scene area overlapped by the virtual-image display area differs with eye height (eye position), it can happen that, even when the relationship between the own vehicle and a real object is constant, for a viewer with a low eye height the real object is included in the virtual-image display area and a virtual object corresponding to that real object is displayed, whereas for a viewer with a high eye height the real object is not included in the virtual-image display area and the corresponding virtual object is not displayed.
Further, when a real object exists around the actual scene area overlapped by the virtual-image display area as seen from the viewer's eye position, an image pointing at the real object is displayed within the virtual-image display area; however, if the actual scene area overlapped by the virtual-image display area as seen from the viewer's eye position becomes separated from the position of the real object, the real object may be excluded from the display targets of the image (virtual image). That is, because the actual scene area overlapped by the virtual-image display area as seen by the viewer changes with the position of the virtual-image display area or with the eye position, it is conceivable that, when the position of the virtual-image display area or the eye position is at a certain position, the real object is excluded from the display targets of the image (virtual image) even though the position of the real object is constant.
A summary of specific embodiments disclosed herein is given below. It should be understood that these aspects are presented solely to provide the reader with an overview of these specific embodiments and do not limit the scope of this disclosure. In fact, the present disclosure may encompass various aspects not described below.
The outline of the present disclosure relates to making information about a real object easy to recognize even when the position of the virtual-image display area or the eye position changes, and more specifically, to suppressing variation in the presented information even when the position of the virtual-image display area or the viewer's eye position differs.
Accordingly, the display control device described herein is a display control device that controls an image display unit that displays a virtual image of an image in a display area overlapping the foreground when viewed from an eye box in a vehicle, and comprises one or more I/O interfaces capable of acquiring information, one or more processors, a memory, and one or more computer programs stored in the memory and configured to be executed by the one or more processors. The one or more I/O interfaces acquire the position of a real object present around the vehicle and at least one of the position of the display area, the observer's eye position in the eye box, the posture of the vehicle, or information from which these can be estimated. The one or more processors execute instructions that determine whether or not the position of the real object falls within a first determination actual scene area and whether or not it falls within a second determination actual scene area, display a virtual image of a first-mode image corresponding to the real object when the position of the real object falls within the first determination actual scene area, display a virtual image of a second-mode image corresponding to the real object when the position of the real object falls within the second determination actual scene area, and expand the range of the second determination actual scene area based on at least one of the position of the display area, the eye position, the posture of the vehicle, or information from which these can be estimated.
FIG. 1 is a diagram showing an application example of a vehicle display system.
FIG. 2 is a diagram showing the configuration of an image display unit.
FIG. 3 is a diagram showing the foreground and a virtual image of a first-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 4 is a diagram showing the foreground and a virtual image of a first-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 5 is a diagram showing the foreground and a virtual image of a second-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 6 is a diagram showing the foreground and a virtual image of a second-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 7A is a diagram showing a real object and a virtual image of a second-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 7B is a diagram showing a situation in which the real object has approached the vehicle further than in FIG. 7A, and showing the real object and the virtual image of the second-mode image visually recognized when facing forward from the eye box in the vehicle.
FIG. 7C is a diagram showing a situation in which the real object has approached the vehicle further than in FIG. 7B, and showing the real object and the virtual image of the second-mode image visually recognized when facing forward from the eye box in the vehicle.
FIG. 8A is a diagram showing a real object and a virtual image of a second-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 8B is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 8A.
FIG. 8C is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 8B.
FIG. 9 is a diagram showing the foreground and a virtual image of a second-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 10 is a diagram showing the foreground and a virtual image of a second-mode image that are visually recognized when facing forward from the eye box in the vehicle.
FIG. 11 is a block diagram of the vehicle display system.
FIG. 12A is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, among the eye box, the first display area that displays the virtual image of the first-mode image, the first determination actual scene area, and the second determination actual scene area.
FIG. 12B is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, among the eye box, the first display area that displays the virtual image of the first-mode image, the first determination actual scene area, and the second determination actual scene area.
FIG. 13A is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, among the first display area, the first determination actual scene area, and the second determination actual scene area.
FIG. 13B is a diagram showing a situation in which the first display area is arranged lower than in FIG. 13A.
FIG. 13C is a diagram showing a situation in which the first display area is arranged lower than in FIG. 13B.
FIG. 14A is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, among the first display area, the first determination actual scene area, and the second determination actual scene area.
FIG. 14B is a diagram showing a situation in which the eye position of the viewer is higher than in FIG. 14A.
FIG. 14C is a diagram showing a situation in which the eye position of the viewer is higher than in FIG. 14B.
FIG. 15A is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, among the first display area, the first determination actual scene area, and the second determination actual scene area.
FIG. 15B is a diagram showing a situation in which the posture of the vehicle is tilted further forward than in FIG. 15A.
FIG. 16A is the same as FIG. 13B and shows one expanded aspect of the second determination actual scene area when, with the position of the first display area in FIG. 13A taken as a reference display area, the first display area is arranged below that reference display area.
FIG. 16B shows one expanded aspect of the second determination actual scene area.
FIG. 16C shows one expanded aspect of the second determination actual scene area.
FIG. 16D shows one expanded aspect of the second determination actual scene area.
FIG. 17A is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 17B is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 17C is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 17D is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 17E is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 17F is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 17G is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
FIG. 18A is a flowchart showing a method of performing, in accordance with some embodiments, an operation of displaying a virtual image of a first-mode or second-mode image for a real object present in the actual scene outside the vehicle.
FIG. 18B is a flowchart following FIG. 18A.
In the following, FIGS. 1, 2, and 11 provide a description of the configuration of an exemplary vehicle display system. FIGS. 3 to 10 provide a description of display examples. FIGS. 12A to 18 describe exemplary operations. The present invention is not limited to the following embodiments (including the contents of the drawings). Modifications (including deletion of components) can of course be made to the embodiments below. In the following description, descriptions of known technical matters are omitted as appropriate to facilitate understanding of the present invention.
Reference is made to FIG. 1. The vehicle display system 10 of the present embodiment comprises an image display unit 20, a display control device 30 that controls the image display unit 20, and electronic devices 401 to 417 connected to the display control device 30.
The image display unit 20 in the vehicle display system 10 is a head-up display (HUD) device provided in the dashboard 5 of the vehicle 1. The image display unit 20 emits display light 40 toward the front windshield 2 (an example of a projection-target member), and the front windshield 2 reflects the display light 40 of the image M displayed by the image display unit 20 toward the eye box 200. By placing the eyes 4 within the eye box 200, the viewer can visually recognize the virtual image V of the image M displayed by the image display unit 20 at a position overlapping the foreground, which is the real space visually recognized through the front windshield 2. In the drawings used in this embodiment, the left-right direction of the vehicle 1 is the X-axis direction (the left side when facing forward of the vehicle 1 is the X-axis positive direction), the up-down direction is the Y-axis direction (the upper side of the vehicle 1 traveling on the road surface is the Y-axis positive direction), and the front-rear direction of the vehicle 1 is the Z-axis direction (the forward direction of the vehicle 1 is the Z-axis positive direction).
The term "eye box" used in the description of this embodiment means (1) a region inside which at least a part of the virtual image V of the image M is visible and outside which no part of the virtual image V of the image M is visible, (2) a region inside which at least a part of the virtual image V of the image M is visible at a predetermined luminance or higher and outside which the entire virtual image V of the image M is below the predetermined luminance, or (3) when the image display unit 20 can display a stereoscopically viewable virtual image V, a region inside which at least a part of the virtual image V can be viewed stereoscopically and outside which no part of the virtual image V is viewed stereoscopically. That is, when the viewer places the eyes (both eyes) 4 outside the eye box 200, the viewer cannot see the entire virtual image V of the image M, perceives the entire virtual image V of the image M with very low visibility, or cannot view the virtual image V of the image M stereoscopically. The predetermined luminance is, for example, about 1/50 of the luminance of the virtual image of the image M visually recognized at the center of the eye box.
The display area 100 is a planar, curved, or partially curved area in which the image M generated inside the image display unit 20 is formed as the virtual image V, and is also called the image-forming plane. The display area 100 is the position at which the display surface (for example, the exit surface of the liquid crystal display panel) 21a of the display 21 (described later) of the image display unit 20 is imaged as a virtual image; that is, the display area 100 corresponds to the display surface 21a of the image display unit 20 (in other words, the display area 100 is in a conjugate relationship with the display surface 21a of the display 21), and the virtual image visually recognized in the display area 100 can be said to correspond to the image displayed on the display surface 21a. The display area 100 itself preferably has such low visibility that it is not actually seen, or is hard to see, by the viewer's eyes 4. For the display area 100, there are set an angle formed with the horizontal plane (XZ plane) about the left-right direction (X-axis direction) of the vehicle 1 (the tilt angle θt in FIG. 1) and, with the angle formed by the line segment connecting the center 205 of the eye box 200 to the upper end 101 of the display area 100 and the line segment connecting the eye box center to the lower end 102 of the display area 100 taken as the vertical view angle of the display area 100, an angle formed by the bisector of this vertical view angle with the horizontal plane (XZ plane) (the vertical arrangement angle θv in FIG. 1).
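As a worked example of these definitions, the following sketch computes the vertical view angle and the vertical arrangement angle θv from the positions of the upper end 101 and the lower end 102 relative to the eye box center 205; the coordinates are invented for illustration.

```python
import math

def vertical_angles(upper, lower):
    """upper, lower: (z, y) offsets in meters of the display-area upper end
    101 and lower end 102 from the eye box center 205."""
    a_top = math.degrees(math.atan2(upper[1], upper[0]))
    a_bot = math.degrees(math.atan2(lower[1], lower[0]))
    vertical_view_angle = a_top - a_bot   # angle between the two sight lines
    theta_v = (a_top + a_bot) / 2.0       # bisector angle vs. the XZ plane
    return vertical_view_angle, theta_v

fov, theta_v = vertical_angles(upper=(2.5, 0.10), lower=(2.5, -0.40))
print(f"vertical view angle {fov:.1f} deg, arrangement angle {theta_v:.1f} deg")
```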
The display area 100 of the present embodiment has a tilt angle θt of approximately 90 [degree] so as to face the viewer substantially head-on when the viewer faces forward (Z-axis positive direction). However, the tilt angle θt is not limited to this and can be changed within the range 0 ≤ θt < 90 [degree]. In that case, for example, the tilt angle θt may be set to 60 [degree], and the display area 100 may be arranged so that its upper region is farther from the viewer than its lower region.
FIG. 2 is a diagram showing the configuration of the HUD device 20 of the present embodiment. The HUD device 20 includes a display 21 having a display surface 21a that displays the image M, and a relay optical system 25.
The display 21 of FIG. 2 is composed of a liquid crystal display panel 22 and a light source unit 24. The display surface 21a is the viewer-side surface of the liquid crystal display panel 22 and emits the display light 40 of the image M. The angle of the display area 100 (including the tilt angle θt) can be set by setting the angle of the display surface 21a with respect to the optical axis 40p of the display light 40 traveling from the center of the display surface 21a toward the eye box 200 (the center of the eye box 200) via the relay optical system 25 and the projection-target member.
The relay optical system 25 is arranged on the optical path of the display light 40 emitted from the display 21 (the light traveling from the display 21 toward the eye box 200) and is composed of one or more optical members that project the display light 40 from the display 21 onto the front windshield 2 outside the HUD device 20. The relay optical system 25 of FIG. 2 includes one concave first mirror 26 and one flat second mirror 27.
The first mirror 26 has, for example, a free-form surface shape with positive optical power. In other words, the first mirror 26 may have a curved shape whose optical power differs from region to region; that is, the optical power applied to the display light 40 may differ depending on the region (optical path) through which the display light 40 passes. Specifically, the optical power applied by the relay optical system 25 may differ among the first image light 41, the second image light 42, and the third image light 43 (see FIG. 2) traveling from the respective regions of the display surface 21a toward the eye box 200.
The second mirror 27 is, for example, a flat mirror, but is not limited to this and may be a curved surface having optical power. That is, the relay optical system 25 may combine a plurality of mirrors (for example, the first mirror 26 and the second mirror 27 of the present embodiment) so that the applied optical power differs depending on the region (optical path) through which the display light 40 passes. The second mirror 27 may also be omitted; that is, the display light 40 emitted from the display 21 may be reflected by the first mirror 26 onto the projection-target member (front windshield) 2.
Further, in the present embodiment the relay optical system 25 includes two mirrors, but it is not limited to this and may include, in addition to or instead of these, one or more refractive optical members such as lenses, diffractive optical members such as holograms, reflective optical members, or combinations thereof.
The relay optical system 25 of the present embodiment has, by virtue of this curved shape (an example of optical power), a function of setting the distance to the display area 100 and a function of generating an enlarged virtual image of the image displayed on the display surface 21a; in addition, it may have a function of suppressing (correcting) distortion of the virtual image that may arise from the curved shape of the front windshield 2.
Further, the relay optical system 25 may be rotatable, with actuators 28 and 29 controlled by the display control device 30 attached to it. This will be described later.
The liquid crystal display panel 22 receives light from the light source unit 24 and emits spatially light-modulated display light 40 toward the relay optical system 25 (the second mirror 27). The liquid crystal display panel 22 is, for example, rectangular, with its short side being the direction in which the pixels corresponding to the up-down direction (Y-axis direction) of the virtual image V seen by the viewer are arranged. The viewer visually recognizes the light transmitted through the liquid crystal display panel 22 via the virtual-image optical system 90. The virtual-image optical system 90 is the combination of the relay optical system 25 shown in FIG. 2 and the front windshield 2.
The light source unit 24 is composed of a light source (not shown) and an illumination optical system (not shown).
The light source (not shown) is, for example, a plurality of chip-type LEDs and emits illumination light to the liquid crystal display panel 22 (an example of a spatial light modulation element). The light source unit 24 is composed of, for example, four light sources arranged in a row along the long side of the liquid crystal display panel 22. Under the control of the display control device 30, the light source unit 24 emits illumination light toward the liquid crystal display panel 22. The configuration of the light source unit 24 and the arrangement of the light sources are not limited to these.
The illumination optical system (not shown) is composed of, for example, one or more lenses (not shown) arranged in the emission direction of the illumination light of the light source unit 24 and a diffuser plate (not shown) arranged in the emission direction of the one or more lenses.
The display 21 may be a self-luminous display or a projection display that projects an image onto a screen. In the latter case, the display surface 21a is the screen of the projection display.
Further, an actuator (not shown) including a motor or the like controlled by the display control device 30 may be attached to the display 21 so that the display surface 21a can be moved and/or rotated.
The relay optical system 25 has two rotation axes (a first rotation axis AX1 and a second rotation axis AX2) for moving the eye box 200 in the up-down direction (Y-axis direction). Each of the first rotation axis AX1 and the second rotation axis AX2 is set so as not to be perpendicular to the left-right direction (X-axis direction) of the vehicle 1 (in other words, not parallel to the YZ plane) when the HUD device 20 is attached to the vehicle 1. Specifically, the angles between the left-right direction (X-axis direction) of the vehicle 1 and each of the first rotation axis AX1 and the second rotation axis AX2 are set to less than 45 [degree], and more preferably to less than 20 [degree].
With rotation of the relay optical system 25 about the first rotation axis AX1, the amount of vertical movement of the display area 100 is relatively small and the amount of vertical movement of the eye box 200 is relatively large. With rotation about the second rotation axis AX2, the amount of vertical movement of the display area 100 is relatively large and the amount of vertical movement of the eye box 200 is relatively small. That is, comparing the first rotation axis AX1 and the second rotation axis AX2, the ratio "vertical movement of the eye box 200 / vertical movement of the display area 100" for rotation about the first rotation axis AX1 is larger than that for rotation about the second rotation axis AX2. In other words, the relative amounts of vertical movement of the display area 100 and of the eye box 200 caused by rotation of the relay optical system 25 differ between the first rotation axis AX1 and the second rotation axis AX2.
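Treating the two rotations as approximately linear effects, a short sketch shows how the differing ratios can be exploited: solving a 2x2 linear system yields the pair of rotations that moves the eye box while leaving the display area nearly unchanged. The per-degree sensitivities are invented example values that merely respect the stated inequality between the AX1 and AX2 ratios.

```python
# Illustrative per-degree effects [mm/deg]: rotation about AX1 mostly moves
# the eye box; rotation about AX2 mostly moves the display area.
EYEBOX_PER_DEG = {"AX1": 8.0, "AX2": 1.0}
DISPLAY_PER_DEG = {"AX1": 1.0, "AX2": 8.0}

def solve_rotations(d_eyebox_mm, d_display_mm):
    """Rotations (deg) about AX1 and AX2 that produce the requested vertical
    movements of the eye box and of the display area."""
    a, c = EYEBOX_PER_DEG["AX1"], EYEBOX_PER_DEG["AX2"]
    b, d = DISPLAY_PER_DEG["AX1"], DISPLAY_PER_DEG["AX2"]
    det = a * d - b * c
    r1 = (d_eyebox_mm * d - d_display_mm * c) / det
    r2 = (a * d_display_mm - b * d_eyebox_mm) / det
    return r1, r2

print(solve_rotations(10.0, 0.0))  # raise the eye box 10 mm, display area held
```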
The HUD device 20 includes a first actuator 28 that rotates the first mirror 26 about the first rotation axis AX1 and a second actuator 29 that rotates the first mirror 26 about the second rotation axis AX2. In other words, the HUD device 20 rotates one relay optical system 25 about two axes (the first rotation axis AX1 and the second rotation axis AX2). The first actuator 28 and the second actuator 29 may be configured as one integrated two-axis actuator.
In another embodiment, the HUD device 20 rotates two relay optical systems 25 about two axes (the first rotation axis AX1 and the second rotation axis AX2). For example, the HUD device 20 may include a first actuator 28 that rotates the first mirror 26 about the first rotation axis AX1 and a second actuator 29 that rotates the second mirror 27 about the second rotation axis AX2.
As long as rotation about the first rotation axis AX1 makes the vertical movement of the eye box 200 relatively large and rotation about the second rotation axis AX2 makes the vertical movement of the display area 100 relatively large, the arrangement of the first rotation axis AX1 and the second rotation axis AX2 is not limited to the above. Driving by the actuators may also include translation in addition to or instead of rotation.
In another embodiment, the HUD device 20 need not drive the relay optical system 25. In other words, the HUD device 20 need not have actuators that rotate and/or move the relay optical system 25. The HUD device 20 of this embodiment may include a wide eye box 200 that covers the range of driver eye heights expected for use of the vehicle 1.
Based on control by the display control device 30 described later, the image display unit 20 can also make the viewer (typically, the viewer seated in the driver's seat of the vehicle 1) perceive visual augmented reality (AR: Augmented Reality) by displaying an image near a real object 300 present in the foreground, which is the real space (actual scene) visually recognized through the front windshield 2 of the vehicle 1, such as the road surface 310 of the traveling lane, a branch road 330, a road sign, an obstacle (a pedestrian 320, a bicycle, a motorcycle, another vehicle, or the like), or a feature (a building, a bridge, or the like), at a position overlapping the real object 300, or at a position set with the real object 300 as a reference. In the description of the present embodiment, an image whose displayed position can change according to the position of a real object 300 present in the actual scene is defined as an AR image, and an image whose displayed position is set regardless of the position of the real object 300 is defined as a non-AR image. Examples of AR images are described below.
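The AR / non-AR distinction drawn here can be summarized as a small data structure; the class and field names below are illustrative and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ContentImage:
    kind: str                               # e.g. "route arrow", "ripple"
    anchor_object_id: Optional[int] = None  # set -> AR image: its displayed
                                            # position tracks a real object 300
    fixed_position: Optional[Tuple[float, float]] = None  # set -> non-AR image

    @property
    def is_ar(self) -> bool:
        return self.anchor_object_id is not None

print(ContentImage("ripple", anchor_object_id=320).is_ar)       # True
print(ContentImage("speed", fixed_position=(0.0, -0.4)).is_ar)  # False
```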
FIGS. 3 and 4 are diagrams showing the foreground visually recognized by the viewer facing forward from the inside of the vehicle and a first-mode AR image visually recognized overlapping the foreground. The first-mode AR image is displayed for a real object 300 that appears inside the display area 100 as seen by the viewer. In this specification, the "first mode of an image" is an image displayed in a first display area 150 (described later) within the display area 100, and is the mode of an image displayed for a real object present in the actual scene area overlapping the first display area 150 as seen from a predetermined position in the eye box 200 (for example, but not limited to, the center 205). That is, the virtual image of a first-mode image can be described, as seen by the viewer, as overlapping the real object, surrounding the real object, being close to the real object, and so on. In contrast, the "second mode of an image", described later, is the mode of an image displayed for a real object present outside the actual scene area overlapping the first display area 150 as seen by the viewer.
First, reference is made to FIG. 3. In FIG. 3, a real object (pedestrian) 320 is present in the actual scene area overlapping the display area 100 (first display area 150) as seen by the viewer. The image display unit 20 of the present embodiment displays virtual images V10 (V11, V12, V13) of first-mode AR images for the pedestrian 320 present in the actual scene area overlapping the display area 100 as seen by the viewer. The virtual image V11 is a rectangular image positioned so as to surround the pedestrian 320 from the outside and indicate the position of the pedestrian 320 (an example of being arranged near the real object 300); the virtual image V12 is an illustration indicating the type of the real object 300 (a pedestrian) and is arranged over the real object 300 (an example of being arranged so as to overlap the real object 300); and the third virtual image V13 has an arrow shape indicating the moving direction of the pedestrian 320 and is displayed at a position shifted from the pedestrian 320 toward the direction in which the pedestrian 320 is moving (an example of being arranged at a position set with the real object 300 as a reference). Although the display area 100 is illustrated as a rectangle in FIG. 3, as described above, the display area 100 has such low visibility that it is not actually seen, or is hard to see, by the viewer. That is, the virtual images V11, V12, and V13 of the image M displayed on the display surface 21a of the display 21 are clearly visible, while the virtual image of the display surface 21a itself (the virtual image of the area where the image M is not displayed) is not visible (is hard to see).
Next, reference is made to FIG. 4. In FIG. 4, a real object (branch road) 330 is present in the actual scene area overlapping the display area 100 as seen by the viewer. The image display unit 20 of the present embodiment displays a virtual image V10 (V14) of a first-mode AR image for the branch road 330 present in the actual scene area overlapping the display area 100 as seen by the viewer. The virtual image V14 is arranged so that, as seen by the viewer, an arrow-shaped virtual object indicating the guidance route overlaps the road surface 310 and the branch road 330 in the foreground of the vehicle 1. The virtual image V14 is also an image whose arrangement (angle) is set so that the angle it forms with the road surface 310 is visually recognized as 0 [degree] (in other words, parallel to the road surface 310). The guidance route indicates going straight and then turning right at the branch road 330: as seen by the viewer, it overlaps the road surface 310 of the traveling lane of the vehicle 1 and points in the straight-ahead direction (Z-axis positive direction) toward the branch road 330 ahead, and the portion indicating the guidance route beyond the branch road 330 points to the right (X-axis negative direction) so as to overlap the road surface 310 of the branch road in the right-turn direction.
FIGS. 5, 6, and 7 are diagrams showing the foreground visually recognized by the viewer facing forward from the inside of the vehicle and virtual images of second-mode AR images visually recognized overlapping the foreground. The virtual image of a second-mode AR image is displayed for a real object 300 that appears outside the display area 100 (an example of the first display area 150 described later) as seen by the viewer.
First, reference is made to FIG. 5. In the example of FIG. 5, the image display unit 20 displays the virtual image V20 (V21), which is a second-mode AR image, in a region of some width along the upper, lower, left, and right outer edges of the display area 100 (the outer edge region 110). The display control device 30, described later, arranges the virtual image V21 near the pedestrian 320 present outside the display area 100 as seen by the viewer. The virtual image V21 is, for example, a ripple image based on the position of the pedestrian 320, and may be a still image or a moving image. The virtual image V21 may, but need not, have a shape or motion that points in the direction of the pedestrian 320. The mode of the virtual image V21, a second-mode AR image, is not limited to this and may be an arrow, text, and/or a mark or the like. By displaying the virtual image V21, a second-mode AR image, in the outer edge region 110 of the display area 100 close to the pedestrian 320 in this way, the display control device 30 can make it easier for the viewer to grasp the real object associated with the virtual image V21.
Next, refer to FIG. 6. In the example of FIG. 6, the image display unit 20 displays the virtual image V20 (V22), an AR image of the second aspect, in a predetermined region (fixed region) 120 within the display area 100. In the example of FIG. 6, the fixed region 120 is set in the lower central part of the display area 100. The display control device 30, described later, places in the fixed region 120 a virtual image V22 having a shape and/or motion that points toward the pedestrian 320 who, as seen by the viewer, is outside the display area 100. The virtual image V22 is, for example, a ripple image referenced to the position of the pedestrian 320, and may be a still image or a moving image. The form of the virtual image V22, an AR image of the second aspect, is not limited to this as long as it includes a shape and/or motion pointing toward the pedestrian 320 outside the display area 100, and it may be composed of one or more arrows, text, and/or marks or the like. By displaying in the predetermined fixed region 120 the virtual image V22, an AR image of the second aspect including a shape and/or motion pointing toward the pedestrian 320 outside the display area 100, the display control device 30 can make it easier for the viewer to grasp which real object the virtual image V22 is associated with while keeping the required movement of the viewer's eye position small. Note that the fixed region 120 is not strictly fixed: it may be changed depending on the layout of the plurality of images displayed by the image display unit 20, and may also be changed depending on the state of the real scene or the state of the vehicle 1 acquired via the I/O interface described later.
FIGS. 7A, 7B, and 7C are diagrams showing how the size (an example of a display mode) of the virtual image V20 (V23), an AR image of the second aspect, changes in accordance with the position of a real object 340 located, as seen by the viewer, outside the display area 100. As the vehicle 1 moves forward, the position of the real object 340 as seen by the viewer gradually moves to the left (positive X-axis direction) and toward the near side (negative Z-axis direction), in the order of FIGS. 7A, 7B, and 7C. In doing so, the image display unit 20, described later, may gradually move the virtual image V23 to the left (positive X-axis direction) so as to follow the leftward (positive X-axis direction) movement of the real object 340. The image display unit 20, described later, may also gradually enlarge the virtual image V23 so as to follow the movement of the real object 340 toward the near side (negative Z-axis direction). That is, the image display unit 20, described later, may change the position and/or size (an example of a display mode) of the virtual image V23, an AR image of the second aspect, in accordance with the position of the real object 340.
FIGS. 8A, 8B, and 8C are diagrams showing how the luminance (an example of a display mode) of the virtual image V20 (V23), an AR image of the second aspect, changes in accordance with the position of the real object 340 located, as seen by the viewer, outside the display area 100. As the vehicle 1 moves forward, the position of the real object 340 as seen by the viewer gradually moves to the left (positive X-axis direction) and toward the near side (negative Z-axis direction), in the order of FIGS. 8A, 8B, and 8C. In doing so, the image display unit 20, described later, may gradually move the virtual image V23 to the left (positive X-axis direction) so as to follow the leftward (positive X-axis direction) movement of the real object 340, and may gradually lower the luminance of the virtual image V23 so as to follow the movement of the real object 340 toward the near side (negative Z-axis direction). This description does not exclude the image display unit 20 instead gradually raising the luminance of the virtual image V23 to follow the movement of the real object 340 toward the near side (negative Z-axis direction). The image display unit 20, described later, may change the position and/or luminance (an example of a display mode) of the virtual image V23, an AR image of the second aspect, in accordance with the position of the real object 340. In addition to the position information of the real object 340, the image display unit 20, described later, may change the display mode of the virtual image V23, an AR image of the second aspect, in accordance with information about the vehicle 1, information about the occupants of the vehicle 1, information other than the position of the real object that is the display target of the virtual image, and/or information such as the positions of real objects that are not display targets of the virtual image. Beyond those described above, the changes in the display mode of a virtual image referred to here may include changes in color, changes in brightness, switching between steady lighting and blinking, and/or switching between display and non-display.
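Purely as an illustrative, non-limiting sketch of the follow-up behavior of FIGS. 7A to 7C and 8A to 8C (the function name, ranges, and coefficients below are assumptions for illustration, not part of the disclosure), the derivation of the display mode of the virtual image V23 from the position of the real object 340 could be expressed in Python as follows:

    def follow_real_object(obj_x, obj_z, z_min=5.0, z_max=100.0):
        """Derive a display mode of virtual image V23 from real object 340.

        obj_x: lateral position of the real object (m, X-axis; positive = left).
        obj_z: depth of the real object (m, Z-axis; positive = far).
        Returns (image_x, scale, luminance); all ranges are assumed values.
        """
        # Clamp the depth into the range handled by the display.
        z = max(z_min, min(obj_z, z_max))
        # Normalized "nearness": 0.0 at z_max (far), 1.0 at z_min (near).
        nearness = (z_max - z) / (z_max - z_min)
        image_x = obj_x                   # follow the object laterally (FIGS. 7/8)
        scale = 1.0 + 1.5 * nearness      # grow as the object approaches (FIG. 7)
        luminance = 1.0 - 0.7 * nearness  # dim as the object approaches (FIG. 8)
        return image_x, scale, luminance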
FIG. 9 is a diagram illustrating a virtual image of a non-AR image of the second aspect. In FIG. 9, the real object (branch road) 330 is outside the real-scene region that, as seen by the viewer, overlaps the display area 100. For the branch road 330 located in a real-scene region that does not overlap the display area 100 as seen by the viewer, the image display unit 20 of the present embodiment displays virtual images V30 (V31, V32) of non-AR images of the second aspect in a predetermined region (fixed region) 120 within the display area 100. The display control device 30, described later, places in the fixed region 120 a virtual image V31, a non-AR image showing the guidance route (here, a right turn), and a virtual image V32, a non-AR image showing the distance to the branch road. A "non-AR image" here is an image whose position and indicated direction are not changed in accordance with the real-space position of a real object in the real scene. The virtual image V31 is an arrow image indicating the right-turn direction, but as long as its displayed position and indicated direction are not changed in accordance with the position of the branch road 330 (in other words, in accordance with the positional relationship between the vehicle 1 and the branch road 330), specifically, as long as it keeps the same shape in the fixed region 120, it is classified as a non-AR image. The non-AR image of the second aspect is not limited to this as long as it includes information about the real object outside the display area 100 (here, the branch road 330), and it may be composed of one or more pieces of text and/or marks or the like.
FIG. 10 is a diagram showing an example in which a virtual image V33 composed of a mark, a non-AR image of the second aspect, is displayed for a pedestrian 320 who is outside the real-scene region that overlaps the display area 100 as seen by the viewer. The image display unit 20 of the present embodiment displays, in the fixed region 120, the virtual image V30 (V33) of a non-AR image of the second aspect for the pedestrian 320 located outside the real-scene region overlapping the display area 100 as seen by the viewer. In this way, by displaying in the predetermined fixed region 120 the virtual images V30 (V31, V32, V33), non-AR images of the second aspect that announce the presence of real objects (the pedestrian 320, the branch road 330) outside the display area 100, the display control device 30 can make it easier for the viewer to grasp the presence (approach) of real objects outside the display area 100 while keeping the required movement of the viewer's eye position small.
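As one purely illustrative way of organizing the selection among the first-aspect AR image (FIG. 4), the second-aspect AR image (FIGS. 5 to 8), and the second-aspect non-AR image (FIGS. 9 and 10) described above (the types, names, and the angular-margin criterion below are assumptions for illustration, not part of the disclosure), a sketch in Python:

    from dataclasses import dataclass

    @dataclass
    class Rect:
        """Display area 100 in viewer-relative angular coordinates (assumed)."""
        left: float
        right: float
        bottom: float
        top: float
        def contains(self, x, y):
            return self.left <= x <= self.right and self.bottom <= y <= self.top
        def expanded(self, m):
            return Rect(self.left - m, self.right + m, self.bottom - m, self.top + m)

    def select_display_mode(x, y, area, margin=2.0):
        """Choose an image aspect for a real object seen at (x, y)."""
        if area.contains(x, y):
            return "AR_FIRST_ASPECT"      # FIG. 4: overlaid on the object itself
        if area.expanded(margin).contains(x, y):
            return "AR_SECOND_ASPECT"     # FIGS. 5-8: edge/fixed region, AR-linked
        return "NON_AR_SECOND_ASPECT"     # FIGS. 9-10: fixed region, non-AR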
FIG. 11 is a block diagram of the vehicle display system 10 according to some embodiments. The display control device 30 includes one or more I/O interfaces 31, one or more processors 33, one or more image processing circuits 35, and one or more memories 37. The various functional blocks shown in FIG. 11 may be configured in hardware, software, or a combination of both. FIG. 11 is only one embodiment; the illustrated components may be combined into fewer components, or there may be additional components. For example, the image processing circuit 35 (for example, a graphics processing unit) may be included in the one or more processors 33.
As shown, the processor 33 and the image processing circuit 35 are operably coupled to the memory 37. More specifically, by executing a program stored in the memory 37, the processor 33 and the image processing circuit 35 can operate the vehicle display system 10 (image display unit 20), for example by generating and/or transmitting image data. The processor 33 and/or the image processing circuit 35 may include at least one general-purpose microprocessor (for example, a central processing unit (CPU)), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array (FPGA), or any combination thereof. The memory 37 includes any type of magnetic medium such as a hard disk, any type of optical medium such as a CD or DVD, any type of semiconductor memory such as volatile memory, and non-volatile memory. Volatile memory may include DRAM and SRAM, and non-volatile memory may include ROM and NVRAM.
As shown, the processor 33 is operably coupled to the I/O interface 31. The I/O interface 31 communicates, for example, with a vehicle ECU 401 described later or other electronic devices provided in the vehicle (reference numerals 403 to 417 described later) in accordance with the CAN (Controller Area Network) standard (also referred to as CAN communication). The communication standard adopted by the I/O interface 31 is not limited to CAN; it includes in-vehicle communication (internal communication) interfaces such as wired communication interfaces like CAN FD (CAN with Flexible Data Rate), LIN (Local Interconnect Network), Ethernet (registered trademark), MOST (Media Oriented Systems Transport; MOST is a registered trademark), UART, or USB, and short-range wireless communication interfaces with a range of several tens of meters, such as a personal area network (PAN) like a Bluetooth (registered trademark) network or a local area network (LAN) like an 802.11x Wi-Fi (registered trademark) network. The I/O interface 31 may also include an out-of-vehicle communication (external communication) interface to a wide area network (for example, the Internet) via a wireless wide area network (WWAN, IEEE 802.16-2004 (WiMAX: Worldwide Interoperability for Microwave Access)), IEEE 802.16e-based (Mobile WiMAX), or a cellular communication standard such as 4G, 4G-LTE, LTE Advanced, or 5G.
As shown, the processor 33 is interoperably coupled to the I/O interface 31, enabling it to exchange information with the various other electronic devices and the like connected to the vehicle display system 10 (I/O interface 31). To the I/O interface 31 are operably connected, for example, the vehicle ECU 401, a road information database 403, an own-vehicle position detection unit 405, a vehicle exterior sensor 407, an operation detection unit 409, an eye position detection unit 411, a line-of-sight direction detection unit 413, a mobile information terminal 415, and an external communication device 417. The I/O interface 31 may include a function of processing (converting, computing, analyzing) information received from the other electronic devices and the like connected to the vehicle display system 10.
The display 21 is operably coupled to the processor 33 and the image processing circuit 35. Accordingly, the image displayed by the image display unit 20 may be based on image data received from the processor 33 and/or the image processing circuit 35. The processor 33 and the image processing circuit 35 control the image displayed by the image display unit 20 based on information acquired from the I/O interface 31.
The vehicle ECU 401 acquires, from sensors and switches provided in the vehicle 1, the state of the vehicle 1 (for example, mileage, vehicle speed, accelerator pedal opening, brake pedal opening, engine throttle opening, injector fuel injection amount, engine speed, motor speed, steering angle, shift position, drive mode, various warning states, attitude (including roll angle and/or pitch angle), and vehicle vibration (including the magnitude, frequency of occurrence, and/or frequency of the vibration)), and collects and manages (which may include controlling) the state of the vehicle 1; as part of its functions, it can output a signal indicating a numerical value of the state of the vehicle 1 (for example, the vehicle speed of the vehicle 1) to the processor 33 of the display control device 30. In addition to, or instead of, simply transmitting numerical values detected by sensors or the like (for example, a pitch angle of 3 [degree] in the forward-leaning direction) to the processor 33, the vehicle ECU 401 may transmit to the processor 33 a determination result based on one or more states of the vehicle 1 including the numerical values detected by the sensors (for example, that the vehicle 1 satisfies a predetermined forward-leaning condition), and/or an analysis result (for example, combined with the brake pedal opening information, that braking has put the vehicle in a forward-leaning state). For example, the vehicle ECU 401 may output to the display control device 30 a signal indicating a determination result that the vehicle 1 satisfies a predetermined condition stored in advance in a memory (not shown) of the vehicle ECU 401. The I/O interface 31 may also acquire the above-described information directly from the sensors and switches provided in the vehicle 1, without going through the vehicle ECU 401.
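Purely as an illustrative sketch of the kind of determination and analysis result described above (the function name and threshold values are assumptions for illustration, not part of the disclosure), in Python:

    def judge_forward_lean(pitch_deg, brake_opening,
                           pitch_limit=2.0, brake_limit=0.3):
        """Turn raw sensor values into a determination/analysis result.

        pitch_deg: pitch angle of vehicle 1 (positive = forward lean), degrees.
        brake_opening: brake pedal opening, 0.0 (released) to 1.0 (fully pressed).
        pitch_limit / brake_limit: assumed example thresholds.
        """
        forward_leaning = pitch_deg >= pitch_limit          # determination result
        caused_by_braking = forward_leaning and brake_opening >= brake_limit
        return {"forward_leaning": forward_leaning,         # analysis result
                "caused_by_braking": caused_by_braking}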
The vehicle ECU 401 may also output to the display control device 30 an instruction signal specifying an image to be displayed by the vehicle display system 10; at that time, it may attach to the instruction signal, and transmit, the coordinates, size, type, and display mode of the image, the notification necessity of the image, and/or necessity-related information from which the notification necessity is determined.
The road information database 403 is included in a navigation device (not shown) provided in the vehicle 1, or in an external server connected to the vehicle 1 via the out-of-vehicle communication interface (I/O interface 31). Based on the position of the vehicle 1 acquired from the own-vehicle position detection unit 405 described later, it may read out and transmit to the processor 33 information around the vehicle 1 (real-object-related information around the vehicle 1): information on the road on which the vehicle 1 travels (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, branch roads, traffic regulations, and the like) and feature information (buildings, bridges, rivers, and the like), including their presence or absence, position (including distance to the vehicle 1), direction, shape, type, and detailed information. The road information database 403 may also calculate an appropriate route from the departure point to the destination (navigation information) and output to the processor 33 a signal indicating the navigation information or image data indicating the route.
The own-vehicle position detection unit 405 is a GNSS (Global Navigation Satellite System) receiver or the like provided in the vehicle 1; it detects the current position and heading of the vehicle 1 and outputs a signal indicating the detection result, via the processor 33 or directly, to the road information database 403, the mobile information terminal 415 described later, and/or the external communication device 417. The road information database 403, the mobile information terminal 415 described later, and/or the external communication device 417 may acquire the position information of the vehicle 1 from the own-vehicle position detection unit 405 continuously, intermittently, or at each predetermined event, and may select and generate information around the vehicle 1 and output it to the processor 33.
The vehicle exterior sensor 407 detects real objects 300 existing around the vehicle 1 (ahead, to the sides, and behind). The real objects 300 detected by the vehicle exterior sensor 407 may include, for example, obstacles (pedestrians, bicycles, motorcycles, other vehicles, and the like), the road surface 310 of the traveling lane described later, lane markings, roadside objects, and/or features (buildings and the like). The vehicle exterior sensor consists of, for example, a detection unit composed of a radar sensor such as a millimeter-wave radar, ultrasonic radar, or laser radar, a camera, or a combination thereof, and a processing device that processes (data-fuses) the detection data from the one or more detection units. Conventional, well-known techniques are applied to object detection by these radar sensors and camera sensors. Through object detection by these sensors, it may detect the presence or absence of a real object in three-dimensional space and, if a real object exists, the position of that real object (the relative distance from the vehicle 1, and its left-right position, up-down position, and the like, with the traveling direction of the vehicle 1 taken as the front-rear direction), its size (extent in the lateral (left-right) direction, the height (up-down) direction, and the like), its movement direction (lateral (left-right) and depth (front-rear) directions), its movement speed (lateral (left-right) and depth (front-rear) directions), and/or its type. The one or more vehicle exterior sensors 407 can detect real objects ahead of the vehicle 1 in each detection cycle of each sensor and output real object information (an example of real-object information: the presence or absence of a real object and, if a real object exists, information such as the position, size, and/or type of each real object) to the processor 33. This real object information may also be transmitted to the processor 33 via another device (for example, the vehicle ECU 401). When a camera is used as a sensor, an infrared or near-infrared camera is desirable so that real objects can be detected even when the surroundings are dark, such as at night. Likewise, when a camera is used as a sensor, a stereo camera, which can also acquire distance and the like from parallax, is desirable.
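Purely as an illustrative sketch of the fused output described above (the structure name and fields are assumptions for illustration, not part of the disclosure), one real-object record per detection cycle might be represented in Python as:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RealObjectInfo:
        """One fused detection from the vehicle exterior sensor 407 (illustrative)."""
        x: float                    # left-right position (m), relative to vehicle 1
        y: float                    # up-down position (m)
        z: float                    # front-rear distance (m)
        width: float                # lateral size (m)
        height: float               # vertical size (m)
        vx: float                   # lateral speed (m/s), relative to vehicle 1
        vz: float                   # depth speed (m/s), relative to vehicle 1
        kind: Optional[str] = None  # e.g. "pedestrian", "vehicle", "lane_marking"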
The operation detection unit 409 is, for example, a hardware switch provided on the CID (Center Information Display) or instrument panel of the vehicle 1, or a software switch combining an image with a touch sensor or the like; it outputs to the processor 33 operation information based on operations by an occupant of the vehicle 1 (the user seated in the driver's seat and/or the user seated in the passenger seat). For example, in response to user operations, the operation detection unit 409 outputs to the processor 33 display area setting information based on an operation to move the display area 100, eyebox setting information based on an operation to move the eyebox 200, information based on an operation to set the eye position 4 of the viewer (an example of eye position information), and the like.
The eye position detection unit 411 may include a camera, such as an infrared camera, that detects the position of the eyes of the viewer seated in the driver's seat of the vehicle 1, and may output the captured image to the processor 33. The processor 33 can acquire the captured image (an example of information from which the eye position can be estimated) from the eye position detection unit 411 and identify the eye position of the viewer by analyzing the captured image. Alternatively, the eye position detection unit 411 may analyze the image captured by the camera and output to the processor 33 a signal indicating the eye position of the viewer as the analysis result. The method of acquiring the eye position of the viewer of the vehicle 1, or information from which the eye position of the viewer can be estimated, is not limited to these; it may be acquired using a known eye position detection (estimation) technique. By at least adjusting the position of an image based on the eye position of the viewer, the processor 33 may cause the viewer whose eye position has been detected to see the image superimposed at a desired position in the foreground (a position having a specific positional relationship with a real object).
The line-of-sight direction detection unit 413 may include an infrared camera or a visible-light camera that captures the face of the viewer seated in the driver's seat of the vehicle 1, and may output the captured image to the processor 33. The processor 33 can acquire the captured image (an example of information from which the line-of-sight direction can be estimated) from the line-of-sight direction detection unit 413 and identify the line-of-sight direction (and/or the gaze position) of the viewer by analyzing the captured image. Alternatively, the line-of-sight direction detection unit 413 may analyze the image captured by the camera and output to the processor 33 a signal indicating the line-of-sight direction (and/or the gaze position) of the viewer as the analysis result. The method of acquiring information from which the line-of-sight direction of the viewer of the vehicle 1 can be estimated is not limited to these; it may be acquired using other known line-of-sight direction detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflection method, the scleral reflection method, Purkinje image detection, the search coil method, or the infrared fundus camera method.
The mobile information terminal 415 is a smartphone, laptop computer, smartwatch, or other information device that the viewer (or another occupant of the vehicle 1) can carry. By pairing with the mobile information terminal 415, the I/O interface 31 can communicate with the mobile information terminal 415 and acquires data recorded in the mobile information terminal 415 (or in a server accessed through the mobile information terminal). The mobile information terminal 415 may, for example, have functions similar to those of the road information database 403 and the own-vehicle position detection unit 405 described above, acquire the road information (an example of real-object-related information), and transmit it to the processor 33. The mobile information terminal 415 may also acquire commercial information related to commercial facilities near the vehicle 1 (an example of real-object-related information) and transmit it to the processor 33. The mobile information terminal 415 may further transmit the schedule information of the owner of the mobile information terminal 415 (for example, the viewer), incoming-call information on the mobile information terminal 415, mail reception information, and the like to the processor 33, and the processor 33 and the image processing circuit 35 may generate and/or transmit image data related to these.
The external communication device 417 is a communication device that exchanges information with the vehicle 1: for example, other vehicles connected to the vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle To Vehicle), pedestrians (mobile information terminals carried by pedestrians) connected by vehicle-to-pedestrian communication (V2P: Vehicle To Pedestrian), and network communication devices connected by road-to-vehicle communication (V2I: Vehicle To roadside Infrastructure); in a broad sense, it includes everything connected by communication with the vehicle 1 (V2X: Vehicle To Everything). The external communication device 417 may acquire the positions of, for example, pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles and the like), road surfaces, lane markings, roadside objects, and/or features (buildings and the like), and output them to the processor 33. The external communication device 417 may also have a function similar to that of the own-vehicle position detection unit 405 described above, acquiring the position information of the vehicle 1 and transmitting it to the processor 33, and may further have the functions of the road information database 403 described above, acquiring the road information (an example of real-object-related information) and transmitting it to the processor 33. The information acquired from the external communication device 417 is not limited to the above.
The software components stored in the memory 37 include a real object information detection module 502, a real object position specification module 504, a notification necessity determination module 506, an eye position detection module 508, a vehicle posture detection module 510, a display area setting module 512, a real object position determination module 514, a real-scene region division module 516, an image type setting module 518, an image placement setting module 520, an image size setting module 522, a line-of-sight direction determination module 524, a graphics module 526, and a drive module 528.
The real object information detection module 502 acquires information including at least the position of a real object 300 existing ahead of the vehicle 1 (also referred to as real object information). The real object information detection module 502 may acquire, for example from the vehicle exterior sensor 407, information (an example of real object information) including the position of the real object 300 existing in the foreground of the vehicle 1 (its position in the height direction (up-down) and the lateral direction (left-right) as seen when the viewer in the driver's seat of the vehicle 1 looks in the traveling direction (forward) of the vehicle 1, to which the position (distance) in the depth direction (forward) may be added), the size of the real object 300 (its size in the height and lateral directions), and its speed relative to the vehicle 1 (including the relative movement direction). The real object information detection module 502 may also acquire, via the external communication device 417, information indicating the position, relative speed, and type of a real object (for example, another vehicle), the lighting state of the other vehicle's turn indicators, the state of its steering operation, and/or its planned route and progress schedule according to its driving support system (an example of real-object-related information).
The real object information detection module 502 may also acquire, from the vehicle exterior sensor 407, the position of the left lane marking 311 (see FIG. 3) and the position of the right lane marking 312 (see FIG. 3) of the road surface 310 (see FIG. 3) of the traveling lane of the vehicle 1, and recognize the region between those left and right lane markings 311 and 312 (the road surface 310 of the traveling lane).
The real object information detection module 502 may also detect information about real objects existing in the foreground of the vehicle 1 (real-object-related information) from which the content of the virtual image V described later (hereinafter also referred to as the "image type" where appropriate) is determined. The real-object-related information is, for example (but not limited to), type information indicating the type of the real object, such as whether it is a pedestrian, a building, or another vehicle; movement direction information indicating the direction in which the real object is moving; distance/time information indicating the distance or arrival time to the real object; or individual detailed information about the real object, such as the fee of a parking lot (an example of a real object). For example, the real object information detection module 502 may acquire type information, distance/time information, and/or individual detailed information from the road information database 403 or the mobile information terminal 415; acquire type information, movement direction information, and/or distance/time information from the vehicle exterior sensor 407; and/or detect type information, movement direction information, distance/time information, and/or individual detailed information from the external communication device 417.
The real object position specification module 504 acquires, via the I/O interface 31, an observed position indicating the current position of the real object 300 from the vehicle exterior sensor 407 or the external communication device 417, or acquires an observed position of the real object obtained by data fusion of two or more of these observed positions, and sets the position of the real object 300 (also referred to as the specified position) based on the acquired observed position. The image placement setting module 520, described later, determines the position of the image with reference to the specified position of the real object 300 set by this real object position specification module 504.
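Purely as an illustrative sketch of one common data-fusion scheme that could serve here (inverse-variance weighting; the function name and variance inputs are assumptions for illustration, not part of the disclosure), in Python:

    def fuse_observations(pos_a, var_a, pos_b, var_b):
        """Fuse two observed positions of real object 300 by inverse-variance weighting.

        pos_a, pos_b: observed (x, z) positions from two sources (e.g. vehicle
        exterior sensor 407 and external communication device 417).
        var_a, var_b: assumed measurement variances of the two sources.
        Returns the fused position used as the "specified position".
        """
        wa = 1.0 / var_a
        wb = 1.0 / var_b
        return tuple((wa * a + wb * b) / (wa + wb) for a, b in zip(pos_a, pos_b))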
The real object position specification module 504 may specify the position of the real object 300 based on the most recently acquired observed position of the real object 300, but is not limited to this; it may specify (estimate) the position of the real object 300 based on a predicted position of the real object at a given time, predicted from one or more observed positions of the real object 300 acquired in the past, including at least the most recently acquired observed position. That is, by executing the real object position specification module 504 and the image placement setting module 520 described later, the processor 33 can set the position of the virtual image V based on the most recently acquired observed position of the real object 300, or based on the predicted position of the real object 300 at the display update timing of the virtual image V, predicted from one or more observed positions of the real object 300 acquired in the past, including at least the most recently acquired observed position. There is no particular restriction on the method by which the real object position specification module 504 calculates the predicted position; any technique may be used as long as the prediction is based on observed positions acquired before the display update timing being processed by the real object position specification module 504. The real object position specification module 504 may, for example, predict the next value from one or more past observed positions using the least squares method or a prediction algorithm such as a Kalman filter, an α-β filter, or a particle filter. Note that the vehicle display system 10 only needs to be able to acquire the observed position and/or predicted position of a real object; it need not itself have the function of setting (calculating) the predicted position of a real object, and part or all of the function of setting (calculating) the predicted position of a real object may be provided separately from the display control device 30 of the vehicle display system 10 (for example, in the vehicle ECU 401).
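As an illustrative sketch of one of the prediction algorithms named above, the α-β filter (the gains, time step, and single-axis simplification are assumptions for illustration, not part of the disclosure), in Python:

    def alpha_beta_predict(observations, dt, alpha=0.85, beta=0.005):
        """Predict the next position of real object 300 with an alpha-beta filter.

        observations: past observed positions along one axis, oldest first.
        dt: time step between observations and to the display update timing (s).
        alpha, beta: assumed filter gains.
        """
        x, v = observations[0], 0.0
        for z in observations[1:]:
            # Predict one step ahead, then correct with the new observation.
            x_pred = x + v * dt
            residual = z - x_pred
            x = x_pred + alpha * residual
            v = v + (beta / dt) * residual
        return x + v * dt  # predicted position at the next display update timing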
The notification necessity determination module 506 determines whether the content of each virtual image V displayed by the vehicle display system 10 is content that should be announced to the viewer. The notification necessity determination module 506 may acquire information from the various other electronic devices connected to the I/O interface 31 and calculate the notification necessity. Alternatively, an electronic device connected to the I/O interface 31 in FIG. 11 may transmit information to the vehicle ECU 401, and the notification necessity determination module 506 may detect (acquire) the notification necessity determined by the vehicle ECU 401 based on the received information. The "notification necessity" may be determined by, for example, the degree of danger derived from the seriousness of what could occur, the degree of urgency derived from how short the reaction time required before taking responsive action is, the degree of usefulness derived from the circumstances of the vehicle 1 or the viewer (or other occupants of the vehicle 1), or a combination of these (the indices of notification necessity are not limited to these). The notification necessity determination module 506 may detect the necessity-related information from which the notification necessity is estimated, and estimate the notification necessity from it. The necessity-related information from which the notification necessity of an image is estimated may be, for example, the position and type of a real object or a traffic regulation (an example of road information), and the estimate may be based on, or take into account, other information input from the various other electronic devices connected to the I/O interface 31. That is, the notification necessity determination module 506 determines whether the viewer should be notified, and may also choose not to display the image described later. Note that the vehicle display system 10 only needs to be able to acquire the notification necessity; it need not have the function of estimating (calculating) the notification necessity, and part or all of the function of estimating the notification necessity may be provided separately from the display control device 30 of the vehicle display system 10 (for example, in the vehicle ECU 401).
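Purely as an illustrative sketch of how the indices named above (danger, urgency, usefulness) might be combined into a notification necessity and a display decision (the weights, threshold, and normalization are assumptions for illustration, not part of the disclosure), in Python:

    def notification_necessity(danger, urgency, usefulness,
                               weights=(0.5, 0.3, 0.2), threshold=0.4):
        """Combine example necessity indices into a display decision.

        danger, urgency, usefulness: assumed indices normalized to 0.0-1.0.
        weights / threshold: assumed tuning values.
        Returns (necessity, should_display).
        """
        necessity = sum(w * v for w, v in zip(weights, (danger, urgency, usefulness)))
        return necessity, necessity >= threshold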
The eye position detection module 508 detects the position of the eyes of the viewer of the vehicle 1. The eye position detection module 508 includes various software components for performing various operations related to determining in which of a plurality of predefined height ranges the viewer's eye height lies, detecting the viewer's eye height (position in the Y-axis direction), detecting the viewer's eye height (position in the Y-axis direction) together with the depth position (position in the Z-axis direction), and/or detecting the viewer's eye position (positions in the X-, Y-, and Z-axis directions). The eye position detection module 508, for example, acquires the viewer's eye position from the eye position detection unit 411, or receives from the eye position detection unit 411 information from which the eye position, including the viewer's eye height, can be estimated, and estimates the eye position including the viewer's eye height. The information from which the eye position can be estimated may be, for example, the position of the driver's seat of the vehicle 1, the position of the viewer's face, the sitting height, an input value entered by the viewer on an operation unit (not shown), or the like.
The vehicle posture detection module 510 is mounted on the vehicle 1 and detects the posture of the vehicle 1. The vehicle posture detection module 510 includes various software components for performing various operations related to determining in which of a plurality of predefined posture ranges the posture of the vehicle 1 lies, detecting the angles of the vehicle 1 in the Earth coordinate system (pitch angle, roll angle), detecting the angles of the vehicle 1 relative to the road surface (pitch angle, roll angle), and/or detecting the height of the vehicle 1 relative to the road surface (position in the Y-axis direction). The vehicle posture detection module 510, for example, analyzes the three-axis acceleration detected by a three-axis acceleration sensor (not shown) provided in the vehicle 1, thereby estimating the pitch angle of the vehicle 1 with respect to the horizontal plane (vehicle posture), and outputs vehicle posture information including information on the pitch angle of the vehicle 1 to the processor 33. Instead of the three-axis acceleration sensor described above, the vehicle posture detection module 510 may be configured with a height sensor (not shown) arranged near the suspension of the vehicle 1. In that case, the vehicle posture detection module 510 estimates the pitch angle of the vehicle 1 as described above by analyzing the height of the vehicle 1 above the ground detected by the height sensor, and outputs vehicle posture information including information on the pitch angle of the vehicle 1 to the processor 33. The method by which the vehicle posture detection module 510 obtains the pitch angle of the vehicle 1 is not limited to those described above; the pitch angle of the vehicle 1 may be obtained using known sensors and analysis methods.
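Purely as an illustrative sketch of estimating the pitch angle from three-axis acceleration (the axis convention follows the document's X: left-right, Y: up-down, Z: front-back; the quasi-static assumption and sign convention are assumptions for illustration, not part of the disclosure), in Python:

    import math

    def pitch_from_acceleration(ax, ay, az):
        """Estimate the pitch angle of vehicle 1 from three-axis acceleration.

        Assumes the vehicle is quasi-static, so the sensor mainly measures
        gravity; ax, ay, az are accelerations along the X (left-right),
        Y (up-down), and Z (front-back) axes. Returns degrees; the sign
        convention (positive = forward lean) is an assumption.
        """
        return math.degrees(math.atan2(az, math.sqrt(ax * ax + ay * ay)))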
The display area setting module 512 sets the rotation amount (angle) of the first actuator 28 and the rotation amount (angle) of the second actuator 29 based on input information on the viewer's eye position 4 and setting information. The position of the display area 100 can be determined by the rotation amounts (angles) of these actuators; the rotation amount (angle) of an actuator is therefore an example of information from which the position of the display area 100 can be estimated. For example, the display area setting module 512 includes various software components for performing various operations related to setting the rotation amount (angle) of the first actuator 28 and the rotation amount (angle) of the second actuator 29 based on the eye position information detected by the eye position detection module 508 or the eye position estimation information estimated by the eye position detection module 508. That is, the display area setting module 512 may include table data, arithmetic expressions, and the like for setting the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 from the eye position, or from information from which the eye position can be estimated.
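Purely as an illustrative sketch of such table data with interpolation (the table layout, a one-dimensional eye-height key, and linear interpolation are assumptions for illustration, not part of the disclosure), in Python:

    def actuator_angles(eye_height, table):
        """Look up actuator rotation amounts from the viewer's eye height.

        table: assumed calibration data, a list of tuples
        (eye_height_mm, angle_ax1_deg, angle_ax2_deg) sorted by eye height,
        for the rotations about axes AX1 and AX2.
        Linearly interpolates between the two nearest entries.
        """
        if eye_height <= table[0][0]:
            return table[0][1:]
        for (h0, a0, b0), (h1, a1, b1) in zip(table, table[1:]):
            if h0 <= eye_height <= h1:
                t = (eye_height - h0) / (h1 - h0)
                return (a0 + t * (a1 - a0), b0 + t * (b1 - b0))
        return table[-1][1:]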
The display area setting module 512 may also change the portion of the display surface 21a of the display 21 that is used, based on input information on the viewer's eye position 4 and setting information. That is, by changing the region of the display surface 21a of the display 21 used for displaying images, the display area setting module 512 can also change the position of the display area 100 used for displaying the virtual image V. Accordingly, information indicating the region of the display surface 21a of the display 21 used for displaying images can be said to be another example of information from which the position of the display area 100 can be estimated.
The display area setting module 512 may also set the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 based on an operation via the operation detection unit 409 or an instruction from the vehicle ECU 401. For example, the display area setting module 512 includes various software components for performing various operations related to setting the rotation amount (angle) of the first actuator 28 about the first rotation axis AX1 and the rotation amount (angle) of the second actuator 29 about the second rotation axis AX2 from: (1) position information of the viewer's preferred eyebox (an example of eyebox position setting information) and position information of the viewer's preferred display area (an example of display area setting information), acquired from a viewer identification unit (not shown); (2) display area setting information based on a user operation to move the display area 100 and eyebox setting information based on a user operation to move the eyebox 200, acquired from the operation detection unit 409 provided in the vehicle 1; and (3) display area setting information indicating the position of the display area 100 determined by the vehicle ECU 401 and eyebox setting information indicating the position of the eyebox 200, acquired from the vehicle ECU 401.
When the display area setting module 512 acquires display area setting information for moving the display area 100 to a given position, it can set (correct) the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2, in addition to the drive amount for moving the display area 100 to the given position, so as to maintain the position of the eyebox 200 or keep the amount of movement of the eyebox 200 small. Conversely, when the display area setting module 512 acquires only eyebox setting information for moving the eyebox 200 to a given position, it can set (correct) the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2, in addition to the drive amount for moving the eyebox 200 to the given position, so as to maintain the position of the display area 100 or keep the amount of movement of the display area 100 small.
 As a method (device) for moving the display area 100 and the eyebox 200 using a plurality of actuators in this way, for example, the method (device) described in Japanese Patent Application No. 2019-178812 filed by the present applicant may be adopted. Further, when the HUD device 20 of another embodiment has an actuator that moves the relay optical system 25, the display area setting module 512 may set the amount of movement of the relay optical system 25 by the one or more actuators.
 The display area setting module 512 may also estimate the current position of the display area 100 and store it in the memory 37 by taking as a reference the position of the display area 100 (and/or the first display area 150 described later) that is set according to the type of the vehicle 1 in which the vehicle display system is mounted and stored in advance in the memory 37, and correcting it based on the above-described information from which the position of the display area 100 can be estimated.
 The real object position determination module 514 determines whether the position of the real object 300 falls within the first determination real scene area R10 and whether it falls within the second determination real scene area R20. That is, the real object position determination module 514 may include determination values, table data, arithmetic expressions, and the like for determining, from the observed position and/or predicted position of a real object, whether that real object falls within the first determination real scene area R10 and whether it falls within the second determination real scene area R20. For example, the real object position determination module 514 may include determination values for whether an object falls within the first determination real scene area R10 (positions in the left-right direction (X-axis direction) and the up-down direction (Y-axis direction)) and determination values for whether an object falls within the second determination real scene area R20 (positions in the left-right direction (X-axis direction) and the up-down direction (Y-axis direction)), to be compared with the observed position and/or predicted position of the real object. These determination values for the first determination real scene area R10 and for the second determination real scene area R20 are set (changed) by the real scene area division module 516 described later.
 The real scene area division module 516 sets the range of determination values for whether a real object falls within the first determination real scene area R10 and the range of determination values for whether it falls within the second determination real scene area R20.
 The first to fifth determination methods performed by the real scene area division module 516 and the real object position determination module 514 are described below; however, as will be explained later, the methods are not limited to these, provided that the range determined to fall within the second determination real scene area R20 is changed according to the viewer's eye position 4, the position of the display area 100 (first display area 150), the attitude of the vehicle 1, and the like.
 (First setting method)
 In the first setting method, the real object position determination module 514 determines, based on the observed position and/or predicted position of the real object acquired from the real object position identification module 504 and on determination values stored in advance in the memory 37, whether the position of the real object 300 falls within the first determination real scene area R10 and whether it falls within the second determination real scene area R20.
 FIGS. 12A and 12B are diagrams showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle 1, among the eyebox 200, the first display area 150 displaying the virtual image V10 of the image of the first aspect, the first determination real scene area R10, and the second determination real scene area R20. FIG. 12A shows a case where the real object 300 falls within the first determination real scene area R10, and FIG. 12B shows a case where the real object 300 falls within the second determination real scene area R20.
 In one embodiment, the first determination real scene area R10 is the area between a line connecting the upper end 150a of the first display area 150, in which the virtual image V10 of the image of the first aspect is displayed within the display area 100, with the center 205 of the eyebox 200 (one example of a predetermined position within the eyebox 200, and not a limitation) and a line connecting the lower end 150b of the first display area 150 with the center 205 of the eyebox 200 (likewise one example of a predetermined position within the eyebox 200, and not a limitation). The second determination real scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R10. The first display area 150 in which the virtual image V10 of the image of the first aspect is displayed may be a predetermined area within the display area 100 smaller than the display area 100, or may coincide with the display area 100 (in the examples of FIGS. 3 to 10, the first display area 150 and the display area 100 coincide). Typically, since the eyebox 200 and the first display area 150 are set according to the type of the vehicle 1 in which the vehicle display system 10 is mounted, the first determination real scene area R10 and the second determination real scene area R20 are set in advance to fixed values for each vehicle type and stored in the memory 37. However, the first determination real scene area R10 and the second determination real scene area R20 may instead be set in advance for each individual vehicle display system 10 and stored in the memory 37 through calibration that takes into account individual differences of the vehicle 1, individual differences of the HUD device 20 (including assembly errors with respect to the vehicle 1), individual differences of the vehicle exterior sensor 407 provided in the vehicle 1 (including assembly errors with respect to the vehicle 1), and the like.
 Specifically, as shown in FIG. 12A, when the straight line connecting the center 205 of the eyebox 200 with the real object 300 passes within the range of the first determination real scene area R10 of the real scene, the real object position determination module 514 determines that the real object 300 falls within the first determination real scene area R10. On the other hand, as shown in FIG. 12B, when the straight line connecting the center 205 of the eyebox 200 with the real object 300 passes within the range of the second determination real scene area R20, the real object position determination module 514 determines that the real object 300 falls within the second determination real scene area R20.
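 This determination reduces to comparing the elevation angle of the ray from a reference point (here the eyebox center 205; the third setting method below substitutes the detected eye position 4) toward the real object against the elevation angles of the rays through the upper end 150a and lower end 150b of the first display area. The following Python sketch illustrates this with hypothetical coordinates and names; it is an illustration of the geometry, not the specification's implementation.

```python
import math

def elevation_deg(origin, point):
    """Elevation angle (deg) of the ray from origin to point,
    in the vehicle's Z (forward) / Y (up) plane."""
    dz = point[0] - origin[0]   # forward distance
    dy = point[1] - origin[1]   # height difference
    return math.degrees(math.atan2(dy, dz))

def classify(origin, obj, r10_upper, r10_lower, r20_extra_deg):
    """Return which determination area the object falls in.

    r10_upper / r10_lower: elevation angles of the rays through the
    upper end 150a and lower end 150b of the first display area.
    r20_extra_deg: angular height of R20 above R10 (hypothetical).
    """
    angle = elevation_deg(origin, obj)
    if r10_lower <= angle <= r10_upper:
        return "R10"                      # first-aspect image
    if r10_upper < angle <= r10_upper + r20_extra_deg:
        return "R20"                      # second-aspect image
    return "outside"

eyebox_center = (0.0, 1.2)                # (Z forward, Y up), meters
upper = elevation_deg(eyebox_center, (2.0, 1.5))   # ray through 150a
lower = elevation_deg(eyebox_center, (2.0, 0.9))   # ray through 150b
print(classify(eyebox_center, (30.0, 2.5), upper, lower, 5.0))  # "R10"
```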
 (Second setting method)
 The real object position determination module 514 may execute the following second setting method in addition to, or instead of, the first setting method described above. In the second setting method, the real object position determination module 514 determines, based on the observed position and/or predicted position of the real object acquired from the real object position identification module 504 and on the position of the display area 100 (or information from which the position of the display area 100 can be estimated) acquired from the display area setting module 512, whether the real object 300 falls within the first determination real scene area R10 and whether it falls within the second determination real scene area R20 in which the virtual image V of the image of the second aspect is displayed. In the second setting method, the real scene area division module 516 changes the range of the first determination real scene area R10 and the range of the second determination real scene area R20 according to the position of the display area 100. That is, the real scene area division module 516 may include table data, arithmetic programs, and the like for setting the first determination real scene area R10 and the second determination real scene area R20 from the position of the display area 100 (or information from which the position of the display area 100 can be estimated) acquired from the display area setting module 512. The table data is, for example, data associating the position of the display area 100 with determination values for the first determination real scene area R10 (positions in the left-right direction (X-axis direction) and the up-down direction (Y-axis direction)) and data associating the position of the display area 100 with determination values for the second determination real scene area R20 (positions in the left-right direction (X-axis direction) and the up-down direction (Y-axis direction)).
 FIGS. 13A, 13B, and 13C are diagrams showing how the ranges of the first determination real scene area R10 and the second determination real scene area R20 change in response to changes in the position of the display area 100, as viewed from the left-right direction (X-axis direction) of the vehicle 1. The display area 100 is moved gradually downward (Y-axis negative direction) in the order of FIGS. 13A, 13B, and 13C by rotating the first mirror 26 of the HUD device 20.
 The real scene area division module 516 changes the ranges of the first determination real scene area R10 and the second determination real scene area R20 according to the position of the display area 100, and the real object position determination module 514 determines whether the real object 300 falls within the first determination real scene area R10 as changed by the real scene area division module 516 and whether it falls within the second determination real scene area R20 as appropriately changed. In one embodiment, the first determination real scene area R10 is the area between a line connecting the upper end 150a of the first display area 150, in which the virtual image V10 of the image of the first aspect is displayed within the display area 100, with the center 205 of the eyebox 200 (one example of a predetermined position within the eyebox 200, and not a limitation) and a line connecting the lower end 150b of the first display area 150 with the center 205 of the eyebox 200 (likewise not a limitation). The second determination real scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R10.
 Specifically, as shown in FIG. 13B, when the first display area 151 is located below the first display area 150 shown in FIG. 13A, the real scene area division module 516 also places the first determination real scene area R12 below the first determination real scene area R10. At this time, the real scene area division module 516 enlarges the range of the second determination real scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R12 (R22 > R21). In other words, when the position of the display area 100 (first display area 150) deviates from the reference position, the real scene area division module 516 enlarges the second determination real scene area R20. As shown in FIG. 13B, when the straight line connecting the center 205 of the eyebox 200 with the real object 300 passes within the range of the second determination real scene area R22, the real object position determination module 514 determines that the real object 300 falls within the second determination real scene area R22.
 Further, as shown in FIG. 13C, when the first display area 152 is located below the first display area 151 shown in FIG. 13B, the first determination real scene area R13 is also placed below the first determination real scene area R12. At this time, the real object position determination module 514 further enlarges the range of the second determination real scene area R23 adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R13 (R23 > R22). As shown in FIG. 13C, when the straight line connecting the center 205 of the eyebox 200 with the real object 300 passes within the range of the second determination real scene area R23, the real object position determination module 514 determines that the real object 300 falls within the second determination real scene area R23.
 That is, in the second setting method, when the first determination real scene area R11 overlapped by the first display area 150 (display area 100) shown in FIG. 13A is taken as the reference, the second determination real scene area R20 is enlarged as the first determination real scene area R10 overlapped by the first display area 150 (display area 100) moves away from the first standard real scene area R10s with changes in the position of the display area 100. Because the area in which an image for the real object 300 is displayed in the second aspect is thereby enlarged, a real object 300 that has fallen outside the area in which images are displayed in the first aspect can more easily be brought to the viewer's attention by an image of the second aspect. Moreover, even when the position of the display area 100 differs, a real object 300 existing in or near the specific first standard real scene area R10s can more easily be brought to the viewer's attention by the virtual images V20, V30 of the image of the second aspect.
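 One way to read this is that the angular extent of R20 grows with the offset between the current R10 and the standard region R10s. The sketch below extends the classifier above accordingly; the linear growth rule, the constants, and the names are hypothetical (the specification only requires that R20 widen as R10 moves away from R10s).

```python
def r20_extent_deg(r10_upper_deg, r10s_upper_deg,
                   base_extent_deg=3.0, gain=1.5):
    """Angular height of R20 above R10: a base extent, enlarged in
    proportion to how far R10 has moved from the standard region R10s.
    The linear rule and constants are hypothetical."""
    offset = abs(r10s_upper_deg - r10_upper_deg)
    return base_extent_deg + gain * offset

# Display area lowered by 2 deg relative to the standard position:
print(r20_extent_deg(r10_upper_deg=6.5, r10s_upper_deg=8.5))  # 6.0 deg
```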
 (Third setting method)
 The real object position determination module 514 may execute the following third setting method in addition to, or instead of, the first setting method and/or the second setting method described above. In the third setting method, the real object position determination module 514 determines, based on the observed position and/or predicted position of the real object acquired from the real object position identification module 504 and on the viewer's eye position 4 (or information from which the eye position can be estimated) acquired from the eye position detection module 508, whether the real object 300 falls within the first determination real scene area R10 and whether it falls within the second determination real scene area R20 in which the virtual image V of the image of the second aspect is displayed. In the third setting method, the range of the first determination real scene area R10 and the range of the second determination real scene area R20 change according to the viewer's eye position 4, and it is determined whether the real object 300 falls within the first determination real scene area R10 and the second determination real scene area R20 as appropriately changed. That is, the real object position determination module 514 may include table data, arithmetic programs, and the like for setting the first determination real scene area R10 and the second determination real scene area R20 from the viewer's eye position 4 (or information from which the eye position can be estimated) acquired from the eye position detection module 508. The table data is, for example, data associating the viewer's eye position 4 with determination values for the first determination real scene area R10 (positions in the left-right direction (X-axis direction) and the up-down direction (Y-axis direction)).
 FIGS. 14A, 14B, and 14C are diagrams showing how the ranges of the first determination real scene area R10 and the second determination real scene area R20 change in response to changes in the viewer's eye position (eye height) 4, as viewed from the left-right direction (X-axis direction) of the vehicle 1. The viewer's eye position 4 becomes gradually higher in the order of position 4a shown in FIG. 14A, position 4b shown in FIG. 14B, and position 4c shown in FIG. 14C.
 The real object position determination module 514 changes the first determination real scene area R10 and the second determination real scene area R20 according to the viewer's eye position 4, and determines whether the real object 300 falls within the first determination real scene area R10 as appropriately changed and whether it falls within the second determination real scene area R20 as appropriately changed. In one embodiment, as shown in FIG. 14A, the first determination real scene area R10 is the area between a line connecting the upper end 150a of the first display area 150, in which the virtual image V10 of the image of the first aspect is displayed within the display area 100, with the observed eye position 4a (one example of a predetermined position within the eyebox 200, and not a limitation) and a line connecting the lower end 150b of the first display area 150 with the observed eye position 4a (likewise not a limitation). The second determination real scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R10.
 Specifically, as shown in FIG. 14B, when the eye position 4b is located above the eye position 4a shown in FIG. 14A, the first determination real scene area R12 is placed below the first determination real scene area R11 shown in FIG. 14A. At this time, the real object position determination module 514 enlarges the range of the second determination real scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R12 (R22 > R21). In other words, when the eye position 4 moves, the real object position determination module 514 enlarges the second determination real scene area R20. As shown in FIG. 14B, when the straight line connecting the eye position 4b with the real object 300 passes within the range of the second determination real scene area R22, the real object position determination module 514 determines that the real object 300 falls within the second determination real scene area R22.
 Further, as shown in FIG. 14C, when the eye position 4c is located above the eye position 4b shown in FIG. 14B, the first determination real scene area R13 is placed below the first determination real scene area R12 shown in FIG. 14B. At this time, the real object position determination module 514 further enlarges the range of the second determination real scene area R23 adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R13 (R23 > R22). As shown in FIG. 14C, when the straight line connecting the eye position 4c with the real object 300 passes within the range of the second determination real scene area R23, the real object position determination module 514 determines that the real object 300 falls within the second determination real scene area R23.
 That is, in the third setting method, when the first determination real scene area R11 overlapped by the first display area 150 (display area 100) shown in FIG. 14A is taken as the reference, the second determination real scene area R20 is enlarged as the first determination real scene area R10 overlapped by the first display area 150 (display area 100) moves away from the first standard real scene area R10s with changes in the eye position 4. Because the area in which an image for the real object 300 is displayed in the second aspect is thereby enlarged, a real object 300 that has fallen outside the area in which images are displayed in the first aspect can more easily be brought to the viewer's attention by an image of the second aspect. Moreover, even when the position of the display area 100 differs, a real object 300 existing in or near the specific first standard real scene area R10s can more easily be brought to the viewer's attention by an image of the second aspect.
 (Fourth setting method)
 The real object position determination module 514 may execute the following fourth setting method in addition to, or instead of, the first through third setting methods described above. In the fourth setting method, the real object position determination module 514 determines, based on the observed position and/or predicted position of the real object acquired from the real object position identification module 504 and on the attitude of the vehicle 1 (for example, the tilt angle) acquired from the vehicle ECU 401, whether the real object 300 falls within the first determination real scene area R10 and whether it falls within the second determination real scene area R20 in which the virtual image V of the image of the second aspect is displayed. In the fourth setting method, the range of the first determination real scene area R10 and the range of the second determination real scene area R20 change according to the attitude of the vehicle 1, and it is determined whether the real object 300 falls within the first determination real scene area R10 and the second determination real scene area R20 as appropriately changed. That is, the real object position determination module 514 may include table data, arithmetic programs, and the like for setting the first determination real scene area R10 and the second determination real scene area R20 from the attitude of the vehicle 1 (or information from which the attitude of the vehicle 1 can be estimated) acquired from the vehicle ECU 401. The table data is, for example, data associating the attitude of the vehicle 1 with determination values for the first determination real scene area R10 (positions in the left-right direction (X-axis direction) and the up-down direction (Y-axis direction)).
 FIGS. 15A and 15B are diagrams showing how the ranges of the first determination real scene area R10 and the second determination real scene area R20 change in response to changes in the tilt angle θt of the vehicle 1, as viewed from the left-right direction (X-axis direction) of the vehicle 1. Regarding the attitude of the vehicle 1, the tilt angle θt2 shown in FIG. 15B is pitched further forward than the tilt angle θt1 shown in FIG. 15A.
 The real object position determination module 514 changes the first determination real scene area R10 and the second determination real scene area R20 according to the attitude of the vehicle 1, and determines whether the real object 300 falls within the first determination real scene area R10 as appropriately changed and whether it falls within the second determination real scene area R20 as appropriately changed. In one embodiment, the first determination real scene area R10 is the area between a line connecting the upper end 150a of the first display area 150, in which the virtual image V10 of the image of the first aspect is displayed within the display area 100, with the center 205 of the eyebox 200 (one example of a predetermined position within the eyebox 200, and not a limitation) and a line connecting the lower end 150b of the first display area 150 with the center 205 of the eyebox 200 (likewise not a limitation). The second determination real scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R10.
 Specifically, as shown in FIG. 15B, when the first display area 151 is located below the first display area 150 shown in FIG. 15A, the first determination real scene area R12 is also placed below the first determination real scene area R11. At this time, the real object position determination module 514 enlarges the range of the second determination real scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination real scene area R12 (R22 > R21). In other words, when the position of the display area 100 deviates from the predetermined position, the real object position determination module 514 enlarges the second determination real scene area R20. As shown in FIG. 15B, when the straight line connecting the center 205 of the eyebox 200 with the real object 300 passes within the range of the second determination real scene area R22, the real object position determination module 514 determines that the real object 300 falls within the second determination real scene area R22.
 That is, in the fourth setting method, when the first determination real scene area R11 overlapped by the first display area 150 (display area 100) shown in FIG. 15A is taken as the reference, as shown in FIG. 15B the second determination real scene area R20 is enlarged as the first determination real scene area R10 overlapped by the first display area 150 (display area 100) moves away from the first standard real scene area R10s with the shift in the position of the display area 100. Because the area in which an image for the real object 300 is displayed in the second aspect is thereby enlarged, a real object 300 that has fallen outside the area in which images are displayed in the first aspect can more easily be brought to the viewer's attention by an image of the second aspect. Moreover, even when the position of the display area 100 differs, a real object 300 existing in or near the specific first standard real scene area R10s can more easily be brought to the viewer's attention by an image of the second aspect.
 Examples of how the real scene area division module 516 enlarges the second determination real scene area R20 will now be described with reference to FIGS. 16A, 16B, 16C, and 16D. FIG. 16A is the same as FIG. 13B and shows the second determination real scene area R22 enlarged when the first display area 151 is located below the reference display area, with the position of the first display area 150 in FIG. 13A taken as the reference display area.
 FIG. 16B shows a mode in which, in the same situation as FIG. 16A, the second determination real scene area R20 is enlarged further. Specifically, part of the enlarged second determination real scene area R22 overlaps part of the second standard real scene area R20s. That is, in one embodiment, the real scene area division module 516 enlarges the second determination real scene area R20 so that it overlaps part of the reference second determination real scene area R21.
 FIG. 16C shows a mode in which, in the same situation as FIG. 16B, the second determination real scene area R20 is enlarged further still. Specifically, the enlarged second determination real scene area R23 overlaps the whole of the second standard real scene area R20s. That is, in one embodiment, the real scene area division module 516 enlarges the second determination real scene area R20 so that it includes the whole of the reference second determination real scene area R21.
 FIG. 16D shows a mode in which, in the same situation as FIG. 16C, the second determination real scene area R20 is enlarged even further. Specifically, the enlarged second determination real scene area R23 covers the whole of the second standard real scene area R20s and a still wider range. That is, in one embodiment, the real scene area division module 516 enlarges the second determination real scene area R20 so that it includes the whole of the reference second determination real scene area R21 and a still wider range.
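 Treating each region as a vertical angular interval, the four modes of FIGS. 16A to 16D differ only in how far the enlarged R20 must reach relative to the standard region R20s. The following minimal sketch models the regions as (low, high) elevation intervals; the interval model, constants, and names are assumptions for illustration.

```python
def expand_r20(r10_upper, r20s, mode, extent=3.0):
    """Return the enlarged R20 as a (low, high) elevation interval.

    r10_upper: upper boundary of the (shifted) R10.
    r20s:      (low, high) interval of the standard region R20s.
    mode:      'A' adjacent only, 'B' overlap part of R20s,
               'C' include all of R20s, 'D' include R20s and beyond.
    """
    low = r10_upper
    if mode == "A":
        high = low + extent
    elif mode == "B":
        high = (r20s[0] + r20s[1]) / 2   # reach into part of R20s
    elif mode == "C":
        high = r20s[1]                   # cover R20s entirely
    else:                                # mode 'D'
        high = r20s[1] + extent          # cover R20s and more
    return (low, max(high, low + extent))

print(expand_r20(6.5, (8.5, 12.0), "C"))   # (6.5, 12.0)
```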
 FIGS. 17A to 17F are diagrams schematically showing the positional relationship between the first determination real scene area R10 and the second determination real scene area R20 when looking forward from the eyebox 200. In these figures, the first display area 150 is drawn with a particular shape, but the shape is not limited to the one illustrated.
 In FIG. 17A, the second determination real scene area R20 is an area with a recessed lower side that includes an area adjacent to the left of the left end of the first determination real scene area R10, an area adjacent to the right of the right end of the first determination real scene area R10, and an area adjacent above the upper end of the first determination real scene area R10. Although the second determination real scene area R20 is illustrated as narrow, a wider range is preferable (the same applies to FIGS. 17B to 17F).
 In some embodiments, as shown in FIG. 17B, the second determination real scene area R20 may be a hollow area that further includes, in addition to the areas in FIG. 17A, an area adjacent below the lower end of the first determination real scene area R10.
 In some embodiments, as shown in FIG. 17C, the second determination real scene area R20 need not include an area adjacent to the left of the left end of the first determination real scene area R10 or an area adjacent to the right of the right end of the first determination real scene area R10.
 In some embodiments, as shown in FIG. 17D, the second determination real scene area R20 may be composed of a plurality of separated areas.
 In FIGS. 17A to 17D above, the display area 100 and the first display area 150 displaying the virtual image V10 of the image of the first aspect are illustrated as coinciding, but this is not a limitation. The first display area 150 can be an area smaller than the display area 100. In FIG. 17E, the second determination real scene area R20 can be set as an area with a recessed lower side that includes an area adjacent to the left of the left end of the first determination real scene area R10, an area adjacent to the right of the right end of the first determination real scene area R10, and an area adjacent above the upper end of the first determination real scene area R10. In this case, the portion of the second determination real scene area R20 adjacent to the first determination real scene area R10 can be located within the display area 100.
 In FIGS. 17A to 17E above, the first determination real scene area R10 and the second determination real scene area R20 are adjacent to each other, but this is not a limitation. In this case, the first display area 150 can be an area smaller than the display area 100. In FIG. 17F, the second determination real scene area R20 can be set as an area with a recessed lower side that includes an area not adjacent to the left of the left end of the first determination real scene area R10, an area not adjacent to the right of the right end of the first determination real scene area R10, and an area not adjacent above the upper end of the first determination real scene area R10. In FIG. 17G, the second determination real scene area R20 can be set as an area with a recessed lower side that includes an area adjacent to the left of the left end of the first determination real scene area R10, an area adjacent to the right of the right end of the first determination real scene area R10, and an area not adjacent above the upper end of the first determination real scene area R10. That is, the first determination real scene area R10 and the second determination real scene area R20 may be adjacent only in part and need not be adjacent elsewhere (an area belonging to neither the first determination real scene area R10 nor the second determination real scene area R20 may lie between them). The examples of FIGS. 17F and 17G may also be modified so that the display area 100 coincides with the first display area 150 displaying the virtual image V10 of the image of the first aspect.
 The image type setting module 518 sets an image of the first aspect for a real object that the real object position determination module 514 determines to fall within the first determination real scene area R10, and sets an image of the second aspect for a real object determined to fall within the second determination real scene area R20.
 The image type setting module 518 may also determine (change) the type of image displayed for a real object based on, for example, the type and position of the real object detected by the real object information detection module 502, the type and number of pieces of real-object-related information detected by the real object information detection module 502, and/or the magnitude of the notification necessity detected (estimated) by the notification necessity determination module 506. The image type setting module 518 may also increase or decrease the number of image types displayed according to the determination result of the line-of-sight direction determination module 524 described later. Specifically, when the real object 300 is in a state where it is difficult for the viewer to see, the number of image types visible to the viewer in the vicinity of the real object may be increased.
 The image arrangement setting module 520 determines the coordinates of the virtual image V (including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1) based on the position of the real object 300 (observed position or predicted position) identified by the real object position identification module 504, so that the virtual image V is perceived in a specific positional relationship with the real object 300. In addition, the image arrangement setting module 520 may determine the front-rear direction (Z-axis direction) position when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1, based on the determined position of the real object 300 set by the real object position identification module 504. The image arrangement setting module 520 adjusts the position of the virtual image V based on the viewer's eye position detected by the eye position detection unit 411. For example, the image arrangement setting module 520 determines the left-right and up-down positions of the virtual image V so that the content of the virtual image V is perceived in the area between the lane markings 311 and 312 (road surface 310).
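 Placing a virtual image so that it appears locked to a point in the scene is, in essence, a perspective projection from the viewer's eye through the virtual-image plane. A minimal sketch under that assumption follows; the coordinate frames, names, and numbers are hypothetical, and the specification does not prescribe this particular projection.

```python
def place_virtual_image(eye, target, display_z):
    """Project the ray from the eye (x, y, z) to the road target
    (x, y, z) onto the virtual-image plane at depth display_z,
    returning (x, y) coordinates on that plane. Frames hypothetical:
    x right, y up, z forward from the driver's seat."""
    ex, ey, ez = eye
    tx, ty, tz = target
    t = (display_z - ez) / (tz - ez)        # ray parameter at the plane
    return (ex + t * (tx - ex), ey + t * (ty - ey))

eye = (0.0, 1.2, 0.0)
target = (0.5, 0.0, 20.0)                   # a point on the road surface
print(place_virtual_image(eye, target, display_z=2.5))  # (0.0625, 1.05)
```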
 The image arrangement setting module 520 can also set the angles of the virtual image V (the pitch angle about the X direction, the yaw angle about the Y direction, and the roll angle about the Z direction). The angles of the virtual image V are preset angles and can be set so as to be parallel to the front-rear and left-right directions (XZ plane) of the vehicle 1.
 The image size setting module 522 may change the size of the virtual image V in accordance with the position, shape, and/or size of the associated real object 300. For example, the image size setting module 522 can reduce the size of the virtual image V if the position of the associated real object 300 is distant, and can increase the size of the virtual image V if the associated real object 300 is large.
 The image size setting module 522 can also determine the size of the virtual image V based on the magnitude of the notification necessity detected (estimated) by the notification necessity determination module 506.
 The image size setting module 522 may have a function of predicting the size at which the content of the virtual image V is to be displayed in the current display update cycle, based on the sizes of the real object 300 over a predetermined number of past cycles. As a first technique, the image size setting module 522 may predict the size of the real object 300 in the current display update cycle by tracking pixels of the real object 300 between two past images captured by a camera (one example of the vehicle exterior sensor 407), using, for example, the Lucas-Kanade method, and may determine the size of the virtual image V to match the predicted size of the real object 300. As a second technique, the rate of change in the size of the real object 300 may be obtained from the change in the size of the real object 300 between two past captured images, and the size of the virtual image V may be determined according to that rate of change. The method of estimating the change in the size of the real object 300 from a viewpoint that changes over time is not limited to the above, and known techniques may be used, including optical flow estimation algorithms such as the Horn-Schunck method, the Buxton-Buxton method, and the Black-Jepson method.
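 As a concrete illustration of the first technique, the sketch below tracks feature points inside the object's bounding box between two frames with OpenCV's pyramidal Lucas-Kanade tracker and estimates the object's scale change from how the tracked points spread apart. This is one plausible reading, not the specification's implementation; the cv2 calls are real OpenCV APIs, while the helper and its scale heuristic are assumptions.

```python
import cv2
import numpy as np

def predict_scale_change(prev_gray, next_gray, bbox):
    """Estimate the object's scale factor between two grayscale frames.

    bbox: (x, y, w, h) of the real object in the previous frame.
    Returns e.g. 1.1 if the object appears 10% larger in next_gray.
    """
    x, y, w, h = bbox
    roi = prev_gray[y:y + h, x:x + w]
    pts = cv2.goodFeaturesToTrack(roi, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None or len(pts) < 2:
        return 1.0
    pts = pts + np.array([[x, y]], dtype=np.float32)  # ROI -> image coords
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                              pts, None)
    good_prev = pts[status.ravel() == 1].reshape(-1, 2)
    good_next = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good_prev) < 2:
        return 1.0
    # Scale = ratio of the point clouds' mean spread about their centroids.
    spread_prev = np.linalg.norm(good_prev - good_prev.mean(0), axis=1).mean()
    spread_next = np.linalg.norm(good_next - good_next.mean(0), axis=1).mean()
    return spread_next / spread_prev if spread_prev > 0 else 1.0

# virtual_image_size = base_size * predict_scale_change(f0, f1, bbox)
```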
 The line-of-sight direction determination module 524 includes various software components for performing various operations related to determining that the viewer of the vehicle 1 is looking at the virtual image V or at the real object with which the virtual image V is associated, and/or that the viewer is not looking at the virtual image V or at the real object with which the virtual image V is associated.
 The line-of-sight direction determination module 524 may also detect what the viewer is looking at other than the content of the virtual image V. For example, the line-of-sight direction determination module 524 may identify the real object 300 being gazed at by comparing the position of the real object 300 existing in the foreground of the vehicle 1 detected by the real object information detection module 502 with the viewer's line-of-sight direction acquired from the line-of-sight direction detection unit 413, and may transmit information identifying the visually recognized real object 300 to the processor 33.
 The graphic module 526 includes various known software components for performing image processing such as rendering to generate image data and for driving the display 21. The graphic module 526 may also include various known software components for changing the type, arrangement (position coordinates and angles), size, display distance (in the case of 3D), and visual effects (for example, luminance, transparency, saturation, contrast, or other visual characteristics) of the displayed image. The graphic module 526 generates image data so that the image is perceived by the viewer with the type set by the image type setting module 518, the position coordinates set by the image arrangement setting module 520 (position coordinates including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1), the angles set by the image arrangement setting module 520 (the pitch angle about the X direction, the yaw angle about the Y direction, and the roll angle about the Z direction), and the size set by the image size setting module 522, and displays it on the image display unit 20.
 The drive module 528 includes various known software components for driving the display 21, driving the light source unit 24, and driving the first actuator 28 and/or the second actuator 29. The drive module 528 drives the liquid crystal display panel 22, the light source unit 24, and the first actuator 28 and second actuator 29 based on the drive data generated by the display area setting module 512 and the graphic module 526.
 FIGS. 18A and 18B are flow diagrams showing a method S100 for performing the operation of displaying a virtual image of an image of the first aspect or the second aspect for a real object existing in the real scene outside the vehicle, according to some embodiments. The method S100 is executed in the image display unit 20, which includes a display, and the display control device 30, which controls the image display unit 20. Some operations in the method S100 are optionally combined, the order of some operations is optionally changed, and some operations are optionally omitted.
 As described below, the method S100 provides a way of presenting an image (virtual image) that enhances the perceptibility of a real object.
 In block S110, the display control device 30 sets the range of the first determination real scene area R10. In some embodiments, the processor 33 of the display control device 30 executes the real scene area division module 516 and sets the range by reading out the first determination real scene area R10 stored in advance in the memory 37 (S111). In some embodiments, the processor 33 executes the display area setting module 512 and sets the range of the first determination real scene area R10 based on the acquired state of the relay optical system (S113), the used area of the display (S115), the viewer's eye position (S117), the attitude of the vehicle 1 (S119), or a combination thereof.
 In block S120, the display control device 30 detects that a predetermined condition for enlarging the range of the second determination real scene area R20 is satisfied. When the display control device 30 detects that the real scene area overlapped by the display area 100 as viewed from a predetermined position of the eyebox 200 (for example, the center 205) (or as viewed from the viewer's eye position 4) deviates (or is estimated to deviate) from the first standard real scene area R10s, it determines that the predetermined condition is satisfied. For example, the display control device 30 can detect this deviation (or estimated deviation) from the first standard real scene area R10s based on the state of the relay optical system (S122), the used area of the display (S124), the viewer's eye position (S126), the attitude of the vehicle 1 (S128), and the like. That is, in some embodiments, the processor 33 of the display control device 30 executes the real scene area division module 516 and can detect that the predetermined condition for enlarging the range of the second determination real scene area R20 is satisfied based on the acquired state of the relay optical system (S122), the used area of the display (S124), the viewer's eye position (S126), the attitude of the vehicle 1 (S128), or a combination thereof.
 In block S130, if the predetermined condition was satisfied in S120, the display control device 30 expands the range of the second determination real-scene area R20. For example, in some embodiments, the processor 33 of the display control device 30, via the real-scene area division module 516, executes one of the following: expanding the range of the second determination real-scene area beyond the standard range (S132); expanding the second determination real-scene area R20 so as to overlap part of the second standard real-scene area R20s (S134); expanding the second determination real-scene area R20 so as to include the entire second standard real-scene area R20s (S136); or expanding the second determination real-scene area R20 so as to include the entire second standard real-scene area R20s and an even wider range (S138). Note that the display control device 30 may vary the degree of expansion of the second determination real-scene area R20 for each type of real object 300 acquired by the real object information detection module 502. For example, in some embodiments, the display control device 30 may vary the degree of expansion of the second determination real-scene area R20 among traveling lanes, obstacles, and features. In some embodiments, if the predetermined condition is not satisfied in S120, the display control device 30 sets the range of the second determination real-scene area R20 to the second standard real-scene area R20s stored in advance in the memory 37, referenced to the first determination real-scene area R10 set in block S110.
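 Continuing the `SceneArea` sketch above, the following hedged sketch covers the condition check of block S120 and the expansion of block S130. The threshold value and the per-object-type expansion table are invented for illustration; the description only states that lanes, obstacles, and features may use different degrees of expansion.

```python
def deviation_condition_met(current_r10: SceneArea,
                            standard_r10s: SceneArea,
                            threshold_deg: float = 0.5) -> bool:
    """Block S120: the predetermined condition holds when the real-scene band
    overlapped by the display area deviates from the first standard area R10s."""
    return (abs(current_r10.bottom - standard_r10s.bottom) > threshold_deg or
            abs(current_r10.top - standard_r10s.top) > threshold_deg)

# Hypothetical per-object-type expansion amounts, in degrees.
EXPANSION_DEG_BY_TYPE = {"lane": 1.0, "obstacle": 2.0, "feature": 0.5}

def set_second_determination_area(r10: SceneArea,
                                  standard_r20s: SceneArea,
                                  expand: bool,
                                  object_type: str) -> SceneArea:
    """Block S130: without the condition, R20 is a standard-height band
    referenced to R10; with it, R20 grows (S132-S138), here until it covers
    all of R20s plus a type-dependent margin (S138)."""
    if not expand:
        # Standard case: a band of R20s's height sitting directly above R10.
        height = standard_r20s.top - standard_r20s.bottom
        return SceneArea(bottom=r10.top, top=r10.top + height)
    margin = EXPANSION_DEG_BY_TYPE.get(object_type, 1.0)
    return SceneArea(bottom=min(r10.top, standard_r20s.bottom),
                     top=max(r10.top, standard_r20s.top) + margin)
```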
 In block S140, the display control device 30 acquires the position of the real object by executing the real object position specifying module 504.
 In block S150, the display control device 30 determines whether the position of the real object acquired in block S140 falls within the first determination real-scene area R10 set in block S110, and whether it falls within the second determination real-scene area R20 set in block S130. In some embodiments, the processor 33 of the display control device 30 executes the real object position specifying module 504 and determines whether the position of the real object acquired from the real object position specifying module 504 falls within the first determination real-scene area R10 set in block S110 and within the second determination real-scene area R20 set in block S130; based on this determination result, the image type setting module 518 sets the image corresponding to the real object to the first aspect or the second aspect, and causes the image display unit 20 to display the image (virtual image) (blocks S152, S154).
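 A final sketch, under the same assumptions, of blocks S140-S150. The `show_first_aspect`/`show_second_aspect` callables are placeholders standing in for the rendering path of the image display unit 20.

```python
def classify_and_display(object_elevation_deg: float,
                         r10: SceneArea,
                         r20: SceneArea,
                         show_first_aspect,
                         show_second_aspect) -> None:
    """Blocks S140-S150: the object position comes from the position module
    (S140); if it lies in R10, the first-aspect image is shown (S152); if it
    lies in R20, the second-aspect image is shown (S154)."""
    if r10.bottom <= object_elevation_deg <= r10.top:
        show_first_aspect()    # e.g. an AR graphic superimposed on the object
    elif r20.bottom <= object_elevation_deg <= r20.top:
        show_second_aspect()   # e.g. an indicator near the display-area edge
    # Outside both areas: no image is displayed for this object.
```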
 The operations of the processing described above can be implemented by executing one or more functional modules of an information processing device such as a general-purpose processor or an application-specific chip. These modules, combinations of these modules, and/or combinations with known hardware capable of substituting for their functions are all included within the scope of protection of the present invention.
 The functional blocks of the vehicle display system 10 are optionally implemented by hardware, software, or a combination of hardware and software in order to carry out the principles of the various described embodiments. Those skilled in the art will understand that the functional blocks described in FIG. 11 may optionally be combined, or that one functional block may be separated into two or more sub-blocks, in order to implement the principles of the described embodiments. Accordingly, the description herein optionally supports any possible combination or division of the functional blocks described herein.
 As described above, the display control device 30 of the present embodiment is a display control device 30 that controls an image display unit 20 that displays a virtual image V of an image in a display area 100 overlapping the foreground as viewed from an eyebox 200 in a vehicle, and comprises one or more I/O interfaces 31 capable of acquiring information, one or more processors 33, a memory 37, and one or more computer programs stored in the memory 37 and configured to be executed by the one or more processors 33. The one or more I/O interfaces 31 acquire the position of a real object present around the vehicle and at least one of the position of the display area 100, the eye position 4 of the observer within the eyebox 200, the attitude of the vehicle, or information from which these can be estimated. The one or more processors 33 execute instructions that determine whether the position of the real object falls within a first determination real-scene area R10 and whether it falls within a second determination real-scene area R20; display a virtual image V10 of an image of the first aspect corresponding to the real object when the position of the real object falls within the first determination real-scene area R10; display a virtual image V20 (V30) of an image of the second aspect corresponding to the real object when the position of the real object falls within the second determination real-scene area R20; and expand the range of the second determination real-scene area R20 based on at least one of the position of the display area 100, the eye position 4, the attitude of the vehicle, or information from which these can be estimated.
 In some embodiments, the one or more processors 33 execute instructions that, based on at least one of the position of the display area 100, the eye position 4, the attitude of the vehicle, or information from which these can be estimated, set as the first determination real-scene area R10 the foreground region that overlaps at least part of the display area 100 as viewed from the eyebox 200, and set the second determination real-scene area R20 so as to include the foreground region visually recognized above the first determination real-scene area R10 as viewed from the eyebox 200.
 In some embodiments, the one or more processors 33 execute instructions that set part of the first determination real-scene area R10 and part of the second determination real-scene area R20 to be adjacent to each other.
 In some embodiments, the memory 37 stores a specific region of the foreground as a first standard real-scene area R10s, and the one or more processors 33 execute instructions that expand the range of the second determination real-scene area R20 when, based on at least one of the position of the display area 100, the eye position 4, the attitude of the vehicle, or information from which these can be estimated, the foreground region overlapping at least part of the display area 100 as viewed from the eyebox 200 is estimated to deviate from the first standard real-scene area R10s.
 In some embodiments, the memory 37 stores a specific region of the foreground as the first standard real-scene area R10s, and the one or more processors 33 execute instructions that, based on at least one of the position of the display area 100, the eye position 4, the attitude of the vehicle, or information from which these can be estimated, set as the first determination real-scene area R10 the foreground region overlapping at least part of the display area 100 as viewed from the eyebox 200; determine whether the first determination real-scene area R10 deviates from the first standard real-scene area R10s; and expand the range of the second determination real-scene area R20 when the first determination real-scene area R10 is determined to deviate from the first standard real-scene area R10s.
 In some embodiments, the one or more processors 33 execute instructions that change the amount by which the range of the second determination real-scene area R20 is expanded, based on at least one of the position of the display area 100, the eye position 4, the attitude of the vehicle, or information from which these can be estimated.
 In some embodiments, the one or more processors 33 execute instructions that display the virtual image V20 (V30) of the image of the second aspect in the outer edge region 110 of the display area 100.
 In some embodiments, the position of the real object acquired via the one or more I/O interfaces 31 includes the left-right (lateral) position as seen when facing the foreground from the eyebox 200, and the one or more processors 33 execute instructions that move the lateral position of the virtual image V20 (V30) of the image of the second aspect, as viewed from the eyebox 200, so as to follow the lateral position of the real object.
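 For this lateral-follow behavior, a hedged one-function sketch: the azimuth representation and the display-edge limits are assumptions, chosen only to show the clamping idea.

```python
def follow_lateral(object_azimuth_deg: float,
                   display_left_deg: float,
                   display_right_deg: float) -> float:
    """Keep the second-aspect virtual image at the real object's left-right
    bearing, limited to the horizontal extent of the display area."""
    return max(display_left_deg, min(object_azimuth_deg, display_right_deg))
```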
 In some embodiments, the memory 37 stores a specific region of the foreground as a second standard real-scene area R20s, and the one or more processors 33 execute instructions that expand the range of the second determination real-scene area R20 so as to include at least part of the second standard real-scene area R20s.
 In some embodiments, the memory 37 stores a specific region of the foreground as the second standard real-scene area R20s, and the one or more processors 33 execute instructions that expand the range of the second determination real-scene area R20 until it includes the entire second standard real-scene area R20s.
1: Vehicle
2: Front windshield
4: Eye position
10: Vehicle display system
20: HUD device (image display unit)
21: Display
21a: Display surface
22: Liquid crystal display panel
23: Virtual image
24: Light source unit
25: Relay optical system
26: First mirror
27: Second mirror
30: Display control device
31: I/O interface
33: Processor
35: Image processing circuit
37: Memory
40: Display light
40p: Optical axis
41: First image light
42: Second image light
43: Third image light
90: Virtual image optical system
100: Display area
101: Upper end
102: Lower end
110: Outer edge area
120: Fixed area
150: First display area
150a: Upper end
150b: Lower end
151, 152: First display area
200: Eyebox
205: Center
300: Real object
502: Real object information detection module
504: Real object position specifying module
506: Notification necessity determination module
508: Eye position detection module
510: Vehicle attitude detection module
512: Display area setting module
514: Real object position determination module
516: Real-scene area division module
518: Image type setting module
520: Image arrangement setting module
522: Image size setting module
524: Line-of-sight direction determination module
526: Graphic module
528: Drive module
M: Image
R10, R11, R12, R13: First determination real-scene area
R10s: First standard real-scene area
R20s: Second standard real-scene area
R20, R21, R22, R23: Second determination real-scene area
V: Virtual image
θt: Tilt angle
θv: Vertical arrangement angle

Claims (13)

  1.  A display control device (30) that controls an image display unit (20) that displays a virtual image (V) of an image in a display area (100) overlapping the foreground as viewed from an eyebox (200) in a vehicle, the display control device (30) comprising:
     one or more I/O interfaces (31) capable of acquiring information;
     one or more processors (33);
     a memory (37); and
     one or more computer programs stored in the memory (37) and configured to be executed by the one or more processors (33),
     wherein the one or more I/O interfaces (31) acquire:
      the position of a real object present around the vehicle; and
      at least one of the position of the display area (100), the eye position (4) of an observer within the eyebox (200), the attitude of the vehicle, or information from which these can be estimated, and
     wherein the one or more processors (33) execute instructions that:
      determine whether the position of the real object falls within a first determination real-scene area (R10) and whether it falls within a second determination real-scene area (R20);
      when the position of the real object falls within the first determination real-scene area (R10), display a virtual image (V10) of an image of a first aspect corresponding to the real object, and when the position of the real object falls within the second determination real-scene area (R20), display a virtual image (V20; V30) of an image of a second aspect corresponding to the real object; and
      expand the range of the second determination real-scene area (R20) based on at least one of the position of the display area (100), the eye position (4), the attitude of the vehicle, or information from which these can be estimated.
  2.  The display control device (30) according to claim 1, wherein the one or more processors (33) execute instructions that:
     based on at least one of the position of the display area (100), the eye position (4), the attitude of the vehicle, or information from which these can be estimated, set as the first determination real-scene area (R10) the foreground region that overlaps at least part of the display area (100) as viewed from the eyebox (200); and
     set the second determination real-scene area (R20) so as to include the foreground region visually recognized above the first determination real-scene area (R10) as viewed from the eyebox (200).
  3.  The display control device (30) according to claim 2, wherein the one or more processors (33) execute instructions that set part of the first determination real-scene area (R10) and part of the second determination real-scene area (R20) to be adjacent to each other.
  4.  The display control device (30) according to claim 1, wherein the memory (37) stores a specific region of the foreground as a first standard real-scene area (R10s), and
     the one or more processors (33) execute instructions that expand the range of the second determination real-scene area (R20) when, based on at least one of the position of the display area (100), the eye position (4), the attitude of the vehicle, or information from which these can be estimated, the foreground region overlapping at least part of the display area (100) as viewed from the eyebox (200) is estimated to deviate from the first standard real-scene area (R10s).
  5.  The display control device (30) according to claim 1, wherein the memory (37) stores a specific region of the foreground as a first standard real-scene area (R10s), and
     the one or more processors (33) execute instructions that:
      based on at least one of the position of the display area (100), the eye position (4), the attitude of the vehicle, or information from which these can be estimated, set as the first determination real-scene area (R10) the foreground region that overlaps at least part of the display area (100) as viewed from the eyebox (200);
      determine whether the first determination real-scene area (R10) deviates from the first standard real-scene area (R10s); and
      expand the range of the second determination real-scene area (R20) when the first determination real-scene area (R10) is determined to deviate from the first standard real-scene area (R10s).
  6.  The display control device (30) according to claim 1, wherein the one or more processors (33) execute instructions that change the amount by which the range of the second determination real-scene area (R20) is expanded, based on at least one of the position of the display area (100), the eye position (4), the attitude of the vehicle, or information from which these can be estimated.
  7.  The display control device (30) according to claim 1, wherein the one or more processors (33) execute instructions that display the virtual image (V20; V30) of the image of the second aspect in the outer edge region (110) of the display area (100).
  8.  The display control device (30) according to claim 1, wherein the position of the real object acquired via the one or more I/O interfaces (31) includes a left-right position as seen when facing the foreground from the eyebox (200), and
     the one or more processors (33) execute instructions that move the left-right position of the virtual image (V20; V30) of the image of the second aspect, as viewed from the eyebox (200), so as to follow the left-right position of the real object.
  9.  The display control device (30) according to claim 1, wherein the memory (37) stores a specific region of the foreground as a second standard real-scene area (R20s), and
     the one or more processors (33) execute instructions that expand the range of the second determination real-scene area (R20) so as to include at least part of the second standard real-scene area (R20s).
  10.  The display control device (30) according to claim 1, wherein the memory (37) stores a specific region of the foreground as a second standard real-scene area (R20s), and
     the one or more processors (33) execute instructions that expand the range of the second determination real-scene area (R20) until it includes the entire second standard real-scene area (R20s).
  11.  A head-up display device (20) comprising:
     a display (21) that displays an image on a display surface;
     one or more relay optical systems (25) that project the display light of the image displayed by the display (21) onto an external projection-receiving member, thereby displaying a virtual image (V) of the image in a display area (100) overlapping the foreground as viewed from an eyebox (200);
     one or more I/O interfaces (31) capable of acquiring information;
     one or more processors (33);
     a memory (37); and
     one or more computer programs stored in the memory (37) and configured to be executed by the one or more processors (33),
     wherein the one or more I/O interfaces (31) acquire:
      the position of a real object present around the vehicle; and
      at least one of the position of the display area (100), the position of the used area of the display surface in which the image is displayed, the eye position (4) of an observer within the eyebox (200), the attitude of the vehicle, or information from which these can be estimated, and
     wherein the one or more processors (33) execute instructions that:
      determine whether the position of the real object falls within a first determination real-scene area (R10) and whether it falls within a second determination real-scene area (R20);
      when the position of the real object falls within the first determination real-scene area (R10), display a virtual image (V10) of an image of a first aspect corresponding to the real object, and when the position of the real object falls within the second determination real-scene area (R20), display a virtual image (V20; V30) of an image of a second aspect corresponding to the real object; and
      expand the range of the second determination real-scene area (R20) based on at least one of the position of the display area (100), the position of the used area, the eye position (4), the attitude of the vehicle, or information from which these can be estimated.
  12.  The head-up display device (20) according to claim 11, further comprising one or more actuators (28, 29) that rotate and/or move the one or more relay optical systems (25),
     wherein the information from which the position of the display area (100) can be estimated includes the drive amount of the one or more actuators (28, 29).
  13.  A method of controlling an image display unit (20) that displays a virtual image (V) of an image in a display area (100) overlapping the foreground as viewed from an eyebox (200) in a vehicle, the method comprising:
     acquiring the position of a real object present around the vehicle;
     acquiring at least one of the position of the display area (100), the eye position (4) of an observer within the eyebox (200), the attitude of the vehicle, or information from which these can be estimated;
     determining whether the position of the real object falls within a first determination real-scene area (R10) and whether it falls within a second determination real-scene area (R20);
     when the position of the real object falls within the first determination real-scene area (R10), displaying a virtual image (V10) of an image of a first aspect corresponding to the real object, and when the position of the real object falls within the second determination real-scene area (R20), displaying a virtual image (V20; V30) of an image of a second aspect corresponding to the real object; and
     expanding the range of the second determination real-scene area (R20) based on at least one of the position of the display area (100), the eye position (4), the attitude of the vehicle, or information from which these can be estimated.
PCT/JP2020/048680 2019-12-27 2020-12-25 Display control device, head-up display device, and method WO2021132555A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021567664A JP7459883B2 (en) 2019-12-27 2020-12-25 Display control device, head-up display device, and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-238220 2019-12-27
JP2019238220 2019-12-27

Publications (1)

Publication Number Publication Date
WO2021132555A1 WO2021132555A1

Family

ID=76575998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/048680 WO2021132555A1 (en) 2019-12-27 2020-12-25 Display control device, head-up display device, and method

Country Status (2)

Country Link
JP (1) JP7459883B2 (en)
WO (1) WO2021132555A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015004784A1 (en) * 2013-07-11 2015-01-15 トヨタ自動車株式会社 Vehicular information display device, and vehicular information display method
JP2016222061A (en) * 2015-05-28 2016-12-28 日本精機株式会社 Display system for vehicle
JP2018146912A (en) * 2017-03-09 2018-09-20 クラリオン株式会社 On-vehicle display device, and on-vehicle display method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230306692A1 (en) * 2022-03-24 2023-09-28 Gm Global Technlology Operations Llc System and method for social networking using an augmented reality display
US11798240B2 (en) * 2022-03-24 2023-10-24 GM Global Technology Operations LLC System and method for social networking using an augmented reality display
CN115202476A (en) * 2022-06-30 2022-10-18 泽景(西安)汽车电子有限责任公司 Display image adjusting method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
JP7459883B2 (en) 2024-04-02
JPWO2021132555A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
JP6201690B2 (en) Vehicle information projection system
US11525694B2 (en) Superimposed-image display device and computer program
JP7006235B2 (en) Display control device, display control method and vehicle
WO2021132555A1 (en) Display control device, head-up display device, and method
JP2020032866A (en) Vehicular virtual reality providing device, method and computer program
JP7255608B2 (en) DISPLAY CONTROLLER, METHOD, AND COMPUTER PROGRAM
WO2021200914A1 (en) Display control device, head-up display device, and method
WO2022230995A1 (en) Display control device, head-up display device, and display control method
WO2020158601A1 (en) Display control device, method, and computer program
JP2022072954A (en) Display control device, head-up display device, and display control method
JP2020121607A (en) Display control device, method and computer program
WO2021200913A1 (en) Display control device, image display device, and method
JP2020121704A (en) Display control device, head-up display device, method and computer program
WO2023003045A1 (en) Display control device, head-up display device, and display control method
JP2021056358A (en) Head-up display device
JP7434894B2 (en) Vehicle display device
JP2022077138A (en) Display controller, head-up display device, and display control method
WO2023145852A1 (en) Display control device, display system, and display control method
JP2022113292A (en) Display control device, head-up display device, and display control method
JP2022190724A (en) Display control device, head-up display device and display control method
JP2022057051A (en) Display controller and virtual display device
JP2021160409A (en) Display control device, image display device, and method
WO2023210682A1 (en) Display control device, head-up display device, and display control method
JP7014206B2 (en) Display control device and display control program
JP2020199883A (en) Display control device, head-up display device, method and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20906593

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021567664

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20906593

Country of ref document: EP

Kind code of ref document: A1