WO2021132555A1 - Display control device, head-up display device, and method - Google Patents

Display control device, head-up display device, and method

Info

Publication number
WO2021132555A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
display
vehicle
real object
Prior art date
Application number
PCT/JP2020/048680
Other languages
English (en)
Japanese (ja)
Inventor
勇希 舛屋
博 平澤
中村 崇
一夫 諸橋
Original Assignee
日本精機株式会社 (Nippon Seiki Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本精機株式会社 filed Critical 日本精機株式会社
Priority to JP2021567664A (granted as patent JP7459883B2)
Publication of WO2021132555A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00: Arrangement of adaptations of instruments
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F: DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00: Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38: Control arrangements or circuits characterised by the display of a graphic pattern, with means for controlling the display position

Definitions

  • The present disclosure relates to a display control device, a head-up display device, and a method used in a vehicle to superimpose an image on the foreground of the vehicle so that the image is visually recognized there.
  • Patent Document 1 describes a head-up display device that expresses a virtual object as if it actually existed in the foreground (actual view) of the own vehicle by means of an image with a sense of perspective, thereby making the viewer perceive augmented reality (AR: Augmented Reality).
  • Such a head-up display device can typically display the virtual image of an image only in a limited area (virtual image display area) as seen by the viewer. Assuming that the position of the virtual image display area is fixed even when the eye height (eye position) of the driver of the own vehicle changes, the area of the actual view outside the own vehicle that overlaps the virtual image display area as seen by the viewer differs according to the driver's eye height (eye position).
  • The real scene area that overlaps the virtual image display area when viewed from a position higher than the reference eye height is a region below the reference real scene area as seen by the viewer (in terms of distance, a real scene area on the nearer side of the reference real scene area).
  • The actual view area that overlaps the virtual image display area when viewed from a position lower than the reference eye height is a region above the reference actual view area as seen by the viewer (in terms of distance, an actual view area on the farther side of the reference actual view area).
  • Since the actual view area overlapping the virtual image display area thus differs with eye height (eye position), the following can occur even if the positional relationship between the own vehicle and a real object is constant: for a viewer with a low eye height, the real object is included in the virtual image display area and the virtual object corresponding to the real object is displayed, whereas for a viewer with a high eye height, the real object is not included in the virtual image display area and the corresponding virtual object is not displayed.
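  • As a minimal geometric sketch of this eye-height dependence (a flat-road assumption and illustrative values, not taken from the patent): a ray from the eye through an edge of the virtual image display area can be intersected with the road surface to find the band of road that appears to overlap the display area. The Python below shows that a higher eye position maps the display area onto a nearer band of road and a lower eye position onto a farther band.

        # Sketch: which band of road overlaps the display area for a given eye height.
        # Assumptions (not from the patent): flat road at y = 0, eye at height eye_h,
        # display area approximated as a vertical span [area_bottom, area_top] in
        # height at distance d ahead of the eye. All numbers are illustrative.

        def overlapping_road_band(eye_h, d, area_bottom, area_top):
            """Return (near_z, far_z), the road span seen through the display area."""
            def ground_hit(edge_h):
                # Ray from the eye through a display-area edge, intersected with y = 0.
                if eye_h <= edge_h:
                    return float("inf")  # the ray never descends to the road
                return d * eye_h / (eye_h - edge_h)
            # The bottom edge bounds the near side, the top edge the far side.
            return ground_hit(area_bottom), ground_hit(area_top)

        for eye_h in (1.1, 1.2, 1.3):  # driver eye heights in metres
            near_z, far_z = overlapping_road_band(eye_h, d=2.5, area_bottom=0.6, area_top=0.9)
            print(f"eye {eye_h:.1f} m -> road band {near_z:.1f} m to {far_z:.1f} m")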
  • An outline of the present disclosure relates to making it easier to recognize information about a real object even if the position of the virtual image display area or the eye position changes. More specifically, it also relates to suppressing variation in the presented information even when the position of the virtual image display area or the eye position of the viewer differs.
  • The display control device described in the present specification is a display control device that controls an image display unit that displays the virtual image of an image in a display area overlapping the foreground when viewed from an eye box in the vehicle, and comprises one or more I/O interfaces, one or more processors, a memory, and one or more computer programs stored in the memory.
  • The one or more I/O interfaces acquire at least one of the position of a real object existing around the vehicle, the position of the display area, the eye position of the observer in the eye box, the attitude of the vehicle, or information from which these can be estimated.
  • The one or more processors execute instructions so as to display the virtual image of an image of the first aspect corresponding to the real object when the position of the real object enters a first determination real scene area, to display the virtual image of an image of the second aspect corresponding to the real object when the position of the real object is within a second determination real scene area, and to expand the range of the second determination real scene area based on at least one of the position of the display area, the eye position, the attitude of the vehicle, or information from which these can be estimated.
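  • The following Python sketch restates this claimed decision flow in hedged form. The real scene areas are simplified to one-dimensional distance bands ahead of the vehicle, and the expansion rule is a hypothetical linear function of the deviations of the display area position, eye position, and vehicle pitch from their references; none of the coefficients come from the patent.

        from dataclasses import dataclass

        @dataclass
        class Band:
            near: float  # metres ahead of the vehicle
            far: float
            def contains(self, z: float) -> bool:
                return self.near <= z <= self.far

        def choose_image_aspect(obj_z, first_area, second_area,
                                display_shift=0.0, eye_shift=0.0, pitch_deg=0.0):
            # Expand the second determination real scene area when the display area,
            # eye position, or vehicle attitude deviates from the reference
            # (the coefficients below are illustrative assumptions).
            grow = 2.0 * abs(display_shift) + 1.5 * abs(eye_shift) + 0.5 * abs(pitch_deg)
            expanded = Band(max(0.0, second_area.near - grow), second_area.far + grow)
            if first_area.contains(obj_z):
                return "first aspect"   # AR image for objects inside the first area
            if expanded.contains(obj_z):
                return "second aspect"  # image for objects outside the first area
            return "no image"

        # Without the pitch-based expansion, the object at 10.5 m would get no image.
        print(choose_image_aspect(obj_z=10.5, first_area=Band(20.0, 60.0),
                                  second_area=Band(12.0, 20.0), pitch_deg=3.0))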
  • FIG. 1 is a diagram showing an application example of a vehicle display system.
  • FIG. 2 is a diagram showing a configuration of an image display unit.
  • FIG. 3 is a diagram showing a foreground and a virtual image of the image of the first aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 4 is a diagram showing a foreground and a virtual image of the image of the first aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 5 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 6 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 7A is a diagram showing a real object and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 7B is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 7A, together with the real object and the virtual image of the image of the second aspect visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 7C is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 7B.
  • FIG. 8A is a diagram showing a real object and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 8B is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 8A.
  • FIG. 8C is a diagram showing a situation in which the real object is closer to the vehicle than in FIG. 8B.
  • FIG. 9 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 10 is a diagram showing a foreground and a virtual image of the image of the second aspect, which are visually recognized when facing forward from the eye box in the vehicle.
  • FIG. 11 is a block diagram of a vehicle display system.
  • FIG. 12A is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, between the eye box, the first display area in which the virtual image of the image of the first aspect is displayed, the first determination actual scene area, and the second determination actual scene area.
  • FIG. 12B is a diagram showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle, between the eye box, the first display area in which the virtual image of the image of the first aspect is displayed, the first determination actual scene area, and the second determination actual scene area.
  • FIG. 13A is a diagram showing the positional relationship between the first display area, the first determination actual scene area, and the second determination actual scene area when viewed from the left-right direction (X-axis direction) of the vehicle.
  • FIG. 13B is a diagram showing a situation in which the first display area is arranged lower than in FIG. 13A.
  • FIG. 13C is a diagram showing a situation in which the first display area is arranged lower than in FIG. 13B.
  • FIG. 14A is a diagram showing the positional relationship between the first display area, the first determination actual scene area, and the second determination actual scene area when viewed from the left-right direction (X-axis direction) of the vehicle.
  • FIG. 14B is a diagram showing a situation in which the eye position of the viewer is higher than in FIG. 14A.
  • FIG. 14C is a diagram showing a situation in which the eye position of the viewer is higher than in FIG. 14B.
  • FIG. 15A is a diagram showing the positional relationship between the first display area, the first determination actual scene area, and the second determination actual scene area when viewed from the left-right direction (X-axis direction) of the vehicle.
  • FIG. 15B is a diagram showing a situation in which the posture of the vehicle is tilted forward as compared with FIG. 15A.
  • FIG. 16A is the same as FIG. 13B and, taking the position of the first display area in FIG. 13A as the reference display area, shows an aspect in which the second determination actual scene area is expanded when the first display area is arranged below the reference display area.
  • FIG. 16B shows an enlarged aspect of the second determination actual scene area.
  • FIG. 16C shows an enlarged aspect of the second determination actual scene area.
  • FIG. 16D shows an enlarged aspect of the second determination actual scene area.
  • FIG. 17A is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17B is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17C is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17D is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17E is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17F is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 17G is a diagram schematically showing the positional relationship between the first determination actual scene area and the second determination actual scene area when facing forward from the eye box.
  • FIG. 18A is a flowchart showing a method of performing an operation of displaying a virtual image of an image of the first aspect or the second aspect with respect to a real object existing in a real view outside the vehicle according to some embodiments.
  • FIG. 18B is a flowchart following FIG. 18A.
  • FIGS. 1, 2, and 11 provide a description of the configuration of an exemplary vehicle display system.
  • FIGS. 3 to 10 provide display examples.
  • FIGS. 12A to 18B show exemplary operations.
  • the present invention is not limited to the following embodiments (including the contents of the drawings). Of course, changes (including deletion of components) can be made to the following embodiments. Further, in the following description, in order to facilitate understanding of the present invention, description of known technical matters will be omitted as appropriate.
  • the vehicle display system 10 of the present embodiment includes an image display unit 20, a display control device 30 that controls the image display unit 20, and electronic devices 401 to 417 connected to the display control device 30.
  • the image display unit 20 in the vehicle display system 10 is a head-up display (HUD: Head-Up Display) device provided in the dashboard 5 of the vehicle 1.
  • The image display unit 20 emits the display light 40 toward the front windshield 2 (an example of the projected portion), and the front windshield 2 reflects the display light 40 of the image M displayed by the image display unit 20 toward the eye box 200.
  • By receiving the display light 40 at the eye box 200, the viewer can visually recognize the virtual image V of the image M displayed by the image display unit 20 at a position overlapping the foreground, which is the real space visually recognized through the front windshield 2.
  • In the following description, the left-right direction of the vehicle 1 is the X-axis direction (the left side when facing the front of the vehicle 1 is the X-axis positive direction), the vertical direction is the Y-axis direction (the upper side of a vehicle traveling on the road surface is the Y-axis positive direction), and the front-rear direction of the vehicle 1 is the Z-axis direction (the front of the vehicle 1 is the Z-axis positive direction).
  • the "eye box" used in the description of the present embodiment is (1) a region in which at least a part of the virtual image V of the image M is visible in the region, and a part of the virtual image V of the image M is not visible outside the region, (2). ) In the region, at least a part of the virtual image V of the image M can be visually recognized at a predetermined brightness or higher, and outside the region, the entire virtual image V of the image M is less than the predetermined brightness, or (3) the image display unit 20.
  • the image display unit 20 When can display a virtual image V that can be viewed stereoscopically, at least a part of the virtual image V can be viewed stereoscopically, and a part of the virtual image V is not stereoscopically viewed outside the region.
  • the predetermined brightness is, for example, about 1/50 of the brightness of the virtual image of the image M visually recognized at the center of the eye box.
  • the display area 100 is an area of a plane, a curved surface, or a partially curved surface in which the image M generated inside the image display unit 20 forms an image as a virtual image V, and is also called an image forming surface.
  • The display area 100 is the position where the display surface 21a (for example, the exit surface of the liquid crystal display panel) of the display 21 of the image display unit 20, described later, is imaged as a virtual image. That is, the display area 100 corresponds to the display surface 21a (in other words, the display area 100 has a conjugate relationship with the display surface 21a of the display 21), and the virtual image visually recognized in the display area 100 can be said to correspond to the image displayed on the display surface 21a.
  • For the display area 100, the angle formed with the horizontal direction (XZ plane) about the left-right direction (X-axis direction) of the vehicle 1 is defined as the tilt angle θt in FIG. 1. Further, the angle formed by the line segment connecting the center 205 of the eye box 200 and the upper end 101 of the display area 100 and the line segment connecting the center 205 of the eye box 200 and the lower end 102 of the display area 100 is defined as the vertical angle of the display area 100, and the angle formed by the bisector of this vertical angle and the horizontal direction (XZ plane) is defined as the vertical arrangement angle θv in FIG. 1.
  • The display area 100 of the present embodiment has a tilt angle θt of approximately 90 [degrees] so as to substantially face the front (Z-axis positive direction).
  • The tilt angle θt is not limited to this and can be changed within the range 0 ≤ θt ≤ 90 [degrees]. For example, the tilt angle θt may be set to 60 [degrees], and the display area 100 may be arranged so that its upper area is farther than its lower area when viewed from the viewer.
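  • As a small worked example of these definitions (the geometry follows the text; the coordinates are made up for illustration), the vertical angle and the vertical arrangement angle θv can be computed from the eye box center 205 and the upper and lower ends 101, 102 of the display area 100:

        import math

        def vertical_angles(eye_center, top_end, bottom_end):
            """Points are (z, y) pairs in the YZ plane, in metres.
            Returns (vertical_angle_deg, vertical_arrangement_angle_deg)."""
            def elevation(p):
                # Angle of the sight line to p above the horizontal (XZ plane).
                return math.atan2(p[1] - eye_center[1], p[0] - eye_center[0])
            a_top, a_bottom = elevation(top_end), elevation(bottom_end)
            vertical_angle = math.degrees(a_top - a_bottom)
            arrangement = math.degrees((a_top + a_bottom) / 2)  # bisector vs horizontal
            return vertical_angle, arrangement

        # Eye box center 205 at the origin; display area roughly 2.5 m ahead,
        # upper end 101 slightly below eye level, lower end 102 further below.
        va, theta_v = vertical_angles((0.0, 0.0), (2.5, -0.10), (2.5, -0.40))
        print(f"vertical angle = {va:.1f} deg, vertical arrangement angle = {theta_v:.1f} deg")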
  • FIG. 2 is a diagram showing the configuration of the HUD device 20 of the present embodiment.
  • the HUD device 20 includes a display 21 having a display surface 21a for displaying the image M, and a relay optical system 25.
  • the display 21 of FIG. 2 is composed of a liquid crystal display panel 22 and a light source unit 24.
  • The display surface 21a is the surface of the liquid crystal display panel 22 on the viewer side, and emits the display light 40 of the image M.
  • By setting the angle of the display surface 21a with respect to the optical axis 40p of the display light 40 that travels from the center of the display surface 21a toward the eye box 200 (the center 205 of the eye box 200) via the relay optical system 25 and the projected portion, the angle of the display area 100 (including the tilt angle θt) can be set.
  • The relay optical system 25 is arranged on the optical path of the display light 40 emitted from the display 21 (light traveling from the display 21 toward the eye box 200), and is composed of one or more optical members that project the display light 40 from the display 21 onto the front windshield 2 outside the HUD device 20.
  • the relay optical system 25 of FIG. 2 includes one concave first mirror 26 and one flat second mirror 27.
  • the first mirror 26 has, for example, a free curved surface shape having positive optical power.
  • The first mirror 26 may have a curved surface shape in which the optical power differs for each region; that is, the optical power added to the display light 40 may differ according to the region (optical path) through which the display light 40 passes.
  • Specifically, the optical power added by the relay optical system 25 may differ among the first image light 41, the second image light 42, and the third image light 43 (see FIG. 2) that travel from the respective regions of the display surface 21a toward the eye box 200.
  • The second mirror 27 is, for example, a flat mirror, but is not limited to this and may be a curved surface having optical power. That is, the relay optical system 25 may, by combining a plurality of mirrors (for example, the first mirror 26 and the second mirror 27 of the present embodiment), add optical power that differs according to the region (optical path) through which the display light 40 passes.
  • In some embodiments, the second mirror 27 may be omitted; that is, the display light 40 emitted from the display 21 may be reflected by the first mirror 26 onto the projected portion (front windshield) 2.
  • In the present embodiment, the relay optical system 25 includes two mirrors, but the present invention is not limited to this; in addition to or instead of these, it may include one or more refractive optical members such as lenses, diffractive optical members such as holograms, reflective optical members, or combinations thereof.
  • The relay optical system 25 of the present embodiment has a function of setting the distance to the display area 100 by its curved surface shape (an example of optical power) and a function of generating an enlarged virtual image of the image displayed on the display surface 21a; in addition, it may have a function of suppressing (correcting) distortion of the virtual image that may occur due to the curved shape of the front windshield 2.
  • The relay optical system 25 may be rotatable, with actuators 28 and 29 controlled by the display control device 30 attached to it. This will be described later.
  • The liquid crystal display panel 22 receives light from the light source unit 24 and emits the spatially light-modulated display light 40 toward the relay optical system 25 (second mirror 27).
  • the liquid crystal display panel 22 has, for example, a rectangular shape whose short side is the direction in which the pixels corresponding to the vertical direction (Y-axis direction) of the virtual image V seen from the viewer are arranged.
  • the viewer visually recognizes the transmitted light of the liquid crystal display panel 22 via the virtual image optical system 90.
  • the virtual image optical system 90 is a combination of the relay optical system 25 shown in FIG. 2 and the front windshield 2.
  • the light source unit 24 is composed of a light source (not shown) and an illumination optical system (not shown).
  • the light source (not shown) is, for example, a plurality of chip-type LEDs, and emits illumination light to a liquid crystal display panel (an example of a spatial light modulation element) 22.
  • The light source unit 24 includes, for example, four light sources, which are arranged in a row along the long side of the liquid crystal display panel 22.
  • the light source unit 24 emits illumination light toward the liquid crystal display panel 22 under the control of the display control device 30.
  • the configuration of the light source unit 24 and the arrangement of the light sources are not limited to this.
  • The illumination optical system is composed of, for example, one or more lenses (not shown) arranged in the emission direction of the illumination light of the light source unit 24, and a diffusion plate (not shown) arranged in the emission direction of the one or more lenses.
  • the display 21 may be a self-luminous display or a projection type display that projects an image on a screen.
  • the display surface 21a is the screen of the projection type display.
  • An actuator (not shown) including a motor controlled by the display control device 30 may be attached to the display 21, and the display surface 21a may be movable and/or rotatable.
  • the relay optical system 25 has two rotation axes (first rotation axis AX1 and second rotation axis AX2) that move the eyebox 200 in the vertical direction (Y-axis direction).
  • Each of the first rotation axis AX1 and the second rotation axis AX2 is set, in the state where the HUD device 20 is attached to the vehicle 1, so as not to be perpendicular to the left-right direction (X-axis direction) of the vehicle 1 (in other words, not parallel to the YZ plane).
  • Specifically, the angles of the first rotation axis AX1 and the second rotation axis AX2 with respect to the left-right direction (X-axis direction) of the vehicle 1 are set to less than 45 [degrees], and more preferably to less than 20 [degrees].
  • By rotation of the relay optical system 25 about the first rotation axis AX1, the amount of vertical movement of the display area 100 is relatively small and the amount of vertical movement of the eye box 200 is relatively large; by rotation about the second rotation axis AX2, the amount of vertical movement of the display area 100 is relatively large and the amount of vertical movement of the eye box 200 is relatively small. That is, comparing the first rotation axis AX1 with the second rotation axis AX2, the ratio "vertical movement amount of the eye box 200 / vertical movement amount of the display area 100" is larger for rotation about the first rotation axis AX1; in other words, the relative amounts of vertical movement of the display area 100 and of the eye box 200 differ between rotation of the relay optical system 25 about the first rotation axis AX1 and rotation about the second rotation axis AX2.
  • the HUD device 20 includes a first actuator 28 that rotates the first mirror 26 on the first rotation axis AX1 and a second actuator 29 that rotates the first mirror 26 on the second rotation axis AX2.
  • the HUD device 20 rotates one relay optical system 25 on two axes (first rotation axis AX1 and second rotation axis AX2).
  • the first actuator 28 and the second actuator 29 may be composed of one integrated two-axis actuator.
  • The HUD device 20 in another embodiment may rotate two optical members of the relay optical system 25 about the two axes (first rotation axis AX1 and second rotation axis AX2); for example, it may include a first actuator 28 that rotates the first mirror 26 about the first rotation axis AX1 and a second actuator 29 that rotates the second mirror 27 about the second rotation axis AX2.
  • As long as rotation about the first rotation axis AX1 moves the eye box 200 relatively largely in the vertical direction and rotation about the second rotation axis AX2 moves the display area 100 relatively largely in the vertical direction, the arrangement of the first rotation axis AX1 and the second rotation axis AX2 is not limited to these. Further, the drive by the actuators may include translation in addition to or instead of rotation.
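  • A hedged sketch of how a controller could use this property of the two axes follows. The sensitivities (millimetres of vertical movement per degree of mirror rotation) are invented for illustration; a real HUD device would use calibrated values for its own optics.

        # AX1 mainly moves the eye box vertically, AX2 mainly the display area,
        # as stated above. The sensitivity values are illustrative assumptions.
        AX1 = {"eyebox_mm_per_deg": 9.0, "area_mm_per_deg": 1.0}
        AX2 = {"eyebox_mm_per_deg": 1.5, "area_mm_per_deg": 8.0}

        def required_rotations(eyebox_target_mm, area_target_mm):
            """Solve the 2x2 linear system for the AX1/AX2 rotation angles that
            achieve the requested vertical shifts of eye box and display area."""
            a, b = AX1["eyebox_mm_per_deg"], AX2["eyebox_mm_per_deg"]
            c, d = AX1["area_mm_per_deg"], AX2["area_mm_per_deg"]
            det = a * d - b * c
            deg_ax1 = (eyebox_target_mm * d - area_target_mm * b) / det
            deg_ax2 = (area_target_mm * a - eyebox_target_mm * c) / det
            return deg_ax1, deg_ax2

        # Raise the eye box 20 mm for a taller driver while keeping the display
        # area still: AX1 does most of the work and AX2 compensates slightly.
        print(required_rotations(eyebox_target_mm=20.0, area_target_mm=0.0))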
  • The HUD device 20 in another embodiment does not have to drive the relay optical system 25; that is, the HUD device 20 may not have an actuator that moves and/or rotates the relay optical system 25.
  • In this case, the HUD device 20 may include a wide eye box 200 that covers the range of driver eye heights expected when the vehicle 1 is used.
  • Based on the control of the display control device 30 described later, the image display unit 20 can display an image at a position overlapping a real object 300, or at a position set with reference to the real object 300, the real object 300 being, for example, the road surface 310 of the traveling lane, a branch road 330, a road sign, an obstacle (pedestrian 320, bicycle, motorcycle, another vehicle, etc.), or a feature (building, bridge, etc.) existing in the foreground, which is the real space (actual view) visually recognized through the front windshield 2 of the vehicle 1. This allows visual augmented reality (AR) to be perceived by the viewer (typically, the viewer sitting in the driver's seat of the vehicle 1).
  • In the present specification, an image whose display position can change according to the position of a real object 300 existing in the real scene is defined as an AR image, and an image whose display position is set regardless of the position of the real object 300 is defined as a non-AR image. An example of an AR image will be described below.
  • FIGS. 3 and 4 are diagrams showing the foreground visually recognized when the viewer faces forward from the inside of the vehicle, and an AR image of the first aspect visually recognized overlapping the foreground.
  • the AR image of the first aspect is displayed with respect to the real object 300 which is visible inside the display area 100 when viewed from the viewer.
  • the "first aspect of the image” is an image displayed in the first display area 150 described later in the display area 100, and is a predetermined position in the eye box 200 (for example, at the center 205).
  • the present invention is not limited to this.) It is an aspect of the image when it is displayed with respect to a real object existing in the real scene area overlapping with the first display area 150.
  • The virtual image of the image of the first aspect can be expressed, as seen from the viewer, as overlapping the real object, surrounding the real object, lying near the real object, and so on.
  • the "second aspect of the image” described later with respect to the "first aspect of the image” refers to a real object existing outside the real scene area overlapping the first display area 150 described later when viewed from the viewer. This is the aspect of the image when it is displayed.
  • In FIG. 3, the real object (pedestrian) 320 exists in the real scene area that overlaps the display area 100 (first display area 150) as seen by the viewer.
  • The image display unit 20 of the present embodiment displays the virtual image V10 (V11, V12, V13) of the AR image of the first aspect for the pedestrian 320 existing in the actual view area that overlaps the display area 100 as seen by the viewer.
  • The virtual image V11 is a rectangular image arranged so as to surround the pedestrian 320 and indicate the position of the pedestrian 320 from the outside (an example of being arranged in the vicinity of the real object 300); the virtual image V12 is an image indicating the type of the real object 300 (pedestrian); and the third virtual image V13 is an arrow shape indicating the moving direction of the pedestrian 320, displayed at a position shifted toward the moving-direction side of the pedestrian 320 (an example of being arranged at a position set with reference to the real object 300).
  • Although the display area 100 is shown in a rectangular shape in FIG. 3, as described above, the display area 100 has such low visibility that it is actually invisible, or difficult to see, for the viewer. That is, the virtual images V11, V12, and V13 of the image M displayed on the display surface 21a of the display 21 are clearly visible, while the virtual image of the display surface 21a itself (the virtual image of the area where the image M is not displayed) is not visible (or is hard to see).
  • In FIG. 4, the real object (branch road) 330 exists in the real scene area that overlaps the display area 100 as seen by the viewer.
  • The image display unit 20 of the present embodiment displays the virtual image V10 (V14) of the AR image of the first aspect for the branch road 330 existing in the actual scene area that overlaps the display area 100 as seen by the viewer.
  • The virtual image V14 is an arrow-shaped virtual object indicating the guide path, arranged at a position overlapping the road surface 310 and the branch road 330 in the foreground of the vehicle 1 as seen from the viewer.
  • The virtual image V14 is an image whose arrangement (angle) is set so that the angle formed with the road surface 310 is visually recognized as 0 [degrees] (in other words, parallel to the road surface 310).
  • The guidance route indicates going straight and then turning right at the branch road 330: as seen from the viewer, the portion indicating going straight overlaps the road surface 310 of the traveling lane of the vehicle 1 and points toward the branch road 330 ahead (Z-axis positive direction), and the portion indicating the guide path beyond the branch road 330 points to the right (X-axis negative direction) so as to overlap the road surface 310 of the branch road in the right-turn direction.
  • FIGS. 5, 6, and 7 are diagrams showing the foreground visually recognized when the viewer faces forward from the inside of the vehicle, and the virtual image of an AR image of the second aspect visually recognized overlapping the foreground.
  • The virtual image of the AR image of the second aspect is displayed for the real object 300 that is visible outside the display area 100 (an example of the first display area 150 described later) when viewed from the viewer.
  • In FIG. 5, the image display unit 20 displays the virtual image V20 (V21), which is the AR image of the second aspect, in the area (outer edge area) 110 at the upper, lower, left, and right outer edges of the display area 100.
  • the display control device 30, which will be described later, arranges the virtual image V21 near the pedestrian 320 existing outside the display area 100 when viewed from the viewer.
  • the virtual image V21 is, for example, a ripple image based on the position of the pedestrian 320, and may be a still image or a moving image.
  • The virtual image V21 may or may not have a shape or movement indicating the direction of the pedestrian 320.
  • the aspect of the virtual image V21 which is the AR image of the second aspect is not limited to this, and may be an arrow, a text, and / or a mark.
  • By displaying the virtual image V21, which is the AR image of the second aspect, in the outer edge area 110 of the display area 100 close to the pedestrian 320, the display control device 30 can make it easier for the viewer to understand which real object the virtual image V21 is linked to.
  • In FIG. 6, the image display unit 20 displays the virtual image V20 (V22), which is the AR image of the second aspect, in a predetermined area (fixed area) 120 within the display area 100. In FIG. 6, the fixed area 120 is set in the lower central area of the display area 100.
  • the display control device 30, which will be described later, arranges a virtual image V22 having a shape and / or a movement indicating the pedestrian 320 existing outside the display area 100 when viewed from the viewer in the fixed area 120.
  • the virtual image V22 is, for example, a ripple image based on the position of the pedestrian 320, and may be a still image or a moving image.
  • The aspect of the virtual image V22, which is the AR image of the second aspect, is not limited as long as it includes a shape and/or movement indicating the pedestrian 320 existing outside the display area 100, and it may be composed of one or more arrows, text, and/or marks, and the like.
  • By displaying, in the predetermined fixed area 120, the AR image of the second aspect including a shape and/or movement indicating the pedestrian 320 existing outside the display area 100, the display control device 30 can make it easier for the viewer to grasp the presence and direction of the pedestrian 320.
  • The fixed area 120 is not completely fixed: it may be changed depending on the layout of the plurality of images displayed on the image display unit 20, the state of the actual scene acquired from the I/O interface described later, or the state of the vehicle 1.
  • FIGS. 7A, 7B, and 7C are diagrams showing a transition in which the size (an example of the display mode) of the virtual image V20 (V23), which is an AR image of the second aspect, changes according to the position of the real object 340 located outside the display area 100 as seen by the viewer.
  • As the vehicle 1 advances, the position of the real object 340 as seen from the viewer gradually moves to the left side (X-axis positive direction) and toward the near side (Z-axis negative direction) in the order of FIGS. 7A, 7B, and 7C.
  • The image display unit 20 described later may gradually move the virtual image V23 to the left side (X-axis positive direction) so as to follow the movement of the real object 340 to the left side (X-axis positive direction), and may gradually increase the size of the virtual image V23 so as to follow the movement of the real object 340 toward the near side (Z-axis negative direction). That is, the image display unit 20 described later may change the position and/or size (an example of the display mode) of the virtual image V23, which is the AR image of the second aspect, according to the position of the real object 340.
  • FIGS. 8A, 8B, and 8C are diagrams showing a transition in which the brightness (an example of the display mode) of the virtual image V20 (V23), which is the AR image of the second aspect, changes according to the position of the real object 340 located outside the display area 100 as seen by the viewer.
  • As the vehicle 1 advances, the position of the real object 340 as seen from the viewer gradually moves to the left side (X-axis positive direction) and toward the near side (Z-axis negative direction) in the order of FIGS. 8A, 8B, and 8C.
  • The image display unit 20 described later may gradually move the virtual image V23 to the left side (X-axis positive direction) so as to follow the movement of the real object 340 to the left side (X-axis positive direction), and may gradually reduce the brightness of the virtual image V23 so as to follow the movement of the real object 340 toward the near side (Z-axis negative direction). This description does not preclude the image display unit 20 from instead gradually increasing the brightness of the virtual image V23 so as to follow that movement.
  • That is, the image display unit 20 described later may change the position and/or brightness (an example of the display mode) of the virtual image V23, which is the AR image of the second aspect, according to the position of the real object 340.
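  • The following sketch condenses the transitions of FIGS. 7A to 8C into one mapping from the position of the real object 340 to a display mode for the virtual image V23. The interpolation ranges and coefficients are illustrative assumptions, not values from the patent.

        def v23_display_mode(obj_x, obj_z, area_x_min=-2.0, area_x_max=2.0,
                             z_near=5.0, z_far=60.0):
            """obj_x/obj_z: real object position in metres (X positive = left,
            Z positive = forward). Returns position, scale, and brightness."""
            # 0.0 when the object is far away, 1.0 when it is close.
            t = max(0.0, min(1.0, (z_far - obj_z) / (z_far - z_near)))
            return {
                # Follow the object laterally, clamped to the display area edges.
                "x": max(area_x_min, min(obj_x, area_x_max)),
                "scale": 0.5 + 1.0 * t,       # grows as the object approaches (FIG. 7)
                "brightness": 1.0 - 0.7 * t,  # dims as the object approaches (FIG. 8)
            }

        for z in (50.0, 30.0, 10.0):  # the object approaching, as in 7A -> 7B -> 7C
            print(z, v23_display_mode(obj_x=3.0, obj_z=z))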
  • The display mode of the virtual image V23, which is the AR image of the second aspect, may also be changed by the image display unit 20 described later according to information such as information about the vehicle 1, information about the occupants of the vehicle 1, information other than the position of the real object for which the virtual image is displayed, and/or the position of a real object that is not the target of the virtual image.
  • The change in the display mode of the virtual image referred to here may include, in addition to those described above, a change in color, a change in brightness, switching between steady lighting and blinking, and/or switching between display and non-display.
  • FIG. 9 is a diagram illustrating a virtual image of the non-AR image of the second aspect.
  • In FIG. 9, the real object (branch road) 330 exists outside the real scene area that overlaps the display area 100 as seen by the viewer.
  • The image display unit 20 of the present embodiment displays the virtual image V30 (V31, V32) of the non-AR image of the second aspect in a predetermined area (fixed area) 120 within the display area 100 for the branch road 330 existing in the actual view area that does not overlap the display area 100 as seen by the viewer. Specifically, the display control device 30 described later arranges, in the fixed area 120, a virtual image V31, which is a non-AR image showing the guide path (here, a right turn), and a virtual image V32, which is a non-AR image showing the distance to the branch road.
  • the "non-AR image” referred to here is an image that does not change the position of the image or the direction to be instructed according to the position of the real object existing in the real scene in the real space.
  • For example, the virtual image V31 is an arrow image indicating the right-turn direction, but its displayed position and the direction it indicates are not changed according to the position of the branch road 330 (in other words, according to the positional relationship between the vehicle 1 and the branch road 330).
  • The non-AR image of the second aspect is not limited to this as long as it includes information about the real object existing outside the display area 100, and it may be composed of one or more texts and/or marks, and the like.
  • FIG. 10 is a diagram showing an example in which a virtual image V33 composed of a mark, which is a non-AR image of the second aspect, is displayed for a pedestrian 320 existing outside the actual scene area that overlaps the display area 100 as seen by the viewer.
  • The image display unit 20 of the present embodiment displays the virtual image V30 (V33) of the non-AR image of the second aspect in the fixed area 120 for the pedestrian 320 existing outside the actual view area that overlaps the display area 100 as seen by the viewer.
  • In this way, the display control device 30 can notify the viewer of the presence of a real object (pedestrian 320, branch road 330) existing outside the display area 100 by displaying the virtual image V30 (V31, V32, V33) of the non-AR image of the second aspect in the predetermined fixed area 120.
  • FIG. 11 is a block diagram of the vehicle display system 10 according to some embodiments.
  • the display control device 30 includes one or more I / O interfaces 31, one or more processors 33, one or more image processing circuits 35, and one or more memories 37.
  • the various functional blocks shown in FIG. 11 may consist of hardware, software, or a combination of both.
  • FIG. 11 shows only one embodiment, and the illustrated components may be combined with a smaller number of components, or there may be additional components.
  • the image processing circuit 35 (for example, a graphic processing unit) may be included in one or more processors 33.
  • The processor 33 and the image processing circuit 35 are operably connected to the memory 37. More specifically, the processor 33 and the image processing circuit 35 execute a program stored in the memory 37 to, for example, generate and/or transmit image data, thereby performing the operations of the vehicle display system 10 (image display unit 20).
  • The processor 33 and/or the image processing circuit 35 may include at least one general-purpose microprocessor (for example, a central processing unit (CPU)), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array (FPGA), or any combination thereof.
  • The memory 37 includes any type of magnetic medium such as a hard disk, any type of optical medium such as a CD or DVD, and any type of semiconductor memory such as volatile memory and non-volatile memory.
  • the volatile memory may include DRAM and SRAM, and the non-volatile memory may include ROM and NVRAM.
  • the processor 33 is operably connected to the I / O interface 31.
  • The I/O interface 31 communicates with, for example, the vehicle ECU 401 described later and other electronic devices provided in the vehicle (reference numerals 403 to 417 described later) in accordance with the CAN (Controller Area Network) standard (so-called CAN communication).
  • The communication standard adopted by the I/O interface 31 is not limited to CAN, and includes, for example, wired communication interfaces such as CANFD (CAN with Flexible Data Rate), LIN (Local Interconnect Network), Ethernet (registered trademark), MOST (Media Oriented Systems Transport; MOST is a registered trademark), UART, and USB, as well as in-vehicle communication (internal communication) interfaces that are short-range wireless communication interfaces with a range of several tens of metres, such as a personal area network (PAN) using Bluetooth (registered trademark) or a local area network (LAN) using 802.1x Wi-Fi (registered trademark).
  • The I/O interface 31 may also include an external communication (outside-vehicle communication) interface to a wide area network (for example, an Internet communication network) in accordance with a cellular communication standard such as a wireless wide area network (WWAN), IEEE 802.16-2004 (WiMAX: Worldwide Interoperability for Microwave Access), IEEE 802.16e-based (Mobile WiMAX), 4G, 4G-LTE, LTE Advanced, or 5G.
  • The processor 33 is interoperably connected to the I/O interface 31, so that it can exchange information with the various other electronic devices and the like connected to the vehicle display system 10 (I/O interface 31).
  • To the I/O interface 31, for example, a vehicle ECU 401, a road information database 403, an own vehicle position detection unit 405, a vehicle exterior sensor 407, an operation detection unit 409, an eye position detection unit 411, a line-of-sight direction detection unit 413, a mobile information terminal 415, an external communication device 417, and the like are operably connected.
  • the I / O interface 31 may include a function of processing (converting, calculating, analyzing) information received from another electronic device or the like connected to the vehicle display system 10.
  • the display 21 is operably connected to the processor 33 and the image processing circuit 35. Therefore, the image displayed by the image display unit 20 may be based on the image data received from the processor 33 and / or the image processing circuit 35.
  • the processor 33 and the image processing circuit 35 control the image displayed by the image display unit 20 based on the information acquired from the I / O interface 31.
  • The vehicle ECU 401 acquires, from sensors and switches provided on the vehicle 1, the state of the vehicle 1 (for example, mileage, vehicle speed, accelerator pedal opening, brake pedal opening, engine throttle opening, injector fuel injection amount, engine rotation speed, motor rotation speed, steering angle, shift position, drive mode, various warning states, attitude (including the roll angle and/or pitching angle), and vehicle vibration (including the magnitude and/or frequency of the vibration)), and collects and manages (which may include controlling) the state of the vehicle 1; as a part of its functions, it can output numerical values of the state of the vehicle 1 (for example, the vehicle speed of the vehicle 1) to the processor 33 of the display control device 30.
  • The vehicle ECU 401 may simply transmit numerical values detected by the sensors and the like (for example, a pitching angle of 3 [degrees] in the forward tilt direction) to the processor 33, or, in addition or instead, may transmit determination results based on one or more states of the vehicle 1 (for example, that the vehicle 1 satisfies a predetermined condition of a forward-leaning state) and/or analysis results (for example, that, combined with the information on the brake pedal opening, braking has caused the vehicle to lean forward).
  • the vehicle ECU 401 may output a signal indicating a determination result indicating that the vehicle 1 satisfies a predetermined condition stored in advance in a memory (not shown) of the vehicle ECU 401 to the display control device 30.
  • The I/O interface 31 may acquire the above-mentioned information from the sensors and switches provided in the vehicle 1 without going through the vehicle ECU 401.
  • The vehicle ECU 401 may output an instruction signal indicating an image to be displayed by the vehicle display system 10 to the display control device 30; at this time, the coordinates, size, type, and display mode of the image, the notification necessity of the image, and/or necessity-related information serving as a basis for determining the notification necessity may be added to the instruction signal and transmitted.
  • The road information database 403 is included in a navigation device (not shown) provided in the vehicle 1 or in an external server connected to the vehicle 1 via the external communication interface (I/O interface 31). Based on the position of the vehicle 1 acquired from the own vehicle position detection unit 405 described later, it may read out and transmit to the processor 33 information around the vehicle 1 (information related to real objects around the vehicle 1): road information on which the vehicle 1 travels (lanes, white lines, stop lines, crosswalks, road width, number of lanes, intersections, curves, branch roads, traffic regulations, etc.), feature information (buildings, bridges, rivers, etc.), and their presence/absence, position (including the distance to the vehicle 1), direction, shape, type, detailed information, and the like. Further, the road information database 403 may calculate an appropriate route (navigation information) from the departure point to the destination, and output a signal indicating the navigation information or image data indicating the route to the processor 33.
  • The own vehicle position detection unit 405 is a GNSS (Global Navigation Satellite System) receiver or the like provided in the vehicle 1; it detects the current position and orientation of the vehicle 1, and outputs a signal indicating the detection result to the road information database 403, the mobile information terminal 415 described later, and/or the external communication device 417, either via the processor 33 or directly.
  • The road information database 403, the mobile information terminal 415 described later, and/or the external communication device 417 may acquire the position information of the vehicle 1 from the own vehicle position detection unit 405 continuously, intermittently, or at each predetermined event, and may select and generate information around the vehicle 1 and output it to the processor 33.
  • the vehicle exterior sensor 407 detects the real object 300 existing around the vehicle 1 (front, side, and rear).
  • The real object 300 detected by the vehicle exterior sensor 407 may include, for example, an obstacle (pedestrian, bicycle, motorcycle, another vehicle, etc.), the road surface 310 of the traveling lane described later, a lane marking, a roadside object, and/or a feature (building, etc.).
  • The vehicle exterior sensor 407 is composed of one or more detection units, each being a radar sensor such as a millimeter-wave radar, an ultrasonic radar, or a laser radar, a camera, or a combination thereof, together with a processing device that processes (performs data fusion on) the detection data from the one or more detection units. The one or more vehicle exterior sensors 407 detect real objects in front of the vehicle 1 in each detection cycle of each sensor, and output real object information (an example of real object-related information), such as the presence or absence of a real object and, if a real object exists, the position, size, and/or type of each real object, to the processor 33.
  • The real object information may instead be transmitted to the processor 33 via another device (for example, the vehicle ECU 401).
  • When a camera is used, an infrared camera or a near-infrared camera is desirable so that a real object can be detected even when the surroundings are dark, such as at night; further, a stereo camera capable of acquiring distance and the like by parallax is desirable.
  • The operation detection unit 409 is, for example, a CID (Center Information Display) of the vehicle 1, a hardware switch provided on the instrument panel, or a software switch combining an image and a touch sensor, and it outputs operation information based on operations by an occupant to the processor 33.
  • For example, by the user's operation, the operation detection unit 409 outputs to the processor 33 display area setting information based on an operation of moving the display area 100, eye box setting information based on an operation of moving the eye box 200, and information based on an operation of setting the eye position of the viewer.
  • the eye position detection unit 411 may include a camera such as an infrared camera that detects the position of the eyes of a viewer sitting in the driver's seat of the vehicle 1, and may output the captured image to the processor 33.
  • the processor 33 acquires an image (an example of information capable of estimating the eye position) from the eye position detection unit 411, and can identify the eye position of the viewer by analyzing the captured image.
  • the eye position detection unit 411 may analyze the image captured by the camera and output a signal indicating the position of the eyes of the viewer, which is the analysis result, to the processor 33.
  • The method of acquiring the eye position of the viewer of the vehicle 1, or information from which the eye position can be estimated, is not limited to these; the eye position may be acquired using a known eye position detection (estimation) technique.
  • The processor 33 may adjust at least the position of the image based on the eye position of the viewer, so that the viewer whose eye position has been detected visually recognizes the image superimposed on a desired position of the foreground (a position having a specific positional relationship with the real object).
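  • A minimal sketch of this eye-position compensation follows, assuming (for illustration only) that the display area can be approximated as a vertical plane at a fixed distance; a production HUD would instead use its calibrated optical model. The image is placed where the sight line from the detected eye position to the target point on the real object crosses that plane.

        def image_height_on_display(eye_y, target_y, target_z, display_z):
            """All values in metres in the vehicle's YZ plane; the display area
            is approximated as a vertical plane at z = display_z."""
            # Linear interpolation along the sight line from (0, eye_y) to
            # (target_z, target_y), evaluated at z = display_z.
            s = display_z / target_z
            return eye_y + s * (target_y - eye_y)

        # The crossing point shifts with the eye height, which is why the image
        # must be re-placed for each detected eye position.
        for eye_y in (1.1, 1.2, 1.3):
            y = image_height_on_display(eye_y, target_y=0.0, target_z=20.0, display_z=2.5)
            print(f"eye {eye_y:.1f} m -> image at height {y:.3f} m on the display plane")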
  • the line-of-sight direction detection unit 413 may include an infrared camera or a visible light camera that captures the face of a viewer sitting in the driver's seat of the vehicle 1, and may output the captured image to the processor 33.
  • The processor 33 acquires a captured image (an example of information capable of estimating the line-of-sight direction) from the line-of-sight direction detection unit 413, and can identify the line-of-sight direction (and/or the gaze position) of the viewer by analyzing the captured image.
  • the line-of-sight direction detection unit 413 may analyze the captured image from the camera and output a signal indicating the line-of-sight direction (and / or the gaze position) of the viewer, which is the analysis result, to the processor 33.
  • The method of acquiring information capable of estimating the line-of-sight direction of the viewer of the vehicle 1 is not limited to these; the information may be acquired using other known line-of-sight detection (estimation) techniques such as the EOG (electro-oculogram) method, the corneal reflection method, the scleral reflection method, the Purkinje image detection method, the search coil method, and the infrared fundus camera method.
  • the mobile information terminal 415 is a smartphone, a laptop computer, a smart watch, or other information device that can be carried by a viewer (or another occupant of the vehicle 1).
  • The I/O interface 31 can communicate with the mobile information terminal 415 by pairing with it, and acquires data recorded in the mobile information terminal 415 (or in a server accessible through the mobile information terminal).
  • The mobile information terminal 415 may have, for example, the same functions as the above-mentioned road information database 403 and own vehicle position detection unit 405, and may acquire the road information (an example of real object-related information) and transmit it to the processor 33.
  • the mobile information terminal 415 may acquire commercial information (an example of information related to a real object) related to a commercial facility in the vicinity of the vehicle 1 and transmit it to the processor 33.
  • The mobile information terminal 415 may transmit schedule information of the owner (for example, the viewer) of the mobile information terminal 415, incoming call information on the mobile information terminal 415, mail reception information, and the like to the processor 33, and the processor 33 and the image processing circuit 35 may generate and/or transmit image data relating to these.
  • The external communication device 417 is a communication device that exchanges information with the vehicle 1; for example, it includes other vehicles connected to the vehicle 1 by vehicle-to-vehicle communication (V2V: Vehicle To Vehicle), pedestrians (mobile information terminals carried by pedestrians) connected by vehicle-to-pedestrian communication (V2P: Vehicle To Pedestrian), and network communication equipment connected by vehicle-to-infrastructure communication (V2I: Vehicle To Infrastructure); in a broad sense, it includes everything connected by V2X (Vehicle To Everything) communication.
  • the external communication device 417 acquires the positions of, for example, pedestrians, bicycles, motorcycles, other vehicles (preceding vehicles, etc.), road surfaces, lane markings, roadside objects, and / or features (buildings, etc.) and sends them to the processor 33.
  • The external communication device 417 may have the same function as the own vehicle position detection unit 405 described above and acquire the position information of the vehicle 1 and transmit it to the processor 33; it may further have the same function as the road information database 403 described above and acquire road information (an example of real object-related information) and transmit it to the processor 33.
  • the information acquired from the external communication device 417 is not limited to the above.
  • The software components stored in the memory 37 include the real object information detection module 502, the real object position identification module 504, the notification necessity determination module 506, the eye position detection module 508, the vehicle attitude detection module 510, the display area setting module 512, the real object position determination module 514, the actual scene area division module 516, the image type setting module 518, the image arrangement setting module 520, the image size setting module 522, the line-of-sight direction determination module 524, the graphic module 526, and the drive module 528.
  • The real object information detection module 502 acquires information (also referred to as real object information) including at least the position of a real object 300 existing in front of the vehicle 1.
  • For example, the real object information detection module 502 may acquire from the vehicle exterior sensor 407 the position of the real object 300 existing in the foreground of the vehicle 1 (the position in the height direction (vertical direction) and the left-right direction (horizontal direction) as seen by the viewer in the driver's seat of the vehicle 1 looking in the traveling direction (forward) of the vehicle 1, to which the position (distance) in the depth direction (forward direction) may be added), the size of the real object 300 (the sizes in the height direction and the lateral direction), and the relative speed with respect to the vehicle 1 (including the relative moving direction) (an example of real object information).
  • The real object information detection module 502 may also acquire, via the external communication device 417, the position, relative speed, and type of a real object (for example, another vehicle), the lighting state of the direction indicators of the other vehicle, the state of steering angle operation, and information indicating the planned travel route and travel schedule determined by a driving support system (examples of real object-related information).
  • The real object information detection module 502 may also acquire the positions of the left and right lane markings 311, 312 (see FIG. 3) and recognize the region between them (the road surface 310 of the traveling lane).
  • The real object information detection module 502 may also detect information about a real object existing in the foreground of the vehicle 1 (real object-related information) that serves as a source for determining the content of the virtual image V described later (hereinafter also appropriately referred to as the "image type").
  • The real object-related information includes, for example, type information indicating the type of the real object such as a pedestrian, a building, or another vehicle, moving direction information indicating the moving direction of the real object, distance/time information indicating the distance to the real object or the arrival time, and individual detailed information of the real object such as the fee of a parking lot (an example of a real object) (but is not limited to these).
  • The real object information detection module 502 may acquire type information, distance/time information, and/or individual detailed information from the road information database 403 or the mobile information terminal 415; may acquire type information, moving direction information, and/or distance/time information from the vehicle exterior sensor 407; and/or may detect type information, moving direction information, distance/time information, and/or individual detailed information from the external communication device 417.
  • The real object position identification module 504 acquires an observation position indicating the current position of the real object 300 from the vehicle exterior sensor 407 or the external communication device 417 via the I/O interface 31, or acquires an observation position of the real object obtained by data fusion of two or more of these observation positions, and sets the position (also referred to as a specific position) of the real object 300 based on the acquired observation position.
  • The image arrangement setting module 520, described later, determines the position of the image based on the specific position of the real object 300 set by the real object position identification module 504.
  • The real object position identification module 504 may specify the position of the real object 300 based on the observation position of the real object 300 acquired immediately before, but is not limited to this; it may specify (estimate) the position of the real object 300 based on a predicted position of the real object at a predetermined time, predicted from one or more past observation positions of the real object 300 including at least the observation position acquired immediately before.
  • That is, by executing the real object position identification module 504 and the image arrangement setting module 520 described later, the processor 33 can set the position of the virtual image V based on the observation position of the real object 300 acquired immediately before, or based on the predicted position of the real object 300 at the display update timing of the virtual image V, predicted from one or more past observation positions of the real object 300 including at least the observation position acquired immediately before.
  • The real object position identification module 504 may predict the next value from one or more past observation positions using, for example, the least squares method or a prediction algorithm such as a Kalman filter, an α-β filter, or a particle filter (a minimal sketch follows below).
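  • As an illustration only, the following is a minimal sketch of one of the prediction algorithms named above, an α-β filter; the class name, gains, and update period are assumptions and not part of the embodiment.

```python
# Minimal sketch (illustrative, not the embodiment): predicting the next observation
# position of a real object along one axis with an alpha-beta filter, one of the
# prediction algorithms named above. Gains and update period are assumed values.
class AlphaBetaPredictor:
    def __init__(self, alpha=0.85, beta=0.005, dt=1.0 / 60.0):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = None  # estimated position
        self.v = 0.0   # estimated velocity

    def update(self, observed_x):
        if self.x is None:  # first observation initializes the state
            self.x = observed_x
            return self.x
        predicted = self.x + self.v * self.dt       # extrapolate to this cycle
        residual = observed_x - predicted           # innovation
        self.x = predicted + self.alpha * residual  # corrected position
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x + self.v * self.dt            # predicted position at the next update
```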
  • The vehicle display system 10 only needs to be able to acquire the observation position and/or the predicted position of the real object, and does not have to have the function of setting (calculating) the predicted position of the real object; a part or all of the function of setting (calculating) the predicted position may be provided separately from the display control device 30 of the vehicle display system 10 (for example, in the vehicle ECU 401).
  • the notification necessity determination module 506 determines whether the content of each virtual image V displayed by the vehicle display system 10 should be notified to the viewer.
  • The notification necessity determination module 506 may acquire information from the various other electronic devices connected to the I/O interface 31 and calculate the notification necessity. Alternatively, an electronic device connected to the I/O interface 31 in FIG. 11 may transmit information to the vehicle ECU 401, and the notification necessity determination module 506 may detect (acquire) the notification necessity determined by the vehicle ECU 401 based on the received information.
  • the "notification necessity" is, for example, the degree of danger derived from the degree of seriousness that can occur, the degree of urgency derived from the length of the reaction time required to take a reaction action, the vehicle 1 or the viewer (or the vehicle).
  • The notification necessity determination module 506 may detect necessity-related information that serves as a source for estimating the notification necessity, and may estimate the notification necessity from it.
  • The necessity-related information for estimating the notification necessity of an image may be estimated from, for example, the position and type of a real object or traffic regulations (an example of road information), or may be estimated based on other information input from the various other electronic devices connected to the I/O interface 31, instead of or in addition to these. That is, the notification necessity determination module 506 may determine whether to notify the viewer and may choose not to display the image described later (a minimal sketch of such an estimate follows below).
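  • Purely as an illustration, the following sketch estimates a notification necessity from two of the factors mentioned above; all weights, scales, and the threshold are hypothetical.

```python
# Minimal sketch (weights, scales, and threshold hypothetical): estimating the
# notification necessity from the degree of danger (seriousness) and the degree of
# urgency (reaction time), then deciding whether to notify the viewer at all.
def notification_necessity(seriousness, reaction_time_s, w_danger=0.6, w_urgency=0.4):
    danger = min(seriousness / 10.0, 1.0)    # normalized degree of danger
    urgency = 1.0 / (1.0 + reaction_time_s)  # shorter reaction time -> higher urgency
    return w_danger * danger + w_urgency * urgency

def should_notify(necessity, threshold=0.5):
    return necessity >= threshold  # below the threshold, the image may not be displayed
```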
  • The vehicle display system 10 only needs to be able to acquire the notification necessity and does not have to have a function of estimating (calculating) it; a part or all of the function of estimating the notification necessity may be provided separately from the display control device 30 of the vehicle display system 10 (for example, in the vehicle ECU 401).
  • the eye position detection module 508 detects the position of the eyes of the viewer of the vehicle 1.
  • The eye position detection module 508 includes various software components for performing various operations related to determining in which of a plurality of staged height regions the height of the viewer's eyes lies, detecting the height of the viewer's eyes (the position in the Y-axis direction), detecting the height of the viewer's eyes (the position in the Y-axis direction) together with the position in the depth direction (the position in the Z-axis direction), and/or detecting the eye position of the viewer (the position in the X-, Y-, and Z-axis directions).
  • The eye position detection module 508 can, for example, acquire the eye position of the viewer from the eye position detection unit 411, or can receive from the eye position detection unit 411 information capable of estimating the eye position, including the eye height of the viewer, and estimate from it the eye position including the eye height of the viewer.
  • The information capable of estimating the eye position may be, for example, the position of the driver's seat of the vehicle 1, the position of the viewer's face, the sitting height, an input value entered by the viewer at an operation unit (not shown), or the like.
  • The vehicle attitude detection module 510 detects the attitude of the vehicle 1 in which the system is mounted.
  • The vehicle attitude detection module 510 includes various software components for performing various operations related to determining in which of a plurality of staged attitude regions the attitude of the vehicle 1 lies, detecting the angles (pitching angle, rolling angle) of the vehicle 1 in the Earth coordinate system, detecting the angles (pitching angle, rolling angle) of the vehicle 1 with respect to the road surface, and/or detecting the height (the position in the Y-axis direction) of the vehicle 1 with respect to the road surface.
  • The vehicle attitude detection module 510, for example, analyzes the three-axis acceleration detected by a three-axis acceleration sensor (not shown) provided in the vehicle 1, estimates the pitching angle (vehicle attitude) of the vehicle 1 with reference to the horizontal plane, and outputs vehicle attitude information including information on the pitching angle of the vehicle 1 to the processor 33.
  • The vehicle attitude detection module 510 may use a height sensor (not shown) arranged in the vicinity of the suspension of the vehicle 1 in addition to the above-mentioned three-axis acceleration sensor. In this case, the vehicle attitude detection module 510 estimates the pitching angle of the vehicle 1 by analyzing the height of the vehicle 1 from the ground detected by the height sensor, and outputs vehicle attitude information including information on the pitching angle of the vehicle 1 to the processor 33.
  • The method by which the vehicle attitude detection module 510 obtains the pitching angle of the vehicle 1 is not limited to the above; the pitching angle of the vehicle 1 may be obtained using a known sensor or analysis method (a minimal sketch of the acceleration-based estimate follows below).
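  • A minimal sketch of the acceleration-based estimate, assuming a static vehicle (the sensor reads gravity only) and an axis convention of X lateral, Y vertical, and Z longitudinal; both assumptions are illustrative.

```python
# Minimal sketch (static case, assumed axes): estimating the pitching angle of
# vehicle 1 with reference to the horizontal plane from three-axis acceleration.
import math

def pitch_from_accel(ax, ay, az):
    """Pitch (degrees) from the gravity vector; valid only when the vehicle is not
    accelerating, so the sensor measures gravity alone."""
    return math.degrees(math.atan2(az, math.sqrt(ax * ax + ay * ay)))
```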
  • The display area setting module 512 sets the rotation amount (angle) of the first actuator 28 and the rotation amount (angle) of the second actuator 29 based on the input information on the eye position 4 of the viewer and on setting information.
  • The position of the display area 100 is determined by the rotation amounts (angles) of the actuators; therefore, the rotation amount (angle) of an actuator is an example of information capable of estimating the position of the display area 100.
  • The display area setting module 512 includes various software components for performing various operations related to setting the rotation amount (angle) of the first actuator 28 and the rotation amount (angle) of the second actuator 29 based on the eye position information detected by the eye position detection module 508 or the eye position estimation information estimated by the eye position detection module 508.
  • For this purpose, the display area setting module 512 may include table data, arithmetic expressions, and the like for setting the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 from the eye position or from information capable of estimating the eye position (a minimal sketch of such a table lookup follows below).
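  • The table lookup might look like the following sketch; the sampled eye heights and angles are invented values for illustration.

```python
# Minimal sketch (table values hypothetical): setting the rotation amount (angle)
# of the first actuator 28 about axis AX1 from the viewer's eye height by linear
# interpolation in table data.
import bisect

EYE_HEIGHTS_MM = [1200.0, 1300.0, 1400.0, 1500.0]  # sampled eye positions (Y)
AX1_ANGLES_DEG = [4.0, 5.5, 7.0, 8.5]              # rotation amounts about AX1

def ax1_angle_for_eye_height(eye_y):
    """Interpolate between table samples; clamp outside the sampled range."""
    if eye_y <= EYE_HEIGHTS_MM[0]:
        return AX1_ANGLES_DEG[0]
    if eye_y >= EYE_HEIGHTS_MM[-1]:
        return AX1_ANGLES_DEG[-1]
    i = bisect.bisect_right(EYE_HEIGHTS_MM, eye_y)
    x0, x1 = EYE_HEIGHTS_MM[i - 1], EYE_HEIGHTS_MM[i]
    y0, y1 = AX1_ANGLES_DEG[i - 1], AX1_ANGLES_DEG[i]
    return y0 + (y1 - y0) * (eye_y - x0) / (x1 - x0)
```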
  • The display area setting module 512 may change the area used on the display surface 21a of the display 21 based on the input information on the eye position 4 of the viewer and on setting information. That is, the display area setting module 512 can also change the position of the display area 100 used for displaying the virtual image V by changing the area used for displaying the image on the display surface 21a of the display 21; therefore, information indicating the area used for displaying the image on the display surface 21a of the display 21 is also an example of information capable of estimating the position of the display area 100.
  • The display area setting module 512 may also set the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 based on an operation detected by the operation detection unit 409 or an instruction from the vehicle ECU 401.
  • The display area setting module 512 includes various software components for performing various operations related to setting the rotation amount (angle) of the first actuator 28 about the first rotation axis AX1 and the rotation amount (angle) of the second actuator 29 about the second rotation axis AX2, based on (1) position information of the viewer's preferred eye box (an example of eye box setting information) and position information of the preferred display area (an example of display area setting information) acquired from a viewer identification unit (not shown), (2) display area setting information based on an operation of moving the display area 100 and eye box setting information based on an operation of moving the eye box 200, which are obtained from the operation detection unit 409 provided in the vehicle 1 and are based on the user's operation, and (3) display area setting information indicating the position of the display area 100, eye box setting information indicating the position of the eye box 200, and the like, determined by and acquired from the vehicle ECU 401.
  • When the display area setting module 512 acquires display area setting information for moving the display area 100 to a predetermined position, it can set (correct) the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2, in addition to the driving amount for moving the display area 100 to the predetermined position, so as to maintain the position of the eye box 200 or keep the movement amount of the eye box 200 small.
  • Conversely, when the display area setting module 512 acquires only eye box setting information for moving the eye box 200 to a predetermined position, it can set (correct) the rotation amount (angle) about the first rotation axis AX1 and the rotation amount (angle) about the second rotation axis AX2 so as to maintain the position of the display area 100 or keep the movement amount of the display area 100 small.
  • the display area setting module 512 may set the amount of movement of the relay optical system 25 by one or a plurality of actuators.
  • The display area setting module 512 may estimate the current position of the display area 100 by correcting the position of the display area 100 (and/or the first display area 150 described later), which is set according to the type of the vehicle 1 on which the vehicle display system is mounted and stored in advance in the memory 37, based on the above-mentioned information capable of estimating the position of the display area 100, and may store the result in the memory 37.
  • The real object position determination module 514 determines whether or not the position of the real object 300 is within the first determination actual scene area R10 and whether or not it is within the second determination actual scene area R20.
  • That is, the real object position determination module 514 may include determination values, table data, arithmetic expressions, and the like for determining, from the observation position and/or the predicted position of the real object, whether or not the real object enters the first determination actual scene area R10 and whether or not it enters the second determination actual scene area R20.
  • For example, the real object position determination module 514 can include determination values (a position in the left-right direction (X-axis direction) and a position in the up-down direction (Y-axis direction)) for comparison with the observation position and/or the predicted position of the real object to determine whether or not it enters the first determination actual scene area R10, and determination values (a position in the left-right direction (X-axis direction) and a position in the up-down direction (Y-axis direction)) to determine whether or not it enters the second determination actual scene area R20.
  • The determination values for whether or not a real object enters the first determination actual scene area R10 and the determination values for whether or not it enters the second determination actual scene area R20 are set (changed) by the actual scene area division module 516 described later (a minimal sketch of the classification follows below).
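  • A minimal sketch of the classification against the two determination areas; the data shape (axis-aligned ranges of determination values) is an assumption for illustration.

```python
# Minimal sketch (data shape assumed): comparing a real object's observation or
# predicted position with the determination values of the first and second
# determination actual scene areas.
from dataclasses import dataclass

@dataclass
class SceneArea:
    x_min: float  # left-right (X-axis) determination values
    x_max: float
    y_min: float  # up-down (Y-axis) determination values
    y_max: float

    def contains(self, x, y):
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def classify_position(x, y, r10, r20):
    """Return which determination area the real object falls in, if any."""
    if r10.contains(x, y):
        return "R10"  # display the image of the first aspect
    if r20.contains(x, y):
        return "R20"  # display the image of the second aspect
    return None       # outside both areas
```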
  • The actual scene area division module 516 sets the range of determination values for whether or not a real object is within the first determination actual scene area R10 and the range of determination values for whether or not it is within the second determination actual scene area R20.
  • The first to fifth determination methods by the actual scene area division module 516 and the real object position determination module 514 are described below; however, as long as the range determined to be within the second determination actual scene area R20 is changed according to the eye position 4 of the viewer, the position of the display area 100 (first display area 150), the posture of the vehicle 1, and the like, the present invention is not limited to these.
  • In the first setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the determination values stored in advance in the memory 37, whether or not the position of the real object 300 is within the first determination actual scene area R10 and whether or not it is within the second determination actual scene area R20.
  • FIGS. 12A and 12B are diagrams showing the positional relationship, as viewed from the left-right direction (X-axis direction) of the vehicle 1, among the eye box 200, the first display area 150 displaying the virtual image V10 of the image of the first aspect, the first determination actual scene area R10, and the second determination actual scene area R20.
  • FIG. 12A shows a case where the real object 300 enters the first determination actual scene area R10, and FIG. 12B shows a case where the real object 300 enters the second determination actual scene area R20.
  • The first determination actual scene area R10 is the area between a line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the center 205 of the eye box 200 (an example of a predetermined position in the eye box 200, and not limited to this), and a line connecting the lower end 150b of the first display area 150 with the center 205 of the eye box 200 (an example of a predetermined position in the eye box 200, and not limited to this). Further, the second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
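  • The geometry can be sketched as follows, assuming a side view along the X axis with the display area standing at a known depth; the names and the planar simplification are illustrative.

```python
# Minimal sketch (planar simplification): the vertical angular range of the first
# determination actual scene area R10, bounded by the lines from the eye box
# center 205 to the upper end 150a and lower end 150b of the first display area.
import math

def r10_angular_range(eye_y, top_y, bottom_y, display_dist_z):
    upper = math.atan2(top_y - eye_y, display_dist_z)     # line through 150a
    lower = math.atan2(bottom_y - eye_y, display_dist_z)  # line through 150b
    return lower, upper

def in_r10(obj_y, obj_z, eye_y, bounds):
    lower, upper = bounds
    elevation = math.atan2(obj_y - eye_y, obj_z)  # line from 205 to the real object
    return lower <= elevation <= upper
```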
  • The first display area 150 for displaying the virtual image V10 of the image of the first aspect referred to here may be a predetermined area within the display area 100 that is smaller than the display area 100, or may coincide with the display area 100 (in the examples of FIGS. 3 to 10, the first display area 150 and the display area 100 coincide).
  • In the first setting method, the eye box 200 and the first display area 150 are set according to the type of the vehicle 1 on which the vehicle display system 10 is mounted; therefore, the first determination actual scene area R10 and the second determination actual scene area R20 are set in advance to constant values for each type of vehicle 1 and stored in the memory 37.
  • The first determination actual scene area R10 and the second determination actual scene area R20 may also be preset and stored in the memory 37 for each individual vehicle display system 10 by calibration that takes into account individual differences of the vehicle 1, individual differences of the HUD device 20 (including assembly errors to the vehicle 1), and individual differences of the vehicle exterior sensor 407 provided in the vehicle 1 (including assembly errors to the vehicle 1).
  • When the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the first determination actual scene area R10, as shown in FIG. 12A, the real object position determination module 514 determines that the real object 300 is in the first determination actual scene area R10.
  • When the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R20, as shown in FIG. 12B, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R20.
  • the real object position determination module 514 may execute the following second setting method in addition to or in place of the first setting method described above.
  • In the second setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the position of the display area 100 acquired from the display area setting module 512 (or information capable of estimating the position of the display area 100), whether or not the real object 300 enters the first determination actual scene area R10 and whether or not it enters the second determination actual scene area R20 in which the virtual image V of the image of the second aspect is displayed.
  • The actual scene area division module 516 changes the range of the first determination actual scene area R10 and the range of the second determination actual scene area R20 according to the position of the display area 100. That is, the actual scene area division module 516 may include table data, arithmetic programs, and the like for setting the first determination actual scene area R10 and the second determination actual scene area R20 from the position of the display area 100 (or information capable of estimating the position of the display area 100) acquired from the display area setting module 512. The table data is, for example, data associating the position of the display area 100 with the determination values (a position in the left-right direction (X-axis direction) and a position in the up-down direction (Y-axis direction)) for determining whether or not a real object enters the first determination actual scene area R10.
  • FIGS. 13A, 13B, and 13C are diagrams showing changes in the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to changes in the position of the display area 100, as viewed from the left-right direction (X-axis direction) of the vehicle 1.
  • the display area 100 is gradually moved downward (Y-axis negative direction) in the order of FIGS. 13A, 13B, and 13C by rotating the first mirror 26 of the HUD device 20.
  • The actual scene area division module 516 changes the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to the position of the display area 100, and the real object position determination module 514 determines whether or not the real object 300 enters the first determination actual scene area R10 as changed by the actual scene area division module 516 and whether or not it enters the second determination actual scene area R20 as appropriately changed.
  • In FIG. 13A, the first determination actual scene area R11 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the center 205 of the eye box 200, and the line connecting the lower end 150b of the first display area 150 with the center 205 of the eye box 200.
  • The second determination actual scene area R21 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R11.
  • As shown in FIG. 13B, when the first display area 151 is arranged below the first display area 150 shown in FIG. 13A, the actual scene area division module 516 also arranges the first determination actual scene area R12 below the first determination actual scene area R11. At this time, the actual scene area division module 516 sets the range of the second determination actual scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R12 by expanding it (R22 > R21). In other words, when the position of the display area 100 (first display area 150) deviates from the reference position, the actual scene area division module 516 expands the second determination actual scene area R20. As shown in FIG. 13B, when the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R22, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R22.
  • As shown in FIG. 13C, when the display area 100 moves further down, the actual scene area division module 516 further expands the range of the second determination actual scene area R23 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R13 (R23 > R22). When the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R23, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R23.
  • In the second setting method, when the first determination actual scene area R11 on which the first display area 150 (display area 100) overlaps, shown in FIG. 13A, is used as the reference, the second determination actual scene area R20 is expanded as the position of the display area 100 is changed and the first determination actual scene area R10 on which the first display area 150 (display area 100) overlaps moves away from the first standard actual scene area R10s (a minimal sketch of this proportional expansion follows below).
  • According to this, since the area in which the image of the second aspect is displayed for the real object 300 is expanded, the image of the second aspect can easily make the viewer recognize the real object 300 that has left the area in which the image of the first aspect is displayed.
  • Further, even if the position of the display area 100 differs, the virtual images V20 and V30 of the image of the second aspect can make it easier for the viewer to recognize the real object 300 existing in or near the specific first standard actual scene area R10s.
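  • A minimal sketch of this behavior, with the deviation expressed as a vertical angle and the gain an invented coefficient:

```python
# Minimal sketch (gain hypothetical): expanding the second determination actual
# scene area R20 in proportion to how far the first determination area has moved
# away from the first standard actual scene area R10s.
def expanded_r20_height(standard_r20_height, r10_offset_deg, gain=0.8):
    """The further R10 deviates from R10s, the taller the expanded R20."""
    return standard_r20_height + gain * abs(r10_offset_deg)
```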
  • the real object position determination module 514 may execute the following third setting method in addition to or in place of the first setting method and / or the second setting method described above.
  • In the third setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the eye position 4 of the viewer acquired from the eye position detection module 508 (or information capable of estimating the eye position), whether or not the real object 300 is within the first determination actual scene area R10 and whether or not it is within the second determination actual scene area R20 in which the virtual image V of the image of the second aspect is displayed.
  • In the third setting method, the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 change according to the eye position 4 of the viewer, and it is determined whether or not the real object 300 is within the first determination actual scene area R10 as appropriately changed and whether or not it is within the second determination actual scene area R20 as appropriately changed. That is, the real object position determination module 514 may include table data, arithmetic programs, and the like for setting the first determination actual scene area R10 and the second determination actual scene area R20 from the eye position 4 of the viewer (or information capable of estimating the eye position) acquired from the eye position detection module 508.
  • The table data is, for example, data associating the eye position 4 of the viewer with the determination values (a position in the left-right direction (X-axis direction) and a position in the up-down direction (Y-axis direction)) for determining whether or not a real object enters the first determination actual scene area R10.
  • FIGS. 14A, 14B, and 14C are diagrams showing changes in the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to changes in the eye position (eye height) 4 of the viewer, as viewed from the left-right direction (X-axis direction) of the vehicle 1.
  • The eye position 4 of the viewer rises gradually in the order of reference numeral 4a shown in FIG. 14A, reference numeral 4b shown in FIG. 14B, and reference numeral 4c shown in FIG. 14C.
  • The real object position determination module 514 changes the first determination actual scene area R10 and the second determination actual scene area R20 according to the eye position 4 of the viewer, and determines whether or not the real object 300 enters the first determination actual scene area R10 as appropriately changed and whether or not it enters the second determination actual scene area R20 as appropriately changed.
  • In FIG. 14A, the first determination actual scene area R10 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the observed eye position 4a (an example of a predetermined position in the eye box 200, and not limited to this), and the line connecting the lower end 150b of the first display area 150 with the observed eye position 4a (an example of a predetermined position in the eye box 200, and not limited to this).
  • The second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
  • As shown in FIG. 14B, when the eye position rises to 4b, the first determination actual scene area R12 is arranged below the first determination actual scene area R11 shown in FIG. 14A.
  • At this time, the real object position determination module 514 expands the range of the second determination actual scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R12 (R22 > R21); in other words, when the eye position 4 moves, the real object position determination module 514 expands the second determination actual scene area R20.
  • As shown in FIG. 14B, when the straight line connecting the eye position 4b and the real object 300 passes within the range of the second determination actual scene area R22, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R22.
  • As shown in FIG. 14C, when the eye position rises further to 4c, the real object position determination module 514 further expands the range of the second determination actual scene area R23 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R13 (R23 > R22). When the straight line connecting the eye position 4c and the real object 300 passes within the range of the second determination actual scene area R23, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R23.
  • In the third setting method, the second determination actual scene area R20 is expanded as the eye position 4 changes and the first determination actual scene area R10 on which the first display area 150 (display area 100) overlaps moves away from the first standard actual scene area R10s.
  • According to this, since the area in which the image of the second aspect is displayed for the real object 300 is expanded, the image of the second aspect can easily make the viewer recognize the real object 300 that has left the area in which the image of the first aspect is displayed. Further, even if the position of the display area 100 differs, the image of the second aspect can make it easier for the viewer to recognize the real object 300 existing in or near the specific first standard actual scene area R10s.
  • the real object position determination module 514 may execute the following fourth setting method in addition to or in place of the first to third setting methods described above.
  • In the fourth setting method, the real object position determination module 514 determines, based on the observation position and/or the predicted position of the real object acquired from the real object position identification module 504 and the posture (for example, tilt angle) of the vehicle 1 acquired from the vehicle ECU 401, whether or not the real object 300 enters the first determination actual scene area R10 and whether or not it enters the second determination actual scene area R20 in which the virtual image V of the image of the second aspect is displayed.
  • In the fourth setting method, the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 change according to the posture of the vehicle 1, and it is determined whether or not the real object 300 is within the first determination actual scene area R10 as appropriately changed and whether or not it is within the second determination actual scene area R20. That is, the real object position determination module 514 may include table data, arithmetic programs, and the like for setting the first determination actual scene area R10 and the second determination actual scene area R20 from the posture of the vehicle 1 (or information capable of estimating the posture of the vehicle 1) acquired from the vehicle ECU 401.
  • The table data is, for example, data associating the posture of the vehicle 1 with the determination values (a position in the left-right direction (X-axis direction) and a position in the up-down direction (Y-axis direction)) for determining whether or not a real object enters the first determination actual scene area R10 (a minimal sketch follows below).
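  • The table data of the fourth setting method might encode a shift like the following sketch; the sign convention (forward tilt lowers the areas) and the small-angle treatment are assumptions.

```python
# Minimal sketch (sign convention assumed): shifting the up-down (Y-axis)
# determination values of an area according to the tilt angle of vehicle 1.
import math

def shift_area_for_tilt(y_min, y_max, tilt_deg, viewing_distance):
    """A forward tilt of the vehicle moves the determination area downward
    in the actual scene by roughly distance * tan(tilt)."""
    dy = viewing_distance * math.tan(math.radians(tilt_deg))
    return y_min - dy, y_max - dy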
  • FIGS. 15A and 15B are diagrams showing changes in the ranges of the first determination actual scene area R10 and the second determination actual scene area R20 according to changes in the tilt angle θt of the vehicle 1, as viewed from the left-right direction (X-axis direction) of the vehicle 1.
  • The tilt angle θt2 shown in FIG. 15B is tilted forward relative to the tilt angle θt1 shown in FIG. 15A.
  • The real object position determination module 514 changes the first determination actual scene area R10 and the second determination actual scene area R20 according to the posture of the vehicle 1, and determines whether or not the real object 300 enters the first determination actual scene area R10 as appropriately changed and whether or not it enters the second determination actual scene area R20 as appropriately changed.
  • In FIG. 15A, the first determination actual scene area R10 is the area between the line connecting the upper end 150a of the first display area 150, which displays the virtual image V10 of the image of the first aspect in the display area 100, with the center 205 of the eye box 200, and the line connecting the lower end 150b of the first display area 150 with the center 205 of the eye box 200, and the second determination actual scene area R20 is an area of a predetermined range adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R10.
  • As shown in FIG. 15B, when the vehicle 1 tilts forward, the first determination actual scene area R12 is arranged below the first determination actual scene area R11.
  • At this time, the real object position determination module 514 expands the range of the second determination actual scene area R22 adjacent to the upper side (Y-axis positive direction) of the first determination actual scene area R12 (R22 > R21); in other words, when the position of the display area 100 deviates from a predetermined position, the real object position determination module 514 expands the second determination actual scene area R20.
  • When the straight line connecting the center 205 of the eye box 200 and the real object 300 passes within the range of the second determination actual scene area R22, the real object position determination module 514 determines that the real object 300 is in the second determination actual scene area R22.
  • In the fourth setting method, when the first determination actual scene area R11 on which the first display area 150 (display area 100) overlaps in FIG. 15A is used as the reference, the second determination actual scene area R20 is expanded as the position of the display area 100 shifts, as shown in FIG. 15B, and the first determination actual scene area R10 on which the first display area 150 (display area 100) overlaps moves away from the first standard actual scene area R10s.
  • According to this, since the area in which the image of the second aspect is displayed for the real object 300 is expanded, the image of the second aspect can easily make the viewer recognize the real object 300 that has left the area in which the image of the first aspect is displayed. Further, even if the position of the display area 100 differs, the image of the second aspect can make it easier for the viewer to recognize the real object 300 existing in or near the specific first standard actual scene area R10s.
  • An example of the expansion setting of the second determination actual scene area R20 performed by the actual scene area division module 516 will be described with reference to FIGS. 16A, 16B, 16C, and 16D.
  • FIG. 16A is the same as FIG. 13B: taking the position of the first display area 150 in FIG. 13A as the reference display area, it shows the second determination actual scene area R22 expanded when the first display area 151 is arranged below the reference display area.
  • FIG. 16B shows a mode in which the second determination actual scene area R20 is expanded further in the same situation as in FIG. 16A. Specifically, a part of the expanded second determination actual scene area R22 overlaps a part of the second standard actual scene area R20s. That is, in one embodiment, the actual scene area division module 516 expands and sets the second determination actual scene area R20 so as to overlap a part of the second standard actual scene area R20s serving as the reference.
  • FIG. 16C shows a mode in which the second determination actual scene area R20 is expanded further in the same situation as in FIG. 16B. Specifically, the expanded second determination actual scene area R23 includes the entire second standard actual scene area R20s. That is, in one embodiment, the actual scene area division module 516 expands and sets the second determination actual scene area R20 so as to include the entire second standard actual scene area R20s serving as the reference.
  • FIG. 16D shows a mode in which the second determination actual scene area R20 is expanded further still in the same situation as in FIG. 16C.
  • Specifically, the expanded second determination actual scene area includes the entire second standard actual scene area R20s and a wider range. That is, in one embodiment, the actual scene area division module 516 expands and sets the second determination actual scene area R20 so as to include the entire second standard actual scene area R20s and a wider range (the three variants are sketched below).
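  • The three enlargement variants can be sketched as follows, with areas reduced to vertical (bottom, top) ranges; the helper and the widening factor are hypothetical.

```python
# Minimal sketch (helper hypothetical): the enlargement variants of FIGS. 16B-16D,
# expressed as the upper reach given to the enlarged R20 relative to the second
# standard actual scene area R20s = (bottom, top).
def enlarge_r20(current_top, r20s, mode):
    bottom, top = r20s
    if mode == "overlap_part":        # FIG. 16B: reach into part of R20s
        return max(current_top, (bottom + top) / 2.0)
    if mode == "include_whole":       # FIG. 16C: cover the whole of R20s
        return max(current_top, top)
    if mode == "include_whole_plus":  # FIG. 16D: cover R20s and a wider range
        return max(current_top, top + 0.5 * (top - bottom))
    return current_top                # no further enlargement
```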
  • FIGS. 17A to 17F are diagrams schematically showing the positional relationship between the first determination actual scene area R10 and the second determination actual scene area R20 when facing forward from the eye box 200.
  • In FIG. 17A, the first display area 150 has the illustrated shape, but is not limited thereto.
  • In FIG. 17A, the second determination actual scene area R20 is a region that includes an area adjacent to the left of the left end of the first determination actual scene area R10, an area adjacent to the right of the right end of the first determination actual scene area R10, and an area adjacent to the upper side of the upper end of the first determination actual scene area R10, and that is recessed on the lower side where the first determination actual scene area R10 is located.
  • Although the second determination actual scene area R20 is shown as narrow, it is preferably a wider range (the same applies to FIGS. 17A to 17F).
  • As shown in FIG. 17B, the second determination actual scene area R20 may, in addition to the areas in FIG. 17A, further include an area adjacent to the lower side of the lower end of the first determination actual scene area R10, that is, it may be a region hollowed out where the first determination actual scene area R10 is located.
  • The second determination actual scene area R20 does not have to include an area adjacent to the left of the left end of the first determination actual scene area R10 or an area adjacent to the right of the right end of the first determination actual scene area R10.
  • Also, the second determination actual scene area R20 may be composed of a plurality of separated areas.
  • In the examples so far, the display area 100 and the first display area 150 for displaying the virtual image V10 of the image of the first aspect are shown coinciding with each other, but the present invention is not limited thereto.
  • The first display area 150 can be smaller than the display area 100.
  • In this case, the second determination actual scene area R20 can be set as a region that includes an area adjacent to the left of the left end of the first determination actual scene area R10, an area adjacent to the right of the right end of the first determination actual scene area R10, and an area adjacent to the upper side of the upper end of the first determination actual scene area R10, with a recess on the lower side where the first determination actual scene area R10 is located; part of the second determination actual scene area R20 adjacent to the first determination actual scene area R10 may then be arranged within the display area 100.
  • In the examples described above, the first determination actual scene area R10 and the second determination actual scene area R20 are adjacent to each other, but the present invention is not limited to these.
  • The first display area 150 can be smaller than the display area 100.
  • The second determination actual scene area R20 can be set as a region that includes an area not adjacent to the left of the left end of the first determination actual scene area R10, an area not adjacent to the right of the right end of the first determination actual scene area R10, and an area not adjacent to the upper side of the upper end of the first determination actual scene area R10, with a recess on the lower side where the first determination actual scene area R10 is located.
  • Alternatively, the second determination actual scene area R20 may be set as a region that includes an area adjacent to the left of the left end of the first determination actual scene area R10, an area adjacent to the right of the right end of the first determination actual scene area R10, and an area not adjacent to the upper side of the upper end of the first determination actual scene area R10, with a recess on the lower side. That is, the first determination actual scene area R10 and the second determination actual scene area R20 may be adjacent to each other only in part and not adjacent in other parts (an area that is neither the first determination actual scene area R10 nor the second determination actual scene area R20 may be included between them).
  • FIGS. 17F and 17G may be modified so that the display area 100 and the first display area 150 for displaying the virtual image V10 of the image of the first aspect coincide.
  • Based on the determination by the real object position determination module 514, the image type setting module 518 sets the image of the first aspect for a real object entering the first determination actual scene area R10, and sets the image of the second aspect for a real object entering the second determination actual scene area R20.
  • The image type setting module 518 may also determine (change) the type of image to be displayed for a real object based on, for example, the type and position of the real object detected by the real object information detection module 502, the type and number of pieces of real object-related information detected by the real object information detection module 502, and/or the magnitude of the notification necessity detected (estimated) by the notification necessity determination module 506.
  • The image type setting module 518 may increase or decrease the types of images to be displayed depending on the determination result by the line-of-sight direction determination module 524 described later. Specifically, when the real object 300 is in a state where it is difficult for the viewer to visually recognize it, the number of types of images visually recognized near the real object may be increased.
  • The image arrangement setting module 520 determines the coordinates of the virtual image V (including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1) based on the position (observation position or predicted position) of the real object 300 specified by the real object position identification module 504, so that the virtual image V is visually recognized in a specific positional relationship with the real object 300. In addition to this, the image arrangement setting module 520 may set the position in the front-rear direction (Z-axis direction) when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1, based on the determined position of the real object 300 set by the real object position identification module 504.
  • Further, the image arrangement setting module 520 adjusts the position of the virtual image V based on the eye position of the viewer detected by the eye position detection unit 411. For example, the image arrangement setting module 520 determines the left-right and up-down positions of the virtual image V so that the content of the virtual image V is visually recognized in the region (road surface 310) between the lane markings 311 and 312.
  • The image arrangement setting module 520 can also set the angle of the virtual image V (the pitching angle about the X direction, the yaw angle about the Y direction, and the rolling angle about the Z direction).
  • The angle of the virtual image V is a preset angle and can be set so that the virtual image V is parallel to the front-rear and left-right directions (XZ plane) of the vehicle 1 (a minimal projection sketch follows below).
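  • A minimal projection sketch, assuming a pinhole model and a virtual image plane at a fixed depth; all names are illustrative.

```python
# Minimal sketch (pinhole model assumed): projecting the specific position of the
# real object 300 onto the virtual image plane so that the virtual image V keeps a
# specific positional relationship with the real object as seen from the eye position.
def project_to_display(obj_xyz, eye_xyz, display_z):
    """Intersect the eye-to-object line with the display plane at depth display_z."""
    ox, oy, oz = obj_xyz
    ex, ey, ez = eye_xyz
    t = (display_z - ez) / (oz - ez)  # parameter along the eye-to-object line
    return ex + t * (ox - ex), ey + t * (oy - ey)  # (X, Y) on the display plane
```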
  • The image size setting module 522 may change the size of the virtual image V according to the position, shape, and/or size of the corresponding real object 300. For example, the image size setting module 522 can reduce the size of the virtual image V if the corresponding real object 300 is located far away, and can increase the size of the virtual image V if the corresponding real object 300 is large.
  • the image size setting module 522 can determine the size of the virtual image V based on the magnitude of the (estimated) notification necessity detected by the notification necessity determination module 506.
  • The image size setting module 522 may have a function of predicting and calculating the size at which the content of the virtual image V to be displayed in the current display update cycle is displayed, based on the sizes of the real object 300 over a predetermined number of past cycles.
  • For example, the image size setting module 522 may track the pixels of the real object 300 between two past images captured by a camera (an example of the vehicle exterior sensor 407) using, for example, the Lucas-Kanade method, predict the size of the real object 300 in the current display update cycle, and determine the size of the virtual image V according to the predicted size of the real object 300.
  • Alternatively, the rate of change of the size of the real object 300 may be obtained from the change in the size of the real object 300 between the two past captured images, and the size of the virtual image V may be determined according to this rate of change (a minimal extrapolation sketch follows below).
  • The method of estimating the size change of the real object 300 from the viewpoint that changes in time series is not limited to the above; for example, known optical flow estimation algorithms such as the Horn-Schunck method, the Buxton-Buxton method, and the Black-Jepson method may be used.
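  • A minimal extrapolation sketch using only the sizes in the two most recent captured images; the helper is hypothetical and stands in for the tracking described above.

```python
# Minimal sketch (helper hypothetical): extrapolating the apparent size of the real
# object 300 for the current display update cycle from its sizes in the two most
# recent captured images, then scaling the virtual image V by the same rate.
def predict_sizes(size_prev, size_last, base_image_size):
    rate = size_last / size_prev           # rate of change between the two frames
    predicted_obj_size = size_last * rate  # extrapolated one cycle ahead
    return base_image_size * rate, predicted_obj_size
```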
  • The line-of-sight direction determination module 524 determines whether or not the viewer of the vehicle 1 is looking at the virtual image V and/or the real object with which the virtual image V is associated.
  • The line-of-sight direction determination module 524 may also detect what the viewer is viewing other than the content of the virtual image V. For example, by comparing the position of the real object 300 existing in the foreground of the vehicle 1 detected by the real object information detection module 502 with the line-of-sight direction of the viewer acquired from the line-of-sight direction detection unit 413, the line-of-sight direction determination module 524 may identify the real object 300 being watched and transmit information identifying the visually recognized real object 300 to the processor 33.
  • the graphic module 526 includes various known software components for performing image processing such as rendering to generate image data and driving the display 21.
  • The graphic module 526 also includes various known software components for changing the type, arrangement (position coordinates, angle), size, display distance (in the case of 3D), and visual effects (for example, brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed image.
  • The graphic module 526 generates image data so that the viewer visually recognizes the image with the type set by the image type setting module 518, at the position coordinates set by the image arrangement setting module 520 (including at least the left-right direction (X-axis direction) and the up-down direction (Y-axis direction) when the viewer looks toward the display area 100 from the driver's seat of the vehicle 1), at the angle set by the image arrangement setting module 520 (the pitching angle about the X direction, the yaw angle about the Y direction, and the rolling angle about the Z direction), and with the size set by the image size setting module 522, and displays the image data on the image display unit 20.
  • The drive module 528 includes various known software components for driving the display 21, driving the light source unit 24, and driving the first actuator 28 and/or the second actuator 29.
  • The drive module 528 drives the liquid crystal display panel 22, the light source unit 24, and the first actuator 28 and the second actuator 29 based on the drive data generated by the display area setting module 512 and the graphic module 526.
  • A method S100 for performing an operation of displaying a virtual image of an image of the first aspect or the second aspect for a real object existing in the real scene outside the vehicle, according to some embodiments, is described below with reference to its flow chart.
  • Method S100 is executed by the image display unit 20 including a display and the display control device 30 that controls the image display unit 20. Some operations in method S100 are optionally combined, some operations are optionally changed, and some operations are optionally omitted.
  • As described below, method S100 provides a method of presenting an image (virtual image) that enhances the viewer's recognition of a real object.
  • In block S110, the display control device 30 sets the range of the first determination actual scene area R10.
  • In some embodiments, the processor 33 of the display control device 30 executes the actual scene area division module 516 and sets the range by reading out the first determination actual scene area R10 stored in advance in the memory 37 (S111). Further, in some embodiments, the processor 33 executes the display area setting module 512 and sets the range of the first determination actual scene area R10 based on the state of the relay optical system (S113), the used area of the display (S115), the eye position of the viewer (S117), the posture of the vehicle 1 (S119), or a combination thereof.
• In block S120, the display control device 30 detects that a predetermined condition for expanding the range of the second determination real scene area R20 is satisfied.
• In some embodiments, the predetermined condition is that the foreground area overlapping the display area 100 when viewed from a predetermined position of the eye box 200 (for example, the center 205), or when viewed from the viewer's eye position 4, is estimated to deviate from the first standard real scene area R10s.
• The display control device 30 estimates this deviation from the state of the relay optical system (S122), the used area of the display (S124), the eye position of the viewer (S126), the posture of the vehicle 1 (S128), or a combination thereof; a sketch of this check is given below.
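A sketch of this check, reusing the angular-bounds representation from the previous sketch; the 0.5-degree threshold is an arbitrary illustrative value.

    def expansion_condition_met(r10_current, r10_standard, threshold_deg=0.5):
        """S120: the predetermined condition holds when the area now
        overlapping the display area 100 deviates from the standard R10s."""
        return any(abs(r10_current[k] - r10_standard[k]) > threshold_deg
                   for k in ("top", "bottom", "left", "right"))

    r10s = {"top": 1.0, "bottom": -3.0, "left": -10.0, "right": 10.0}
    r10 = {"top": 1.8, "bottom": -2.2, "left": -10.0, "right": 10.0}
    print(expansion_condition_met(r10, r10s))  # -> True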
• In block S130, the display control device 30 expands the range of the second determination real scene area R20 when the predetermined condition is satisfied in S120.
• Specifically, the processor 33 of the display control device 30, by the real scene area division module 516, executes one of the following: expanding the range of the second determination real scene area R20 from its standard range (S132), expanding it so as to overlap a part of the second standard real scene area R20s (S134), expanding it so as to include the entire second standard real scene area R20s (S136), or expanding it so as to include the entire second standard real scene area R20s and a wider range (S138).
• The display control device 30 may vary the degree of expansion of the second determination real scene area R20 for each type of real object 300 acquired by the real object information detection module 502; for example, in some embodiments the degree of expansion differs between a traveling lane, an obstacle, and a feature. In some embodiments, when the predetermined condition is not satisfied in S120, the display control device 30 sets the range of the second determination real scene area R20 to the second standard real scene area R20s stored in advance in the memory 37, referenced to the first determination real scene area R10 set in block S110. A sketch of the expansion variants is given below.
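The sketch below illustrates the four expansion variants S132-S138 and the per-type variation, treating each area as a vertical interval (bottom, top) in degrees; the per-type margin values are hypothetical.

    def expand_r20(r20, r20s, strategy, object_type=None):
        """S130: expand the second determination area R20 relative to the
        second standard area R20s by one of the variants S132-S138."""
        margin = {"lane": 1.0, "obstacle": 2.0, "feature": 0.5}.get(object_type, 1.0)
        bottom, top = r20
        s_bottom, s_top = r20s
        if strategy == "S132":   # widen from the standard range
            return (bottom - margin, top + margin)
        if strategy == "S134":   # extend to overlap a part of R20s
            return (bottom, max(top, s_bottom + margin))
        if strategy == "S136":   # extend to include all of R20s
            return (min(bottom, s_bottom), max(top, s_top))
        if strategy == "S138":   # include all of R20s and a wider range
            return (min(bottom, s_bottom), max(top, s_top) + margin)
        return (bottom, top)

    print(expand_r20((1.0, 4.0), (1.5, 6.0), "S136", object_type="obstacle"))  # -> (1.0, 6.0)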
• In block S140, the display control device 30 acquires the position of the real object by executing the real object position specifying module 504.
• Next, the display control device 30 determines whether the position of the real object acquired in block S140 falls within the first determination real scene area R10 set in block S110 and whether it falls within the second determination real scene area R20 set in block S130.
• Specifically, the processor 33 of the display control device 30 executes the real object position determination module 514 and determines whether the position of the real object acquired from the real object position specifying module 504 falls within the first determination real scene area R10 set in block S110 and whether it falls within the second determination real scene area R20 set in block S130.
• The image type setting module 518 then sets the image corresponding to the real object to the first aspect or the second aspect according to these determinations, and the image display unit 20 displays the image (virtual image) (blocks S152, S154); a compact sketch of this selection is given below.
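A compact sketch of this determination and selection, again with areas as angular bounds; returning None for an object outside both areas is an assumption about behaviour the text does not spell out.

    def in_area(pos, area):
        az, el = pos  # azimuth and elevation of the real object, in degrees
        return (area["left"] <= az <= area["right"]
                and area["bottom"] <= el <= area["top"])

    def choose_aspect(obj_pos, r10, r20):
        """Pick the first aspect inside R10, the second aspect inside R20
        (blocks S152/S154), otherwise no image for this real object."""
        if in_area(obj_pos, r10):
            return "first_aspect"   # e.g. virtual image V10 near the object
        if in_area(obj_pos, r20):
            return "second_aspect"  # e.g. virtual image V20/V30 at the display edge
        return None

    r10 = {"left": -10.0, "right": 10.0, "bottom": -3.0, "top": 1.0}
    r20 = {"left": -10.0, "right": 10.0, "bottom": 1.0, "top": 6.0}
    print(choose_aspect((2.0, 3.0), r10, r20))  # -> second_aspect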
• The operations of the processing described above can be performed by executing one or more functional modules of an information processing device, such as a general-purpose processor or an application-specific chip. These modules, combinations of these modules, and/or combinations with known hardware capable of replacing their functions are all within the scope of protection of the present invention.
• The functional blocks of the vehicle display system 10 are optionally implemented by hardware, software, or a combination of hardware and software in order to carry out the principles of the various described embodiments.
• Those skilled in the art will understand that the functional blocks described in FIG. 11 may optionally be combined, or that one functional block may be separated into two or more sub-blocks, in order to implement the principles of the described embodiments. Accordingly, the description herein optionally supports any possible combination or division of the functional blocks described herein.
• As described above, the display control device 30 of the present embodiment controls the image display unit 20, which displays a virtual image V of an image in the display area 100 overlapping the foreground as viewed from the eye box 200 in the vehicle.
• The display control device 30 comprises one or more I/O interfaces 31 capable of acquiring information, one or more processors 33, a memory 37, and one or more computer programs stored in the memory 37 and configured to be executed by the one or more processors 33. The one or more I/O interfaces 31 acquire at least one of the position of a real object present around the vehicle, the position of the display area 100, the observer's eye position 4 within the eye box 200, the posture of the vehicle, or information from which these can be estimated.
• The one or more processors 33 execute instructions that display a virtual image V10 of an image of the first aspect corresponding to the real object when the position of the real object is within the first determination real scene area R10, and that display a virtual image V20 (V30) of an image of the second aspect corresponding to the real object when the position of the real object is within the second determination real scene area R20.
• Based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated, the one or more processors 33 execute instructions that set, as the first determination real scene area R10, the foreground area overlapping at least a part of the display area 100 as viewed from the eye box 200, and that set the second determination real scene area R20 so as to include the foreground area visible above the first determination real scene area R10 as viewed from the eye box 200.
• In some embodiments, the one or more processors 33 execute an instruction that sets a part of the first determination real scene area R10 and a part of the second determination real scene area R20 adjacent to each other.
• In some embodiments, the memory 37 stores a specific area of the foreground as the first standard real scene area R10s, and the one or more processors 33 execute an instruction that expands the range of the second determination real scene area R20 when, based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated, the foreground area overlapping at least a part of the display area 100 as viewed from the eye box 200 is estimated to deviate from the first standard real scene area R10s.
• In some embodiments, the memory 37 stores a specific area of the foreground as the first standard real scene area R10s, and the one or more processors 33 set, as the first determination real scene area R10, the foreground area overlapping at least a part of the display area 100 as viewed from the eye box 200, based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated.
• In some embodiments, the one or more processors 33 execute an instruction that changes the enlargement width of the range of the second determination real scene area R20 based on at least one of the position of the display area 100, the eye position 4, the posture of the vehicle, or information from which these can be estimated.
• In some embodiments, the one or more processors 33 execute an instruction that displays the virtual image V20 (V30) of the image of the second aspect in the outer edge region 110 of the display area 100.
• In some embodiments, the position of the real object acquired by the one or more I/O interfaces 31 includes a position in the left-right direction as the foreground is faced from the eye box 200, and the one or more processors 33 execute an instruction that moves the left-right position, as seen from the eye box 200, of the virtual image V20 (V30) of the image of the second aspect so as to follow the left-right position of the real object; a one-function sketch is given below.
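A one-function sketch of this horizontal following, clamping the second-aspect image to the horizontal extent of the display area 100; the bound parameters are assumptions.

    def follow_object_horizontally(obj_azimuth_deg, display_left_deg, display_right_deg):
        """Track the real object's left-right position with V20 (V30) while
        keeping the image inside the display area."""
        return max(display_left_deg, min(display_right_deg, obj_azimuth_deg))

    print(follow_object_horizontally(12.5, -10.0, 10.0))  # clamped to 10.0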
• In some embodiments, the memory 37 stores a specific area of the foreground as the second standard real scene area R20s, and the one or more processors 33 execute an instruction that expands the range of the second determination real scene area R20 so as to include at least a part of the second standard real scene area R20s.
• In some embodiments, the memory 37 stores a specific area of the foreground as the second standard real scene area R20s, and the one or more processors 33 execute an instruction that expands the range of the second determination real scene area R20 so as to include the entire second standard real scene area R20s.
1: Vehicle
2: Front windshield
4: Eye position
10: Vehicle display system
20: HUD device (image display unit)
21: Display
21a: Display surface
22: Liquid crystal display panel
23: Virtual image
24: Light source unit
25: Relay optical system
26: First mirror
27: Second mirror
30: Display control device
31: I/O interface
33: Processor
35: Image processing circuit
37: Memory
40: Display light
40p: Optical axis
41: First image light
42: Second image light
43: Third image light
90: Virtual image optical system
100: Display area
101: Upper end
102: Lower end
110: Outer edge area
120: Fixed area
150: First display area
150a: Upper end
150b: Lower end
151, 152: First display area
200: Eye box
205: Center
300: Real object
502: Real object information detection module
504: Real object position specifying module
506: Notification necessity determination module
508: Eye position detection module
510: Vehicle attitude detection module
512: Display area setting module
514: Real object position determination module
516: Real scene area division module
518: Image type setting module
520: Image arrangement setting module
522: Image size setting module
524: Line-of-sight direction determination module
526: Graphic module
528: Drive module

Abstract

The invention makes it possible to easily recognize information relating to a real object even when the eye position or the position of the display region of a virtual image changes. In the present invention, a processor: causes a virtual image V10 of a first-aspect image corresponding to a real object to be displayed when the position of the real object is within a first determination real scene area R10; causes a virtual image V20 (V30) of a second-aspect image corresponding to the real object to be displayed when the position of the real object is within a second determination real scene area R20; and expands the range of the second determination real scene area R20 on the basis of at least one of the position of a display area 100, the eye position 4 of an observer, the posture of a vehicle, or information from which any of these can be estimated.
PCT/JP2020/048680 2019-12-27 2020-12-25 Display control device, head-up display device, and method WO2021132555A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021567664A JP7459883B2 (ja) 2019-12-27 2020-12-25 Display control device, head-up display device, and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019238220 2019-12-27
JP2019-238220 2019-12-27

Publications (1)

Publication Number Publication Date
WO2021132555A1 (fr) 2021-07-01

Family

ID=76575998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/048680 WO2021132555A1 (fr) 2019-12-27 2020-12-25 Dispositif de commande d'affichage, dispositif d'affichage tête haute et procédé

Country Status (2)

Country Link
JP (1) JP7459883B2 (fr)
WO (1) WO2021132555A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015004784A1 (fr) * 2013-07-11 2015-01-15 Toyota Motor Corporation Vehicle information display device and vehicle information display method
JP2016222061A (ja) * 2015-05-28 2016-12-28 Nippon Seiki Co., Ltd. Vehicle display system
JP2018146912A (ja) * 2017-03-09 2018-09-20 Clarion Co., Ltd. In-vehicle display device and in-vehicle display method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230306692A1 (en) * 2022-03-24 2023-09-28 Gm Global Technlology Operations Llc System and method for social networking using an augmented reality display
US11798240B2 (en) * 2022-03-24 2023-10-24 GM Global Technology Operations LLC System and method for social networking using an augmented reality display
CN115202476A (zh) * 2022-06-30 2022-10-18 泽景(西安)汽车电子有限责任公司 显示图像的调整方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
JP7459883B2 (ja) 2024-04-02
JPWO2021132555A1 (fr) 2021-07-01


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20906593; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021567664; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20906593; Country of ref document: EP; Kind code of ref document: A1)