WO2020040276A1 - Display device - Google Patents

Display device

Info

Publication number
WO2020040276A1
WO2020040276A1 (PCT/JP2019/032968)
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
area
vehicle
displayed
Prior art date
Application number
PCT/JP2019/032968
Other languages
French (fr)
Japanese (ja)
Inventor
成美 的場
匠 佐藤
Original Assignee
日本精機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本精機株式会社 filed Critical 日本精機株式会社
Priority to JP2020538477A priority Critical patent/JPWO2020040276A1/en
Publication of WO2020040276A1 publication Critical patent/WO2020040276A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G 3/02 - G09G 3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 35/00 Arrangement of adaptations of instruments
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R 11/02 Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/02 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/38 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • the present invention relates to a display device.
  • Patent Literature 1 discloses an image display device that displays an image visually recognized by a viewer (mainly, a driver of the vehicle) as a virtual image overlapping a scene in front of the vehicle.
  • such a display device can display, as the virtual image, an image that is visually recognized along the road surface in front of the vehicle (hereinafter referred to as an AR (Augmented Reality) image) and an image that is visually recognized as standing up toward the viewer relative to the AR image (hereinafter referred to as a non-AR image).
  • however, when the AR image and the non-AR image are displayed within a given display area in the same period, simply displaying both of them may annoy the viewer because the information becomes mixed.
  • the present invention has been made in view of the above circumstances, and it is an object of the present invention to provide a display device that can display both an AR image and a non-AR image in an easily viewable manner when they are displayed in the same period.
  • to achieve the above object, a display device according to the present invention is a display device that displays a superimposed image visually recognized by a viewer as a virtual image overlapping the scene in front of a vehicle, and includes display control means that performs display control of the superimposed image and can display, within a display area of the superimposed image, a first image visually recognized along the road surface ahead of the vehicle and a second image visually recognized as standing up toward the viewer relative to the first image.
  • when the first image and the second image are displayed in the same period, the display control means displays the first image in a first area within the display area, displays the second image in a second area adjacent to the first area, displays the boundary portion between the first area and the second area in a gradation form, and displays a linear image that is visually recognized linearly along the boundary portion.
  • FIG. 1 is a diagram illustrating how a head-up display (HUD) device according to one embodiment of the present invention is mounted on a vehicle.
  • FIG. 2 is a block diagram of a vehicle display system.
  • FIGS. 3(a) to 3(c) are diagrams illustrating configuration examples of the virtual image in the simultaneous display mode.
  • FIG. 4 is a table showing examples of linear image patterns.
  • FIGS. 5(a) to 5(c) are diagrams showing virtual image display examples corresponding to the linear image patterns.
  • FIGS. 6(a) to 6(c) are diagrams showing virtual image display examples corresponding to the linear image patterns.
  • FIGS. 7(a) to 7(c) are diagrams showing virtual image display examples corresponding to the linear image patterns.
  • FIGS. 8(a) and 8(b) are diagrams illustrating a shadow image.
  • FIGS. 9(a) and 9(b) are diagrams illustrating configuration examples of a virtual image according to a modification.
  • the display device according to the present embodiment is the HUD (Head-Up Display) device 10 included in the vehicle display system 100 shown in FIG. 2.
  • as shown in FIG. 1, the HUD device 10 is provided inside the dashboard 2 of the vehicle 1, and notifies the driver 5, in an integrated manner, not only of information on the vehicle 1 (hereinafter referred to as vehicle information) but also of information other than the vehicle information.
  • the vehicle information includes not only information on the vehicle 1 itself but also information on the outside of the vehicle 1 related to the operation of the vehicle 1.
  • the vehicle display system 100 is a system configured within the vehicle 1, and includes the HUD device 10, a peripheral information acquisition unit 40, a forward information acquisition unit 50, a car navigation device 60, an ECU (Electronic Control Unit) 70, and an operation unit 80.
  • as shown in FIG. 1, the HUD device 10 emits the display light L toward the combiner-treated windshield 3.
  • the display light L reflected by the windshield 3 travels to the driver 5 side.
  • the driver 5 can visually recognize an image represented by the display light L as a virtual image V in front of the windshield 3 by placing the viewpoint in the eye box 4. That is, the HUD device 10 displays the virtual image V at a position in front of the windshield 3. Thereby, the driver 5 can observe the virtual image V in a manner superimposed on the front scenery.
  • the HUD device 10 includes the display unit 20 and the control device 30 illustrated in FIG. 2, and a reflection unit (not illustrated).
  • the display unit 20 displays a superimposed image visually recognized by the driver 5 as the virtual image V under the control of the control device 30.
  • the display unit 20 has, for example, a TFT (Thin Film Transistor) type LCD (Liquid Crystal Display) and a backlight that illuminates the LCD from behind.
  • the backlight is composed of, for example, an LED (Light Emitting Diode).
  • the display unit 20 generates the display light L by the LCD illuminated by the backlight displaying an image under the control of the control device 30.
  • the generated display light L is emitted toward the windshield 3 after being reflected by the reflector.
  • the reflector is composed of, for example, two mirrors, a folding mirror and a concave mirror.
  • the folding mirror folds back the display light L emitted from the display unit 20 and directs it to the concave mirror.
  • the concave mirror reflects the display light L from the folding mirror toward the windshield 3 while expanding the display light L.
  • the virtual image V visually recognized by the driver 5 is an enlarged image of the image displayed on the display unit 20. Note that the number of mirrors constituting the reflecting section can be arbitrarily changed according to the design.
  • the display unit 20 is not limited to one using an LCD as long as it can display a superimposed image, and may use a display device such as an OLED (Organic Light Emitting Diodes), a DMD (Digital Micromirror Device), or an LCOS (Liquid Crystal On Silicon).
  • the control device 30 includes a microcomputer that controls the entire operation of the HUD device 10, and includes a control unit 31, a read only memory (ROM) 32, and a random access memory (RAM) 33.
  • the control device 30 includes a drive circuit and an input / output circuit for communicating with various systems in the vehicle 1 as a configuration (not shown).
  • the ROM 32 stores an operation program and various image data in advance.
  • the RAM 33 temporarily stores various calculation results and the like.
  • the control unit 31 includes a CPU (Central Processing Unit) 31a that executes an operation program stored in a ROM 32, and a GDC (Graphics Display Controller) 31b that executes image processing in cooperation with the CPU.
  • the GDC 31b includes, for example, a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), and an ASIC (Application Specific Integrated Circuit).
  • the ROM 32 stores an operation program for performing display control described later.
  • the configurations of the control device 30 and the control unit 31 are arbitrary as long as the functions described below are satisfied.
  • the control unit 31 controls the driving of the display unit 20.
  • the control unit 31 drives and controls the backlight of the display unit 20 with the CPU 31a, and drives and controls the LCD of the display unit 20 with the GDC 31b that operates in cooperation with the CPU 31a.
  • the CPU 31a of the control unit 31 controls the superimposed image based on various image data stored in the ROM 32 in cooperation with the GDC 31b.
  • the GDC 31b determines the control content of the display operation of the display unit 20 based on the display control command from the CPU 31a.
  • the GDC 31b reads, from the ROM 32, image part data necessary to compose one screen to be displayed on the display unit 20, and transfers the image part data to the RAM 33.
  • the GDC 31b uses the RAM 33 to create picture data for one screen based on image part data and various image data received from the outside of the HUD device 10 by communication.
  • when the GDC 31b completes the picture data for one screen in the RAM 33, it transfers the picture data to the display unit 20 in synchronization with the image update timing.
  • the superimposed image visually recognized by the driver 5 as the virtual image V is displayed on the display unit 20.
  • a layer is assigned in advance to each image constituting the image visually recognized as the virtual image V, and the control unit 31 can perform individual display control of each image.
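  • as a rough illustration of the layer-based composition described above, the following Python sketch composes per-image layers into one screen's picture data and hands it off at the update timing (all names such as Layer, compose_frame, and transfer are illustrative assumptions; the patent does not specify an implementation):

```python
# Illustrative sketch of per-layer frame composition; not the patent's firmware.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str            # e.g. "first_image_C1", "second_image_C2", "linear_image_E"
    z: int               # stacking order, enabling individual display control
    visible: bool = True
    pixels: bytes = b""  # image part data read from ROM into working RAM

def compose_frame(layers):
    """Overlay the visible layers in z order into one screen's picture data."""
    frame = bytearray()
    for layer in sorted(layers, key=lambda l: l.z):
        if layer.visible:
            frame += layer.pixels  # stand-in for actual pixel blending
    return bytes(frame)

def update_display(layers, transfer):
    """Build the frame in working memory, then hand it off in sync with the
    image update timing ('transfer' stands in for the GDC-to-LCD transfer)."""
    transfer(compose_frame(layers))
```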
  • the control unit 31 also controls the display position of a content image (such as the first image C1 and the second image C2 described below and shown in FIGS. 3(a) and 3(b)) within the display area of the virtual image V, and the display distance of the content image.
  • note that the solid-line frame denoted by the virtual image V in FIGS. 3(a) to 3(c) and elsewhere indicates the display area of the virtual image V, and the content image is an image visually recognized as a virtual image within that display area. This is equivalent to the content image being displayed within the display area of the superimposed image on the display unit 20, which displays the superimposed image visually recognized by the driver 5 as the virtual image V.
  • the control of the display position means that the control unit 31 controls the display unit 20 so that an image is displayed at an arbitrary position in the display area of the virtual image V when viewed from the driver 5.
  • the control of the display distance is control of how far ahead of the vehicle 1 the content image appears to the driver 5. That is, the control of the display distance means that the control unit 31 controls the display unit 20 so that the content image is visually recognized at a position a predetermined distance ahead of the vehicle 1.
  • the control of the display distance is realized by the control unit 31 being able to display the content image within the display area of the virtual image V in a manner that has depth.
  • for example, to display the content image in such a manner, 3D image data is stored in the ROM 32 in advance, and drawing of the content image is controlled using a perspective method (for example, one-point perspective) based on the position of the object on which the image is to be superimposed and the distance from the vehicle 1 to that object, which can be calculated from object information and the like described later.
  • alternatively, by tilting the display surface of the display unit 20 (the LCD display surface, or the screen when a DMD or LCOS is used), the virtual surface on which the virtual image V is displayed (the surface corresponding to the display surface of the display unit 20) may be set to be inclined forward with respect to the vertical direction of the vehicle 1 so that the content image has depth.
  • in that case, the control unit 31 may control drawing of the content image in a manner that has depth, taking into account the projection, onto the virtual surface, of the image to be visually recognized from the viewpoint of the driver 5.
  • as the viewpoint position of the driver 5, the control unit 31 may use an assumed viewpoint position stored in advance in the ROM 32, or may use gaze information from a viewpoint detection unit (not shown) that detects the viewpoint position of the driver 5.
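  • the following is a minimal sketch of the one-point perspective idea used for the display distance control above: a point on the road a given distance ahead is projected onto a nearer image plane by similar triangles. The eye height and image plane distance are illustrative assumptions, not values from the patent.

```python
# Similar-triangles sketch: a road point dist_m ahead and eye_height_m below
# the eye projects onto a vertical image plane image_plane_m away at a
# proportionally smaller drop below eye level. All values are assumptions.
def project_road_point(dist_m, eye_height_m=1.2, image_plane_m=2.5):
    """Vertical offset (metres below eye level) at which a road point
    dist_m ahead appears on the image plane."""
    return eye_height_m * image_plane_m / dist_m

# Farther road points project nearer the horizon, which is what makes an
# image drawn this way appear to lie along the road surface:
for d in (10.0, 20.0, 40.0):
    print(d, round(project_road_point(d), 3))
```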
  • because the display position and the display distance of content images can be controlled as described above, the control unit 31 can display the first image C1 and the second image C2, described next, within the display area of the virtual image V.
  • for example, as display modes of the virtual image V, a "single display mode" in which one of the first image C1 and the second image C2 is displayed in the display area of the virtual image V but the other is not, and a "simultaneous display mode" in which both the first image C1 and the second image C2 are displayed simultaneously (in the same period) in the display area of the virtual image V are provided.
  • the control unit 31 switches from the current display mode to another display mode in response to the operation unit 80 receiving the display mode switching operation by the driver 5.
  • the first image C1 and the second image C2 will be described.
  • the first image C1 is an AR (Augmented Reality) image visually recognized along the road surface ahead of the vehicle 1 in the display area of the virtual image V.
  • the first image C1 is an image displayed at a position corresponding to a forward object described later, and is an image for notifying the presence of the forward object.
  • the forward object is, for example, a preceding vehicle of the vehicle 1 (hereinafter, also referred to as the own vehicle 1).
  • in the illustrated example, the first image C1 is displayed at a position corresponding to a preceding vehicle (not shown).
  • the first image C1 is configured by displaying an exclamation mark in a substantially triangular frame.
  • the first image C1 may take any form as long as it can call attention to the presence of the corresponding forward object; for example, it may be a character, a symbol, a graphic, an icon, or a combination thereof. Further, the "position corresponding to the forward object" at which the first image C1 is displayed is not limited to a position superimposed on the forward object, and may be a position near the forward object.
  • the second image C2 is a non-AR image visually recognized as standing up toward the viewer relative to the first image C1 within the display area of the virtual image V (for example, an image visually recognized as directly facing the viewer).
  • the second image C2 shows a guide route set by the user and map information near the current location of the vehicle 1 (the map information is not shown).
  • the second image C2 is configured to include, for example, a map information image represented in a bird's-eye view mode, and a guide route image (for example, an image showing an arrow) superimposed on the image.
  • the control unit 31 controls display of the second image C2 based on route guidance information from the car navigation device 60 described later.
  • the first image C1 as the AR image is visually recognized by the driver 5 along the road surface ahead in the actual scene.
  • the first image C1 is drawn by the control unit 31 using a perspective method (for example, a one-point perspective drawing method) so as to be along a front road surface specified based on traffic information and forward image data described below.
  • here, an image drawn using the perspective method so as to be visually recognized along the front road surface is defined as having a perspective degree of "100%".
  • an image visually recognized as rising from the front road surface toward the driver 5 and directly facing the driver 5 is defined as having a perspective degree of "0%".
  • degrees between 100% and 0% correspond to how far the visually recognized image rises from the front road surface toward the driver 5.
  • the first image C1 as an AR image is displayed at a perspective degree of 100%
  • the second image C2 as a non-AR image is displayed at a perspective degree of 0%.
  • the first image C1 may be displayed at a perspective degree smaller than 100% as long as the driver 5 can recognize that the image is displayed along the front road surface.
  • the second image C2 may be displayed at a perspective degree higher than 0% as long as the second image C2 stands up to the driver 5 side more than the first image C1 and is visually recognized.
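  • one possible reading of the perspective degree, sketched below, maps it linearly to how far the image plane is tilted away from the road surface. The patent defines only the endpoints (100% along the road, 0% facing the driver), so the linear mapping is an assumption for illustration.

```python
def tilt_from_perspective_degree(degree_percent):
    """100% -> 0 deg (flat along the road, as for the AR image C1);
    0% -> 90 deg (upright, facing the driver, as for the non-AR image C2);
    intermediate degrees rise proportionally from the road surface."""
    if not 0 <= degree_percent <= 100:
        raise ValueError("perspective degree must be between 0 and 100")
    return 90.0 * (1.0 - degree_percent / 100.0)

print(tilt_from_perspective_degree(100))  # 0.0
print(tilt_from_perspective_degree(0))    # 90.0
```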
  • the display area of the virtual image V in the simultaneous display mode has a first area A and a second area B.
  • the first region A and the second region B are adjacent to each other with a boundary line D indicated by a dotted line in FIGS. 3B and 3C interposed therebetween.
  • the boundary line D shown by the dotted line in the figure is only for explaining the shapes of the first region A and the second region B, and does not need to be visually recognized.
  • the first area A is located above the second area B when viewed from the driver 5.
  • the first image C1 is displayed in the first area A
  • the second image C2 is displayed in the second area B.
  • in the simultaneous display mode, a boundary portion G (the portion indicated by the broken lines in FIG. 3(c)) including the boundary line D is displayed in a gradation form.
  • specifically, when the color of the background image of the first area A excluding the boundary portion G is a "first color" and the color of the background image of the second area B excluding the boundary portion G is a "second color", a gradation process that changes continuously from one of the first color and the second color to the other is applied at the boundary portion G. Note that the first color and the second color may be any colors as long as they differ from each other.
  • a linear image E (see FIGS. 3A and 3B) that is viewed linearly along the boundary G is displayed.
  • in this embodiment, the linear image E has the shape of an isosceles trapezoid with the side corresponding to the lower base removed, with its legs extending toward the lower corners of the rectangular display area of the virtual image V.
  • the linear image E extends from the center of the virtual image V toward the end, and the end is located below the center when viewed from the driver 5.
  • the linear image E in the center portion extends in the left-right direction when viewed from the driver 5.
  • in this embodiment, the linear image E is displayed within the first area A and does not coincide with the boundary line D.
  • however, the linear image E may be made to coincide with the boundary line D so that it functions as a dividing line between the first area A and the second area B.
  • in this embodiment, linear image patterns P1 to P9 having mutually different forms are prepared as shown in FIG. 4. These patterns will be described later.
  • the CPU 31a of the control unit 31 communicates with each of the peripheral information acquisition unit 40, the forward information acquisition unit 50, the car navigation device 60, the ECU 70, and the operation unit 80.
  • communication systems such as CAN (Controller Area Network), Ethernet, MOST (Media Oriented Systems Transport), and LVDS (Low Voltage Differential Signaling) can be applied.
  • the peripheral information acquisition unit 40 acquires information on the surroundings (the outside) of the vehicle 1, and enables communication between the vehicle 1 and a wireless network (V2N: Vehicle To cellular Network), communication between the vehicle 1 and other vehicles (V2V: Vehicle To Vehicle), communication between the vehicle 1 and pedestrians (V2P: Vehicle To Pedestrian), and communication between the vehicle 1 and roadside infrastructure (V2I: Vehicle To roadside Infrastructure). That is, the peripheral information acquisition unit 40 enables V2X (Vehicle To Everything) communication between the vehicle 1 and the outside of the vehicle 1.
  • specifically, the peripheral information acquisition unit 40 has a communication module that can directly access a WAN (Wide Area Network), or a communication module for communicating with an external device that can access the WAN (such as a mobile router) or with an access point of a public wireless LAN (Local Area Network), and performs Internet communication.
  • the peripheral information acquisition unit 40 includes a GPS controller that calculates the position of the vehicle 1 based on a GPS (Global Positioning System) signal received from an artificial satellite or the like. With these configurations, communication by V2N is enabled.
  • the peripheral information acquisition unit 40 includes a wireless communication module that conforms to a predetermined wireless communication standard, and performs communication using V2V or V2P.
  • the peripheral information acquisition unit 40 also has a communication device that communicates wirelessly with roadside infrastructure, and acquires object information and traffic information via roadside apparatuses installed as infrastructure, for example from base stations of Driving Safety Support Systems (DSSS). This enables communication by V2I.
  • in this embodiment, the peripheral information acquisition unit 40 acquires, by V2I, object information indicating the positions, sizes, attributes, and the like of various objects such as vehicles, traffic lights, and pedestrians existing outside the own vehicle 1, and supplies the object information to the control unit 31.
  • the object information is not limited to V2I, and may be obtained by any communication among V2X.
  • the peripheral information acquisition unit 40 acquires traffic information including the position and shape of the peripheral road of the vehicle 1 by V2I, and supplies the traffic information to the control unit 31. Further, the control unit 31 calculates the position of the own vehicle 1 based on information from the GPS controller of the peripheral information acquisition unit 40.
  • the forward information acquisition unit 50 includes, for example, a stereo camera that captures the scene in front of the vehicle 1, a distance measuring sensor such as a LIDAR (Laser Imaging Detection And Ranging) that measures the distance from the vehicle 1 to an object located ahead, a sonar for detecting objects located in front of the vehicle 1, an ultrasonic sensor, a millimeter wave radar, and the like.
  • the forward information acquisition unit 50 transmits to the CPU 31a forward image data representing the forward scene captured by the stereo camera, data indicating distances to objects measured by the distance measuring sensor, and other detection data.
  • the car navigation device 60 includes a GPS controller that calculates the position of the vehicle 1 based on a GPS (Global Positioning System) signal received from an artificial satellite or the like.
  • the car navigation device 60 has a storage unit that stores map data, reads out the map data around the current position from the storage unit based on position information from the GPS controller, and determines a guide route to the destination set by the user. The car navigation device 60 then outputs information on the current position of the vehicle 1 and the determined guide route to the control unit 31.
  • the car navigation device 60 outputs information indicating the name and type of the facility in front of the vehicle 1 and the distance between the facility and the vehicle 1 to the control unit 31 by referring to the map data.
  • the map data includes road shape information (lane, road width, number of lanes, intersections, curves, branch roads, etc.), regulation information on road signs such as speed limits, and information on each lane when there are multiple lanes. Various information is associated with the position data.
  • the car navigation device 60 outputs these various types of information to the control unit 31 as route guidance information. Note that the car navigation device 60 is not limited to one mounted on the vehicle 1, and may be realized by a mobile terminal having a car navigation function (a smartphone, a tablet PC (Personal Computer), or the like) that communicates with the control unit 31 by wire or wirelessly.
  • the ECU 70 controls each unit of the vehicle 1, and transmits, for example, vehicle speed information indicating the current vehicle speed of the vehicle 1 to the CPU 31a.
  • the CPU 31a may acquire vehicle speed information from a vehicle speed sensor.
  • the ECU 70 transmits to the CPU 31a measurement amounts such as the engine speed, warning information of the vehicle 1 itself (such as low fuel and abnormal engine oil pressure), and other vehicle information.
  • the CPU 31a can display an image indicating the vehicle speed, the engine speed, various warnings, and the like on the display unit 20 via the GDC 31b. That is, in addition to the first image C1 and the second image C2, a vehicle information image indicating vehicle information can be displayed in the display area of the virtual image V.
  • the vehicle information image is displayed at a perspective degree of 0% so that the vehicle information image is visually recognized facing the driver 5, for example.
  • the operation unit 80 receives various operations by the driver 5 and supplies a signal indicating the content of the received operation to the control unit 31. For example, the operation unit 80 receives a switching operation of the display mode of the virtual image V by the driver 5 or the like.
  • for example, when the operation unit 80 receives a switching operation to the simultaneous display mode as the display mode switching operation, the control unit 31 displays the virtual image V in the simultaneous display mode as shown in FIG. 3(a).
  • the display of the virtual image V in the simultaneous display mode may be limited according to the automatic driving level of the vehicle 1.
  • for example, the virtual image V may be displayed in the simultaneous display mode only when the automatic driving level is equal to or higher than level 2 or level 3.
  • the control unit 31 can identify the current automatic driving level by acquiring, for example, operation mode information indicating the current automatic driving level from the ECU 70.
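  • a short sketch of such gating, with the threshold level and the mode names as assumptions (the patent gives level 2 or level 3 only as examples):

```python
# Gate the simultaneous display mode on the automated-driving level reported
# by the ECU. The threshold constant is an assumption.
MIN_LEVEL_FOR_SIMULTANEOUS = 2

def allowed_display_modes(automation_level):
    """Return the display modes permitted at the given driving level."""
    modes = {"single"}
    if automation_level >= MIN_LEVEL_FOR_SIMULTANEOUS:
        modes.add("simultaneous")
    return modes

def switch_mode(requested, automation_level, current):
    """Honor the driver's mode-switch request only if it is permitted."""
    return requested if requested in allowed_display_modes(automation_level) else current

print(switch_mode("simultaneous", 1, "single"))  # stays "single"
print(switch_mode("simultaneous", 3, "single"))  # switches to "simultaneous"
```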
  • the second image C2 is displayed in the second area B according to the guide route to the destination set by the user with the car navigation device 60.
  • the first image C1 is displayed in the first area A in response to the control unit 31 identifying a forward object on which it is to be superimposed. For example, when the control unit 31 identifies, based on the position of an object indicated by the object information acquired by V2I from the roadside infrastructure via the peripheral information acquisition unit 40, that the object is located within the display area of the virtual image V as seen from the driver 5, it determines that a forward object exists.
  • the method for specifying the presence of the forward object is not limited to this.
  • the object information from the peripheral information acquisition unit 40 is not limited to V2I or V2V, and may be acquired by any communication (at least one of V2N, V2V, and V2P) of V2X.
  • alternatively, the control unit 31 may identify the forward object based on information acquired from the forward information acquisition unit 50. Specifically, the forward object may be identified from the forward image data by a known image analysis method such as pattern matching, or may be identified based on detection signals from the distance measuring sensor, the sonar, the ultrasonic sensor, or the millimeter wave radar.
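  • a sketch of the forward-object check described above: a position taken from the object information is projected into driver-view coordinates and tested against the display area of the virtual image V. The projection helper and data shapes are placeholder assumptions:

```python
# Placeholder sketch: determine whether an object from the object information
# falls within the display area of the virtual image V as seen by the driver.
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x, y):
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def forward_object_present(objects, to_view, display_area):
    """objects: positions from V2X object information or sensor detections;
    to_view: assumed projection into the driver's view coordinates."""
    for obj in objects:
        x, y = to_view(obj)
        if display_area.contains(x, y):
            return obj  # the first image C1 would be superimposed on this object
    return None

# Example with an identity projection and one object inside the area:
area = Rect(-1.0, -0.5, 1.0, 0.5)
print(forward_object_present([(0.2, 0.1)], lambda o: o, area))
```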
  • the control unit 31 displays the first image C1 in an AR mode at a position corresponding to the specified front object.
  • next, the linear image patterns (hereinafter also simply referred to as "patterns") will be described with reference to FIGS. 4 to 7.
  • in the virtual image display examples of FIGS. 5 to 7, the first image C1 that can be displayed in the first area A is omitted.
  • the pattern used in the simultaneous display mode may be any predetermined pattern, or may be selected according to the user's preference through a setting operation from the operation unit 80.
  • the ROM 32 stores in advance the various image part data necessary to realize one or more of the patterns shown in FIG. 4 and elsewhere.
  • the patterns P1 to P3 are modes in which the width at the center of the linear image E is wider than the width at the end (portion toward the corner of the rectangular display area of the virtual image V).
  • the pattern P1 is a mode in which the linear image E is displayed in a gradation form assimilating the background color of the virtual image V from the end to the center.
  • the pattern P2 is a mode in which the linear image E is displayed in a gradation form assimilating the background image of the virtual image V from the center to the end.
  • the pattern P3 is a mode in which no gradation processing is performed on the linear image E.
  • the patterns P4 to P6 are modes in which the width at the end of the linear image E (the portion toward the corner of the rectangular display area of the virtual image V) is wider than the width at the center.
  • the pattern P4 is a mode in which the linear image E is displayed in a gradation form assimilating the background color of the virtual image V from the end to the center.
  • the pattern P5 is a mode in which the linear image E is displayed in a gradation form in which the linear image E is assimilated to the background color of the virtual image V from the center to the end.
  • the pattern P6 is a mode in which no gradation processing is performed on the linear image E.
  • the patterns P7 to P9 are modes in which the width of the linear image E is constant (the width at the end is equal to the width at the center).
  • the pattern P7 is a mode in which the linear image E is displayed in a gradation form assimilating the background color of the virtual image V from the end to the center.
  • the pattern P8 is a mode in which the linear image E is displayed in a gradation form in which the linear image E is assimilated to the background color of the virtual image V from the center to the end.
  • the pattern P9 is a mode in which no gradation processing is performed on the linear image E. (The nine patterns are summarized in the illustrative sketch below.)
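  • the nine patterns vary along two axes, the width profile of the linear image E and the direction of its gradation, which can be summarized in a small table (a sketch of FIG. 4; key names and wording are illustrative):

```python
# Table-driven summary of the nine linear-image patterns of FIG. 4;
# the dictionary structure and wording are illustrative, not from the patent.
LINEAR_IMAGE_PATTERNS = {
    "P1": ("center wider than ends", "assimilates to background from ends toward center"),
    "P2": ("center wider than ends", "assimilates to background from center toward ends"),
    "P3": ("center wider than ends", "no gradation"),
    "P4": ("ends wider than center", "assimilates to background from ends toward center"),
    "P5": ("ends wider than center", "assimilates to background from center toward ends"),
    "P6": ("ends wider than center", "no gradation"),
    "P7": ("constant width", "assimilates to background from ends toward center"),
    "P8": ("constant width", "assimilates to background from center toward ends"),
    "P9": ("constant width", "no gradation"),
}

def describe(pattern_id):
    width, gradation = LINEAR_IMAGE_PATTERNS[pattern_id]
    return f"{pattern_id}: width {width}; gradation {gradation}"

print(describe("P5"))
```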
  • note that the "gradation form assimilated to the background color of the virtual image V" refers to a mode in which the linear image E gradually assimilates to the background color of the area in which it is displayed.
  • gradation processing may be performed so that a predetermined portion of the linear image E is assimilated to the background color of the first area A.
  • gradation processing may be performed so that a predetermined portion of the linear image E is assimilated to the background color of the second area B.
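  • a sketch of how such assimilation could be computed per position along the linear image E, blending the line color into the background color of the area it is displayed in (blend weights and colors are illustrative assumptions):

```python
def lerp(c0, c1, t):
    """Linear interpolation between two RGB colors, t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def linear_image_color(s, line_color, background_color, toward_center=True):
    """Color of the linear image E at normalized position s along its length
    (0.0 at an end, 1.0 at the center). With toward_center=True the image
    melts into the background approaching the center (P1/P4/P7 style); with
    False it melts approaching the ends (P2/P5/P8 style)."""
    fade = s if toward_center else 1.0 - s
    return lerp(line_color, background_color, fade)

# The end keeps the line color; the center is fully assimilated (P1 style):
print(linear_image_color(0.0, (255, 255, 255), (0, 0, 80)))
print(linear_image_color(1.0, (255, 255, 255), (0, 0, 80)))
```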
  • the HUD device 10 described above is an example of a display device that displays a superimposed image visually recognized by a viewer (for example, the driver 5) as a virtual image V overlapping a scene in front of the vehicle 1.
  • the HUD device 10 includes a display control unit (for example, the control unit 31 and the display unit 20) that performs display control of the superimposed image and can display, within the display area of the superimposed image (corresponding to the display area of the virtual image V), the first image C1 (an AR image) visually recognized along the road surface ahead of the vehicle 1 and the second image C2 (a non-AR image) visually recognized as standing up toward the viewer relative to the first image C1.
  • when the first image C1 and the second image C2 are displayed in the same period (that is, in the simultaneous display mode), the display control unit displays the first image C1 in the first area A within the display area, displays the second image C2 in the second area B adjacent to the first area A, displays the boundary portion G between the first area A and the second area B in a gradation form, and displays the linear image E visually recognized linearly along the boundary portion G.
  • because the boundary portion G between the first area A and the second area B is displayed in a gradation form, the annoyance that mixed information at the boundary portion G gives the viewer can be reduced. That is, when the AR image and the non-AR image are displayed in the same period, both can be displayed in an easily viewable manner.
  • the linear image E may extend from the center of the superimposed image toward its edge, with the width at the edge smaller than the width at the center (for example, the linear image patterns P1 to P3). In this way, the first image C1 and the second image C2 can be displayed without discomfort while being clearly separated near the center.
  • the linear image E may extend from the center of the superimposed image toward its edge, with the width at the edge larger than the width at the center (for example, the linear image patterns P4 to P6). In this way, when the first image C1 is displayed in the first area A, the first image C1 and the linear image E can be prevented from interfering with each other and annoying the viewer.
  • the linear image E may extend from the center of the superimposed image toward its edge and be displayed in a gradation form assimilated to the background color of the superimposed image from the center toward the edge (for example, the linear image patterns P2, P5, and P8), or in a gradation form assimilated to the background color of the superimposed image from the edge toward the center (for example, the linear image patterns P1, P4, and P7).
  • a shadow image S adjacent to the linear image E may be displayed on the first area A side or the second area B side of the linear image E.
  • in this way, the first area A can be displayed with more emphasis than the second area B, or conversely the second area B with more emphasis than the first area A.
  • the shadow image S is preferably, for example, a drop shadow drawn under the control of the control unit 31, but may be realized using another effect as long as it is visually recognized as a shadow as shown in FIG. 8.
  • the first area A may be located above the second area B as viewed from the viewer, and the linear image E at the center of the superimposed image may extend in the left-right direction as viewed from the viewer.
  • the linear image E extends from the center of the superimposed image toward the end, and the end may be located below the center when viewed from the viewer.
  • for example, as shown in FIG. 9(a), the virtual image V in the simultaneous display mode may be configured such that the boundary line D connects the upper side and the lower side of the rectangular display area and the first area A and the second area B are adjacent to each other in the left-right direction.
  • further, as shown in FIG. 9(b), the virtual image V in the simultaneous display mode may be configured such that the boundary line D is a curved line and the end of the linear image E extends toward the left side or the right side of the rectangular display area of the virtual image V rather than toward the lower side or a corner.
  • the boundary portion G between the first region A and the second region B can be displayed in a gradation form, and a linear image E visually recognized linearly along the boundary portion G can be displayed.
  • the shapes of the first area A and the second area B can be changed according to the purpose.
  • the display color of the linear image E may be a color included in the background color of either the first area A or the second area B, or a color not included in either background color (for example, a dark color such as black that is darker than the background colors).
  • the phrase "the first image C1 (AR image) is visually recognized along the front road surface” is not limited to the perspective level of 100%, but means that the driver 5 can recognize that the image is displayed along the front road surface. If so, the predetermined degree may be smaller than 100%.
  • the forward object whose presence is notified by the first image C1 is not limited to the vehicle, but may be a pedestrian, a bicycle, or another moving object.
  • the projection target of the display light L is not limited to the windshield 3, but may be a combiner including a plate-shaped half mirror, a hologram element, and the like.
  • the display device that displays the virtual image V described above is not limited to the HUD device 10.
  • the display device may be configured as a head mounted display (HMD: Head Mounted Display) mounted on the head of the driver 5 of the vehicle 1. That is, the display device is not limited to the display device mounted on the vehicle 1 and may be any display device used in the vehicle 1.
  • REFERENCE SIGNS LIST: 100 vehicle display system; 10 HUD device; 20 display unit; 30 control device; 31 control unit (31a CPU, 31b GDC); 32 ROM; 33 RAM; 40 peripheral information acquisition unit; 50 forward information acquisition unit; 60 car navigation device; 70 ECU; 80 operation unit; 1 vehicle; 2 dashboard; 3 windshield; 4 eye box; 5 driver; L display light; V virtual image; A first area; B second area; C1 first image; C2 second image; D boundary line; G boundary portion; E linear image; S shadow image

Abstract

Provided is a display device capable of displaying both an AR image and a non-AR image in an easily viewable manner when displaying the two in the same period. The display device displays a superimposed image that is visually recognized by a viewer as a virtual image V overlapping the scene in front of a vehicle. The display device can display a first image C1, an AR image visually recognized along the road surface in front of the vehicle within a display area of the virtual image V, and a second image C2, a non-AR image visually recognized as standing up toward the viewer relative to the first image C1. When displaying the first image C1 and the second image C2 in the same period, the display device displays the first image C1 in a first area A, displays the second image C2 in a second area B adjacent to the first area A, displays a boundary portion G between the first area A and the second area B in gradation, and displays a linear image E that is visually recognized linearly along the boundary portion G.

Description

Display device

The present invention relates to a display device.

As a conventional display device, Patent Literature 1 discloses a device that displays an image visually recognized by a viewer (mainly the driver of a vehicle) as a virtual image overlapping the scene in front of the vehicle.

Patent Literature 1: JP-A-2015-64472

In a display device of this kind, as the virtual image, it is possible to display, for example, an image visually recognized along the road surface in front of the vehicle (hereinafter referred to as an AR (Augmented Reality) image) and an image visually recognized as standing up toward the viewer relative to the AR image (for example, an image visually recognized as directly facing the viewer; hereinafter referred to as a non-AR image). However, when the AR image and the non-AR image are displayed within a given display area in the same period, simply displaying both of them may annoy the viewer because the information becomes mixed.

The present invention has been made in view of the above circumstances, and an object thereof is to provide a display device that can display both an AR image and a non-AR image in an easily viewable manner when displaying them in the same period.
In order to achieve the above object, a display device according to the present invention is
a display device that displays a superimposed image visually recognized by a viewer as a virtual image overlapping the scene in front of a vehicle, comprising
display control means that performs display control of the superimposed image and can display, within a display area of the superimposed image, a first image visually recognized along the road surface ahead of the vehicle and a second image visually recognized as standing up toward the viewer relative to the first image,
wherein, when displaying the first image and the second image in the same period, the display control means displays the first image in a first area within the display area and displays the second image in a second area adjacent to the first area,
displays a boundary portion between the first area and the second area in a gradation form, and
displays a linear image visually recognized linearly along the boundary portion.
According to the present invention, when an AR image and a non-AR image are displayed in the same period, both can be displayed in an easily viewable manner.

BRIEF DESCRIPTION OF THE DRAWINGS: FIG. 1 is a diagram illustrating how a head-up display (HUD) device according to one embodiment of the present invention is mounted on a vehicle. FIG. 2 is a block diagram of a vehicle display system. FIGS. 3(a) to 3(c) are diagrams illustrating configuration examples of the virtual image in the simultaneous display mode. FIG. 4 is a table showing examples of linear image patterns. FIGS. 5(a) to 5(c), 6(a) to 6(c), and 7(a) to 7(c) are diagrams showing virtual image display examples corresponding to the linear image patterns. FIGS. 8(a) and 8(b) are diagrams illustrating a shadow image. FIGS. 9(a) and 9(b) are diagrams illustrating configuration examples of a virtual image according to a modification.
A display device according to an embodiment of the present invention will be described with reference to the drawings.

The display device according to the present embodiment is the HUD (Head-Up Display) device 10 included in the vehicle display system 100 shown in FIG. 2. As shown in FIG. 1, the HUD device 10 is provided inside the dashboard 2 of the vehicle 1 and notifies the driver 5, in an integrated manner, not only of information on the vehicle 1 (hereinafter referred to as vehicle information) but also of information other than the vehicle information. The vehicle information includes not only information on the vehicle 1 itself but also information on the outside of the vehicle 1 related to the operation of the vehicle 1.

The vehicle display system 100 is a system configured within the vehicle 1, and includes the HUD device 10, a peripheral information acquisition unit 40, a forward information acquisition unit 50, a car navigation device 60, an ECU (Electronic Control Unit) 70, and an operation unit 80.

As shown in FIG. 1, the HUD device 10 emits display light L toward the combiner-treated windshield 3. The display light L reflected by the windshield 3 travels toward the driver 5. By placing the viewpoint within the eye box 4, the driver 5 can visually recognize the image represented by the display light L as a virtual image V in front of the windshield 3. That is, the HUD device 10 displays the virtual image V at a position in front of the windshield 3. The driver 5 can thus observe the virtual image V superimposed on the forward scenery.

The HUD device 10 includes the display unit 20 and the control device 30 shown in FIG. 2, and a reflection unit (not shown).
The display unit 20 displays, under the control of the control device 30, a superimposed image visually recognized by the driver 5 as the virtual image V. The display unit 20 has, for example, a TFT (Thin Film Transistor) type LCD (Liquid Crystal Display) and a backlight that illuminates the LCD from behind. The backlight is composed of, for example, LEDs (Light Emitting Diodes). The display unit 20 generates the display light L when the LCD, illuminated by the backlight, displays an image under the control of the control device 30. The generated display light L is reflected by the reflection unit and then emitted toward the windshield 3. The reflection unit is composed of, for example, two mirrors: a folding mirror and a concave mirror. The folding mirror folds back the display light L emitted from the display unit 20 and directs it to the concave mirror. The concave mirror magnifies the display light L from the folding mirror while reflecting it toward the windshield 3. The virtual image V visually recognized by the driver 5 is thus an enlarged version of the image displayed on the display unit 20. The number of mirrors constituting the reflection unit can be changed arbitrarily according to the design.

In the following, the display unit 20 displaying an image visually recognized by the driver 5 as the virtual image V is also referred to as "displaying a superimposed image", and the control device 30 controlling the display of the display unit 20 is also referred to as "performing display control of a superimposed image". The display unit 20 is not limited to one using an LCD as long as it can display a superimposed image, and may use a display device such as an OLED (Organic Light Emitting Diodes), a DMD (Digital Micromirror Device), or an LCOS (Liquid Crystal On Silicon).
The control device 30 consists of a microcomputer that controls the overall operation of the HUD device 10, and includes a control unit 31, a ROM (Read Only Memory) 32, and a RAM (Random Access Memory) 33. The control device 30 also includes, as components not shown, a drive circuit and input/output circuits for communicating with various systems within the vehicle 1.

The ROM 32 stores an operation program and various image data in advance. The RAM 33 temporarily stores various calculation results and the like. The control unit 31 includes a CPU (Central Processing Unit) 31a that executes the operation program stored in the ROM 32, and a GDC (Graphics Display Controller) 31b that executes image processing in cooperation with the CPU 31a. The GDC 31b is composed of, for example, a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or the like. In particular, the ROM 32 stores an operation program for performing the display control described later. The configurations of the control device 30 and the control unit 31 are arbitrary as long as they fulfill the functions described below.
The control unit 31 drives and controls the display unit 20. For example, the control unit 31 drives and controls the backlight of the display unit 20 with the CPU 31a, and drives and controls the LCD of the display unit 20 with the GDC 31b operating in cooperation with the CPU 31a.

The CPU 31a of the control unit 31, in cooperation with the GDC 31b, controls the superimposed image based on the various image data stored in the ROM 32. The GDC 31b determines the control content of the display operation of the display unit 20 based on display control commands from the CPU 31a. The GDC 31b reads from the ROM 32 the image part data necessary to compose one screen to be displayed on the display unit 20 and transfers it to the RAM 33. Using the RAM 33, the GDC 31b creates picture data for one screen based on the image part data and various image data received by communication from outside the HUD device 10. When the picture data for one screen is completed in the RAM 33, the GDC 31b transfers it to the display unit 20 in synchronization with the image update timing. As a result, the superimposed image visually recognized by the driver 5 as the virtual image V is displayed on the display unit 20. A layer is assigned in advance to each image constituting the image visually recognized as the virtual image V, so that the control unit 31 can control the display of each image individually.
 また、制御部31は、虚像Vの表示領域内におけるコンテンツ画像(図3(a)、(b)に示す後述の第1画像C1や第2画像C2など)の表示位置と、コンテンツ画像の表示距離とを制御する。なお、図3(a)~(c)等で虚像Vが示している実線の枠は、虚像Vの表示領域を示し、コンテンツ画像は、当該表示領域内に虚像として視認される画像である。これは、虚像Vとして運転者5に視認される重畳画像を表示する表示部20において、当該重畳画像の表示領域内にコンテンツ画像が表示されることと同義である。 In addition, the control unit 31 displays a display position of a content image (such as a first image C1 or a second image C2 described below shown in FIGS. 3A and 3B) in a display area of the virtual image V, and displays the content image. Control the distance and. Note that the solid-line frame indicated by the virtual image V in FIGS. 3A to 3C and the like indicates the display area of the virtual image V, and the content image is an image visually recognized as a virtual image in the display area. This is equivalent to displaying the content image in the display area of the superimposed image on the display unit 20 that displays the superimposed image visually recognized by the driver 5 as the virtual image V.
 The control of the display position means that the control unit 31 controls the display unit 20 so that an image is displayed at an arbitrary position within the display area of the virtual image V as seen from the driver 5. The control of the display distance means controlling how far ahead of the vehicle 1 the content image appears as seen from the driver 5; that is, the control unit 31 controls the display unit 20 so that the content image is visually recognized at a position a predetermined distance ahead of the vehicle 1. The control of the display distance is realized by the control unit 31 being able to display the content image within the display area of the virtual image V in a manner that gives it depth. For example, to display a content image with such depth, 3D image data is stored in the ROM 32 in advance, and the drawing of the content image is controlled using a perspective method (for example, one-point perspective) based on the position of the object on which the image is to be superimposed and the distance from the vehicle 1 to that object, both of which can be calculated from the object information and the like described later. Alternatively, by arranging the display surface of the display unit 20 (the display surface of the LCD, or the screen when a DMD or LCOS is used) at an inclination, the virtual plane on which the virtual image V is displayed (the plane corresponding to the display surface of the display unit 20) can be set to tilt forward with respect to the vertical direction of the vehicle 1, thereby giving the content image depth. In this case, the control unit 31 may control the drawing of the content image with depth in consideration of the projection, onto the virtual plane, of the image to be visually recognized from the viewpoint of the driver 5. As the viewpoint position of the driver 5, the control unit 31 may use an assumed viewpoint position stored in the ROM 32 in advance, or may use gaze information from a viewpoint detection unit (not shown) that detects the viewpoint position of the driver 5.
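 To make the projection step concrete, the following is a minimal sketch of a ray-plane intersection under assumed numbers; the eye height, plane distance, and tilt angle are illustrative, not values from the publication.

    import math

    def ray_plane_intersect(eye, target, p0, normal):
        # Ray from the driver's eye toward a point in the scene,
        # intersected with the forward-tilted virtual plane.
        d = [t - e for t, e in zip(target, eye)]
        denom = sum(di * ni for di, ni in zip(d, normal))
        if abs(denom) < 1e-9:
            return None                      # ray parallel to the plane
        t = sum((p - e) * n for p, e, n in zip(p0, eye, normal)) / denom
        return [e + t * di for e, di in zip(eye, d)]

    # x is forward, z is up; the eye sits 1.2 m above the road.
    eye    = (0.0, 0.0, 1.2)
    target = (30.0, 1.5, 0.0)       # a road-surface point 30 m ahead

    tilt   = math.radians(40.0)     # plane tilted forward from vertical
    p0     = (2.5, 0.0, 1.0)        # a point on the virtual plane
    normal = (-math.cos(tilt), 0.0, math.sin(tilt))

    print(ray_plane_intersect(eye, target, p0, normal))

 Drawing each road-anchored point at its intersection with the tilted plane is one way to obtain the depth described above.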
 In this embodiment, since the display position and the display distance of a content image can be controlled as described above, the control unit 31 can display, within the display area of the virtual image V, the first image C1 and the second image C2 described next. For example, as display modes of the virtual image V, a "single display mode" in which one of the first image C1 and the second image C2 is displayed in the display area of the virtual image V but the other is not, and a "simultaneous display mode" in which both the first image C1 and the second image C2 are displayed at the same time (in the same period) in the display area of the virtual image V, are provided. The control unit 31 switches from the current display mode to the other display mode in response to the operation unit 80 receiving a display mode switching operation by the driver 5. First, the first image C1 and the second image C2 will be described.
 The first image C1 is an AR (Augmented Reality) image visually recognized along the road surface ahead of the vehicle 1 within the display area of the virtual image V.
 The first image C1 is an image displayed at a position corresponding to a forward object described later, and is an image for notifying the presence of that forward object. The forward object is, for example, a vehicle preceding the vehicle 1 (hereinafter also referred to as the own vehicle 1), and FIGS. 3(a) and 3(b) show an example in which the first image C1 is displayed at a position corresponding to a preceding vehicle (not shown). In the examples of FIGS. 3(a) and 3(b), the first image C1 consists of an exclamation mark inside a substantially triangular frame. The first image C1 may take any form as long as it can call attention to the presence of the corresponding forward object; for example, it may be a character, a symbol, a figure, an icon, or a combination of these. Further, the "position corresponding to the forward object" as the display position of the first image C1 is not limited to a position superimposed on the forward object, and may be a position near the forward object.
 The second image C2 is a non-AR image visually recognized, within the display area of the virtual image V, in a manner standing up toward the viewer relative to the first image C1 (for example, in a manner facing the viewer directly).
 As shown in FIG. 3(a), the second image C2 shows a guidance route set by the user and map information around the current position of the vehicle 1 (the map information itself is not shown). The second image C2 includes, for example, a map information image represented in a bird's-eye view and a guidance route image (for example, an image showing an arrow) displayed over that map image. The control unit 31 controls the display of the second image C2 based on route guidance information from the car navigation device 60 described later.
 The first image C1 as an AR image is visually recognized by the driver 5 along the forward road surface, which is part of the real scene. The first image C1 is drawn by the control unit 31 using a perspective method (for example, one-point perspective) so as to lie along the forward road surface specified based on the traffic information and the forward image data described later. Here, an image drawn using the perspective method and visually recognized along the forward road surface is defined as having a perspective degree of 100%. On the other hand, an image that rises from the forward road surface toward the driver 5 and is visually recognized facing the driver 5 directly is defined as having a perspective degree of 0%. A degree between 100% and 0% corresponds to how far the visually recognized image rises from the forward road surface toward the driver 5. Under this definition, for example, the first image C1 as an AR image is displayed at a perspective degree of 100%, and the second image C2 as a non-AR image is displayed at a perspective degree of 0%. Note that the first image C1 may be displayed at a perspective degree smaller than 100% as long as the driver 5 can still recognize that it is displayed along the forward road surface. Likewise, the second image C2 may be displayed at a perspective degree larger than 0% as long as it is visually recognized as standing up toward the driver 5 more than the first image C1.
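 The publication gives no formula for the perspective degree; one plausible reading treats it as the tilt of the content plane interpolated between road-aligned (100%) and upright, viewer-facing (0%). A minimal sketch under that assumption (tilt_for is a hypothetical name):

    def tilt_for(perspective_degree):
        # 100% -> lying along the road (90 degrees from vertical),
        #   0% -> standing upright, facing the driver (0 degrees).
        assert 0.0 <= perspective_degree <= 100.0
        return 90.0 * perspective_degree / 100.0

    print(tilt_for(100.0))  # 90.0: AR image such as the first image C1
    print(tilt_for(0.0))    #  0.0: non-AR image such as the second image C2
    print(tilt_for(60.0))   # 54.0: partially raised intermediate degree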
 Next, the configuration of the "simultaneous display mode", in which both the first image C1 and the second image C2 are displayed in the display area of the virtual image V in the same period, will be described with reference to FIGS. 3(a) to 3(c).
 The display area of the virtual image V in the simultaneous display mode has a first area A and a second area B. The first area A and the second area B are adjacent to each other across a boundary line D shown by a dotted line in FIGS. 3(b) and 3(c). The boundary line D shown by the dotted line in the figures merely illustrates the shapes of the first area A and the second area B, and need not actually be visible. As shown in FIG. 3(b), the first area A is located above the second area B as seen from the driver 5. The first image C1 is displayed in the first area A, and the second image C2 is displayed in the second area B.
 In the virtual image V in the simultaneous display mode, a boundary portion G including the boundary line D (the portion indicated by the broken line in FIG. 3(c)) is displayed as a gradation. Specifically, let the color of the background image of the part of the first area A excluding the boundary portion G be the "first color", and the color of the background image of the part of the second area B excluding the boundary portion G be the "second color"; in the boundary portion G, gradation processing is applied so that the color changes continuously from one of the first color and the second color to the other. The first color and the second color are arbitrary as long as they differ from each other.
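 A minimal sketch of such a boundary gradation follows; the two sample colors and the band height are illustrative assumptions, not values from the publication.

    def lerp_color(c1, c2, t):
        # Linear blend between two RGB colors; t = 0 gives c1, t = 1 gives c2.
        return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

    first_color  = (20, 40, 80)   # background of area A outside the band G
    second_color = (10, 10, 10)   # background of area B outside the band G

    band_rows = 24                # height of the boundary portion G in rows
    gradient = [lerp_color(first_color, second_color, r / (band_rows - 1))
                for r in range(band_rows)]
    print(gradient[0], gradient[-1])  # matches A at the top, B at the bottom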
 Also, in the virtual image V in the simultaneous display mode, a linear image E (see FIGS. 3(a) and 3(b), etc.) visually recognized as a line along the boundary portion G is displayed. For example, as shown in FIG. 3(a), the linear image E has the shape of an isosceles trapezoid with the side corresponding to the lower base removed, and its legs point toward the (lower) corners of the rectangular display area of the virtual image V. As a result, the linear image E extends from the central part of the virtual image V toward its ends, and those ends are located below the central part as seen from the driver 5. In the central part, the linear image E runs in the left-right direction as seen from the driver 5.
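 To make that geometry concrete, the sketch below builds the three-segment polyline from the display rectangle; the ratios (top_width_ratio, top_y) are illustrative assumptions.

    def linear_image_polyline(width, height, top_width_ratio=0.5, top_y=0.6):
        # Isosceles trapezoid minus its lower base: a horizontal central
        # segment whose legs run down to the lower corners of the display
        # area (screen coordinates, so y grows downward).
        half = width * top_width_ratio / 2.0
        cx, y = width / 2.0, height * top_y
        return [(0, height),        # lower-left corner of the display area
                (cx - half, y),     # left end of the central segment
                (cx + half, y),     # right end of the central segment
                (width, height)]    # lower-right corner of the display area

    print(linear_image_polyline(800, 300))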
 In this embodiment, as shown in FIG. 3(b), the linear image E is displayed within the first area A and does not coincide with the boundary line D; alternatively, the linear image E may be made to coincide with the boundary line D so that it functions as a dividing line between the first area A and the second area B. In this embodiment, mutually different linear image patterns P1 to P9 are prepared as forms of the linear image E, as shown in FIG. 4. These patterns will be described later.
 Returning to FIG. 2, the CPU 31a of the control unit 31 communicates with each of the peripheral information acquisition unit 40, the forward information acquisition unit 50, the car navigation device 60, the ECU 70, and the operation unit 80. For this communication, communication systems such as CAN (Controller Area Network), Ethernet, MOST (Media Oriented Systems Transport), and LVDS (Low Voltage Differential Signaling) are applicable.
 The peripheral information acquisition unit 40 acquires information on the surroundings (exterior) of the vehicle 1, and is composed of various modules enabling communication between the vehicle 1 and a wireless network (V2N: Vehicle To cellular Network), between the vehicle 1 and other vehicles (V2V: Vehicle To Vehicle), between the vehicle 1 and pedestrians (V2P: Vehicle To Pedestrian), and between the vehicle 1 and roadside infrastructure (V2I: Vehicle To roadside Infrastructure). In other words, the peripheral information acquisition unit 40 enables V2X (Vehicle To Everything) communication between the vehicle 1 and the outside of the vehicle 1.
 For example, (i) the peripheral information acquisition unit 40 includes a communication module that can directly access a WAN (Wide Area Network), or a communication module for communicating with an external device that can access the WAN (such as a mobile router) or with an access point of a public wireless LAN (Local Area Network), and performs Internet communication. The peripheral information acquisition unit 40 also includes a GPS controller that calculates the position of the vehicle 1 based on GPS (Global Positioning System) signals received from satellites. These configurations enable V2N communication. (ii) The peripheral information acquisition unit 40 also includes a wireless communication module conforming to a predetermined wireless communication standard and performs V2V and V2P communication. (iii) The peripheral information acquisition unit 40 further has a communication device that wirelessly communicates with roadside infrastructure, and acquires, for example, object information and traffic information from a base station of a driving safety support system (DSSS: Driving Safety Support Systems) via roadside radio devices installed as infrastructure. This enables V2I communication.
 In this embodiment, the peripheral information acquisition unit 40 acquires, by V2I, object information indicating the position, size, attributes, and the like of various objects existing outside the own vehicle 1, such as vehicles, traffic lights, and pedestrians, and supplies it to the control unit 31. The object information is not limited to V2I and may be acquired by any form of V2X communication. The peripheral information acquisition unit 40 also acquires, by V2I, traffic information including the positions and shapes of roads around the vehicle 1, and supplies it to the control unit 31. Further, the control unit 31 calculates the position of the own vehicle 1 based on information from the GPS controller of the peripheral information acquisition unit 40.
 The forward information acquisition unit 50 is composed of, for example, a stereo camera that captures the scenery ahead of the vehicle 1, a distance measuring sensor such as a LIDAR (Laser Imaging Detection And Ranging) that measures the distance from the own vehicle 1 to an object located ahead of it, and a sonar, ultrasonic sensor, millimeter-wave radar, or the like that detects objects located ahead of the vehicle 1.
 In this embodiment, the forward information acquisition unit 50 transmits to the CPU 31a forward image data representing the forward scenery captured by the stereo camera, data indicating the distance to an object measured by the distance measuring sensor, and other detection data.
 The car navigation device 60 includes a GPS controller that calculates the position of the vehicle 1 based on GPS (Global Positioning System) signals received from satellites. The car navigation device 60 has a storage unit that stores map data; based on position information from the GPS controller, it reads map data around the current position from the storage unit and determines a guidance route to a destination set by the user. The car navigation device 60 then outputs information on the current position of the vehicle 1 and the determined guidance route to the control unit 31. By referring to the map data, the car navigation device 60 also outputs to the control unit 31 information indicating the names and types of facilities ahead of the vehicle 1 and the distances between those facilities and the vehicle 1. In the map data, various kinds of information, such as road shape information (lanes, road width, number of lanes, intersections, curves, branch roads, etc.), regulation information on road signs such as speed limits, and information on each lane when there are multiple lanes, are associated with position data. The car navigation device 60 outputs these various kinds of information to the control unit 31 as route guidance information. The car navigation device 60 is not limited to one mounted on the vehicle 1, and may be realized by a mobile terminal having a car navigation function (a smartphone, a tablet PC (Personal Computer), or the like) that communicates with the control unit 31 by wire or wirelessly.
 The ECU 70 controls each part of the vehicle 1 and, for example, transmits vehicle speed information indicating the current vehicle speed of the vehicle 1 to the CPU 31a. The CPU 31a may instead acquire the vehicle speed information from a vehicle speed sensor. The ECU 70 also transmits to the CPU 31a measured quantities such as the engine speed, warning information about the vehicle 1 itself (low fuel, abnormal engine oil pressure, etc.), and other vehicle information. Based on the information acquired from the ECU 70, the CPU 31a can cause the display unit 20, via the GDC 31b, to display images indicating the vehicle speed, the engine speed, various warnings, and the like. That is, in addition to the first image C1 and the second image C2, a vehicle information image indicating vehicle information can also be displayed within the display area of the virtual image V. The vehicle information image is displayed, for example, at a perspective degree of 0% so that it is visually recognized facing the driver 5 directly.
 The operation unit 80 receives various operations by the driver 5 and supplies signals indicating the contents of the received operations to the control unit 31. For example, the operation unit 80 receives an operation by the driver 5 to switch the display mode of the virtual image V.
 The control unit 31 displays the virtual image V in the simultaneous display mode as shown in FIG. 3(a), for example, when the operation unit 80 receives a switching operation to the simultaneous display mode as the display mode switching operation. The display of the virtual image V in the simultaneous display mode may be restricted according to the automated driving level of the vehicle 1. For example, display of the virtual image V in the simultaneous display mode may be possible only when the automated driving level is level 2, or level 3 or higher. The control unit 31 can identify the current automated driving level by, for example, acquiring driving mode information indicating the current automated driving level from the ECU 70.
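 One way to organize the switching operation together with the optional level restriction is sketched below; the enum values and the level-2 threshold are illustrative assumptions, not from the publication.

    from enum import Enum, auto

    class DisplayMode(Enum):
        SINGLE = auto()        # one of C1 and C2 is shown, the other is not
        SIMULTANEOUS = auto()  # C1 and C2 are shown in the same period

    def on_mode_switch(current, requested, automation_level):
        # Allow the simultaneous display mode only at level 2 or above,
        # mirroring the optional restriction described above.
        if requested is DisplayMode.SIMULTANEOUS and automation_level < 2:
            return current     # ignore the switching operation
        return requested

    mode = DisplayMode.SINGLE
    mode = on_mode_switch(mode, DisplayMode.SIMULTANEOUS, automation_level=3)
    print(mode)  # DisplayMode.SIMULTANEOUS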
 The second image C2 is displayed in the second area B according to the guidance route to the destination set by the user on the car navigation device 60. The first image C1, on the other hand, is displayed in the first area A in response to the control unit 31 identifying a forward object to be superimposed. For example, based on the position of an object indicated by object information acquired by V2I or V2V from roadside infrastructure via the peripheral information acquisition unit 40, the control unit 31 determines that there is a forward object when it identifies that the object is located within the display area of the virtual image V as seen from the driver 5. The method of identifying the presence of a forward object is not limited to this. For example, the object information from the peripheral information acquisition unit 40 is not limited to V2I or V2V and may be acquired by any form of V2X communication (at least one of V2N, V2V, and V2P). The control unit 31 may also identify a forward object based on information acquired from the forward information acquisition unit 50; specifically, it may identify a forward object from the forward image data by a known image analysis method such as pattern matching, or based on detection signals from the distance measuring sensor, the sonar, the ultrasonic sensor, or the millimeter-wave radar. When a forward object is identified by any of these methods, the control unit 31 displays the first image C1 in the AR manner at a position corresponding to the identified forward object.
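 A minimal sketch of the "is the object within the display area" test follows, assuming the object position has already been projected into display-area coordinates; the bounds and the sample point are illustrative.

    def in_display_area(point, left, top, right, bottom):
        # True if the projected object position lies inside the rectangular
        # display area of the virtual image V as seen from the driver.
        x, y = point
        return left <= x <= right and top <= y <= bottom

    obj = (410.0, 95.0)        # hypothetical projected preceding vehicle
    if in_display_area(obj, 0, 0, 800, 300):
        print("forward object present: display the first image C1 near", obj)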
(Linear image patterns)
 Next, the linear image patterns (hereinafter also simply referred to as "patterns") will be described with reference to FIGS. 4 to 7. In FIGS. 5 to 7, the first image C1 that can be displayed in the first area A is omitted from the illustration. Of the various patterns described below, the pattern used in the simultaneous display mode may be an arbitrary predetermined pattern, or may be one selected according to the user's preference by a setting operation from the operation unit 80. The ROM 32 stores in advance the various image part data needed to realize one or more of the patterns shown in FIG. 4 and elsewhere.
 Patterns P1 to P3 are forms in which the width of the linear image E at its central part is wider than at its ends (the portions pointing toward the corners of the rectangular display area of the virtual image V).
 Pattern P1, as shown in FIG. 5(a), is a form in which the linear image E is displayed as a gradation that blends into the background color of the virtual image V from its ends toward its central part. Pattern P2, as shown in FIG. 5(b), is a form in which the linear image E is displayed as a gradation that blends into the background color of the virtual image V from its central part toward its ends. Pattern P3, as shown in FIG. 5(c), is a form in which no gradation processing is applied to the linear image E.
 Patterns P4 to P6 are forms in which the width of the linear image E at its ends (the portions pointing toward the corners of the rectangular display area of the virtual image V) is wider than at its central part.
 Pattern P4, as shown in FIG. 6(a), is a form in which the linear image E is displayed as a gradation that blends into the background color of the virtual image V from its ends toward its central part. Pattern P5, as shown in FIG. 6(b), is a form in which the linear image E is displayed as a gradation that blends into the background color of the virtual image V from its central part toward its ends. Pattern P6, as shown in FIG. 6(c), is a form in which no gradation processing is applied to the linear image E.
 Patterns P7 to P9 are forms in which the width of the linear image E is constant (the width at the ends equals the width at the central part).
 Pattern P7, as shown in FIG. 7(a), is a form in which the linear image E is displayed as a gradation that blends into the background color of the virtual image V from its ends toward its central part. Pattern P8, as shown in FIG. 7(b), is a form in which the linear image E is displayed as a gradation that blends into the background color of the virtual image V from its central part toward its ends. Pattern P9, as shown in FIG. 7(c), is a form in which no gradation processing is applied to the linear image E.
 Among these patterns, for those other than P3, P6, and P9, to which gradation processing is applied, "a gradation that blends into the background color of the virtual image V" means a form that gradually assimilates into the background color of the area in which the linear image E is displayed. For example, when the linear image E is displayed in the first area A, gradation processing may be applied so that the relevant portion of the linear image E assimilates into the background color of the first area A; when it is displayed in the second area B, so that the relevant portion assimilates into the background color of the second area B. When the linear image E is made to coincide with the boundary line D described above, gradation processing may be applied so that the relevant portion of the linear image E assimilates into the background color of either the first area A or the second area B.
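 The sketch below illustrates this assimilation along the length of the line, taking a pattern-P2-style fade (full line color at the center, background color at the ends) as the example; the colors and sample count are illustrative assumptions.

    def fade_toward_ends(line_color, background, samples):
        # 0 blend at the center of the linear image E, full blend into the
        # area's background color at both ends.
        center = (samples - 1) / 2.0
        out = []
        for i in range(samples):
            t = abs(i - center) / center       # 0 at center, 1 at the ends
            out.append(tuple(round(l + (b - l) * t)
                             for l, b in zip(line_color, background)))
        return out

    colors = fade_toward_ends((255, 255, 255), (20, 40, 80), samples=9)
    print(colors[4], colors[0])  # line color at center, background at an end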
 The HUD device 10 described above is an example of a display device that displays a superimposed image visually recognized by a viewer (for example, the driver 5) as a virtual image V overlapping the scenery ahead of the vehicle 1. The HUD device 10 includes display control means (for example, the control unit 31 and the display unit 20) that performs display control of the superimposed image and can display, within the display area of the superimposed image (corresponding to the display area of the virtual image V), a first image C1 (AR image) visually recognized along the road surface ahead of the vehicle 1 and a second image C2 (non-AR image) visually recognized standing up toward the viewer relative to the first image C1.
(1) In particular, when displaying the first image C1 and the second image C2 in the same period (in the simultaneous display mode), the display control means displays the first image C1 in the first area A within the display area, displays the second image C2 in the second area B adjacent to the first area A, displays the boundary portion G between the first area A and the second area B as a gradation, and displays the linear image E visually recognized as a line along the boundary portion G.
 With this configuration, the AR image and the non-AR image can be displayed separately on either side of the linear image E, while displaying the boundary portion G between the first area A and the second area B as a gradation reduces the visual clutter that information displayed at that boundary would otherwise impose on the viewer. In other words, when the AR image and the non-AR image are displayed in the same period, both can be displayed in an easily viewable manner.
(2) The linear image E may extend from the central part of the superimposed image toward its ends, with the width at the ends narrower than the width at the central part (for example, linear image patterns P1 to P3).
 In this way, the first image C1 and the second image C2 can be clearly separated near the central part while both images are displayed without a sense of incongruity.
(3) Alternatively, the linear image E may extend from the central part of the superimposed image toward its ends, with the width at the ends wider than the width at the central part (for example, linear image patterns P4 to P6).
 In this way, when the first image C1 is displayed in the first area A, interference between the first image C1 and the linear image E, which would bother the viewer, can be suppressed.
(4) The linear image E may extend from the central part of the superimposed image toward its ends and be displayed as a gradation that blends into the background color of the superimposed image from the central part toward the ends (for example, linear image patterns P2, P5, and P8).
 In this way, the first image C1 and the second image C2 can be clearly separated near the central part while both images are displayed without a sense of incongruity.
(5) The linear image E may extend from the central part of the superimposed image toward its ends and be displayed as a gradation that blends into the background color of the superimposed image from the ends toward the central part (for example, linear image patterns P1, P4, and P7).
 In this way, when the first image C1 is displayed in the first area A, interference between the first image C1 and the linear image E, which would bother the viewer, can be suppressed.
(6) A shadow image S adjacent to the linear image E may be displayed on the first area A side or the second area B side of the linear image E. As shown in FIG. 8(a), when the shadow image S is displayed on the second area B side of the linear image E, the first area A can be displayed with more emphasis than the second area B. Conversely, as shown in FIG. 8(b), when the shadow image S is displayed on the first area A side of the linear image E, the second area B can be displayed with more emphasis than the first area A. The shadow image S is preferably, for example, a drop shadow drawn under the control of the control unit 31, but may be realized with another effect such as a glow, as long as it is visually recognized as a shadow as shown in FIG. 8(a) or 8(b).
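 A drop shadow of this kind can be pictured as a darkened, offset copy of the line drawn before the line itself; a minimal sketch under that assumption follows, where canvas is an assumed drawing surface and the offset direction is illustrative.

    def draw_with_drop_shadow(canvas, polyline,
                              offset=(0, 4), shade=(0, 0, 0, 128)):
        # Offsetting the shadow toward area B (positive y is downward)
        # leaves area A unshaded and thus emphasized, as in FIG. 8(a).
        dx, dy = offset
        shadow = [(x + dx, y + dy) for (x, y) in polyline]
        canvas.draw_polyline(shadow, color=shade)
        canvas.draw_polyline(polyline, color=(255, 255, 255, 255))

 Flipping the sign of the offset places the shadow on the area A side instead, emphasizing area B as in FIG. 8(b).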
(7) The first area A may be located above the second area B as seen from the viewer, and the linear image E at the central part of the superimposed image may run in the left-right direction as seen from the viewer. (8) The linear image E may extend from the central part of the superimposed image toward its ends, with those ends located below the central part as seen from the viewer.
 As shown in FIG. 9(a), the virtual image V in the simultaneous display mode may also be configured with the boundary line D set so as to connect the upper and lower sides of the rectangular display area, so that the first area A and the second area B are adjacent in the left-right direction. Alternatively, as shown in FIG. 9(b), the virtual image V in the simultaneous display mode may be configured so that the boundary line D is a curve whose ends extend toward the left or right sides of the rectangular display area of the virtual image V rather than toward its lower side or corners. In these cases as well, the boundary portion G between the first area A and the second area B can be displayed as a gradation, and the linear image E visually recognized as a line along the boundary portion G can be displayed. Thus, as long as the mutually adjacent first area A and second area B can be separated within the display area of the virtual image V, the shapes of the first area A and the second area B are arbitrary according to the purpose.
 The display color of the linear image E may be a color included in the background color of either the first area A or the second area B, or a color not included in those background colors (for example, a black darker than the background colors).
 The present invention is not limited by the above embodiments, modifications, and drawings. Changes (including deletion of components) can be made as appropriate without departing from the gist of the present invention.
 The statement that the first image C1 (AR image) is "visually recognized along the forward road surface" is not limited to a perspective degree of 100%; any predetermined degree smaller than 100% is acceptable as long as the driver 5 can recognize that the image is displayed along the forward road surface. The forward object whose presence is notified by the first image C1 is not limited to a vehicle and may be a pedestrian, a bicycle, or another moving object.
 The projection target of the display light L is not limited to the windshield 3 and may be a combiner composed of a plate-shaped half mirror, a hologram element, or the like. The display device that displays the virtual image V described above is not limited to the HUD device 10; it may be configured as a head-mounted display (HMD: Head Mounted Display) worn on the head of the driver 5 of the vehicle 1. That is, the display device is not limited to one mounted on the vehicle 1 and may be any display device used in the vehicle 1.
 In the above description, explanations of well-known technical matters have been omitted as appropriate to facilitate understanding of the present invention.
REFERENCE SIGNS LIST
 100  vehicle display system
 10   HUD device
 20   display unit
 30   control device
 31   control unit (31a: CPU, 31b: GDC)
 32   ROM, 33: RAM
 40   peripheral information acquisition unit
 50   forward information acquisition unit
 60   car navigation device
 70   ECU
 80   operation unit
 1    vehicle, 2: dashboard, 3: windshield
 5    driver, 4: eye box, L: display light
 V    virtual image
 A    first area, C1: first image
 B    second area, C2: second image
 D    boundary line, G: boundary portion
 E    linear image
 S    shadow image

Claims (8)

  1.  A display device that displays a superimposed image visually recognized by a viewer as a virtual image overlapping a scene ahead of a vehicle, the display device comprising:
     display control means that performs display control of the superimposed image and is capable of displaying, within a display area of the superimposed image, a first image visually recognized along a road surface ahead of the vehicle and a second image visually recognized standing up toward the viewer relative to the first image,
     wherein the display control means, when displaying the first image and the second image in the same period, displays the first image in a first area within the display area and displays the second image in a second area adjacent to the first area,
     displays a boundary portion between the first area and the second area as a gradation, and
     displays a linear image visually recognized as a line along the boundary portion.
  2.  The display device according to claim 1, wherein the linear image extends from a central part of the superimposed image toward an end part thereof, and a width at the end part is narrower than a width at the central part.
  3.  The display device according to claim 1, wherein the linear image extends from a central part of the superimposed image toward an end part thereof, and a width at the end part is wider than a width at the central part.
  4.  The display device according to any one of claims 1 to 3, wherein the linear image extends from a central part of the superimposed image toward an end part thereof and is displayed as a gradation that blends into a background color of the superimposed image from the central part toward the end part.
  5.  The display device according to any one of claims 1 to 3, wherein the linear image extends from a central part of the superimposed image toward an end part thereof and is displayed as a gradation that blends into a background color of the superimposed image from the end part toward the central part.
  6.  The display device according to any one of claims 1 to 5, wherein a shadow-like image adjacent to the linear image is displayed on the first area side or the second area side of the linear image.
  7.  The display device according to any one of claims 1 to 6, wherein the first area is located above the second area as seen from the viewer, and the linear image at a central part of the superimposed image runs in a left-right direction as seen from the viewer.
  8.  The display device according to any one of claims 1 to 7, wherein the linear image extends from a central part of the superimposed image toward an end part thereof, and the end part is located below the central part as seen from the viewer.
PCT/JP2019/032968 2018-08-23 2019-08-23 Display device WO2020040276A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020538477A JPWO2020040276A1 (en) 2018-08-23 2019-08-23 Display device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018156377 2018-08-23
JP2018-156377 2018-08-23

Publications (1)

Publication Number Publication Date
WO2020040276A1 true WO2020040276A1 (en) 2020-02-27

Family

ID=69592986

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/032968 WO2020040276A1 (en) 2018-08-23 2019-08-23 Display device

Country Status (2)

Country Link
JP (1) JPWO2020040276A1 (en)
WO (1) WO2020040276A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010188826A (en) * 2009-02-17 2010-09-02 Mazda Motor Corp Display device for vehicle
JP2015157509A (en) * 2014-02-21 2015-09-03 日本精機株式会社 Vehicular warning device and vehicular warning unit
WO2016072019A1 (en) * 2014-11-07 2016-05-12 三菱電機株式会社 Display control device
JP2016109645A (en) * 2014-12-10 2016-06-20 株式会社リコー Information providing device, information providing method, and control program for providing information
JP2016146170A (en) * 2015-01-29 2016-08-12 株式会社デンソー Image generation device and image generation method
JP2017154613A (en) * 2016-03-02 2017-09-07 トヨタ自動車株式会社 Display device for vehicle

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023074662A1 (en) * 2021-10-28 2023-05-04 日本精機株式会社 Display control device, head-up display device, and display control method

Also Published As

Publication number Publication date
JPWO2020040276A1 (en) 2021-08-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19850987; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020538477; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19850987; Country of ref document: EP; Kind code of ref document: A1)