WO2023003109A1 - Head-up display and control method therefor - Google Patents

Head-up display and control method therefor

Info

Publication number
WO2023003109A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
ground
vehicle
head
Prior art date
Application number
PCT/KR2022/000125
Other languages
English (en)
Korean (ko)
Inventor
정은영
차재원
Original Assignee
네이버랩스 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020210095895A (KR20230014491A)
Priority claimed from KR1020210095896A (KR20230014492A)
Application filed by 네이버랩스 주식회사
Publication of WO2023003109A1

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a head-up display of a vehicle that outputs information in front of a driver and a method for controlling the same.
  • a head-up display (HUD) device for a vehicle projects an image output from the display as a graphic image onto the windshield or combiner in front of the driver through an optical system, thereby conveying information to the driver.
  • the optical system may be composed of a plurality of mirrors or lenses to change the light path of the image transmitted from the display.
  • a head-up display device for a vehicle has the advantage of inducing an immediate reaction of the driver and providing convenience at the same time.
  • in a general head-up display device, the image is fixed at a position approximately 2 to 3 m in front of the driver.
  • on the other hand, the driver's gaze distance when driving ranges from a short distance to about 300 m. Accordingly, there is the inconvenience that the driver must refocus the eyes to a large extent in order to look into the distance while driving and also check information on the head-up display (HUD) device. That is, the driver repeatedly shifts focus between the distance where the main view is located and the 2 to 3 m at which the image is formed.
  • accordingly, there is a need for a head-up display device that provides augmented reality in the driving environment so that the driver can obtain desired information without shifting the focus of the eyes from the point of gaze while driving.
  • Korean Patent Registration No. 10-2116783 (published on May 29, 2020) relates to a head-up display device that implements augmented reality, and provides realistic information from the driver's point of view by placing an image on the ground.
  • however, such a method of uniformly providing information using images of the same posture has limitations in effectively conveying the various events surrounding the vehicle (e.g., speed limits, the appearance of pedestrians, etc.) while driving.
  • therefore, a head-up display device capable of presenting images at various positions and in various postures may be considered in order to provide information to the driver more effectively.
  • the present invention provides a method for more effectively conveying information in an augmented reality head-up display that implements augmented reality from a driver's point of view by placing an image on the ground.
  • the present invention provides a head-up display and a control method for implementing both an image lying on the ground (or matched on the ground) and an image having a three-dimensional effect erected on the ground.
  • the present invention provides a head-up display and a control method through which a driver can naturally recognize both an image lying on the ground and an image erected on the ground.
  • the present invention proposes a new type of image plane that is advantageous for realizing, from the driver's point of view, both an image lying on the ground and an image standing upright.
  • to this end, the head-up display and its control method according to the present invention set an allowable space that satisfies the convergence of both the image lying on the ground and the erected image, and position the image plane within the allowable space.
  • a head-up display of a vehicle according to the present invention includes a display element that outputs light; an optical system that controls a path of the light to generate a virtual image output toward the ground in front of the vehicle; and a control unit that controls the display element so that at least one of a first image and a second image is output as the virtual image within an allowable space of the optical system, wherein the first image and the second image have different postures relative to the ground within the allowable space, and
  • the allowable space is formed to be biased upward with respect to the ground, and includes an image plane on which the first image and the second image are located.
  • the first image may be an image projected on the ground
  • the second image may be an image erected with respect to the ground.
  • the second image is formed in a 3D space based on the camera viewpoint of the virtual camera and projected onto the image plane based on the driver's viewpoint.
  • the allowable space may have an upper limit surface and a lower limit surface
  • the image plane may be located between the upper limit surface and the lower limit surface, and the lower limit surface may be closer to the ground than the upper limit surface.
  • the present invention also relates to a method for controlling a head-up display. Disclosed is a control method comprising: selecting at least one of a first image and a second image to be conveyed through the head-up display; and outputting the at least one of the first image and the second image using an optical system that projects a virtual image in front of a vehicle, wherein the first image and the second image have different postures with respect to the ground within an allowable space of the optical system, and the allowable space is biased upward with respect to the ground and includes an image plane on which the first image and the second image are located.
  • the head-up display and its control method according to the present invention form a curved surface on which an image lying on the ground and an erected image are output.
  • in addition, a head-up display of a vehicle according to the present invention includes a display element that outputs light; an optical system that controls a path of the light to generate a virtual image output toward the ground in front of the vehicle; and a control unit that controls the display element so that at least one of a first image and a second image is output as the virtual image on the image plane of the optical system, wherein the first image and the second image have different postures relative to the ground, and the image plane includes a curved surface whose distance from the ground differs at a first point and a second point spaced apart along the front of the vehicle.
  • the curved surface is characterized in that it is curved in a direction away from the ground toward the front of the vehicle.
  • the curved surface is positioned within an allowable space of the optical system, and the allowable space may have an upper limit surface and a lower limit surface.
  • the image plane further includes an extension surface extending upward from one end of the curved surface.
  • the extension surface may extend upward from a point where the curved surface meets a horizontal plane.
  • the control unit may control the location of the image plane based on an event related to the vehicle.
  • the control unit may control the distance between the image plane and the vehicle based on the type of road or driving environment on which the vehicle travels.
  • the optical system and the display element may be configured to adjust at least one of a relative distance and a relative angle so that the position of the image plane may be controlled.
  • according to the head-up display and its control method of the present invention, in a display scheme in which the position of an image corresponds to the ground, an image erected on the ground can also be implemented. Through this, realistic augmented reality information can be provided as a graphic object having a three-dimensional effect on the ground.
  • in addition, in the present invention, by setting at least a part of the image plane as a curved surface, an image lying on the ground and an image erected on the ground can both be implemented easily. Furthermore, the present invention can implement an image plane advantageous to both a lying image and an erect image by using a curved surface that gradually moves away from the ground.
  • the problem of convergence inconsistency between the image and the space can be solved by setting the allowable space of the image plane using the allowable value of the convergence error of both eyes. Furthermore, even when the convergence between image and space is not simultaneously satisfied, the error can be minimized.
  • the allowable space that satisfies both the convergence of the image lying on the ground and the image erected on the ground is defined using the convergence tolerance of both eyes
  • in addition, an allowable space whose cross-sectional area widens as it moves away from the head-up display can be set. Also, by moving the allowable space of the image plane toward the ground, the space for the erected image can be further expanded.
  • in addition, since the image plane is extended above the horizontal plane, an upright image can be expressed more easily in a head-up display that outputs a lying image.
  • in addition, a convergence error can be further minimized when expressing an image erected at a short distance by applying a structure for moving the image plane.
  • FIG. 1 is a conceptual diagram for explaining an image implemented in a head-up display according to the present invention.
  • FIG. 2A shows the image location of the head-up display in one embodiment of the invention.
  • FIGS. 2B and 2C are conceptual diagrams illustrating different embodiments of an optical design configuration for the head-up display of FIG. 2A.
  • FIG. 3 is a conceptual diagram illustrating a control system of a head-up display according to the present invention.
  • FIG. 4A is a conceptual diagram illustrating a method of generating a 2D ground image of a 3D object
  • FIG. 4B is a conceptual diagram illustrating a theory of combining an image of a 3D object with a method of generating a 2D ground image.
  • FIG. 5 is a conceptual diagram illustrating a concept of defining a convergence error allowance space in a head-up display according to the present invention.
  • FIG. 6 is a conceptual diagram illustrating a method of outputting images in the allowed space of FIG. 5 .
  • FIG. 7 is a conceptual diagram for explaining an example of defining an image plane located in the allowable space of FIG. 5 .
  • FIG. 8 is a conceptual diagram illustrating an example of outputting an image by positioning the image plane of FIG. 7 in the allowed space of FIG. 5 .
  • FIG. 9 is a conceptual diagram illustrating an extension of the image plane of FIG. 8 .
  • FIG. 10 is a conceptual diagram illustrating the concept of moving an image plane according to another embodiment of the present invention.
  • FIG. 11 is a flowchart illustrating a control method for implementing movement of an image plane of FIG. 10
  • FIG. 12 is a conceptual diagram illustrating an example of the control method of FIG. 11 .
  • FIGS. 13 and 14 are exemplary embodiments illustrating a structure of a head-up display realizing movement of an image plane of FIG. 10 .
  • the present invention relates to a head-up display and a control method thereof, and more specifically, to providing a head-up display that outputs information in front of a driver in a vehicle.
  • the head-up display is a screen display device installed in front of the driver or pilot in a car or airplane, and means a display that allows the driver or pilot to view various information while keeping the head raised.
  • an automobile is exemplified for convenience of description, but the present invention is not necessarily limited thereto.
  • as long as the vehicle is a powered device on which a driver rides, it may be any of various types, such as a car, a motorcycle, an electric kickboard, an electric bicycle, an electric wheelchair, a helicopter, an airplane, a ship, construction equipment, and a mobile robot.
  • the head-up display and its control method according to the present invention is a device capable of recognizing various types of information while a driver raises his or her head, and can be applied to various types of vehicles.
  • an automobile is taken as an example, and the head-up display and the control method thereof according to the present invention will be described in more detail below.
  • FIG. 1 is a conceptual diagram for explaining an image implemented in a head-up display according to the present invention.
  • FIG. 1 illustrates a case in which various images 111 and 112 are output as virtual images recognized from the driver's line of sight in the driver's seat of a vehicle.
  • the image is output using the combiner 121, but it can also be applied to a method using a windshield.
  • in the present invention, a graphic representation that the driver can visually perceive for information transfer is referred to as an image.
  • the present invention is not limited to recognition of the image only by the driver, and the image may be recognized by other users as well. However, for convenience of description, the description below is unified around the driver.
  • the image may be a graphic representation of an object (e.g., a gas station icon) in the real world, and in this case may be referred to as a graphic object.
  • an individual image may be formed as an information unit that can be independently distinguished.
  • a graphic object representing a 3D object “gas station” and an image corresponding to a 2D object “lane” may each exist.
  • graphic objects, icons, images, etc. are collectively referred to as images.
  • the surrounding space of the vehicle may be defined as a two-dimensional space and a three-dimensional space.
  • the 2-dimensional space may be a driving surface on which the vehicle travels
  • the 3-dimensional space 12 may be the three-dimensional space in which the vehicle travels.
  • the running surface may be referred to as the ground 11, and may be understood as a surface in contact with the wheels of the vehicle.
  • the ground is not necessarily limited to a driving surface, and may be a road surface, a walking path, and the like.
  • a driver riding the vehicle can look at the ground 11 and the 3D space 12 .
  • information of augmented reality may be output as a virtual image on the ground 11 .
  • a virtual image is displayed on a real road surface at the point of view of the driver.
  • since the virtual image is displayed in two dimensions on a virtual screen located on the ground 11, it may be referred to as an image lying on the ground (111; hereinafter, "first image").
  • the first image may include concepts such as an image coincident with the ground or an image parallel to the ground.
  • the first image 111 may correspond to at least one of turn information, lane information, distance information to the vehicle in front, driving direction display (or navigation information), speed limit display, lane departure warning, and the like.
  • for example, the first image 111 may be information related to at least one of a lane, a driving speed, and a driving direction display (or navigation information). More specifically, in FIG. 1, an icon 111a indicating speed limit 60 information and a driving guide 111b within the lane are output as the first image 111 on the ground 11.
  • an image is formed by light output from the display device, and the virtual image of the image includes a first image formed on the image surface of the virtual screen.
  • the image plane may be an output screen on which a virtual image of the image is output.
  • since the head-up display according to the present invention makes the virtual screen correspond to the ground, the driver can intuitively recognize information without having to shift the focus of the eyes elsewhere in various driving environments. That is, the information to be transmitted by the head-up display is implemented as augmented reality on the ground that the driver actually looks at while driving.
  • the head-up display according to the present invention can output an image having a three-dimensional effect in the 3D space 12 .
  • the image having the three-dimensional effect may include an image (112, hereinafter referred to as “second image”) erected on the ground.
  • the second image 112 is a 3D image in which at least a portion of the second image 112 protrudes into the 3D space 12 while being located on the ground 11, and may be recognized by the driver as a stereoscopic image.
  • the second image 112 may be an upright image erected on the ground.
  • the second image 112 may be information related to at least one of a point of interest (POI), nearby facilities (e.g., gas stations, charging stations, buildings, etc.), information signs, destinations, and surrounding vehicles.
  • the erect image may be any one of a first erect image and a second erect image having different characteristics.
  • the first erect image may be a 3D image having a depth value.
  • the first erect image may be formed by a combination of left-eye and right-eye images or implemented as a hologram.
  • the second erect image may be an image that does not have an actual depth value but feels like a 3D image due to an optical illusion or the like. Although it is actually located on a two-dimensional plane, through this optical illusion the driver perceives it as standing on the ground.
  • in the present invention, the second image is implemented as the second erect image.
  • however, the present invention is not necessarily limited thereto, and may also be used as part of a method of implementing the first erect image.
  • the vehicle recognizes information about the gas station, and the head-up display outputs a gas station icon at the location where the gas station is located.
  • the gas station icon may be the second image 112 , for example, an image erected on the ground 11 .
  • in this way, the head-up display outputs a 3D graphic object having a three-dimensional effect or erected on the ground 11 (i.e., the second image 112).
  • as described above, the present invention provides the information to be transmitted in augmented reality on the ground that the driver is looking at, and in this case the first image 111 or the second image 112 is selected according to the type of the information, so that a more intuitive head-up display is implemented.
  • a head-up display and a control method thereof according to the present invention will be described in more detail.
  • FIG. 2A shows the image position of the head-up display in one embodiment of the invention, FIGS. 2B and 2C are conceptual diagrams showing different embodiments of the optical design configuration for the head-up display of FIG. 2A, and FIG. 3 is a conceptual diagram illustrating a control system of a head-up display according to the present invention.
  • the head-up display according to the present invention makes the position of the virtual image that the driver 13 sees, that is, the virtual image 110, correspond to the floor in front of the driver, that is, the ground 11, so that the image can be expressed from a three-dimensional viewpoint as if laid down.
  • An image through the optical system of a general head-up display for a vehicle is located at a fixed distance of 2 to 3 m in front of the driver and is generally perpendicular to the ground (11).
  • the head-up display according to the present invention is a 3D augmented reality display, and is intended to position the virtual image 110 on a virtual plane corresponding to the ground 11 in front of the driver's gaze.
  • the head-up display may include a display element and an optical system.
  • the head-up display according to the present invention is a method of generating a visible virtual image 110 by reflecting through an optical system of the head-up display, rather than directly projecting a real image on a screen like a general projector.
  • the display device outputs light, and the optical system controls the path of the light through refraction or reflection so that an image formed by the light is output toward a light-transmitting area.
  • the optical system may be an imaging optical system that forms an image of the display device, and the optical system is configured to output a virtual image on the ground by spreading light while passing through the light-transmitting region.
  • the light-transmitting region may be formed on a combiner (see FIG. 2B) or a windshield (221, see FIG. 2C).
  • the head-up display 120 may include a display element 122 and a combiner 121 .
  • the display element 122 may be referred to as a display panel, a display source, or the like, and may be any one of a liquid crystal display (LCD), a light emitting diode (LED) display, a digital light projector (DLP), an organic light emitting diode (OLED) display, a laser diode (LD), and a laser beam scanning (LBS) display.
  • the combiner 121 reflects light toward the driver's eyes and simultaneously transmits the light toward the outside (front).
  • the combiner 121 may be composed of a single or multiple optical elements. Hereinafter, for convenience of explanation, it is assumed that a combiner composed of a single optical element is used.
  • An additional optical system may be included between the display element 122 and the combiner 121 to increase the quality of the image or to have an optimal size and performance in some cases.
  • the combiner 121 or the display element 122 may be composed of elements included in the head-up display 120 .
  • the head-up display 220 may include a display element 222 and a mirror 223 .
  • the display element 222 has the same characteristics as the above-described combiner-type display element, a description thereof will be replaced with the above description.
  • the mirror 223 functions to reflect the light of the display element 222 to the windshield 221 to form a virtual image on the ground in front of the driver.
  • the windshield 221 may reflect the light of the light source reflected by the mirror 223 toward the eye-box and transmit external (front) light at the same time.
  • the eye box is a spatial volume through which a user can perceive an image, and means a position of the driver's eyes.
  • the head-up display 220 includes a structure in which light from the light source 222 is projected onto the ground via the mirror 223 and the windshield 221, so that a virtual image can be positioned on the ground in front of the driver.
  • the mirror 223 is an optical system and may include a plurality of mirrors.
  • a head-up display of a combiner or windshield type may be applied, and the virtual image 110 is arranged on the ground 11 . Furthermore, as described with reference to FIG. 1 , the present invention implements a second image 112 (see FIG. 1 ) having a three-dimensional effect or erected in the virtual image 110 located on the ground 11 .
  • a display element implementing a 2D graphic object corresponding to the first image 111 and a display element implementing a 3D graphic object corresponding to the second image 112 are not separately provided, but are provided as a single unit.
  • the display element may form a single image, and the first image 111 and the second image 112 may be generated in the single image. In this case, the first image 111 and the second image 112 may be selectively generated or generated simultaneously.
  • Main information provided by the in-vehicle navigation system may include route information, lane information, and distance information from a vehicle in front on the road being driven.
  • information provided by an Advanced Driver-Assistance System (ADAS) mainly includes lane information, distance information to the vehicle in front or beside, unexpected-event information, and the like.
  • the route information is information for guiding a route, and may include turn-by-turn (TBT) information for guiding straight ahead or turning.
  • Such various types of information may be selected as one of the first image 111 and the second image 112 and implemented on the ground through the head-up display of the present invention.
  • the point at which the information is output to either the first image 111 or the second image 112 can be fixed on absolute coordinates regardless of the driving of the vehicle.
  • for example, using absolute coordinates of a specific map, the head-up display can continuously output (i.e., keep fixed) the first image 111 or the second image 112 at a specific position in front of the vehicle even while the vehicle is driving.
  • referring to FIG. 3, the control system 1100 may include at least one of a display unit 1110, a sensing unit 1120, a communication unit 1130, a storage unit 1140, and a control unit 1150.
  • the display unit 1110 is a display device for forming information in front of a vehicle, and may mean a head-up display.
  • the display unit 1110 may be implemented in either a combiner method or a windshield method.
  • the sensing unit 1120 may include at least one sensor.
  • the sensing unit 1120 senses information about the space surrounding the vehicle (12, or the three-dimensional space; see FIG. 1), and may include, for example, at least one of an image sensor (or camera), a lidar sensor, a speed sensor, an acceleration sensor, a geomagnetic sensor, a GPS sensor, and an ultrasonic sensor. In addition, various sensors other than those listed here may be further included.
  • the communication unit 1130 may be configured to perform at least one of wired or wireless communication.
  • the communication unit 1130 may support various communication methods according to a communication standard of a target to be communicated with.
  • for example, the communication unit 1130 may use at least one of Wireless LAN (WLAN), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), World Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), 5th Generation Mobile Telecommunication (5G), Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-Wideband (UWB), ZigBee, Near Field Communication (NFC), and Wireless Universal Serial Bus (Wireless USB).
  • the communication unit 1130 may include a Global Positioning System (GPS) module or a Differential Global Positioning System (DGPS) module for acquiring vehicle location information.
  • a target for communication with the communication unit 1130 may be very diverse.
  • the communication unit 1130 may communicate with at least one of an external server (or cloud server 1160) and an external storage (or cloud storage 1161).
  • “external server”, “cloud server”, “external storage”, and “cloud storage” are unified and named as the cloud server 1160.
  • the communication unit 1130 may download at least a portion of a specific map or at least a portion of information included in the specific map through communication with the cloud server 1160 .
  • the storage unit 1140 may be configured to store various information related to the present invention.
  • the storage unit 1140 may be provided in the control system 1100 itself according to the present invention.
  • at least a part of the storage unit 1140 may mean the cloud server 1160 . That is, it can be understood that the storage unit 1140 suffices as long as it is a space for storing necessary information to provide event information according to the present invention, and there is no restriction on physical space. Therefore, hereinafter, the storage unit 1140 and the cloud server 1160 may be expressed as the storage unit 1140 without being separately distinguished.
  • the cloud server 1160 may mean “cloud storage” as described above.
  • a specific map may be stored in the storage unit 1140 .
  • the specific map may be a high-definition map (HD map).
  • the high-definition map means a three-dimensional map with centimeter (cm) level precision.
  • Such a high-precision map may include lane-unit information such as road center lines and boundary lines, and information such as traffic lights, signs, curbs, road marks, and various structures.
  • This coordinate information is coordinate information of the real world and may mean absolute coordinates. For example, when information corresponding to a “gas station” is included in a specific map, absolute coordinates corresponding to a point where the “gas station” is actually located may be included in the specific map.
  • the control system 1100 may provide visual information to the front of the vehicle by referring to information included in a specific map. More specifically, the controller 1150 may output information to the display unit 1110 based on information included in a specific map.
  • for example, the controller 1150 may provide gas station information included in a specific map to the driver as the second image 112 (see FIG. 1), and may control the output position of the second image 112 so that it is located at specific absolute coordinates of the space in which the vehicle travels. For example, the control unit may continuously output the second image 112 at a preset location in front of the vehicle using absolute coordinates of the specific map.
  • the aforementioned first image 111 may also be continuously output at a predetermined position in front of the vehicle.
  • the icon 111a of “60” representing the speed limit in the first image 111 can be output so as to gradually approach the vehicle.
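  • as a sketch of this world-anchored behavior (illustrative only; the helper below and the planar map frame are assumptions, not the patent's implementation), an absolute map coordinate can be converted to vehicle-relative coordinates every frame, which is exactly why a world-fixed icon appears to approach the vehicle as it drives:

```python
import math

def world_to_vehicle(anchor_xy, vehicle_xy, vehicle_yaw):
    """Convert a world-fixed anchor (absolute map coordinates) into the
    vehicle frame (x: forward, y: left). Hypothetical helper: the patent
    only states that images may be fixed on absolute map coordinates."""
    dx = anchor_xy[0] - vehicle_xy[0]
    dy = anchor_xy[1] - vehicle_xy[1]
    # Rotate the world offset by -yaw into the vehicle's heading frame.
    return (dx * math.cos(vehicle_yaw) + dy * math.sin(vehicle_yaw),
            -dx * math.sin(vehicle_yaw) + dy * math.cos(vehicle_yaw))

# A speed-limit icon anchored 50 m ahead on the map: as the vehicle
# advances, its vehicle-relative distance shrinks frame by frame.
anchor = (50.0, 0.0)
for x in (0.0, 10.0, 20.0):
    print(world_to_vehicle(anchor, (x, 0.0), 0.0))  # (50,0) -> (40,0) -> (30,0)
```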
  • the first image 111 and the second image 112 can be output on the ground selectively or together by the control unit 1150.
  • the first image 111 and the second image 112 may be implemented using a single display element.
  • a method of implementing the second image 112 using an optical illusion in a head-up display that outputs the first image 111 on the image plane of a virtual screen located on the ground 11 is exemplified.
  • a method of generating an image in which the first image and the second image 112 are implemented together will be described with reference to FIGS. 4A and 4B.
  • FIG. 4A is a conceptual diagram illustrating a method of generating a 2D ground image of a 3D object
  • FIG. 4B is a conceptual diagram illustrating a theory of combining an image of a 3D object with a method of generating a 2D ground image.
  • the control unit creates the second image 112 using a first viewpoint toward the ground based on the eye-box and a second viewpoint toward the three-dimensional space on the ground based on the eye-box.
  • generation of images may be implemented in a separate device such as a server or a computer.
  • in this case, the generated image may be transmitted to the vehicle or the head-up display through communication.
  • the image of the display element is derived using a method of matching different viewpoints.
  • the first viewpoint is a driver's viewpoint recognizing the virtual image on the ground
  • the second viewpoint is a camera viewpoint of a virtual camera
  • the driver's viewpoint and the camera viewpoint are matched to each other to generate the image.
  • the second image 112 which is seen as a standing shape from the driver's point of view, is implemented on the ground.
  • the created upright image is converted into the 2D space (the ground, i.e., the 2D image plane).
  • FIG. 4B shows a step in which an image generated by the display device (LCD) is shown to the driver in the head-up display for positioning the image on the ground.
  • the transformation matrix M is defined as a mapping matrix including an index mapping from the display element (LCD) to the top view, and the image of the display element (LCD) is converted into an image on a plane viewed from above with respect to the ground (top view). That is, when the characteristics of the optical system of the head-up display are reflected in the image of the display element (LCD), it is converted into a top-view image.
  • the characteristics of the optical system of the head-up display may have characteristics of an imaging optical system that outputs a virtual image located on the ground.
  • the top-view image is converted into an image in which the driver views the image displayed on the ground in a 3D perspective through a transformation matrix Dv according to the relationship between the driver's position and the ground on which the image is displayed.
  • the position of the driver may mean a position including eye height.
  • a user in a driver's seat of a vehicle recognizes an image output from the display device (LCD) on the ground, which may be defined as an image from a driver's point of view.
  • FIGS. 4A and 4B show part of a step in which an image generated freely in a 3D space is shown to a driver, not limited to the ground. In other words, it shows the step of converting a 3D object into a 2D ground image.
  • the specific object Obj 141 is located in a 3D space within a camera FoV of the virtual camera.
  • the specific object (Obj, 141) in the 3D space is converted by the transformation matrix C into an image 142 projected onto the ground according to the position and angle of the camera (or the driver's point of view).
  • the camera projection is performed using a transformation matrix C that projects an object within the camera's field of view onto the ground, and the image is converted into a top-view image.
  • the top-view image obtained through the camera projection of FIG. 4A may be a straight trapezoid, which is different from the bent top-view image in (a) of FIG. 4B.
  • next, using a matrix Cv representing the relationship between the top-view image viewed from above and the ground image seen by the virtual camera, the image as viewed by the virtual camera in a three-dimensional perspective is implemented.
  • an object in a 3D space can be converted into an image projected on the ground when a virtual camera views it from a 3D viewpoint.
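  • for intuition, the following minimal sketch (not the patent's calibrated matrix C; a pinhole camera at the eye-box and a flat ground plane z = 0 are assumed) performs this projection by intersecting the ray from the camera center through a 3D point with the ground:

```python
import numpy as np

def project_to_ground(cam_pos, point):
    """Project a 3D point onto the ground plane (z = 0) along the ray from
    the camera center through the point. Stand-in for the patent's
    transformation C; assumes the point lies below the camera height."""
    cam_pos, point = np.asarray(cam_pos, float), np.asarray(point, float)
    t = cam_pos[2] / (cam_pos[2] - point[2])  # ray parameter where z hits 0
    return cam_pos + t * (point - cam_pos)

# Eye-box 1.2 m above the ground: a point 0.5 m tall at 10 m ahead
# lands at about 17.1 m on the ground, i.e., the erected object is
# "smeared" farther along the road surface.
print(project_to_ground([0.0, 0.0, 1.2], [10.0, 0.0, 0.5]))
```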
  • FIG. 4B shows a method of grafting an object in a three-dimensional space to the head-up display of the present invention.
  • the image generating methods of (a) and (b) of FIG. 4B are combined with each other.
  • that is, the driver looks through the head-up display from the viewpoint of the camera looking at the object located in the 3D space, and recognizes the second image 112 as having a three-dimensional effect or being erected.
  • to this end, the position of the virtual camera in the 3D space may coincide with the position of the actual driver's eye-box.
  • the second image 112 output by the head-up display is formed on the 3D space based on the viewpoint of the camera and projected onto a 2D plane on the ground based on the driver's viewpoint.
  • the image actually output from the display element (LCD) in order to render a specific object (Obj) of the 3D space on the head-up display that the driver looks at can be derived from Equation (1) below:
  • I_LCD = M⁻¹ · Dv⁻¹ · Cv · C · I_Obj … (1)
  • the term Cv · C appears because the image of the specific object Obj is projected onto the ground according to the position and angle of the virtual camera, and then the image seen by the camera is obtained again at the same position and angle.
  • since the position of the virtual camera coincides with the driver's eye-box, Dv = Cv, so Dv⁻¹ · Cv cancels out and Equation (1) can be arranged as Equation (2) below:
  • I_LCD = M⁻¹ · C · I_Obj … (2)
  • in the end, the method of outputting the second image 112 on the ground using the head-up display of the present invention may be a method of converting a specific object (Obj) corresponding to the information to be transmitted into the image actually output from the display element (LCD) using M⁻¹ and C.
  • here, M⁻¹ is the inverse transformation matrix of the transformation matrix in which the optical system characteristics of the head-up display of the present invention are reflected, and C is the transformation matrix that projects the specific object (Obj, 141) onto the ground according to the position and angle of the camera (or the driver's point of view).
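  • to make the composition of Equation (2) concrete, here is a small numerical sketch in homogeneous 2D coordinates (the matrices are invented stand-ins; a real system would calibrate M from the HUD optics and derive C from the eye-box pose):

```python
import numpy as np

# Invented stand-in homographies (3x3). M: display (LCD) -> top view,
# reflecting the optics; C: object -> ground projection from the eye-box.
M = np.array([[1.1, 0.02,  5.0],
              [0.0, 1.3,   2.0],
              [0.0, 0.001, 1.0]])
C = np.array([[1.0, 0.0,   0.0],
              [0.0, 0.9,  -0.4],
              [0.0, -0.05, 1.0]])

def to_display(obj_xy):
    """Equation (2): LCD image point = M^-1 . C . object point."""
    p = np.array([obj_xy[0], obj_xy[1], 1.0])
    q = np.linalg.inv(M) @ C @ p
    return q[:2] / q[2]  # back from homogeneous coordinates

print(to_display((0.5, 1.0)))  # where one corner of Obj lands on the LCD
```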
  • the head-up display can implement the second image 112 together with the first image 111 on the image plane on the ground using the above-described theory.
  • the control method of the head-up display may deliver information to the driver through the first step and the second step.
  • At least one of the first image 111 and the second image 112 to be transmitted through the head-up display may be selected.
  • for example, when the control unit determines that information such as turn information on the road, lane information, distance information to the vehicle in front, driving direction display (or navigation information), speed limit display, or a lane departure warning needs to be output, it can select the first image 111.
  • on the other hand, the control unit can select the second image 112 when information such as a point of interest (POI), nearby facilities (e.g., gas stations, charging stations, buildings, etc.), information signs, a destination, or surrounding vehicles needs to be output.
  • At least one of the first image 111 and the second image 112 is output from the head-up display using an optical system that projects a virtual image onto the front of the vehicle.
  • the present invention provides a head-up display capable of naturally recognizing all images when the first image 111 and the second image 112 are implemented together, and a control method thereof.
  • since the actual position of the erected second image 112 is on the ground, when the driver looks at the erected image matched to the surrounding environment in augmented reality (AR), the driver may feel unnaturalness caused by a convergence mismatch between the surrounding environment and the erected image. Therefore, in the present invention, an allowable space that satisfies the convergence of both an image lying on the ground and an image erected on the ground is set, and the image plane is placed in the allowable space to eliminate this unnaturalness.
  • FIG. 5 is a conceptual diagram illustrating a concept of defining a convergence error tolerance space in the head-up display according to the present invention
  • FIG. 6 is a conceptual diagram illustrating a method of outputting images in the tolerance space of FIG. 5 .
  • the control unit of the present invention controls the display element so that at least one of the first image 111 and the second image 112 is output as a virtual image within the head-up display, more specifically, within the convergence error allowance space 151 of the optical system.
  • the convergence error allowance space of the optical system may be referred to as a convergence allowance space or an allowance space, and hereinafter, for convenience of description, it will be mainly unified as an allowance space.
  • the first image 111 and the second image 112 may be images having different postures relative to the ground within the allowable space 151 .
  • the first image 111 may be an image lying on the ground, and in this case may include cases where it coincides with the ground or is partially rotated within an allowable range.
  • the second image 112 may be an image erected on the ground, and may include a three-dimensional image or an image erected on the ground.
  • since the terms first image 111 and second image 112 are merely names for distinction, it is equally possible to refer to the image erected on the ground as the first image and the image lying on the ground as the second image.
  • a space within the convergence error range of both eyes of the driver with respect to the image plane 161 is defined as the allowable space 151 .
  • the image plane 161 may be a virtual screen on which the first image 111 and the second image 112 are located.
  • a binocular convergence error tolerance may be used.
  • a method of implementing a 3D image using a converged stereo image is mainly used in augmented reality.
  • parallax occurs in the horizontal and vertical directions based on the convergence point where the optical axes of the user's eyes intersect in the object space.
  • parallax in the horizontal direction is caused by the distance between both eyes, and when an error occurs, it causes a double image and fatigue to the user. Therefore, in augmented reality, it is possible to set a permissible range that does not cause double images and fatigue to the user by defining a convergence error tolerance.
  • the convergence error tolerance for a single optic nerve is set to 0.3 mrad
  • the US Air Force head-up display has a convergence error tolerance of 1 to 3 mrad
  • the convergence error tolerance for a vehicle head-up display is set to 3 mrad or higher.
  • the permissible range in the vertical direction for the image plane 161 matched to the ground is defined using the convergence error allowance. More specifically, the allowable space 151 may be set by a range allowing convergence errors of the first image 111 and the second image 112, respectively. Furthermore, the allowable space 151 may be set within a range allowing both convergence errors of the first image 111 and the second image 112 .
  • an allowable space 151 defined as a convergence error allowable space that expands in the vertical direction based on the image plane 161 coincident with the ground may be formed. That is, the allowable space 151 has an upper limit surface 154 and a lower limit surface 153, and the image plane 161 may be located between the upper limit surface 154 and the lower limit surface 153. .
  • the upper limit surface 154 and the lower limit surface 153 are surfaces having a maximum permissible convergence error value of the image plane based on the driver's standard, and the convergence error can be set in the range of 0.1 to 10 mrad.
  • for example, the allowable space 151 may be set to a range where the absolute value of the convergence error is 0.3 mrad or less with respect to the ground, which minimizes the unnaturalness of the second image 112 and facilitates image control.
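  • the following sketch illustrates the underlying geometry (illustrative values only: a ~64 mm interpupillary distance is assumed, and this simple vergence model is not from the patent). It computes, for a point at distance d, the depth interval whose vergence error stays within the tolerance, and shows why the allowable space widens with distance:

```python
import math

P = 0.032  # half interpupillary distance (m); assumes a ~64 mm IPD

def vergence(d):
    """Binocular convergence angle (rad) at viewing distance d."""
    return 2.0 * math.atan2(P, d)

def depth_band(d, tol_mrad=0.3):
    """Depth interval around d whose vergence error stays within tol_mrad,
    by inverting v = 2*atan(P/d), i.e., d = P / tan(v/2)."""
    tol = tol_mrad * 1e-3
    v = vergence(d)
    near = P / math.tan((v + tol) / 2.0)
    far = P / math.tan((v - tol) / 2.0) if v > tol else math.inf
    return near, far

for d in (10.0, 30.0, 100.0):
    n, f = depth_band(d)
    print(f"{d:5.0f} m -> tolerated depth range {n:6.1f} .. {f:6.1f} m")
```

Under these assumed values the band is under a meter wide at 10 m but far wider at 100 m, matching the description of an allowable space whose cross-section widens as it moves away from the head-up display.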
  • the image plane 161 may coincide with the ground, and the lower limit surface 153 may be formed below the ground.
  • the lower limit surface 153 becomes the minimum allowable position (or depth) of the first image 111
  • the upper limit surface 154 becomes the maximum allowable position (or height) of the second image 112. It can be.
  • on the other hand, the allowable space 151 may be formed to be biased upward with respect to the ground, in other words, eccentric toward the upper side.
  • the allowable space 151 may be set such that the lower limit surface 153 is closer to the ground than the upper limit surface 154 .
  • the image plane 161 may not coincide with the ground.
  • the driver can recognize that the first image 111 coincides with the ground.
  • the space for displaying the second image 112 on the ground is enlarged, it is easier to implement the second image 112 in the head-up display.
  • the lower limit surface 153 may be set to coincide with the ground so that the allowable space 151 moves to the ground. At this time, since the lower limit surface 153 of the allowable space 151 coincides with the ground, the output of the first image 111 may at least conform to the ground or be implemented within a specific range above the ground. On the other hand, the allowable space 151 where the second image 112 is output can be secured as much as possible on the ground.
  • the image plane 161 is located between the upper limit surface 154 and the lower limit surface 153 of the allowable space 151, and the first image 111, which includes an icon 111a representing speed limit 60 information and a driving guide 111b within the lane, is output at the first point 171 of the image plane 161.
  • in this case, the driver recognizes the icon 111a and the driving guide 111b as being output in line with the ground, and thus the first image 111 can be displayed on the actual road surface from the point of view of the driver.
  • also, the second image 112 is output at the second point 172 of the image plane 161 and implemented as an erected image by means of an optical illusion.
  • in this way, the first image 111 and the second image 112 are output at different heights with respect to the ground, and information is transmitted to the driver as an image aligned with the ground and an image erected on the ground.
  • the image plane 261 may be defined as a curved surface. In this way, in the present invention, the unnaturalness of the second image 112 can be more effectively resolved by using the curved image surface 261 .
  • the curved image plane 261 will be described in more detail with reference to FIGS. 7 and 8 .
  • FIG. 7 is a conceptual diagram illustrating an example of defining an image plane located in the allowable space of FIG. 5, and FIG. 8 is a conceptual diagram illustrating an example of outputting an image by positioning the image plane of FIG. 7 in the allowable space of FIG. 5.
  • the image plane 261 located in the allowable space 151 may be formed as a curved surface on which a virtual image recognized from the driver's line of sight is located. More specifically, the image plane 261 may include a curved surface having a different distance from the ground at a first point 171 and a second point 172 (see FIG. 6 above) spaced apart along the front of the vehicle.
  • for both eyes of the driver, a convergence angle (2θ) and an interpupillary distance (2p) may be defined, and a convergence tolerance (δ) of both eyes may be given.
  • at a viewing distance d, the convergence angle of both eyes then satisfies Equation (3) below:
  • 2θ = 2·arctan(p / d) … (3)
  • using this relation together with the convergence tolerance δ, the image plane 261 can be defined as a curved surface at the expression distance d, as shown in Equation (4).
  • the above formula is only one example of defining the image plane 261 as a curved surface, and various types of curved surfaces may be defined as the image plane 261 in the present invention.
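  • as one purely illustrative way to trace such a curve (reusing the vergence geometry above; the eye height, tolerance, and cap below eye level are assumptions, not the patent's Equation (4)), the surface can be raised at each forward distance to the greatest height the tolerance allows:

```python
import math

P, H = 0.032, 1.2  # assumed half-IPD (m) and driver eye height (m)

def vergence(dist):
    return 2.0 * math.atan2(P, dist)

def curve_height(x, tol_mrad=0.3, step=0.005):
    """Height of an illustrative curved image plane at forward distance x:
    the largest h (kept below eye level) whose vergence differs from the
    ground point at x by at most tol_mrad."""
    ground = vergence(math.hypot(x, H))
    h = 0.0
    while h + step < H:
        if abs(vergence(math.hypot(x, H - h - step)) - ground) * 1e3 > tol_mrad:
            break
        h += step
    return h

for x in (3.0, 5.0, 10.0, 20.0):
    print(f"{x:4.0f} m -> surface height ~ {curve_height(x):.2f} m")
```

Under these assumptions the permitted height grows with distance, i.e., the surface curves away from the ground toward the front of the vehicle, as described above.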
  • the first image 111 and the second image 112 can be implemented in the form shown in FIG. 8.
  • the curved surface may be formed to gradually move away from the ground along the front of the vehicle.
  • the curved surface may be curved in a direction away from the ground toward the front of the vehicle.
  • that is, the image plane 261 is formed such that at least a part of it is spaced upward from the ground, and since, as described with reference to FIG. 7, the image plane 261 is formed of a curved surface, it finally takes the form of a curved surface floating at a position higher than the ground, as shown in FIG. 8.
  • the image plane of the curved surface may be implemented in a form having a complex curvature within an allowable space.
  • in the present invention, however, an image plane having a single curvature will be described as the standard.
  • since the second image 112 is formed on the curved image plane 261, it can be implemented as an upright image that satisfies the allowable space 151 over a wider range of distances. That is, by defining the curved image plane 261, the unnaturalness that may appear when the second image 112 is erected at a location close to the vehicle can be further reduced.
  • FIG. 9 is a conceptual diagram illustrating an extension of the image plane of FIG. 8 .
  • the image plane 261 may be extended and formed on the upper side of the horizontal plane 181 in a curved shape. In this case, the image plane 261 may extend upward from a point where the image plane 261 and the horizontal plane 181 meet.
  • the horizontal plane 181 may also be expressed as a horizontal line, and may mean a reference plane extending in parallel in a horizontal direction from the position of the head-up display, the driver's eyes, or the eye box.
  • the space above the horizontal plane 181 in the driver's field of view can be an advantageous space for transmitting information such as a POI, text information, or a destination display as an erect image.
  • in this case, the image plane 261 extends upward from the end of the curved surface to facilitate the expression of an erect image.
  • the image plane 261 extended in the upward direction may be defined as an extension plane 262 .
  • the image surface 261 includes a curved surface and an extension surface 262, and the image surface may extend upward from one end of the curved surface. More specifically, the extension surface 262 extends upward at a point where the curved surface meets a horizontal surface.
  • the curved surface may be positioned within the allowable space of the optical system, and furthermore, the extension surface 262 may also be positioned within the allowable space.
  • the second image 112 is outputted by extending from a specific position to an upper space using the extension surface 262, so that obstruction of the field of view by other buildings or the like can be removed.
  • the image plane 261 and the extension plane 262 are optically one continuous plane, and may be defined as an extension image plane.
  • an erect image can be created at a position that floats upward with respect to the ground.
  • in this case, at least a part of the second image 112 may be formed above the horizontal plane 181.
  • as described above, the head-up display and its control method of the present invention display an image lying on the ground and an erected image within a convergence error tolerance space of the image plane based on the driver, and can thereby solve the problem of convergence mismatch between the image and the space.
  • the space capable of expressing the erected image can be further expanded.
  • a convergence error can be further minimized when expressing an image erected at various distances by applying a structure for moving an image plane.
  • the movement of the image plane will be described in more detail with reference to FIGS. 10 to 14 .
  • FIG. 10 is a conceptual diagram illustrating the concept of moving the image plane according to another embodiment of the present invention, FIG. 11 is a flowchart illustrating a control method for implementing movement of the image plane of FIG. 10, FIG. 12 is a conceptual diagram illustrating an example of the control method of FIG. 11, and FIGS. 13 and 14 are exemplary embodiments illustrating a structure of a head-up display implementing movement of the image plane of FIG. 10.
  • the controller 1150 may control the position of the image plane 361 based on an event related to a vehicle.
  • controlling the location of the image plane based on an event can provide information more effectively than providing information while being fixed at the same location uniformly.
  • for example, the image plane 361 may move between a first boundary surface 362 at a short distance from the vehicle and a second boundary surface 363 at a long distance. At this time, the location of the image plane 361 may be set based on the event.
  • the event may refer to a situation or an event that a driver should be aware of while driving a vehicle.
  • the controller 1150 may select an image to be output through the head-up display (S110) and output the image to the video screen (S120). Thereafter, when an event is detected (S130), the controller 1150 controls the position of the image plane (S140), and outputs an image to the image plane whose position is changed (S150).
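  • a minimal sketch of this S110–S150 loop follows (all helper names and distance values are hypothetical; road type stands in for the detected event):

```python
# Steps S110-S150 as a simple control loop. The distances and helpers
# are illustrative assumptions; the patent only defines the steps.
NEAR_PLANE_M, FAR_PLANE_M = 10.0, 30.0  # assumed image-plane distances

def select_plane_distance(road_type: str) -> float:
    """S140: choose the image-plane position from the detected event."""
    return FAR_PLANE_M if road_type == "highway" else NEAR_PLANE_M

def hud_loop(frames):
    plane_d = NEAR_PLANE_M
    for frame in frames:                 # each frame: sensed road type + info
        image = frame["info"]            # S110: select the image to output
        new_d = select_plane_distance(frame["road_type"])
        if new_d != plane_d:             # S130: an event is detected
            plane_d = new_d              # S140: move the image plane
        print(f"S120/S150: output {image!r} on plane at {plane_d:.0f} m")

hud_loop([{"road_type": "city", "info": "gas station icon"},
          {"road_type": "highway", "info": "gas station icon"}])
```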
  • the controller 1150 may control the distance between the image plane 361 and the vehicle based on the type of road on which the vehicle travels.
  • for example, on a general road, where the vehicle drives at a relatively low speed compared to a highway and many images need to be displayed at a short distance, an icon 112 indicating gas station information can be output as the second image on a first image plane 361a.
  • the convergence error allowance space is set at a position including the first image plane 361a, so that the convergence error of images displayed near the first image plane 361a can be minimized.
  • the vehicle may enter a different type of road, for example, a highway with a different speed limit.
  • the step of detecting the event (S130) may proceed.
  • the control unit 1150 may be configured to detect the occurrence of the event.
  • the controller 1150 may detect (or determine) whether an event has occurred based on information sensed by the sensing unit 1120 .
  • the controller 1150 may detect whether an event has occurred by comparing the sensed information with information (or reference driving information) included in a specific map. More specifically, information indicating that the road on which the vehicle is currently driving corresponds to a general road with a speed limit of 60 km/h or a highway with a speed limit of 100 km/h may be included in the specific map.
  • on the other hand, on a highway, where the vehicle travels at a relatively high speed compared to general roads and many images need to be displayed at a distance, an icon 312 indicating a gas station can be output as the second image on a second image plane 361b.
  • the convergence error allowance space is set at a position including the second image plane 361b, so that the convergence error of images displayed near the second image plane 361b can be minimized.
  • the second image plane 361b may be an image plane located farther from the vehicle than the first image plane 361a. Accordingly, the icon 312 representing gas station information on the highway may be located at a greater distance than the icon 112 representing gas station information on general roads. Furthermore, in the case of the driving guide 311b, the starting point may be located at a position farther from the vehicle than the driving guide 111b on a general road. However, the present invention is not limited thereto, and the starting point of the driving guide may be set identically regardless of the type of road.
  • since the convergence error allowance space is set at a location including the second image plane 361b, the position of the convergence error allowance space may vary according to the type of road. That is, the convergence error allowance space may be located farther away on the highway than on a general road driven at low speed.
  • the image plane is located at a short distance in a city, where many images are to be displayed nearby, and is moved to a long distance when the vehicle enters or is expected to enter a highway; in this way, the position of the image plane can be actively controlled according to the driving environment of the vehicle.
  • movement of the image plane may be implemented through hardware.
  • the position of the image plane may be controlled by adjusting at least one of a relative distance and a relative angle between the optical system and the display element of the head-up display.
  • the display element 122 and the optical system 123 move relative to each other so that the image plane moves to a position matched with the space in which the image is to be output.
  • the display element 122 may move forward and backward.
  • the relative angle with the optical system 123 may be adjusted by tilting the display element 122.
  • this arrangement can be equally applied to a configuration that outputs an image using the windshield 221.
  • the display element 222 and the optical system, specifically the first lens or mirror 224 of the optical system, move relative to each other so that the image plane moves to a position matched with the space in which the image is to be output; for example, the display element 222 may move forward and backward.
  • the relative angle with the first lens or mirror 224 of the optical system may be adjusted by tilting the display element 222.
  • the display element is configured to be moved or tilted relative to the optical system so that at least one of the relative distance and the relative angle is adjusted; through this, the image plane moves forward and backward with respect to the vehicle. A first-order optics sketch of this relationship is given after this description.
  • the head-up display and its control method are not limited to the configurations and methods of the above-described embodiments; rather, all or part of each embodiment may be selectively combined so that various modifications can be made.
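  • to make the S110–S150 sequence concrete, the following is a minimal sketch in Python of the event-driven control flow described above. It is illustrative only: the class and method names (HeadUpDisplayController, current_guidance, road_at) and the per-road-type distance values are assumptions for illustration, not names or values taken from this disclosure.

```python
# Illustrative sketch of the S110-S150 control flow: select an image,
# output it, detect a road-type event against map reference data, and
# reposition the image plane. All names and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class RoadInfo:
    road_type: str    # e.g. "general" (speed limit 60 km/h) or "highway" (100 km/h)
    speed_limit: int  # km/h, from the reference driving information in the map

# Hypothetical image-plane distances (metres) per road type.
PLANE_DISTANCE = {"general": 10.0, "highway": 50.0}

class HeadUpDisplayController:
    def __init__(self, display, sensing_unit, map_data):
        self.display = display
        self.sensing_unit = sensing_unit
        self.map_data = map_data
        self.current_road = None

    def step(self):
        image = self.sensing_unit.current_guidance()   # S110: select image
        self.display.output(image)                     # S120: output image
        road = self.detect_event()                     # S130: detect event
        if road is not None:
            self.display.set_image_plane(PLANE_DISTANCE[road.road_type])  # S140
            self.display.output(image)                 # S150: output on moved plane

    def detect_event(self):
        # Compare the sensed position with the road type / speed limit
        # recorded in the map; a change of road type counts as an event.
        road = self.map_data.road_at(self.sensing_unit.position())
        if self.current_road is None or road.road_type != self.current_road.road_type:
            self.current_road = road
            return road
        return None
```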
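  • the role of the convergence error allowance space can be illustrated with ordinary binocular vergence geometry. In the hedged sketch below, the vergence angle toward a point at distance d is taken as 2·atan(e / 2d), where e is the interpupillary distance; the mismatch between the vergence angle of a graphic's intended depth and that of the image plane grows as the two distances diverge, which is why the allowance space is kept around the active image plane. The interpupillary distance is an assumed typical value, not a figure from this disclosure.

```python
# Vergence-geometry illustration for the convergence error allowance space.
# The interpupillary distance below is an assumed typical value.
import math

IPD = 0.065  # interpupillary distance in metres (assumed)

def vergence_angle(distance_m: float) -> float:
    """Binocular vergence angle (radians) toward a point at the given distance."""
    return 2.0 * math.atan(IPD / (2.0 * distance_m))

def convergence_error(graphic_depth_m: float, image_plane_m: float) -> float:
    """Vergence mismatch (radians) between intended depth and image plane."""
    return abs(vergence_angle(graphic_depth_m) - vergence_angle(image_plane_m))

# A graphic meant to appear at 12 m mismatches a 10 m image plane far less
# than it mismatches a 50 m image plane, so moving the plane (and with it
# the allowance space) to match the road type keeps the error small.
print(convergence_error(12.0, 10.0))  # ~0.001 rad
print(convergence_error(12.0, 50.0))  # ~0.004 rad
```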
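  • the forward/backward actuation of the display element can be related to the image-plane distance with a first-order model. Treating the whole optical system as a single ideal converging element of focal length f, with the display placed inside the focal plane so that a magnified virtual image is formed, the thin-lens/mirror equation gives the required display-to-optic distance for a virtual image at distance d. This is a deliberately simplified sketch under that assumption: a real HUD train of freeform mirrors and a windshield would require ray-tracing, and the focal length below is an assumed value.

```python
# First-order model: single ideal converging element of focal length f,
# display placed inside the focal plane so the image is virtual and enlarged.
# With 1/s_o + 1/s_i = 1/f and a virtual image (s_i = -d, d > 0):
#     s_o = f * d / (d + f)
# As d -> infinity, s_o -> f: pushing the image plane far away means moving
# the display toward the focal plane of the optic.

def display_distance(focal_length_m: float, image_distance_m: float) -> float:
    """Display-to-optic distance for a virtual image at image_distance_m."""
    return focal_length_m * image_distance_m / (image_distance_m + focal_length_m)

f = 0.20  # assumed effective focal length of the HUD optic, in metres

for d in (3.0, 10.0, 50.0):
    print(f"image plane at {d:>5.1f} m -> display at {display_distance(f, d) * 1000:.1f} mm")
# Output spans roughly 187.5 mm to 199.2 mm: millimetre-scale travel of the
# display element is enough to sweep the image plane from 3 m to 50 m.
```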


Abstract

The present invention relates to a head-up display and a method for controlling the same. The head-up display of a vehicle according to the present invention comprises: a display element for emitting light; an optical system for controlling a path of the light so that a virtual image is output from the front side of the vehicle toward the ground; and a controller for controlling the display element such that at least one of a first image and a second image is output by the virtual image within an allowance space of the optical system, wherein the first image and the second image have different postures with respect to the ground within the allowance space, and the allowance space is formed to be biased upward with respect to the ground and includes an image plane on which the first image and the second image are positioned.
PCT/KR2022/000125 2021-07-21 2022-01-05 Head-up display and control method therefor WO2023003109A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2021-0095895 2021-07-21
KR10-2021-0095896 2021-07-21
KR1020210095895A KR20230014491A (ko) 2021-07-21 2021-07-21 Head-up display and control method therefor
KR1020210095896A KR20230014492A (ko) 2021-07-21 2021-07-21 Head-up display and control method therefor

Publications (1)

Publication Number Publication Date
WO2023003109A1 true WO2023003109A1 (fr) 2023-01-26

Family

ID=84980190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/000125 WO2023003109A1 (fr) 2021-07-21 2022-01-05 Head-up display and control method therefor

Country Status (1)

Country Link
WO (1) WO2023003109A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1751499B1 * 2004-06-03 2012-04-04 Making Virtual Solid, L.L.C. En-route navigation display method and apparatus using head-up display
US20160085301A1 (en) * 2014-09-22 2016-03-24 The Eye Tribe Aps Display visibility based on eye convergence
KR20200040507A * 2018-10-10 2020-04-20 네이버랩스 주식회사 Three-dimensional augmented reality head-up display that implements augmented reality from the driver's viewpoint by positioning an image on the ground
KR20210087271A * 2020-01-02 2021-07-12 삼성전자주식회사 Method and apparatus for displaying 3D augmented reality navigation information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEE JIN-HO, YANUSIK IGOR, CHOI YOONSUN, KANG BYONGMIN, HWANG CHANSOL, PARK JUYONG, NAM DONGKYUNG, HONG SUNGHOON: "Automotive augmented reality 3D head-up display based on light-field rendering with eye-tracking", OPTICS EXPRESS, vol. 28, no. 20, 28 September 2020 (2020-09-28), pages 1 - 17, XP093027592, DOI: 10.1364/OE.404318 *

Similar Documents

Publication Publication Date Title
US10890762B2 (en) Image display apparatus and image display method
US10551619B2 (en) Information processing system and information display apparatus
WO2019209057A1 (fr) Method for determining position of vehicle and vehicle using same
WO2020076090A1 (fr) Three-dimensional augmented reality head-up display for positioning virtual image on ground by means of windshield reflection method
WO2021137485A1 (fr) Method and device for displaying three-dimensional (3D) augmented reality navigation information
US20180339591A1 Information display apparatus
JP2017211366A (ja) Mobile body system and information display device
JP2014181025A (ja) Stereoscopic head-up display with dynamic focal plane
WO2019235743A1 (fr) Robot for moving via waypoint on basis of obstacle avoidance, and method for moving
WO2016190135A1 (fr) Display system for vehicle
WO2018143589A1 (fr) Method and device for outputting lane information
JP2017094882A (ja) Virtual image generation system, virtual image generation method, and computer program
CN210348060U (zh) Augmented reality head-up display device and system
WO2018135745A1 (fr) Method and device for generating image for indicating object on periphery of vehicle
JP2021121536A (ja) Control device, image display method, and program
US11320652B2 (en) Display device, object apparatus and display method
WO2023003109A1 (fr) Head-up display and control method therefor
WO2020138760A1 (fr) Electronic device and control method therefor
WO2022075514A1 (fr) Method and system for controlling head-up display
JP2018020779A (ja) Vehicle information projection system
WO2021085691A1 (fr) Method for providing image by vehicle navigation device
KR102543899B1 (ko) Head-up display and control method therefor
WO2021100917A1 (fr) Head-up display apparatus
KR20230014492A (ko) Head-up display and control method therefor
KR20230014491A (ko) Head-up display and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22845981

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE