US10926638B1 - Method and apparatus that reformats content of eyebox - Google Patents
- Publication number
- US10926638B1 (application US16/661,227)
- Authority
- US
- United States
- Prior art keywords
- occupant
- content
- eyebox
- position coordinates
- eye position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- B60K35/65—Instruments specially adapted for specific vehicle types or users, e.g. for left- or right-hand drive
- B60K35/654—Instruments specially adapted for specific vehicle types or users, e.g. for left- or right-hand drive the user being the driver
-
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/21—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
- B60K35/23—Head-up displays [HUD]
- B60K35/235—Head-up displays [HUD] with means for detecting the driver's gaze direction or eye points
-
- G06T3/0006
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/149—Instrument input by detecting viewing direction not otherwise provided for
-
- B60K2360/20—Optical features of instruments
- B60K2360/31—Virtual images
-
- B60K2360/33—Illumination features
- B60K2360/336—Light guides
-
- B60K2370/149, B60K2370/1529, B60K2370/194, B60K2370/31, B60K2370/336
- B60K35/10—Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
-
- B60K35/28—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
- B60K35/285—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver for improving awareness by directing driver's gaze direction or eye points
Definitions
- Apparatuses and methods consistent with exemplary embodiments relate to reformatting displayed content. More particularly, apparatuses and methods consistent with exemplary embodiments relate to reformatting displayed content of a head up display.
- One or more exemplary embodiments provide a method and an apparatus that reformats clipped content so that the content fits within a portion of the eyebox that is visible to an occupant. More particularly, one or more exemplary embodiments provide a method and an apparatus that scale, transform, translate, remove or reformat content of the clipped object so the object is visible to an occupant in its entirety without being clipped.
- a method that reformats content visible in an eyebox includes detecting a position of the eyes of an occupant and determining eye position coordinates, determining whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modifying the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
- the eye position coordinates may reflect a position in space with respect to the eyebox.
- the eyebox may be a virtual area in space from which the image projected by the head up display is entirely visible to the occupant, and the modifying the clipped object is performed such that the modified object is displayed within a usable eyebox, which is a subset of the virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the head up display may include a light source, a display, a mirror and a combiner.
- the modifying the clipped object may include translating the clipped object in a direction corresponding to the eye position coordinates.
- the modifying the clipped object may include scaling the clipped object to a smaller size so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the modifying the clipped object may include reorganizing, reducing, or removing content visible in the clipped object so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the head up display may include two or more virtual image planes, and the modifying the clipped object may include moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
- the determining whether the at least one object present in the eyebox of the head up display is clipped may include determining at least one of coordinates and size of a clipped portion of the clipped object based on the detected eye position coordinates and the distance or direction of the eye position coordinates from a center of the eyebox.
- the modifying the clipped object may include transforming, translating, removing and reformatting content of the clipped object according to the determined at least one of coordinates and size of the clipped portion of the clipped object.
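The determination step above can be sketched with simple rectangle arithmetic. The following is an illustrative model only, not the patent's implementation: rectangles are (x, y, width, height) tuples, and the window visible to the occupant is assumed to shift opposite to the eye's offset from the eyebox center; the function names are hypothetical.

```python
def clip_amount(obj, eyebox, eye, center):
    """Return (dw, dh): how much of the object's width and height are clipped.

    `obj` and `eyebox` are (x, y, w, h) rectangles; `eye` and `center` are
    (x, y) points. The visible window is modeled as the eyebox shifted
    opposite to the eye's offset from the eyebox center (an assumption).
    """
    dx, dy = eye[0] - center[0], eye[1] - center[1]
    wx, wy = eyebox[0] - dx, eyebox[1] - dy
    # Visible span of the object inside the shifted window.
    vis_w = max(0.0, min(obj[0] + obj[2], wx + eyebox[2]) - max(obj[0], wx))
    vis_h = max(0.0, min(obj[1] + obj[3], wy + eyebox[3]) - max(obj[1], wy))
    return obj[2] - vis_w, obj[3] - vis_h

def is_clipped(obj, eyebox, eye, center):
    """True if any part of the object falls outside the visible window."""
    dw, dh = clip_amount(obj, eyebox, eye, center)
    return dw > 0 or dh > 0
```

With the eye at the eyebox center the object is fully visible; moving the eye above the center shifts the visible window down and clips the object's top, and `clip_amount` reports both the fact and the size of the clipped portion.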
- according to another exemplary embodiment, there is provided an apparatus that reformats content visible in an eyebox.
- the apparatus includes at least one memory comprising computer executable instructions; and at least one processor configured to read and execute the computer executable instructions.
- the computer executable instructions cause the at least one processor to detect a position of the eyes of an occupant and determine eye position coordinates, determine whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modify the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
- the eye position coordinates may reflect a position in space with respect to the eyebox.
- the eyebox may be a virtual area in space from which the image projected by the head up display is entirely visible to the occupant, and the computer executable instructions may further cause the at least one processor to modify the clipped object such that the modified object is displayed within a usable eyebox, which is a subset of the virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the apparatus may further include the head up display including a light source, a display, a mirror and a combiner.
- the computer executable instructions may further cause the at least one processor to modify the clipped object by translating the clipped object in a direction corresponding to the eye position coordinates.
- the computer executable instructions may further cause the at least one processor to modify the clipped object by scaling the clipped object to a smaller size so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the computer executable instructions may further cause the at least one processor to modify the clipped object by reorganizing, reducing, or removing content visible in the clipped object so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the apparatus may further include the head up display, the head up display comprising two or more virtual image planes, and the computer executable instructions may further cause the at least one processor to modify the clipped object by moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
- the computer executable instructions may further cause the at least one processor to determine whether the at least one object present in the eyebox of the head up display is clipped by determining at least one of coordinates and size of a clipped portion of the clipped object based on the detected eye position coordinates and the distance or direction of the eye position coordinates from a center of the eyebox.
- the computer executable instructions may further cause the at least one processor to modify the clipped object by transforming, translating, removing and reformatting content of the clipped object according to the determined at least one of coordinates and size of the clipped portion of the clipped object.
- FIG. 1 shows a block diagram of an apparatus that reformats content of an eyebox according to an exemplary embodiment.
- FIG. 2 shows a flowchart for a method that reformats content of an eyebox according to an exemplary embodiment.
- FIG. 3 shows an illustration of an occupant viewing a head up display apparatus according to an aspect of an exemplary embodiment.
- FIG. 4 shows illustrations of reformatted content of an eyebox according to an aspect of an exemplary embodiment.
- Exemplary embodiments are described in detail with reference to FIGS. 1-4 of the accompanying drawings, in which like reference numerals refer to like elements throughout.
- When it is stated that a first element is "connected to," "attached to," "formed on," or "disposed on" a second element, the first element may be connected directly to, formed directly on or disposed directly on the second element, or there may be intervening elements between the first element and the second element, unless it is stated that the first element is "directly" connected to, attached to, formed on, or disposed on the second element.
- When a first element sends information to or receives information from a second element, the first element may send or receive the information directly, via a bus, via a network, or via intermediate elements, unless the first element is indicated to send or receive information "directly" to or from the second element.
- one or more of the elements disclosed may be combined into a single device or combined into one or more devices.
- individual elements may be provided on separate devices.
- Head up displays (HUDs) provide occupants with information while minimizing the time that occupants' gaze and attention are off of the road.
- Head up displays have a limited eyebox due to the limited aperture and size of optical components inside the HUD.
- the eyebox is an area in space from which the projected content of the HUD is entirely visible to the occupant without post-image processing.
- the center of the eyebox is designated as the center of the eyellipse defined by vehicle geometry.
- the usable eyebox may be a subset of the virtual area of the eyebox.
- the usable eyebox is an area where content may be viewed unclipped in its entirety from the current eye position coordinates of the occupant if the content is modified to fit in the usable eyebox.
- the limited size of an eyebox may create a clipping effect on projected information or content when one or both eyes of the occupant are positioned outside of the eyebox. If a viewer's eye is located outside of the eyebox or is offset from a position where the entire content or projected information is viewable, the projected information or content is not seen by that eye or may be clipped when viewed from the eye position coordinates of the viewer. To address the clipping effect due to limited eyebox size and the clipped projected object, an operator may have to reposition their head or body. Alternatively, the content displayed by the head up display may be modified or reformatted into the usable eyebox such that the content appears complete or unclipped, thereby effectively allowing the occupant or operator to view complete or coherent content.
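The usable eyebox described above can be modeled as the overlap between the nominal eyebox and the window actually visible from the occupant's current eye position. The following is a simplified 2-D sketch; the shifted-window geometry and function names are illustrative assumptions, not the patent's optics.

```python
def intersect(a, b):
    """Intersection of two (x, y, w, h) rectangles; w and h are 0 if disjoint."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1 = min(a[0] + a[2], b[0] + b[2])
    y1 = min(a[1] + a[3], b[1] + b[3])
    return (x0, y0, max(0.0, x1 - x0), max(0.0, y1 - y0))

def usable_eyebox(eyebox, eye, center):
    """Usable eyebox for the current eye position.

    The visible window is modeled as the eyebox shifted opposite to the
    eye's offset from the eyebox center (a simplifying assumption); the
    usable eyebox is that window's overlap with the nominal eyebox.
    """
    dx, dy = eye[0] - center[0], eye[1] - center[1]
    shifted = (eyebox[0] - dx, eyebox[1] - dy, eyebox[2], eyebox[3])
    return intersect(eyebox, shifted)
```

With the eye at the eyebox center the usable eyebox equals the full eyebox; as the eye moves off center, the usable region shrinks, which is the region content must be reformatted into.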
- FIG. 1 shows a block diagram of an apparatus that reformats content of an eyebox 100 according to an exemplary embodiment.
- the apparatus that reformats content of an eyebox 100 includes a controller 101, a power supply 102, a storage 103, an output 104, a user input 106, a sensor 107, and a communication device 108.
- the apparatus that reformats content of an eyebox 100 is not limited to the aforementioned configuration and may be configured to include additional elements and/or omit one or more of the aforementioned elements.
- the apparatus that reformats content of an eyebox 100 may be implemented as part of a vehicle or as a standalone component.
- the controller 101 controls the overall operation and function of the apparatus that reformats content of an eyebox 100.
- the controller 101 may control one or more of a storage 103, an output 104, a user input 106, a sensor 107, and a communication device 108 of the apparatus that reformats content of an eyebox 100.
- the controller 101 may include one or more from among a processor, a microprocessor, a central processing unit (CPU), a graphics processor, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, circuitry, and a combination of hardware, software and firmware components.
- the controller 101 is configured to send and/or receive information from one or more of the storage 103, the output 104, the user input 106, the sensor 107, and the communication device 108 of the apparatus that reformats content of an eyebox 100.
- the information may be sent and received via a bus or network, or may be directly read or written to/from one or more of the storage 103, the output 104, the user input 106, the sensor 107, and the communication device 108 of the apparatus that reformats content of an eyebox 100.
- suitable network connections include a controller area network (CAN), a media oriented system transfer (MOST), a local interconnection network (LIN), a local area network (LAN), and other appropriate connections such as Ethernet.
- the power supply 102 provides power to one or more of the controller 101, the storage 103, the output 104, the user input 106, the sensor 107, and the communication device 108 of the apparatus that reformats content of an eyebox 100.
- the power supply 102 may include one or more from among a battery, an outlet, a capacitor, a solar energy cell, a generator, a wind energy device, an alternator, etc.
- the storage 103 is configured to store and retrieve information used by the apparatus that reformats content of an eyebox 100.
- the storage 103 may be controlled by the controller 101 to store and retrieve information about content, objects, eye position, and the eyebox.
- the storage 103 may also include the computer instructions configured to be executed by a processor to perform the functions of the apparatus that reformats content of an eyebox 100.
- the information about objects or content may include one or more from among dimensions, area, priority or importance, and displayed information.
- the information about eye position may include coordinate information.
- the information about the eyebox may include eyebox size, coordinate bounds, and center coordinates.
- the storage 103 may include one or more from among floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), magneto-optical disks, ROMs (Read Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, cache memory, and other types of media/machine-readable media suitable for storing machine-executable instructions.
- the output 104 outputs information in one or more forms including: visual, audible and/or haptic form.
- the output 104 may be controlled by the controller 101 to provide outputs to the user of the apparatus that reformats content of an eyebox 100.
- the output 104 may include one or more from among a speaker, a display, a transparent display, a centrally-located display, a head up display, a windshield display, a haptic feedback device, a vibration device, a tactile feedback device, a tap-feedback device, a holographic display, an instrument light, an indicator light, etc.
- the output 104 is a head up display that displays objects or content using graphical indicators that show speed, temperature or other information.
- the graphical indicator may have adjustable features including its shading, transparency, size, size relative to other graphical indicators, color, and shape.
- the user input 106 is configured to provide information and commands to the apparatus that reformats content of an eyebox 100 .
- the user input 106 may be used to provide user inputs, etc., to the controller 101.
- the user input 106 may include one or more from among a touchscreen, a keyboard, a soft keypad, a button, a motion detector, a voice input detector, a microphone, a camera, a trackpad, a mouse, a touchpad, etc.
- the user input 106 may be configured to receive a user input to acknowledge or dismiss the notification output by the output 104.
- the user input 106 may also be configured to receive a user input to cycle through objects, graphical indicators or content on the head up display.
- the sensor 107 is configured to detect an occupant.
- the sensor 107 may be one or more sensors from among a radar sensor, a microwave sensor, an ultrasonic sensor, a camera, an infrared sensor, a LIDAR, and a laser sensor.
- the sensor 107 is implemented as a camera in a driver monitoring system.
- the controller 101 may receive information about the eye position of the occupant from the sensor 107.
- the information from the sensor 107 may be provided to the controller 101 via a bus, the storage 103, or the communication device 108.
- the communication device 108 may be used by the apparatus that reformats content of an eyebox 100 to communicate with various types of external apparatuses according to various communication methods.
- the communication device 108 may be used to send/receive object information to/from the controller 101 of the apparatus that reformats content of an eyebox 100.
- the communication device 108 may include various communication modules such as one or more from among a telematics unit, a broadcast receiving module, a near field communication (NFC) module, a GPS receiver, a wired communication module, or a wireless communication module.
- the broadcast receiving module may include a terrestrial broadcast receiving module including an antenna to receive a terrestrial broadcast signal, a demodulator, and an equalizer, etc.
- the NFC module is a module that communicates with an external apparatus located at a nearby distance according to an NFC method.
- the GPS receiver is a module that receives a GPS signal from a GPS satellite and detects a current location.
- the wired communication module may be a module that receives information over a wired network such as a local area network, a controller area network (CAN), or an external network.
- the wireless communication module is a module that is connected to an external network by using a wireless communication protocol such as an IEEE 802.11 protocol (e.g. Wi-Fi) or WiMAX and communicates with the external network.
- the wireless communication module may further include a mobile communication module that accesses a mobile communication network and performs communication according to various mobile communication standards such as 3rd generation (3G), 3rd Generation Partnership Project (3GPP), long term evolution (LTE), Bluetooth, EVDO, CDMA, GPRS, EDGE or ZigBee.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may be configured to detect a position of the eyes of an occupant and determine eye position coordinates, determine whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modify the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may be configured to modify the clipped object such that the modified object is displayed within a usable eyebox, which is a subset of the virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by translating the clipped object in a direction corresponding to the eye position coordinates.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by scaling the clipped object to a smaller size so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to control to modify the clipped object by reorganizing, reducing, or removing content visible in the clipped object so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
- the controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to determine whether the at least one object present in the eyebox of the head up display is clipped by determining at least one of coordinates and size of a clipped portion of the clipped object based on the detected eye position coordinates and the distance or direction of the eye position coordinates from a center of the eyebox. The controller 101 may then modify the clipped object by transforming, translating, removing and reformatting content of the clipped object according to the determined at least one of coordinates and size of the clipped portion of the clipped object.
- FIG. 2 shows a flowchart for a method that reformats content of an eyebox according to an exemplary embodiment.
- the method of FIG. 2 may be performed by the apparatus that reformats content of an eyebox 100 or may be encoded into a computer readable medium as instructions that are executable by a computer to perform the method.
- the position of the eyes of the occupant is detected and the eye position coordinates are determined in operation S210. Based on the eye position coordinates, it is then determined whether the at least one object present in the eyebox is clipped in operation S220.
- the clipped object is modified so that the object becomes unclipped when viewed by the occupant from the eye position coordinates.
- the modification of the object may include one or more of scaling, transforming, translating, removing and reformatting content of the object so that the content fits into a usable eyebox.
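The detection and modification operations above can be combined into one pass. This sketch reuses the same simplified shifted-window geometry as before and an assumed modification order (scale the object down if it cannot fit, then translate it into the visible window); the function name and geometry are illustrative, not the patent's implementation.

```python
def reformat_step(obj, eyebox, eye, center):
    """One pass of the method: detect clipping and, if needed, modify
    the object by scaling it down to fit the visible window and then
    translating it so it lies entirely inside that window.

    Rectangles are (x, y, w, h); returns the possibly modified object.
    """
    dx, dy = eye[0] - center[0], eye[1] - center[1]
    # Visible window shifts opposite to the eye offset (assumption).
    wx, wy, ww, wh = eyebox[0] - dx, eyebox[1] - dy, eyebox[2], eyebox[3]
    x, y, w, h = obj
    # Scale down (never up) if the object is larger than the window.
    s = min(ww / w, wh / h, 1.0)
    w, h = w * s, h * s
    # Translate so the object lies inside the window.
    x = min(max(x, wx), wx + ww - w)
    y = min(max(y, wy), wy + wh - h)
    return (x, y, w, h)
```

An object already fully visible is returned unchanged; an object clipped because the eye moved above center is translated down into the shifted window.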
- FIG. 3 shows an illustration of an occupant viewing a head up display apparatus according to an aspect of an exemplary embodiment.
- the occupant 301 is able to view virtual image 311 through the eyebox 302.
- the eyebox is determined by the apertures of the optical components in the head up display and the viewing angle of the picture generating unit (PGU).
- the head up display apparatus includes a picture generation unit 305 comprising a light source and a display.
- a fold mirror 306 reflects the picture generated by the picture generation unit 305 toward an optical component 307, which creates an optical path 304.
- the fold mirror 306 may be planar or aspherical.
- the optical component 307 may be a combiner or a windshield.
- FIG. 4 shows illustrations of reformatted content of an eyebox according to an aspect of an exemplary embodiment. Referring to FIG. 4, three examples of content modifications are shown. However, the apparatus and method that reformat content are not limited to these three examples and may include other methods of reformatting content not shown in FIG. 4.
- Illustration 411 shows content clipped from the bottom when viewed by an occupant from a position that is above or higher than the eyebox. If this type of clipping occurs, then depending on the position of the eyes of the occupant, the content may be reorganized and reduced, as shown in illustration 412, such that only the speed is visible in an unclipped manner to the occupant from the occupant's detected eye position coordinates.
- the content of the object may be prioritized by importance and the content may be displayed according to the importance and the area of the eyebox that is viewable by the occupant from the occupant's detected eye position.
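The importance-based display described above can be sketched as a greedy selection; the item tuples, the numeric priority scale, and the height budget are all illustrative assumptions rather than anything specified by the patent.

```python
def reduce_by_priority(items, available_height):
    """Keep the highest-priority content items that fit in the space
    viewable from the occupant's eye position.

    `items` is a list of (name, priority, height) tuples; higher
    priority wins. Returns the names of the items kept, in priority order.
    """
    kept, used = [], 0.0
    for name, priority, height in sorted(items, key=lambda it: -it[1]):
        if used + height <= available_height:
            kept.append(name)
            used += height
    return kept
```

With a small viewable area only the top-priority item (e.g. speed, as in illustration 412) survives; as the area grows, lower-priority items are restored.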
- Illustration 421 shows content clipped from the bottom when viewed by an occupant from a position that is below the eyebox. If this type of clipping occurs, then depending on the position of the eyes of the occupant, the content may be scaled to a smaller size and translated, as shown in illustration 422, such that the entire content is visible in an unclipped manner to the occupant from the occupant's detected eye position coordinates. In this example, the content or the object may be scaled to a reduced size according to the area of the eyebox that is viewable by the occupant from the occupant's detected eye position.
- Illustration 431 shows content clipped from the bottom when viewed by an occupant from a position above or higher than the eyebox. If this type of clipping occurs, then depending on the image size viewable at the position of the eyes of the occupant, the content may be translated upwards and skewed, as shown in illustration 432, such that the entire content is visible in an unclipped manner to the occupant from the occupant's detected eye position coordinates. In this example, the content or the object still fits in the area of the eyebox that is viewable by the occupant, and so the content or the object is moved to an area of the eyebox that is viewable by the occupant from the occupant's detected eye position.
- the processes, methods, or algorithms disclosed herein can be deliverable to/implemented by a processing device, controller, or computer, which can include any existing programmable electronic control device or dedicated electronic control device.
- the processes, methods, or algorithms can be stored as data and instructions executable by a controller or computer in many forms including, but not limited to, information permanently stored on non-writable storage media such as ROM devices and information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media.
- the processes, methods, or algorithms can also be implemented in a software executable object.
- the processes, methods, or algorithms can be embodied in whole or in part using suitable hardware components, such as Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software and firmware components.
Abstract
A method and apparatus that reformat content of an eyebox are provided. The method includes: detecting a position of eyes of an occupant and determining eye position coordinates, determining whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modifying the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
Description
Apparatuses and methods consistent with exemplary embodiments relate to reformatting displayed content. More particularly, apparatuses and methods consistent with exemplary embodiments relate to reformatting displayed content of a head up display.
One or more exemplary embodiments provide a method and an apparatus that reformats clipped content so that the content fits within a portion of the eyebox that is visible to an occupant. More particularly, one or more exemplary embodiments provide a method and an apparatus that scale, transform, translate, remove or reformat content of the clipped object so the object is visible to an occupant in its entirety without being clipped.
According to an aspect of an exemplary embodiment, a method that reformats content visible in an eyebox is provided. The method includes detecting a position of eyes of an occupant and determining eye position coordinates, determining whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modifying the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
The eye position coordinates may reflect a position in space with respect to the eyebox.
The eyebox may be a virtual area in space from which the image projected by the head up display is entirely visible to the occupant, and the modifying the clipped object is performed such that the modified object is displayed within a usable eyebox, which is a subset of the virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The head up display may include a light source, a display, a mirror and a combiner.
The modifying the clipped object may include translating the clipped object in a direction corresponding to the eye position coordinates.
The modifying the clipped object may include scaling the clipped object to a smaller size so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The modifying the clipped object may include reorganizing, reducing, or removing content visible in the clipped object so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The head up display may include two or more virtual image planes, and the modifying the clipped object may include moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
The determining whether the at least one object present in the eyebox of the head up display is clipped may include determining at least one of coordinates and size of a clipped portion of the clipped object based on the detected eye position coordinates and the distance or direction of the eye position coordinates from a center of the eyebox.
The modifying the clipped object may include transforming, translating, removing and reformatting content of the clipped object according to the determined at least one of coordinates and size of clipped portion of the clipped object.
According to an aspect of another exemplary embodiment, an apparatus that reformats content visible in an eyebox is provided. The apparatus includes at least one memory comprising computer executable instructions; and at least one processor configured to read and execute the computer executable instructions. The computer executable instructions cause the at least one processor to detect a position of eyes of an occupant and determine eye position coordinates, determine whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modify the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
The eye position coordinates may reflect a position in space with respect to the eyebox. The eyebox may be a virtual area in space from which the image projected by the head up display is entirely visible to the occupant, and the computer executable instructions may further cause the at least one processor to modify the clipped object such that the modified object is displayed within a usable eyebox, which is a subset of the virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The apparatus may further include the head up display including a light source, a display, a mirror and a combiner.
The computer executable instructions may further cause the at least one processor to modify the clipped object by translating the clipped object in a direction corresponding to the eye position coordinates.
The computer executable instructions may further cause the at least one processor to modify the clipped object by scaling the clipped object to a smaller size so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The computer executable instructions may further cause the at least one processor to modify the clipped object by reorganizing, reducing, or removing content visible in the clipped object so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The apparatus may further include the head up display, the head up display comprising two or more virtual image planes, and the computer executable instructions may further cause the at least one processor to modify the clipped object by moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
The computer executable instructions may further cause the at least one processor to determine whether the at least one object present in the eyebox of the head up display is clipped by determining at least one of coordinates and size of a clipped portion of the clipped object based on the detected eye position coordinates and the distance or direction of the eye position coordinates from a center of the eyebox.
The computer executable instructions may further cause the at least one processor to modify the clipped object by transforming, translating, removing and reformatting content of the clipped object according to the determined at least one of coordinates and size of clipped portion of the clipped object. Other objects, advantages and novel features of the exemplary embodiments will become more apparent from the following detailed description of exemplary embodiments and the accompanying drawings.
An apparatus and method that reformat content of an eyebox will now be described in detail with reference to FIGS. 1-4 of the accompanying drawings in which like reference numerals refer to like elements throughout.
The following disclosure will enable one skilled in the art to practice the inventive concept. However, the exemplary embodiments disclosed herein are merely exemplary and do not limit the inventive concept to exemplary embodiments described herein. Moreover, descriptions of features or aspects of each exemplary embodiment should typically be considered as available for aspects of other exemplary embodiments.
It is also understood that where it is stated herein that a first element is “connected to,” “attached to,” “formed on,” or “disposed on” a second element, the first element may be connected directly to, formed directly on or disposed directly on the second element or there may be intervening elements between the first element and the second element, unless it is stated that a first element is “directly” connected to, attached to, formed on, or disposed on the second element. In addition, if a first element is configured to “send” or “receive” information from a second element, the first element may send or receive the information directly to or from the second element, send or receive the information via a bus, send or receive the information via a network, or send or receive the information via intermediate elements, unless the first element is indicated to send or receive information “directly” to or from the second element.
Throughout the disclosure, one or more of the elements disclosed may be combined into a single device or combined into one or more devices. In addition, individual elements may be provided on separate devices.
Vehicles are equipped with head up displays that provide occupants with information while minimizing the time during which the occupants' gaze and attention are off the road. Head up displays have a limited eyebox due to the limited aperture and size of the optical components inside the HUD. The eyebox is an area in space from which the projected content of the HUD is entirely visible to the occupant without post-image processing. The center of the eyebox is designated as the center of the eyellipse defined by the vehicle geometry. The usable eyebox may be a subset of the virtual area of the eyebox. The usable eyebox is an area from which content may be viewed unclipped in its entirety from the current eye position coordinates of the occupant, provided the content is modified to fit in the usable eyebox.
The limited size of an eyebox may create a clipping effect on projected information or content when one or both eyes of the occupant are positioned outside of the eyebox. If a viewer's eye is located outside of the eyebox or is offset from a position where the entire content or projected information is viewable, the projected information or content is not seen by that eye or may be clipped when viewed from the eye position coordinates of the viewer. To address the clipping effect caused by the limited eyebox size, an operator may have to reposition his or her head or body. Alternatively, the content displayed by the head up display may be modified or reformatted into the usable eyebox such that the content appears complete or unclipped, thereby effectively allowing the occupant or operator to view complete and coherent content.
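As a rough illustration of the clipping effect, the sketch below models the clipped fraction of the projected image as growing linearly once the eye moves past the eyebox edge. This linear model, the coordinate convention, and the function name are simplifying assumptions made for illustration; real HUD optics are more complex:

```python
def clipped_fraction(eye_y, eyebox_center_y, eyebox_half_height):
    """Estimate what fraction of the image is clipped, and from which
    side, given the eye's vertical position (illustrative model only)."""
    offset = eye_y - eyebox_center_y
    overshoot = abs(offset) - eyebox_half_height
    if overshoot <= 0:
        return 0.0, "none"          # eye inside the eyebox: no clipping
    frac = min(1.0, overshoot / eyebox_half_height)
    # An eye above the eyebox sees the image clipped from the bottom,
    # an eye below sees it clipped from the top (as in FIG. 4).
    side = "bottom" if offset > 0 else "top"
    return frac, side

print(clipped_fraction(eye_y=70.0, eyebox_center_y=50.0,
                       eyebox_half_height=15.0))  # roughly one third clipped
```

The returned fraction and side correspond to the "coordinates and size of a clipped portion" that the controller determines from the detected eye position and its distance from the eyebox center.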
The controller 101 controls the overall operation and function of the apparatus that reformats content of an eyebox 100. The controller 101 may control one or more of a storage 103, an output 104, a user input 106, a sensor 107, and a communication device 108 of the apparatus that reformats content of an eyebox 100. The controller 101 may include one or more from among a processor, a microprocessor, a central processing unit (CPU), a graphics processor, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, circuitry, and a combination of hardware, software and firmware components.
The controller 101 is configured to send and/or receive information from one or more of the storage 103, the output 104, the user input 106, the sensor 107, and the communication device 108 of the apparatus that reformats content of an eyebox 100. The information may be sent and received via a bus or network, or may be directly read or written to/from one or more of the storage 103, the output 104, the user input 106, the sensor 107, and the communication device 108 of the apparatus that reformats content of an eyebox 100. Examples of suitable network connections include a controller area network (CAN), a media oriented system transfer (MOST), a local interconnection network (LIN), a local area network (LAN), and other appropriate connections such as Ethernet.
The power supply 102 provides power to one or more of the controller 101, the storage 103, the output 104, the user input 106, the sensor 107, and the communication device 108 of the apparatus that reformats content of an eyebox 100. The power supply 102 may include one or more from among a battery, an outlet, a capacitor, a solar energy cell, a generator, a wind energy device, an alternator, etc.
The storage 103 is configured for storing information and retrieving information used by the apparatus that reformats content of an eyebox 100. The storage 103 may be controlled by the controller 101 to store and retrieve information about content, objects, eye position, and the eyebox. The storage 103 may also include the computer instructions configured to be executed by a processor to perform the functions of the apparatus that reformats content of an eyebox 100. The information about objects or content may include one or more from among dimensions, area, priority or importance, and displayed information. The information about eye position may include coordinate information. The information about the eyebox may include eyebox size, coordinate bounds, center coordinates.
The storage 103 may include one or more from among floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), magneto-optical disks, ROMs (Read Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, cache memory, and other type of media/machine-readable medium suitable for storing machine-executable instructions.
The output 104 outputs information in one or more forms including: visual, audible and/or haptic form. The output 104 may be controlled by the controller 101 to provide outputs to the user of the apparatus that reformats content of an eyebox 100. The output 104 may include one or more from among a speaker, a display, a transparent display, a centrally-located display, a head up display, a windshield display, a haptic feedback device, a vibration device, a tactile feedback device, a tap-feedback device, a holographic display, an instrument light, an indicator light, etc. In one example, the output 104 is a head up display that displays objects or contents using graphical indicators showing things such as speed, temperature or other information. The graphical indicator may have adjustable features including a shading of a graphical indicator, a transparency of a graphical indicator, a size of a graphical indicator, a size of a graphical indicator relative to other graphical indicators, a color of a graphical indicator, a shape of a graphical indicator.
The user input 106 is configured to provide information and commands to the apparatus that reformats content of an eyebox 100. The user input 106 may be used to provide user inputs, etc., to the controller 101. The user input 106 may include one or more from among a touchscreen, a keyboard, a soft keypad, a button, a motion detector, a voice input detector, a microphone, a camera, a trackpad, a mouse, a touchpad, etc. The user input 106 may be configured to receive a user input to acknowledge or dismiss the notification output by the output 104. The user input 106 may also be configured to receive a user input to cycle through objects, graphical indicators or content on the head up display.
The sensor 107 is configured to detect an occupant. The sensor 107 may be one or more sensors from among a radar sensor, a microwave sensor, an ultrasonic sensor, a camera, an infrared sensor, a LIDAR, and a laser sensor. In one example, the sensor 107 is implemented as a camera in a driver monitoring system. For example, the controller 101 may receive information about the eye position of the occupant from the sensor 107. The information from the sensor 107 may be provided to the controller 101 via a bus, storage 103 or communication device 108.
The communication device 108 may be used by the apparatus that reformats content of an eyebox 100 to communicate with various types of external apparatuses according to various communication methods. The communication device 108 may be used to send/receive object information to/from the controller 101 of the apparatus that reformats content of an eyebox 100. The communication device 108 may include various communication modules such as one or more from among a telematics unit, a broadcast receiving module, a near field communication (NFC) module, a GPS receiver, a wired communication module, or a wireless communication module. The broadcast receiving module may include a terrestrial broadcast receiving module including an antenna to receive a terrestrial broadcast signal, a demodulator, and an equalizer, etc. The NFC module is a module that communicates with an external apparatus located at a nearby distance according to an NFC method. The GPS receiver is a module that receives a GPS signal from a GPS satellite and detects a current location. The wired communication module may be a module that receives information over a wired network such as a local area network, a controller area network (CAN), or an external network. The wireless communication module is a module that is connected to an external network by using a wireless communication protocol such as IEEE 802.11 protocols, WiMAX, Wi-Fi or IEEE communication protocol and communicates with the external network. The wireless communication module may further include a mobile communication module that accesses a mobile communication network and performs communication according to various mobile communication standards such as 3rd generation (3G), 3rd generation partnership project (3GPP), long term evolution (LTE), Bluetooth, EVDO, CDMA, GPRS, EDGE or ZigBee.
The controller 101 of the apparatus that reformats content of an eyebox 100 may be configured to detect a position of eyes of an occupant and determine eye position coordinates, determine whether at least one object present in the eyebox of a head up display is clipped when viewed from the eye position coordinates, and modify the clipped object so that the clipped object is not clipped when viewed by the occupant from the eye position coordinates.
The controller 101 of the apparatus that reformats content of an eyebox 100 may be configured to modify the clipped object such that the modified object is displayed within a usable eyebox, which is a subset of the virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by translating the clipped object in a direction corresponding to the eye position coordinates.
The controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by scaling the clipped object to a smaller size so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by reorganizing, reducing, or removing content visible in the clipped object so that the clipped object is entirely contained within a usable eyebox, which is a subset of a virtual area of the eyebox, where the modified object is completely visible to the occupant from the eye position coordinates.
The controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to modify the clipped object by moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
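For the multi-plane case, moving content between virtual image planes might be sketched as follows. The two-plane model, the plane names, and the `(x, y, w, h)` box representation are assumptions invented for illustration, not details from the patent:

```python
def choose_plane(item_box, planes):
    """Pick the first virtual image plane whose usable region fully
    contains the item; return None if no plane fits it (sketch of a
    hypothetical plane-reassignment strategy)."""
    x, y, w, h = item_box
    for name, (px, py, pw, ph) in planes.items():
        if px <= x and py <= y and x + w <= px + pw and y + h <= py + ph:
            return name
    return None

# Hypothetical near and far virtual image planes of a two-plane HUD.
planes = {"near": (0, 0, 50, 20), "far": (0, 0, 120, 60)}
print(choose_plane((10, 5, 80, 30), planes))  # item only fits the far plane
```

Content that would be clipped on one plane can thus be reassigned to another plane whose viewable region still contains it.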
The controller 101 of the apparatus that reformats content of an eyebox 100 may also be configured to determine whether the at least one object present in the eyebox of the head up display is clipped by determining at least one of coordinates and size of a clipped portion of the clipped object based on the detected eye position coordinates and the distance or direction of the eye position coordinates from a center of the eyebox. The controller 101 may then modify the clipped object by transforming, translating, removing and reformatting content of the clipped object according to the determined at least one of coordinates and size of clipped portion of the clipped object.
Referring to FIG. 2, the position of the eyes of the occupant is detected and the eye position coordinates are determined in operation S210. Based on the eye position coordinates, it is then determined whether at least one object present in the eyebox is clipped in operation S220. In operation S230, the clipped object is modified so that the object becomes unclipped when viewed by the occupant from the eye position coordinates. The modification of the object may include one or more of scaling, transforming, translating, removing and reformatting content of the object so that the content fits into the usable eyebox.
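The scaling and translating performed in operation S230 can be sketched as below, assuming axis-aligned `(x, y, width, height)` boxes for both the content and the usable eyebox. The representation and the function name are illustrative choices, not part of the patent:

```python
def fit_to_usable_eyebox(content_box, usable_box):
    """Scale the content down (if needed) and translate it so that it
    lies entirely inside the usable eyebox. Boxes are (x, y, w, h)."""
    cx, cy, cw, ch = content_box
    ux, uy, uw, uh = usable_box
    scale = min(1.0, uw / cw, uh / ch)   # never enlarge, only shrink
    nw, nh = cw * scale, ch * scale
    # clamp the origin so the scaled box stays inside the usable region
    nx = min(max(cx, ux), ux + uw - nw)
    ny = min(max(cy, uy), uy + uh - nh)
    return (nx, ny, nw, nh)

# Content spanning the full eyebox is shrunk and shifted into the
# smaller usable region visible from the occupant's eye position.
print(fit_to_usable_eyebox((0, 0, 100, 60), (10, 10, 80, 40)))
```

Content that already fits inside the usable eyebox is returned unchanged, so the transform is a no-op when no clipping would occur.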
The processes, methods, or algorithms disclosed herein can be deliverable to/implemented by a processing device, controller, or computer, which can include any existing programmable electronic control device or dedicated electronic control device. Similarly, the processes, methods, or algorithms can be stored as data and instructions executable by a controller or computer in many forms including, but not limited to, information permanently stored on non-writable storage media such as ROM devices and information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media. The processes, methods, or algorithms can also be implemented in a software executable object. Alternatively, the processes, methods, or algorithms can be embodied in whole or in part using suitable hardware components, such as Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software and firmware components.
One or more exemplary embodiments have been described above with reference to the drawings. The exemplary embodiments described above should be considered in a descriptive sense only and not for purposes of limitation. Moreover, the exemplary embodiments may be modified without departing from the spirit and scope of the inventive concept, which is defined by the following claims.
Claims (10)
1. An apparatus that reformats content of an eyebox of a head up display, the apparatus comprising:
at least one memory comprising computer executable instructions; and
at least one processor configured to read and execute the computer executable instructions, the computer executable instructions causing the at least one processor to:
detect a position of eyes of an occupant and determine eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determine whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modify the projected object by scaling the projected object to a smaller size so that the clipped object content that cannot be viewed by the occupant from the eye position coordinates is entirely visible in the modified projected object when viewed by the occupant from the eye position coordinates.
2. An apparatus that reformats content of an eyebox of a head up display, the apparatus comprising:
at least one memory comprising computer executable instructions; and
at least one processor configured to read and execute the computer executable instructions, the computer executable instructions causing the at least one processor to:
detect a position of eyes of an occupant and determine eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determine whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modify the projected object by reducing or removing the clipped object content that cannot be seen when viewed from the eye position coordinates so that the modified projected object is entirely visible when viewed by the occupant from the eye position coordinates.
3. An apparatus that reformats content of an eyebox of a head up display, the apparatus comprising:
at least one memory comprising computer executable instructions;
a head up display comprising two or more virtual image planes; and
at least one processor configured to read and execute the computer executable instructions, the computer executable instructions causing the at least one processor to:
detect a position of eyes of an occupant and determine eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determine whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modify the clipped object by moving content of the projected object from a first plane of the two planes to another plane of the two planes.
4. An apparatus that reformats content of an eyebox of a head up display, the apparatus comprising:
at least one memory comprising computer executable instructions; and
at least one processor configured to read and execute the computer executable instructions, the computer executable instructions causing the at least one processor to:
detect a position of eyes of an occupant and determine eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determine whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modify the projected object so that the clipped object is not projected when viewed by the occupant from the eye position coordinates;
wherein determining whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates comprises determining at least one of coordinates and size of a clipped portion of the clipped object based on the determined eye position coordinates and a distance of the eye position coordinates from a center of the eyebox.
5. The apparatus of claim 4 , wherein the computer executable instructions further cause the at least one processor to modify the projected object by removing content of the projected object according to the determined at least one of coordinates and size of the clipped portion of the clipped object.
6. A method that reformats content of an eyebox of a head up display, the method comprising:
detecting a position of eyes of an occupant and determining eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determining whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modifying the projected object comprising scaling the projected object to a smaller size so that the clipped object content that cannot be viewed by the occupant from the eye position coordinates is entirely visible in the modified projected object when viewed by the occupant from the eye position coordinates.
7. A method that reformats content of an eyebox of a head up display, the method comprising:
detecting a position of eyes of an occupant and determining eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determining whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modifying the projected object comprising reducing or removing the clipped object content that cannot be viewed by the occupant from the eye position coordinates so that the modified projected object is entirely visible when viewed by the occupant from the eye position coordinates.
8. A method that reformats content of an eyebox of a head up display, the method comprising:
detecting a position of eyes of an occupant and determining eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determining whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates;
wherein the head up display comprises two or more virtual image planes; and
moving content of the clipped object from a first plane of the two planes to another plane of the two planes.
9. A method that reformats content of an eyebox of a head up display, the method comprising:
detecting a position of eyes of an occupant and determining eye position coordinates reflecting a three-dimensional position in space with respect to the eyebox;
determining whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates; and
modifying the projected object so that the modified projected object is not clipped when viewed by the occupant from the eye position coordinates;
wherein the determining whether a projected object comprises a clipped object comprising content that cannot be viewed by the occupant from the eye position coordinates further comprises determining at least one of coordinates and size of a clipped portion of the clipped object based on the determined eye position coordinates and a distance of the eye position coordinates from a center of the eyebox.
10. The method of claim 9, wherein the modifying the projected object comprises removing content of the projected object according to the determined at least one of coordinates and size of the clipped portion of the clipped object.
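The geometry behind claims 9 and 10 — determining the clipped portion from the eye's offset from the eyebox center, then modifying the object — can be sketched as follows. This is a minimal model under stated assumptions: the `Rect` type, the linear `gain` constant, and the scale-then-shift strategy are illustrative, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    bottom: float
    right: float
    top: float

    @property
    def width(self) -> float:
        return self.right - self.left

    @property
    def height(self) -> float:
        return self.top - self.bottom

def visible_region(fov: Rect, eye_dx: float, eye_dy: float,
                   gain: float = 0.5) -> Rect:
    """Portion of the virtual image still visible when the eye sits
    (eye_dx, eye_dy) away from the eyebox center: moving right clips
    gain*eye_dx off the right edge, and likewise for the other edges."""
    return Rect(
        left=fov.left + max(0.0, -gain * eye_dx),
        bottom=fov.bottom + max(0.0, -gain * eye_dy),
        right=fov.right - max(0.0, gain * eye_dx),
        top=fov.top - max(0.0, gain * eye_dy),
    )

def scale_to_fit(obj: Rect, vis: Rect) -> Rect:
    """Shrink the object about its center until it can fit inside the
    visible region, then shift it fully inside (in the spirit of the
    scaling modification of claim 6)."""
    s = min(1.0,
            vis.width / obj.width if obj.width > 0 else 1.0,
            vis.height / obj.height if obj.height > 0 else 1.0)
    cx, cy = (obj.left + obj.right) / 2.0, (obj.bottom + obj.top) / 2.0
    half_w, half_h = obj.width * s / 2.0, obj.height * s / 2.0
    scaled = Rect(cx - half_w, cy - half_h, cx + half_w, cy + half_h)
    # Translate the scaled object so no part remains outside the window.
    dx = max(0.0, vis.left - scaled.left) - max(0.0, scaled.right - vis.right)
    dy = max(0.0, vis.bottom - scaled.bottom) - max(0.0, scaled.top - vis.top)
    return Rect(scaled.left + dx, scaled.bottom + dy,
                scaled.right + dx, scaled.top + dy)
```

For example, with a 100×50 field of view and the eye 20 units right of the eyebox center, `visible_region` loses a 10-unit strip on the right; an object occupying (80, 10)–(100, 30) is then shifted (and, if necessary, scaled) until it lies entirely inside the remaining window.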
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/661,227 US10926638B1 (en) | 2019-10-23 | 2019-10-23 | Method and apparatus that reformats content of eyebox |
| DE102020124591.2A DE102020124591B4 (en) | 2019-10-23 | 2020-09-22 | Device for reformatting an eye space |
| CN202011138619.0A CN112698719A (en) | 2019-10-23 | 2020-10-22 | Method and apparatus for reformatting content of a human eye window |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/661,227 US10926638B1 (en) | 2019-10-23 | 2019-10-23 | Method and apparatus that reformats content of eyebox |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US10926638B1 (en) | 2021-02-23 |
Family
ID=74659199
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/661,227 Active US10926638B1 (en) | Method and apparatus that reformats content of eyebox | 2019-10-23 | 2019-10-23 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US10926638B1 (en) |
| CN (1) | CN112698719A (en) |
| DE (1) | DE102020124591B4 (en) |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100253593A1 (en) | 2009-04-02 | 2010-10-07 | Gm Global Technology Operations, Inc. | Enhanced vision system full-windshield hud |
| US7961117B1 (en) | 2008-09-16 | 2011-06-14 | Rockwell Collins, Inc. | System, module, and method for creating a variable FOV image presented on a HUD combiner unit |
| US20120224062A1 (en) | 2009-08-07 | 2012-09-06 | Light Blue Optics Ltd | Head up displays |
| US20140267263A1 (en) | 2013-03-13 | 2014-09-18 | Honda Motor Co., Ltd. | Augmented reality heads up display (hud) for left turn safety cues |
| US20160003636A1 (en) | 2013-03-15 | 2016-01-07 | Honda Motor Co., Ltd. | Multi-level navigation monitoring and control |
| US20160209647A1 (en) | 2015-01-19 | 2016-07-21 | Magna Electronics Inc. | Vehicle vision system with light field monitor |
| US20160266391A1 (en) * | 2015-03-11 | 2016-09-15 | Hyundai Mobis Co., Ltd. | Head up display for vehicle and control method thereof |
| US20170329143A1 (en) | 2016-05-11 | 2017-11-16 | WayRay SA | Heads-up display with variable focal plane |
| US20190339535A1 (en) * | 2017-01-02 | 2019-11-07 | Visteon Global Technologies, Inc. | Automatic eye box adjustment |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102013208971A1 (en) * | 2013-05-15 | 2014-11-20 | Robert Bosch Gmbh | Apparatus and method for projecting image information into a field of view of a vehicle occupant of a vehicle |
| US20140375543A1 (en) * | 2013-06-25 | 2014-12-25 | Honda Motor Co., Ltd. | Shared cognition |
| US9430046B2 (en) * | 2014-01-16 | 2016-08-30 | Denso International America, Inc. | Gesture based image capturing system for vehicle |
| DE102014015241B4 (en) | 2014-10-16 | 2023-11-02 | Mercedes-Benz Group AG | Method and device for displaying content in a projection of a head-up display of a vehicle, and a motor vehicle |
| CN105774679B (en) * | 2014-12-25 | 2019-01-29 | 比亚迪股份有限公司 | A kind of automobile, vehicle-mounted head-up-display system and its projected image height adjusting method |
| JP6482975B2 (en) * | 2015-07-15 | 2019-03-13 | アルパイン株式会社 | Image generating apparatus and image generating method |
| DE102015116160B4 (en) * | 2015-09-24 | 2022-10-13 | Denso Corporation | Head-up display with situation-based adjustment of the display of virtual image content |
| CN106125306A (en) * | 2016-06-28 | 2016-11-16 | 科世达(上海)管理有限公司 | A kind of head-up-display system, vehicle control system and vehicle |
| DE102016225353A1 (en) | 2016-12-16 | 2018-06-21 | Continental Automotive Gmbh | Method for adjusting an image generated by an image generation unit and head-up display for carrying out the method |
| CN110114711B (en) * | 2017-02-10 | 2022-03-29 | 金泰克斯公司 | Vehicle display including a projection system |
| JP2018185654A (en) * | 2017-04-26 | 2018-11-22 | 日本精機株式会社 | Head-up display device |
- 2019
  - 2019-10-23: US application US16/661,227 filed; patent US10926638B1; status Active
- 2020
  - 2020-09-22: DE application DE102020124591.2A filed; patent DE102020124591B4; status Active
  - 2020-10-22: CN application CN202011138619.0A filed; publication CN112698719A; status Pending
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2022128876A (en) * | 2021-02-24 | 2022-09-05 | 株式会社ニコン | Image display device and method |
| JP7559605B2 (en) | | | 株式会社ニコン | Image display device and method |
| US11880036B2 (en) | 2021-07-19 | 2024-01-23 | GM Global Technology Operations LLC | Control of ambient light reflected from pupil replicator |
| US11602993B1 (en) * | 2021-09-17 | 2023-03-14 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for adjusting the transparency of a digital needle |
| US20230085888A1 (en) * | 2021-09-17 | 2023-03-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for adjusting the transparency of a digital needle |
| US11630302B1 (en) | 2022-04-06 | 2023-04-18 | GM Global Technology Operations LLC | Driving guidance system for a motor vehicle having an augmented reality head up display |
| US12030512B2 (en) | 2022-04-06 | 2024-07-09 | GM Global Technology Operations LLC | Collision warning system for a motor vehicle having an augmented reality head up display |
| US11654771B1 (en) | 2022-07-20 | 2023-05-23 | GM Global Technology Operations LLC | Multi-view display device for a co-pilot application of a motor vehicle |
| US11915395B2 (en) | 2022-07-20 | 2024-02-27 | GM Global Technology Operations LLC | Holographic display system for a motor vehicle with real-time reduction of graphics speckle noise |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102020124591B4 (en) | 2023-12-28 |
| CN112698719A (en) | 2021-04-23 |
| DE102020124591A1 (en) | 2021-04-29 |
Similar Documents
| Publication | Title |
|---|---|
| US10926638B1 (en) | Method and apparatus that reformats content of eyebox |
| US10346695B2 (en) | Method and apparatus for classifying LIDAR data for object detection |
| US10810774B2 (en) | Electronic apparatus and method for controlling the same |
| US20180220081A1 (en) | Method and apparatus for augmenting rearview display |
| US20180056861A1 (en) | Vehicle-mounted augmented reality systems, methods, and devices |
| US20150175068A1 (en) | Systems and methods for augmented reality in a head-up display |
| US20180220082A1 (en) | Method and apparatus for augmenting rearview display |
| CN108569298B (en) | Method and apparatus for enhancing top view images |
| US20190102202A1 (en) | Method and apparatus for displaying human machine interface |
| US20190217866A1 (en) | Method and apparatus for determining fuel economy |
| US20200143546A1 (en) | Apparatus and method for detecting slow vehicle motion |
| CN115923661A (en) | Image display method, device, streaming media rearview mirror system and vehicle |
| US20250182657A1 (en) | Display Method and Apparatus |
| US10725296B2 (en) | Head-up display device, vehicle including the same, and method for controlling the head-up display device |
| JP6295360B1 (en) | Message display program, message display device, and message display method |
| US20190212849A1 (en) | Method and apparatus that detect driver input to touch sensitive display |
| US20180272978A1 (en) | Apparatus and method for occupant sensing |
| US10974758B2 (en) | Method and apparatus that direct lateral control during backward motion |
| US20210116710A1 (en) | Vehicular display device |
| CN111435269A (en) | Display adjusting method, system, medium and terminal of vehicle head-up display device |
| US10160274B1 (en) | Method and apparatus that generate position indicators for towable object |
| US20190122382A1 (en) | Method and apparatus that display view alert |
| KR200485409Y1 (en) | Mobile device for parking management using speech recognition and gesture |
| US20180222389A1 (en) | Method and apparatus for adjusting front view images |
| WO2024138467A1 (en) | AR display system based on multi-view cameras and viewport tracking |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4 |