WO2019127224A1 - Focusing method and apparatus, and head-up display device - Google Patents

Focusing method and apparatus, and head-up display device

Info

Publication number
WO2019127224A1
WO2019127224A1 (application PCT/CN2017/119431)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
sensing data
display device
information
determining
Prior art date
Application number
PCT/CN2017/119431
Other languages
English (en)
Chinese (zh)
Inventor
吴军 (Wu Jun)
刘怀宇 (Liu Huaiyu)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN201780023179.4A (published as CN109076201A)
Priority to PCT/CN2017/119431 (published as WO2019127224A1)
Publication of WO2019127224A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/317: Convergence or focusing systems

Definitions

  • the present invention relates to the field of projection display technologies, and in particular, to a focusing method, device, and head-up display device.
  • a head-up display (HUD) is a driving aid used on movable platforms such as airplanes and automobiles.
  • the head-up display uses the principle of optical reflection to project important driving-related information onto a piece of glass. This glass is located at the front of the driver's position, roughly level with the driver's eyes, so the driver can view the driving-related information without looking down, which improves driving safety.
  • heads-up displays generally employ a fixed focus plane design that uses a fixed projected imaging distance. That is, in any case, the projected content seen by the driver (such as driving-related information) is at a fixed distance. Since the driver's gaze and focus fall on objects at different distances depending on the scene, the fixed focus plane design means the driver cannot observe the projected content and the actual object together well.
  • the embodiment of the invention provides a focusing method, a device and a head-up display device, which can realize adaptive adjustment of the projection focal length.
  • an embodiment of the present invention provides a focusing method, which is applied to a head-up display device, where the head-up display device is configured to project and display related data according to a projection focal length, and the method includes:
  • an embodiment of the present invention provides a focusing device, which is disposed in a head-up display device, where the head-up display device is configured to project and display related data according to a projection focal length, and the device includes:
  • a communication interface configured to acquire scene sensing data
  • a processor configured to determine a plurality of reference objects according to the scene sensing data; determine a target object from the plurality of reference objects; generate a first focus instruction, where the first focus instruction is used to indicate the
  • the processor adjusts the projection focal length according to the distance value to the target object.
  • an embodiment of the present invention provides a head-up display device, where the head-up display device includes:
  • a projection device for displaying related data according to a projection focal length
  • an embodiment of the present invention provides a computer-readable storage medium, where the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to perform the focusing method of the first aspect described above.
  • the projection focal length is adjusted according to the distance value to the target object, realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content while viewing the target object without switching the focus point, thereby improving driving comfort and driving safety.
  • FIG. 1 is a schematic diagram of an application scenario of a focusing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flow chart of a focusing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart of another focusing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a focusing device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another focusing device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a head-up display device according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another head-up display device according to an embodiment of the present invention.
  • the heads-up display projects the content that needs to be displayed onto the glass in front of the driver based on projection technology.
  • the projection technology is any one of a liquid crystal display (LCD) technology, a liquid crystal on silicon (LCOS) technology, or a digital light processing (DLP) technology.
  • the system configuration of the heads up display generally includes: an application processor, a projection module, and a mirror surface.
  • the application processor is configured to generate projected content;
  • the projection module is configured to convert an electronic signal of an image into projection light, wherein the projection module includes a multi-lens prism system for focusing the projected content on a specific focus plane (also called an imaging plane).
  • the mirror surface is generally a separate partially transparent lens or a front windshield, and is used to reflect the projected content.
  • Existing heads-up displays generally employ a fixed focus plane design, that is, in any case, the projected content (such as driving related information) that the driver sees is at a fixed distance.
  • the focus plane is generally two to ten meters from the driver, depending on the system design. Since the driver's gaze and focus fall on objects at different distances depending on the scene (such as slow driving scenes and fast driving scenes), the fixed focus plane design may leave the driver unable to observe the projected content and the actual object in effective overlap, forcing the driver to repeatedly switch the focus point between the projected content and the actual object, which reduces driving comfort and driving safety.
  • a small number of head-up displays use a dual-system design with two application processors and two projection modules. The two systems use different focal plane designs: in general, one system focuses at a closer distance and the other at a farther distance. Although the dual-system design is an improvement over the fixed focus plane design described above, it still cannot completely solve the problem of the driver needing to repeatedly switch the focus point in different scenarios.
  • the embodiment of the present invention provides a focusing method.
  • the focusing method is applied to a head-up display device, and the head-up display device is configured to project and display content according to a projection focal length.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the system configuration of the head-up display device includes a processor (or an application processor), a projection module, and a mirror surface.
  • the processor is configured to generate projected content.
  • the projection module is configured to convert an electronic signal of an image into projection light, wherein the projection module comprises a multi-lens prism system for focusing the projected content on a virtual plane, and the specific focus plane is determined by the optical system of the projection module.
  • the optical system of the projection module is a zoom optical system, and the projection content can be projected to different focus planes according to settings under the control of software.
  • the mirror surface is used to reflect the projected content to the driver's eyes.
  • the mirror surface may be a separate partially transparent lens or the front windshield of the automobile.
  • the head-up display device may use a pupil tracking technology (also called an eyeball tracking technology) to identify the driver's current viewing direction.
  • the head-up display device can acquire eyeball image data of the driver, identify features of the driver's pupils by processing the eyeball image data, and calculate the driver's current viewing direction in real time from those features.
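As a non-claimed sketch of how pupil features might map to a viewing direction, the snippet below assumes a per-driver calibration has reduced the mapping to a linear gain (`gain_deg_per_px` is a hypothetical calibrated constant; real eyeball-tracking pipelines fit a richer model):

```python
def viewing_direction(pupil_x, pupil_y, center_x, center_y, gain_deg_per_px):
    """Map the pupil-centre offset (pixels) to yaw/pitch angles (degrees).

    gain_deg_per_px stands in for the per-driver calibration a real
    eyeball-tracking system would perform; the linear mapping is an
    illustrative simplification.
    """
    yaw = (pupil_x - center_x) * gain_deg_per_px
    pitch = (pupil_y - center_y) * gain_deg_per_px
    return yaw, pitch
```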
  • the system configuration of the heads up display device may further include a camera vision system, wherein the camera vision system may include an image sensor.
  • the image data may be acquired directly by an image sensor in the heads up display device.
  • the image data may also be collected by an image sensor in the movable platform, and the head-up display device acquires it from the movable platform.
  • the head-up display device may analyze the scene in front of the vehicle by using vehicle sensing technology, and determine a plurality of reference objects and distance values to the respective reference objects. Specifically, the heads up display device may acquire scene sensing data, and determine a plurality of reference objects and distance values to the respective reference objects according to the scene sensing data.
  • the scene sensing data includes image sensing data and distance sensing data.
  • the image sensing data and the distance sensing data may be collected by a binocular vision sensor, or may be collected by a monocular vision sensor and a distance sensor, respectively.
  • the distance sensor may include but is not limited to a lidar sensor, a millimeter wave sensor, and an ultrasonic radar sensor.
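For the binocular case, the distance value follows the standard stereo relation Z = f·B/d for a rectified camera pair; this illustrative helper (not part of the patent's disclosure) assumes the disparity has already been computed by stereo matching:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance of a point seen by a rectified binocular pair.

    focal_px: focal length in pixels; baseline_m: separation of the two
    cameras in metres; disparity_px: horizontal shift of the point
    between the left and right images, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```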
  • the system configuration of the head-up display device may further include a sensing system, wherein the sensing system may include a binocular vision sensor and/or a monocular vision sensor and a distance sensor.
  • the scene sensing data may be acquired directly by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the heads up display device.
  • the scene sensing data may also be collected by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the movable platform, and the head-up display device may obtain it from the movable platform.
  • the head-up display device may determine a target object from a plurality of reference objects located in the viewing direction.
  • the target object may be a nearest reference object located in the viewing direction.
  • the head-up display device may determine a reference object corresponding to a minimum distance value among the plurality of reference objects located in the observation direction as the target object.
  • the heads-up display device may determine the target object based on deep learning. Specifically, the head-up display device may collect the geographic location, weather, vehicle speed, and the identified reference objects and distance values as input data, label the target object that most needs attention, select a neural network model (e.g., the AlexNet convolutional neural network model), train it using deep learning, and subsequently use the trained network to quickly determine the target object.
  • the head-up display device may determine the target object based on a risk factor (or referred to as a risk degree) of the reference object. Specifically, the head-up display device may determine a reference object having the highest risk coefficient among the plurality of reference objects located in the observation direction as the target object.
  • the risk factor of the reference object may be determined according to a distance value to the reference object, a motion trajectory of the reference object, a motion direction of the reference object, a motion speed of the reference object, a volume of the reference object, and the like.
  • the head-up display device may combine a high-precision map to determine a target object according to attribute information of the reference object.
  • the high-precision map identifies attribute information of different reference objects, and the attribute information of a reference object includes, but is not limited to, whether the reference object is a movable object, the rigid strength of the reference object, the mass of the reference object, the value of the reference object, and so on.
  • the heads-up display device may score each item of attribute information, and may further weight the scores of the respective attribute items according to preset weight values for the different attribute items to obtain a total score (i.e., the evaluation value of the reference object).
  • the head-up display device may determine a reference object having the highest evaluation value among the plurality of reference objects located in the observation direction as the target object.
  • the head-up display device may control its zoom optical system to perform focusing according to the distance value to the target object (i.e., adjust the projection focal length) such that the projected content displayed by the head-up display device is located at substantially the same distance as the target object.
  • the distance value to the target object refers to the distance value of the sensor that collects the distance sensing data to the target object.
  • the head-up display device may further determine whether the distance value to the target object exceeds a preset distance value (sometimes referred to as the hyperfocal distance); if so, the head-up display device controls the zoom optical system to perform focusing in accordance with the preset distance value.
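The behaviour described above amounts to clamping the focus distance at the preset value; a minimal sketch (the preset value itself is system-specific):

```python
def focus_distance(target_distance_m, preset_max_m):
    """Focus at the target's distance, but never beyond the preset
    (hyperfocal-like) limit: beyond it, focus at the preset value."""
    return min(target_distance_m, preset_max_m)
```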
  • the heads-up display device may further display the projected content generated by the processor according to the adjusted projection focal length.
  • FIG. 1 is a schematic diagram of an application scenario of a focus adjustment method according to an embodiment of the present invention.
  • the illustrated application scenario includes a car 101, a heads-up display device 102 installed in the car 101, a stone 103 located in front of the car 101, a pedestrian 104, and a pillar 105.
  • the heads-up display device 102 can identify the current viewing direction v of the driver (not shown in FIG. 1) in the car 101 using a pupil tracking technique.
  • the heads-up display device 102 may analyze the scene in front of the automobile 101 by using vehicle sensing technology, and determine the stone 103 and the distance value d1 from the automobile 101 to the stone 103, the pedestrian 104 and the distance value d2 from the automobile 101 to the pedestrian 104, and the pillar 105 and the distance value d3 from the automobile 101 to the pillar 105.
  • the heads-up display device 102 can determine that the pedestrian 104 and the pillar 105 are located in the viewing direction v.
  • the heads-up display device 102 can determine a target object from the pedestrian 104 and the pillar 105 located in the viewing direction v.
  • the heads-up display device 102 may determine whichever of the pedestrian 104 and the pillar 105 located in the viewing direction v is nearest to the car 101 as the target object. Specifically, the heads-up display device 102 can compare the distance value d2 with the distance value d3; because the distance value d2 is smaller than the distance value d3, the heads-up display device 102 determines the pedestrian 104 as the target object.
  • the heads-up display device 102 can perform focusing (i.e., adjust the projection focal length) according to the distance value d2 from the automobile 101 to the pedestrian 104, so that the projected content displayed by the heads-up display device 102 (driving-related information, for example) is located at substantially the same distance as the pedestrian 104, thereby improving the overlap of the driving-related information and the target object.
  • the heads-up display device 102 can also display the driving-related information according to the adjusted projection focal length.
  • the driving-related information is displayed on the front windshield in front of the driver in the car 101. Since the driving-related information is at substantially the same distance as the pedestrian 104, the driver can view the driving-related information while observing the pedestrian 104 without switching the focus point; that is, the driver can observe the pedestrian 104 and the driving-related information in effective overlap.
  • the focusing method provided by the embodiment of the present invention adjusts the projection focal length of the head-up display device according to the distance value to the target object, realizes adaptive adjustment of the projection focal length, and improves the overlap between the projected content and the target object, allowing the driver to view the projected content while viewing the target object without switching the focus point, thereby improving driving comfort and driving safety.
  • the focusing method, device and head-up display device of the embodiments of the present invention are described in detail below with reference to FIG. 2 to FIG.
  • the focusing method of the embodiment of the present invention is applied to a head-up display device for projecting and displaying related data according to a projection focal length.
  • the focusing method may include:
  • S201 Acquire scene sensing data, and determine a plurality of reference objects according to the scene sensing data.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the movable platform is driven by a driver
  • the head-up display device projects the related data (i.e., the content that needs to be displayed) onto the glass in front of the driver based on projection technology, and the glass reflects the displayed content (i.e., the projected content) to the driver's eyes.
  • the glass may be a separate partially transparent lens in the head-up display device, or may be an automobile front windshield.
  • the related data may be, for example, driving related information such as instantaneous traveling speed, average traveling speed, engine speed, idle fuel consumption, average fuel consumption, mileage, external environment temperature, navigation map, and the like.
  • the scene sensing data may be collected by a binocular vision sensor, or may be collected by a monocular vision sensor and a distance sensor.
  • the distance sensor may include but is not limited to a lidar sensor, a millimeter wave sensor, and an ultrasonic radar sensor.
  • the heads up display device may further determine a distance value to each reference object according to the scene sensing data.
  • the distance value to each reference object may be the distance value from the movable platform to each reference object, the distance value from the head-up display device to each reference object, or the distance value from the sensor that collects the scene sensing data to each reference object.
  • the scene sensing data may include image sensing data and distance sensing data.
  • when the head-up display device determines the plurality of reference objects according to the scene sensing data, this may specifically include determining the plurality of reference objects according to the image sensing data; and when the head-up display device determines the distance value to each reference object according to the scene sensing data, this may specifically include determining the distance value to each reference object according to the distance sensing data.
  • the image sensing data and the distance sensing data may be collected by a binocular vision sensor, or may be collected by a monocular vision sensor and a distance sensor, respectively.
  • the scene sensing data may be directly collected by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the heads-up display device.
  • the scene sensing data may also be collected by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the movable platform, and the head-up display device obtains it from the movable platform.
  • the reference object refers to an object located outside the movable platform that is determined by the heads up display device by analyzing the scene in front of the movable platform according to the scene sensing data. It can be understood that the object according to the embodiment of the present invention refers to a general term for living objects (such as humans, animals, etc.) and inanimate objects (such as stones, railings, etc.) located outside the movable platform.
  • the heads-up display device may further acquire direction sensing data for the target object, and determine the observation field of view of the target object according to the direction sensing data; if the content displayed by the heads-up display device is within the observation field of view, the step of acquiring scene sensing data and determining a plurality of reference objects according to the scene sensing data is performed.
  • the target object here is the driver mentioned above. It should be noted that the driver in the embodiment of the present invention is a general term covering both the driver of an automobile and the pilot of an aircraft.
  • otherwise, the head-up display device may not perform the method of the embodiment of the present invention.
  • the head-up display device may project display related data according to a preset projection focal length, or may project display related data according to the current projection focal length.
  • here, the target object refers to the object being observed by the driver.
  • the heads up display device may determine a reference object corresponding to the minimum distance value as the target object.
  • the distance value to the target object is smaller than the distance value to other reference objects, that is, the target object is the closest reference object.
  • the heads-up display device may generate identification information for each reference object, where the identification information is used to uniquely identify each reference object; acquire environment information, where the environment information includes any one or more of weather information, position information corresponding to the head-up display device, and motion speed corresponding to the head-up display device; input the identification information of the respective reference objects, the environment information, and the distance value to each reference object into a preset recognition model; and determine, as the target object, the reference object identified by the identification information output by the preset recognition model.
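One way to read the step above is as assembling a flat feature vector for the preset recognition model; the field layout below is purely illustrative (the patent does not fix an encoding, and a deployed model would define its own input schema):

```python
def build_model_input(object_ids, distances, weather, position, speed):
    """Concatenate per-object (id, distance) pairs with the environment
    information into one feature vector. The ordering is an assumption."""
    features = []
    for obj_id, dist in zip(object_ids, distances):
        features.extend([float(obj_id), float(dist)])
    # Environment info: weather reading, (latitude/longitude-style)
    # position, and the motion speed of the movable platform.
    features.extend([float(weather), *map(float, position), float(speed)])
    return features
```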
  • the weather information may include, but is not limited to, temperature sensing data, humidity sensing data, and the like.
  • the weather information may be collected directly by a weather sensor (such as a temperature sensor or a humidity sensor) in the head-up display device, or may be collected by a weather sensor in the movable platform, and the head-up display device obtains it from the movable platform through a communication interface.
  • the location information corresponding to the heads up display device may be Global Positioning System (GPS) positioning data.
  • the position information corresponding to the head-up display device may be directly collected by a positioning device in the head-up display device, or may be collected by a positioning device in the movable platform, and the head-up display device obtains it from the movable platform.
  • the motion speed corresponding to the head-up display device is the traveling speed of the movable platform.
  • the motion speed corresponding to the head-up display device may be directly collected by a speed sensor (such as a linear speed sensor) in the head-up display device, or may be collected by a speed sensor in the movable platform, and the head-up display device obtains it from the movable platform through a communication interface.
  • the preset recognition model may be a neural network model (such as the AlexNet convolutional neural network model) obtained by training using deep learning.
  • the heads-up display device may acquire first attribute information of each reference object, where the first attribute information includes motion information and/or volume information of each reference object; calculate a risk coefficient for each reference object according to the first attribute information of each reference object and the distance value to each reference object; and determine the reference object having the highest risk coefficient among the plurality of reference objects as the target object.
  • the motion information of each reference object may include, but is not limited to, a motion trajectory, a motion direction, a motion speed, and the like of each reference object.
  • the motion information of each reference object may be directly collected by a motion sensor in the head-up display device, or may be collected by a motion sensor in the movable platform, and the head-up display device obtains it from the movable platform.
  • the volume information of each reference object may be directly collected by a volume sensor in the head-up display device, or may be collected by a volume sensor in the movable platform, and the head-up display device obtains it from the movable platform through a communication interface.
  • the volume sensor can be, for example, an ultrasonic volume sensor.
  • when the head-up display device calculates the risk coefficient of each reference object according to the first attribute information of each reference object and the distance value to each reference object, this may specifically include: determining, according to a preset level-division rule, the hazard level of each item of the first attribute information of each reference object and the hazard level of the distance value to each reference object; and summing the hazard levels of the items of the first attribute information of each reference object and the hazard level of the distance value to each reference object to obtain the risk coefficient of each reference object.
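A toy version of such a level-division rule and summation (all thresholds and level values below are assumptions for illustration, not taken from the disclosure):

```python
def distance_hazard_level(distance_m):
    # Assumed rule: nearer objects are assigned higher hazard levels.
    if distance_m < 5.0:
        return 3
    if distance_m < 20.0:
        return 2
    return 1

def speed_hazard_level(speed_mps):
    # Assumed rule: faster-moving objects are assigned higher hazard levels.
    if speed_mps > 10.0:
        return 3
    if speed_mps > 2.0:
        return 2
    return 1

def risk_coefficient(distance_m, speed_mps, volume_level):
    """Sum the hazard level of each attribute item with the hazard
    level of the distance value, as described above."""
    return (distance_hazard_level(distance_m)
            + speed_hazard_level(speed_mps)
            + volume_level)
```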
  • the heads up display device may acquire location information corresponding to the heads up display device; acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object
  • the second attribute information includes any one or more of state information, strength information, mass information, and value information of each reference object; an evaluation value of each reference object is calculated according to the second attribute information of the respective reference objects; and the reference object having the largest evaluation value among the plurality of reference objects is determined as the target object.
  • the map data corresponding to each location information may be pre-stored in the head-up display device. Therefore, the heads up display device can query and acquire map data corresponding to the location information corresponding to the heads up display device.
  • the map data corresponding to each piece of location information may be pre-stored in the movable platform, and the heads-up display device may obtain, from the movable platform through a communication interface, the map data corresponding to the location information corresponding to the head-up display device.
  • the heads-up display device may acquire map data corresponding to location information corresponding to the head-up display device from a server through a wired connection or a wireless connection.
  • the state information of the reference object may include a fixed state or a moving state.
  • the state information of the reference object is a fixed state, that is, the reference object is an immovable object
  • the state information of the reference object is a moving state, that is, the reference object is a movable object.
  • the strength information of the reference object is used to characterize the rigid strength of the reference object.
  • when the head-up display device calculates the evaluation value of each reference object according to the second attribute information of the respective reference objects, this may specifically include: scoring each item of the second attribute information of each reference object according to a preset scoring rule; and weighting the scores of the second attribute information of each reference object according to preset weight values for the second attribute information to obtain the evaluation value of each reference object.
  • for example, the score when the state information of the reference object indicates the moving state is higher than the score when it indicates the fixed state; the higher the strength of the reference object, the higher the score; the greater the mass of the reference object, the higher the score; and the higher the value of the reference object, the higher the score.
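The weighted scoring can be sketched as a weighted sum of per-attribute scores with preset weights (the scores and weight values here are made up for illustration):

```python
def evaluation_value(scores, weights):
    """Weighted sum of the per-attribute scores (e.g. state, strength,
    mass, value) using the weight values set in advance."""
    return sum(score * weight for score, weight in zip(scores, weights))
```

The reference object maximising this value among those in the viewing direction would then be chosen as the target object.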
  • the first focus instruction is used to instruct the head-up display device to adjust the projection focal length according to the distance value to the target object, so that the projected content displayed by the head-up display device is located at substantially the same distance as the target object.
  • the heads-up display device may store the identifier of each reference object in association with the distance value to that reference object. After the heads-up display device determines the target object, the distance value to the target object can be queried according to the identifier of the target object.
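The association can be kept as a plain mapping from identifier to distance value, making the post-selection lookup a constant-time query; the identifiers below are hypothetical:

```python
# Hypothetical identifiers; in practice these would be the identification
# information generated for each detected reference object.
distance_by_id = {"obj-1": 3.5, "obj-2": 8.0}

def distance_to(identifier):
    """Query the stored distance value for the chosen target object."""
    return distance_by_id[identifier]
```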
  • the head-up display device can also project and display the related data according to the adjusted projection focal length.
  • the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object; the driver can thus view the target object without switching the focus point to view the projected content, improving driving comfort and driving safety.
  • FIG. 3 is a schematic flowchart diagram of another focusing method provided by an embodiment of the present invention.
  • the focusing method of the embodiment of the present invention is applied to a head-up display device for projecting display related data according to a projection focal length.
  • the focusing method may include:
  • S301 Acquire direction sensing data of the target object, and determine an observation direction of the target object according to the direction sensing data.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or an automobile.
  • the movable platform is driven by a driver
  • the head-up display device projects relevant data (i.e., the content that needs to be displayed) onto the glass in front of the driver based on projection technology, and the glass reflects the displayed content (i.e., the projected content) to the driver's eyes.
  • the glass may be a separate partially transparent lens in the head-up display device, or may be the front windshield of an automobile.
  • the related data may be, for example, driving related information such as instantaneous traveling speed, average traveling speed, engine speed, idle fuel consumption, average fuel consumption, mileage, external environment temperature, navigation map, and the like.
  • the target object is the driver mentioned above. It should be noted that the driver in the embodiments of the present invention is a general term covering both the driver of an automobile and the pilot of an airplane.
  • the direction sensing data can be used to determine a viewing direction of the target object.
  • the head-up display device uses a pupil tracking technique (also referred to as an eye tracking technique) to identify the viewing direction of the target object.
  • the direction sensing data for the target object may specifically be eyeball image data of the target object. When the head-up display device acquires the direction sensing data for the target object and determines the viewing direction of the target object according to the direction sensing data, the method may include: acquiring eyeball image data of the target object; processing the eyeball image data to identify features of the pupil of the target object's eye; and calculating the observation direction of the target object from these features in real time.
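The pupil-to-direction step above can be illustrated with a deliberately simplified sketch. The linear offset-to-angle mapping and its gain are assumptions; a real pupil-tracking pipeline would calibrate this mapping per driver:

```python
# Hypothetical sketch: inferring an observation direction from the pupil
# position in an eye image, as the pupil-tracking step describes.
# The linear pixel-offset-to-angle mapping and its gain are assumptions.

def gaze_direction(pupil_xy, eye_center_xy, gain_deg_per_px=0.5):
    """Map the pupil's offset from the eye-region center to viewing angles."""
    dx = pupil_xy[0] - eye_center_xy[0]
    dy = pupil_xy[1] - eye_center_xy[1]
    yaw = dx * gain_deg_per_px     # horizontal viewing angle, degrees
    pitch = -dy * gain_deg_per_px  # vertical viewing angle (image y grows downward)
    return yaw, pitch

yaw, pitch = gaze_direction((108, 60), (100, 64))
print(yaw, pitch)  # 4.0 2.0
```

The pupil center itself would come from the image-processing step (e.g., thresholding and blob detection on the eyeball image), which the patent leaves abstract.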
  • the direction sensing data may also be used to determine an observation field of view of the target object.
  • the head-up display device may determine the observation field of view of the target object according to the direction sensing data; if the content displayed by the head-up display device is within the observation field of view, the step of determining the observation direction of the target object according to the direction sensing data is performed.
  • otherwise, the head-up display device may not perform the method of this embodiment of the present invention.
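The field-of-view gate in the two bullets above amounts to a containment check before the refocusing logic runs. A one-dimensional angular sketch, with the half-angle and layout as assumptions:

```python
# Sketch of the field-of-view gate described above: the refocusing logic
# runs only when the projected content lies inside the driver's observation
# field of view. The angular layout and half-angle are assumptions.

def content_in_fov(content_angle_deg, gaze_angle_deg, fov_half_angle_deg=30.0):
    """True when the projected content falls inside the observation field of view."""
    return abs(content_angle_deg - gaze_angle_deg) <= fov_half_angle_deg

print(content_in_fov(-5.0, 10.0))   # True  (15 deg apart, within +/-30 deg)
print(content_in_fov(-40.0, 10.0))  # False (50 deg apart)
```

In the negative case the device would fall back to the preset or current projection focal length, as the following bullet notes.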
  • the head-up display device may project display related data according to a preset projection focal length, or may project display related data according to the current projection focal length.
  • S302 Acquire scene sensing data, and determine a plurality of reference objects according to the scene sensing data.
  • the heads up display device may further determine a distance value to each reference object according to the scene sensing data.
  • the distance value to each reference object may be the distance from a sensor that collects the scene sensing data to each reference object, the distance from the movable platform to each reference object, or the distance from the head-up display device to each reference object.
  • the scene sensing data may include image sensing data and distance sensing data.
  • when the head-up display device determines the plurality of reference objects according to the scene sensing data, the method may specifically include: identifying a plurality of reference objects according to the image sensing data.
  • when the head-up display device determines the distance value to each reference object according to the scene sensing data, the method may specifically include: determining the distance value to each reference object according to the distance sensing data.
  • the image sensing data and the distance sensing data may be collected by the same sensor, or may be collected by different sensors.
  • the reference object refers to an object located outside the movable platform that is determined by the heads up display device by analyzing the scene in front of the movable platform according to the scene sensing data.
  • S303 Determine a target object from a plurality of reference objects located in the observation direction.
  • the target object refers to the object being observed by the driver.
  • the head-up display device may determine a reference object corresponding to a minimum distance value among the plurality of reference objects located in the observation direction as the target object.
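The simplest selection rule above picks the nearest reference object in the observation direction. A minimal sketch, with object identifiers and distances as illustrative values:

```python
# Sketch of the minimum-distance selection rule described above.
# Identifiers and distance values (meters) are illustrative.

reference_objects = {"car_ahead": 35.0, "road_sign": 60.0, "bridge": 120.0}

# the reference object with the smallest distance value becomes the target
target = min(reference_objects, key=reference_objects.get)
print(target)  # car_ahead
```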
  • the heads up display device may generate identification information of each reference object located in the observation direction, where the identification information is used to uniquely identify each reference object located in the observation direction.
  • the head-up display device may acquire environment information, and input the identification information of each reference object located in the observation direction, the environment information, and the distance values to the respective reference objects located in the observation direction into a preset recognition model; the reference object identified by the identification information output by the preset recognition model is determined as the target object.
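The patent leaves the internals of the "preset recognition model" abstract; the sketch below only illustrates its interface (identification information, environment information, and distances in; one identifier out). The stand-in decision rule and all field names are assumptions:

```python
# Interface sketch for the "preset recognition model" described above.
# The model's internals are unspecified in the patent; this stand-in
# simply returns the nearest object's identifier, ignoring the
# environment information it would normally also weigh.

def preset_recognition_model(features):
    """Features in, identifier of the target object out."""
    return min(features["objects"], key=lambda o: o["distance"])["id"]

features = {
    "environment": {"weather": "rain", "speed_mps": 20.0},  # assumed fields
    "objects": [{"id": "obj_1", "distance": 42.0},
                {"id": "obj_2", "distance": 18.0}],
}
print(preset_recognition_model(features))  # obj_2
```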
  • the head-up display device may acquire first attribute information of each reference object located in the observation direction, where the first attribute information includes motion information and/or volume information of each reference object located in the observation direction; calculate a risk coefficient of each reference object located in the observation direction according to the first attribute information of the respective reference objects and the distance values to the respective reference objects located in the observation direction; and determine the reference object with the highest risk coefficient among the plurality of reference objects located in the observation direction as the target object.
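The risk-coefficient step can be sketched as follows. The patent only states that the coefficient combines motion and/or volume information with the distance value; this particular formula and the sample values are assumptions:

```python
# Hypothetical risk-coefficient sketch: nearer, faster-approaching,
# larger objects are riskier. The formula itself is an assumption;
# the patent specifies only the inputs, not how they are combined.

def risk_coefficient(closing_speed_mps, volume_m3, distance_m):
    return (max(closing_speed_mps, 0.0) + 1.0) * volume_m3 / max(distance_m, 1.0)

objects = {
    "oncoming_truck": risk_coefficient(15.0, 40.0, 80.0),  # (16 * 40) / 80 = 8.0
    "parked_car":     risk_coefficient(0.0, 8.0, 30.0),    # (1 * 8) / 30 ~= 0.27
}

# the reference object with the highest risk coefficient becomes the target
target = max(objects, key=objects.get)
print(target)  # oncoming_truck
```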
  • the head-up display device may acquire location information corresponding to the head-up display device, and acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object located in the observation direction.
  • the second attribute information includes any one or more of status information, intensity information, quality information, and value information of each reference object located in the observation direction; an evaluation value of each reference object located in the observation direction is determined according to the second attribute information of each reference object in the observation direction; and the reference object with the largest evaluation value among the plurality of reference objects located in the observation direction is determined as the target object.
  • for details, reference may be made to step S202 of the focusing method shown in FIG. 2 of the present application, and details are not described herein again.
  • the head-up display device may directly determine the reference object as the target object.
  • S304 Determine whether the distance value of the target object is less than a preset distance value.
  • the heads up display device may store the identifiers of the respective reference objects in association with the distance values to the respective reference objects. After the heads up display device determines the target object, the distance value of the target object may be queried according to the identifier of the target object.
  • the preset distance value is a hyperfocal distance set in advance by the head-up display device.
  • the first focus instruction is used to instruct the head-up display device to adjust the projection focal length according to the distance value to the target object, so that the projected content displayed by the head-up display device is located at substantially the same distance as the target object.
  • the second focus adjustment instruction is used to instruct the head display device to adjust a projection focus according to the preset distance value.
  • for a target object located beyond the preset distance value, the head-up display device may focus at the maximum focus distance.
  • when the distance value to the target object is less than the preset distance value, the head-up display device may generate the first focus adjustment instruction; when the distance value to the target object is equal to or greater than the preset distance value, the head-up display device may generate the second focus adjustment instruction.
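The branch in step S304 reduces to clamping the focus distance at the preset (hyperfocal) value. A minimal sketch; the preset distance and sample values are illustrative:

```python
# Sketch of the step-S304 branch: focus at the target's distance when it is
# closer than the preset value, otherwise clamp to the preset (hyperfocal)
# distance. The 100 m preset is an illustrative assumption.

PRESET_DISTANCE_M = 100.0  # assumed hyperfocal distance

def focus_distance(target_distance_m):
    if target_distance_m < PRESET_DISTANCE_M:
        return target_distance_m   # corresponds to the first focus instruction
    return PRESET_DISTANCE_M       # corresponds to the second focus instruction

print(focus_distance(35.0), focus_distance(250.0))  # 35.0 100.0
```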
  • the head-up display device can also project and display the relevant data according to the adjusted projection focal length.
  • the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object; the driver can thus view the target object without switching the focus point to view the projected content, improving driving comfort and driving safety.
  • FIG. 4 is a schematic structural diagram of a focusing device according to an embodiment of the present invention.
  • the focusing device is disposed in a head-up display device for projecting display related data according to a projection focal length.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the focusing device 40 can include one or more communication interfaces 401 and one or more processors 402.
  • the one or more processors 402 can operate individually or in concert.
  • the communication interface 401 and the processor 402 may be connected by, but are not limited to, a bus 403.
  • the communication interface 401 is configured to acquire scene sensing data.
  • the processor 402 is configured to determine a plurality of reference objects according to the scene sensing data; determine a target object from the plurality of reference objects; and generate a first focus instruction, where the first focus instruction is used to instruct the processor to adjust the projection focal length according to the distance value to the target object.
  • the communication interface 401 is further configured to acquire direction sensing data for the target object.
  • the processor 402 is further configured to determine an observation direction of the target object according to the direction sensing data
  • the processor 402 is configured to determine a target object from a plurality of reference objects located in the observation direction when the determining the target object from the plurality of reference objects.
  • the processor 402 is further configured to determine a distance value to each reference object according to the scene sensing data.
  • the scene sensing data includes image sensing data and distance sensing data
  • when the processor 402 determines the plurality of reference objects according to the scene sensing data, the processor 402 is specifically configured to identify a plurality of reference objects according to the image sensing data;
  • the processor 402 is configured to determine a distance value to each reference object according to the distance sensing data when performing the determining the distance value to each reference object according to the scene sensing data.
  • the processor 402 is configured to determine a reference object corresponding to the minimum distance value as the target object when the determining the target object from the plurality of reference objects.
  • the communication interface 401 is further configured to acquire environment information, where the environment information includes any one or more of weather information and location information and motion speed corresponding to the head-up display device;
  • the processor 402 is configured to generate identification information of each reference object when the target object is determined from the plurality of reference objects, where the identification information is used to uniquely identify each reference object;
  • the identification information of the reference object, the environmental information, and the distance value to the respective reference objects are input into the preset recognition model, and the reference object identified by the identification information output by the preset recognition model is determined as the target object.
  • the communication interface 401 is further configured to acquire first attribute information of each reference object, where the first attribute information includes motion information and/or volume information of each reference object;
  • when the processor 402 determines the target object from the plurality of reference objects, the processor 402 is specifically configured to calculate a risk coefficient of each reference object according to the first attribute information of the respective reference objects and the distance value to each reference object, and to determine the reference object with the highest risk coefficient among the plurality of reference objects as the target object.
  • the communication interface 401 is further configured to acquire location information corresponding to the heads-up display device, acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object,
  • the second attribute information includes any one or more of status information, intensity information, quality information, and value information of each reference object;
  • the processor 402 performs the determining of the target object from the plurality of reference objects, specifically, determining, according to the second attribute information of the respective reference objects, an evaluation value of each reference object; The reference object having the largest evaluation value among the reference objects is determined as the target object.
  • the processor 402 is further configured to determine whether the distance value to the target object is less than a preset distance value; if yes, perform the generating of the first focus instruction; if not, generate a second focus instruction, where the second focus instruction is used to instruct the processor to adjust the projection focal length according to the preset distance value.
  • the communication interface 401 is further configured to acquire direction sensing data for the target object.
  • the processor 402 is further configured to determine an observation field of view of the target object according to the direction sensing data; and if the content displayed by the head display device is within the observation field of view, control the communication interface 401 Perform the acquiring scene sensing data.
  • the processor 402 is further configured to determine an observation field of view of the target object according to the direction sensing data; If the content displayed by the heads-up display device is within the observation field of view, performing the determining the direction of observation of the target object according to the direction sensing data.
  • the scene sensing data, the direction sensing data, the environment information, the location information, and the first attribute information described in the embodiments of the present invention may be collected by corresponding sensors in the movable platform and obtained by the communication interface 401 from the movable platform.
  • the processor 402 may be a central processing unit (CPU), and the processor 402 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, and the like.
  • the general purpose processor may be a microprocessor, or the processor 402 may be any conventional processor or the like.
  • the communication interface 401 and the processor 402 described in the embodiment of the present invention may implement the implementation manner of the focusing method shown in FIG. 2 or FIG. 3 of the present application, and details are not described herein again.
  • the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object; the driver can thus view the target object without switching the focus point to view the projected content, improving driving comfort and driving safety.
  • FIG. 5 is a schematic structural diagram of another focusing device according to an embodiment of the present invention.
  • the focusing device is disposed in a head-up display device for projecting display related data according to a projection focal length.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the focusing device 50 can include one or more processors 501 and one or more communication interfaces 502. Wherein the one or more processors 501 can work individually or in concert.
  • the processor 501 and the communication interface 502 may be connected by, but are not limited to, a bus 503.
  • the processor 501 is configured to acquire scene sensing data, and determine a plurality of reference objects according to the scene sensing data; determine a target object from the plurality of reference objects; generate a first focusing instruction, where The first focus command is used to instruct the processor to adjust a projection focus according to the distance value to the target object.
  • the processor 501 is further configured to acquire direction sensing data for the target object, and determine an observation direction of the target object according to the direction sensing data;
  • the processor 501 is configured to determine a target object from a plurality of reference objects located in the observation direction when the determining the target object from the plurality of reference objects.
  • the processor 501 is further configured to determine a distance value to each reference object according to the scene sensing data.
  • the scene sensing data includes image sensing data and distance sensing data
  • the processor 501 is configured to: when determining, according to the scene sensing data, a plurality of reference objects, specifically for identifying a plurality of reference objects according to the image sensing data;
  • the processor 501 is configured to determine a distance value to each reference object according to the distance sensing data when performing the determining a distance value to each reference object according to the scene sensing data.
  • the processor 501 is configured to determine a reference object corresponding to the minimum distance value as the target object when the determining the target object from the plurality of reference objects.
  • when the processor 501 determines the target object from the plurality of reference objects, the processor 501 is specifically configured to: generate identification information of each reference object, where the identification information is used to uniquely identify each reference object; acquire environment information, where the environment information includes any one or more of weather information, location information corresponding to the head-up display device, and motion speed; input the identification information of the respective reference objects, the environment information, and the distance values to the respective reference objects into the preset recognition model; and determine the reference object identified by the identification information output by the preset recognition model as the target object.
  • the processor 501 is configured to obtain first attribute information of each reference object when the target object is determined from the plurality of reference objects, where the first attribute information includes each reference object. Motion information and/or volume information; calculating a risk coefficient of each reference object according to the first attribute information of the respective reference objects and the distance value to each reference object; and having the highest risk coefficient among the plurality of reference objects The reference object is determined as the target object.
  • the processor 501 is further configured to acquire location information corresponding to the heads up display device;
  • the communication interface 502 is configured to acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object, and the second attribute information includes status information, intensity information, and quality of each reference object. Any one or more of information and value information;
  • when the processor 501 determines the target object from the plurality of reference objects, the processor 501 is specifically configured to calculate the evaluation value of each reference object according to the second attribute information of the respective reference objects, and to determine the reference object with the largest evaluation value among the reference objects as the target object.
  • the processor 501 is further configured to determine whether the distance value to the target object is less than a preset distance value; if yes, perform the generating of the first focus instruction; if not, generate a second focus instruction, where the second focus instruction is used to instruct the processor to adjust the projection focal length according to the preset distance value.
  • the processor 501 is further configured to acquire direction sensing data for the target object, and determine an observation field of view of the target object according to the direction sensing data; if the head display device displays The content is within the range of the viewing field, and the acquiring scene sensing data is performed, and a plurality of reference objects are determined according to the scene sensing data.
  • the processor 501 is further configured to determine an observation field of view of the target object according to the direction sensing data; if the content displayed by the head-up display device is within the observation field of view, the step of determining the observation direction of the target object according to the direction sensing data is performed.
  • the processor 501 in the embodiment of the present invention may be the processor described in the foregoing embodiment.
  • the processor 501 and the communication interface 502 described in the embodiments of the present invention may implement the implementation of the focusing method shown in FIG. 2 or FIG. 3 of the present application; the description of identical parts will not be repeated here.
  • the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object; the driver can thus view the target object without switching the focus point to view the projected content, improving driving comfort and driving safety.
  • FIG. 6 is a schematic structural diagram of a head-up display device according to an embodiment of the present invention.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the head-up display device 60 may include the focusing device 40 and the projection device 601 shown in FIG. 4 of the present application.
  • the focusing device 40 and the projection device 601 can be connected by a bus 602.
  • the head-up display device 60 may further include components not shown in FIG. 6, such as a power system, a visual sensor (e.g., a binocular vision sensor or a monocular vision sensor), a distance sensor, an image sensor, a weather sensor (e.g., a temperature sensor or a humidity sensor), a positioning device, a speed sensor (e.g., a linear speed sensor), a motion sensor, a volume sensor (e.g., an ultrasonic volume sensor), and the like.
  • the projection device 601 is configured to project and display related data according to the adjusted projection focal length.
  • the projection device 601 may include a projection module and a mirror surface, where the mirror surface may be a separate partially transparent lens.
  • alternatively, the projection device 601 may include only a projection module, in which case the front windshield of the automobile or the like can be used as the mirror surface.
  • the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object; the driver can thus view the target object without switching the focus point to view the projected content, improving driving comfort and driving safety.
  • a computer readable storage medium is also provided in an embodiment of the present invention. The computer readable storage medium stores a computer program, and the computer program includes program instructions.
  • when the program instructions are invoked by the processor 402 shown in FIG. 4 of the present application, the processor 402 is caused to perform the focusing method shown in FIG. 2 or FIG. 3 of the present application.
  • the computer readable storage medium may be an internal storage unit of the movable platform described herein, such as a hard disk or memory of the movable platform.
  • the computer readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk equipped on the movable platform, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, or the like.
  • further, the computer readable storage medium may include both an internal storage unit of the movable platform and an external storage device.
  • the computer readable storage medium is used to store the computer program and other programs and data required by the movable platform.
  • the computer readable storage medium can also be used to temporarily store data that has been output or is about to be output.
  • FIG. 7 is a schematic structural diagram of another head-up display device according to an embodiment of the present invention.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the head-up display device 70 may include: a scene sensor 701, a direction sensor 702, a weather sensor 703, a positioning device 704, a speed sensor 705, a motion sensor 706, a volume sensor 707, a projection device 708, and the focusing device 50 shown in FIG. 5 of the present application.
  • the scene sensor 701, the direction sensor 702, the weather sensor 703, the positioning device 704, the speed sensor 705, the motion sensor 706, the volume sensor 707, the projection device 708, and the focusing device 50 may be connected by, but are not limited to, a bus 709.
  • the head-up display device 70 may also include a power supply system or the like not shown in FIG. 7.
  • the scene sensor 701 is configured to collect scene sensing data.
  • the scene sensor 701 may be, for example, a binocular vision sensor.
  • the scene sensing data includes image sensing data and distance sensing data.
  • the scene sensor 701 may include a monocular vision sensor and a distance sensor for acquiring image sensing data and distance sensing data, respectively.
  • the distance sensor may include but is not limited to a lidar sensor, a millimeter wave sensor, and an ultrasonic radar sensor.
  • the direction sensor 702 is configured to collect direction sensing data.
  • the direction sensing data is eyeball image data of the target object.
  • the direction sensor 702 can be, for example, an image sensor.
  • the weather sensor 703 is configured to collect weather information.
  • the weather information includes temperature sensing data and humidity sensing data.
  • the weather sensor 703 may include a temperature sensor and a humidity sensor for acquiring temperature sensing data and humidity sensing data, respectively.
  • the positioning device 704 is configured to collect location information corresponding to the heads up display device.
  • the location information corresponding to the heads up display device may be GPS positioning data.
  • the speed sensor 705 is configured to collect a motion speed corresponding to the head display device.
  • the speed sensor 705 can be, for example, a line speed sensor.
  • the motion sensor 706 is configured to collect motion information of each reference object.
  • the motion information of each reference object may include, but is not limited to, a motion trajectory, a motion direction, a motion speed, and the like of each reference object.
  • the volume sensor 707 is configured to collect volume information of each reference object.
  • the volume sensor 707 can be, for example, an ultrasonic volume sensor.
  • the focusing device 50 shown in FIG. 5 of the present application can acquire the data collected by the scene sensor 701, the direction sensor 702, the weather sensor 703, the positioning device 704, the speed sensor 705, the motion sensor 706, and the volume sensor 707, and perform the focusing method shown in FIG. 2 or FIG. 3 of the present application.
  • the projection device 708 is configured to project and display the relevant data according to the adjusted projection focal length.
  • the projection device 708 may include a projection module and a mirror surface, where the mirror surface may be a separate partially transparent lens.
  • alternatively, the projection device 708 may include only a projection module, in which case the front windshield of the automobile or the like can be used as the mirror surface.
  • the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object; the driver can thus view the target object without switching the focus point to view the projected content, improving driving comfort and driving safety.
  • an embodiment of the present invention also provides a computer readable storage medium storing a computer program. The computer program includes program instructions that, when invoked by the processor 501 shown in FIG. 5 of the present application, cause the processor 501 to perform the focusing method shown in FIG. 2 or FIG. 3 of the present application.
  • the computer readable storage medium in the embodiments of the present invention may be the computer readable storage medium described in the foregoing embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Instrument Panels (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to a focusing method and apparatus, and to a head-up display device. The method comprises: acquiring scene sensing data and determining a plurality of reference objects according to the scene sensing data; determining a target object among the plurality of reference objects; and generating a first focusing instruction, the first focusing instruction being used to instruct a head-up display device (102) to adjust a projection focal length according to the distance value of the target object. The present invention can realize adaptive adjustment of a projection focal length, thereby increasing the degree of overlap between projected content and a target object.
PCT/CN2017/119431 2017-12-28 2017-12-28 Procédé et appareil de mise au point et dispositif d'affichage tête haute WO2019127224A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780023179.4A CN109076201A (zh) 2017-12-28 2017-12-28 调焦方法、装置及抬头显示设备
PCT/CN2017/119431 WO2019127224A1 (fr) 2017-12-28 2017-12-28 Procédé et appareil de mise au point et dispositif d'affichage tête haute

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119431 WO2019127224A1 (fr) 2017-12-28 2017-12-28 Procédé et appareil de mise au point et dispositif d'affichage tête haute

Publications (1)

Publication Number Publication Date
WO2019127224A1 true WO2019127224A1 (fr) 2019-07-04

Family

ID=64812375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119431 WO2019127224A1 (fr) 2017-12-28 2017-12-28 Procédé et appareil de mise au point et dispositif d'affichage tête haute

Country Status (2)

Country Link
CN (1) CN109076201A (fr)
WO (1) WO2019127224A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415370B (zh) * 2020-05-15 2023-06-06 华为技术有限公司 一种抬头显示装置、显示方法及显示系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015019567A1 (fr) * 2013-08-09 2015-02-12 株式会社デンソー Dispositif d'affichage d'informations
CN104515531A (zh) * 2013-09-30 2015-04-15 本田技研工业株式会社 增强的3-维(3-d)导航
CN105008170A (zh) * 2013-02-22 2015-10-28 歌乐株式会社 车辆用平视显示器装置
CN105711511A (zh) * 2014-12-22 2016-06-29 罗伯特·博世有限公司 用于运行平视显示器的方法、显示设备、车辆
CN106454310A (zh) * 2015-08-13 2017-02-22 福特全球技术公司 用于增强车辆视觉性能的聚焦系统
JP2017056933A (ja) * 2015-09-18 2017-03-23 株式会社リコー 情報表示装置、情報提供システム、移動体装置、情報表示方法及びプログラム

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9472023B2 (en) * 2014-10-06 2016-10-18 Toyota Jidosha Kabushiki Kaisha Safety system for augmenting roadway objects on a heads-up display
CN104932104B (zh) * 2015-06-03 2017-08-04 青岛歌尔声学科技有限公司 一种可变焦光学系统及抬头显示系统


Also Published As

Publication number Publication date
CN109076201A (zh) 2018-12-21

Similar Documents

Publication Publication Date Title
US11194154B2 (en) Onboard display control apparatus
RU2746380C2 (ru) Индикатор на лобовом стекле с переменной фокальной плоскостью
EP3300941B1 (fr) Dispositif et procédé de fourniture d'informations
US11048095B2 (en) Method of operating a vehicle head-up display
US20160170487A1 (en) Information provision device and information provision method
JP5173031B2 (ja) 表示装置及び表示方法
US20140098008A1 (en) Method and apparatus for vehicle enabled visual augmentation
WO2021228112A1 (fr) Appareil de réglage de système de poste de pilotage et procédé destiné à régler un système de poste de pilotage
CN112344963B (zh) 一种基于增强现实抬头显示设备的测试方法及系统
JP2018127099A (ja) 車両用表示制御装置
US10672269B2 (en) Display control assembly and control method therefor, head-up display system, and vehicle
EP3496041A1 (fr) Procédé et appareil d'estimation de paramètre d'écran virtuel
JP2022105256A (ja) マルチビュー自動車及びロボット工学システムにおける画像合成
JP7300112B2 (ja) 制御装置、画像表示方法及びプログラム
US20200192091A1 (en) Method and apparatus for providing driving information of vehicle, and recording medium
JP2022176081A (ja) 適応視標追跡機械学習モデル・エンジン
WO2021227784A1 (fr) Dispositif d'affichage tête haute et procédé d'affichage tête haute
JPWO2020105685A1 (ja) 表示制御装置、方法、及びコンピュータ・プログラム
WO2019127224A1 (fr) Procédé et appareil de mise au point et dispositif d'affichage tête haute
JP6494764B2 (ja) 表示制御装置、表示装置及び表示制御方法
Kang et al. Do you see what I see: towards a gaze-based surroundings query processing system
US10795167B2 (en) Video display system, video display method, non-transitory storage medium, and moving vehicle for projecting a virtual image onto a target space
US20160117802A1 (en) Display control device, display control method, non-transitory recording medium, and projection device
CN116338958A (zh) 双层图像成像方法、装置、电子设备及存储介质
US20220013046A1 (en) Virtual image display system, image display method, head-up display, and moving vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936286

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936286

Country of ref document: EP

Kind code of ref document: A1