WO2019127224A1 - Focusing method, apparatus, and head-up display device - Google Patents

Focusing method, apparatus, and head-up display device

Info

Publication number
WO2019127224A1
WO2019127224A1 (PCT/CN2017/119431)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
sensing data
display device
information
determining
Prior art date
Application number
PCT/CN2017/119431
Other languages
English (en)
French (fr)
Inventor
吴军
刘怀宇
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/119431
Priority to CN201780023179.4A (published as CN109076201A)
Publication of WO2019127224A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141 Constructional details thereof
    • H04N9/317 Convergence or focusing systems

Definitions

  • the present invention relates to the field of projection display technologies, and in particular, to a focusing method, device, and head-up display device.
  • a head-up display (HUD) is a driving aid used on movable platforms such as airplanes and automobiles.
  • the head-up display uses the principle of optical reflection to project important driving-related information onto a piece of glass. This glass is located at the front of the driver's seat, roughly at the same level as the driver's eyes, so the driver can view the driving-related information without looking down, which improves driving safety.
  • heads-up displays generally employ a fixed-focus-plane design that uses a fixed projected imaging distance. That is to say, in any case, the projected content seen by the driver (such as driving-related information) is at a fixed distance. Since the driver's gaze and focus fall on objects at different distances depending on the scene, the fixed focus plane prevents the driver from observing the projected content and the actual object well at the same time.
  • the embodiment of the invention provides a focusing method, a device and a head-up display device, which can realize adaptive adjustment of the projection focal length.
  • in a first aspect, an embodiment of the present invention provides a focusing method, which is applied to a head-up display device, where the head-up display device is configured to project and display related data according to a projection focal length, and the method includes:
  • an embodiment of the present invention further provides a focusing device, which is disposed in a head-up display device, where the head-up display device is configured to project and display related data according to a projection focal length, and the device includes:
  • a communication interface configured to acquire scene sensing data
  • a processor configured to determine a plurality of reference objects according to the scene sensing data; determine a target object from the plurality of reference objects; and generate a first focus instruction, where the first focus instruction is used to instruct the processor to adjust the projection focal length according to the distance value to the target object.
  • an embodiment of the present invention provides a head-up display device, where the head-up display device includes:
  • a projection device for displaying related data according to a projection focal length
  • an embodiment of the present invention further provides a computer-readable storage medium, where the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to perform the focusing method of the first aspect described above.
  • in the embodiments of the present invention, the projection focal length is adjusted according to the distance value to the target object, realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content without switching the focus point while viewing the target object, thereby improving driving comfort and driving safety.
  • FIG. 1 is a schematic diagram of an application scenario of a focusing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flow chart of a focusing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart of another focusing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a focusing device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another focusing device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a head-up display device according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another head-up display device according to an embodiment of the present invention.
  • the heads-up display projects the content that needs to be displayed onto the glass in front of the driver based on projection technology.
  • the projection technology is any one of a liquid crystal display (LCD) technology, a liquid crystal on silicon (LCOS) technology, or a digital light processing (DLP) technology.
  • the system configuration of the heads up display generally includes: an application processor, a projection module, and a mirror surface.
  • the application processor is configured to generate projected content;
  • the projection module is configured to convert an electronic signal of the image into projection light, wherein the projection module includes a multi-lens prism system for focusing the projected content on a specific focus plane (also called an imaging plane);
  • the mirror surface is generally a separate partially transparent lens or a front windshield for reflecting the projected content.
  • Existing heads-up displays generally employ a fixed focus plane design, that is, in any case, the projected content (such as driving related information) that the driver sees is at a fixed distance.
  • the focus plane is generally two to ten meters away from the driver, depending on the system design. Since the driver's gaze and focus fall on objects at different distances depending on the scene (such as slow-driving and fast-driving scenes), the fixed-focus-plane design may leave the driver unable to observe the projected content effectively overlapped with the actual object, so the driver must repeatedly switch the focus point between the projected content and the actual object, which reduces driving comfort and driving safety.
  • a small number of heads-up displays use a dual-system design with two application processors and two projection modules. The two systems use different focus-plane designs: in general, one system focuses at a closer distance and the other at a farther distance. Although the dual-system design improves on the fixed-focus-plane design described above, it still cannot completely solve the problem of the driver repeatedly switching the focus point in different scenarios.
  • the embodiment of the present invention provides a focusing method.
  • the focusing method is applied to a head-up display device, and the head-up display device is configured to display projected content according to a projection focal length.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the system configuration of the head-up display device includes a processor (or an application processor), a projection module, and a mirror surface.
  • the processor is configured to generate projected content.
  • the projection module is configured to convert an electronic signal of the image into projection light, wherein the projection module comprises a multi-lens prism system for focusing the projected content on a virtual plane, and the specific focus plane is determined by the optical system of the projection module.
  • the optical system of the projection module is a zoom optical system, and the projection content can be projected to different focus planes according to settings under the control of software.
  • the mirror surface is used to reflect the projected content to the driver's eyes.
  • the mirror surface may be a separate partially transparent lens or the front windshield of the automobile.
  • the head-up display device may use pupil tracking technology (also called eye tracking technology) to identify the driver's current viewing direction.
  • the head-up display device can acquire eyeball image data of the driver, process the eyeball image data to identify features of the driver's pupils, and calculate the driver's current viewing direction in real time from these features.
  • the system configuration of the heads up display device may further include a camera vision system, wherein the camera vision system may include an image sensor.
  • the image data may be acquired directly by an image sensor in the heads up display device.
  • the image data may also be collected by an image sensor in a movable platform, and the head-up display device acquires it from the movable platform.
  • the head-up display device may analyze the scene in front of the vehicle by using vehicle sensing technology, and determine a plurality of reference objects and distance values to the respective reference objects. Specifically, the heads up display device may acquire scene sensing data, and determine a plurality of reference objects and distance values to the respective reference objects according to the scene sensing data.
  • the scene sensing data includes image sensing data and distance sensing data.
  • the image sensing data and the distance sensing data may be collected by a binocular vision sensor, or may be collected by a monocular vision sensor and a distance sensor, respectively.
  • the distance sensor may include but is not limited to a lidar sensor, a millimeter wave sensor, and an ultrasonic radar sensor.
  • the system configuration of the head-up display device may further include a sensing system, wherein the sensing system may include a binocular vision sensor and/or a monocular vision sensor and a distance sensor.
  • the scene sensing data may be acquired directly by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the heads up display device.
  • the scene sensing data may also be collected by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the movable platform, and the head-up display device may obtain it from the movable platform.
  • the head-up display device may determine a target object from a plurality of reference objects located in the viewing direction.
  • the target object may be a nearest reference object located in the viewing direction.
  • the head-up display device may determine a reference object corresponding to a minimum distance value among the plurality of reference objects located in the observation direction as the target object.
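The nearest-object rule described in the bullets above can be sketched as follows. This is an illustrative sketch only; the names `ReferenceObject` and `select_target` are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class ReferenceObject:
    identifier: str
    distance: float  # distance value to the reference object, in meters

def select_target(objects_in_view):
    """Return the reference object with the minimum distance value,
    i.e. the nearest reference object in the viewing direction."""
    if not objects_in_view:
        return None
    return min(objects_in_view, key=lambda obj: obj.distance)

# Example: a pedestrian at 18 m and a pillar at 35 m lie in the viewing direction.
candidates = [ReferenceObject("pedestrian", 18.0), ReferenceObject("pillar", 35.0)]
target = select_target(candidates)  # the pedestrian, since its distance is smaller
```

The same comparison underlies the d2-versus-d3 example given later for FIG. 1.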
  • the heads-up display device may determine the target object based on deep learning. Specifically, the head-up display device may collect the geographical location, weather, vehicle speed, identified reference objects, and distance values as input data, label the target object that requires the most attention, select a neural network model (e.g., the AlexNet convolutional neural network model), train it with deep learning, and subsequently use the trained network to quickly determine the target object.
  • the head-up display device may determine the target object based on a risk factor (or referred to as a risk degree) of the reference object. Specifically, the head-up display device may determine a reference object having the highest risk coefficient among the plurality of reference objects located in the observation direction as the target object.
  • the risk factor of the reference object may be determined according to a distance value to the reference object, a motion trajectory of the reference object, a motion direction of the reference object, a motion speed of the reference object, a volume of the reference object, and the like.
  • the head-up display device may use a high-precision map to determine the target object according to attribute information of the reference objects.
  • the high-precision map identifies attribute information of different reference objects, and the attribute information of a reference object includes, but is not limited to, whether the reference object is a movable object, the rigid strength of the reference object, the quality of the reference object, the value of the reference object, and so on.
  • the heads-up display device may score the attribute information, and may further weight the scores of the respective attribute information according to weight values of the different attribute information set in advance, to obtain a total score (i.e., an evaluation value of the reference object).
  • the head-up display device may determine a reference object having the highest evaluation value among the plurality of reference objects located in the observation direction as the target object.
  • the head-up display device may control its zoom optical system to perform focusing according to the distance value to the target object (i.e., adjust the projection focal length), so that the projected content displayed by the head-up display device is located at substantially the same distance as the target object.
  • the distance value to the target object refers to the distance value of the sensor that collects the distance sensing data to the target object.
  • the head-up display device may further determine whether the distance value to the target object exceeds a preset distance value (also referred to as the hyperfocal distance); if so, the head-up display device controls its zoom optical system to perform focusing according to the preset distance value.
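As a minimal sketch of the distance-driven focusing with the preset-distance cap described above; the 50 m cap and the function name are assumptions, not values from the patent:

```python
PRESET_MAX_FOCUS_DISTANCE = 50.0  # assumed preset distance value, in meters

def focus_distance(target_distance, cap=PRESET_MAX_FOCUS_DISTANCE):
    """Focus at the target object's distance, but never beyond the preset value."""
    return min(target_distance, cap)
```

Within the cap, the projected content lands at the target object's distance; beyond it, the zoom optical system simply focuses at the preset value.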
  • the heads up display device may further display the projected content generated by the processor according to the adjusted projected focal length.
  • FIG. 1 is a schematic diagram of an application scenario of a focus adjustment method according to an embodiment of the present invention.
  • the illustrated application scenario includes a car 101, a heads-up display device 102 installed in the car 101, a stone 103 located in front of the car 101, a pedestrian 104, and a pillar 105.
  • the heads-up display device 102 can identify the current viewing direction v of the driver (not shown in FIG. 1) in the car 101 using a pupil tracking technique.
  • the heads-up display device 102 may analyze the scene in front of the automobile 101 using vehicle sensing technology, and determine the stone 103 and its distance value d1 from the automobile 101, the pedestrian 104 and its distance value d2 from the automobile 101, and the pillar 105 and its distance value d3 from the automobile 101.
  • the heads-up display device 102 can determine that the pedestrian 104 and the pillar 105 are located in the viewing direction v.
  • the heads-up display device 102 can determine a target object from the pedestrian 104 and the pillar 105 located in the viewing direction v.
  • the heads-up display device 102 may determine, of the pedestrian 104 and the pillar 105 located in the viewing direction v, the one nearest to the car 101 as the target object. Specifically, the heads-up display device 102 can compare the distance values d2 and d3; because d2 is smaller than d3, the heads-up display device 102 determines the pedestrian 104 as the target object.
  • the heads-up display device 102 can perform focusing (i.e., adjust the projection focal length) according to the distance value d2 from the automobile 101 to the pedestrian 104, so that the projected content displayed by the heads-up display device 102 (for example, driving-related information) is located at substantially the same distance as the pedestrian 104, thereby improving the overlap of the driving-related information and the target object.
  • the heads-up display device 102 can also display the driving-related information according to the adjusted projection focal length.
  • the driving-related information is displayed on the front windshield in front of the driver in the car 101. Since the driving-related information is at substantially the same distance as the pedestrian 104, the driver can view the driving-related information while observing the pedestrian 104 without switching the focus point; that is, the driver can effectively observe the pedestrian 104 and the driving-related information in overlap.
  • the focusing method provided by the embodiment of the present invention adjusts the projection focal length of the head-up display device according to the distance value to the target object, realizes adaptive adjustment of the projection focal length, and improves the overlap of the projected content and the target object, allowing the driver to view the projected content without switching the focus point while viewing the target object, thereby improving driving comfort and driving safety.
  • the focusing method, device, and head-up display device of the embodiments of the present invention are described in detail below with reference to FIG. 2 to FIG. 7.
  • the focusing method of the embodiment of the present invention is applied to a head-up display device, which is configured to project and display related data according to a projection focal length.
  • the focusing method may include:
  • S201 Acquire scene sensing data, and determine a plurality of reference objects according to the scene sensing data.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the movable platform is driven by a driver
  • the head-up display device projects relevant data (i.e., the content that needs to be displayed) onto the glass in front of the driver using projection technology, and the glass reflects the displayed content (i.e., the projected content) to the driver's eyes.
  • the glass may be a separate partially transparent lens in the head-up display device, or may be the front windshield of an automobile.
  • the related data may be, for example, driving related information such as instantaneous traveling speed, average traveling speed, engine speed, idle fuel consumption, average fuel consumption, mileage, external environment temperature, navigation map, and the like.
  • the scene sensing data may be collected by a binocular vision sensor, or may be collected by a monocular vision sensor and a distance sensor.
  • the distance sensor may include but is not limited to a lidar sensor, a millimeter wave sensor, and an ultrasonic radar sensor.
  • the heads up display device may further determine a distance value to each reference object according to the scene sensing data.
  • the distance value to each reference object may be a distance value of the movable platform to each reference object, or may be a distance value of the head-up display device to each reference object, or may be a sensor that collects the scene sensing data to each The distance value of the reference object.
  • the scene sensing data may include image sensing data and distance sensing data.
  • when the head-up display device determines a plurality of reference objects according to the scene sensing data, it may specifically determine the plurality of reference objects according to the image sensing data; when the head-up display device determines the distance value to each reference object according to the scene sensing data, it may specifically determine the distance value to each reference object according to the distance sensing data.
  • the image sensing data and the distance sensing data may be collected by a binocular vision sensor, or may be collected by a monocular vision sensor and a distance sensor, respectively.
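One hedged way to picture the split described above (reference objects from image sensing data, distances from distance sensing data) is a simple index-based pairing. Real systems would associate detections and range measurements through spatial calibration; all names here are illustrative:

```python
def build_reference_objects(image_detections, distance_measurements):
    """Pair each object label detected in the image sensing data with the
    corresponding distance value from the distance sensing data."""
    return [
        {"label": label, "distance": dist}
        for label, dist in zip(image_detections, distance_measurements)
    ]

# Image sensing data yields labels; distance sensing data yields range values.
refs = build_reference_objects(["stone", "pedestrian", "pillar"], [12.5, 18.0, 35.0])
```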
  • the scene sensing data may be directly collected by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the heads-up display device.
  • the scene sensing data may also be collected by a binocular vision sensor (or a monocular vision sensor and a distance sensor) in the movable platform, and the head-up display device obtains it from the movable platform.
  • the reference object refers to an object located outside the movable platform that the heads-up display device identifies by analyzing the scene in front of the movable platform according to the scene sensing data. It can be understood that the term "object" in the embodiments of the present invention covers both living objects (such as humans and animals) and inanimate objects (such as stones and railings) located outside the movable platform.
  • the heads-up display device may further acquire direction sensing data for the target object and determine an observation field of view of the target object according to the direction sensing data; if the content displayed by the heads-up display device is within the observation field of view, the acquiring of scene sensing data and the determining of a plurality of reference objects according to the scene sensing data are performed.
  • here, the target object is the driver mentioned above. It should be noted that the term "driver" in the embodiments of the present invention covers both the driver of an automobile and the pilot of an aircraft.
  • otherwise, if the content displayed by the head-up display device is not within the observation field of view, the head-up display device may not perform the method of the embodiment of the present invention.
  • in this case, the head-up display device may project and display related data according to a preset projection focal length, or may project and display related data according to the current projection focal length.
  • the target object refers to the object observed by the driver.
  • the heads up display device may determine a reference object corresponding to the minimum distance value as the target object.
  • the distance value to the target object is smaller than the distance value to other reference objects, that is, the target object is the closest reference object.
  • the heads-up display device may generate identification information of each reference object, where the identification information is used to uniquely identify each reference object; acquire environment information, where the environment information includes any one or more of weather information, position information corresponding to the head-up display device, and motion speed corresponding to the head-up display device; input the identification information of the respective reference objects, the environment information, and the distance values to the reference objects into a preset recognition model; and determine, as the target object, the reference object identified by the identification information output by the preset recognition model.
  • the weather information may include, but is not limited to, temperature sensing data, humidity sensing data, and the like.
  • the weather information may be collected directly by a weather sensor (such as a temperature sensor or a humidity sensor) in the head-up display device, or may be collected by a weather sensor in a movable platform, and the head-up display device obtains it from the movable platform through a communication interface.
  • the location information corresponding to the heads up display device may be Global Positioning System (GPS) positioning data.
  • the position information corresponding to the head-up display device may be collected directly by a positioning device in the head-up display device, or may be collected by a positioning device in the movable platform, and the head-up display device obtains it from the movable platform.
  • the motion speed corresponding to the head-up display device is the traveling speed of the movable platform.
  • the motion speed corresponding to the head-up display device may be collected directly by a speed sensor (such as a linear speed sensor) in the head-up display device, or may be collected by a speed sensor in the movable platform, and the head-up display device obtains it from the movable platform through a communication interface.
  • the preset recognition model may be a neural network model (such as the AlexNet convolutional neural network model) trained using deep learning.
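The inputs to the preset recognition model described above might be packed into a single record as sketched below. The record layout and field names are assumptions, since the patent specifies only which quantities serve as inputs and that the model outputs the identification information of the target object:

```python
def build_model_input(reference_objects, weather, position, speed):
    """Pack identification info, environment info, and distance values
    into one record for the recognition model (illustrative layout)."""
    return {
        "weather": weather,    # e.g. temperature/humidity sensing data
        "position": position,  # e.g. GPS positioning data
        "speed": speed,        # traveling speed of the movable platform
        "objects": [(obj["id"], obj["distance"]) for obj in reference_objects],
    }

sample = build_model_input(
    [{"id": "ref-1", "distance": 18.0}, {"id": "ref-2", "distance": 35.0}],
    weather={"temp_c": 21.5},
    position=(22.54, 114.05),
    speed=8.3,
)
```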
  • the heads-up display device may acquire first attribute information of each reference object, where the first attribute information includes motion information and/or volume information of each reference object; calculate a risk coefficient of each reference object according to the first attribute information of each reference object and the distance value to each reference object; and determine the reference object with the highest risk coefficient among the plurality of reference objects as the target object.
  • the motion information of each reference object may include, but is not limited to, a motion trajectory, a motion direction, a motion speed, and the like of each reference object.
  • the motion information of each reference object may be collected directly by a motion sensor in the head-up display device, or may be collected by a motion sensor in a movable platform, and the head-up display device obtains it from the movable platform.
  • the volume information of each reference object may be collected directly by a volume sensor in the head-up display device, or may be collected by a volume sensor in a movable platform, and the head-up display device obtains it from the movable platform through a communication interface.
  • the volume sensor can be, for example, an ultrasonic volume sensor.
  • when the head-up display device calculates the risk coefficient of each reference object according to the first attribute information of each reference object and the distance value to each reference object, it may specifically: determine, according to a preset level-division rule, a hazard level for each item of first attribute information of each reference object and a hazard level for the distance value to each reference object; and sum the hazard levels of the items of first attribute information of each reference object and the hazard level of the distance value to each reference object, to obtain the risk coefficient of each reference object.
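The level-division-and-sum computation above can be sketched as follows. The thresholds and levels are illustrative assumptions, since the patent does not fix a concrete level-division rule:

```python
def hazard_level_from_distance(distance):
    """Assumed 3-level rule: closer reference objects are more hazardous."""
    if distance < 10.0:
        return 3
    if distance < 30.0:
        return 2
    return 1

def hazard_level_from_speed(speed):
    """Assumed 3-level rule: faster-moving reference objects are more hazardous."""
    if speed > 10.0:
        return 3
    if speed > 2.0:
        return 2
    return 1

def risk_coefficient(distance, speed):
    """Sum the hazard level of each attribute and of the distance value."""
    return hazard_level_from_distance(distance) + hazard_level_from_speed(speed)
```

With this rule, a nearby fast-moving object accumulates a higher risk coefficient than a distant stationary one, and the object with the highest coefficient becomes the target object.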
  • the heads-up display device may acquire location information corresponding to the heads-up display device; acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object; the second attribute information includes any one or more of state information, strength information, quality information, and value information of each reference object; calculate an evaluation value of each reference object according to the second attribute information of the respective reference objects; and determine the reference object with the largest evaluation value among the plurality of reference objects as the target object.
  • the map data corresponding to each location information may be pre-stored in the head-up display device. Therefore, the heads up display device can query and acquire map data corresponding to the location information corresponding to the heads up display device.
  • the map data corresponding to each location information may also be pre-stored in the movable platform, and the heads-up display device may obtain, through a communication interface, the map data corresponding to its location information from the movable platform.
  • the heads-up display device may acquire map data corresponding to location information corresponding to the head-up display device from a server through a wired connection or a wireless connection.
  • the state information of the reference object may include a fixed state or a moving state.
  • when the state information of the reference object is the fixed state, the reference object is an immovable object;
  • when the state information of the reference object is the moving state, the reference object is a movable object.
  • the strength information of the reference object is used to characterize the rigid strength of the reference object.
  • when the head-up display device calculates the evaluation value of each reference object according to the second attribute information of the respective reference objects, it may specifically: score each item of second attribute information of each reference object according to a preset scoring rule; and weight the scores of the second attribute information of each reference object according to weight values of the second attribute information set in advance, to obtain the evaluation value of each reference object.
  • for example, the score when the state information of the reference object is the moving state is higher than the score when it is the fixed state; the higher the strength of the reference object, the higher the score; the higher the quality of the reference object, the higher the score; and the higher the value of the reference object, the higher the score.
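A sketch of the score-then-weight computation, with illustrative scores and weights; the patent does not specify concrete values for either:

```python
STATE_SCORES = {"moving": 2.0, "fixed": 1.0}  # moving scores higher than fixed
WEIGHTS = {"state": 0.4, "strength": 0.2, "quality": 0.2, "value": 0.2}  # assumed

def evaluation_value(state, strength_score, quality_score, value_score):
    """Weighted sum of the per-attribute scores of one reference object."""
    return (WEIGHTS["state"] * STATE_SCORES[state]
            + WEIGHTS["strength"] * strength_score
            + WEIGHTS["quality"] * quality_score
            + WEIGHTS["value"] * value_score)

# A moving, high-value object outranks a fixed, lower-value one here.
pedestrian = evaluation_value("moving", 1.0, 1.0, 3.0)
stone = evaluation_value("fixed", 3.0, 2.0, 1.0)
```

The reference object with the largest evaluation value among those in the viewing direction is then determined as the target object.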
  • the first focus instruction is used to instruct the head-up display device to adjust the projection focal length according to the distance value to the target object, so that the projected content displayed by the head-up display device is located at substantially the same distance as the target object.
  • the heads-up display device may store the identification of each reference object in association with the distance value to that reference object. After the heads-up display device determines the target object, the distance value to the target object may be queried according to the identification of the target object.
  • the head-up display device can also project and display relevant data according to the adjusted projection focal length.
  • in the embodiment of the present invention, the projection focal length of the head-up display device is adjusted according to the distance value to the target object observed by the driver, realizing adaptive adjustment of the projection focal length and improving the overlap of the projected content and the target object, so that the driver can view the projected content without switching the focus point while viewing the target object, thereby improving driving comfort and driving safety.
  • FIG. 3 is a schematic flowchart of another focusing method according to an embodiment of the present invention.
  • the focusing method of the embodiment of the present invention is applied to a head-up display device for projecting display related data according to a projection focal length.
  • the focusing method may include:
  • S301 Acquire direction sensing data of the target object, and determine an observation direction of the target object according to the direction sensing data.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or an automobile.
  • the movable platform is driven by a driver
  • The head-up display device projects related data (i.e., the content to be displayed) onto the glass in front of the driver based on projection technology, and the glass reflects the displayed content (i.e., the projected content) to the driver's eyes.
  • The glass may be a separate partially transparent lens in the head-up display device, or may be the front windshield of an automobile.
  • the related data may be, for example, driving related information such as instantaneous traveling speed, average traveling speed, engine speed, idle fuel consumption, average fuel consumption, mileage, external environment temperature, navigation map, and the like.
  • The target object is the driver mentioned above. It should be noted that "driver" in the embodiments of the present invention is a general term covering both the driver of an automobile and the pilot of an aircraft.
  • the direction sensing data can be used to determine a viewing direction of the target object.
  • The head-up display device uses a pupil tracking technique (also referred to as an eye tracking technique) to identify the observation direction of the target object.
  • The direction sensing data for the target object may specifically be eyeball image data of the target object. When the head-up display device acquires the direction sensing data for the target object and determines the observation direction of the target object according to the direction sensing data, this may include: acquiring eyeball image data of the target object; processing the eyeball image data to identify features of the pupil of the target object's eye; and calculating the observation direction of the target object from these features in real time.
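  • The pupil-tracking step above can be illustrated with a minimal sketch. Real gaze estimation uses calibrated, model-based methods (e.g., corneal-reflection tracking); the linear pixel-to-angle mapping and the constant below are purely illustrative assumptions.

```python
# Hypothetical sketch: map the pupil's offset from the eye center in the
# eyeball image to an observation direction. The linear mapping and the
# pixels-per-degree constant are assumptions for illustration only.

def gaze_direction(pupil_px, eye_center_px, px_per_degree=8.0):
    """Return (yaw, pitch) in degrees from the pupil offset in image pixels."""
    dx = pupil_px[0] - eye_center_px[0]
    dy = pupil_px[1] - eye_center_px[1]
    yaw = dx / px_per_degree      # horizontal observation angle
    pitch = -dy / px_per_degree   # image y grows downward; pitch grows upward
    return yaw, pitch
```

In practice the mapping would be obtained per driver through a calibration routine rather than a fixed constant.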
  • the direction sensing data may also be used to determine an observation field of view of the target object.
  • Before determining the observation direction, the head-up display device may determine the observation field of view of the target object according to the direction sensing data; if the content displayed by the head-up display device is within the observation field of view, the step of determining the observation direction of the target object according to the direction sensing data is performed.
  • Otherwise, the head-up display device may not perform the method of this embodiment of the present invention.
  • the head-up display device may project display related data according to a preset projection focal length, or may project display related data according to the current projection focal length.
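  • The field-of-view check described above can be sketched as an angular test between the gaze direction and the direction toward the displayed content. The cone model and the half-angle threshold below are assumptions for illustration.

```python
import math

# Sketch of the field-of-view gate: proceed with focal-length adjustment only
# when the displayed content lies within the driver's observation field of
# view, modeled here (as an assumption) as a cone around the gaze direction.

def within_fov(content_dir, gaze_dir, half_fov_deg=30.0):
    """True when the angle between the two 2-D direction vectors is below half_fov_deg."""
    dot = sum(a * b for a, b in zip(content_dir, gaze_dir))
    norm = math.hypot(*content_dir) * math.hypot(*gaze_dir)
    cos_angle = max(-1.0, min(1.0, dot / norm))  # clamp for acos domain safety
    return math.degrees(math.acos(cos_angle)) < half_fov_deg
```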
  • S302 Acquire scene sensing data, and determine a plurality of reference objects according to the scene sensing data.
  • the heads up display device may further determine a distance value to each reference object according to the scene sensing data.
  • The distance value to each reference object may be the distance from the sensor that collects the scene sensing data to each reference object, the distance from the movable platform to each reference object, or the distance from the head-up display device to each reference object.
  • the scene sensing data may include image sensing data and distance sensing data.
  • When the head-up display device performs the determining of a plurality of reference objects according to the scene sensing data, this may specifically include: identifying a plurality of reference objects according to the image sensing data.
  • When the head-up display device performs the determining of the distance value to each reference object according to the scene sensing data, this may specifically include: determining the distance value to each reference object according to the distance sensing data.
  • the image sensing data and the distance sensing data may be collected by the same sensor, or may be collected by different sensors.
  • the reference object refers to an object located outside the movable platform that is determined by the heads up display device by analyzing the scene in front of the movable platform according to the scene sensing data.
  • S303 Determine a target object from a plurality of reference objects located in the observation direction.
  • Here, the target object refers to the object being observed by the driver.
  • The head-up display device may determine, as the target object, the reference object corresponding to the minimum distance value among the plurality of reference objects located in the observation direction.
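  • The minimum-distance rule can be sketched in a few lines; the identifiers and distance values below are illustrative.

```python
# Sketch of the nearest-object rule: among the reference objects located in
# the observation direction, the one with the minimum distance value becomes
# the target object.

def nearest_target(distances_by_id: dict) -> str:
    """distances_by_id maps each reference-object identifier to its distance value."""
    return min(distances_by_id, key=distances_by_id.get)
```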
  • Alternatively, the head-up display device may generate identification information of each reference object located in the observation direction, where the identification information is used to uniquely identify each reference object located in the observation direction; acquire environment information, where the environment information includes any one or more of weather information, location information corresponding to the head-up display device, and motion speed; input the identification information of each reference object located in the observation direction, the environment information, and the distance values to those reference objects into a preset recognition model; and determine, as the target object, the reference object identified by the identification information output by the preset recognition model.
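  • The recognition-model approach might be sketched as follows. The feature layout and the model interface are assumptions: the embodiment does not specify how the preset recognition model is constructed, so the model appears here as an arbitrary scoring callable (in practice it would be trained offline).

```python
# Hypothetical sketch: pack identification information, environment
# information, and distance values into per-object feature vectors and let a
# preset recognition model pick the target. Feature names are assumptions.

def choose_with_model(model, ids, env, distances):
    """model: callable mapping a feature list to a score; the highest-scoring
    reference object is returned as the target."""
    features = {
        oid: [distances[oid], env.get("speed", 0.0), env.get("weather_code", 0)]
        for oid in ids
    }
    return max(ids, key=lambda oid: model(features[oid]))
```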
  • Alternatively, the head-up display device may acquire first attribute information of each reference object located in the observation direction, where the first attribute information includes motion information and/or volume information of each such reference object; calculate the risk coefficient of each reference object located in the observation direction according to the first attribute information of, and the distance values to, those reference objects; and determine, as the target object, the reference object with the highest risk coefficient among the plurality of reference objects located in the observation direction.
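  • The risk-coefficient computation can be illustrated with a simple heuristic. The formula below is an assumption (risk grows with approach speed and volume and falls with distance); the embodiment does not specify an actual formula.

```python
# Illustrative risk coefficient from first attribute information (motion,
# volume) and distance. The functional form is an assumption.

def risk_coefficient(distance_m, approach_speed_mps=0.0, volume_m3=1.0):
    eps = 1e-6  # avoid division by zero at distance 0
    return (1.0 + approach_speed_mps) * volume_m3 / (distance_m + eps)

def riskiest(objects: dict) -> str:
    """objects maps identifier -> (distance, approach_speed, volume);
    returns the identifier with the highest risk coefficient."""
    return max(objects, key=lambda oid: risk_coefficient(*objects[oid]))
```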
  • Alternatively, the head-up display device may acquire location information corresponding to the head-up display device, and acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object located in the observation direction; the second attribute information includes any one or more of status information, intensity information, quality information, and value information of each such reference object; determine the evaluation value of each reference object located in the observation direction according to the second attribute information of that reference object; and determine, as the target object, the reference object with the largest evaluation value among the plurality of reference objects located in the observation direction.
  • For specific implementations of these ways of determining the target object, reference may be made to step S202 of the focusing method shown in FIG. 2 of the present application; details are not described herein again.
  • If only one reference object is located in the observation direction, the head-up display device may directly determine that reference object as the target object.
  • S304 Determine whether the distance value of the target object is less than a preset distance value.
  • the heads up display device may store the identifiers of the respective reference objects in association with the distance values to the respective reference objects. After the heads up display device determines the target object, the distance value of the target object may be queried according to the identifier of the target object.
  • The preset distance value is a hyperfocal distance set in advance by the head-up display device.
  • the first focus instruction is used to instruct the head display device to adjust a projection focus according to the distance value to the target object, so that the projected content displayed by the head display device is substantially located with the target object. The same distance.
  • The second focusing instruction is used to instruct the head-up display device to adjust the projection focal length according to the preset distance value.
  • In other words, for a target object beyond the preset distance value, the head-up display device may use the preset distance value as the maximum projection focus distance.
  • When the distance value to the target object is less than the preset distance value, the head-up display device may generate the first focusing instruction; when the distance value to the target object is greater than or equal to the preset distance value, the head-up display device may generate the second focusing instruction.
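  • Steps S304 onward can be sketched as follows; the instruction representation and the preset (hyperfocal) distance value are illustrative assumptions.

```python
# Sketch of the focus-instruction selection: below the preset (hyperfocal)
# distance, the first focusing instruction carries the target distance;
# otherwise the second focusing instruction uses the preset distance.
# The dict representation and the 50 m preset are assumptions.

HYPERFOCAL_M = 50.0  # preset distance value (assumed)

def make_focus_instruction(target_distance_m, hyperfocal_m=HYPERFOCAL_M):
    if target_distance_m < hyperfocal_m:
        return {"type": "first", "focus_distance_m": target_distance_m}
    return {"type": "second", "focus_distance_m": hyperfocal_m}
```

Capping the focal length at the hyperfocal distance keeps distant targets acceptably sharp without chasing every far-away object with the projection optics.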
  • The head-up display device may also project and display the related data according to the adjusted projection focal length.
  • In the embodiments of the present invention, the projection focal length of the head-up display device is adjusted according to the distance value of the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content while observing the target object without switching the focus point, which improves driving comfort and driving safety.
  • FIG. 4 is a schematic structural diagram of a focusing device according to an embodiment of the present invention.
  • the focusing device is disposed in a head-up display device for projecting display related data according to a projection focal length.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the focusing device 40 can include one or more communication interfaces 401 and one or more processors 402.
  • the one or more processors 402 can operate individually or in concert.
  • The communication interface 401 and the processor 402 may be connected by, but are not limited to, a bus 403.
  • the communication interface 401 is configured to acquire scene sensing data.
  • the processor 402 is configured to determine a plurality of reference objects according to the scene sensing data; determine a target object from the plurality of reference objects; generate a first focus instruction, where the first focus instruction is used Instructing the processor to adjust a projection focal length based on the distance value to the target object.
  • the communication interface 401 is further configured to acquire direction sensing data for the target object.
  • the processor 402 is further configured to determine an observation direction of the target object according to the direction sensing data
  • the processor 402 is configured to determine a target object from a plurality of reference objects located in the observation direction when the determining the target object from the plurality of reference objects.
  • the processor 402 is further configured to determine a distance value to each reference object according to the scene sensing data.
  • the scene sensing data includes image sensing data and distance sensing data
  • When the processor 402 performs the determining of a plurality of reference objects according to the scene sensing data, the processor 402 is specifically configured to identify a plurality of reference objects according to the image sensing data;
  • the processor 402 is configured to determine a distance value to each reference object according to the distance sensing data when performing the determining the distance value to each reference object according to the scene sensing data.
  • the processor 402 is configured to determine a reference object corresponding to the minimum distance value as the target object when the determining the target object from the plurality of reference objects.
  • The communication interface 401 is further configured to acquire environment information, where the environment information includes any one or more of weather information, location information corresponding to the head-up display device, and motion speed;
  • the processor 402 is configured to generate identification information of each reference object when the target object is determined from the plurality of reference objects, where the identification information is used to uniquely identify each reference object;
  • the identification information of the reference object, the environmental information, and the distance value to the respective reference objects are input into the preset recognition model, and the reference object identified by the identification information output by the preset recognition model is determined as the target object.
  • the communication interface 401 is further configured to acquire first attribute information of each reference object, where the first attribute information includes motion information and/or volume information of each reference object;
  • When the processor 402 performs the determining of the target object from the plurality of reference objects, the processor 402 is specifically configured to calculate the risk coefficient of each reference object according to the first attribute information of the respective reference objects and the distance value to each reference object, and determine, as the target object, the reference object with the highest risk coefficient among the plurality of reference objects.
  • the communication interface 401 is further configured to acquire location information corresponding to the heads-up display device, acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object,
  • the second attribute information includes any one or more of status information, intensity information, quality information, and value information of each reference object;
  • the processor 402 performs the determining of the target object from the plurality of reference objects, specifically, determining, according to the second attribute information of the respective reference objects, an evaluation value of each reference object; The reference object having the largest evaluation value among the reference objects is determined as the target object.
  • The processor 402 is further configured to determine whether the distance value to the target object is less than a preset distance value; if yes, perform the generating of the first focusing instruction; if not, generate a second focusing instruction, where the second focusing instruction is used to instruct the processor to adjust the projection focal length according to the preset distance value.
  • the communication interface 401 is further configured to acquire direction sensing data for the target object.
  • the processor 402 is further configured to determine an observation field of view of the target object according to the direction sensing data; and if the content displayed by the head display device is within the observation field of view, control the communication interface 401 Perform the acquiring scene sensing data.
  • The processor 402 is further configured to determine an observation field of view of the target object according to the direction sensing data; if the content displayed by the head-up display device is within the observation field of view, perform the determining of the observation direction of the target object according to the direction sensing data.
  • scenario sensing data, the direction sensing data, the environment information, the location information, and the first attribute information described in the embodiments of the present invention may be collected by corresponding sensors in the movable platform, and the communication interface 401 Obtained from the mobile platform.
  • the processor 402 may be a central processing unit (CPU), and the processor 402 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, and the like.
  • the general purpose processor may be a microprocessor or the processor 402 or any conventional processor or the like.
  • The communication interface 401 and the processor 402 described in this embodiment of the present invention may implement the focusing method shown in FIG. 2 or FIG. 3 of the present application; descriptions of the same parts are not repeated here.
  • In the embodiments of the present invention, the projection focal length of the head-up display device is adjusted according to the distance value of the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content while observing the target object without switching the focus point, which improves driving comfort and driving safety.
  • FIG. 5 is a schematic structural diagram of another focusing device according to an embodiment of the present invention.
  • the focusing device is disposed in a head-up display device for projecting display related data according to a projection focal length.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the focusing device 50 can include one or more processors 501 and one or more communication interfaces 502. Wherein the one or more processors 501 can work individually or in concert.
  • The processor 501 and the communication interface 502 may be connected by, but are not limited to, a bus 503.
  • the processor 501 is configured to acquire scene sensing data, and determine a plurality of reference objects according to the scene sensing data; determine a target object from the plurality of reference objects; generate a first focusing instruction, where The first focus command is used to instruct the processor to adjust a projection focus according to the distance value to the target object.
  • the processor 501 is further configured to acquire direction sensing data for the target object, and determine an observation direction of the target object according to the direction sensing data;
  • the processor 501 is configured to determine a target object from a plurality of reference objects located in the observation direction when the determining the target object from the plurality of reference objects.
  • the processor 501 is further configured to determine a distance value to each reference object according to the scene sensing data.
  • the scene sensing data includes image sensing data and distance sensing data
  • the processor 501 is configured to: when determining, according to the scene sensing data, a plurality of reference objects, specifically for identifying a plurality of reference objects according to the image sensing data;
  • the processor 501 is configured to determine a distance value to each reference object according to the distance sensing data when performing the determining a distance value to each reference object according to the scene sensing data.
  • the processor 501 is configured to determine a reference object corresponding to the minimum distance value as the target object when the determining the target object from the plurality of reference objects.
  • When performing the determining of the target object from the plurality of reference objects, the processor 501 is configured to: generate identification information of each reference object, where the identification information is used to uniquely identify each reference object; acquire environment information, where the environment information includes any one or more of weather information, location information corresponding to the head-up display device, and motion speed; input the identification information of the respective reference objects, the environment information, and the distance values to the respective reference objects into the preset recognition model; and determine, as the target object, the reference object identified by the identification information output by the preset recognition model.
  • the processor 501 is configured to obtain first attribute information of each reference object when the target object is determined from the plurality of reference objects, where the first attribute information includes each reference object. Motion information and/or volume information; calculating a risk coefficient of each reference object according to the first attribute information of the respective reference objects and the distance value to each reference object; and having the highest risk coefficient among the plurality of reference objects The reference object is determined as the target object.
  • the processor 501 is further configured to acquire location information corresponding to the heads up display device;
  • the communication interface 502 is configured to acquire map data corresponding to the location information, where the map data includes second attribute information of each reference object, and the second attribute information includes status information, intensity information, and quality of each reference object. Any one or more of information and value information;
  • When the processor 501 performs the determining of the target object from the plurality of reference objects, the processor 501 is specifically configured to calculate the evaluation value of each reference object according to the second attribute information of the respective reference objects, and determine, as the target object, the reference object with the largest evaluation value among the reference objects.
  • The processor 501 is further configured to determine whether the distance value to the target object is less than a preset distance value; if yes, perform the generating of the first focusing instruction; if not, generate a second focusing instruction, where the second focusing instruction is used to instruct the processor to adjust the projection focal length according to the preset distance value.
  • The processor 501 is further configured to acquire direction sensing data for the target object, and determine an observation field of view of the target object according to the direction sensing data; if the content displayed by the head-up display device is within the observation field of view, perform the acquiring of scene sensing data and the determining of a plurality of reference objects according to the scene sensing data.
  • The processor 501 is further configured to determine an observation field of view of the target object according to the direction sensing data; if the content displayed by the head-up display device is within the observation field of view, perform the determining of the observation direction of the target object according to the direction sensing data.
  • The processor 501 in this embodiment of the present invention may be the processor described in the foregoing embodiments.
  • The processor 501 and the communication interface 502 described in this embodiment of the present invention may implement the focusing method shown in FIG. 2 or FIG. 3 of the present application; descriptions of the same parts are not repeated here.
  • In the embodiments of the present invention, the projection focal length of the head-up display device is adjusted according to the distance value of the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content while observing the target object without switching the focus point, which improves driving comfort and driving safety.
  • FIG. 6 is a schematic structural diagram of a head-up display device according to an embodiment of the present invention.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • the head-up display device 60 may include the focusing device 40 and the projection device 601 shown in FIG. 4 of the present application.
  • the focusing device 40 and the projection device 601 can be connected by a bus 602.
  • The head-up display device 60 may further include components not shown in FIG. 6, such as a power system, a visual sensor (for example, a binocular vision sensor or a monocular vision sensor), a distance sensor, an image sensor, a weather sensor (for example, a temperature sensor or a humidity sensor), a positioning device, a speed sensor (for example, a linear speed sensor), a motion sensor, a volume sensor (for example, an ultrasonic volume sensor), and so on.
  • the projection device 601 is configured to project and display related data according to the adjusted projection focal length.
  • The projection device 601 may include a projection module and a mirror surface, where the mirror surface may be a separate partially transparent lens.
  • Alternatively, the projection device 601 may include only a projection module, in which case the front windshield of the automobile or the like may serve as the mirror surface.
  • In the embodiments of the present invention, the projection focal length of the head-up display device is adjusted according to the distance value of the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content while observing the target object without switching the focus point, which improves driving comfort and driving safety.
  • An embodiment of the present invention further provides a computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by the processor 402 shown in FIG. 4 of the present application, cause the processor 402 to perform the focusing method shown in FIG. 2 or FIG. 3 of the present application.
  • The computer-readable storage medium may be an internal storage unit of the movable platform described herein, such as a hard disk or a memory of the movable platform.
  • The computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card, or a flash card equipped on the movable platform.
  • The computer-readable storage medium may also include both an internal storage unit and an external storage device of the movable platform.
  • The computer-readable storage medium is configured to store the computer program and other programs and data required by the movable platform.
  • the computer readable storage medium can also be used to temporarily store data that has been output or is about to be output.
  • FIG. 7 is a schematic structural diagram of another head-up display device according to an embodiment of the present invention.
  • the head-up display device refers to a head-up display that is applied to a movable platform such as an airplane or a car.
  • The head-up display device 70 may include: a scene sensor 701, a direction sensor 702, a weather sensor 703, a positioning device 704, a speed sensor 705, a motion sensor 706, a volume sensor 707, a projection device 708, and the focusing device 50 shown in FIG. 5 of the present application.
  • The scene sensor 701, the direction sensor 702, the weather sensor 703, the positioning device 704, the speed sensor 705, the motion sensor 706, the volume sensor 707, the projection device 708, and the focusing device 50 may be connected by, but are not limited to, a bus 709.
  • The head-up display device 70 may also include a power supply system and the like not shown in FIG. 7.
  • the scene sensor 701 is configured to collect scene sensing data.
  • the scene sensor 701 may be, for example, a binocular vision sensor.
  • the scene sensing data includes image sensing data and distance sensing data.
  • the scene sensor 701 may include a monocular vision sensor and a distance sensor for acquiring image sensing data and distance sensing data, respectively.
  • the distance sensor may include but is not limited to a lidar sensor, a millimeter wave sensor, and an ultrasonic radar sensor.
  • the direction sensor 702 is configured to collect direction sensing data.
  • the direction sensing data is eyeball image data of the target object.
  • the direction sensor 702 can be, for example, an image sensor.
  • the weather sensor 703 is configured to collect weather information.
  • the weather information includes temperature sensing data and humidity sensing data.
  • the weather sensor 703 may include a temperature sensor and a humidity sensor for acquiring temperature sensing data and humidity sensing data, respectively.
  • the positioning device 704 is configured to collect location information corresponding to the heads up display device.
  • the location information corresponding to the heads up display device may be GPS positioning data.
  • the speed sensor 705 is configured to collect a motion speed corresponding to the head display device.
  • the speed sensor 705 can be, for example, a line speed sensor.
  • the motion sensor 706 is configured to collect motion information of each reference object.
  • the motion information of each reference object may include, but is not limited to, a motion trajectory, a motion direction, a motion speed, and the like of each reference object.
  • the volume sensor 707 is configured to collect volume information of each reference object.
  • the volume sensor 707 can be, for example, an ultrasonic volume sensor.
  • The focusing device 50 shown in FIG. 5 of the present application can acquire the data collected by the scene sensor 701, the direction sensor 702, the weather sensor 703, the positioning device 704, the speed sensor 705, the motion sensor 706, and the volume sensor 707, and perform the focusing method shown in FIG. 2 or FIG. 3 of the present application.
  • The projection device 708 is configured to project and display the related data according to the adjusted projection focal length.
  • The projection device 708 may include a projection module and a mirror surface, where the mirror surface may be a separate partially transparent lens.
  • Alternatively, the projection device 708 may include only a projection module, in which case the front windshield of the automobile or the like may serve as the mirror surface.
  • In the embodiments of the present invention, the projection focal length of the head-up display device is adjusted according to the distance value of the target object observed by the driver, thereby realizing adaptive adjustment of the projection focal length and improving the overlap between the projected content and the target object, so that the driver can view the projected content while observing the target object without switching the focus point, which improves driving comfort and driving safety.
  • An embodiment of the present invention further provides a computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by the processor 501 shown in FIG. 5 of the present application, cause the processor 501 to perform the focusing method shown in FIG. 2 or FIG. 3 of the present application.
  • the computer readable storage medium in the embodiments of the present invention may be the computer readable storage medium described in the foregoing embodiments.


Abstract

一种调焦方法、装置及抬头显示设备,其中,方法包括:获取场景感测数据,并根据场景感测数据确定多个参考物体;从多个参考物体中确定出目标物体;生成第一调焦指令,第一调焦指令用于指示抬头显示设备(102)根据到目标物体的距离值调节投射焦距。可以实现对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性。

Description

调焦方法、装置及抬头显示设备 技术领域
本发明涉及投影显示技术领域,尤其涉及一种调焦方法、装置及抬头显示设备。
背景技术
抬头显示器(Head Up Display,HUD)是运用在飞机、汽车等可移动平台上的行驶辅助仪器。抬头显示器利用光学反射的原理,将重要的行驶相关信息投射在一片玻璃上面,这片玻璃位于驾驶座前端,大致与驾驶员的眼睛在同一水平线,因此驾驶员不需要低头就能查看行驶相关信息,提高了行驶安全性。
但是,抬头显示器一般采用固定聚焦平面的设计,即采用固定的投射成像距离。也就是说,在任何情况下驾驶员看到的投射内容(如行驶相关信息)都在一个固定距离上。由于驾驶员的目光注意力和焦距依据不同场景而聚焦于不同距离的物体上,因此上述固定聚焦平面的设计使得驾驶员不能很好地观察投射内容与实际物体。
发明内容
本发明实施例提供了一种调焦方法、装置及抬头显示设备,可以实现对投射焦距的自适应调节。
第一方面,本发明实施例提供了一种调焦方法,应用于抬头显示设备,所述抬头显示设备用于根据投射焦距投射显示相关数据,所述方法包括:
获取场景感测数据,并根据所述场景感测数据确定多个参考物体;
从所述多个参考物体中确定出目标物体;
生成第一调焦指令,所述第一调焦指令用于指示所述抬头显示设备根据所述到所述目标物体的距离值调节投射焦距。
第二方面,本发明实施例提供了一种调焦装置,设置于抬头显示设备中,所述抬头显示设备用于根据投射焦距投射显示相关数据,所述装置包括:
通讯接口,用于获取场景感测数据;
处理器,用于根据所述场景感测数据确定多个参考物体;从所述多个参考物体中确定出目标物体;生成第一调焦指令,所述第一调焦指令用于指示所述处理器根据所述到所述目标物体的距离值调节投射焦距。
第三方面,本发明实施例提供了一种抬头显示设备,所述抬头显示设备包括:
投影装置,用于根据投射焦距投射显示相关数据;以及,
上述第二方面的调焦装置。
第四方面,本发明实施例提供了一种计算机可读存储介质,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时使所述处理器执行上述第一方面的调焦方法。
本发明实施例根据目标物体的距离值调节投射焦距,可以实现对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的一种调焦方法的应用场景示意图;
图2是本发明实施例提供的一种调焦方法的流程示意图;
图3是本发明实施例提供的另一种调焦方法的流程示意图;
图4是本发明实施例提供的一种调焦装置的结构示意图;
图5是本发明实施例提供的另一种调焦装置的结构示意图;
图6是本发明实施例提供的一种抬头显示设备的结构示意图;
图7是本发明实施例提供的另一种抬头显示设备的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。在不冲突的情况下,下述实施例或实施方法中的特征可以任意组合。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
抬头显示器基于投影技术将需要显示的内容投射到驾驶员前方的玻璃上。其中,所述投影技术为液晶显示(Liquid Crystal Display,LCD)技术、液晶附硅(Liquid Crystal on Silicon,LCOS)技术或数字光处理(Digital Light Processing,DLP)技术中的任意一种。
抬头显示器的系统构成一般包括:应用处理器、投射模组和反射镜面。其中,所述应用处理器用于生成投射内容;所述投射模组用于将图像的电子信号转换为投射光,其中所述投射模组包括一个多镜片的棱镜系统,用于将投射内容聚焦于一个虚拟平面上,具体的聚焦平面(或称为成像平面)由所述投射模组的光学系统决定;所述反射镜面一般为独立的部分透明镜片或汽车前挡风玻璃,用于反射投射内容到驾驶员的眼睛。
现有的抬头显示器一般采用固定聚焦平面的设计,即在任何情况下驾驶员看到的投射内容(如行驶相关信息)都在一个固定距离上。目前,按照不同的系统设计聚焦平面一般在距离驾驶员两米到十几米的前方。由于驾驶员的目光注意力和焦距依据不同场景(如慢速行驶场景和快速行驶场景)而聚焦于不同距离的物体上,因此上述固定聚焦平面的设计会导致驾驶员无法将投射内容与实际物体进行有效的重叠观察,需要反复切换聚焦点以在投射内容与实际物体之间进行切换,降低了驾驶舒适性和行驶安全性。
另有少量的抬头显示器采用了双系统的设计,即采用两套应用处理器与投射模组。其中,两套系统采用不同的聚焦平面设计。一般来讲,其中一套系统聚焦于较近处,另一套系统则聚焦于较远处。这种双系统的设计较上述采用固定聚焦平面的设计虽然有所改进,但是仍然不能彻底解决在不同场景下驾驶员需要反复切换聚焦点的问题。
为了解决上述在不同场景下驾驶员需要反复切换聚焦点的问题,本发明实施例提供一种调焦方法。其中,所述调焦方法应用于抬头显示设备,所述抬头显示设备用于根据投射焦距投射显示投射内容。在本发明实施例中,所述抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。
在本发明实施例中,所述抬头显示设备的系统构成包括处理器(或称为应用处理器)、投射模组以及反射镜面。其中,所述处理器用于生成投射内容。所述投射模组用于将图像的电子信号转换为投射光,其中所述投射模组包括一个多镜片的棱镜系统,用于将投射内容聚焦于一个虚拟平面上,具体的聚焦平面由所述投射模组的光学系统决定。在本发明实施例中,所述投射模组的光学系统为变焦光学系统,可以在软件的控制下将投射内容按照设置投射到不同的聚焦平面。所述反射镜面用于反射投射内容到驾驶员的眼睛。所述反射镜面可以为独立的部分透明镜片,也可以为汽车前挡风玻璃。
在本发明实施例中,所述抬头显示设备可以采用瞳孔追踪技术(或称为眼球追踪技术)对驾驶员当前的观察方向进行识别。具体地,所述抬头显示设备可以获取驾驶员的眼球图像数据,根据对所述眼球图像数据的处理来识别驾驶员的眼球瞳孔里的特征,并通过这些特征实时地反算出驾驶员当前的观察方向。
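下面用一小段 Python 示意上述"由瞳孔特征反算观察方向"的思路:以瞳孔中心相对眼球中心的偏移做一个线性映射并归一化为方向向量。该映射方式、函数名与参数均为假设,仅帮助理解,实际系统需要标定,并非本申请的具体实现。

```python
def estimate_gaze_direction(pupil_center, eye_center, scale=1.0):
    """根据瞳孔中心相对于眼球中心的偏移,粗略反算观察方向(单位向量)。
    这里的线性映射仅为示意,真实系统需要针对驾驶员进行标定。"""
    dx = (pupil_center[0] - eye_center[0]) * scale
    dy = (pupil_center[1] - eye_center[1]) * scale
    # 假设 z 轴指向驾驶员正前方,偏移为零时观察方向即为正前方
    norm = (dx * dx + dy * dy + 1.0) ** 0.5
    return (dx / norm, dy / norm, 1.0 / norm)
```

例如,瞳孔中心与眼球中心重合时,该函数返回正前方向量 (0, 0, 1)。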
作为一种可选的实施方式,所述抬头显示设备的系统构成还可以包括摄像头视觉系统,其中所述摄像头视觉系统可以包括图像传感器。在这种情形下,所述图像数据可以是直接由所述抬头显示设备中的图像传感器采集的。
作为另一种可选的实施方式,所述图像数据也可以是由可移动平台中的图像传感器采集,所述抬头显示设备从所述可移动平台中获取的。
在本发明实施例中,所述抬头显示设备可以采用车辆感知技术对车辆前方的场景做出分析,确定多个参考物体以及到各个参考物体的距离值。具体地,所述抬头显示设备可以获取场景感测数据,并根据所述场景感测数据确定多个参考物体以及到各个参考物体的距离值。
需要说明的是,所述场景感测数据包括图像感测数据和距离感测数据。其中,所述图像感测数据和所述距离感测数据可以是均由双目视觉传感器采集到的,也可以是分别由单目视觉传感器和距离传感器采集到的。其中,所述距离传感器可以包括但不限于激光雷达传感器、毫米波传感器和超声波雷达传感器。
作为一种可选的实施方式,所述抬头显示设备的系统构成还可以包括感知系统,其中所述感知系统可以包括双目视觉传感器和/或单目视觉传感器以及距离传感器。在这种情形下,所述场景感测数据可以是直接由所述抬头显示设备中的双目视觉传感器(或单目视觉传感器以及距离传感器)采集到的。
作为另一种可选的实施方式,所述场景感测数据也可以是由可移动平台中的双目视觉传感器(或单目视觉传感器以及距离传感器)采集,所述抬头显示设备从所述可移动平台中获取的。
进一步地,所述抬头显示设备可以从位于所述观察方向上的多个参考物体中确定出目标物体。
作为一种可选的实施方式,所述目标物体可以为位于所述观察方向上的最近参考物体。具体地,所述抬头显示设备可以将位于所述观察方向上的多个参考物体中最小距离值对应的参考物体确定为目标物体。
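上述"将最小距离值对应的参考物体确定为目标物体"的逻辑可以用如下 Python 片段示意,其中的数据结构(标识与距离值的二元组列表)为假设,仅用于说明:

```python
def select_nearest_target(objects_in_view):
    """objects_in_view: [(标识, 距离值), ...],均为位于观察方向上的参考物体。
    返回距离值最小的参考物体的标识;列表为空时返回 None。"""
    if not objects_in_view:
        return None
    return min(objects_in_view, key=lambda item: item[1])[0]
```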
作为另一种可选的实施方式,所述抬头显示设备可以基于深度学习确定目标物体。具体地,所述抬头显示设备可以将地理位置、天气、车速、识别出的参考物体和距离值作为输入数据加以收集,然后标注最需要关注的目标物体,以此选取某种神经网络模型(如Alexnet卷积神经网络模型)并采用深度学习对该种神经网络模型进行训练,后续利用此网络训练的结果协助快速确定目标物体。
作为另一种可选的实施方式,所述抬头显示设备可以基于参考物体的危险系数(或称为危险度)确定目标物体。具体地,所述抬头显示设备可以将位于所述观察方向上的多个参考物体中危险系数最高的参考物体确定为目标物体。其中,所述参考物体的危险系数可以根据到参考物体的距离值、参考物体的运动轨迹、参考物体的运动方向、参考物体的运动速度、参考物体的体积等确定。
作为另一种可选的实施方式,所述抬头显示设备可以结合高精度地图,根据参考物体的属性信息确定目标物体。其中,所述高精度地图中标识有不同参考物体的属性信息,所述参考物体的属性信息包括但不限于:参考物体是否为可移动物体、参考物体的刚性强度、参考物体的质量、参考物体的价值等等。所述抬头显示设备可以对这些属性信息进行评分,根据预先设置的不同属性信息的权重值,所述抬头显示设备可以进一步对各个属性信息的评分进行加权得到总分(即参考物体的评价值)。在这种情形下,所述抬头显示设备可以将位于所述观察方向上的多个参考物体中评价值最高的参考物体确定为目标物体。
进一步地,所述抬头显示设备可以控制其变焦光学系统根据到所述目标物体的距离值执行对焦(即调节投射焦距),以使所述抬头显示设备显示的投射内容与所述目标物体大致位于相同的距离。在一个具体的实施例中,到所述目标物体的距离值指的是采集距离感测数据的传感器到所述目标物体的距离值。
作为一种可选的实施方式,所述抬头显示设备还可以判断到所述目标物体的距离值是否超过预设距离值(或称为超焦点);如果是,则所述抬头显示设备控制其变焦光学系统根据所述预设距离值执行对焦。
进一步地,所述抬头显示设备还可以根据调节后的投射焦距投射显示所述处理器生成的投射内容。
请参见图1,图1为本发明实施例提供的一种调焦方法的应用场景示意图。如图1所示,所示应用场景中包括汽车101、安装在所述汽车101中的抬头显示设备102、位于所述汽车101前方的石头103、行人104以及柱子105。
首先,所述抬头显示设备102可以采用瞳孔追踪技术识别出所述汽车101中的驾驶员(图1未示出)当前的观察方向v。
进一步地,所述抬头显示设备102可以采用车辆感知技术对所述汽车101前方的场景做出分析,确定出所述石头103以及所述汽车101到所述石头103的距离值d1、所述行人104以及所述汽车101到所述行人104的距离值d2,以及所述柱子105以及所述汽车101到所述柱子105的距离值d3。
进一步地,所述抬头显示设备102可以确定出所述行人104和所述柱子105位于所述观察方向v上。
进一步地,所述抬头显示设备102可以从位于所述观察方向v上的所述行人104和所述柱子105中确定出目标物体。
作为一种可选的实施方式,所述抬头显示设备102可以将位于所述观察方向v上的所述行人104和所述柱子105中距离所述汽车101最近者确定为目标物体。具体地,所述抬头显示设备102可以比较所述距离值d2和所述距离值d3,由于所述距离值d2小于所述距离值d3,因此所述抬头显示设备102可以将所述行人104确定为目标物体。
进一步地,所述抬头显示设备102可以根据所述汽车101到所述行人104的距离值d2执行对焦(即调节投射焦距),以使所述抬头显示设备102显示的投射内容(以行驶相关信息为例)与所述行人104大致位于相同的距离,从而提高了行驶相关信息与目标物体的重叠性。
进一步地,所述抬头显示设备102还可以根据调节后的投射焦距投射显示行驶相关信息。作为一种可选的实施方式,所述行驶相关信息显示在所述汽车101中位于驾驶员前方的前挡风玻璃上。由于所述行驶相关信息与所述行人104大致位于相同的距离,因此驾驶员在观察所述行人104的同时无需切换聚焦点就能查看所述行驶相关信息,即驾驶员可以对所述行人104和所述行驶相关信息进行有效的重叠观察。
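图1所示的场景可以用如下 Python 片段做一个数值演算示意:先筛选出位于观察方向v上的参考物体,再取距离最小者为目标物体,并以其距离作为调节后的投射焦距。其中的距离数值与字段名均为假设,仅用于说明流程:

```python
# 图1场景的数值示意:石头103不在观察方向v上,行人104与柱子105在v上
objects = [
    {"id": "stone_103",      "distance": 8.0,  "in_view_dir": False},
    {"id": "pedestrian_104", "distance": 15.0, "in_view_dir": True},   # d2(假设值)
    {"id": "pillar_105",     "distance": 25.0, "in_view_dir": True},   # d3(假设值)
]

# 第一步:只保留位于观察方向v上的参考物体
candidates = [o for o in objects if o["in_view_dir"]]
# 第二步:取距离值最小者为目标物体(d2 < d3,故为行人104)
target = min(candidates, key=lambda o: o["distance"])
# 第三步:以到目标物体的距离值作为调节后的投射焦距
projection_focus = target["distance"]
```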
综上所述,本发明实施例提供的调焦方法根据到目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。下面结合图2至图7,对本发明实施例的调焦方法、装置及抬头显示设备进行详细的描述。
请参见图2,是本发明实施例提供的一种调焦方法的流程示意图。具体地,本发明实施例的调焦方法应用于抬头显示设备,所述抬头显示设备用于根据投射焦距投射显示相关数据。如图2所示,所述调焦方法可以包括:
S201:获取场景感测数据,并根据所述场景感测数据确定多个参考物体。
需要说明的是,本发明实施例所述的抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。其中,所述可移动平台由驾驶员驾驶,所述抬头显示设备基于投影技术将相关数据(即需要显示的内容)投射到驾驶员前方的玻璃上,所述玻璃再将显示的内容(即投射内容)反射到驾驶员的眼睛。其中,所述玻璃可以是所述抬头显示设备中独立的部分透明镜片,也可以是汽车前挡风玻璃。
其中,所述相关数据例如可以是诸如瞬时行驶速度、平均行驶速度、发动机转速、怠速油耗、平均油耗、行驶里程、外部环境温度、导航地图等行驶相关信息。
其中,所述场景感测数据可以是由双目视觉传感器采集到的,也可以是由单目视觉传感器和距离传感器协同工作采集到的。其中,所述距离传感器可以包括但不限于激光雷达传感器、毫米波传感器和超声波雷达传感器。
需要说明的是,所述抬头显示设备还可以根据所述场景感测数据确定到各个参考物体的距离值。其中,到各个参考物体的距离值可以是可移动平台到各个参考物体的距离值,也可以是抬头显示设备到各个参考物体的距离值,还可以是采集所述场景感测数据的传感器到各个参考物体的距离值。
具体实现中,所述场景感测数据可以包括图像感测数据和距离感测数据。在这种情形下,所述抬头显示设备执行所述根据所述场景感测数据确定多个参考物体时可以具体包括:根据所述图像感测数据确定多个参考物体;所述抬头显示设备执行所述根据所述场景感测数据确定到各个参考物体的距离值可以具体包括:根据所述距离感测数据确定到各个参考物体的距离值。
其中,所述图像感测数据和所述距离感测数据可以是均由双目视觉传感器采集到的,也可以是分别由单目视觉传感器和距离传感器采集到的。
作为一种可选的实施方式,所述场景感测数据可以是直接由所述抬头显示设备中的双目视觉传感器(或单目视觉传感器以及距离传感器)采集到的。
作为另一种可选的实施方式,所述场景感测数据也可以是由可移动平台中的双目视觉传感器(或单目视觉传感器以及距离传感器)采集,所述抬头显示设备通过通讯接口从所述可移动平台中获取的。
其中,所述参考物体指的是所述抬头显示设备根据所述场景感测数据对可移动平台前方的场景进行分析而确定出的位于所述可移动平台外部的物体。可以理解的是,本发明实施例所述的物体指的是位于可移动平台外部的有生命的物体(如人类、动物等)和没有生命的物体(如石头、栏杆等)的总称。
需要说明的是,在获取所述场景感测数据之前,所述抬头显示设备还可以获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述获取场景感测数据,并根据所述场景感测数据确定多个参考物体。
其中,所述目标对象为上述驾驶员。需要说明的是,本发明实施例所述的驾驶员指的是汽车驾驶员和飞行员的总称。
可以理解的是,当所述抬头显示设备所显示的内容不在所述目标对象的观察视野范围内时(如目标对象查看汽车后视镜时),所述抬头显示设备可以不执行本发明实施例提供的调焦方法。在这种情形下,所述抬头显示设备可以根据预设的投射焦距投射显示相关数据,也可以根据当前的投射焦距投射显示相关数据。
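上述"仅当显示内容位于观察视野范围内才执行调焦流程"的判断可以用如下 Python 片段示意。这里以显示内容方位角与观察方位角之差近似视野范围判断,函数名、角度表示与阈值均为假设:

```python
def should_refocus(display_azimuth_deg, gaze_azimuth_deg, half_fov_deg=30.0):
    """判断抬头显示内容是否落在目标对象的观察视野范围内。
    half_fov_deg 为观察视野的半角(假设值 30 度);
    返回 True 时才继续执行获取场景感测数据、确定目标物体等调焦步骤。"""
    return abs(display_azimuth_deg - gaze_azimuth_deg) <= half_fov_deg
```

当返回 False 时(如目标对象查看后视镜),可按预设或当前投射焦距继续显示,不触发调焦。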
S202:从所述多个参考物体中确定出目标物体。
其中,所述目标物体指的是所述目标对象观察的物体。
作为一种可选的实施方式,所述抬头显示设备可以将最小距离值对应的参考物体确定为目标物体。在这种情形下,到所述目标物体的距离值小于到其他参考物体的距离值,即所述目标物体为最近的参考物体。
作为另一种可选的实施方式,所述抬头显示设备可以生成各个参考物体的标识信息,所述标识信息用于唯一地标识各个参考物体;获取环境信息,所述环境信息包括天气信息以及所述抬头显示设备对应的位置信息和运动速度的任意一种或多种;将所述各个参考物体的标识信息、所述环境信息以及所述到各个参考物体的距离值输入到预设识别模型中,并将所述预设识别模型输出的标识信息所标识的参考物体确定为目标物体。
其中,所述天气信息可以包括但不限于温度感测数据、湿度感测数据等等。所述天气信息可以是直接由所述抬头显示设备中的气象传感器(如温度传感器、湿度传感器等)采集到的,也可以是由可移动平台中的气象传感器采集,所述抬头显示设备通过通讯接口从所述可移动平台获取的。
其中,所述抬头显示设备对应的位置信息可以是全球定位系统(Global Positioning System,GPS)定位数据。所述抬头显示设备对应的位置信息可以是直接由所述抬头显示设备中的定位装置采集到的,也可以是由可移动平台中的定位装置采集,所述抬头显示设备通过通讯接口从所述可移动平台获取的。
可以理解的是,所述抬头显示设备对应的运动速度即是可移动平台的行驶速度。其中,所述抬头显示设备对应的运动速度可以是直接由所述抬头显示设备中的速度传感器(如线速度传感器)采集到的,也可以是由可移动平台中的速度传感器采集,所述抬头显示设备通过通讯接口从所述可移动平台获取的。
其中,所述预设识别模型可以为采用深度学习进行训练得到的神经网络模型(如Alexnet卷积神经网络模型)。
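将各参考物体的标识信息、环境信息与距离值输入预设识别模型、并以模型输出的标识确定目标物体的流程,可以用如下 Python 片段示意。其中 model 的调用接口为假设;示例中用一个按最小距离选取的占位函数代替真实的神经网络模型:

```python
def pick_target_with_model(model, object_ids, distances, environment):
    """将标识信息、环境信息与到各参考物体的距离值打包为模型输入,
    并以模型输出的标识确定目标物体。特征编码方式仅为示意。"""
    inputs = [
        {"id": oid, "distance": d, **environment}
        for oid, d in zip(object_ids, distances)
    ]
    return model(inputs)   # 模型输出最需要关注的参考物体的标识

# 占位模型:用最小距离规则代替真实训练出的神经网络(仅为演示接口)
dummy_model = lambda inputs: min(inputs, key=lambda x: x["distance"])["id"]
```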
作为另一种可选的实施方式,所述抬头显示设备可以获取各个参考物体的第一属性信息,所述第一属性信息包括各个参考物体的运动信息和/或体积信息;根据所述各个参考物体的第一属性信息以及所述到各个参考物体的距离值,计算各个参考物体的危险系数;将所述多个参考物体中危险系数最高的参考物体确定为目标物体。
其中,所述各个参考物体的运动信息可以包括但不限于各个参考物体的运动轨迹、运动方向、运动速度等等。所述各个参考物体的运动信息可以是直接由所述抬头显示设备中的运动传感器采集到的,也可以是由可移动平台中的运动传感器采集,所述抬头显示设备通过通讯接口从所述可移动平台获取的。
其中,所述各个参考物体的体积信息可以是直接由所述抬头显示设备中的体积传感器采集到的,也可以是由可移动平台中的体积传感器采集,所述抬头显示设备通过通讯接口从所述可移动平台获取的。其中,所述体积传感器例如可以是超声波体积传感器。
具体实现中,所述抬头显示设备执行所述根据所述各个参考物体的第一属性信息以及所述到各个参考物体的距离值,计算各个参考物体的危险系数可以具体包括:根据预先设置的等级划分规则,确定各个参考物体的各项第一属性信息的危险等级,并确定到各个参考物体的距离值的危险等级;对各个参考物体的各项第一属性信息的危险等级以及到各个参考物体的距离值的危险等级进行加和,得到各个参考物体的危险系数。
可以理解的是,在本发明实施例中,参考物体的运动轨迹越不规则时危险等级越高;参考物体的运动方向为朝所述抬头显示设备运动时的危险等级高于参考物体的运动方向为背离所述抬头显示设备运动时的危险等级;参考物体的运动速度越快时危险等级越高;到参考物体的距离值越小时危险等级越高。
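上述"按预设等级划分规则确定各项危险等级,再加和得到危险系数"的计算可以用如下 Python 片段示意。其中的等级划分阈值均为假设,并非本申请规定的具体数值:

```python
def danger_level_distance(distance):
    # 到参考物体的距离值越小,危险等级越高(阈值为假设)
    if distance < 10.0:
        return 3
    if distance < 30.0:
        return 2
    return 1

def danger_level_speed(speed):
    # 参考物体运动速度越快,危险等级越高(阈值为假设)
    if speed > 10.0:
        return 3
    if speed > 3.0:
        return 2
    return 1

def danger_coefficient(distance, speed, approaching):
    """对距离、速度、运动方向的危险等级加和,得到参考物体的危险系数。"""
    level_direction = 2 if approaching else 1  # 朝设备运动的危险等级更高
    return danger_level_distance(distance) + danger_level_speed(speed) + level_direction
```

确定目标物体时,即取多个参考物体中该危险系数最高者。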
作为另一种可选的实施方式,所述抬头显示设备可以获取所述抬头显示设备对应的位置信息;获取所述位置信息对应的地图数据,所述地图数据包括各个参考物体的第二属性信息,所述第二属性信息包括各个参考物体的状态信息、强度信息、质量信息和价值信息中的任意一种或多种;根据所述各个参考物体的第二属性信息,计算各个参考物体的评价值;将所述多个参考物体中评价值最大的参考物体确定为目标物体。
在一个具体的实施例中,所述抬头显示设备中可以预先存储有各个位置信息对应的地图数据。从而,所述抬头显示设备可以查询并获取所述抬头显示设备对应的位置信息对应的地图数据。
在另一个具体的实施例中,可移动平台中可以预先存储有各个位置信息对应的地图数据,所述抬头显示设备可以通过通讯接口从所述可移动平台中获取所述抬头显示设备对应的位置信息对应的地图数据。
在另一个具体的实施例中,所述抬头显示设备可以通过有线连接或无线连接从服务器中获取所述抬头显示设备对应的位置信息对应的地图数据。
在本发明实施例中,参考物体的状态信息可以包括固定状态或移动状态。其中,参考物体的状态信息为固定状态指的是参考物体为不可移动物体,参考物体的状态信息为移动状态指的是参考物体为可移动物体。
在本发明实施例中,参考物体的强度信息用于表征参考物体的刚性强度。
具体实现中,所述抬头显示设备执行所述根据所述各个参考物体的第二属性信息,计算各个参考物体的评价值可以具体包括:根据预先设置的评分规则,对各个参考物体的各项第二属性信息进行评分;根据预先设置的各项第二属性信息的权重值,对各个参考物体的各项第二属性信息的评分进行加权,得到各个参考物体的评价值。
可以理解的是,在本发明实施例中,参考物体的移动信息为移动状态时的评分高于参考物体的移动信息为固定状态时的评分;参考物体的强度越大时评分越高;参考物体的质量越大时评分越高;参考物体的价值越高时评分越高。
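上述"先按评分规则对各项第二属性信息评分、再按预设权重加权得到评价值"的计算可以用如下 Python 片段示意,其中的评分规则与权重数值均为假设:

```python
def evaluation_value(attrs, weights):
    """attrs: 参考物体的第二属性信息(状态、强度、质量、价值);
    weights: 预先设置的各项属性权重。评分规则仅为示意。"""
    scores = {
        "state":    2.0 if attrs["state"] == "movable" else 1.0,  # 可移动物体评分更高
        "strength": attrs["strength"],   # 刚性强度越大评分越高
        "mass":     attrs["mass"],       # 质量越大评分越高
        "value":    attrs["value"],      # 价值越高评分越高
    }
    return sum(weights[k] * scores[k] for k in weights)
```

确定目标物体时,即取多个参考物体中该评价值最大者。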
S203:生成第一调焦指令。
其中,所述第一调焦指令用于指示所述抬头显示设备根据所述到所述目标物体的距离值调节投射焦距,以使所述抬头显示设备显示的投射内容与所述目标物体大致位于相同的距离。
具体实现中,所述抬头显示设备可以将各个参考物体的标识与到各个参考物体的距离值进行关联存储。当所述抬头显示设备确定出目标物体之后,可以根据所述目标物体的标识查询到所述目标物体的距离值。
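上述"将各个参考物体的标识与距离值关联存储、确定目标物体后按标识查询距离值"可以用一个简单的 Python 字典结构示意(仅为示意,非本申请规定的存储方式):

```python
# 标识 -> 距离值 的关联存储表
distance_table = {}

def store_distance(object_id, distance):
    """感知阶段:将参考物体的标识与到该物体的距离值关联存储。"""
    distance_table[object_id] = distance

def query_distance(object_id):
    """确定目标物体后:按标识查询到该目标物体的距离值;未知标识返回 None。"""
    return distance_table.get(object_id)
```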
进一步地,所述抬头显示设备还可以根据调节后的投射焦距投射显示相关数据。
在本发明实施例中,根据到驾驶员观察的目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
进一步地,请参见图3,是本发明实施例提供的另一种调焦方法的流程示意图。具体地,本发明实施例的调焦方法应用于抬头显示设备,所述抬头显示设备用于根据投射焦距投射显示相关数据。在图2所示的实施例的基础上,如图3所示,所述调焦方法可以包括:
S301:获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察方向。
需要说明的是,本发明实施例所述的抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。其中,所述可移动平台由驾驶员驾驶,所述抬头显示设备基于投影技术将相关数据(即需要显示的内容)投射到驾驶员前方的玻璃上,所述玻璃再将显示的内容(即投射内容)反射到驾驶员的眼睛。其中,所述玻璃可以是所述抬头显示设备中独立的部分透明镜片,也可以是汽车前挡风玻璃。
其中,所述相关数据例如可以是诸如瞬时行驶速度、平均行驶速度、发动机转速、怠速油耗、平均油耗、行驶里程、外部环境温度、导航地图等行驶相关信息。
其中,所述目标对象为上述驾驶员。需要说明的是,本发明实施例所述的驾驶员指的是汽车驾驶员和飞行员的总称。
其中,所述方向感测数据可以用于确定所述目标对象的观察方向。在一个具体的实施例中,所述抬头显示设备采用瞳孔追踪技术(或称为眼球追踪技术)对所述目标对象的观察方向进行识别。在这种情形下,所述对目标对象的方向感测数据可以具体为所述目标对象的眼球图像数据。所述抬头显示设备执行所述获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察方向可以具体包括:获取所述目标对象的眼球图像数据;根据对所述眼球图像数据的处理来识别所述目标对象的眼球瞳孔里的特征,并通过这些特征实时地反算出所述目标对象的观察方向。
其中,所述方向感测数据还可以用于确定所述目标对象的观察视野范围。作为一种可选的实施方式,在所述抬头显示设备执行所述获取所述目标对象的方向感测数据之后,所述抬头显示设备可以根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述根据所述方向感测数据确定所述目标对象的观察方向。
可以理解的是,当所述抬头显示设备所显示的内容不在所述目标对象的观察视野范围内时(如目标对象查看汽车后视镜时),所述抬头显示设备可以不执行本发明实施例提供的调焦方法。在这种情形下,所述抬头显示设备可以根据预设的投射焦距投射显示相关数据,也可以根据当前的投射焦距投射显示相关数据。
S302:获取场景感测数据,并根据所述场景感测数据确定多个参考物体。
需要说明的是,所述抬头显示设备还可以根据所述场景感测数据确定到各个参考物体的距离值。其中,到各个参考物体的距离值可以是采集所述场景感测数据的传感器到各个参考物体的距离值,或者是可移动平台到各个参考物体的距离值,还可以是所述抬头显示设备到各个参考物体的距离值。
具体实现中,所述场景感测数据可以包括图像感测数据和距离感测数据。在这种情形下,所述抬头显示设备执行所述根据所述场景感测数据确定多个参考物体时可以具体包括:根据所述图像感测数据确定多个参考物体;所述抬头显示设备执行所述根据所述场景感测数据确定到各个参考物体的距离值可以具体包括:根据所述距离感测数据确定到各个参考物体的距离值。
其中,所述图像感测数据和所述距离感测数据可以是由同一个传感器采集到的,也可以是分别由不同的传感器采集到的。
其中,所述参考物体指的是所述抬头显示设备根据所述场景感测数据对可移动平台前方的场景进行分析而确定出的位于所述可移动平台外部的物体。
S303:从位于所述观察方向上的多个参考物体中确定出目标物体。
其中,所述目标物体指的是所述目标对象观察的物体。
作为一种可选的实施方式,所述抬头显示设备可以将位于所述观察方向上的多个参考物体中最小距离值对应的参考物体确定为目标物体。
作为另一种可选的实施方式,所述抬头显示设备可以生成位于所述观察方向上的各个参考物体的标识信息,所述标识信息用于唯一地标识位于所述观察方向上的各个参考物体;获取环境信息,所述环境信息包括天气信息以及所述抬头显示设备对应的位置信息和运动速度的任意一种或多种;将所述位于所述观察方向上的各个参考物体的标识信息、所述环境信息以及所述到位于所述观察方向上的各个参考物体的距离值输入到预设识别模型中,并将所述预设识别模型输出的标识信息所标识的参考物体确定为目标物体。
作为另一种可选的实施方式,所述抬头显示设备可以获取位于所述观察方向上的各个参考物体的第一属性信息,所述第一属性信息包括位于所述观察方向上的各个参考物体的运动信息和/或体积信息;根据所述位于所述观察方向上的各个参考物体的第一属性信息以及所述到位于所述观察方向上的各个参考物体的距离值,计算位于所述观察方向上的各个参考物体的危险系数;将位于所述观察方向上的多个参考物体中危险系数最高的参考物体确定为目标物体。
作为另一种可选的实施方式,所述抬头显示设备可以获取所述抬头显示设备对应的位置信息;获取所述位置信息对应的地图数据,所述地图数据包括位于所述观察方向上的各个参考物体的第二属性信息,所述第二属性信息包括位于所述观察方向上的各个参考物体的状态信息、强度信息、质量信息和价值信息中的任意一种或多种;根据所述位于所述观察方向上的各个参考物体的第二属性信息,计算位于所述观察方向上的各个参考物体的评价值;将位于所述观察方向上的多个参考物体中评价值最大的参考物体确定为目标物体。
其中,上述四种可选的实施方式的具体技术细节可以参考本申请图2所示的调焦方法的步骤S202,在此不再赘述。
需要说明的是,当位于所述观察方向上的参考物体仅包括单个参考物体时,所述抬头显示设备可以将所述参考物体直接确定为目标物体。
S304:判断到所述目标物体的距离值是否小于预设距离值。
具体实现中,所述抬头显示设备可以将各个参考物体的标识与到各个参考物体的距离值进行关联存储。当所述抬头显示设备确定出目标物体之后,可以根据所述目标物体的标识查询到所述目标物体的距离值。
其中,所述预设距离值即为所述抬头显示设备预先设置的超焦点。
S305:生成第一调焦指令。
其中,所述第一调焦指令用于指示所述抬头显示设备根据所述到所述目标物体的距离值调节投射焦距,以使所述抬头显示设备显示的投射内容与所述目标物体大致位于相同的距离。
S306:生成第二调焦指令。
其中,所述第二调焦指令用于指示所述抬头显示设备根据所述预设距离值调节投射焦距。
在本发明实施例中,所述抬头显示设备可以对超出所述预设距离值以外的目标物体执行一个最大对焦距离。
需要说明的是,当到所述目标物体的距离值等于所述预设距离值时,所述抬头显示设备可以生成所述第一调焦指令;或者,当到所述目标物体的距离值等于所述预设距离值时,所述抬头显示设备可以生成所述第二调焦指令。
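步骤S304至S306的分支逻辑可以用如下 Python 片段示意:距离值小于预设距离值(超焦点)时生成第一调焦指令,否则生成第二调焦指令。其中超焦点取 50 米仅为假设值,指令的表示形式也仅为示意:

```python
def make_focus_command(target_distance, hyperfocal=50.0):
    """S304: 判断到目标物体的距离值是否小于预设距离值(超焦点,假设 50 米)。
    S305: 小于时,按到目标物体的距离值生成第一调焦指令。
    S306: 否则,按预设距离值生成第二调焦指令(即执行一个最大对焦距离)。
    本示例将距离值恰好等于超焦点的情形归入第二调焦指令;如正文所述,等于时归入
    第一调焦指令同样可行。"""
    if target_distance < hyperfocal:
        return ("FOCUS_CMD_1", target_distance)
    return ("FOCUS_CMD_2", hyperfocal)
```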
进一步地,所述抬头显示设备还可以根据调节后的投射焦距投射显示相关数据。
在本发明实施例中,根据到驾驶员观察的目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
请参见图4,是本发明实施例提供的一种调焦装置的结构示意图。所述调焦装置设置于抬头显示设备中,所述抬头显示设备用于根据投射焦距投射显示相关数据。其中,所述抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。如图4所示,所述调焦装置40可以包括:一个或多个通讯接口401以及一个或多个处理器402。其中,所述一个或多个处理器402可以单独地或协同地工作。所述通讯接口401和所述处理器402可以通过但不限于通过总线403连接。
所述通讯接口401,用于获取场景感测数据;
所述处理器402,用于根据所述场景感测数据确定多个参考物体;从所述多个参考物体中确定出目标物体;生成第一调焦指令,所述第一调焦指令用于指示所述处理器根据所述到所述目标物体的距离值调节投射焦距。
可选地,所述通讯接口401,还用于获取对目标对象的方向感测数据;
所述处理器402,还用于根据所述方向感测数据确定所述目标对象的观察方向;
所述处理器402执行所述从所述多个参考物体中确定出目标物体时,具体用于从位于所述观察方向上的多个参考物体中确定出目标物体。
其中,所述处理器402,还用于根据所述场景感测数据确定到各个参考物体的距离值。
可选地,所述场景感测数据包括图像感测数据和距离感测数据;
所述处理器402执行所述根据所述场景感测数据确定多个参考物体时,具体用于根据所述图像感测数据识别多个参考物体;
所述处理器402执行所述根据所述场景感测数据确定到各个参考物体的距离值时,具体用于根据所述距离感测数据确定到各个参考物体的距离值。
可选地,所述处理器402执行所述从所述多个参考物体中确定出目标物体时,具体用于将最小距离值对应的参考物体确定为目标物体。
可选地,所述通讯接口401,还用于获取环境信息,所述环境信息包括天气信息以及所述抬头显示设备对应的位置信息和运动速度的任意一种或多种;
所述处理器402执行所述从所述多个参考物体中确定出目标物体时,具体用于生成各个参考物体的标识信息,所述标识信息用于唯一地标识各个参考物体;将所述各个参考物体的标识信息、所述环境信息以及所述到各个参考物体的距离值输入到预设识别模型中,并将所述预设识别模型输出的标识信息所标识的参考物体确定为目标物体。
可选地,所述通讯接口401,还用于获取各个参考物体的第一属性信息,所述第一属性信息包括各个参考物体的运动信息和/或体积信息;
所述处理器402执行所述从所述多个参考物体中确定出目标物体时,具体用于根据所述各个参考物体的第一属性信息以及所述到各个参考物体的距离值,计算各个参考物体的危险系数;将所述多个参考物体中危险系数最高的参考物体确定为目标物体。
可选地,所述通讯接口401,还用于获取所述抬头显示设备对应的位置信息;获取所述位置信息对应的地图数据,所述地图数据包括各个参考物体的第二属性信息,所述第二属性信息包括各个参考物体的状态信息、强度信息、质量信息和价值信息中的任意一种或多种;
所述处理器402执行所述从所述多个参考物体中确定出目标物体时,具体用于根据所述各个参考物体的第二属性信息,计算各个参考物体的评价值;将所述多个参考物体中评价值最大的参考物体确定为目标物体。
可选地,所述处理器402,还用于判断到所述目标物体的距离值是否小于预设距离值;若是,则执行所述生成第一调焦指令;若否,则生成第二调焦指令,所述第二调焦指令用于指示所述处理器根据所述预设距离值调节投射焦距。
可选地,所述通讯接口401,还用于获取对目标对象的方向感测数据;
所述处理器402,还用于根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则控制所述通讯接口401执行所述获取场景感测数据。
可选地,在所述通讯接口401执行所述获取对目标对象的方向感测数据之后,所述处理器402,还用于根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述根据所述方向感测数据确定所述目标对象的观察方向。
需要说明的是,本发明实施例所描述的场景感测数据、方向感测数据、环境信息、位置信息以及第一属性信息可以是由可移动平台中相应的传感器采集到,所述通讯接口401从所述可移动平台中获取的。
应当理解,在本发明实施例中,所述处理器402可以是中央处理单元(Central Processing Unit,CPU),所述处理器402还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。所述通用处理器可以是微处理器或者所述处理器402也可以是任何常规的处理器等。
需要说明的是,本发明实施例中描述的通讯接口401和处理器402可执行本申请图2或图3所示的调焦方法的实现方式,具体技术细节可以参考本发明实施例方法的相关部分的描述,在此不再赘述。
在本发明实施例中,根据到驾驶员观察的目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
请参见图5,是本发明实施例提供的另一种调焦装置的结构示意图。所述调焦装置设置于抬头显示设备中,所述抬头显示设备用于根据投射焦距投射显示相关数据。其中,所述抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。如图5所示,所述调焦装置50可以包括:一个或多个处理器501以及一个或多个通讯接口502。其中,所述一个或多个处理器501可以单独地或协同地工作。所述处理器501和所述通讯接口502可以通过但不限于通过总线503连接。
所述处理器501,用于获取场景感测数据,并根据所述场景感测数据确定多个参考物体;从所述多个参考物体中确定出目标物体;生成第一调焦指令,所述第一调焦指令用于指示所述处理器根据所述到所述目标物体的距离值调节投射焦距。
可选地,所述处理器501,还用于获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察方向;
所述处理器501执行所述从所述多个参考物体中确定出目标物体时,具体用于从位于所述观察方向上的多个参考物体中确定出目标物体。
其中,所述处理器501,还用于根据所述场景感测数据确定到各个参考物体的距离值。
可选地,所述场景感测数据包括图像感测数据和距离感测数据;
所述处理器501执行所述根据所述场景感测数据确定多个参考物体时,具体用于根据所述图像感测数据识别多个参考物体;
所述处理器501执行所述根据所述场景感测数据确定到各个参考物体的距离值时,具体用于根据所述距离感测数据确定到各个参考物体的距离值。
可选地,所述处理器501执行所述从所述多个参考物体中确定出目标物体时,具体用于将最小距离值对应的参考物体确定为目标物体。
可选地,所述处理器501执行所述从所述多个参考物体中确定出目标物体时,具体用于生成各个参考物体的标识信息,所述标识信息用于唯一地标识各个参考物体;获取环境信息,所述环境信息包括天气信息以及所述抬头显示设备对应的位置信息和运动速度的任意一种或多种;将所述各个参考物体的标识信息、所述环境信息以及所述到各个参考物体的距离值输入到预设识别模型中,并将所述预设识别模型输出的标识信息所标识的参考物体确定为目标物体。
可选地,所述处理器501执行所述从所述多个参考物体中确定出目标物体时,具体用于获取各个参考物体的第一属性信息,所述第一属性信息包括各个参考物体的运动信息和/或体积信息;根据所述各个参考物体的第一属性信息以及所述到各个参考物体的距离值,计算各个参考物体的危险系数;将所述多个参考物体中危险系数最高的参考物体确定为目标物体。
可选地,所述处理器501,还用于获取所述抬头显示设备对应的位置信息;
所述通讯接口502,用于获取所述位置信息对应的地图数据,所述地图数据包括各个参考物体的第二属性信息,所述第二属性信息包括各个参考物体的状态信息、强度信息、质量信息和价值信息中的任意一种或多种;
所述处理器501执行所述从所述多个参考物体中确定出目标物体时,具体用于根据所述各个参考物体的第二属性信息,计算各个参考物体的评价值;将所述多个参考物体中评价值最大的参考物体确定为目标物体。
可选地,所述处理器501,还用于判断到所述目标物体的距离值是否小于预设距离值;若是,则执行所述生成第一调焦指令;若否,则生成第二调焦指令,所述第二调焦指令用于指示所述处理器根据所述预设距离值调节投射焦距。
可选地,所述处理器501,还用于获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述获取场景感测数据,并根据所述场景感测数据确定多个参考物体。
可选地,在执行所述获取对目标对象的方向感测数据之后,所述处理器501,还用于根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述根据所述方向感测数据确定所述目标对象的观察方向。
需要说明的是,本发明实施例中的处理器501可以是前述实施例所描述的处理器。
需要说明的是,本发明实施例中描述的处理器501和通讯接口502可执行本申请图2或图3所示的调焦方法的实现方式,具体技术细节可以参考本发明实施例方法的相关部分的描述,在此不再赘述。
在本发明实施例中,根据到驾驶员观察的目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
请参见图6,图6是本发明实施例提供的一种抬头显示设备的结构示意图。其中,所述抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。如图6所示,所述抬头显示设备60可以包括:本申请图4所示的调焦装置40以及投影装置601。其中,所述调焦装置40和所述投影装置601可以通过但不限于通过总线602连接。
其中,所述抬头显示设备60还可以包括未在图6中示出的电源系统、视觉传感器(如双目视觉传感器、单目视觉传感器)、距离传感器、图像传感器、气象传感器(如温度传感器、湿度传感器等)、定位装置、速度传感器(如线速度传感器)、运动传感器、体积传感器(如超声波体积传感器)等等。
其中,所述投影装置601,用于根据调节后的投射焦距投射显示相关数据。
作为一种可选的实施方式,所述投影装置601可以包括投射模组和反射镜面,其中所述反射镜面可以为独立的部分透明镜片。
作为另一种可选的实施方式,所述投影装置601可以仅包括投射模组。在该实施方式中,汽车前挡风玻璃等可以用作反射镜面。
在本发明实施例中,根据到驾驶员观察的目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
在本发明的实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序中包括程序指令,所述程序指令当被本申请图4所示的处理器402调用时使所述处理器402执行本申请图2或图3所示的调焦方法。
所述计算机可读存储介质可以是本申请所述的可移动平台的内部存储单元,例如所述可移动平台的硬盘或内存。所述计算机可读存储介质也可以是所述可移动平台的外部存储设备,例如所述可移动平台上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述计算机可读存储介质还可以既包括所述可移动平台的内部存储单元也包括外部存储设备。所述计算机可读存储介质用于存储所述计算机程序以及所述可移动平台所需的其他程序和数据。所述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
请参见图7,图7是本发明实施例提供的另一种抬头显示设备的结构示意图。其中,所述抬头显示设备指的是运用在飞机、汽车等可移动平台上的抬头显示器。如图7所示,所述抬头显示设备70可以包括:场景传感器701、方向传感器702、气象传感器703、定位装置704、速度传感器705、运动传感器706、体积传感器707、投影装置708以及如本申请图5所示的调焦装置50。其中,所述场景传感器701、方向传感器702、气象传感器703、定位装置704、速度传感器705、运动传感器706、体积传感器707、投影装置708以及如本申请图5所示的调焦装置50可以通过但不限于通过总线709连接。所述抬头显示设备70还可以包括未在图7中示出的电源系统等等。
所述场景传感器701,用于采集场景感测数据。
作为一种可选的实施方式,所述场景传感器701例如可以是双目视觉传感器。
可选地,所述场景感测数据包括图像感测数据和距离感测数据。作为另一种可选的实施方式,所述场景传感器701可以包括单目视觉传感器和距离传感器,分别用于采集图像感测数据和距离感测数据。其中,所述距离传感器可以包括但不限于激光雷达传感器、毫米波传感器和超声波雷达传感器。
所述方向传感器702,用于采集方向感测数据。
可选地,所述方向感测数据为所述目标对象的眼球图像数据。作为一种可选的实施方式,所述方向传感器702例如可以是图像传感器。
所述气象传感器703,用于采集天气信息。
可选地,所述天气信息包括温度感测数据和湿度感测数据。作为一种可选的实施方式,所述气象传感器703可以包括温度传感器和湿度传感器,分别用于采集温度感测数据和湿度感测数据。
所述定位装置704,用于采集所述抬头显示设备对应的位置信息。可选地,所述抬头显示设备对应的位置信息可以是GPS定位数据。
所述速度传感器705,用于采集所述抬头显示设备对应的运动速度。作为一种可选的实施方式,所述速度传感器705例如可以是线速度传感器。
所述运动传感器706,用于采集各个参考物体的运动信息。
可选地,所述各个参考物体的运动信息可以包括但不限于各个参考物体的运动轨迹、运动方向、运动速度等等。
所述体积传感器707,用于采集各个参考物体的体积信息。作为一种可选的实施方式,所述体积传感器707例如可以是超声波体积传感器。
本申请图5所示的调焦装置50可获取上述场景传感器701、方向传感器702、气象传感器703、定位装置704、速度传感器705、运动传感器706以及体积传感器707采集到的数据,并执行本申请图2或图3所示的调焦方法的实施方式。
所述投影装置708,用于根据调节后的投射焦距投射显示相关数据。
作为一种可选的实施方式,所述投影装置708可以包括投射模组和反射镜面,其中所述反射镜面可以为独立的部分透明镜片。
作为另一种可选的实施方式,所述投影装置708可以仅包括投射模组。在该实施方式中,汽车前挡风玻璃等可以用作反射镜面。
在本发明实施例中,根据到驾驶员观察的目标物体的距离值对抬头显示设备的投射焦距进行调节,实现了对投射焦距的自适应调节,提高了投射内容与目标物体的重叠性,使得驾驶员在观察目标物体的同时无需切换聚焦点就能查看投射内容,从而提高了驾驶舒适性和行驶安全性。
在本发明的实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序中包括程序指令,所述程序指令当被本申请图5所示的处理器501调用时使所述处理器501执行本申请图2或图3所示的调焦方法。
需要说明的是,本发明实施例中的计算机可读存储介质可以是前述实施例所描述的计算机可读存储介质。
以上所述,仅为本发明的部分实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (24)

  1. 一种调焦方法,其特征在于,应用于抬头显示设备,所述抬头显示设备用于根据投射焦距投射显示相关数据,所述方法包括:
    获取场景感测数据,并根据所述场景感测数据确定多个参考物体;
    从所述多个参考物体中确定出目标物体;
    生成第一调焦指令,所述第一调焦指令用于指示所述抬头显示设备根据所述到所述目标物体的距离值调节投射焦距。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察方向;
    所述从所述多个参考物体中确定出目标物体,包括:
    从位于所述观察方向上的多个参考物体中确定出目标物体。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    根据所述场景感测数据确定到各个参考物体的距离值。
  4. 根据权利要求3所述的方法,其特征在于,所述场景感测数据包括图像感测数据和距离感测数据;
    所述根据所述场景感测数据确定多个参考物体,包括:
    根据所述图像感测数据识别多个参考物体;
    所述根据所述场景感测数据确定到各个参考物体的距离值,包括:
    根据所述距离感测数据确定到各个参考物体的距离值。
  5. 根据权利要求3所述的方法,其特征在于,所述从所述多个参考物体中确定出目标物体,包括:
    将最小距离值对应的参考物体确定为目标物体。
  6. 根据权利要求3所述的方法,其特征在于,所述从所述多个参考物体中确定出目标物体,包括:
    生成各个参考物体的标识信息,所述标识信息用于唯一地标识各个参考物体;
    获取环境信息,所述环境信息包括天气信息以及所述抬头显示设备对应的位置信息和运动速度的任意一种或多种;
    将所述各个参考物体的标识信息、所述环境信息以及所述到各个参考物体的距离值输入到预设识别模型中,并将所述预设识别模型输出的标识信息所标识的参考物体确定为目标物体。
  7. 根据权利要求3所述的方法,其特征在于,所述从所述多个参考物体中确定出目标物体,包括:
    获取各个参考物体的第一属性信息,所述第一属性信息包括各个参考物体的运动信息和/或体积信息;
    根据所述各个参考物体的第一属性信息以及所述到各个参考物体的距离值,计算各个参考物体的危险系数;
    将所述多个参考物体中危险系数最高的参考物体确定为目标物体。
  8. 根据权利要求3所述的方法,其特征在于,所述从所述多个参考物体中确定出目标物体,包括:
    获取所述抬头显示设备对应的位置信息;
    获取所述位置信息对应的地图数据,所述地图数据包括各个参考物体的第二属性信息,所述第二属性信息包括各个参考物体的状态信息、强度信息、质量信息和价值信息中的任意一种或多种;
    根据所述各个参考物体的第二属性信息,计算各个参考物体的评价值;
    将所述多个参考物体中评价值最大的参考物体确定为目标物体。
  9. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    判断到所述目标物体的距离值是否小于预设距离值;
    若是,则执行所述生成第一调焦指令;
    若否,则生成第二调焦指令,所述第二调焦指令用于指示所述抬头显示设备根据所述预设距离值调节投射焦距。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取对目标对象的方向感测数据,并根据所述方向感测数据确定所述目标对象的观察视野范围;
    如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述获取场景感测数据,并根据所述场景感测数据确定多个参考物体。
  11. 根据权利要求2所述的方法,其特征在于,在所述获取对目标对象的方向感测数据之后,所述方法还包括:
    根据所述方向感测数据确定所述目标对象的观察视野范围;
    如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述根据所述方向感测数据确定所述目标对象的观察方向。
  12. 一种调焦装置,其特征在于,设置于抬头显示设备中,所述抬头显示设备用于根据投射焦距投射显示相关数据,所述装置包括:
    通讯接口,用于获取场景感测数据;
    处理器,用于根据所述场景感测数据确定多个参考物体;从所述多个参考物体中确定出目标物体;生成第一调焦指令,所述第一调焦指令用于指示所述处理器根据所述到所述目标物体的距离值调节投射焦距。
  13. 根据权利要求12所述的装置,其特征在于,
    所述通讯接口,还用于获取对目标对象的方向感测数据;
    所述处理器,还用于根据所述方向感测数据确定所述目标对象的观察方向;
    所述处理器执行所述从所述多个参考物体中确定出目标物体时,具体用于从位于所述观察方向上的多个参考物体中确定出目标物体。
  14. 根据权利要求12或13所述的装置,其特征在于,
    所述处理器,还用于根据所述场景感测数据确定到各个参考物体的距离值。
  15. 根据权利要求14所述的装置,其特征在于,所述场景感测数据包括图像感测数据和距离感测数据;
    所述处理器执行所述根据所述场景感测数据确定多个参考物体时,具体用于根据所述图像感测数据识别多个参考物体;
    所述处理器执行所述根据所述场景感测数据确定到各个参考物体的距离值时,具体用于根据所述距离感测数据确定到各个参考物体的距离值。
  16. 根据权利要求14所述的装置,其特征在于,
    所述处理器执行所述从所述多个参考物体中确定出目标物体时,具体用于将最小距离值对应的参考物体确定为目标物体。
  17. 根据权利要求14所述的装置,其特征在于,
    所述通讯接口,还用于获取环境信息,所述环境信息包括天气信息以及所述抬头显示设备对应的位置信息和运动速度的任意一种或多种;
    所述处理器执行所述从所述多个参考物体中确定出目标物体时,具体用于生成各个参考物体的标识信息,所述标识信息用于唯一地标识各个参考物体;将所述各个参考物体的标识信息、所述环境信息以及所述到各个参考物体的距离值输入到预设识别模型中,并将所述预设识别模型输出的标识信息所标识的参考物体确定为目标物体。
  18. 根据权利要求14所述的装置,其特征在于,
    所述通讯接口,还用于获取各个参考物体的第一属性信息,所述第一属性信息包括各个参考物体的运动信息和/或体积信息;
    所述处理器执行所述从所述多个参考物体中确定出目标物体时,具体用于根据所述各个参考物体的第一属性信息以及所述到各个参考物体的距离值,计算各个参考物体的危险系数;将所述多个参考物体中危险系数最高的参考物体确定为目标物体。
  19. 根据权利要求14所述的装置,其特征在于,
    所述通讯接口,还用于获取所述抬头显示设备对应的位置信息;获取所述位置信息对应的地图数据,所述地图数据包括各个参考物体的第二属性信息,所述第二属性信息包括各个参考物体的状态信息、强度信息、质量信息和价值信息中的任意一种或多种;
    所述处理器执行所述从所述多个参考物体中确定出目标物体时,具体用于根据所述各个参考物体的第二属性信息,计算各个参考物体的评价值;将所述多个参考物体中评价值最大的参考物体确定为目标物体。
  20. 根据权利要求12或13所述的装置,其特征在于,
    所述处理器,还用于判断到所述目标物体的距离值是否小于预设距离值;若是,则执行所述生成第一调焦指令;若否,则生成第二调焦指令,所述第二调焦指令用于指示所述处理器根据所述预设距离值调节投射焦距。
  21. 根据权利要求12所述的装置,其特征在于,
    所述通讯接口,还用于获取对目标对象的方向感测数据;
    所述处理器,还用于根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则控制所述通讯接口执行所述获取场景感测数据。
  22. 根据权利要求13所述的装置,其特征在于,
    在所述通讯接口执行所述获取对目标对象的方向感测数据之后,所述处理器,还用于根据所述方向感测数据确定所述目标对象的观察视野范围;如果所述抬头显示设备所显示的内容在所述观察视野范围内,则执行所述根据所述方向感测数据确定所述目标对象的观察方向。
  23. 一种抬头显示设备,其特征在于,所述设备包括:
    投影装置,用于根据投射焦距投射显示相关数据;以及,
    如权利要求12至22任一项所述的调焦装置。
  24. 一种计算机可读存储介质,其特征在于,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器调用时使所述处理器执行如权利要求1至11任一项所述的调焦方法。
PCT/CN2017/119431 2017-12-28 2017-12-28 调焦方法、装置及抬头显示设备 WO2019127224A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/119431 WO2019127224A1 (zh) 2017-12-28 2017-12-28 调焦方法、装置及抬头显示设备
CN201780023179.4A CN109076201A (zh) 2017-12-28 2017-12-28 调焦方法、装置及抬头显示设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119431 WO2019127224A1 (zh) 2017-12-28 2017-12-28 调焦方法、装置及抬头显示设备

Publications (1)

Publication Number Publication Date
WO2019127224A1 true WO2019127224A1 (zh) 2019-07-04

Family

ID=64812375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119431 WO2019127224A1 (zh) 2017-12-28 2017-12-28 调焦方法、装置及抬头显示设备

Country Status (2)

Country Link
CN (1) CN109076201A (zh)
WO (1) WO2019127224A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415370B (zh) * 2020-05-15 2023-06-06 华为技术有限公司 一种抬头显示装置、显示方法及显示系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015019567A1 (ja) * 2013-08-09 2015-02-12 株式会社デンソー 情報表示装置
CN104515531A (zh) * 2013-09-30 2015-04-15 本田技研工业株式会社 增强的3-维(3-d)导航
CN105008170A (zh) * 2013-02-22 2015-10-28 歌乐株式会社 车辆用平视显示器装置
CN105711511A (zh) * 2014-12-22 2016-06-29 罗伯特·博世有限公司 用于运行平视显示器的方法、显示设备、车辆
CN106454310A (zh) * 2015-08-13 2017-02-22 福特全球技术公司 用于增强车辆视觉性能的聚焦系统
JP2017056933A (ja) * 2015-09-18 2017-03-23 株式会社リコー 情報表示装置、情報提供システム、移動体装置、情報表示方法及びプログラム

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9472023B2 (en) * 2014-10-06 2016-10-18 Toyota Jidosha Kabushiki Kaisha Safety system for augmenting roadway objects on a heads-up display
CN104932104B (zh) * 2015-06-03 2017-08-04 青岛歌尔声学科技有限公司 一种可变焦光学系统及抬头显示系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105008170A (zh) * 2013-02-22 2015-10-28 歌乐株式会社 车辆用平视显示器装置
WO2015019567A1 (ja) * 2013-08-09 2015-02-12 株式会社デンソー 情報表示装置
CN104515531A (zh) * 2013-09-30 2015-04-15 本田技研工业株式会社 增强的3-维(3-d)导航
CN105711511A (zh) * 2014-12-22 2016-06-29 罗伯特·博世有限公司 用于运行平视显示器的方法、显示设备、车辆
CN106454310A (zh) * 2015-08-13 2017-02-22 福特全球技术公司 用于增强车辆视觉性能的聚焦系统
JP2017056933A (ja) * 2015-09-18 2017-03-23 株式会社リコー 情報表示装置、情報提供システム、移動体装置、情報表示方法及びプログラム

Also Published As

Publication number Publication date
CN109076201A (zh) 2018-12-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936286

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936286

Country of ref document: EP

Kind code of ref document: A1