CN113835230A - Display processing method and device for vehicle HUD, electronic equipment and medium - Google Patents


Info

Publication number
CN113835230A
CN113835230A
Authority
CN
China
Prior art keywords
information
current
display
eyeball
display content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111187693.6A
Other languages
Chinese (zh)
Inventor
胡晓健
朱鹤群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xianta Intelligent Technology Co Ltd
Original Assignee
Shanghai Xianta Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xianta Intelligent Technology Co Ltd filed Critical Shanghai Xianta Intelligent Technology Co Ltd
Priority to CN202111187693.6A priority Critical patent/CN113835230A/en
Publication of CN113835230A publication Critical patent/CN113835230A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Abstract

The invention provides a display processing method and device for a vehicle HUD, electronic equipment, and a medium. The display processing method of the vehicle HUD comprises the following steps: determining the current eyeball position, where the current eyeball position represents the eyeball position of the current driver in the vehicle; and controlling the position of the display content in the HUD based on the current eyeball position.

Description

Display processing method and device for vehicle HUD, electronic equipment and medium
Technical Field
The invention relates to the field of vehicles, in particular to a display processing method and device of a vehicle HUD, electronic equipment and a medium.
Background
A HUD, i.e., a Head-Up Display (also called a head-up display device or head-up display system), refers to a driver-centered, multifunctional instrument display that can be read without looking down from the road.
In the related art, the position of the display content in a HUD can be changed in response to an active user operation; however, this manual position-changing process imposes an operation burden on the user and makes accurate adjustment inconvenient.
Disclosure of Invention
The invention provides a display processing method and device for a vehicle HUD, electronic equipment, and a medium, aiming to solve the problems of operation burden and the inconvenience of accurate adjustment.
According to a first aspect of the present invention, there is provided a display processing method of a vehicle HUD, comprising:
determining the current eyeball position; the current eyeball position represents the eyeball position of the current driver in the vehicle;
and controlling the position of the display content in the HUD based on the current eyeball position.
Optionally, the determining the current eyeball position includes:
acquiring an in-vehicle image;
and positioning the eyeball of the current driver in the in-vehicle image, and determining the current eyeball position based on the positioning result.
Optionally, the controlling the position of the display content in the HUD based on the current eyeball position includes:
determining a target display position of the display content based on the current eyeball position and preset position reference information; the position reference information represents the corresponding relation between different eyeball positions and different display positions of the display content;
and controlling the display content in the HUD to be displayed at the target display position.
Optionally, the controlling the position of the display content in the HUD based on the current eyeball position includes:
determining position adjustment information of the display content based on the current eyeball position and a pre-stored previous eyeball position; the position adjustment information represents the position adjustment direction and the adjustment amount of the display content; the previous eyeball position represents the position of an eyeball when the position of the display content is adjusted last time;
adjusting a position of display content in the HUD based on the position adjustment information.
Optionally, before determining the current eyeball position, the method further includes:
identifying identity information of the current driver;
determining that the identity information of the current driver is different from the identity information of a previous driver, the previous driver being: the driver when the position of the display content was last controlled to change.
Optionally, the identifying of the identity information of the current driver includes at least one of:
acquiring pattern information of vein blood vessels in eyeballs of the current driver, and identifying identity information of the current driver based on the pattern information;
acquiring voice information of the current driver, and identifying identity information of the current driver based on the voice information;
and acquiring the face information of the current driver, and identifying the identity information of the current driver based on the face information.
Optionally, the display processing method further includes:
detecting whether the vehicle meets a preset active display condition during parking based on the driving information of the vehicle;
if the active display condition during parking is met, displaying specified display information by using the HUD of the vehicle so as to take the specified display information as the display content; the specified display information includes: a designated interactive guidance screen; the interactive guide picture is used for guiding the current driver to finish the specified neck action;
adaptively changing the position of the display content based on the direction of the specified neck action while the current driver performs the specified neck action.
According to a second aspect of the present invention, there is provided a display processing device for a vehicle HUD, comprising:
the eyeball position determining module is used for determining the current eyeball position; the current eyeball position represents the eyeball position of the current driver of the vehicle;
and the display position control module is used for controlling the position of the display content in the HUD based on the current eyeball position.
According to a third aspect of the invention, there is provided an electronic device comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the code in the memory to implement the method according to the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of the first aspect and its alternatives.
According to the display processing method and device, electronic equipment, and medium for the vehicle HUD provided above, once the current eyeball position is determined, the position of the display content in the HUD can be controlled based on it. Because the position control of the display content is based on the current eyeball position, the position of the display content in the HUD can be accurately adapted to the eyeballs of the current driver; the process requires no manual operation, and has a high degree of automation and high efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a first flowchart illustrating a method for HUD display processing of a vehicle according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating step S11 according to an embodiment of the present invention;
FIG. 3 is a first flowchart illustrating the step S12 according to an embodiment of the present invention;
FIG. 4 is a second flowchart illustrating the step S12 according to an embodiment of the present invention;
FIG. 5 is a second flowchart illustrating a HUD display processing method of a vehicle according to an embodiment of the present invention;
FIG. 6 is a third flowchart illustrating a method for HUD display processing of a vehicle according to an embodiment of the present invention;
FIG. 7 is a first block diagram illustrating the program modules of the HUD display processing device according to one embodiment of the present invention;
FIG. 8 is a second block diagram illustrating the program modules of the display processing device of the vehicle HUD according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The display processing method of the vehicle HUD provided by the embodiment of the invention can be applied to a vehicle-mounted terminal, and can also be applied to a server, a terminal and the like which can be communicated with the vehicle-mounted terminal.
Referring to fig. 1, an embodiment of the present invention provides a display processing method for a vehicle HUD, including:
s11: determining the current eyeball position;
s12: and controlling the position of the display content in the HUD based on the current eyeball position.
The current eyeball position may be understood as representing the eyeball position of the current driver in the vehicle, and may be any information describing that position: for example, position information (e.g., coordinates) of one or more points in a three-dimensional coordinate system, or of one or more points in a two-dimensional coordinate system.
The display content can be any content suitable for being displayed on the HUD and viewed there. For example, it may be at least one of meter information, navigation information, weather information, multimedia material, instant messaging information, a human-computer interaction screen, and the like; whatever information is displayed as the display content, it does not depart from the scope of the embodiments of the present invention.
In the above scheme, because the position control of the display content is based on the current eyeball position, the position of the display content in the HUD can be accurately matched to the eyeballs of the current driver; the process requires no manual operation, and has a high degree of automation and high efficiency.
In one embodiment, referring to fig. 2, step S11 may include:
s111: acquiring an in-vehicle image;
s112: and positioning the eyeball of the current driver in the in-vehicle image, and determining the current eyeball position based on the positioning result.
The in-vehicle image may be an image captured inside the vehicle and may include pixels of the current driver; it may be a photograph, a video frame, a moving image, or the like, and may be captured by an in-vehicle image capturing device or by another device.
The positioning result may include, for example, position information of the eyeball in the image coordinate system of the in-vehicle image, which may be any position information capable of characterizing the eyeball position. On this basis, the positioning result may be used directly as the current eyeball position, i.e., the position information in the image coordinate system (a two-dimensional coordinate system) serves as the current eyeball position; alternatively, the positioning result may be projected into a three-dimensional spatial coordinate system and the projected result used as the current eyeball position. Whichever manner is used, it does not depart from the scope of the embodiments of the present invention.
In this scheme, accurate positioning of the eyeballs is achieved based on the collected in-vehicle image, and the corresponding current eyeball position is obtained.
In addition, after the current eyeball position is determined, it can be stored for later use as the previous eyeball position in subsequent adjustments.
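As an illustration only, steps S111 and S112 can be sketched as follows, assuming an eye detector (for example a face-landmark model, which this document does not specify) has already returned a bounding box for the driver's eye region in the in-vehicle image; the box center is taken as the current eyeball position in the two-dimensional image coordinate system.

```python
# Hypothetical sketch: derive the current eyeball position (S112) from an
# eye bounding box located in the in-vehicle image (S111). The detector
# that produces the box is assumed, not specified by this document.

def eye_position_from_box(box):
    """Take the center of an eye bounding box (x, y, w, h), in image
    coordinates, as the current eyeball position."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)
```

The resulting two-dimensional position can then be used directly, or projected into a three-dimensional coordinate system as described above.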
In one embodiment, referring to fig. 3, step S12 may include:
s121: determining a target display position of the display content based on the current eyeball position and preset position reference information;
s122: and controlling the display content in the HUD to be displayed at the target display position.
The position reference information represents the corresponding relation between different eyeball positions and different display positions of the display content.
For example, which display position each area range of eyeball positions corresponds to may be calibrated in advance; then, in step S121, the display position corresponding to the area range in which the current eyeball position falls may be taken as the target display position.
For another example, the correspondence between the deviation of the eyeball position from a reference eyeball position and the display position may be calibrated in advance; this also represents a correspondence between eyeball positions and display positions and may serve as the position reference information. In step S121, the display position corresponding to the deviation of the current eyeball position from the reference eyeball position may then be taken as the target display position.
For another example, the correspondence between a first deviation (the deviation of the eyeball position from the reference eyeball position) and a second deviation (the deviation of the display position from a reference display position) may be calibrated in advance; this likewise represents a correspondence between eyeball positions and display positions and may serve as the position reference information. In step S121, the second deviation corresponding to the first deviation of the current eyeball position may be determined, and a display position determined from this second deviation and the reference display position may be taken as the target display position.
However step S121 is implemented, it does not depart from the scope of the embodiments of the present invention.
In the above scheme, accurate control of the position of the display content in the HUD can be achieved, fully meeting the user's need to observe the HUD.
In addition, in order to simplify the correspondence relationship and the corresponding processing steps thereof, the current eyeball position may adopt position information in a two-dimensional coordinate system (for example, position information in an image coordinate system).
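As a minimal sketch of step S121 using the first calibration approach (pre-calibrated area ranges of eyeball positions mapped to display positions): the region bounds and display coordinates below are invented example values, not values from this document.

```python
# Example calibration table: each pre-calibrated eyeball-position region
# (x_min, x_max, y_min, y_max) in image coordinates maps to one display
# position (u, v). All numbers are illustrative.
CALIBRATION = [
    ((0, 320, 0, 240), (10, 20)),
    ((320, 640, 0, 240), (30, 20)),
    ((0, 320, 240, 480), (10, 50)),
    ((320, 640, 240, 480), (30, 50)),
]

def target_display_position(eye_pos, calibration=CALIBRATION):
    """Return the display position whose calibrated region contains
    eye_pos, or None if the eye position falls outside all regions."""
    ex, ey = eye_pos
    for (x0, x1, y0, y1), display_pos in calibration:
        if x0 <= ex < x1 and y0 <= ey < y1:
            return display_pos
    return None
```

Using two-dimensional eye positions here keeps the correspondence table and the lookup simple, in line with the simplification noted above.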
In one embodiment, referring to fig. 4, step S12 may include:
s123: determining position adjustment information of the display content based on the current eyeball position and a pre-stored previous eyeball position;
s124: adjusting a position of display content in the HUD based on the position adjustment information.
The position adjustment information represents the position adjustment direction and the adjustment amount of the display content;
the previous eyeball position represents a position of an eyeball at a last time the position of the display content is adjusted.
Furthermore, in the above scheme, the result of the last adjustment (i.e., the current display position of the display content) and the basis of the last adjustment (i.e., the previous eyeball position) serve as references, realizing a relative adjustment. This simplifies the processing flow, balances the efficiency and accuracy of the display-position adjustment, and fully meets the user's need to observe the HUD.
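The relative adjustment of steps S123 and S124 can be sketched as below; the proportional gain linking eyeball displacement to display adjustment is an assumed calibration constant, not a value from this document.

```python
# Sketch: derive position adjustment information (direction via sign,
# amount via magnitude) from the offset between the current and the
# previously stored eyeball position, then apply it.

def position_adjustment(current_eye, previous_eye, gain=0.1):
    """Return (dx, dy), proportional to the eyeball displacement."""
    dx = (current_eye[0] - previous_eye[0]) * gain
    dy = (current_eye[1] - previous_eye[1]) * gain
    return (dx, dy)

def adjust_display(display_pos, adjustment):
    """Shift the current display position by the adjustment."""
    return (display_pos[0] + adjustment[0], display_pos[1] + adjustment[1])
```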
In one embodiment, referring to fig. 5, before step S11, the method may further include:
s13: identifying identity information of the current driver;
s14: determining that the identity information of the current driver is different from the identity information of the previous driver;
wherein, the last driver indicates: the driver when the position of the display content was last controlled to change.
Wherein identifying the identity information of the current driver includes at least one of:
acquiring pattern information of vein blood vessels in eyeballs of the current driver, and identifying identity information of the current driver based on the pattern information;
acquiring voice information of the current driver, and identifying identity information of the current driver based on the voice information;
and acquiring the face information of the current driver, and identifying the identity information of the current driver based on the face information.
Furthermore, based on this identity recognition, steps S11 and S12 may in the above scheme be performed only when the driver's identity changes, so that the processes of steps S11 and S12 are triggered automatically.
For example, after a driver gets into the vehicle, the driver's identity may be recognized; once steps S13 and S14 complete, the execution of steps S11 and S12 is triggered. Otherwise, if the driver's identity is unchanged, steps S11 and S12 need not be triggered. The whole process can be carried out automatically, without manual, active operation to trigger it.
In other examples, the processing of steps S11 and S12 may also be triggered actively in response to an operation by a user (e.g., the driver) on the in-vehicle terminal (or another terminal), so as to change the position of the display content.
In one embodiment, please refer to fig. 6, the display processing method may further include:
s15: detecting whether the vehicle meets a preset active display condition during parking based on the driving information of the vehicle;
s16: whether the parking time active display condition is met or not;
if so, then:
s17: displaying specified display information with the HUD of the vehicle to take the specified display information as the display content;
s18: adapting a location at which the display content is changed based on a direction of the specified neck action while the current driver performs the specified neck action.
The driving information may be any information describing the driving process, state, destination, route, etc. of the vehicle. For example, it may include state information of any operable object in the vehicle, such as gear information, whether the automatic parking function is on, the rotation direction of the steering wheel, whether the air conditioner is on, and the state of the turn signal; detection results from the vehicle's detection devices, such as vehicle speed, engine speed, images collected by the image capturing device, and information recognized from those images; and information the vehicle acquires from outside, such as road condition information, weather information, date information, and vehicle information.
Wherein the specified display information includes: a designated interactive guidance screen; the interactive guide picture is used for guiding the current driver to finish the specified neck action; in addition, at least one of the following may be included: multimedia material, instant messaging information, call request information, and the like.
The multimedia material can be, for example, video, picture, audio related information, article, link, etc.; in addition, the display of the multimedia material can be a process of calling the multimedia material from a local or network so as to display the multimedia material;
the instant messaging information can be short messages and also can be characters, pictures, videos, links and the like transmitted in instant messaging software;
the call request information may be any information describing a call request, such as identity information of a requesting party, time of request initiation, and the like, and the call request may be a call request, such as a missed call, a call in progress, and the like, or a call request generated based on instant messaging software;
the interactive guidance picture can be understood as: the screen for guiding the driver to complete the designated neck movement may be a dynamically changing screen or a static screen, for example, a dynamic screen for guiding the user to do shoulder and neck exercises, and in some examples, the interactive guidance screen may also dynamically change according to the movement of the driver, for example, when one screen is used to guide the driver to do shoulder and neck exercises, it may be identified whether the driver is performing or completing a certain movement, and then the interactive guidance screen changes to another screen after the movement is completed. In addition, the quality of the action performed by the driver can be evaluated at the same time.
Through the demonstration of above appointed demonstration information, HUD has still attached a function that shows appointed demonstration information on original function's basis, has richened HUD's function, has satisfied user's actual demand, for example alleviates driver fatigue, adjusts demand such as driver attention.
The parking active display condition can be understood as a condition for actively displaying specified display information when the vehicle parks, and only one parking active display condition can be configured, and various parking active display conditions can also be configured;
in one embodiment, the detecting whether the vehicle meets a preset active display-while-parking condition based on the driving information of the vehicle includes at least one of:
if the driving information represents first information, determining that the vehicle meets the active display condition during parking, wherein the first information represents: the vehicle is in a P gear or an automatic parking function is started; it can also be understood as a first parking active display condition, which characterizes a condition that the vehicle is in P gear or that an automatic parking function has been activated;
if the driving information represents the first information and the second information, determining that the vehicle meets the active display condition during parking, wherein the second information represents: before the first information is generated, the continuous driving parameter of the vehicle exceeds the corresponding parameter threshold, which can also be understood as a second active parking display condition, which represents a condition that the vehicle is in a P gear or has started the automatic parking function, and has been driven for a longer time (for example, 1 hour) before entering the P gear or starting the automatic parking function;
if the driving information represents the first information and the current time is within a specified time interval, it is determined that the vehicle satisfies the parking-time active display condition, which may also be understood as a third parking-time active display condition representing that the vehicle is in a P gear or has started an automatic parking function and is currently in a specified time interval (e.g., midnight).
The continuous driving parameters may include a continuous driving time and/or a continuous driving mileage, where the continuous driving time is the time the driver has been driving continuously without the engine being switched off, and the continuous driving mileage is the distance driven continuously without the engine being switched off.
In addition, the second and third active display conditions during parking are compatible: it can be detected whether either of the two is satisfied, and if one of them is satisfied, the subsequent steps can be executed.
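Putting the three conditions together, the following is a hedged sketch of step S15; the field names in the driving-information dictionary and the threshold values are illustrative assumptions, not values from this document.

```python
# Sketch of the parking-time active display conditions. The driving
# information is modeled as a plain dict; all keys and thresholds are
# assumed for illustration.

def first_condition(info):
    """Vehicle is in P gear or the automatic parking function is on."""
    return info.get("gear") == "P" or info.get("auto_parking_on", False)

def second_condition(info, drive_hours_threshold=1.0):
    """First condition, plus a long continuous drive beforehand."""
    return (first_condition(info)
            and info.get("continuous_drive_hours", 0) > drive_hours_threshold)

def third_condition(info, interval=(0, 5)):
    """First condition, plus the current time within a specified
    interval (e.g. around midnight)."""
    hour = info.get("current_hour")
    return (first_condition(info)
            and hour is not None
            and interval[0] <= hour < interval[1])

def meets_parking_display_condition(info):
    # The second and third conditions are compatible: satisfying either
    # one triggers the active display.
    return second_condition(info) or third_condition(info)
```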
In a specific example of steps S17 and S18, the specified neck action may be raising the head, turning left, turning right, lowering the head, etc. The direction of the head-raising action is upward, and the corresponding adaptive change is, for example, adjusting the position of the display content upward. The direction of the left-turn action is to the left, and the corresponding adaptive change is, for example, adjusting the position of the display content to the left or to the right (a left adjustment when the driver's orientation is the reference, or a right adjustment when the HUD's orientation is the reference). The direction of the right-turn action is to the right, and the corresponding adaptive change is, for example, adjusting the position of the display content to the right or to the left (a right adjustment when the driver's orientation is the reference, or a left adjustment when the HUD's orientation is the reference). The direction of the head-lowering action is downward, and the corresponding adaptive change is, for example, adjusting the position of the display content downward.
Further, the implemented process may be, for example: when the current driver raises the head following the interactive guidance screen, the display content in the HUD also moves upward accordingly; when the current driver lowers the head following the interactive guidance screen, the display content in the HUD also moves downward accordingly. Such a process both satisfies the driver's need to observe the display content in the HUD during neck movement and guides the driver's neck movement.
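The direction-to-adjustment mapping described above can be sketched as a simple lookup; the offsets take the driver's orientation as the reference frame (left and right would flip with the HUD's orientation as reference), the y axis is taken to increase upward, and the step size is an assumed constant.

```python
# Sketch of step S18: map a recognized neck-action direction to an
# adaptive change of the display position. Action names, offsets, and
# the coordinate convention are illustrative.
NECK_ACTION_OFFSETS = {
    "head_up":    (0, 10),    # display content moves upward
    "head_down":  (0, -10),   # display content moves downward
    "turn_left":  (-10, 0),   # moves to the driver's left
    "turn_right": (10, 0),    # moves to the driver's right
}

def adapt_display_position(display_pos, action):
    """Shift the display position for a matched neck action; an
    unmatched action leaves the position unchanged."""
    dx, dy = NECK_ACTION_OFFSETS.get(action, (0, 0))
    return (display_pos[0] + dx, display_pos[1] + dy)
```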
In a specific example, if the driver's current action does not match the specified neck action, the display content of the HUD can be controlled not to change adaptively; the position of the display content then changes adaptively only when the driver's current action matches the specified neck action. On this basis, this feedback can inform the driver whether the current action is accurate.
In addition, in a specific example, whether the current driver is performing an action, which action it is, and whether it matches the specified neck action can all be determined from the in-vehicle image acquired by the image capturing device.
Referring to fig. 7, an embodiment of the present invention further provides a display processing apparatus 200 for a HUD of a vehicle, including:
an eyeball position determination module 201, configured to determine a current eyeball position; the current eyeball position represents the eyeball position of the current driver of the vehicle;
and a display position control module 202, configured to control a position of the display content in the HUD based on the current eyeball position.
Optionally, the eyeball position determination module 201 is specifically configured to:
acquiring an in-vehicle image;
and positioning the eyeball of the current driver in the in-vehicle image, and determining the current eyeball position based on the positioning result.
Optionally, the display position control module 202 is specifically configured to:
determining a target display position of the display content based on the current eyeball position and preset position reference information; the position reference information represents the corresponding relation between different eyeball positions and different display positions of the display content;
and controlling the display content in the HUD to be displayed at the target display position.
Optionally, the display position control module 202 is specifically configured to:
determining position adjustment information of the display content based on the current eyeball position and a pre-stored previous eyeball position; the position adjustment information represents the position adjustment direction and the adjustment amount of the display content; the previous eyeball position represents the position of an eyeball when the position of the display content is adjusted last time;
adjusting a position of display content in the HUD based on the position adjustment information.
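A sketch of deriving the adjustment direction and amount from the two stored eyeball positions; the proportional `gain` is an assumption, since a real system would derive it from the HUD's optical geometry:

```python
def position_adjustment(prev_eye, curr_eye, gain=1.0):
    """Compute position adjustment information for the display content
    from the change in eyeball position since the last adjustment.

    Returns ((horizontal_dir, vertical_dir), (amount_x, amount_y)).
    Positions are (x, y) pairs in a shared normalized coordinate frame.
    """
    dx = (curr_eye[0] - prev_eye[0]) * gain
    dy = (curr_eye[1] - prev_eye[1]) * gain
    direction = (
        "right" if dx > 0 else "left" if dx < 0 else "centered",
        "down" if dy > 0 else "up" if dy < 0 else "centered",
    )
    return direction, (abs(dx), abs(dy))
```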
Optionally, referring to fig. 8, the display processing device 200 of the vehicle HUD further includes:
an identification determination module 203, configured to identify the identity information of the current driver and determine that it is different from the identity information of a previous driver, the previous driver being the driver at the time the position of the display content was last changed.
Optionally, the identity information for identifying the current driver includes at least one of:
acquiring pattern information of vein blood vessels in eyeballs of the current driver, and identifying identity information of the current driver based on the pattern information;
acquiring voice information of the current driver, and identifying identity information of the current driver based on the voice information;
and acquiring the face information of the current driver, and identifying the identity information of the current driver based on the face information.
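A sketch of dispatching among these modalities; the priority order, dictionary keys, and exact-match comparison stand in for real biometric models and are purely illustrative:

```python
def identify_driver(features, enrolled):
    """Match the current driver against enrolled profiles using whichever
    biometric features are available, trying eye-vein pattern, then
    voice, then face.

    `features` maps modality name -> extracted sample; `enrolled` maps
    driver id -> {modality: sample}. Returns a driver id or None.
    """
    for modality in ("eye_vein", "voice", "face"):
        sample = features.get(modality)
        if sample is None:
            continue
        for driver_id, profile in enrolled.items():
            if profile.get(modality) == sample:
                return driver_id
    return None
```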
Optionally, referring to fig. 8, the display processing device 200 of the vehicle HUD further includes:
a condition detection module 205, configured to detect, based on driving information of the vehicle, whether the vehicle meets a preset active-display-during-parking condition;
an active display module 204, configured to display specified display information by using the HUD of the vehicle if the active-display-during-parking condition is met, so that the specified display information serves as the display content; the specified display information includes a designated interactive guidance screen, which is used to guide the current driver to complete the specified neck action;
the display position control module 202 is further configured to:
adjust the position of the display content based on the direction of the specified neck action while the current driver performs the specified neck action.
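One step of this guided adjustment can be sketched as follows; the specific parking condition (speed zero, gear in P) and the fixed step size are illustrative assumptions:

```python
def parking_adjustment_step(speed_kmh, gear, neck_action, position, step=0.05):
    """Apply one guided HUD adjustment step.

    Only while the active-display-during-parking condition holds (here
    assumed to be speed zero and gear P) does a detected neck action
    move the display content in the matching horizontal direction.
    `position` is (x, y) in normalized display coordinates.
    """
    if speed_kmh != 0 or gear != "P":
        return position  # condition not met: leave the display unchanged
    dx = {"left": -step, "right": step}.get(neck_action, 0.0)
    x, y = position
    return (x + dx, y)
```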
Referring to fig. 9, an electronic device 30 is provided, which includes:
a processor 31; and the number of the first and second groups,
a memory 32 for storing executable instructions of the processor;
wherein the processor 31 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 31 is capable of communicating with the memory 32 via a bus 33.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned method.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A display processing method for a vehicle HUD is characterized by comprising the following steps:
determining the current eyeball position; the current eyeball position represents the eyeball position of the current driver in the vehicle;
and controlling the position of the display content in the HUD based on the current eyeball position.
2. The display processing method according to claim 1,
the determining the current eyeball position comprises:
acquiring an in-vehicle image;
and positioning the eyeball of the current driver in the in-vehicle image, and determining the current eyeball position based on the positioning result.
3. The display processing method according to claim 1,
controlling a position of display content in the HUD based on the current eye position, comprising:
determining a target display position of the display content based on the current eyeball position and preset position reference information; the position reference information represents the corresponding relation between different eyeball positions and different display positions of the display content;
and controlling the display content in the HUD to be displayed at the target display position.
4. The display processing method according to claim 1,
the controlling a position of display content in the HUD based on the current eyeball position includes:
determining position adjustment information of the display content based on the current eyeball position and a pre-stored previous eyeball position; the position adjustment information represents the position adjustment direction and the adjustment amount of the display content; the previous eyeball position represents the position of an eyeball when the position of the display content is adjusted last time;
adjusting a position of display content in the HUD based on the position adjustment information.
5. The display processing method according to any one of claims 1 to 4,
before the determining the current eyeball position, the method further comprises:
identifying identity information of the current driver;
determining that the identity information of the current driver is different from the identity information of a previous driver, the previous driver being the driver at the time the position of the display content was last changed.
6. The display processing method according to claim 5,
the identity information for identifying the current driver comprises at least one of the following:
acquiring pattern information of vein blood vessels in eyeballs of the current driver, and identifying identity information of the current driver based on the pattern information;
acquiring voice information of the current driver, and identifying identity information of the current driver based on the voice information;
and acquiring the face information of the current driver, and identifying the identity information of the current driver based on the face information.
7. The display processing method according to any one of claims 1 to 4, further comprising:
detecting whether the vehicle meets a preset active display condition during parking based on the driving information of the vehicle;
if the active display condition during parking is met, displaying specified display information by using the HUD of the vehicle so as to take the specified display information as the display content; the specified display information includes: a designated interactive guidance screen; the interactive guide picture is used for guiding the current driver to finish the specified neck action;
adjusting the position of the display content based on the direction of the specified neck action while the current driver performs the specified neck action.
8. A display processing device for a vehicle HUD, characterized by comprising:
the eyeball position determining module is used for determining the current eyeball position; the current eyeball position represents the eyeball position of the current driver of the vehicle;
and the display position control module is used for controlling the position of the display content in the HUD based on the current eyeball position.
9. An electronic device, comprising a processor and a memory,
the memory is used for storing codes;
the processor to execute code in the memory to implement the method of any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 7.
CN202111187693.6A 2021-10-12 2021-10-12 Display processing method and device for vehicle HUD, electronic equipment and medium Pending CN113835230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111187693.6A CN113835230A (en) 2021-10-12 2021-10-12 Display processing method and device for vehicle HUD, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN113835230A true CN113835230A (en) 2021-12-24

Family

ID=78968467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111187693.6A Pending CN113835230A (en) 2021-10-12 2021-10-12 Display processing method and device for vehicle HUD, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113835230A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080158096A1 (en) * 1999-12-15 2008-07-03 Automotive Technologies International, Inc. Eye-Location Dependent Vehicular Heads-Up Display System
CN105015552A (en) * 2014-04-24 2015-11-04 Lg电子株式会社 Driver state monitoring system and control method thereof
CN107107749A (en) * 2014-11-03 2017-08-29 奥迪股份公司 The system and method for monitoring the health status and/or somatosensory of occupant
CN110733447A (en) * 2019-10-31 2020-01-31 重庆长安汽车股份有限公司 Method, system, computer-readable storage medium and vehicle for automatically adjusting HUD based on driver identity
CN111152790A (en) * 2019-12-29 2020-05-15 的卢技术有限公司 Multi-device interactive vehicle-mounted head-up display method and system based on use scene
CN112428936A (en) * 2020-11-27 2021-03-02 奇瑞汽车股份有限公司 Method and device for automatically adjusting parameters of head-up display
CN113126295A (en) * 2020-01-15 2021-07-16 未来(北京)黑科技有限公司 Head-up display device based on environment display
CN113376840A (en) * 2021-07-01 2021-09-10 北京汽车集团越野车有限公司 Image display position adjusting method and device and storage medium

Similar Documents

Publication Publication Date Title
EP3757875A1 (en) Obstacle avoidance method and device used for driverless vehicle
US9043042B2 (en) Method to map gaze position to information display in vehicle
US8010283B2 (en) Driving evaluation system and server
JP6775188B2 (en) Head-up display device and display control method
JP5723106B2 (en) Vehicle display device and vehicle display method
JP4366716B2 (en) Vehicle information display device
US20160039285A1 (en) Scene awareness system for a vehicle
EP3213948A1 (en) Display control device and display control program
JP5286035B2 (en) Vehicle speed control device
CN109624994B (en) Vehicle automatic driving control method, device, equipment and terminal
CN111428571B (en) Vehicle guiding method, device, equipment and storage medium
US11836864B2 (en) Method for operating a display device in a motor vehicle
JP2010018201A (en) Driver assistant device, driver assistant method, and driver assistant processing program
EP3896604A1 (en) Vehicle driving and monitoring system; method for maintaining a sufficient level of situational awareness; computer program and computer readable medium for implementing the method
CN111397627A (en) AR navigation method and device
CN111721315A (en) Information processing method and device, vehicle and display equipment
JP2010029262A (en) Sight line measuring apparatus and program
CN112141010B (en) Control method, control device, electronic equipment and storage medium
CN113835230A (en) Display processing method and device for vehicle HUD, electronic equipment and medium
CN111341134A (en) Lane line guide prompting method, cloud server and vehicle
CN113918112A (en) HUD display processing method and device, electronic equipment and storage medium
JP7140483B2 (en) Information processing device, information processing method and information processing program
JP6555326B2 (en) Driving assistance device
JP2020098414A (en) Driving action analysis system
US10713511B2 (en) Method and device for estimating recognition timing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination