CN117651655A - Computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle, computer program product, human-machine interface and vehicle - Google Patents


Info

Publication number: CN117651655A
Application number: CN202180100624.9A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: R·扎里夫, O·森斯
Applicant and current assignee: Lutes Technology Innovation Center Co ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: configuration, controller, human, computer, information


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement of adaptations of instruments
    • B60K35/10
    • B60K35/20
    • B60K35/22
    • B60K35/29
    • B60K35/654
    • B60K2360/11
    • B60K2360/1438
    • B60K2360/151
    • B60K2360/18

Abstract

A computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle is described, wherein the graphical user interface is controlled by a controller of the human-machine interface, wherein the layout of the information presented on the graphical user interface is determined by a configuration, wherein the controller accesses at least two different configurations, wherein each configuration is associated with at least one clutter index or clutter index range, wherein the controller associates at least one of a user profile, a region, usage information and/or a user alertness with a clutter index or clutter index range and selects a configuration having a compatible clutter index or clutter index range.

Description

Computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle, computer program product, human-machine interface and vehicle
Technical Field
The present disclosure relates to a computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle, a computer program product, a human-machine interface and a vehicle.
Background
It is well known that great care must be taken in how information is provided to people, especially when the information is important for decisions in complex and time-sensitive situations. This situation often occurs when driving a vehicle, where the driver needs real-time information about critical parameters of the vehicle, such as the travel speed. The information must be presented in such a way that acquiring it distracts the driver from the traffic situation as little as possible.
Cognitive science has found that there is an optimum amount of information that an individual driver should receive: neither too little nor too much. If the driver receives too little information, the driver has to search for the information he or she wants. If the driver receives too much information, the driver may be overwhelmed and unable to quickly identify the necessary information. The appropriate amount of information is highly individual and relates to the driver's cognitive ability, age, experience and personal preferences. For example, some drivers want to know the engine temperature, while others do not care about this data. Likewise, some drivers prefer easy access to navigation information, while others prefer easy access to entertainment information. User types also differ greatly: there are highly engaged users who like a higher information density, and there are digital novices who prefer a simplified interface with a small number of main functions.
In addition, it is well known that the perceived appropriateness of the density and amount of information provided by human-machine interfaces often differs between markets. For example, users in China generally prefer displays with more information and a higher feature density, while many users in the European Union prefer simple, clean digital interfaces.
Designing interfaces that suit all such markets and user types is therefore a general challenge.
It is also known that driver assistance systems such as adaptive cruise control and automatic lane-change systems have been marketed successfully to improve driver comfort and safety. As these driver assistance systems mature, less driver interaction is required. In some cases, the driver assistance system may operate fully automatically over an entire trip. Thus, for at least part of a journey, the driver's role changes from that of an active driver to that of a passenger. Highly automated vehicles allow the driver to transfer control to the vehicle and perform other tasks while driving. The requirements for the information displayed to the user in an autonomous driving mode differ from those in a manual driving mode.
EP 3240715 B1 discloses an adaptive user interface system for a vehicle having an automated vehicle system, the adaptive user interface system comprising: a display; and an electronic controller electrically coupled to the display and configured to generate a graphical user interface indicative of operation of the automated vehicle system and to output the graphical user interface on the display, the electronic controller being further configured to: monitor indicia of a comfort level of the driver; determine, based on the monitored indicia, when the driver is uncomfortable with the operation of the automated vehicle system; and, in response to determining that the driver is uncomfortable with the operation of the automated vehicle system, modify the graphical user interface to provide further details.
Disclosure of Invention
It is therefore an object of the present invention to provide a more elaborate method of adjusting the way information is displayed to the vehicle user, so that the user is not overwhelmed but receives an appropriate level of information.
This object is achieved by a computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle according to claim 1, a computer program product according to independent claim 11, a human-machine interface according to independent claim 12 and a vehicle according to independent claim 13. Further embodiments are described in the dependent claims.
A computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle is described, wherein the graphical user interface is controlled by a controller of the human-machine interface, wherein the layout of the information presented on the graphical user interface is determined by a configuration, wherein the controller accesses at least two different configurations, wherein each configuration is associated with at least one clutter index or clutter index range, wherein the controller associates at least one of a user profile, a region, usage information and/or a user alertness with a clutter index or clutter index range and selects a configuration having a compatible clutter index or clutter index range.
The vehicle may be a car, truck, bus, etc.
The graphical user interface may be, inter alia, a cockpit display, a heads-up display or a central display. The use of displays instead of meters has become a standard in many vehicles because they are more configurable than traditional meters.
The clutter index is an index, derived from a specific configuration, of how a human perceives the information. In one embodiment, the clutter index for a given configuration may be assessed by a tester. In another, more adaptive embodiment, the clutter index may be calculated by a processor. The clutter index may relate to the number, location and density of information items and decorations, the font used (in some embodiments including size and type), the colors used, and other criteria. The more cluttered the display appears, the higher the clutter index. The higher the clutter index, the more precisely the user needs to know where the corresponding information is located in order to find it quickly and efficiently.
The configuration may be preconfigured and/or user-adjustable and/or dynamic. A dynamic display may react to certain situations: for example, if the navigation system announces a turn to be performed soon, this information may be made more noticeable than other information, which is then moved elsewhere, hidden, or displayed in a smaller size.
In some embodiments, there may be several configurations with similar or identical clutter index values or with overlapping clutter index ranges. The different configurations may be ordered by clutter index. The clutter index may be determined as a range, and the ranges may overlap, such that more than one configuration may be determined to be appropriate.
Depending on the circumstances described above, the controller may select one or more suitable configurations. To do so, the controller calculates a clutter index or clutter index range from at least one of the user profile, region, usage information and/or user alertness information provided to the controller. The controller then compares this calculated clutter index or index range with the clutter indices or clutter index ranges associated with the available configurations and selects one or more of the closest-matching configurations.
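This matching step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the configuration names, the numeric ranges and the nearest-interval rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    clutter_range: tuple  # (low, high) clutter-index interval, assumed inclusive

def select_configurations(target_index, configurations):
    """Return the configuration(s) whose clutter-index range matches the
    calculated target index most closely (containment counts as distance 0)."""
    def distance(cfg):
        low, high = cfg.clutter_range
        if low <= target_index <= high:
            return 0  # target falls inside the range: perfect match
        return min(abs(target_index - low), abs(target_index - high))
    best = min(distance(c) for c in configurations)
    return [c for c in configurations if distance(c) == best]

# Hypothetical configurations, ordered by increasing clutter index:
configs = [
    Configuration("minimal", (1, 2)),
    Configuration("balanced", (2, 4)),
    Configuration("dense", (4, 6)),
]
print([c.name for c in select_configurations(3, configs)])  # → ['balanced']
```

Because the assumed ranges overlap, a target index of 2 returns both "minimal" and "balanced", which mirrors the text's selection of "one or more of the closest-matching configurations".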
The clutter index may be calculated as a rounded sum of weighted factors. In some embodiments, the factors are first normalized and then weighted, or the weights are calculated such that they normalize the factors.
These factors may include factors related to the use of particular functions of the vehicle's infotainment system, such as the average variance of the use of certain functions over a given period (e.g., the last month). Another factor may relate to smartphone usage, such as the average variance of the use of particular smartphone functions over a given period. These two factors reflect the user's preferences regarding the types of applications used. Another factor may be the information density the user has selected on the human-machine interface of the vehicle. Yet another factor may be derived from monitoring eye tracking and component usage over a given period. Another factor may be derived from monitoring the user's voice interaction with the human-machine interface.
The clutter index is linked to the user's private profile. It may, for example, be analyzed at group level within a cloud application to generate specific weights for different regions and/or groups of people.
In this way, the user may be provided with a display configuration that matches the user's needs or capabilities.
In a first further embodiment, the selected configuration is presented to the user, and if the user confirms the selected configuration, the active configuration is changed to the selected one.
This gives the user control over the user preferences and over which configuration to use.
In yet a further embodiment, the clutter index is derived from the density of the information displayed on the graphical user interface at a given time.
The information density may be calculated based on the amount of different information simultaneously displayed on the display. For example, if a menu is displayed, a menu having 12 menu items displayed simultaneously is more cluttered than a menu having only 6 menu items. Another example is that cockpit displays that show speed, engine revolutions, engine temperature, fuel level, outside temperature, navigational information, etc. at the same time typically look more cluttered than displays that show speed and navigational information only.
In yet a further embodiment, the controller analyzes the use of functionality accessible by the user through the human-machine interface, wherein the controller adjusts the at least one configuration based on the use analysis.
In this way, the controller can determine which information is relevant to a particular user and can prioritize information that the user needs regularly. For example, if a user often looks at navigation information, this information may be given a higher priority than, for example, information about the current engine condition.
In yet a further embodiment, the driver monitoring system is connected to the controller, wherein driver information captured by the driver monitoring system is processed by the controller, wherein the controller selects the configuration based on the driver information.
In this way, if the driver appears less attentive, the display configuration may be changed to one with a lower clutter index.
In yet a further embodiment, the human-machine interface comprises a speech recognition system, wherein information related to the use of the speech commands is analyzed by the controller, wherein the controller selects the configuration based on the use of the speech commands.
In this embodiment, the information presented on the display may be matched to the commands given by the user such that the information is more relevant to the individual user.
In a further embodiment, the selection of the configuration is presented at predetermined time intervals.
Such a time interval may be, for example, one month, so that the user is regularly reminded that his or her needs may have changed. It also keeps the user engaged in customizing the vehicle to his or her preferences, which has been shown to have a beneficial impact on user satisfaction and brand appreciation.
In yet a further embodiment, the mobile device is connected to a human-machine interface, wherein the controller analyzes a configuration of the mobile device, wherein the controller selects the configuration based on the mobile device configuration.
It has been found that users tend to have the same preferences for how information is displayed to them across a variety of devices: mobile devices, computers and vehicles. Using the information from the mobile device, the user experience in the vehicle can thus be improved and the closest configuration selected.
In yet a further embodiment, the controller applies a machine learning algorithm to make the selection of the configuration.
Once a particular user group size is reached in a particular region and/or population, for example 60 or more users, a learning mechanism may be implemented through reinforcement learning.
In yet a further embodiment, the user preference profile is stored and accessed by the controller.
The user preference profile may be stored in the vehicle and/or in an external storage location such as a server. The server may be accessed through a remote network connection. The profile may also be stored or accessible through the user's mobile device.
A first independent aspect relates to a computer program product having a non-transitory computer readable storage medium having embedded therein commands which, when executed by a processor, cause the processor to perform the above-described method.
Another independent aspect relates to a human-machine interface of a vehicle having a controller comprising a computer program product with a non-transitory computer readable storage medium as described above and a processor.
Another independent aspect relates to a vehicle having a human-machine interface as described above.
Drawings
Further features and details will become apparent from the following description, in which at least one exemplary embodiment is described in detail, where applicable with reference to the drawings. The features described and/or illustrated, alone or in any feasible and expedient combination, form the subject matter, possibly also independently of the claims, and may in particular be the subject of one or more separate applications. The figures schematically show:
FIG. 1 is an automobile with multiple displays;
FIGS. 2a-c are different display configurations, and
fig. 3 is a method of selecting a display configuration.
Detailed Description
Fig. 1 shows a motor vehicle 2 with a human-machine interface (human machine interface, HMI) 4.
The human-machine interface 4 comprises a cockpit display 6, a central display 8, a microphone 10 and a speaker arrangement 12. The central display 8 is touch-sensitive. The human-machine interface 4 allows interaction between the driver 14 and the various systems of the car 2. Information is provided to the driver 14 via the displays 6 and 8 and via the speaker arrangement 12. The driver 14 may input commands via the microphone 10 and via the touch-sensitive central display 8, as well as by other well-known means (not shown), such as buttons and levers.
The human-machine interface 4 includes a controller 16 that includes a processor 18 and a non-transitory computer readable storage medium 20. The computer program product 22 is stored on the non-transitory computer readable storage medium 20. When the computer program product 22 is loaded and executed by the processor 18, the processor implements the methods described herein.
The controller 16 is responsible for providing the information to be displayed to the displays 6, 8. The controller 16 also controls which information is provided in sound via the speaker arrangement 12.
The controller 16 is also connected to a driver monitoring system 26 having an eye tracking camera 28 for tracking the eyes 30 of the driver 14. The driver monitoring system 26 also utilizes the microphone 10 to record sound generated by the driver 14. The eye tracking camera 28 may record the viewing direction 32 of the driver 14. This information, as well as the acoustic information captured via the microphone 10, may be used to determine the level of concentration of the driver 14 at that point. This information may be used to adjust the configuration of the displays 6 and/or 8.
The controller 16 is also connected to the mobile telephone 34 of the driver 14 via a wired or wireless connection. Information related to the configuration of the mobile device 34 is utilized by the controller 16 to determine the appropriate configuration of the displays 6, 8.
Fig. 2a-c show different configurations of the cockpit display 6 and how the selection can be made.
A screen of the central display 8 is shown, by means of which the configuration of the cockpit display 6 is adjusted; a representation of the cockpit display 6 is presented on the central display 8. Below this representation, a selection area 40 is presented, which the driver 14 can use to select between different configurations, such as configuration 42 shown in Fig. 2a, configuration 44 shown in Fig. 2b and configuration 46 shown in Fig. 2c.
The cockpit display 6 is typically divided into four distinct zones: three upper zones (a left zone 50, a central zone 52 and a right zone 54) and a lower linear zone 56.
The clutter index is related to the information density on the cockpit display 6 and is calculated based on the amount of information provided to the driver 14 at a given time. Different pieces of information are weighted differently based on their location and type. For example, information provided in the central zone 52 has a higher impact on the clutter index than information provided in the left zone 50, right zone 54 or lower zone 56. The type of information also matters: mandatory information or information frequently retrieved by the driver 14 (e.g., the speed of the car 2) has a lower impact on the clutter index than other, less relevant information (e.g., the engine temperature).
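Such a location- and type-weighted density can be sketched as follows. The numeric weights and the zone/type categories are assumptions for illustration; the patent only states that centrally placed information weighs more and that mandatory or frequently retrieved information weighs less.

```python
# Illustrative weights (assumptions, not from the patent): the centre zone
# contributes more, mandatory/frequently retrieved information contributes less.
ZONE_WEIGHT = {"left": 0.5, "center": 1.0, "right": 0.5, "lower": 0.5}
TYPE_WEIGHT = {"mandatory": 0.5, "optional": 1.0}  # e.g. speed vs. engine temperature

def clutter_contribution(items):
    """items: list of (zone, info_type) tuples currently displayed."""
    return sum(ZONE_WEIGHT[zone] * TYPE_WEIGHT[info] for zone, info in items)

sparse = [("center", "mandatory")]                      # speed only
dense = sparse + [("left", "optional"), ("right", "optional"),
                  ("lower", "optional"), ("center", "optional")]
print(clutter_contribution(sparse))  # → 0.5
print(clutter_contribution(dense))   # → 3.0
```

The dense layout scores six times higher than the sparse one even though it only shows four additional items, because optional information and centrally placed information are weighted more heavily.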
In one particular exemplary embodiment, the clutter index C may range from 1 to 6:
C = round(A1×W1 + A2×W2 + A3×W3 + A4×W4 + A5×W5),
wherein:
A_n = factor n
W_n = weight n
round() = rounding function
The factors are described as follows:
A1 is the average variance of function usage within the human-machine interface over the last month. A1 ranges from 0 to 1 (1 = 100%; for example, A1 = 0.8 means 80%). A1 reflects the daily use of the main functions of the infotainment system: navigation, air conditioning, media, driver assistance, voice assistance and communication. A higher variance causes the corresponding functions to be promoted to a higher level, e.g., the highest level, so that more direct shortcuts can be accessed on the home screen of the human-machine interface, resulting in a higher information density.
W1 defaults to 1. In a further embodiment, W1 is adapted by reinforcement learning, for example for a specific region or population.
A2 is the average variance of function usage on the smartphone. A2 ranges from 0 to 1 (1 = 100%; for example, A2 = 0.8 means 80%). A2 reflects the daily use of all major smartphone functions: messaging, calling, media, gaming, assistant usage, smart-home applications, news, etc. A higher variance causes the corresponding functions to be promoted to a higher level, e.g., the highest level, so that more direct shortcuts can be accessed on the home screen of the human-machine interface, resulting in a higher information density.
W2 defaults to 1. In the aforementioned further embodiments, W2 is adapted by reinforcement learning, for example for a specific region or group of people.
A3 is the information density of the human-machine interface selected by the user. A3 is the number of highly relevant components (e.g., a button with label and description) divided by the maximum number of components on the highest-density interface (e.g., 20 components). For example, 10 highly relevant components yield an A3 value of 0.5.
W3 defaults to 1. In further embodiments, W3 is adapted by reinforcement learning, for example for a specific region or group of people.
A4 is related to eye tracking and component usage. A4 may be composed of multiple sub-factors based on components within the human-machine interface; for example, a slider or a button may be considered a component. A4 can be described as follows:
A4 = A4_F1×SW1 + A4_F2×SW2 + A4_F3×SW3 + A4_F4×SW4,
wherein:
A4_F1: average weekly frequency with which the component is looked at; value range 0 to 1 (1 = 100%).
A4_F2: average weekly frequency with which the component is looked at and interacted with; value range 0 to 1 (1 = 100%).
A4_F3: average monthly variance of looking at the component; value range 0 to 1 (1 = 100%).
A4_F4: average monthly variance of looking at and interacting with the component; value range 0 to 1 (1 = 100%).
Furthermore, the sub-weights SWn satisfy:
SW1 + SW2 + SW3 + SW4 = 1.
For each component, different sub-weights may be used to adapt its order (e.g., in the settings) and position (e.g., menu level, alignment, etc.) within the human-machine interface and to generate new human-machine interface variants.
W4 defaults to 2 and is thus weighted higher than the other weights, because it is considered the most important factor in this example. In the aforementioned further embodiments, W4 is adapted by reinforcement learning, for example for a specific region or group of people.
A5 is related to voice interaction. A5 equals 1 minus a function of the number of functions that are performed by voice only although they are also represented visually, where that function ranges from 0 to 1. A5 therefore also ranges from 0 to 1 (1 = 100%). The factor A5 may be used to reduce the visual elements corresponding to functions that the user performs by voice only.
By default, W5 = 1. In the aforementioned further embodiments, W5 is adapted by reinforcement learning, for example for a specific region or group of people.
The clutter index is linked to the user's private profile and is analyzed, e.g., only at group level within a cloud application, to generate specific weights (W1 to W5) for different regions or groups of people.
The sum of all weights may be equal to a fixed number; in the example given above, the sum of all weights is 6. If the weights are adapted as described in the further embodiments above, they are adapted interdependently such that the sum of all weights remains unchanged.
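The calculation above can be sketched in code. The factor values in the example are assumptions, and clamping C to the stated range of 1 to 6 is an assumption about how out-of-range sums would be handled; the sub-weight and weight-sum checks follow the constraints stated in the text.

```python
def a4(f, sw):
    """Eye-tracking/usage factor A4 from sub-factors F1..F4 and sub-weights SW1..SW4."""
    assert abs(sum(sw) - 1) < 1e-9, "sub-weights must sum to 1"
    return sum(fn * swn for fn, swn in zip(f, sw))

def clutter_index(a, w):
    """Clutter index C = round(sum(A_n * W_n)) for n = 1..5.
    Factors lie in [0, 1]; the weights keep a fixed sum (6 with the defaults)."""
    assert len(a) == len(w) == 5
    assert abs(sum(w) - 6) < 1e-9, "weights must keep their fixed sum"
    c = round(sum(an * wn for an, wn in zip(a, w)))
    return max(1, min(6, c))  # clamp to the stated range 1..6 (assumption)

# Example with the default weights W1..W5 = 1, 1, 1, 2, 1 and assumed factor values:
weights = [1, 1, 1, 2, 1]
factors = [0.8, 0.8, 0.5,
           a4([0.6, 0.4, 0.5, 0.5], [0.25, 0.25, 0.25, 0.25]),  # A4 = 0.5
           0.3]
print(clutter_index(factors, weights))  # → 3
```

With these values the weighted sum is 0.8 + 0.8 + 0.5 + 0.5×2 + 0.3 = 3.4, which rounds to a clutter index of 3, i.e. a mid-range configuration would be selected.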
The selection area 40 shown on the central display 8 includes a slider 60 that is movable along a track 62 with a plurality of stop positions 64 for a plurality of different configurations, ordered from left to right by increasing clutter index. The driver 14 may slide the slider 60 to any of the stop positions 64, while the associated cockpit display configuration 42 to 46 is shown above the selection area 40.
The configuration 42 shown in fig. 2a has a very low clutter index because all attention is drawn to the display of the speed of the car 2 in the central zone 52, while having relatively small lines in the left zone 50, the right zone 54 and the lower zone 56.
The configuration 44 shown in fig. 2b has a higher but still relatively low clutter index because attention is still drawn to the display of the speed of the car 2 in the central zone 52, while having relatively smaller lines in the left zone 50, the right zone 54 and the lower zone 56. But the information density is increased compared to configuration 42 due to the additional information displayed in central region 52 and lower region 56.
The configuration 46 shown in fig. 2c has a relatively high clutter index and is more suitable for experienced drivers. The display of the speed of the car is moved to the left zone 50, and the charge level is visible in the right zone 54. The central zone 52 is occupied by a menu having a plurality of list rows. In addition, a large amount of further information in small font or icon size is shown across all zones 50, 52, 54, 56.
After the selection, the driver 14 confirms his selection and the corresponding layout will be used for the cockpit display 6.
Fig. 3 illustrates a method of selecting a display configuration.
The machine learning algorithm is fed with a large amount of information. The first information source is usage statistics derived from the driver's interactions with the human-machine interface 4.
The second information source is attention data derived from the driver monitoring system 26.
The third information source is data derived from a speech recognition system using microphone 10.
The fourth information source is the configuration of the driver's mobile phone 34.
The machine learning algorithm processes the different information sources and determines a tolerable clutter index for the individual drivers 14. The machine learning algorithm selects and/or modifies the existing configuration to meet the tolerable clutter index range of the driver 14 and the selection of information to be displayed in relation to the driver 14.
The components and systems described above may be stand alone or used by other systems of an automobile. Sensor data, such as camera data, may be provided to different systems and used for different purposes. The system may be implemented as a function in a control unit having more functionality, for example as a function of a driving assistance system having a plurality of components such as lane keeping and adaptive cruise control.
While at least one exemplary embodiment has been presented in the foregoing summary and detailed description, and in the claims, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration in any way. Rather, the foregoing summary and detailed description will provide those skilled in the art with a convenient road map for implementing at least one exemplary embodiment, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope set forth in the appended claims and their legal equivalents.
Any feature disclosed in the claims, specification and drawings, including structural details, relative positioning or method steps, may be relevant to the invention alone or in any meaningful combination with any other feature(s).
Reference numerals
2. Automobile
4. Human-machine interface
6. Cockpit display
8. Central display
10. Microphone
12. Speaker device
14. Driver
16. Controller
18. Processor
20. Non-transitory computer readable storage medium
22. Computer program product
26. Driver monitoring system
28. Eye tracking camera
30. Eyes
32. Viewing direction
34. Mobile telephone
40. Selection area
42, 44, 46 Configurations
50. Left zone
52. Central zone
54. Right zone
56. Lower zone
60. Slider
62. Track
64. Stop position
C Clutter index
A Factor
W Weight
SW Sub-weight

Claims (13)

1. A computer-implemented method of adapting a graphical user interface (6, 8) of a human-machine interface (4) of a vehicle (2), wherein the graphical user interface (6, 8) is controlled by a controller (16) of the human-machine interface (4), wherein a layout of information presented on the graphical user interface (6, 8) is determined by a configuration (42, 44, 46), wherein the controller (16) accesses at least two different configurations (42, 44, 46), wherein each configuration (42, 44, 46) is associated with at least one clutter index (C) or clutter index range, wherein the controller (16) associates at least one of a user profile, a region, usage information and/or a user alertness with a clutter index (C) or clutter index range, and selects a configuration (42, 44, 46) having a compatible clutter index (C) or clutter index range.
2. The computer-implemented method of claim 1, wherein the selected configuration (42, 44, 46) is presented to a user (14), wherein if the user (14) confirms the selected configuration (42, 44, 46), the configuration (42, 44, 46) is changed to the selected configuration (42, 44, 46).
3. A computer-implemented method according to claim 1 or 2, wherein the clutter index (C) is derived from the information density displayed on the graphical user interface (6, 8) at a given time.
4. The computer-implemented method of any of the preceding claims, wherein the controller (16) analyzes the use of functions accessible by the user (14) through the human-machine interface (4), wherein the controller (16) adjusts at least one configuration (42, 44, 46) based on the use analysis.
5. The computer-implemented method of any of the preceding claims, wherein a driver monitoring system (26) is connected to the controller (16), wherein driver information captured by the driver monitoring system (26) is processed by the controller (16), wherein the controller (16) selects a configuration (42, 44, 46) based on the driver information.
6. The computer-implemented method of any of the preceding claims, wherein the human-machine interface (4) comprises a speech recognition system (10), wherein information related to the use of speech commands is analyzed by the controller (16), wherein the controller (16) selects a configuration (42, 44, 46) based on the use of speech commands.
7. The computer-implemented method of any of the preceding claims, wherein the selection of the configuration (42, 44, 46) is presented at predetermined time intervals.
8. The computer-implemented method of any of the preceding claims, wherein a mobile device (34) is connected to the human-machine interface (4), wherein the controller (16) analyzes a configuration of the mobile device (34), wherein the controller (16) selects a configuration based on the configuration of the mobile device (34).
9. The computer-implemented method of any of the preceding claims, wherein the controller (16) applies a machine learning algorithm to make the selection of the configuration (42, 44, 46).
10. The computer-implemented method of any of the preceding claims, wherein a user preference profile is stored and accessed by the controller (16).
11. A computer program product having a non-transitory computer readable storage medium (20) with commands embedded therein which, when executed by a processor (18), cause the processor (18) to perform the method according to any of the preceding claims.
12. A human-machine interface (4) of a vehicle, having a controller (16) that comprises a processor (18) and a computer program product with a non-transitory computer readable storage medium (20) according to claim 11.
13. A vehicle having a human-machine interface (4) according to claim 12.
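The selection step recited in claim 1 — each configuration carrying a clutter index or clutter index range, with the controller mapping driver context (user profile, region, usage information, alertness) to a clutter index and picking a configuration with a compatible range — can be illustrated with a minimal sketch. All names, value ranges, and the context-to-index mapping below are illustrative assumptions, not taken from the patent itself:

```python
# Hypothetical sketch of the claimed selection step: configurations carry
# clutter-index ranges; the controller derives a target clutter index from
# driver context and selects a configuration whose range is compatible.
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    clutter_min: float  # lowest clutter index this layout suits
    clutter_max: float  # highest clutter index this layout suits

def target_clutter_index(alertness: float, usage_frequency: float) -> float:
    """Map context to a clutter index in [0, 1]: an alert, experienced user
    can handle a denser layout; a drowsy or infrequent user gets a sparser one.
    The weighting is an assumed example, not the patent's formula."""
    return max(0.0, min(1.0, 0.5 * alertness + 0.5 * usage_frequency))

def select_configuration(configs, alertness, usage_frequency):
    c = target_clutter_index(alertness, usage_frequency)
    # Prefer a configuration whose range contains the target index;
    # otherwise fall back to the range whose midpoint is closest.
    for cfg in configs:
        if cfg.clutter_min <= c <= cfg.clutter_max:
            return cfg
    return min(configs,
               key=lambda cfg: abs((cfg.clutter_min + cfg.clutter_max) / 2 - c))

configs = [
    Configuration("minimal", 0.0, 0.3),
    Configuration("standard", 0.3, 0.7),
    Configuration("dense", 0.7, 1.0),
]
print(select_configuration(configs, alertness=0.9, usage_frequency=0.8).name)  # dense
```

A real controller would refresh the context inputs continuously (e.g. from the driver monitoring system of claim 5) and, per claim 2, present the selected configuration for confirmation rather than switching silently.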
CN202180100624.9A 2021-07-15 2021-07-15 Computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle, computer program product, human-machine interface and vehicle Pending CN117651655A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2021/069723 WO2023284961A1 (en) 2021-07-15 2021-07-15 Computer-implemented method of adapting a graphical user interface of a human machine interface of a vehicle, computer program product, human machine interface, and vehicle

Publications (1)

Publication Number Publication Date
CN117651655A true CN117651655A (en) 2024-03-05

Family

ID=77155750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180100624.9A Pending CN117651655A (en) 2021-07-15 2021-07-15 Computer-implemented method of adapting a graphical user interface of a human-machine interface of a vehicle, computer program product, human-machine interface and vehicle

Country Status (3)

Country Link
CN (1) CN117651655A (en)
TW (1) TWI822186B (en)
WO (1) WO2023284961A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9493130B2 (en) * 2011-04-22 2016-11-15 Angel A. Penilla Methods and systems for communicating content to connected vehicle users based detected tone/mood in voice input
WO2013074868A1 (en) * 2011-11-16 2013-05-23 Flextronics Ap, Llc Complete vehicle ecosystem
US10300929B2 (en) 2014-12-30 2019-05-28 Robert Bosch Gmbh Adaptive user interface for an autonomous vehicle
US11036523B2 (en) * 2017-06-16 2021-06-15 General Electric Company Systems and methods for adaptive user interfaces
JP2019061559A (en) * 2017-09-27 2019-04-18 本田技研工業株式会社 Display device, display control device and vehicle
US10969236B2 (en) * 2018-12-13 2021-04-06 Gm Global Technology Operations, Llc Vehicle route control based on user-provided trip constraints
DE102019217346B4 (en) * 2019-11-11 2023-12-07 Psa Automobiles Sa Method for displaying information on a human-machine interface of a motor vehicle, computer program product, human-machine interface and motor vehicle

Also Published As

Publication number Publication date
WO2023284961A1 (en) 2023-01-19
TW202319261A (en) 2023-05-16
TWI822186B (en) 2023-11-11

Similar Documents

Publication Publication Date Title
US11449294B2 (en) Display system in a vehicle
JP6883766B2 (en) Driving support method and driving support device, driving control device, vehicle, driving support program using it
KR102306879B1 (en) Post-drive summary with tutorial
US10317900B2 (en) Controlling autonomous-vehicle functions and output based on occupant position and attention
US10053113B2 (en) Dynamic output notification management for vehicle occupant
US20130038437A1 (en) System for task and notification handling in a connected car
EP3599124B1 (en) Coordinating delivery of notifications to the driver of a vehicle to reduce distractions
US20170349184A1 (en) Speech-based group interactions in autonomous vehicles
US8400332B2 (en) Emotive advisory system including time agent
EP3072444A1 (en) Display apparatus, vehicle and display method
US20180022359A1 (en) Control for an Electronic Multi-Function Apparatus
KR20190084164A (en) Method for controlling display based on driving context and electronic device therefor
US20220360641A1 (en) Dynamic time-based playback of content in a vehicle
TWI822186B (en) Computer-implemented method of adapting a graphical user interface of a human machine interface of a vehicle, computer program product, human machine interface, and vehicle
CN113811851A (en) User interface coupling
US20230025804A1 (en) User interface for allocation of non-monitoring periods during automated control of a device
KR20230050535A (en) Display system and method for improving autonomous driving safety of electric bus
CN112035034B (en) Vehicle-mounted robot interaction method
CN117615930A (en) Method for displaying a personal home screen on a display device of a vehicle and corresponding display device
CN115675513A (en) Allocation of non-monitoring time periods during automatic control of a device
NZ760269A (en) Post-drive summary with tutorial
NZ760269B2 (en) Post-drive summary with tutorial
NZ721392B2 (en) Post-drive summary with tutorial
Riener Hypotheses and Research Questions

Legal Events

Date Code Title Description
PB01 Publication