CN111638780A - Vehicle display control method and vehicle host - Google Patents


Info

Publication number
CN111638780A
Authority
CN
China
Prior art keywords
vehicle
driver
area
display control
control method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010367148.4A
Other languages
Chinese (zh)
Inventor
潘震
李书利
耿伟峰
潘雷
冯亚兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN202010367148.4A
Publication of CN111638780A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the field of vehicles and provides a vehicle display control method and a vehicle host. The vehicle display control method comprises: acquiring a target area corresponding to the driver's line of sight on the current user interface displayed by the vehicle host, the target area being obtained by tracking the driver's eyes; acquiring, from a plurality of areas pre-divided in the current user interface, the area matching the target area as a key area; and displaying the key area in an enlarged manner in the current user interface. The invention enables enlarged display of a key area in the vehicle, making operation convenient for the driver.

Description

Vehicle display control method and vehicle host
Technical Field
The invention relates to the technical field of vehicles, in particular to a vehicle display control method and a vehicle host.
Background
With the development of vehicle technology, hard keys of a vehicle are increasingly integrated into the host and become graphic keys (soft keys) in the host, which the driver operates by touch to execute the corresponding control functions. Because the display area of the host is essentially fixed, displaying more graphic keys means shrinking the area of each one. When the host shows the navigation interface, the graphic keys are compressed still further. If a graphic key is too small, the driver must devote more attention to it, which introduces additional safety risks while driving. This is particularly true for middle-aged and elderly drivers, whose ability to distinguish light and contrast is weaker; when they divert too much attention during driving to locate a graphic key, the risk increases further.
Disclosure of Invention
In view of this, the present invention is directed to a vehicle display control method that achieves enlarged display of a key region in the vehicle, making operation convenient for the driver.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a vehicle display control method, comprising: acquiring a target area corresponding to the sight line of a driver on a current user interface displayed by a vehicle host, wherein the target area is configured to be obtained by tracking the eyeball of the driver; acquiring an area matched with the target area from a plurality of areas pre-divided in the current user interface as a key area; and the key area is displayed in an enlarged mode in the current user interface.
Preferably, acquiring the target area at which the driver gazes on the current user interface displayed by the vehicle host comprises: acquiring a plurality of frames of driver face images over a continuous duration, wherein the driver face images are configured to include a head pose and eyeball features; calculating the driver's eyeball movement angle based on the head pose and the eyeball features; and determining, based on the eyeball movement angle, the target area at which the driver gazes on the current user interface of the vehicle host.
Preferably, acquiring, from the plurality of pre-divided areas in the current user interface, the area matching the target area as the key area comprises: acquiring a first hash code for each of the plurality of pre-divided areas; acquiring a second hash code for the target area; and taking the area whose first hash code has the minimum Hamming distance to the second hash code as the key area matching the target area.
Preferably, before acquiring the target area at which the driver gazes on the current user interface displayed by the vehicle host, the vehicle display control method further includes: determining whether the vehicle host is in a night mode, wherein the night mode is configured to be triggered when the ambient light intensity outside the vehicle is less than a preset light intensity threshold and a vehicle lamp is activated; and, when the vehicle host is in the night mode, executing the step of acquiring the target area at which the driver gazes on the current user interface of the vehicle host.
Preferably, displaying the key area in an enlarged manner in the current user interface comprises: enlarging and displaying the key area in the current user interface according to a preset enlargement ratio.
Preferably, after the key area is enlarged and displayed in the current user interface, the vehicle display control method further includes: responding to the driver's touch operation on the enlarged key area, and controlling the vehicle to execute the key function associated with the touch operation.
Preferably, the vehicle display control method further includes: determining that the driver has performed a touch operation when the following condition is satisfied: the area pressed by the driver's finger on a key is greater than a preset area threshold, wherein the area threshold is configured as the product of the key's current area and a preset proportion.
Compared with the prior art, the vehicle display control method has the following advantages:
The target area obtained by tracking the driver's eyes is used to determine which part of the current user interface on the display screen the driver is observing. The current user interface is pre-divided into a plurality of regions; the region matching the target area is then found among them and taken as the key region; finally, the key region is enlarged and displayed. This makes it convenient for the driver to perform key operations and prevents the safety hazard of the driver spending too much time on key operations.
Another objective of the present invention is to provide a vehicle host that achieves enlarged display of a key area in the vehicle, making operation convenient for the driver.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a vehicle main machine is provided with a controller for executing the vehicle display control method.
Preferably, the vehicle host further includes: a camera for transmitting a plurality of frames of driver face images, captured over a continuous duration, to the controller in response to an image capture operation of the controller, wherein the driver face images are configured to include a head pose and eyeball features.
The vehicle host has the same advantages over the prior art as the vehicle display control method described above, which are not repeated here.
In addition, the present invention also provides a computer-readable storage medium having stored thereon computer program instructions for causing a machine to execute the above-described vehicle display control method.
The computer-readable storage medium has the same advantages over the prior art as the vehicle display control method described above, which are not repeated here.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a vehicle display control method according to an embodiment of the invention;
fig. 2 is a flowchart of acquiring the target area according to the embodiment of the present invention;
FIG. 3 is a flow chart of a further preferred embodiment of a vehicle display control method; and
fig. 4 is a block diagram of a vehicle main unit according to an embodiment of the present invention.
Description of reference numerals:
41: controller; 42: camera
Detailed Description
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.
The host of a vehicle is an interactive device integrated at the central control position of the vehicle; it displays vehicle information and accepts driver control input. The driver generally inputs a control signal by touching a graphic key (also called a soft key or soft switch) on the host. Touching the host while driving carries a certain safety risk, especially when there are many graphic keys and the display area is small: the driver cannot accurately find the region corresponding to the desired key and cannot complete the operation quickly. The present invention was devised in view of these circumstances. The technical solution of the present invention is described in detail below with reference to the drawings.
Fig. 1 is a flowchart of a vehicle display control method. As shown in fig. 1, the vehicle display control method includes:
s101, acquiring a target area corresponding to the sight line of a driver on a current User Interface (UI) displayed by a vehicle host.
The target area is obtained by tracking the driver's eyes. It should be noted that in the present embodiment, an image containing the position of the driver's eyeballs is acquired and, combined with an eyeball positioning technique, used to determine the target area on the UI corresponding to the driver's eyeballs.
In a preferred embodiment, the target area is acquired as shown in the flowchart of fig. 2.
S201, acquiring a plurality of frames of driver face images over a continuous duration, wherein the driver face images are configured to include a head pose and eyeball features. The multi-frame driver face images are collected by a camera as an image sequence containing the driver's eyeballs over a continuous duration. Frames that do not contain the eyeballs are removed from the collected face images to ensure the validity of the data. The head pose represents the driver's head steering angle, and the eyeball features represent the rotation angle of the driver's eyeballs.
S202, calculating the driver's eyeball movement angle based on the head pose and the eyeball features. The eyeball orientation in each frame can be calculated from that frame's head pose and eyeball features, and the driver's eyeball movement angle is obtained from the multiple frames of driver face images.
S203, determining, based on the eyeball movement angle, the target area at which the driver gazes on the current user interface of the vehicle host. The target area lies on the UI of the vehicle host.
Specifically, a rectangular coordinate sequence of eyeball gaze targets can be acquired through S101:

[(x1, y1), (x2, y2), ... (xn, yn), ... (xN, yN)], N ≥ 2.

The eyeball movement angle θ is then calculated as follows: a slope kn is computed once every z frames, yielding an angle sequence with a total of M entries:

[α1, α2, ... αm, ... αM];

where

kn = (y(n+z) - yn) / (x(n+z) - xn),

α = arctan(kn), 1 ≤ z ≤ 10.

Each eyeball movement angle corresponds to a target position on the UI, which reflects the driver's visual position.
Further preferably, in order to simplify the calculation and avoid excessive controller load, the calculation steps of S202 and S203 are started only when the driver is actually gazing at the host.
Although the target area has been obtained and the driver's visual position determined, it is not yet clear exactly what needs to be enlarged. Determining precisely which key the driver is looking at would require a highly precise eyeball positioning technology. However, current high-precision eyeball positioning techniques are too expensive, and their placement requirements cannot be adapted to a vehicle; the present embodiment therefore proceeds to S102 below.
And S102, acquiring an area matched with the target area from a plurality of areas pre-divided in the UI as a key area.
The UI is pre-divided into a plurality of regions to match the eyeball positioning technique described above (whose localization is not especially accurate). One region serves as the reference region to be enlarged, which further reduces the probability of positioning errors. For example, when the driver's gaze falls on the area containing the three adjacent keys for radio, multimedia and air conditioner, the eyeball positioning technique available at the present stage is not accurate enough to identify the exact position; if a position were chosen at random, the final key area might land on the multimedia key when the driver actually wants the air conditioner key. The present embodiment can therefore work with a low-accuracy eyeball positioning technique (such as the positioning technique of the present invention), and works all the better with a high-accuracy one.
Specifically, in this embodiment, the plurality of regions may be divided evenly or set according to actual needs. For example, radio, multimedia and air conditioner may be set as one region, and telephone and mobile phone connection as another; the specific division can be adjusted.
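The pre-division can be sketched as a simple lookup table; a minimal illustration whose key groupings follow the example above (the region names and boundaries are hypothetical):

```python
# Hypothetical pre-division of the UI into regions. The groupings follow
# the example in the text; real boundaries would come from the UI layout.
UI_REGIONS = {
    "media": ["radio", "multimedia", "air conditioner"],
    "comms": ["telephone", "mobile phone connection"],
}

def region_of(key: str) -> str:
    """Return the pre-divided region that contains a given graphic key."""
    for region, keys in UI_REGIONS.items():
        if key in keys:
            return region
    raise KeyError(f"{key!r} is not assigned to any region")

print(region_of("air conditioner"))  # -> media
```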
Further preferably, in this embodiment, the plurality of pre-divided regions in the UI are scanned based on a hash code algorithm, and a first hash code is calculated for each scanning window (region). The target area is scanned to obtain a second hash code. Among the first hash codes, the one with the minimum Hamming distance to the second hash code is selected, and its corresponding region is taken as the key region, i.e. the region to be enlarged.
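The matching step can be sketched as follows; a minimal illustration assuming each region's content has already been summarized as a small integer hash code (the hash values shown are hypothetical, not derived from any real scanning window):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash codes."""
    return bin(a ^ b).count("1")

def match_key_region(region_hashes, target_hash):
    """Return the region whose first hash code has the minimum Hamming
    distance to the target area's second hash code.

    region_hashes: dict mapping region name -> first hash code (int).
    target_hash:   second hash code (int) computed from the target area.
    """
    return min(region_hashes,
               key=lambda r: hamming(region_hashes[r], target_hash))

# Illustrative 8-bit hash codes (hypothetical values)
regions = {
    "radio/multimedia/air-conditioner": 0b10110010,
    "phone/connectivity":               0b01001101,
}
print(match_key_region(regions, 0b10110000))  # -> radio/multimedia/air-conditioner
```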
And S103, amplifying and displaying the key area in the UI.
Taking an 8-inch display screen as an example for the UI, the screen is 162.56 mm to 177 mm long. If the driver's key area is located as the region consisting of the radio key, the multimedia key and the air conditioner key, the key area comprising these three keys is displayed enlarged in the UI.
Further preferably, the enlarged display specifically means that the key area is enlarged and displayed in the current user interface according to a preset enlargement ratio, for example a ratio of 2 or 3. The user can set this as desired.
Further preferably, after S103 the vehicle display control method further includes: responding to the driver's touch operation on the enlarged key area, and controlling the vehicle to execute the key function associated with the touch operation. The touch operation is the driver clicking on the key area. The width of an adult index finger is about 15 mm ± 3 mm, and the key size can be set accordingly.
Further preferably, the driver is determined to have performed a touch operation when the following condition is satisfied: the area pressed by the driver's finger on a key is greater than a preset area threshold, where the threshold is configured as 60% of the key's current area.
The current area of a key is its enlarged area. This further preferred scheme facilitates operation after the key is enlarged, because the response range expands with it. For example, if a key has an area of 4 cm² before enlargement, the area responding to the driver's touch is only 4 cm²; after enlargement the key's area becomes 8 cm², the area responding to the touch likewise becomes 8 cm², and operation becomes simpler.
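The touch condition above reduces to a single comparison; a minimal illustration with areas in cm² and the 60% proportion taken from the text (the specific press areas are hypothetical):

```python
def is_touch(press_area_cm2: float, key_area_cm2: float,
             ratio: float = 0.60) -> bool:
    """A touch is registered when the pressed area exceeds the preset
    proportion (60% here) of the key's *current* (possibly enlarged) area."""
    return press_area_cm2 > key_area_cm2 * ratio

# Before enlargement: a 2.5 cm^2 press on a 4 cm^2 key
print(is_touch(2.5, 4.0))  # True  (threshold 2.4 cm^2)
# After enlargement to 8 cm^2 the threshold grows with the key
print(is_touch(2.5, 8.0))  # False (threshold 4.8 cm^2)
```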
Fig. 3 shows a further preferred embodiment of the vehicle display control method, which, as shown in fig. 3, further includes before S101:
S301, determining whether the vehicle host is in a night mode, wherein the night mode is configured to be triggered when the ambient light intensity outside the vehicle is less than a preset light intensity threshold and a vehicle lamp is activated.
S302, when the vehicle host is in the night mode, executing the step of acquiring the target area at which the driver gazes on the UI of the vehicle host.
In the night mode the driver needs more concentration, and keys that are too small further increase the risk. At present, in the night mode, the brightness of the UI is adjusted according to the user's settings, generally by increasing it; in this embodiment the night mode not only increases the brightness of the UI but also enlarges the key area the user is gazing at. For example, when the ambient light outside is recognized as too dark and the driver turns on the vehicle lamp, acquisition of the target area begins, making this embodiment more targeted.
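The night-mode trigger is a conjunction of the two conditions; a minimal illustration in which the 50 lux threshold is an assumed placeholder for the preset light intensity threshold:

```python
def night_mode_active(ambient_lux: float, lamp_on: bool,
                      threshold_lux: float = 50.0) -> bool:
    """Night mode is triggered when the outside light intensity is below
    the preset threshold AND the vehicle lamp is activated (both must hold).
    The 50 lux default is an illustrative assumption, not a patent value."""
    return ambient_lux < threshold_lux and lamp_on

print(night_mode_active(10.0, True))   # True:  dark outside and lamps on
print(night_mode_active(10.0, False))  # False: lamps not yet activated
print(night_mode_active(500.0, True))  # False: too bright outside
```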
Through this embodiment, in the night mode, the target area at which the driver gazes on the UI is acquired, the key area corresponding to the target area is obtained from the plurality of pre-divided areas, and display control of the vehicle host is realized by enlarging and displaying the key area. Graphic keys become smaller under the navigation interface; without the scheme of this embodiment, once the graphic keys in the UI shrink further, the driver could only complete a key touch by paying more attention, bringing additional safety risks.
Fig. 4 is a block diagram of the vehicle host of the present invention. As shown in fig. 4, the vehicle host is provided with a controller 41, and the controller 41 is configured to execute the vehicle display control method of figs. 1, 2 and 3.
Further preferably, the vehicle host further includes: a camera 42 for transmitting a plurality of frames of driver face images, captured over a continuous duration, to the controller 41 in response to an image capture operation of the controller, wherein the driver face images are configured to include a head pose and eyeball features.
The vehicle host is the vehicle's multimedia host. The camera is preferably part of a Driver Monitoring System (DMS) unit, which is mainly used to monitor the driver's driving state, in particular whether the driver is fatigued.
The local enlargement of the multimedia display screen's UI can be realized by replacing the interface with a brand-new UI, i.e. the original interface is replaced rather than enlarged in place; the switching speed must then be considered so that the change is imperceptible to the naked eye.
Compared with the prior art, the vehicle host machine and the vehicle display control method have the same advantages, and are not repeated herein.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. The kernel can be set to be one or more, and the display control of the vehicle is realized by adjusting the kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
An embodiment of the present invention provides a machine-readable storage medium having stored thereon instructions for causing a machine to execute the display control method of a vehicle described above.
The embodiment of the invention provides a processor, which is used for running a program, wherein the program executes a display control method of a vehicle when running.
The present application also provides a computer program product adapted to perform a program for initializing the steps of the display control method of the above-mentioned vehicle when executed on a data processing device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n)" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description illustrates only the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention shall be included within its protection scope.

Claims (10)

1. A vehicle display control method, characterized by comprising:
acquiring a target area corresponding to a driver's line of sight on a current user interface displayed by a vehicle host, wherein the target area is configured to be obtained by tracking the driver's eyeballs;
acquiring, from a plurality of areas pre-divided in the current user interface, an area matching the target area as a key area; and
enlarging and displaying the key area in the current user interface.
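The three steps of claim 1 can be sketched as follows. This is an illustrative simplification only: all names are hypothetical, areas are modeled as `(x, y, w, h)` rectangles, and the matching step here uses simple rectangle overlap rather than the hash-based matching of claim 3.

```python
def match_region(target, regions):
    """Pick the pre-divided region that best overlaps the gaze target area.

    target and each region are (x, y, w, h) rectangles.
    """
    def overlap(a, b):
        # Width and height of the intersection rectangle (0 if disjoint).
        dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
        dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
        return max(dx, 0) * max(dy, 0)
    return max(regions, key=lambda r: overlap(target, r))

def display_control(target, regions, scale=1.5):
    """Match the gaze target to a key region, then enlarge that region
    about its centre by the preset scale (1.5 is an assumed value)."""
    x, y, w, h = match_region(target, regions)
    return (x - w * (scale - 1) / 2, y - h * (scale - 1) / 2,
            w * scale, h * scale)
```

The returned rectangle would then be rendered enlarged on the current user interface; the rest of the screen layout is outside the scope of this sketch.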
2. The vehicle display control method according to claim 1, wherein acquiring the target area at which the driver gazes on the current user interface displayed by the vehicle host comprises:
acquiring multiple frames of driver face images over a duration, wherein the driver face images are configured to include a head pose and eyeball features;
calculating an eyeball movement angle of the driver based on the head pose and the eyeball features; and
determining the target area at which the driver gazes on the current user interface of the vehicle host based on the eyeball movement angle.
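A minimal sketch of the angle-to-screen step in claim 2, under strong simplifying assumptions: head pose and eyeball rotation are combined additively as yaw/pitch angles, and the gaze is projected onto a screen plane a fixed distance away. All names and the 600 mm default are hypothetical; a real system would calibrate this mapping.

```python
import math

def gaze_point(head_yaw_deg, head_pitch_deg, eye_yaw_deg, eye_pitch_deg,
               distance_mm=600.0):
    """Project the combined head + eyeball angles onto a screen plane
    distance_mm away, returning (x, y) offsets from the straight-ahead point."""
    yaw = math.radians(head_yaw_deg + eye_yaw_deg)
    pitch = math.radians(head_pitch_deg + eye_pitch_deg)
    return distance_mm * math.tan(yaw), distance_mm * math.tan(pitch)
```

The resulting point would be binned into one of the pre-divided interface areas to obtain the target area.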
3. The vehicle display control method according to claim 1, wherein acquiring, from the plurality of areas pre-divided in the current user interface, the area matching the target area as the key area comprises:
acquiring a first hash code of each of the plurality of pre-divided areas;
acquiring a second hash code of the target area; and
taking the area whose first hash code has the minimum Hamming distance from the second hash code as the key area matching the target area.
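The hash-matching step of claim 3 can be sketched with a simple average hash over a grayscale patch and bitwise Hamming distance. The patent does not specify the hash function, so `average_hash` is an assumed stand-in; the names are hypothetical.

```python
def average_hash(pixels):
    """Hash a flattened grayscale patch (e.g. 8x8 -> 64 ints in 0..255):
    one bit per pixel, set when the pixel is at or above the patch mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return bin(a ^ b).count("1")

def best_match(target_hash, region_hashes):
    """Index of the region whose hash has minimum Hamming distance
    from the target area's hash."""
    return min(range(len(region_hashes)),
               key=lambda i: hamming(target_hash, region_hashes[i]))
```

With the region hashes precomputed for the pre-divided areas, matching reduces to one XOR and popcount per region.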
4. The vehicle display control method according to any one of claims 1 to 3, wherein before acquiring the target area at which the driver gazes on the current user interface displayed by the vehicle host, the vehicle display control method further comprises:
determining whether the vehicle host is in a night mode, wherein the night mode is configured to be triggered when the ambient light intensity outside the vehicle is less than a preset light intensity threshold and a vehicle lamp is activated; and
in a case where the vehicle host is in the night mode, executing the step of acquiring the target area at which the driver gazes on the current user interface of the vehicle host.
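The night-mode trigger of claim 4 is a two-part condition. A one-line sketch, with a hypothetical 50 lux threshold (the patent leaves the preset value unspecified):

```python
def night_mode_active(ambient_lux, lamp_on, lux_threshold=50.0):
    """Night mode triggers when ambient light outside the vehicle is below
    the preset threshold AND a vehicle lamp is activated."""
    return ambient_lux < lux_threshold and lamp_on
```

Only when this returns true would the gaze-tracking step of claim 1 be executed.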
5. The vehicle display control method according to claim 1, wherein enlarging and displaying the key area in the current user interface comprises:
enlarging and displaying the key area in the current user interface according to a preset magnification scale.
6. The vehicle display control method according to claim 1, wherein after enlarging and displaying the key area in the current user interface, the vehicle display control method further comprises:
in response to a touch operation by the driver on the enlarged key area, controlling the vehicle to execute a key function associated with the touch operation.
7. The vehicle display control method according to claim 6, characterized by further comprising:
determining that the driver has performed a touch operation if the following condition is satisfied:
the area pressed by the driver's finger on a key is larger than a preset area threshold, wherein the area threshold is configured to be the product of the current area of the key and a preset proportion.
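The touch-validation condition of claim 7, sketched directly: a press counts only when the contacted area exceeds the key's current area times a preset proportion. The proportion 0.3 is an assumed value; the names are hypothetical.

```python
def is_valid_touch(pressed_area, key_area, proportion=0.3):
    """Register a touch only if the finger-pressed area exceeds the
    threshold key_area * proportion (claim 7's product condition)."""
    return pressed_area > key_area * proportion
```

Because the key area grows when the region is enlarged (claim 5), the threshold scales with it, which helps reject grazing touches on a magnified interface.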
8. A vehicle host, characterized in that the vehicle host is provided with a controller for executing the vehicle display control method according to any one of claims 1 to 7.
9. The vehicle host according to claim 8, further comprising: a camera configured to transmit multiple frames of driver face images captured over a duration to the controller in response to an image capturing operation of the controller, wherein the driver face images are configured to include a head pose and eyeball features.
10. A computer-readable storage medium having stored thereon computer program instructions for causing a machine to execute the vehicle display control method according to any one of claims 1 to 7.
CN202010367148.4A 2020-04-30 2020-04-30 Vehicle display control method and vehicle host Pending CN111638780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010367148.4A CN111638780A (en) 2020-04-30 2020-04-30 Vehicle display control method and vehicle host

Publications (1)

Publication Number Publication Date
CN111638780A true CN111638780A (en) 2020-09-08

Family

ID=72329968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010367148.4A Pending CN111638780A (en) 2020-04-30 2020-04-30 Vehicle display control method and vehicle host

Country Status (1)

Country Link
CN (1) CN111638780A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799515A (en) * 2021-02-01 2021-05-14 重庆金康赛力斯新能源汽车设计院有限公司 Visual interaction method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885573A (en) * 2012-12-19 2014-06-25 财团法人车辆研究测试中心 Automatic correction method for vehicle display system and system thereof
CN105677024A (en) * 2015-12-31 2016-06-15 北京元心科技有限公司 Eye movement detection tracking method and device, and application of eye movement detection tracking method
CN107943368A (en) * 2017-11-30 2018-04-20 珠海市魅族科技有限公司 Display control method, device, computer installation and computer-readable recording medium
CN109835260A (en) * 2019-03-07 2019-06-04 百度在线网络技术(北京)有限公司 A kind of information of vehicles display methods, device, terminal and storage medium
CN109968979A (en) * 2019-03-14 2019-07-05 百度在线网络技术(北京)有限公司 Vehicle-mounted projection processing method, device, mobile unit and storage medium
CN109976528A (en) * 2019-03-22 2019-07-05 北京七鑫易维信息技术有限公司 A kind of method and terminal device based on the dynamic adjustment watching area of head




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination