CN114684176A - Control method, control device, vehicle, and storage medium - Google Patents

Control method, control device, vehicle, and storage medium Download PDF

Info

Publication number
CN114684176A
Authority
CN
China
Prior art keywords
vehicle
target
interaction
driver
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011583442.5A
Other languages
Chinese (zh)
Inventor
曹书峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qoros Automotive Co Ltd
Original Assignee
Qoros Automotive Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qoros Automotive Co Ltd filed Critical Qoros Automotive Co Ltd
Priority to CN202011583442.5A priority Critical patent/CN114684176A/en
Publication of CN114684176A publication Critical patent/CN114684176A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means

Abstract

The application discloses a control method for a graphical user interface of a vehicle. The control method includes: acquiring visual information of a driver of the vehicle; and, in response to an interaction target determined according to the visual information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface. With this control method, the driver of the vehicle can interact with the vehicle's graphical user interface through small motion changes alone, such as changes of the eyes and expression or nodding and shaking the head, which replaces manual touch operation to a certain extent and effectively improves driving safety. The application also discloses a control device, a vehicle and a storage medium.

Description

Control method, control device, vehicle, and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular to a control method for a graphical user interface of a vehicle, a control apparatus, a vehicle, and a storage medium.
Background
With the development of automotive electronics, vehicles are becoming increasingly intelligent. A user can interact with the vehicle through the vehicle control system on the vehicle-mounted central control screen, thereby achieving intelligent control of the vehicle. In the related art, the driver usually has to observe and determine a target and then interact with it through touch operations; however, excessive touch operation may distract the driver during driving, and occupying the driver's hands and eyes while driving also poses certain safety hazards.
Disclosure of Invention
In view of this, embodiments of the present application provide a control method for a graphical user interface of a vehicle, a control apparatus, a vehicle, and a storage medium.
The application provides a control method for a graphical user interface of a vehicle, the control method comprising:
acquiring visual information of a driver of the vehicle;
and, in response to an interaction target determined according to the visual information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
In some implementations, the vehicle includes a plurality of cameras, and the obtaining visual information of a driver of the vehicle includes:
and acquiring eyeball information of the driver collected by the plurality of cameras to obtain the visual information.
In some implementations, the control method further includes:
obtaining expression information of the driver;
the controlling the interaction target to perform a predetermined operation to realize interaction with the graphical user interface in response to the interaction target determined according to the visual information comprises:
and, in response to an interaction target determined according to the eyeball information and the expression information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
In some implementations, the vehicle is communicatively coupled to a cloud server, the server configured to:
receiving the eyeball information and the expression information sent by the vehicle;
determining a visual center of the driver according to the eyeball information and determining a target corresponding to the visual center;
determining the confirmation information of the driver to the target according to the expression information to determine that the target is the interaction target;
sending confirmation information of the interaction target to the vehicle;
the controlling, in response to the interaction target determined according to the visual information and the expression information, the interaction target to perform the predetermined operation so as to realize the interaction with the graphical user interface comprises the following steps:
and controlling the interaction target to perform a predetermined operation to realize interaction with the graphical user interface in response to the received confirmation information fed back by the server.
In some implementations, the driver is wearing a wearable device configured to emit microwave signals, the vehicle includes a microwave detection device, and the obtaining visual information of the driver of the vehicle includes:
and acquiring the visual information according to the microwave signal detected by the microwave detection equipment.
In some implementations, the controlling the interaction target to perform a predetermined operation to achieve interaction with the graphical user interface in response to the interaction target determined according to the visual information includes:
detecting the transmitting direction of the microwave signal;
and determining the target corresponding to the transmitting direction as the interactive target.
In some implementations, the determining that the target corresponding to the transmitting direction is the interaction target includes:
detecting the dwell time of the microwave signal in the transmitting direction;
and determining the target corresponding to the transmitting direction as the interactive target when the dwell time is greater than a predetermined time.
The present application also provides a control device for a graphical user interface of a vehicle, the control device comprising:
an acquisition module for acquiring visual information of a driver of the vehicle;
and a control module configured to, in response to the interaction target determined according to the visual information, control the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
The application also provides a vehicle comprising a memory and a processor, wherein the memory stores a computer program, and the computer program is executed by the processor to realize the control method of the graphical user interface for the vehicle.
The present application also provides a non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the control method for a graphical user interface of a vehicle.
In the control method, the control device, the vehicle and the storage medium for the vehicle graphical user interface, interaction with the graphical user interface is realized by acquiring visual information of the driver of the vehicle and, in response to an interaction target determined according to the visual information, controlling the interaction target to perform a predetermined operation. The driver of the vehicle can interact with the vehicle's graphical user interface through small motion changes alone, such as changes of the eyes and expression or nodding the head, which replaces manual touch operation to a certain extent. Since prolonged hand-and-eye coordination is no longer needed to interact with the graphical user interface, the safety of the driving process is effectively improved.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 2 is a block diagram of an apparatus for a control method according to some embodiments of the present application;
FIG. 3 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 4 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 5 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 6 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 7 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 8 is a schematic diagram illustrating an example control method according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Referring to fig. 1, the present application provides a control method for a graphical user interface of a vehicle, comprising:
s10: acquiring visual information of a driver of a vehicle;
s20: and, in response to the interaction target determined according to the visual information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
Referring to fig. 2, the present embodiment also provides a control device 100 for a graphical user interface of a vehicle, and the control method of the present embodiment may be implemented by the control device 100. The control device 100 includes an acquisition module 110 and a control module 120. S10 may be implemented by the obtaining module 110 and S20 may be implemented by the control module 120. In other words, the obtaining module 110 is used to obtain visual information of a driver of the vehicle, and the control module 120 is configured to control the interaction target to perform a predetermined operation to achieve interaction with the graphical user interface in response to the interaction target determined according to the visual information.
The embodiment of the application also provides a vehicle. The vehicle includes a memory and a processor. The memory has stored therein a computer program, and the processor is configured to obtain visual information of a driver of the vehicle, and to control the interactive target to perform a predetermined operation to achieve interaction with the graphical user interface in response to the interactive target determined based on the visual information.
With the development of automobile intelligence, a driver can interact with the vehicle through a graphical user interface (UI), such as the UI of the vehicle-mounted central control screen, thereby achieving intelligent control of the vehicle. The UI of the central control screen contains various APPs and applications, such as a vehicle navigation APP, air-conditioner control and music playback. The driver selects the required APP through the UI of the central control screen. After the selected APP is entered, the UI of the central control screen switches to the UI of that APP, and the driver then makes further selections.
Specifically, when the driver enables the control method of the present application, the corresponding module may start to acquire visual information of the driver of the vehicle, for example images or video of the driver, including static and dynamic information, captured by a smart camera. When the driver needs to open a certain APP, only a small motion change is required, for example looking at the required APP; after the camera captures the focus point of the driver's binocular line of sight, the APP the driver wants to open is identified through computations such as cropping and feature extraction by the corresponding model. The UI interface is strictly divided into N x N sub-regions so as to correspond to the display positions of the APPs.
Referring to fig. 8, for example, if the UI interface is 40 cm long and 20 cm wide and N is 5, the interface can display 25 APPs, each occupying an area 8 cm long and 4 cm wide. When the two-dimensional coordinate of the driver's binocular focus point is calculated from the visual information to be O (6 cm, 20 cm), the point lies within the region of the air-conditioner APP, namely a width of 4 cm to 8 cm and a length of 16 cm to 24 cm. The position of the APP the driver wants to open can thus be determined, as illustrated by the sketch below.
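The mapping from a computed gaze point to an APP cell in the example above is a simple grid lookup. The following Python sketch illustrates that lookup under the assumptions of the example (a 40 cm x 20 cm interface divided into a 5 x 5 grid); the function and variable names are illustrative assumptions, not part of the original disclosure.

```python
# Minimal sketch of mapping a binocular gaze point to a UI sub-region,
# assuming the 40 cm x 20 cm, 5 x 5 grid of the example above.
# Names (UI_LENGTH_CM, gaze_to_cell, ...) are illustrative assumptions.

UI_LENGTH_CM = 40.0   # horizontal extent of the UI
UI_WIDTH_CM = 20.0    # vertical extent of the UI
N = 5                 # grid is N x N, i.e. 25 APP cells

def gaze_to_cell(width_cm: float, length_cm: float):
    """Return (row, col) of the APP cell containing gaze point O(width, length)."""
    cell_w = UI_WIDTH_CM / N      # 4 cm per cell along the width
    cell_l = UI_LENGTH_CM / N     # 8 cm per cell along the length
    row = min(int(width_cm // cell_w), N - 1)
    col = min(int(length_cm // cell_l), N - 1)
    return row, col

# Example from the description: O(6 cm, 20 cm) falls in the cell spanning
# width 4-8 cm and length 16-24 cm, here assumed to hold the air-conditioner APP.
print(gaze_to_cell(6.0, 20.0))   # -> (1, 2)
```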
Further, the visual information of the driver, such as small motion changes like raised mouth corners or nodding, can be captured continuously, and whether the driver intends to perform an opening operation on the determined APP is recognized by computing on this visual information. The driver may also operate by emitting electromagnetic waves such as microwaves through a wearable device, in which case the microwaves and their transmitting direction constitute the acquired visual information.
Further, after an interaction target such as an APP is determined according to the visual information, the APP is controlled to perform a predetermined operation, such as opening, switching or closing the APP. After the UI interface of the APP is entered, further similar operations can be carried out according to the visual information: the driver can look at the corresponding control button of the APP to trigger a click.
For example, when the driver wants to adjust the air-conditioning system while driving, the method is first enabled and the camera starts capturing visual information of the driver. When the driver looks at the air-conditioner display area of the central control screen and raises the corners of the mouth, the camera sends the visual information to the corresponding server, which computes and determines that the driver currently wants to turn on the air-conditioning system, and the corresponding operation is then executed, for example opening the air-conditioner APP. If the driver wants to continue with the next operation, for example raising the temperature, the driver looks at the temperature-increase control '+' and raises the mouth corners again, and the camera keeps sending the visual information to the server for computation so as to perform the temperature-increase operation.
It should be noted that the driver does not need to look at the UI interface for a long time; before performing an operation, the driver only needs to glance at the relevant area of the UI interface and then immediately look back at the road, and subsequent actions such as raising the mouth corners can be performed while glancing or after looking away.
In this way, the visual information of the driver of the vehicle is acquired and, in response to the interaction target determined according to the visual information, the interaction target is controlled to perform a predetermined operation so as to realize interaction with the graphical user interface. The driver of the vehicle can interact with the vehicle's graphical user interface through small motion changes alone, such as changes of the eyes and expression or nodding and shaking the head, which replaces manual touch operation to a certain extent. Since prolonged hand-and-eye interaction with the graphical user interface is no longer required, the safety of the driving process is effectively improved.
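As a rough illustration of the overall flow described above (gaze selects a candidate target, a small expression such as a raised mouth corner confirms it), the following hedged Python sketch shows one possible event loop; the helper functions capture_frame, estimate_gaze, detect_expression and open_app are hypothetical placeholders for the gaze-estimation and expression-recognition modules, not functions defined by this application.

```python
# Illustrative sketch of the gaze-plus-expression interaction loop.
# All callables passed in are hypothetical placeholders for the modules
# described in the text (camera capture, gaze estimation, expression detection).

def interaction_loop(ui_grid, capture_frame, estimate_gaze, detect_expression, open_app):
    candidate = None
    while True:
        frame = capture_frame()
        gaze_xy = estimate_gaze(frame)           # 2D gaze point on the UI, or None
        if gaze_xy is not None:
            candidate = ui_grid.app_at(gaze_xy)  # remember the last APP looked at
        expression = detect_expression(frame)    # e.g. "mouth_corner_raised", "nod"
        if candidate and expression == "mouth_corner_raised":
            open_app(candidate)                  # predetermined operation, e.g. open
            candidate = None
```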
Referring to fig. 3, in some embodiments, the vehicle includes a plurality of cameras, and S10 includes:
s101: acquiring the eyeball information of the driver collected by the plurality of cameras to obtain the visual information.
In some implementations, S101 may be implemented by the obtaining module 110. In other words, the obtaining module 110 is configured to obtain eyeball information of the driver collected by the plurality of cameras to obtain the visual information.
In some implementations, the processor is configured to obtain eyeball information of the driver collected by the plurality of cameras to obtain the visual information.
In particular, the vehicle may be provided with a plurality of cameras for obtaining visual information of the driver. The positions of the cameras can be set according to the actual situation, for example at the corners of the vehicle-mounted central control screen or above it, as long as the position of the driver's visual center on the UI interface can be identified from the driver's visual information; the present application does not limit the specific mounting positions.
The number of cameras is four or more, so that the focus point of the driver's binocular line of sight can be obtained relatively accurately and the position of the APP the driver is looking at can be determined. To realize this function, four or more cameras, for example four, capture the visual information of the driver from four directions of the vehicle-mounted central control screen, namely above, below, left and right.
Referring to fig. 4, in some embodiments, the control method further includes:
s30: the expression information of the driver is acquired.
Further, S20 includes:
s201: and, in response to the interaction target determined according to the eyeball information and the expression information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
In some implementations, S30 may be implemented by the obtaining module 110, and S201 may be implemented by the control module 120. In other words, the obtaining module 110 is configured to obtain the expression information of the driver, and the control module 120 is configured to control the interaction target to perform a predetermined operation to implement interaction with the graphical user interface in response to the interaction target determined according to the eyeball information and the expression information.
In some implementations, the processor is configured to acquire expression information of the driver, and to control the interaction target to perform a predetermined operation to achieve interaction with the graphical user interface in response to the interaction target determined according to the eyeball information and the expression information.
Specifically, the expression information of the driver includes mouth information, head information, or the like, for example raised mouth corners or puckered lips, or nodding and shaking of the head.
Video or dynamic images are acquired through the plurality of cameras, for example four cameras, and the acquired information is then preprocessed and subjected to face detection. After the face is recognized, the landing point of the driver's binocular line-of-sight focus is tracked using a corresponding gaze-point estimation technique. Similarly, the mouth information can be extracted to recognize whether the mouth corners are raised or the lips are puckered, or whether the head is nodding or shaking can be determined through facial behavior recognition.
Specifically, the face-detection method includes: performing coordinate transformation on the acquired video or dynamic image information to obtain pixel coordinate maps; extracting ORB (Oriented FAST and Rotated BRIEF) features from the 4 coordinate maps and performing rough matching; eliminating mismatched points with the Random Sample Consensus (RANSAC) algorithm and fitting an initial value of the homography matrix; refining with the Levenberg-Marquardt nonlinear iterative minimization method; and generating a panoramic coordinate view after image registration, fusion and stitching.
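A minimal sketch of the ORB matching and RANSAC homography step for two of the camera views, using OpenCV, is shown below; it is an assumption about how this step could be realized, not the patent's reference implementation.

```python
# Sketch: register two camera views with ORB features + RANSAC homography (OpenCV).
# Assumes img_a and img_b are grayscale numpy arrays from two of the four cameras.
import cv2
import numpy as np

def register_pair(img_a, img_b):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Rough matching with Hamming distance, then keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched points and fits an initial homography;
    # OpenCV then refines it on the inliers (Levenberg-Marquardt).
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```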
Further, a plurality of weak classifiers can be trained using Haar features or LBP features and combined into the final classifier for face detection.
When a face image is detected, the four cameras further provide a left-eye image, a right-eye image, the face image and the face position. These four inputs are processed by four branches respectively and fused into a two-dimensional coordinate (x, y) as output. The algorithm can use an extreme learning machine, a single-hidden-layer feedforward neural network. The two-dimensional coordinate (x, y) is then matched against the N x N sub-regions of the UI interface, such as the vehicle-mounted central control screen, to compute the APP on which the driver's binocular line of sight currently lands.
Further, after the face image is detected, a region of interest (ROI) of the driver's mouth can be cropped, and raised mouth corners can be located using a convolutional neural network. Nodding or head-shaking motions can likewise be recognized.
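As one hedged illustration of the face-detection and mouth-ROI steps, the OpenCV sketch below detects a face with a Haar cascade and crops the lower third of the face as a mouth region that could then be fed to a classifier; the cascade file and the lower-third heuristic are assumptions, not parameters specified by the application.

```python
# Sketch: Haar-cascade face detection and mouth ROI extraction (OpenCV).
# The cascade file and the "lower third of the face" heuristic are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                       # take the first detected face
    return frame_bgr[y + 2 * h // 3 : y + h,    # lower third of the face,
                     x : x + w]                 # to be passed to a CNN classifier
```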
In this way, the driver only needs to look at a certain APP area of the UI, and by capturing the driver's visual information with the plurality of cameras, the APP the driver wants to open can be identified. Further, the opening operation is performed on the determined APP by recognizing the driver's expression information. Manual touch operation is thereby replaced to a certain extent. Since prolonged hand-and-eye interaction with the graphical user interface is no longer required, the safety of the driving process is effectively improved.
Referring to fig. 5, in some embodiments, the vehicle is communicatively coupled to a cloud server, and the cloud server is configured to perform:
s200: receiving eyeball information and expression information sent by a vehicle;
s300: determining a visual center of a driver according to the eyeball information and determining a target corresponding to the visual center;
s400: determining the confirmation information of the driver to the target according to the expression information to determine the target as an interactive target;
s500: sending confirmation information of the interaction target to the vehicle;
further, S201 includes:
s2011: and controlling the interaction target to perform a preset operation to realize interaction with the graphical user interface in response to the received confirmation information fed back by the server.
Specifically, the server may be a cloud server mainly used for computing on and recognizing the acquired information. When the control method is enabled, the cameras of the vehicle capture eyeball information and expression information of the driver and continuously send them to the cloud server in real time. The server computes on the eyeball information together with the UI interface and the relevant parameter information of the APPs, determines the interaction target corresponding to the driver's binocular focus point, that is, the APP the driver currently wants to open, and sends it to the vehicle. Meanwhile, mouth information such as raised mouth corners is further recognized from the acquired expression information; if it is determined that the driver's mouth corners are currently raised, an opening control instruction can be generated, and the confirmation information of the interaction target together with the control instruction is sent to the vehicle.
When the vehicle receives the confirmed interaction target and its control instruction, for example when the interaction target is the navigation control system and the control instruction is "open", it executes the corresponding operation, that is, it opens the navigation control system APP.
In this way, the computation and recognition of the control method are performed by the cloud server, which generates a control instruction and returns it to the vehicle, and the vehicle executes the corresponding operation, as sketched below.
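The server-side decision flow described above could look roughly like the following sketch: the server fuses gaze and expression information into a confirmed interaction target and a control instruction, then returns both to the vehicle. The function names and the message format are hypothetical assumptions.

```python
# Rough sketch of the cloud-server side: fuse gaze + expression into a
# confirmed interaction target and a control instruction for the vehicle.
# resolve_target() and the message format are illustrative assumptions.

def handle_frame(msg, ui_layout, resolve_target):
    """msg is assumed to carry 'eye_info' and 'expression' fields sent by the vehicle."""
    gaze_xy = msg["eye_info"]                    # e.g. fused 2D gaze coordinate
    target = resolve_target(gaze_xy, ui_layout)  # APP under the gaze point, or None
    if target is None:
        return None
    if msg["expression"] == "mouth_corner_raised":
        # Confirmation detected: return target plus an 'open' control instruction.
        return {"target": target, "instruction": "open"}
    return None                                  # looked at, but not confirmed
```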
Referring to fig. 6, in some embodiments, the driver wears a wearable device, the wearable device is configured to transmit a microwave signal, the vehicle includes a microwave detection device, and S10 further includes:
s102: and acquiring visual information according to the microwave signal detected by the microwave detection equipment.
In some implementations, S102 may be implemented by the obtaining module 110. Alternatively, the obtaining module 110 is configured to obtain the visual information according to the microwave signal detected by the microwave detecting device.
In some embodiments, the processor is configured to obtain the visual information based on the detection of the microwave signal by the microwave detection device.
In particular, the driver may wear a wearable device for transmitting microwave signals, such as glasses or a helmet that can emit microwaves. The glasses can be worn when the driver needs to operate the UI interface. Further, the driver can turn on the microwave control function through an operation on the steering wheel. When the function is enabled, the driver quickly turns the head to look at the APP to be opened on the UI interface and keeps the gaze there briefly, for example for 1 second, and a microwave detection device such as a radar can detect and recognize the microwave information. The microwave detection device can be installed at the rear of the vehicle-mounted central control screen to receive the microwaves.
Referring again to fig. 6, in some embodiments, S20 further includes:
s202: detecting the transmitting direction of the microwave signal;
s203: and determining the target corresponding to the transmitting direction as an interactive target.
In some implementations, S202 and S203 may be implemented by the control module 120. In other words, the control module 120 is configured to detect the transmitting direction of the microwave signal and to determine the target corresponding to the transmitting direction as the interaction target.
In some implementations, the processor is configured to detect a transmission direction of the microwave signal, and determine that a target corresponding to the transmission direction is an interaction target.
Specifically, after the microwave function is turned on, the driver quickly turns the head to look at the APP to be opened on the UI interface, and the microwave detection device such as a radar detects and recognizes the microwave information. Further, according to the transmitting direction of the microwaves and in combination with the UI interface and the relevant parameter information of the APPs, the region where the microwaves dwell is determined, thereby obtaining the APP the driver wants to open. After the APP is determined, a click can be executed directly to open it, or it can be opened after a second microwave transmission or another further confirmation operation.
In this way, the interaction target is determined by the microwaves emitted from the wearable device, which avoids manual touch control by the driver to a certain extent and reduces the safety risk of hand-eye coordination. In addition, no computation or recognition model needs to be added to the server, so the implementation is simple. It should be noted that, because this scheme requires the driver's line of sight to dwell briefly, it is more suitable for UI interface control when the vehicle is not being driven.
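As a rough geometric illustration of determining the dwell region from the transmitting direction, the sketch below projects an assumed yaw/pitch direction of the microwave beam onto the screen plane; it could then reuse the same grid lookup as for the gaze point. The angles, the head-to-screen distance and the coordinate conventions are assumptions for illustration only.

```python
# Sketch: project the microwave transmitting direction onto the screen plane
# and obtain a hit point that can be mapped to a UI sub-region. The geometry
# conventions (yaw/pitch relative to the screen normal, head-to-screen distance)
# are assumptions, not parameters from the original disclosure.
import math

def beam_hit_point(yaw_deg, pitch_deg, head_to_screen_cm, head_offset_cm=(0.0, 0.0)):
    """Return (width_cm, length_cm) where the beam meets the screen plane."""
    dx = head_to_screen_cm * math.tan(math.radians(yaw_deg))    # along screen length
    dy = head_to_screen_cm * math.tan(math.radians(pitch_deg))  # along screen width
    return head_offset_cm[0] + dy, head_offset_cm[1] + dx

# The resulting point could then be mapped to an APP cell with the same grid
# lookup used for the gaze point, e.g. gaze_to_cell(*beam_hit_point(10, -3, 60)).
```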
Referring to fig. 7, in some embodiments, S203 includes:
s2031: detecting the dwell time of the microwave signal in the transmitting direction;
s2032: and determining the target corresponding to the transmitting direction as the interaction target when the dwell time is greater than a predetermined time.
In some implementations, S2031 and S2032 may be implemented by the control module 120. In other words, the control module 120 is configured to detect the dwell time of the microwave signal in the transmitting direction and to determine the target corresponding to the transmitting direction as the interaction target when the dwell time is greater than the predetermined time.
In some embodiments, the processor is configured to detect the dwell time of the microwave signal in the transmitting direction and to determine the target corresponding to the transmitting direction as the interaction target when the dwell time is greater than the predetermined time.
Specifically, after the microwave function is started, the driver quickly turns the head to look at the APP to be opened on the UI interface and keeps the gaze there for at least a predetermined time, for example 1 second; the microwave detection device such as a radar detects and recognizes the microwave information and applies the predetermined time as a threshold. If the microwave dwell time is less than 1 second, no recognition is performed, for example when the driver has merely swept his or her gaze across the area. Conversely, if the microwave dwell time is greater than or equal to 1 second, recognition is started: according to the transmitting direction of the microwaves and in combination with the UI interface and the relevant parameter information of the APPs, the region where the microwaves dwell is determined, thereby obtaining the APP the driver wants to open. After the APP is determined, a click can be executed directly to open it, or it can be opened after a second microwave transmission or another further confirmation operation.
In this way, this implementation adds a predetermined time to the control method for UI interface interaction through the microwave-emitting wearable device. The predetermined time can exclude certain misjudgments, for example when the driver merely sweeps across an area rather than wanting to open the APP belonging to it, thereby effectively improving the accuracy of the control method. A minimal sketch of this dwell-time check is given below.
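A minimal sketch of the dwell-time check, assuming the microwave detector reports the pointed-at UI cell as timestamped samples; the 1-second threshold reflects the example above, and the helper names are assumptions.

```python
# Sketch of the dwell-time filter for the microwave-pointing scheme.
# Assumes samples are (timestamp_seconds, ui_cell) pairs from the detector;
# DWELL_THRESHOLD_S reflects the 1-second example in the description.

DWELL_THRESHOLD_S = 1.0

def confirmed_target(samples):
    """Return the UI cell the beam stayed on for at least the threshold, else None."""
    start_t, current = None, None
    for t, cell in samples:
        if cell != current:
            start_t, current = t, cell           # beam moved to a new cell
        elif current is not None and t - start_t >= DWELL_THRESHOLD_S:
            return current                       # dwelled long enough: this is the target
    return None
```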
In some implementations, microwave emission can be combined with face detection. Specifically, after the microwave function is started, the driver quickly turns the head to look at the APP to be opened on the UI interface and keeps the gaze there for at least a predetermined time, for example 1 second; the microwave detection device such as a radar detects and recognizes the microwave information and uses the predetermined time to decide whether to determine the interaction target. If the microwave dwell time is less than 1 second, no recognition is performed, for example when the driver has merely swept across the area. Conversely, if the microwave dwell time is greater than or equal to 1 second, the target can be determined as the interaction target. Further, whether to perform an operation on the interaction target, such as opening it, can be determined according to the driver's expression information, including mouth information such as raised mouth corners or puckered lips, or small head motions such as nodding or shaking.
Specifically, video or dynamic images are acquired through the plurality of cameras, for example four cameras, and the acquired information is then preprocessed and subjected to face detection. After the face is recognized, the mouth information is cropped out and it is determined whether the mouth corners are raised, the lips are puckered, or the like; alternatively, nodding or shaking of the head is determined through facial behavior recognition.
In this way, the interaction target is determined by detecting the microwave emission signal, and the interaction target APP is then opened according to small motion information such as raised mouth corners; compared with the foregoing scheme, the recognition accuracy is improved.
In some implementations, after the microwave function is turned on, the driver quickly turns the head to look at the APP to be opened on the UI interface, and the microwave detection device such as a radar detects and recognizes the microwave information. Further, the interaction target can be determined and opened according to the driver's expression information, including mouth information such as raised mouth corners or puckered lips, or small head motions such as nodding or shaking.
Specifically, video or dynamic images are acquired through the plurality of cameras, for example four cameras, and the acquired information is then preprocessed and subjected to face detection. After the face is recognized, the cropped mouth information is used to determine whether the mouth corners are raised, the lips are puckered, or the like; alternatively, nodding or shaking of the head is determined through facial behavior recognition.
In this way, the detection of the microwave emission signal is combined with confirming and opening the interaction target APP according to small motion information such as raised mouth corners; compared with the foregoing scheme, no gaze dwell time is required, which effectively improves driving safety.
In summary, in the control method, the control device, the vehicle and the storage medium for the vehicle graphical user interface according to the embodiments of the present application, the visual information of the driver of the vehicle is acquired, the driver's binocular focus point is identified by an intelligent model to determine the interaction target the driver is looking at, and the determined interaction target is operated using small motion or expression information. The driver of the vehicle can interact with the vehicle's graphical user interface through small motion changes alone, such as changes of the eyes and expression or nodding the head, which replaces manual touch operation to a certain extent. Since prolonged hand-and-eye interaction with the graphical user interface is no longer required, the safety of the driving process is effectively improved. Preferably, the driver's eyeball information can be acquired through a plurality of cameras to determine the exact position of the interaction target, and the driver's expression information is then acquired to determine whether to execute the click operation. Performing confirmation and operation in separate steps can improve the accuracy of the control method to a certain extent and improve the user experience. In some implementations, interactive control of the UI can be performed through the microwave-emitting wearable device and the microwave detection device, without adding computation or recognition models to the server; this replaces manual touch operation in a simple way to a certain extent and avoids the driving safety risk caused by hand-eye touch operation.
The embodiment of the application also provides a vehicle. The vehicle includes a memory and one or more processors, one or more programs being stored in the memory and configured to be executed by the one or more processors. The program includes instructions for executing the control method for a graphical user interface for a vehicle according to any one of the above embodiments. The processor may be used to provide computational and control capabilities to support the operation of the entire vehicle. The memory provides an environment for the execution of computer-readable instructions stored therein.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media storing a computer program that, when executed by one or more processors, implements the method of controlling a graphical user interface for a vehicle of any of the embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A control method for a graphical user interface of a vehicle, the control method comprising:
acquiring visual information of a driver of the vehicle;
and, in response to an interaction target determined according to the visual information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
2. The control method of claim 1, wherein the vehicle includes a plurality of cameras, and the obtaining visual information of a driver of the vehicle includes:
and acquiring eyeball information of the driver collected by the plurality of cameras to obtain the visual information.
3. The control method according to claim 2, characterized by further comprising:
obtaining expression information of the driver;
the controlling the interaction target to perform a predetermined operation to realize interaction with the graphical user interface in response to the interaction target determined according to the visual information comprises:
and, in response to an interaction target determined according to the eyeball information and the expression information, controlling the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
4. The control method of claim 3, wherein the vehicle is communicatively coupled to a cloud server, the server configured to:
receiving the eyeball information and the expression information sent by the vehicle;
determining a visual center of the driver according to the eyeball information and determining a target corresponding to the visual center;
determining the confirmation information of the driver to the target according to the expression information to determine that the target is the interaction target;
sending confirmation information of the interaction target to the vehicle;
the controlling, in response to the interaction target determined according to the visual information and the expression information, the interaction target to perform the predetermined operation so as to realize the interaction with the graphical user interface comprises the following steps:
and controlling the interaction target to perform a predetermined operation to realize interaction with the graphical user interface in response to the received confirmation information fed back by the server.
5. The control method of claim 1, wherein the driver is wearing a wearable device for transmitting microwave signals, the vehicle includes a microwave detection device, and the obtaining visual information of the driver of the vehicle comprises:
and acquiring the visual information according to the microwave signal detected by the microwave detection equipment.
6. The control method according to claim 5, wherein the controlling the interaction target to perform a predetermined operation to achieve interaction with the graphical user interface in response to the interaction target determined according to the visual information comprises:
detecting the transmitting direction of the microwave signal;
and determining the target corresponding to the transmitting direction as the interactive target.
7. The control method according to claim 6, wherein the determining that the target corresponding to the transmitting direction is the interaction target comprises:
detecting the dwell time of the microwave signal in the transmitting direction;
and determining the target corresponding to the transmitting direction as the interaction target when the dwell time is greater than a predetermined time.
8. A control device for a graphical user interface of a vehicle, the control device comprising:
an acquisition module for acquiring visual information of a driver of the vehicle;
and a control module configured to, in response to the interaction target determined according to the visual information, control the interaction target to perform a predetermined operation so as to realize interaction with the graphical user interface.
9. A vehicle comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the control method of any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium of a computer program, wherein the computer program, when executed by one or more processors, implements the control method of any one of claims 1-7.
CN202011583442.5A 2020-12-28 2020-12-28 Control method, control device, vehicle, and storage medium Pending CN114684176A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011583442.5A CN114684176A (en) 2020-12-28 2020-12-28 Control method, control device, vehicle, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011583442.5A CN114684176A (en) 2020-12-28 2020-12-28 Control method, control device, vehicle, and storage medium

Publications (1)

Publication Number Publication Date
CN114684176A true CN114684176A (en) 2022-07-01

Family

ID=82130080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011583442.5A Pending CN114684176A (en) 2020-12-28 2020-12-28 Control method, control device, vehicle, and storage medium

Country Status (1)

Country Link
CN (1) CN114684176A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4565999A (en) * 1983-04-01 1986-01-21 Prime Computer, Inc. Light pencil
US6373961B1 (en) * 1996-03-26 2002-04-16 Eye Control Technologies, Inc. Eye controllable screen pointer
KR19980066375A (en) * 1997-01-23 1998-10-15 김광호 Helmet mouse
US20150109191A1 (en) * 2012-02-16 2015-04-23 Google Inc. Speech Recognition
US20150346820A1 (en) * 2014-06-03 2015-12-03 Google Inc. Radar-Based Gesture-Recognition through a Wearable Device
US20160041391A1 (en) * 2014-08-08 2016-02-11 Greg Van Curen Virtual reality system allowing immersion in virtual space to consist with actual movement in actual space
CN110825216A (en) * 2018-08-10 2020-02-21 北京魔门塔科技有限公司 Method and system for man-machine interaction of driver during driving

Similar Documents

Publication Publication Date Title
CN111931579B (en) Automatic driving assistance system and method using eye tracking and gesture recognition techniques
CN110167823B (en) System and method for driver monitoring
CN110703904B (en) Visual line tracking-based augmented virtual reality projection method and system
CN108919958B (en) Image transmission method and device, terminal equipment and storage medium
US9043042B2 (en) Method to map gaze position to information display in vehicle
CN111566612A (en) Visual data acquisition system based on posture and sight line
CN111709264A (en) Driver attention monitoring method and device and electronic equipment
CN111565978A (en) Primary preview area and gaze-based driver distraction detection
US20150279022A1 (en) Visualization of Spatial and Other Relationships
CN110826370B (en) Method and device for identifying identity of person in vehicle, vehicle and storage medium
CN108229345A (en) A kind of driver's detecting system
US11584378B2 (en) Vehicle-assist system
JP2012003764A (en) Reconfiguration of display part based on face tracking or eye tracking
CN113785263A (en) Virtual model for communication between an autonomous vehicle and an external observer
US11572071B2 (en) Method and system for determining awareness data
CN113646736A (en) Gesture recognition method, device and system and vehicle
KR20220004754A (en) Neural Networks for Head Pose and Gaze Estimation Using Photorealistic Synthetic Data
JP2019159518A (en) Visual state detection apparatus, visual state detection method, and visual state detection program
JP2022089774A (en) Device and method for monitoring driver in vehicle
CN114162130A (en) Driving assistance mode switching method, device, equipment and storage medium
CN114998870A (en) Driving behavior state recognition method, device, equipment and storage medium
JP2017056909A (en) Vehicular image display device
CN114684176A (en) Control method, control device, vehicle, and storage medium
CN116543266A (en) Automatic driving intelligent model training method and device guided by gazing behavior knowledge
CN111824043A (en) Automobile display screen control system and method and vehicle comprising same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination