CN112114671A - Human-vehicle interaction method and device based on human eye sight and storage medium - Google Patents
Human-vehicle interaction method and device based on human eye sight and storage medium
- Publication number
- CN112114671A (application CN202011001138.5A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- human
- driver
- eye
- processing
- Prior art date
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Ophthalmology & Optometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a human-vehicle interaction method and device based on human eye sight, and a storage medium. The method and device are applied to a vehicle and specifically: acquire a face image of the driver of the vehicle; process the face image with a neural network algorithm to obtain the driver's current sight line direction and eye movement; and operate an in-vehicle device according to the current sight line direction and the eye movement. The driver therefore does not need to operate the corresponding in-vehicle device by hand, avoiding the adverse effect on safe driving caused by operation actions that require releasing the steering wheel.
Description
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a human-vehicle interaction method and apparatus based on human eye sight, and a storage medium.
Background
It is known that during normal driving, to ensure safety, the driver's hands should ideally stay on the steering wheel and its immediate accessories and not move to other components. In practice, however, the driver is often required to perform operations that demand hand-eye coordination, such as opening and setting up navigation, starting and stopping audio/video playback, and opening and closing windows.
The inventor of the present application has found in practice that these operations requiring hand-eye coordination compromise the driver's effective control of the steering wheel and its accessories and thereby adversely affect the safe driving of the vehicle.
Disclosure of Invention
In view of the above, the present application provides a human-vehicle interaction method and device based on human eye sight, and a storage medium, to avoid the adverse effect on safe driving caused by control actions that require the driver to release the steering wheel.
In order to achieve the above object, the following solutions are proposed:
a human-vehicle interaction method based on human eye sight is applied to vehicles and comprises the following steps:
acquiring a face image of a driver of the vehicle;
processing the face image by using a neural network algorithm to obtain the current sight line direction and the eye movement of the driver;
and executing operation on the equipment in the vehicle according to the current sight line direction and the eye movement.
Optionally, the acquiring a facial image of a driver of the vehicle includes:
acquiring an image of the driver by using at least one camera device in the vehicle;
and processing the image to obtain the face image.
Optionally, the image is a visible light image and/or an infrared image.
Optionally, the processing the face image by using a neural network algorithm includes:
processing the face image by utilizing three cascaded Hourglass modules to obtain face information;
processing the face information by using a convolutional layer module to obtain a feature map comprising a plurality of eye key points, wherein the feature map comprises coordinates of each eye key point;
processing the feature map by using a Resnet network with a direct regression algorithm to obtain the current sight line direction;
and processing the feature map by using a Lenet two-classification network to obtain the eye action.
Optionally, the executing operation of the in-vehicle device in the vehicle according to the current sight line direction and the eye movement includes:
selecting target in-vehicle equipment from a plurality of in-vehicle equipment in the vehicle according to the current sight line direction;
and controlling the target in-vehicle equipment to execute operation matched with the eye action.
Optionally, the controlling the target in-vehicle device to execute an operation matched with the eye action includes:
detecting the focus position at which the current sight line direction falls on the target in-vehicle device;
detecting the eye movement;
and when the eye action meets a preset standard, controlling the target in-vehicle equipment to execute an operation matched with the focus position.
Optionally, the human-vehicle interaction method further includes the steps of:
and processing the current sight line direction by using a binary-classification support vector machine to conclude whether the driver is distracted, and sending warning information to the driver when the driver is distracted.
A human-vehicle interaction device based on human eye sight is applied to a vehicle, and comprises:
the face acquisition module is used for acquiring a face image of a driver of the vehicle;
the image processing module is used for processing the face image by utilizing a neural network algorithm to obtain the current sight line direction and the eye movement of the driver;
and the operation execution module is used for executing operation on the in-vehicle equipment in the vehicle according to the current sight line direction and the eye movement.
Optionally, the face obtaining module includes:
the camera equipment is used for collecting the image of the driver;
and the processor is used for processing the image to obtain the face image.
Optionally, the image is a visible light image and/or an infrared image.
Optionally, the image processing module includes:
the three cascaded Hourglass modules are used for processing the face image to obtain face information;
the convolutional layer module is used for processing the face information to obtain a feature map comprising a plurality of eye key points, and the feature map comprises the coordinates of each eye key point;
the Resnet network is used for processing the feature map with a direct regression algorithm to obtain the current sight line direction;
and the Lenet two-classification network is used for processing the feature map to obtain the eye action.
Optionally, the operation executing module includes:
the target selection unit is used for selecting target in-vehicle equipment from a plurality of in-vehicle equipment in the vehicle according to the current sight line direction;
and the equipment control unit is used for controlling the target in-vehicle equipment to execute the operation matched with the eye action.
Optionally, the device control unit includes:
the focus detection subunit is used for detecting the focus position at which the current sight line direction falls on the target in-vehicle device;
the action judging subunit is used for detecting the eye action;
and the control execution subunit is used for controlling the target in-vehicle device to execute the operation matched with the focus position when the eye action meets a preset standard.
Optionally, the human-vehicle interaction device further includes:
and the distraction judgment module is used for processing the current sight line direction with a binary-classification support vector machine to conclude whether the driver is distracted, and for sending warning information to the driver when the driver is distracted.
A storage medium having stored thereon program code which, when executed, implements the steps of the human-vehicle interaction method as described above.
The above technical solution shows that the application discloses a human-vehicle interaction method and device based on human eye sight, and a storage medium. The method and device are applied to a vehicle and specifically: acquire a face image of the driver of the vehicle; process the face image with a neural network algorithm to obtain the driver's current sight line direction and eye movement; and operate an in-vehicle device according to the current sight line direction and the eye movement. The driver therefore does not need to operate the corresponding in-vehicle device by hand, avoiding the adverse effect on safe driving caused by operation actions that require releasing the steering wheel.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a human-vehicle interaction method based on human eyes according to an embodiment of the present application;
FIG. 2 is a block diagram of a neural network model according to an embodiment of the present application;
FIG. 3 is a flowchart of another human-vehicle interaction method based on human eyes according to an embodiment of the present application;
FIG. 4 is a block diagram of a human-vehicle interaction device based on human eyes according to an embodiment of the present application;
fig. 5 is a block diagram of another human-vehicle interaction device based on human eyes according to an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
Fig. 1 is a flowchart of a human-vehicle interaction method based on human eyes according to an embodiment of the present application.
As shown in fig. 1, the human-vehicle interaction method of the present application is applied to a human-driven vehicle, i.e., a vehicle with a driver; the method operates in-vehicle devices based on the driver's eye sight line. The human-vehicle interaction method includes the following steps:
and S1, acquiring a face image of the driver.
That is, while the driver is driving normally, a camera device in the vehicle captures the driver's face. Specifically, the face image is acquired through the following steps:
First, at least one camera device in the vehicle photographs the driver to obtain an image of the driver. The camera device may be a visible-light camera or an infrared camera, so the resulting image is correspondingly a visible-light image or an infrared image; the advantage of an infrared camera is that it can still capture usable images when the light in the cabin is dim.
Then, a processor performs processing such as cropping and perspective correction on the visible-light or infrared image to obtain the driver's face image.
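By way of illustration, a minimal Python sketch of this acquisition step is given below, assuming an OpenCV-accessible in-cabin camera and a Haar-cascade face detector; the camera index, the detector choice, and the cropping logic are illustrative assumptions, since the patent does not prescribe a specific face-detection method ahead of the neural network stage.

```python
import cv2

# Sketch only: grab one frame from an in-cabin camera and crop the driver's
# face. The Haar cascade is a stand-in detector chosen for illustration.
cap = cv2.VideoCapture(0)          # index 0 assumed to be the in-cabin camera
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("failed to read a frame from the in-cabin camera")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces):
    x, y, w, h = max(faces, key=lambda b: b[2] * b[3])  # largest box = driver
    face_image = frame[y:y + h, x:x + w]                # input to step S2
```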
And S2, calculating the current sight line direction and the eye movement of the driver by using a neural network algorithm.
That is, a pre-trained neural network model processes the driver's face image to obtain the driver's current sight line direction and eye movement, where the eye movement may be eye rotation, eye opening, eye closing, or blinking. The neural network model comprises three cascaded Hourglass modules, a convolutional layer module, and two branches: one branch is a Resnet network and the other is a Lenet two-classification network, as shown in FIG. 2.
Specifically, the current gaze direction and eye movement are obtained by:
First, the face image is processed by the three cascaded Hourglass modules to obtain face information;
then, the face information is processed by the convolutional layer module to obtain a feature map comprising a plurality of eye key points, where the feature map contains the coordinates of each eye key point;
next, the feature map is processed by the Resnet network using a direct regression algorithm to obtain the current sight line direction;
and, in parallel with the Resnet branch, the feature map is processed by the Lenet two-classification network to obtain the eye action.
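For concreteness, the following is a minimal PyTorch sketch of the two-branch topology of FIG. 2: three cascaded hourglass-style blocks, a convolutional layer producing an eye-keypoint feature map, a pooled regression head standing in for the Resnet branch, and a small classifier standing in for the Lenet two-classification branch. All channel counts, block internals, and output dimensions are assumptions; the patent fixes only the overall structure.

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Sketch of the two-branch model described above; internals are assumed."""
    def __init__(self, n_keypoints=16):
        super().__init__()
        # Three cascaded hourglass-style encoder/decoder blocks (simplified).
        self.hourglass = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(3 if i == 0 else 64, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2),
            ) for i in range(3)])
        # Convolutional-layer module producing the eye-keypoint feature map.
        self.keypoints = nn.Conv2d(64, n_keypoints, 1)
        # Branch 1: Resnet-style direct regression to gaze (assumed yaw, pitch).
        self.gaze_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(n_keypoints, 2))
        # Branch 2: Lenet-style binary classifier for the eye action.
        self.action_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(n_keypoints * 16, 2))

    def forward(self, face):
        feat = self.keypoints(self.hourglass(face))
        return self.gaze_head(feat), self.action_head(feat)

model = GazeNet()
gaze, action_logits = model(torch.randn(1, 3, 128, 128))  # dummy face image
```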
And S3, operating the in-vehicle equipment according to the current sight line direction and the eye movement.
After the driver's current sight line direction and eye movement are obtained, the selected target in-vehicle device is operated accordingly based on the current sight line direction and the corresponding eye movement; that is, the in-vehicle device is operated without the driver's hands being involved. The specific process of this step is as follows:
First, a target in-vehicle device is selected according to the current sight line direction. For example, the vehicle contains several in-vehicle devices such as a main control screen, windows, and an air conditioner; when the current sight line direction falls on one of them, for instance the main control screen or a window, that device is selected as the target in-vehicle device.
Then, on the basis of the obtained eye movement, the target in-vehicle device is controlled to execute an operation matched with the eye movement.
In this embodiment, the specific process of controlling the target in-vehicle device is as follows:
First, the focus position at which the current sight line direction falls on the target in-vehicle device is detected, i.e., the coordinates at which the current sight line direction meets the device. For example, for the main control screen the focus position indicates which button is being looked at; for a window it indicates whether the gaze falls at the bottom, middle, or top of the window.
Then, the eye action is detected to determine whether it is a predefined action. For example, if blinking is defined as the trigger action, only a blink counts as the effective action that starts the subsequent operation.
Finally, once the eye action is found to be the effective action, the target in-vehicle device is controlled to execute the operation corresponding to the focus position.
For example, if the main control screen is off and the current sight line direction is found to fall anywhere on it, the screen lights up. If the current sight line direction is then found to rest on the play button of the main control screen and a blink is performed, the play button is driven to execute a press action, and the corresponding play operation is executed. When the current sight line direction has left the main control screen for a period of time, the screen turns off.
For a window, when the current sight line direction is detected at a certain position on the window and the driver blinks, the window is driven to open or close until the upper edge of the glass reaches the position where the sight line falls on the window, thereby opening and closing the window automatically.
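The device-selection and actuation logic of step S3 might be organized as in the sketch below. The gaze-angle regions, the blink label, and the actuation functions are hypothetical placeholders; the patent does not name concrete coordinates or device APIs.

```python
# Hypothetical mapping from gaze direction (yaw, pitch in radians) to devices.
DEVICE_REGIONS = {
    "main_screen": ((0.10, 0.45), (-0.20, 0.10)),   # (yaw range, pitch range)
    "left_window": ((-0.90, -0.40), (-0.10, 0.30)),
}

def select_target(gaze):
    """Select the target in-vehicle device whose region contains the gaze."""
    yaw, pitch = gaze
    for name, ((y0, y1), (p0, p1)) in DEVICE_REGIONS.items():
        if y0 <= yaw <= y1 and p0 <= pitch <= p1:
            return name
    return None

def focus_position(target, gaze):
    # Placeholder: project gaze angles to device-local coordinates (e.g. a
    # button on the main control screen, or a height on the window glass).
    yaw, pitch = gaze
    return (round(yaw, 3), round(pitch, 3))

def actuate(target, focus):
    # Placeholder for the real control path (light the screen, press the
    # focused button, drive the window glass to the focused height, ...).
    print(f"operate {target} at focus {focus}")

def on_frame(gaze, eye_action):
    """One control step: gaze selects the device, a blink triggers it."""
    target = select_target(gaze)
    if target is not None and eye_action == "blink":   # blink = effective action
        actuate(target, focus_position(target, gaze))

on_frame((0.2, 0.0), "blink")   # example: gaze on the main screen, then blink
```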
The above technical solution shows that this embodiment provides a human-vehicle interaction method based on human eye sight, applied to a vehicle, which specifically: acquires a face image of the driver of the vehicle; processes the face image with a neural network algorithm to obtain the driver's current sight line direction and eye movement; and operates an in-vehicle device according to the current sight line direction and the eye movement. The driver therefore does not need to operate the corresponding in-vehicle device by hand, avoiding the adverse effect on safe driving caused by operation actions that require releasing the steering wheel.
In addition, in an embodiment of the present application, the method further includes the following steps, as specifically shown in fig. 3:
and S4, detecting whether the driver is distracted.
After the driver's current sight line direction is obtained, it is processed with a binary-classification support vector machine to conclude whether the driver is distracted. If the driver is found to be distracted, warning information is sent in time to the driver, to warn the driver to drive attentively, or to other occupants of the vehicle so that they remind the driver, further ensuring safe driving.
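As a sketch, this binary distraction check could be realized with an off-the-shelf support vector machine as below; the (yaw, pitch) feature encoding and the toy training samples are assumptions made for illustration, since the patent does not specify the feature format or the training data.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: gaze directions as (yaw, pitch) pairs with labels.
X_train = np.array([[0.0, 0.0], [0.05, -0.02],    # gaze near the road: attentive
                    [0.9, 0.4], [-0.8, 0.5]])     # gaze far off-axis: distracted
y_train = np.array([0, 0, 1, 1])                  # 0 = attentive, 1 = distracted

clf = SVC(kernel="rbf").fit(X_train, y_train)     # binary-classification SVM

current_gaze = np.array([[0.85, 0.42]])
if clf.predict(current_gaze)[0] == 1:
    print("warning: driver appears distracted")   # issue the warning message
```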
Example two
Fig. 4 is a block diagram of a human-vehicle interaction device based on human eyes according to an embodiment of the present application.
As shown in fig. 4, the human-vehicle interaction device of the present application is applied to a human-driven vehicle, i.e., a vehicle with a driver; the device operates in-vehicle devices based on the driver's eye sight line. The human-vehicle interaction device comprises a face acquisition module 10, an image processing module 20, and an operation execution module 30.
The face acquisition module is used for acquiring a face image of the driver.
That is, while the driver is driving normally, a camera device in the vehicle captures the driver's face. Specifically, the module comprises a camera device and a processor.
The camera device is used for photographing the driver to obtain an image of the driver. It may be a visible-light camera or an infrared camera, so the resulting image is correspondingly a visible-light image or an infrared image; the benefit of an infrared camera is that it can still capture usable images when the light in the cabin is dim.
The processor crops and perspective-corrects the visible-light or infrared image to obtain the driver's face image.
The image processing module is used for calculating the current sight direction and the eye movement of the driver by utilizing a neural network algorithm.
That is, a pre-trained neural network model processes the driver's face image to obtain the driver's current sight line direction and eye movement, where the eye movement may be eye rotation, eye opening, eye closing, or blinking. The neural network model comprises three cascaded Hourglass modules, a convolutional layer module, and two branches: one branch is a Resnet network and the other is a Lenet two-classification network, as shown in FIG. 2.
Specifically, the working contents of each module in the model are as follows:
the three cascaded Hourglass modules are used for processing the face image to obtain face information;
the convolutional layer module is used for processing the face information to obtain a feature map comprising a plurality of eye key points, and the feature map comprises the coordinates of each eye key point;
the Resnet network processes the feature map with a direct regression algorithm to obtain the current sight line direction;
and, in parallel with the Resnet branch, the Lenet two-classification network processes the feature map to obtain the eye action.
The operation execution module is used for operating the in-vehicle equipment according to the current sight line direction and the eye movement.
After the driver's current sight line direction and eye movement are obtained, the selected target in-vehicle device is operated accordingly based on the current sight line direction and the corresponding eye movement; that is, the in-vehicle device is operated without the driver's hands being involved. The module specifically comprises a target selection unit and a device control unit.
The target selection unit is used for selecting a target in-vehicle device according to the current sight line direction. For example, the vehicle contains several in-vehicle devices such as a main control screen, windows, and an air conditioner; when the current sight line direction falls on one of them, for instance the main control screen or a window, that device is selected as the target in-vehicle device.
The device control unit is used for controlling the target in-vehicle device to execute the operation matched with the eye movement on the basis of the obtained eye movement.
In this embodiment, the device control unit specifically includes a focus detection subunit, an action determination subunit, and a control execution subunit.
The focus detection subunit is configured to detect the focus position at which the current sight line direction falls on the target in-vehicle device, i.e., the coordinates at which the current sight line direction meets the device. For example, for the main control screen the focus position indicates which button is being looked at; for a window it indicates whether the gaze falls at the bottom, middle, or top of the window.
The action judging subunit is used for detecting the eye action to determine whether it is a predefined action. For example, if blinking is defined as the trigger action, only a blink counts as the effective action that starts the subsequent operation.
The control execution subunit is used for controlling the target in-vehicle device, once the eye action is found to be the effective action, to execute the operation corresponding to the focus position.
For example, if the main control screen is off and the current sight line direction is found to fall anywhere on it, the screen lights up. If the current sight line direction is then found to rest on the play button of the main control screen and a blink is performed, the play button is driven to execute a press action, and the corresponding play operation is executed. When the current sight line direction has left the main control screen for a period of time, the screen turns off.
For a window, when the current sight line direction is detected at a certain position on the window and the driver blinks, the window is driven to open or close until the upper edge of the glass reaches the position where the sight line falls on the window, thereby opening and closing the window automatically.
It can be seen from the above technical solution that this embodiment provides a human-vehicle interaction device based on human eye sight, applied to a vehicle, which specifically: acquires a face image of the driver of the vehicle; processes the face image with a neural network algorithm to obtain the driver's current sight line direction and eye movement; and operates an in-vehicle device according to the current sight line direction and the eye movement. The driver therefore does not need to operate the corresponding in-vehicle device by hand, avoiding the adverse effect on safe driving caused by operation actions that require releasing the steering wheel.
In addition, in an embodiment of the present application, the present application further includes a distraction determining module 40, as shown in fig. 5:
the distraction judgment module is used for detecting whether the driver is distracted.
After the driver's current sight line direction is obtained, it is processed with a binary-classification support vector machine to conclude whether the driver is distracted. If the driver is found to be distracted, warning information is sent in time to the driver, to warn the driver to drive attentively, or to other occupants of the vehicle so that they remind the driver, further ensuring safe driving.
Accordingly, an embodiment of the present application further provides a storage medium having stored thereon program code adapted to be executed by a processor, the program code being configured to:
acquiring a face image of a driver of the vehicle;
processing the face image by using a neural network algorithm to obtain the current sight line direction and the eye movement of the driver;
and executing operation on the equipment in the vehicle according to the current sight line direction and the eye movement.
The refined and extended functions of the program code may be as described in the method embodiments above.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The technical solutions provided by the present invention are described in detail above, and the principle and the implementation of the present invention are explained in this document by applying specific examples, and the descriptions of the above examples are only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (15)
1. A human-vehicle interaction method based on human eye sight is applied to vehicles and is characterized by comprising the following steps:
acquiring a face image of a driver of the vehicle;
processing the face image by using a neural network algorithm to obtain the current sight line direction and the eye movement of the driver;
and executing operation on the equipment in the vehicle according to the current sight line direction and the eye movement.
2. The human-vehicle interaction method according to claim 1, wherein the obtaining of the face image of the driver of the vehicle comprises the steps of:
acquiring an image of the driver by using at least one camera device in the vehicle;
and processing the image to obtain the face image.
3. The human-vehicle interaction method according to claim 2, wherein the image is a visible light image and/or an infrared image.
4. The human-vehicle interaction method as claimed in claim 1, wherein the processing of the face image by using the neural network algorithm comprises the steps of:
processing the face image by utilizing three cascaded Hourglass modules to obtain face information;
processing the face information by using a convolutional layer module to obtain a feature map comprising a plurality of eye key points, wherein the feature map comprises coordinates of each eye key point;
processing the feature map by using a Resnet network with a direct regression algorithm to obtain the current sight line direction;
and processing the feature map by using a Lenet two-classification network to obtain the eye action.
5. The human-vehicle interaction method according to claim 1, wherein the operation on the in-vehicle device in the vehicle is performed according to the current sight line direction and the eye movement, and the method comprises the following steps:
selecting target in-vehicle equipment from a plurality of in-vehicle equipment in the vehicle according to the current sight line direction;
and controlling the target in-vehicle equipment to execute operation matched with the eye action.
6. The human-vehicle interaction method of claim 5, wherein the controlling the target in-vehicle device to perform the operation matched with the eye action comprises:
detecting the focus position at which the current sight line direction falls on the target in-vehicle device;
detecting the eye movement;
and when the eye action meets a preset standard, controlling the target in-vehicle equipment to execute an operation matched with the focus position.
7. The human-vehicle interaction method according to any one of claims 1 to 6, further comprising the steps of:
and processing the current sight line direction by using a binary-classification support vector machine to conclude whether the driver is distracted, and sending warning information to the driver when the driver is distracted.
8. A human-vehicle interaction device based on human eye sight, applied to a vehicle, characterized in that the human-vehicle interaction device comprises:
the face acquisition module is used for acquiring a face image of a driver of the vehicle;
the image processing module is used for processing the face image by utilizing a neural network algorithm to obtain the current sight line direction and the eye movement of the driver;
and the operation execution module is used for executing operation on the in-vehicle equipment in the vehicle according to the current sight line direction and the eye movement.
9. The human-vehicle interaction device of claim 8, wherein the face acquisition module comprises:
the camera equipment is used for collecting the image of the driver;
and the processor is used for processing the image to obtain the face image.
10. The human-vehicle interaction device of claim 9, wherein the image is a visible light image and/or an infrared image.
11. The human-vehicle interaction device of claim 8, wherein the image processing module comprises:
the three cascaded Hourglass modules are used for processing the face image to obtain face information;
the convolutional layer module is used for processing the face information to obtain a feature map comprising a plurality of eye key points, and the feature map comprises the coordinates of each eye key point;
the Resnet network is used for processing the characteristic diagram by utilizing a direct regression algorithm to obtain the current sight line direction;
and the Lenet two-classification network is used for processing the characteristic diagram to obtain the eye action.
12. The human-vehicle interaction device of claim 8, wherein the operation execution module comprises:
the target selection unit is used for selecting target in-vehicle equipment from a plurality of in-vehicle equipment in the vehicle according to the current sight line direction;
and the equipment control unit is used for controlling the target in-vehicle equipment to execute the operation matched with the eye action.
13. The human-vehicle interaction device of claim 12, wherein the equipment control unit comprises:
the focus detection subunit is used for detecting the focus position at which the current sight line direction falls on the target in-vehicle device;
the action judging subunit is used for detecting the eye action;
and the control execution subunit is used for controlling the target in-vehicle device to execute the operation matched with the focus position when the eye action meets a preset standard.
14. The human-vehicle interaction device of any one of claims 8 to 13, further comprising:
and the distraction judgment module is used for processing the current sight line direction with a binary-classification support vector machine to conclude whether the driver is distracted, and for sending warning information to the driver when the driver is distracted.
15. A storage medium having stored thereon program code which, when executed, performs the steps of the human-vehicle interaction method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011001138.5A CN112114671A (en) | 2020-09-22 | 2020-09-22 | Human-vehicle interaction method and device based on human eye sight and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011001138.5A CN112114671A (en) | 2020-09-22 | 2020-09-22 | Human-vehicle interaction method and device based on human eye sight and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112114671A true CN112114671A (en) | 2020-12-22 |
Family
ID=73801425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011001138.5A Pending CN112114671A (en) | 2020-09-22 | 2020-09-22 | Human-vehicle interaction method and device based on human eye sight and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112114671A (en) |
- 2020-09-22: Application CN202011001138.5A filed in China; published as CN112114671A (status: active, pending)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101344919A (en) * | 2008-08-05 | 2009-01-14 | 华南理工大学 | Sight tracing method and disabled assisting system using the same |
CN202121681U (en) * | 2011-05-31 | 2012-01-18 | 德尔福电子(苏州)有限公司 | Vehicle-mounted eye movement control device |
US20130187847A1 (en) * | 2012-01-19 | 2013-07-25 | Utechzone Co., Ltd. | In-car eye control method |
CN103259971A (en) * | 2012-02-16 | 2013-08-21 | 由田信息技术(上海)有限公司 | Eye control device in vehicle and method for eye control |
CN102830797A (en) * | 2012-07-26 | 2012-12-19 | 深圳先进技术研究院 | Man-machine interaction method and system based on sight judgment |
CN104461005A (en) * | 2014-12-15 | 2015-03-25 | 东风汽车公司 | Vehicle-mounted screen switch control method |
CN105739705A (en) * | 2016-02-04 | 2016-07-06 | 重庆邮电大学 | Human-eye control method and apparatus for vehicle-mounted system |
CN108309311A (en) * | 2018-03-27 | 2018-07-24 | 北京华纵科技有限公司 | A kind of real-time doze of train driver sleeps detection device and detection algorithm |
CN108537161A (en) * | 2018-03-30 | 2018-09-14 | 南京理工大学 | A kind of driving of view-based access control model characteristic is divert one's attention detection method |
CN110765807A (en) * | 2018-07-25 | 2020-02-07 | 阿里巴巴集团控股有限公司 | Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium |
CN109492514A (en) * | 2018-08-28 | 2019-03-19 | 初速度(苏州)科技有限公司 | A kind of method and system in one camera acquisition human eye sight direction |
CN109460780A (en) * | 2018-10-17 | 2019-03-12 | 深兰科技(上海)有限公司 | Safe driving of vehicle detection method, device and the storage medium of artificial neural network |
CN109508679A (en) * | 2018-11-19 | 2019-03-22 | 广东工业大学 | Realize method, apparatus, equipment and the storage medium of eyeball three-dimensional eye tracking |
CN110110662A (en) * | 2019-05-07 | 2019-08-09 | 济南大学 | Driver eye movement behavioral value method, system, medium and equipment under Driving Scene |
Non-Patent Citations (2)
Title |
---|
Dong Hongyi: "Deep Learning: PyTorch Object Detection in Practice" [深度学习之PyTorch物体检测实战], vol. 2, 31 March 2020, China Machine Press, pages 258-263 *
Huang Junhao; He Hui et al.: "Eye-movement behavior recognition and human-computer interaction applications based on LSTM" [基于LSTM的眼动行为识别及人机交互应用], Computer Systems & Applications, vol. 29, no. 3, 15 March 2020 (2020-03-15), pages 210-216 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113335300A (en) * | 2021-07-19 | 2021-09-03 | 中国第一汽车股份有限公司 | Man-vehicle takeover interaction method, device, equipment and storage medium |
CN113561988A (en) * | 2021-07-22 | 2021-10-29 | 上汽通用五菱汽车股份有限公司 | Voice control method based on sight tracking, automobile and readable storage medium |
CN114327051A (en) * | 2021-12-17 | 2022-04-12 | 北京乐驾科技有限公司 | Human-vehicle intelligent interaction method |
CN114876312A (en) * | 2022-05-25 | 2022-08-09 | 重庆长安汽车股份有限公司 | Vehicle window lifting control system and method based on eye movement tracking |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112114671A (en) | Human-vehicle interaction method and device based on human eye sight and storage medium | |
CN111079476B (en) | Driving state analysis method and device, driver monitoring system and vehicle | |
JP6933668B2 (en) | Driving condition monitoring methods and devices, driver monitoring systems, and vehicles | |
JP7146959B2 (en) | DRIVING STATE DETECTION METHOD AND DEVICE, DRIVER MONITORING SYSTEM AND VEHICLE | |
JP7105316B2 (en) | Driver attention monitoring method and device, and electronic device | |
JP6932208B2 (en) | Operation management methods and systems, in-vehicle smart systems, electronic devices and media | |
US11249555B2 (en) | Systems and methods to detect a user behavior within a vehicle | |
CN105825621B (en) | Method and device for driving an at least partially autonomous vehicle | |
CN110481419B (en) | Human-vehicle interaction method, system, vehicle and storage medium | |
CN110765807A (en) | Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium | |
CN112758098B (en) | Vehicle driving authority take-over control method and device based on driver state grade | |
KR20140072734A (en) | System and method for providing a user interface using hand shape trace recognition in a vehicle | |
CN113128295A (en) | Method and device for identifying dangerous driving state of vehicle driver | |
JP2017039373A (en) | Vehicle video display system | |
WO2024222971A1 (en) | Method and apparatus for determining gaze distraction range | |
CN114701503A (en) | Method, device and equipment for adjusting driving behavior of vehicle driver and storage medium | |
CN112319483A (en) | Driving state improving device and driving state improving method | |
CN111267865B (en) | Vision-based safe driving early warning method and system and storage medium | |
CN116022158B (en) | Driving safety control method and device for cooperation of multi-domain controller | |
CN118849978A (en) | Vehicle function prompting method and related device | |
CN116461545A (en) | Control method and device for in-vehicle functions, electronic equipment and storage medium | |
CN117141413A (en) | Vehicle window cleaning method and vehicle | |
CN118270037A (en) | Display method, vehicle and medium | |
CN114771559A (en) | Vehicle human-computer interaction method, device and system | |
CN116088790A (en) | Control method and device for multimedia volume of vehicle, vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||