CN115268651A - Implicit gesture interaction method and system for steering wheel - Google Patents

Implicit gesture interaction method and system for steering wheel

Info

Publication number
CN115268651A
CN115268651A
Authority
CN
China
Prior art keywords
gesture
steering wheel
module
function
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210950891.1A
Other languages
Chinese (zh)
Inventor
郭栋 (Guo Dong)
李波 (Li Bo)
刘泰岑 (Liu Taicen)
黎洪林 (Li Honglin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology filed Critical Chongqing University of Technology
Priority to CN202210950891.1A
Publication of CN115268651A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of automobile human-computer interaction, and in particular to an implicit gesture interaction method and system for a steering wheel. The method comprises: collecting a pressure signal; collecting a steering angle signal; importing both into a pre-trained gesture recognition module to perform gesture recognition and obtain a final gesture; comparing the result with a gesture-function matching module; and, if a function corresponding to the final gesture exists, executing that function. Gesture recognition is realized through pressure sensors arranged on the steering wheel, achieving interaction between driver and vehicle; this solves the driver-distraction problem of the traditional button interaction mode and of vision-based gesture interaction, and realizes natural implicit interaction. By establishing a common driving gesture library and an uncommon driving gesture library, human-machine interaction misoperation by the driver can be effectively avoided and driving safety improved.

Description

Implicit gesture interaction method and system for steering wheel
Technical Field
The invention relates to the technical field of automobile human-computer interaction, in particular to a method and a system for implicit gesture interaction of a steering wheel.
Background
The trends toward intelligent and connected automobiles are continuously deepening the digital transformation of the automotive industry and bringing new influences to the relationships among people, vehicles, and the environment; human-computer interaction design has become a core element of intelligent-automobile development and innovation. At present, in-vehicle human-machine interaction is based on vision, voice, and buttons. Vision-based gesture interaction holds a relatively advanced position among modern interaction technologies, but the driver's hands leave the steering wheel during interaction, which disperses the driver's attention and creates a potential driving safety hazard. In the conventional interaction mode based on buttons on the steering wheel (the multifunction steering wheel), the buttons are divided into regions, so the driver needs to search for button positions during operation, and erroneous touches easily occur under conditions such as turning the steering wheel.
As important hardware for human-vehicle interaction, the steering wheel is gradually moving toward intelligence and digitization. A very important purpose of current intelligent steering wheel HMI (human-machine interface) design is to reduce driver distraction in L1- to L4-level automated driving vehicles, thereby improving driving safety.
Therefore, an implicit gesture interaction method for the steering wheel based on pressure sensors and deep learning is provided, whereby the driver can make corresponding gestures directly on the steering wheel, while maintaining the driving state, to control some functions of the vehicle.
Disclosure of Invention
The invention aims to provide an implicit gesture interaction method and system for a steering wheel, to solve the driver-distraction and false-triggering problems of existing interaction modes, in particular the driver distraction caused by vision-based gesture interaction and by button-based interaction on the steering wheel.
In order to achieve the purpose, the invention adopts the following technical scheme:
a steering wheel implicit gesture interaction method based on a pressure sensor and deep learning comprises the following steps:
collecting a pressure signal when a driver contacts a steering wheel through a pressure sensor;
acquiring a steering angle signal of a steering wheel through a steering wheel angle sensor;
importing the collected pressure signals and steering angle signals into a pre-trained gesture recognition module for recognition, and selecting the predicted gesture with the highest probability in the recognition result as the final gesture;
comparing the obtained final gesture with the pre-established gesture-function matching module;
if the function corresponding to the final gesture exists, executing the function; if there is no function corresponding to the final gesture, no function is performed.
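For illustration, the decision flow of these steps can be sketched in Python as follows; this is a minimal sketch, and the reader objects (pressure_reader, angle_reader, recognizer) and the gesture_function_map dictionary are assumed interfaces, not part of the disclosure:

def interaction_step(pressure_reader, angle_reader, recognizer, gesture_function_map):
    # One pass of the method above: collect signals, recognize, match, execute.
    pressure = pressure_reader.read()      # pressure signal from the steering wheel sensors
    angle = angle_reader.read()            # steering wheel angle signal
    probabilities = recognizer.predict(pressure, angle)        # pre-trained recognition module
    final_gesture = max(probabilities, key=probabilities.get)  # most probable predicted gesture
    function = gesture_function_map.get(final_gesture)         # gesture-function comparison
    if function is not None:
        function()                         # a corresponding function exists: execute it
    # otherwise no function is performed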
Further, the establishment method of the gesture recognition module comprises the following steps:
recording the hand holding gesture of the steering wheel in actual driving, and marking a label;
acquiring m pieces of pressure sensor and steering angle sensor data from n persons under different gestures, according to the marked gestures, and marking the acquired sensor data with the corresponding gesture labels;
building a model training module, the training module comprising: an input layer, a GRU layer, a fully connected layer, a Softmax layer, and an output layer;
dividing the labelled sensor data into a training set, a verification set, and a test set; inputting the training set to the input layer of the training module; extracting partial features from the input data through the GRU layer and summarizing all gesture features in the fully connected layer; inputting the fully connected layer's data to the Softmax layer for normalization, and outputting the result of one training round through the output layer; judging the model recognition rate after training with the verification set data, and when the recognition rate is less than p%, adjusting the parameters of the recognition model and training again, ending the training once the final model's recognition rate on the verification-set gestures exceeds p%; outputting the trained steering wheel gesture recognition model; and testing the actual recognition rate of the model with the gesture data in the test set.
Further, the training module adopts the GRU algorithm, whose formulas are as follows:

$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$

$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$

$\tilde{h}_t = \tanh(W_{\tilde{h}} \cdot [r_t * h_{t-1}, x_t])$

$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$

$y_t = \sigma(W_o \cdot h_t)$

where $r_t$ and $z_t$ denote the reset gate and the update gate respectively. The update gate controls the degree to which state information from the previous moment is brought into the current state: the larger the value of the update gate, the more previous-moment state information is brought in. The reset gate controls how much previous-state information is written into the current candidate set $\tilde{h}_t$: the smaller the reset gate, the less previous-state information is written. $\sigma$ denotes the sigmoid function, and $W_r$, $W_z$, $W_{\tilde{h}}$, and $W_o$ are the weight matrices continuously optimized during model training.
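For concreteness, the five formulas above can be implemented directly. The following NumPy sketch is an illustration only: bias terms are omitted as in the formulas, and the weight shapes and the concatenation convention are assumptions of this sketch rather than details specified by the invention.

import numpy as np

def sigmoid(x):
    # logistic sigmoid, the sigma of the formulas above
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_r, W_z, W_h, W_o):
    # One GRU time step implementing the five update equations above.
    # x_t: input vector (D,); h_prev: previous hidden state (H,);
    # W_r, W_z, W_h: matrices of shape (H, H+D) acting on [h_prev, x_t];
    # W_o: output matrix of shape (O, H). Shapes are assumptions of this sketch.
    hx = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W_r @ hx)                                       # reset gate
    z_t = sigmoid(W_z @ hx)                                       # update gate
    h_tilde = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde                    # new hidden state
    y_t = sigmoid(W_o @ h_t)                                      # output
    return h_t, y_t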
Further, establishing the gesture-function correspondence in the gesture-function matching module comprises the following steps:
selecting a function needing to be matched with the gesture;
inputting a control gesture matched with the function;
detecting whether the control gesture exists in a daily driving gesture library or not, and if so, replacing the control gesture with a new control gesture; if not, confirming the gesture;
and verifying the control function corresponding to the confirmed gesture.
Further, the method for establishing the daily driving gesture library comprises: collecting the probability of occurrence of various gestures in a test vehicle; if the probability of a gesture occurring during driving is greater than a set threshold, the gesture is set as a daily driving gesture and stored in the daily driving gesture library.
Further, the set threshold is 5%.
The invention also provides a steering wheel implicit gesture interaction system based on the pressure sensor and the deep learning, and the system comprises:
the pressure acquisition module is used for acquiring a pressure signal when a driver holds the steering wheel;
the steering angle sensor module is used for detecting the steering angle signal when the steering wheel is rotated;
the gesture recognition module comprises a gesture judgment module and a gesture memory module; the gesture memory module is used for storing various different gestures and their corresponding pressure signals and steering angle signals; the gesture judgment module is used for importing the pressure signal and the steering angle signal into the gesture recognition module for recognition to obtain the operator's gesture;
the gesture-function matching module comprises a daily driving gesture library, a function customization module, and a function matching module; the daily driving gesture library is used for storing daily driving gestures and their corresponding control functions; the function customization module is used for customizing interactive gestures and their corresponding control functions; the function matching module is used for comparing input gestures against the daily driving gesture library and the function customization module and outputting the corresponding control function;
and the function execution module receives the control function from the function matching module and executes a corresponding instruction.
Further, the pressure acquisition module comprises:
a skin-sensitive array film pressure sensor;
the signal acquisition card is used for acquiring the electric signal of the pressure sensor;
and the data processing module is used for filtering and amplifying the electric signals acquired by the signal acquisition card and converting the electric signals into digital signals.
Further, the pressure sensors are arranged in a 3D array and installed on the front, side, and back of the steering wheel;
the pressure sensors are divided into 4 zones: the first zone is the region from the 6 o'clock to the 9 o'clock position of the steering wheel; the second zone is the region from 9 to 12 o'clock; the third zone is the region from 12 to 3 o'clock; and the fourth zone is the region from 3 to 6 o'clock.
Furthermore, the acquisition frequency of the signal acquisition card is not lower than 10 Hz.
The invention has at least the following beneficial effects:
1. The invention realizes gesture recognition through pressure sensors arranged on the steering wheel to achieve interaction between driver and vehicle, solves the driver-distraction problem of the traditional button interaction mode and of vision-based gesture interaction, and realizes natural implicit interaction;
2. By establishing a common driving gesture library and an uncommon driving gesture library, the invention effectively avoids human-machine interaction misoperation by the driver and improves driving safety;
3. The invention meets users' personalized requirements and supports customized setting of interaction gestures.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are clearly only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is an overall structural diagram of the steering wheel implicit gesture interaction system based on pressure sensors and deep learning;
Fig. 2 is a schematic view of a pressure sensor on the left side of the steering wheel;
Fig. 3 is a block diagram of the establishment of the gesture recognition module;
Fig. 4 is a diagram of the establishment of the driving gesture library module;
Fig. 5 is a logic diagram of the implementation of the custom gesture interaction function.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The specific embodiment is as follows:
as shown in fig. 1 to 5, the present invention provides a method and a system for implicit gesture interaction of a steering wheel.
The implementation requires an array-type flexible pressure sensor and a steering wheel angle sensor at the vehicle end, and establishes a gesture recognition module, a driving gesture storage module, a custom gesture interaction function module, and a function execution module.
These modules are combined to form the whole implicit gesture interaction system. The user can interact with the vehicle end through gestures made while holding the steering wheel; the vehicle end sends instructions to the execution equipment, and the vehicle performs the corresponding actions, such as air-conditioning control, window control, multimedia control, and the like. The user can also set preferred gestures through the custom gesture interaction module to interact with the vehicle end.
For convenience of explanation, the following describes in detail various modules of a method and system for implicit gesture interaction of a steering wheel.
Fig. 1 is a general block diagram of the system, which includes:
the pressure acquisition module is used for acquiring a pressure signal when a driver holds the steering wheel;
the steering angle sensor module is used for detecting the steering angle signal when the steering wheel is rotated;
the gesture recognition module comprises a gesture judgment module and a gesture memory module; the gesture memory module is used for storing various different gestures and their corresponding pressure signals and steering angle signals; the gesture judgment module is used for importing the pressure signal and the steering angle signal into the gesture recognition module for recognition to obtain the operator's gesture;
the gesture-function matching module comprises a daily driving gesture library, a function customization module, and a function matching module; the daily driving gesture library is used for storing daily driving gestures and their corresponding control functions; the function customization module is used for customizing interactive gestures and their corresponding control functions; the function matching module is used for comparing input gestures against the daily driving gesture library and the function customization module and outputting the corresponding control function;
and the function execution module receives the control function from the function matching module and executes a corresponding instruction.
The implementation method comprises the following steps:
s1, collecting a pressure signal when a driver contacts a steering wheel through a pressure sensor arranged below a steering wheel skin.
Further preferably, in S1, the specific method for detecting pressure is as follows:
S11. According to ergonomic principles and the GB/T 10000 human hand dimensions, the minimum adult hand dimension at the distal knuckle of the index finger is 13 mm; to avoid missing the detection of holding gestures and fingertip features, the distance between two adjacent sensors on the steering wheel is controlled within 13 mm. To improve recognition, the steering wheel is evenly divided into 4 zones in a clock layout: the first zone is the region from the 6 o'clock to the 9 o'clock position of the steering wheel, the second from 9 to 12 o'clock, the third from 12 to 3 o'clock, and the fourth from 3 to 6 o'clock.
S12. The value of each zone's pressure sensors is read using a signal acquisition board card; the signal acquisition frequency is set to not lower than 10 Hz to obtain higher data fidelity, and the acquired signals are transmitted to the vehicle-mounted ECU.
S2. A steering wheel angle sensor is installed at the rotating shaft at the bottom of the steering wheel. Angle acquisition is realized through an encoder: the angle at a given moment is obtained by comparison and conversion of the readings, and the change in angle over a period of time can thereby be measured.
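A minimal sketch of this conversion step (the encoder resolution COUNTS_PER_REV and the function names are illustrative assumptions, not values from the patent):

COUNTS_PER_REV = 2048  # assumed incremental-encoder resolution, for illustration only

def encoder_to_angle(counts):
    # Convert a raw encoder count into a steering wheel angle in degrees.
    return counts * 360.0 / COUNTS_PER_REV

def angle_change(count_samples):
    # Angle change over a sampling window: compare the converted angles at the
    # start and end of the window, per the comparison-and-conversion step in S2.
    return encoder_to_angle(count_samples[-1]) - encoder_to_angle(count_samples[0])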
S3. The pressure data acquired by the array-type flexible pressure sensor in S1 and the steering angle signal acquired by the steering wheel angle sensor in S2 are input together into the gesture recognition module, ensuring that the signals from the two sensors are from the same moment. To keep operation smooth, the time for one gesture recognition must be controlled within 200 ms. Because gesture recognition is a probabilistic problem, the recognized gesture results are ranked in the output by predicted probability from largest to smallest, and the gesture with the maximum probability is finally output.
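As an illustration of the probability ranking and the 200 ms budget (the predict_proba interface, returning a mapping from gesture labels to probabilities, is an assumption of this sketch):

import time

def recognize_gesture(recognizer, pressure, angle, budget_s=0.2):
    # Rank predicted gestures by probability from largest to smallest and check
    # that one recognition pass stays within the 200 ms budget described above.
    start = time.perf_counter()
    probs = recognizer.predict_proba(pressure, angle)           # assumed interface
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if time.perf_counter() - start > budget_s:
        raise RuntimeError("gesture recognition exceeded the 200 ms budget")
    return ranked[0][0]                                         # maximum-probability gesture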
Further preferably, in S3, the gesture recognition module is established by the following method:
and S31, recording the steering wheel holding gesture in actual driving, and marking a label.
And S32, in order to improve the generalization of the recognition model, acquiring the sensor data of 100 people under different gestures and the data of the steering wheel angle sensor according to the marked gestures in the S31, wherein the number of the acquired gestures of each person is not less than 500. And marking the collected data with a corresponding label.
S33. A model training module is built, mainly comprising an input layer, a GRU layer (gated recurrent unit, a recurrent neural network with a gating mechanism), a fully connected layer, a Softmax layer, and an output layer.
S34. All the gesture data obtained in S32 are divided into a training set of 70%, a test set of 15%, and a verification set of 15%. The training set is input to the input layer of the S33 training model; partial features are extracted from the input data through the GRU layer, and all gesture features are summarized in the fully connected layer; the fully connected layer's data are input to the Softmax layer for normalization, and the result of one training round is output through the output layer. The model recognition rate after training is judged with the verification set data: when the recognition rate is less than p%, the parameters of the recognition model are adjusted and training is repeated, and training ends once the final model's recognition rate on the verification-set gestures exceeds p%. The trained steering wheel gesture recognition model is then output, and the actual recognition rate of the model is tested with the gesture data in the test set.
S4. The gesture result detected by the gesture recognition module in S3 is imported into the gesture-function matching module to match control functions. If the detected gesture matches a function-control gesture, the matched function is executed; if not, nothing is executed.
S5. To meet users' personalized requirements, custom interaction gesture setting is supported, and the user can customize interaction gestures through the vehicle-mounted central control terminal. Gestures can be matched to different functions, such as ACC (adaptive cruise), LKS (lane keeping), audio control, window control, air-conditioning control, and the like.
Further preferably, in S5, the custom interaction gesture setting function is as follows:
S51. The vehicle-mounted central control terminal is opened, the custom interaction gesture setting is found, and the function to be matched with a gesture is selected.
S52. Custom gesture entry is selected, and the central control displays the gesture entry state. The user places a hand on the steering wheel and makes the desired control gesture, which is detected by the sensors on the steering wheel.
S53. When the set gesture is detected, it is compared with the common gestures in the daily driving gesture library. If the set gesture is found among the common driving gestures, the custom gesture must be replaced, to avoid misoperation during normal driving; when the custom gesture is not among the routine driving gestures, the gesture is entered again for confirmation.
S54. The control function is then verified with the confirmed gesture: when the operator enters the custom gesture again, the verification display module on the vehicle central control shows that verification succeeded, and custom gesture control is set; if verification fails, it must be verified again.
Further preferably, in S53, the daily driving gesture library is established as follows:
S531. A pressure sensor is installed on a real vehicle's steering wheel to collect the holding gestures of actual driving. The signal collector collects the pressure signals and steering angle signals generated when the driver holds the steering wheel; the signals are processed and transmitted to the gesture recognition module, which classifies the gestures. The probability of each type of gesture occurring over the whole test process is recorded; gestures whose probability in driving exceeds 5% are recorded in the common driving gesture library as first-level misoperation gestures, and the remaining gestures are stored in the uncommon gesture library as second-level misoperation gestures.
The pressure acquisition module in this embodiment mainly adopts a skin-sensitive array film pressure sensor to measure, in real time, the pressure exerted by the driver's hands on the sensors arranged on the steering wheel. The signal collector collects the signals, which are electrical signals, from the pressure sensor in real time. The data processing module filters and amplifies the data acquired by the signal collector, converts the electrical signals into digital signals, outputs the pressure values, the positions of the triggered touch sensors, and the areas of hand contact, and performs feature extraction on the signals. The pressure sensors are distributed below the leather layer of the steering wheel; the leather layer protects the sensors, improving their stability and service life, and natural implicit interaction can be achieved. According to ergonomic principles and the GB/T 10000 human hand dimensions, the minimum adult hand dimension at the distal knuckle of the index finger is 13 mm, and to avoid missing the detection of holding gestures and fingertip features, the distance between two adjacent sensors on the steering wheel is controlled within 13 mm. However, the arrangement of the sensors is also constrained by the structure of the steering wheel, and at some points it is difficult to ensure that the distance between two adjacent sensor points is less than 13 mm, so the sensor spacing needs to be adjusted appropriately at special structures.
The sensors are arranged in a 3D array: the cylindrical surface of the steering wheel is divided into three regions, namely the front, side, and back of the steering wheel, and the sensors wrap the whole wheel to prevent missed signals. To improve recognition, the steering wheel is evenly divided into 4 zones in a clock layout: the first zone is the region from the 6 o'clock to the 9 o'clock position of the steering wheel, the second from 9 to 12 o'clock, the third from 12 to 3 o'clock, and the fourth from 3 to 6 o'clock.
In a specific embodiment, the steering wheel of a certain Changan automobile model is used for sensor arrangement and design. Through actual measurement and marking, a total of four sensors are arranged on one steering wheel, with installation positions corresponding to the four steering wheel zones. The sensing points on each sensor are divided as follows: 11 points on the front of the steering wheel, 14 on the side, and 11 on the back, giving 36 sensing points per sensor; the size of the sensors is determined by the actual structure of the steering wheel. The sensor adopted in this embodiment has single sensing points 1-4 cm long, 1-1.2 cm wide, and 0.3 mm thick; its specific structure is shown in Fig. 2. The value of each zone's pressure sensor is read with a signal acquisition card, whose acquisition frequency is set to not lower than 10 Hz to obtain higher data fidelity, and the acquired pressure signals are transmitted to the vehicle-mounted ECU.
Fig. 3 shows the process of establishing gesture recognition. The recognition model is mainly implemented with the GRU (gated recurrent unit), a variant of the RNN (recurrent neural network), and building the model requires computing power, data, and algorithm support.
1. The data are generated by the 3D array-type flexible pressure sensor and the steering wheel angle sensor installed on the steering wheel. When the driver holds the steering wheel during actual driving, the palm triggers the pressure sensors installed on the wheel to generate pressure signals, and when the driver rotates the wheel, the steering wheel angle sensor reads the current angle. There are 144 pressure sensing points and one steering wheel angle sensor on a steering wheel, so one piece of feature data corresponding to a gesture is a 1×145 matrix (see the sketch below).
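A sketch of how such a sample could be assembled (the function and parameter names are illustrative assumptions):

import numpy as np

def build_feature_vector(pressure_points, wheel_angle):
    # Assemble one 1x145 feature sample: 144 pressure sensing points
    # (4 sensors x 36 points each: 11 front + 14 side + 11 back) plus
    # one steering wheel angle reading.
    p = np.asarray(pressure_points, dtype=np.float32)
    if p.shape != (144,):
        raise ValueError("expected 144 pressure readings, got %s" % (p.shape,))
    return np.concatenate([p, [np.float32(wheel_angle)]]).reshape(1, 145)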
2. To improve the generalization of the recognition model, sensor data of 100 people under the different gestures marked in S31 are collected together with steering wheel angle sensor data, with no fewer than 500 gesture samples per person. The collected data are marked with corresponding labels to form the dataset required by the gesture recognition model.
3. The algorithm is the GRU (gated recurrent unit) algorithm, with the following formulas:

$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$ (1)

$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$ (2)

$\tilde{h}_t = \tanh(W_{\tilde{h}} \cdot [r_t * h_{t-1}, x_t])$ (3)

$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$ (4)

$y_t = \sigma(W_o \cdot h_t)$ (5)

where $r_t$ and $z_t$ denote the reset gate and the update gate respectively. The update gate controls the degree to which state information from the previous moment is brought into the current state: the larger the value of the update gate, the more previous-moment state information is brought in. The reset gate controls how much previous-state information is written into the current candidate set $\tilde{h}_t$: the smaller the reset gate, the less previous-state information is written. $\sigma$ denotes the sigmoid activation function, and $W_r$, $W_z$, $W_{\tilde{h}}$, and $W_o$ are the weight matrices continuously optimized during model training.
4. The labelled holding-gesture data from step 2 (1×145 numeric matrices containing the pressure values and the steering wheel angle) are divided into a training set of 70%, a test set of 15%, and a verification set of 15%. The training set is input to the input layer of the training model and fed into the GRU neural network; partial features are extracted from the input data through the GRU layer, all gesture features are summarized in the fully connected layer, and one round of model training is completed through the activation function (Softmax layer). The loss of the model (the loss function) is then computed from the verification set input, and the model parameters are adjusted through backpropagation. The maximum number of training iterations is 1000 (to prevent overfitting, training stops when the loss no longer decreases), the hidden-layer size of the model is 99, and the Adam optimizer is used to tune training accuracy. After training ends, the gesture recognition model is saved; from the pressure on the steering wheel and the steering angle signals as input, the model can recognize the corresponding gestures, such as five-finger holding, ten-o'clock holding, left-finger sliding, right-finger sliding, and the like.
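The training setup of this step can be sketched with Keras as follows. This is a hedged illustration, not the patent's implementation: the class count, the hypothetical load_gesture_dataset loader, the early-stopping patience, and the reading of '99' as the GRU hidden-state size are all assumptions.

import tensorflow as tf
from sklearn.model_selection import train_test_split

NUM_CLASSES = 10               # assumed number of labelled holding gestures
X, y = load_gesture_dataset()  # hypothetical loader; X of shape (N, 1, 145)

# 70% training, 15% verification, 15% test, as described above
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, 145)),  # one 1x145 pressure+angle sample per step
    tf.keras.layers.GRU(99),                # hidden size 99 (our reading of the text)
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # fully connected + Softmax
])
model.compile(optimizer="adam",             # Adam optimizer, as in the embodiment
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# at most 1000 iterations; stop early once the validation loss no longer decreases
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                              restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=1000, callbacks=[early_stop])
print("test accuracy:", model.evaluate(X_test, y_test)[1])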
Fig. 4 shows the process of establishing the driving gesture library; this embodiment mainly describes how to establish the driving gesture library used for setting up custom gesture interaction. The module divides gestures into common driving gestures and uncommon driving gestures, and the main establishment process is as follows:
1. A pressure sensor is installed on a real vehicle's steering wheel to collect the holding gestures of actual driving. The signal collector collects the pressure signals and steering angle signals generated when the driver holds the steering wheel; a Savitzky-Golay filter (usually abbreviated as S-G filter) is used to filter the signals, the processed signals are transmitted to the gesture recognition module, which recognizes them, and the gesture results are finally output (a filtering sketch follows below).
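A minimal example of this smoothing step using SciPy's Savitzky-Golay implementation (the window length and polynomial order are illustrative choices, not values given in the patent):

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 145))  # stand-in for a stream of 1x145 sensor samples
# smooth each channel along time (axis 0): 11-sample window, 3rd-order polynomial
smoothed = savgol_filter(raw, window_length=11, polyorder=3, axis=0)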
2. Different gestures are denoted by letters, e.g. $A_i$, $B_i$, $C_i$, $D_i$, ..., where $i$ represents the number of times such a gesture has occurred and is incremented by 1 on each occurrence. Each gesture in the large set of driving gestures is recorded along with its number of occurrences, and the probability of each type of gesture appearing over the whole test process is calculated:

$P = X / SUM$ (6)

$SUM = \sum_i X_i$ (7)

where $P$ is the probability of each type of gesture occurring, $X$ is the number of times a certain type of gesture occurs, and $SUM$ is the total number of occurrences of all gestures. Gestures whose probability exceeds 5% (i.e. $P > 5\%$) are recorded in the common driving gesture library as first-level misoperation gestures, and the remaining gestures ($P \le 5\%$) are stored in the uncommon gesture library as second-level misoperation gestures.
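Formulas (6) and (7) amount to relative-frequency counting. A sketch of the library split (the gesture labels stand in for the A, B, C, D classes above):

from collections import Counter

def split_gesture_libraries(observed_gestures, threshold=0.05):
    # Classify recognized gestures into the common driving gesture library
    # (P > 5%, first-level misoperation gestures) and the uncommon library
    # (P <= 5%, second-level misoperation gestures).
    counts = Counter(observed_gestures)  # X: occurrences of each gesture type
    total = sum(counts.values())         # SUM: total occurrences of all gestures
    common = {g for g, x in counts.items() if x / total > threshold}
    uncommon = set(counts) - common
    return common, uncommon

# example: gesture labels observed during a road test
common, uncommon = split_gesture_libraries(list("AAAABBBBBBCCD"))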
Fig. 5 shows the process of constructing the custom gesture interaction model, which supports setting custom interaction gestures to meet users' personalized requirements; the user can customize interaction gestures through the vehicle-mounted central control terminal. Gestures can be matched to different functions, such as ACC (adaptive cruise), LKS (lane keeping), audio control, window control, air-conditioning control, and the like. The following takes window control as a concrete example:
1. and opening the vehicle-mounted central control end, finding the user-defined interactive gesture setting, selecting the function of needing vehicle window control, and entering a gesture user-defined setting state.
2. And selecting user-defined gesture input, and displaying a gesture input state by the central control unit. The user puts the hand on the steering wheel and makes a favorite control gesture, for example, the hand is held on the steering wheel, the index finger slides upwards on the steering wheel, and the sensor on the steering wheel can detect the favorite control gesture.
3. When the steering wheel can successfully recognize the gesture that the index finger slides upwards, the gesture is compared with the common gestures in the daily driving gesture library, and if the set gesture is detected to be in the common driving gestures, the user-defined gesture needs to be replaced, so that misoperation is avoided during automatic driving. If the custom gesture is not among the routine driving gestures, the hand is held again on the steering wheel and the index finger is slid up to again confirm the gesture of control.
4. The gesture after confirming needs to verify control function, and when the user verifies the function of lifting the control window of the forefinger upwards sliding of user-defined setting, the verification display module on the control display screen in the vehicle-mounted end displays that verification succeeds, the setting of user-defined gesture control is completed, and if verification fails, re-verification is needed.
5. After all the operations are finished, the user can control the lifting of the car window by the gesture that the forefinger slides upwards on the steering wheel by holding the hand on the steering wheel.
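Steps 1 to 5 can be summarized in the following hedged sketch; the capture helper and the library objects are assumed interfaces, not the patent's API:

def register_custom_gesture(function_name, capture_gesture, common_library,
                            bindings, max_attempts=3):
    # Bind a user-defined steering wheel gesture (e.g. an upward index-finger
    # slide) to a vehicle function (e.g. window lift), following steps 1 to 5.
    for _ in range(max_attempts):
        gesture = capture_gesture()       # step 2: enter the desired gesture
        if gesture in common_library:     # step 3: clashes with a common driving gesture
            print("gesture is a common driving gesture; please choose another")
            continue
        if capture_gesture() != gesture:  # steps 3-4: re-enter to confirm and verify
            print("verification failed; please verify again")
            continue
        bindings[gesture] = function_name # custom gesture control is set
        return True
    return False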
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and all of them fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. An implicit gesture interaction method for a steering wheel based on a pressure sensor and deep learning, characterized by comprising the following steps:
collecting a pressure signal when a driver contacts a steering wheel through a pressure sensor;
acquiring a steering angle signal of a steering wheel through a steering wheel angle sensor;
importing the collected pressure signals and steering angle signals into a pre-trained gesture recognition module for recognition, and selecting the predicted gesture with the highest probability in the recognition result as the final gesture;
comparing the obtained final gesture with a gesture-function matching module which is established in advance;
if the function corresponding to the final gesture exists, executing the function; if there is no function corresponding to the final gesture, no function is performed.
2. The method for implicit gesture interaction of the steering wheel based on the pressure sensor and the deep learning according to claim 1, wherein the method for establishing the gesture recognition module comprises the following steps:
recording the hand holding gesture of the steering wheel in actual driving, and marking a label;
acquiring m pieces of pressure sensor and steering angle sensor data from n persons under different gestures, according to the marked gestures, and marking the acquired sensor data with the corresponding gesture labels;
build a model training module, and the training module includes: the device comprises an input layer, a GRU layer, a full connection layer, a Softmax layer and an output layer;
dividing sensor data of different gesture labels into a training set, a verification set and a test set;
inputting the training set to the input layer of the training module; extracting partial features from the input data through the GRU layer and summarizing all gesture features in the fully connected layer; inputting the fully connected layer's data to the Softmax layer for normalization, and outputting the result of one training round through the output layer; judging the model recognition rate after training with the verification set data, and when the recognition rate is less than p%, adjusting the parameters of the recognition model and training again, ending the training once the final model's recognition rate on the verification-set gestures exceeds p%; outputting the trained steering wheel gesture recognition model; and testing the actual recognition rate of the model with the gesture data in the test set.
3. The method of claim 2, wherein the training module adopts the GRU algorithm, with the following formulas:

$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$

$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$

$\tilde{h}_t = \tanh(W_{\tilde{h}} \cdot [r_t * h_{t-1}, x_t])$

$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$

$y_t = \sigma(W_o \cdot h_t)$

where $r_t$ and $z_t$ denote the reset gate and the update gate respectively. The update gate controls the degree to which state information from the previous moment is brought into the current state: the larger the value of the update gate, the more previous-moment state information is brought in. The reset gate controls how much previous-state information is written into the current candidate set $\tilde{h}_t$: the smaller the reset gate, the less previous-state information is written. $\sigma$ denotes the sigmoid function, and $W_r$, $W_z$, $W_{\tilde{h}}$, and $W_o$ are the weight matrices continuously optimized during model training.
4. The method for implicit gesture interaction of the steering wheel based on the pressure sensor and the deep learning of claim 1, wherein establishing the gesture-function correspondence in the gesture-function matching module comprises the following steps:
selecting a function needing to be matched with the gesture;
inputting a control gesture matched with the function;
detecting whether the control gesture exists in a daily driving gesture library or not, and if so, replacing the control gesture with a new control gesture; if not, confirming the gesture;
and verifying the control function corresponding to the confirmed gesture.
5. The method for implicit gesture interaction of the steering wheel based on the pressure sensor and the deep learning as claimed in claim 4, wherein the method for establishing the daily driving gesture library comprises: collecting the probability of occurrence of various gestures in a test vehicle; if the probability of a gesture occurring during driving is greater than a set threshold, setting the gesture as a daily driving gesture and storing it in the daily driving gesture library.
6. The method for implicit gesture interaction of steering wheel based on pressure sensor and deep learning of claim 5, wherein the set threshold is 5%.
7. An implicit gesture interaction system for a steering wheel based on a pressure sensor and deep learning, characterized by comprising:
the pressure acquisition module is used for acquiring a pressure signal when a driver holds the steering wheel;
the steering angle sensor module is used for detecting the steering angle signal when the steering wheel is rotated;
the gesture recognition module comprises a gesture judgment module and a gesture memory module; the gesture memory module is used for storing various different gestures and their corresponding pressure signals and steering angle signals; the gesture judgment module is used for importing the pressure signal and the steering angle signal into the gesture recognition module for recognition to obtain the operator's gesture;
the gesture-function matching module comprises a daily driving gesture library, a function customization module, and a function matching module; the daily driving gesture library is used for storing daily driving gestures and their corresponding control functions; the function customization module is used for customizing interactive gestures and their corresponding control functions; the function matching module is used for comparing input gestures against the daily driving gesture library and the function customization module and outputting the corresponding control function;
and the function execution module receives the control function from the function matching module and executes a corresponding instruction.
8. The system of claim 7, wherein the pressure acquisition module comprises:
array film pressure sensors of the skin sensitive grade;
the signal acquisition card is used for acquiring the electric signal of the pressure sensor;
and the data processing module is used for filtering and amplifying the electric signals acquired by the signal acquisition card and converting the electric signals into digital signals.
9. The implicit gesture interaction system for a steering wheel according to claim 8, wherein the pressure sensors are arranged in a 3D array and installed on the front, side, and back of the steering wheel;
the pressure sensors are divided into 4 zones: the first zone is the region from the 6 o'clock to the 9 o'clock position of the steering wheel; the second zone is the region from 9 to 12 o'clock; the third zone is the region from 12 to 3 o'clock; and the fourth zone is the region from 3 to 6 o'clock.
10. The implicit gesture interaction system for a steering wheel according to claim 8, wherein the acquisition frequency of the signal acquisition card is not lower than 10 Hz.
CN202210950891.1A 2022-08-09 2022-08-09 Implicit gesture interaction method and system for steering wheel Pending CN115268651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210950891.1A CN115268651A (en) 2022-08-09 2022-08-09 Implicit gesture interaction method and system for steering wheel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210950891.1A CN115268651A (en) 2022-08-09 2022-08-09 Implicit gesture interaction method and system for steering wheel

Publications (1)

Publication Number Publication Date
CN115268651A true CN115268651A (en) 2022-11-01

Family

ID=83750919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210950891.1A Pending CN115268651A (en) 2022-08-09 2022-08-09 Implicit gesture interaction method and system for steering wheel

Country Status (1)

Country Link
CN (1) CN115268651A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115729356A (en) * 2023-01-10 2023-03-03 深圳飞蝶虚拟现实科技有限公司 3D remote interaction action optimization system based on habit analysis
CN118107605A (en) * 2024-04-30 2024-05-31 润芯微科技(江苏)有限公司 Vehicle control method and system based on steering wheel gesture interaction
CN118107605B (en) * 2024-04-30 2024-08-02 润芯微科技(江苏)有限公司 Vehicle control method and system based on steering wheel gesture interaction

Similar Documents

Publication Publication Date Title
CN115268651A (en) Implicit gesture interaction method and system for steering wheel
US9323985B2 (en) Automatic gesture recognition for a sensor system
US8055305B2 (en) Method and apparatus for inputting function of mobile terminal using user's grip posture while holding mobile terminal
CN107391014B (en) Intelligent touch screen keyboard with finger identification function
CN110148405B (en) Voice instruction processing method and device, electronic equipment and storage medium
WO2021136054A1 (en) Voice wake-up method, apparatus and device, and storage medium
CN100354882C (en) Information processing apparatus and signature data input programs
EP3215981B1 (en) Nonparametric model for detection of spatially diverse temporal patterns
JP2015097128A (en) Systems and methods for pressure-based authentication of signature on touch screen
CN102985897A (en) Efficient gesture processing
WO1997020284A1 (en) Method and system for velocity-based handwriting recognition
CN112148128A (en) Real-time gesture recognition method and device and man-machine interaction system
CN107818251A (en) A kind of face identification method and mobile terminal
CN115620312A (en) Cross-modal character handwriting verification method, system, equipment and storage medium
CN114397963B (en) Gesture recognition method and device, electronic equipment and storage medium
CN111639318A (en) Wind control method based on gesture monitoring on mobile terminal and related device
CN115393876A (en) Online signature identification method based on neural network
CN111444771B (en) Gesture preposing real-time identification method based on recurrent neural network
CN111008546B (en) Sensor output signal identification method and device
KR101253745B1 (en) Digital door lock and operation method of the same
CN111807173A (en) Elevator control method based on deep learning, electronic equipment and storage medium
CN117684841B (en) Vehicle window control method, device, computer equipment and storage medium
Gabralla Dense Deep Neural Network Architecture for Keystroke Dynamics Authentication in Mobile Phone
CN114879849B (en) Multichannel air pen gesture recognition method
Eremin et al. A concept of continuous user authentication based on behavioral biometrics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination