CN117666794A - Vehicle interaction method and device, electronic equipment and storage medium - Google Patents

Vehicle interaction method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN117666794A
Authority
CN
China
Prior art keywords
gesture
preset
touch information
steering wheel
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311676103.5A
Other languages
Chinese (zh)
Inventor
彭泳铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Geely Automobile Research Institute Ningbo Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202311676103.5A priority Critical patent/CN117666794A/en
Publication of CN117666794A publication Critical patent/CN117666794A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a vehicle interaction method and device, an electronic device, and a storage medium. A first sensor is disposed on the steering wheel of a vehicle, and the vehicle interaction method comprises the following steps: acquiring, through the first sensor, first gesture touch information of a first gesture to be recognized with which a user interacts with the steering wheel; generating a confirmation instruction when it is determined, based on the first gesture touch information, that the first gesture to be recognized matches a preset confirmation gesture; and generating a cancellation instruction when it is determined, based on the first gesture touch information, that the first gesture to be recognized matches a preset cancellation gesture. This solves the technical problem in the related art of low safety of vehicle interaction during driving.

Description

Vehicle interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular to a vehicle interaction method and device, an electronic device, and a storage medium.
Background
With the rapid development of the automotive industry, users increasingly interact with vehicles or other on-board devices while driving. To improve the user experience, many current vehicles simplify user operations as much as possible to shorten operation time, but the decision rights for many operations must remain in the user's own hands. Confirmation and cancellation are therefore frequent and necessary operations during driving.
In the related art, to make a confirmation or cancellation decision while driving, the driver still needs to take a hand off the steering wheel, move it to the in-vehicle terminal or another terminal device, accurately select the confirm or cancel key, and concentrate to locate that key in the display interface. The operation is cumbersome, and all of these actions distract the driver, increasing driving risk and even causing safety accidents.
Disclosure of Invention
The main purpose of the present application is to provide a vehicle interaction method and device, an electronic device, and a storage medium, aiming to solve the technical problem in the related art of low safety of vehicle interaction during driving.
In order to achieve the above object, the present application provides a vehicle interaction method, wherein a first sensor is disposed on the steering wheel of a vehicle, and the vehicle interaction method comprises the following steps:
acquiring, through the first sensor, first gesture touch information of a first gesture to be recognized with which a user interacts with the steering wheel;
generating a confirmation instruction when it is determined, based on the first gesture touch information, that the first gesture to be recognized matches a preset confirmation gesture; and
generating a cancellation instruction when it is determined, based on the first gesture touch information, that the first gesture to be recognized matches a preset cancellation gesture.
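The three claimed steps can be sketched in code. This is a minimal illustration, not the patent's implementation: the gesture-signature strings and all names are assumptions made for the sketch.

```python
# Hypothetical sketch of the claimed flow: one sample of first gesture touch
# information is reduced to a signature, which is matched against preset
# confirmation/cancellation gestures to generate an instruction.

PRESET_CONFIRM = "confirm-signature"   # placeholder for the preset confirmation gesture
PRESET_CANCEL = "cancel-signature"     # placeholder for the preset cancellation gesture


def generate_instruction(gesture_signature):
    """Map one gesture-touch-information signature to an instruction, or None."""
    if gesture_signature == PRESET_CONFIRM:
        return "CONFIRM_INSTRUCTION"
    if gesture_signature == PRESET_CANCEL:
        return "CANCEL_INSTRUCTION"
    return None  # unmatched gesture: no instruction is generated
```

A signature that matches neither preset yields no instruction, mirroring the claim language in which an instruction is generated only on a match.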
The present application also provides a vehicle interaction device, wherein a first sensor is disposed on the steering wheel of a vehicle, and the vehicle interaction device comprises:
an acquisition module, configured to acquire, through the first sensor, first gesture touch information of a first gesture to be recognized with which a user interacts with the steering wheel;
a first instruction generation module, configured to generate a confirmation instruction when it is determined, based on the first gesture touch information, that the first gesture to be recognized matches a preset confirmation gesture; and
a second instruction generation module, configured to generate a cancellation instruction when it is determined, based on the first gesture touch information, that the first gesture to be recognized matches a preset cancellation gesture.
The present application also provides an electronic device, which is a physical device comprising a memory, a processor, and a program of the vehicle interaction method stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of the vehicle interaction method described above.
The present application also provides a storage medium, which is a computer-readable storage medium storing a program for implementing the vehicle interaction method; when executed by a processor, the program implements the steps of the vehicle interaction method described above.
The present application provides a vehicle interaction method and device, an electronic device, and a storage medium. The confirmation-or-cancellation method is applied to a vehicle whose steering wheel is provided with a first sensor. First gesture touch information of a first gesture to be recognized, with which the user interacts with the steering wheel, is acquired through the first sensor, so that the user's confirmation or cancellation intention is detected via the steering wheel and the user can express that intention without taking either hand off the wheel. A confirmation instruction is generated when, based on the first gesture touch information, the first gesture to be recognized is determined to match the preset confirmation gesture, and a cancellation instruction is generated when it is determined to match the preset cancellation gesture. In this way, the confirmation or cancellation intention that the user expresses through interaction with the steering wheel is converted into a confirmation or cancellation instruction, realizing decision control. Because both hands should rest on the steering wheel while driving, collecting the first gesture touch information through the first sensor on the steering wheel lets the driver make a decision without moving the hands, and therefore without diverting attention to check whether the hand movement is accurate.
As a result, with both hands on the steering wheel, the driver can convey a decision intention with a gesture while using minimal vision, which improves the convenience of decision control and reduces the safety risk of making decisions while driving. This overcomes the technical defects of the related art, in which the driver must take a hand off the steering wheel, move it to the in-vehicle terminal or another terminal device, accurately select the confirm or cancel key, and concentrate to locate that key in the display interface: a cumbersome operation whose distractions increase driving risk and may even cause safety accidents.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for their description are briefly introduced below. It is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a first embodiment of a vehicle interaction method of the present application;
FIG. 2 is a schematic diagram of a scene of a confirmation gesture according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a scene of canceling a gesture in an embodiment of the present application;
FIG. 4 is a flow chart of a second embodiment of a vehicle interaction method of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a vehicle interaction device according to the present application;
fig. 6 is a schematic device structure diagram of a hardware operating environment related to a vehicle interaction method in an embodiment of the application.
The implementation, functional features and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
In order to make the above objects, features, and advantages of the present application more comprehensible, the embodiments are described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without inventive effort fall within the scope of protection of the application.
With the rapid development of the automotive industry, users increasingly interact with vehicles or other on-board devices while driving. To improve the user experience, many current vehicles simplify user operations as much as possible to shorten operation time, but the decision rights for many operations must remain in the user's own hands, so confirmation and cancellation are frequent and necessary operations during driving. Improving the safety and ease of the driver's operations has therefore been an important technical challenge. In the related art, the methods by which a user controls decisions while interacting with a vehicle include the following:
(1) Camera-based gesture recognition systems: some vehicle manufacturers and technology companies have begun using cameras and gesture recognition algorithms for vehicle interaction control. These systems capture the driver's gestures, such as a hand swing or finger movement, and translate them into interaction control commands. However, they require the driver to move a hand into the camera's field of view and perform a large gesture so that it can be recognized; the driver must still divert attention to find a position where the gesture can be made and take a hand off the steering wheel, so a high safety risk remains.
(2) Voice control devices: some vehicles are equipped with voice-controlled interaction devices that allow the driver to issue commands by speech. Although these systems reduce manual operation, voice recognition places high demands on the environment. A moving vehicle is rarely quiet: driving itself generates noise, passengers may be talking, music may be playing, or the vehicle may be in a noisy environment. Under these conditions, issuing control instructions through voice recognition is difficult, and understanding certain instructions remains challenging. In scenarios with low timeliness requirements, an instruction can eventually be recognized through repeated attempts, but while driving, repetition consumes considerable time during which the vehicle keeps moving. This may cause a wrong turn, a deceleration that affects vehicles behind, or distract the user with actions such as enunciating carefully, turning off background music, or closing the windows — all of which also create high safety risks.
(3) Touch-sensitive steering wheels: some automobile manufacturers have begun using touch sensing technology to integrate control functions into the driving control area. However, these touch pads are generally planar and therefore must be mounted separately, so the driver still has to divert attention to locate the touch pad before making the corresponding gesture, which likewise presents a high safety risk.
(4) Virtual reality gesture control techniques: gesture control schemes already exist in the field of virtual reality, allowing users to interact with gestures in a virtual environment. While these techniques perform well there, applying them to vehicle interaction control requires considering more safety and practicality factors.
In summary, the practical vehicle interaction methods in the related art still require the driver to take a hand off the steering wheel and move it to a designated position, during which the user also has to move the line of sight off the road to confirm that the hand has reached that position. If the user does look away from the road, the interaction is more likely to be recognized accurately, but taking a hand off the wheel and the eyes off the road distracts attention, increasing driving risk and even causing safety accidents. If the user does not look away, the interaction may fail to be recognized: the hand may not find the touch pad by feel alone, or may not reach an angle and position the camera can recognize, so the operation cannot be identified in the captured image. When the interaction is not recognized, interactive control cannot be realized in time while the vehicle continues moving forward, which may cause the vehicle to take a wrong route, bringing inconvenience, wasting resources and time, and giving the user a poor experience.
Based on the above, the present application provides a more convenient and safer vehicle interaction method. The confirmation-or-cancellation method is applied to a vehicle whose steering wheel is provided with a first sensor. First gesture touch information of a first gesture to be recognized, with which the user interacts with the steering wheel, is acquired through the first sensor, so that the user's confirmation or cancellation intention is detected via the steering wheel and the user can express that intention without taking either hand off the wheel. A confirmation instruction is generated when, based on the first gesture touch information, the first gesture to be recognized is determined to match the preset confirmation gesture, and a cancellation instruction is generated when it is determined to match the preset cancellation gesture. In this way, the confirmation or cancellation intention that the user expresses through interaction with the steering wheel is converted into a confirmation or cancellation instruction, realizing decision control. Because both hands should rest on the steering wheel while driving, collecting the first gesture touch information through the first sensor on the steering wheel lets the driver make a decision without moving the hands, and therefore without diverting attention to check whether the hand movement is accurate.
As a result, with both hands on the steering wheel, the driver can convey a decision intention with a gesture while using minimal vision, which improves the convenience of decision control and reduces the safety risk of making decisions while driving. This overcomes the technical defects of the related art, in which the driver must take a hand off the steering wheel, move it to the in-vehicle terminal or another terminal device, and concentrate to accurately locate the confirm or cancel key in the display interface: a cumbersome operation whose distractions increase driving risk and may even cause safety accidents. The method thus improves the convenience of the driver's decision control while driving and reduces the driving risk caused by distraction.
Example 1
In a first embodiment of the vehicle interaction method, referring to fig. 1, a first sensor is disposed on a steering wheel of a vehicle, and the vehicle interaction method includes the following steps:
Step S10: acquiring, through the first sensor, first gesture touch information of a first gesture to be recognized with which a user interacts with the steering wheel.
the execution main body of the method of the embodiment may be a vehicle interaction device, or may be a vehicle interaction method terminal device or a server, and in this embodiment, the vehicle interaction device is exemplified by a vehicle interaction device, and the vehicle interaction device may be integrated on a terminal device such as a vehicle, a vehicle-mounted terminal, a vehicle controller, a smart phone, a tablet computer, and the like, which have a data processing function.
In this embodiment, it should be noted that a user may need to interact with the vehicle or other on-board devices while driving. To improve the user experience, many current vehicles simplify user operations as much as possible to shorten operation time, but the decision rights for many operations must remain in the user's own hands. Confirmation and cancellation are therefore frequent and necessary operations during driving, where confirmation indicates an affirmative and cancellation indicates a negative. At present, to confirm or cancel while driving, the driver still needs to take a hand off the steering wheel, move it to the in-vehicle terminal or another terminal device, accurately select the confirm or cancel key, and concentrate to locate that key in the display interface; the operation is cumbersome, and all of these actions distract the driver, increasing driving risk and even causing safety accidents. This embodiment can quickly and accurately realize the decision function through simple gestures that never leave the steering wheel, meeting the user's actual need to make decisions conveniently at any time while driving, under the premise of ensuring driving safety.
For example, when using the car navigation system, drivers can confirm a selected destination with a gesture, avoiding the need to touch the screen or press a button — particularly useful when they need to keep their eyes on the road. As another example, the driver can use gestures to answer, hang up, or decline an incoming call, making phone operation more convenient and less distracting. The gesture confirmation and cancellation functions can likewise be used to quickly and easily control in-vehicle entertainment systems, such as music playback, video selection, or game operations, or to confirm or cancel vehicle system settings, such as temperature or seat adjustments.
The vehicle interaction method is applied to the vehicle and converts the user's decision intention into a confirmation or cancellation instruction during interaction with the vehicle, thereby realizing decision control of the vehicle. The steering wheel of the vehicle is provided with a first sensor for collecting gesture touch information generated by touch interaction between the user and the steering wheel; the first sensor may include at least one of a pressure sensor, a photoelectric sensor, an image sensor, and the like.
In an embodiment, the first sensor is a sensor array comprising a plurality of sensor units distributed at different positions on the surface of the steering wheel. When any one or more of the sensor units collect sensor data, the gesture with which the user is currently interacting with the steering wheel can be determined based on the sensor data together with the positions of those units, their positions relative to other units, or the positional relationships among the units in the array. Different interaction control instructions can be matched to different gestures in advance. After the user has learned the gestures and their corresponding instructions, the user simply makes the corresponding gesture during actual driving; once the gesture touch information it generates is detected, the intended interaction control instruction can be identified, and the vehicle or other equipment carried on the vehicle is controlled to execute it.
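As a rough illustration of the sensor-array idea above, the sketch below models the array as readings keyed by each unit's position on the rim; which units fire, and where they sit, determines the touched region. The angular-position encoding, the threshold, and the region names are all assumptions invented for this sketch.

```python
# Hedged sketch: sensor units are keyed by angular position (degrees) around
# the steering-wheel rim; a gesture is inferred from which units report contact
# and from their positions. Threshold and region split are illustrative only.

def active_units(readings, threshold=0.5):
    """Positions of sensor units whose reading exceeds the contact threshold."""
    return {pos for pos, value in readings.items() if value > threshold}


def infer_touched_region(positions):
    """Very rough region inference from the set of active unit positions."""
    if not positions:
        return None
    # Assumed convention: 0-180 degrees is the left half of the rim.
    return "left-rim" if min(positions) < 180 else "right-rim"
```

In a real system the positional relationships among active units would feed a richer gesture classifier; here a single min-angle test stands in for that step.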
The gesture touch information may include at least one of pressure information, operation time, and operation position information, and may be extracted from the sensor data, the deployment positions of the sensor units, and so on. The operation type of the user operation, such as touch, tap, press, or slide, can be discriminated based on the pressure information and operation time; and because the part of the hand performing the operation can be identified from the operation position information, micro gestures can be recognized, including gestures made by part of the hand, such as an action of one finger, of several fingers, or of the palm. Hand parts and operation types can thus form many combinations, matching more interaction controls and enabling more interaction control operations. In this way, on the one hand, the hands should already be on the steering wheel to control it while driving; in the vehicle interaction method of this embodiment, the fingers or palm interact with the steering wheel without leaving it, so compared with traditional gesture recognition systems the driver's hands never leave the wheel, greatly improving driving safety. On the other hand, micro gestures are tiny and precise, requiring only minimal vision and movement to complete interactive control; compared with traditional gesture control systems that require large gestures or complex actions, they reduce the risk of distraction and keep the driver's attention focused on the road.
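The discrimination of operation types described above can be sketched as a small classifier over pressure, duration, and contact positions. The thresholds below are assumptions for illustration, not values from the patent.

```python
# Illustrative classifier for the operation types named in the text (touch,
# tap, press, slide) from pressure, contact duration, and the sequence of
# contact positions. All numeric thresholds are invented assumptions.

def classify_operation(pressure, duration_s, positions):
    """Classify one contact event into an operation type."""
    if len(set(positions)) > 1:
        return "slide"      # contact point moved across sensor units
    if pressure > 5.0:
        return "press"      # sustained high pressure on one unit
    if duration_s < 0.2:
        return "tap"        # brief, light contact
    return "touch"          # light, longer contact
```

The check order matters: movement across units is tested first so that a sliding press is still reported as a slide.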
As an example, step S10 includes: after the vehicle is powered on and running, collecting, through the first sensor, the first gesture touch information of the first gesture to be recognized with which the user's hand interacts with the steering wheel, either continuously or after a preset gesture control condition is met.
In an embodiment, when the vehicle is powered on, a prompt asking whether to enable gesture control may be output. If the user chooses to enable it, the preset gesture control condition is judged to be met, the vehicle enters a gesture control state, and the first sensor is started so that it can collect the gesture touch information until the vehicle exits the gesture control state; the exit may be triggered by a user operation, vehicle power-down, and so on, as determined by the actual situation. When the vehicle is first powered on, it is usually parked, so interacting with the vehicle at that moment carries no safety risk, and whether the user needs gesture-based interaction control can be confirmed accurately.
In another embodiment, a gesture-control wake gesture and a gesture-control close gesture may be preset. After the vehicle is powered on, the first sensor is started; when the wake gesture is detected through the first sensor, the preset gesture control condition is judged to be met, execution of the vehicle interaction method begins, and the sensor data collected by the first sensor are analyzed. When the close gesture is detected, execution of the method stops, avoiding accidental triggering. A user may only develop the need for a confirmation or cancellation decision after the vehicle is already on the road, when pulling over is inconvenient; yet taking the hands off the steering wheel to interact with the in-vehicle terminal or another device would divert the driver's attention and carry a high safety risk. The wake gesture therefore starts the vehicle interaction method, and after the interaction the close gesture stops it to avoid accidental triggering, realizing flexible switching of the method.
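The wake/close behavior above is essentially a two-state machine: gestures are only interpreted as commands while control is active. A minimal sketch, with gesture names and the class interface invented for illustration:

```python
# Sketch of the wake/close state machine described above. The sensor is read
# after power-on, but touch information is only interpreted as commands while
# gesture control is active, which avoids accidental triggering.

class GestureControlState:
    def __init__(self):
        self.active = False  # vehicle starts outside the gesture-control state

    def handle(self, gesture):
        """Process one recognized gesture; return a command or None."""
        if gesture == "WAKE":       # preset gesture-control wake gesture
            self.active = True
            return None
        if gesture == "CLOSE":      # preset gesture-control close gesture
            self.active = False
            return None
        # Other gestures are interpreted only while control is active.
        return gesture if self.active else None
```

For example, a "CONFIRM" gesture made before the wake gesture is ignored, while the same gesture after waking is passed through as a command.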
In an implementation, the step of collecting, through the first sensor, gesture touch information of a gesture to be recognized with which the user interacts with the steering wheel includes: acquiring, through the first sensor, first gesture touch information of a first gesture to be recognized with which the user's fingers interact with the steering wheel, wherein the gesture touch information includes at least one of tap gesture information of at least one target finger, touch gesture information of at least one target finger, non-touch gesture information of at least one target finger, and slide gesture information of at least one target finger.
In this embodiment, it should be noted that detecting micro gestures of finger-to-steering-wheel interaction and controlling on that basis has two advantages. On the one hand, there are many fingers, which can be combined with each other and with operation types to form many combinations, matching more interaction controls and enabling more interaction control operations. On the other hand, fingers are highly flexible, so interacting with the steering wheel is easy, does not interfere with control of the steering wheel, and therefore carries little safety risk. Accordingly, the gesture touch information may include a combination of one or more of tap, touch, non-touch, and slide gesture information of at least one target finger, where the target finger may be determined from the position information in the sensor data.
As an example, after the vehicle is powered on and running, the first gesture touch information of the first gesture to be recognized with which the user's fingers interact with the steering wheel may be collected through the first sensor, either continuously or after the preset gesture control condition is met.
Optionally, the confirmation gesture includes a gesture in which the index finger and the thumb hold the steering wheel while the middle finger, the ring finger, and the little finger strike the steering wheel;
and/or, the cancel gesture includes a gesture in which the middle finger, ring finger and little finger hold the steering wheel while the index finger and thumb strike the steering wheel.
In this embodiment, referring to FIG. 2, the confirmation gesture includes a gesture in which the index finger and thumb hold the steering wheel while the middle finger, ring finger, and little finger strike the steering wheel. On the one hand, since this gesture resembles the "OK" sign, it is readily associated with an intention to confirm; as the gesture corresponding to the confirmation instruction, it is therefore easy to understand and has a low learning cost. On the other hand, the driver is unlikely to make this gesture unintentionally during daily driving, and when the middle finger, ring finger, and little finger are lifted at the same time, the thumb applies a distinct upward pressure to the steering wheel; that is, the gesture features are obvious, so the probability of false detection in gesture recognition is low.
Referring to fig. 3, the cancel gesture includes a gesture in which the middle finger, ring finger, and little finger hold the steering wheel while the index finger and thumb strike the steering wheel. On the one hand, this gesture is the opposite of "OK" and is readily associated with an intention to cancel; as the gesture corresponding to the cancellation instruction, it is therefore easy to understand and has a low learning cost. On the other hand, the driver is unlikely to make this gesture unintentionally during daily driving, so the probability of false detection in gesture recognition is low.
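The two preset gestures above can be sketched as per-finger hold/strike patterns. This is a hypothetical illustration, not the patent's implementation: the finger names, the `GesturePattern` structure, and the `classify` helper are all assumptions introduced here.

```python
# Illustrative sketch: encode the preset confirmation ("OK"-like) and
# cancellation gestures as per-finger action patterns. All names here are
# assumptions for illustration, not from the patent.
from dataclasses import dataclass

@dataclass(frozen=True)
class GesturePattern:
    holding: frozenset   # fingers gripping the steering wheel
    striking: frozenset  # fingers striking the steering wheel

CONFIRM = GesturePattern(
    holding=frozenset({"thumb", "index"}),
    striking=frozenset({"middle", "ring", "little"}),
)
CANCEL = GesturePattern(
    holding=frozenset({"middle", "ring", "little"}),
    striking=frozenset({"thumb", "index"}),
)

def classify(holding, striking):
    """Map an observed finger-state snapshot to a preset gesture, if any."""
    observed = GesturePattern(frozenset(holding), frozenset(striking))
    if observed == CONFIRM:
        return "confirm"
    if observed == CANCEL:
        return "cancel"
    return None  # neither preset gesture: leave unrecognized
```

Because the two patterns are exact mirrors of each other, a snapshot that matches neither (for example, only some fingers striking) is simply left unrecognized rather than guessed at.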
Optionally, the step of collecting, by the first sensor, first gesture touch information of a first gesture to be recognized, where the user interacts with the steering wheel, includes:
step S11, after receiving an instruction to be confirmed, outputting vibration prompt information through the steering wheel;
step S12, collecting, by the first sensor, first gesture touch information of a first gesture to be identified, which is interacted with the steering wheel by a user.
In this embodiment, it should be noted that, during driving, a large number of interaction operations occur between the driver's hands and the steering wheel, while situations requiring decision control are few. Therefore, the gesture decision control function may be enabled only when there is a decision requirement, that is, when an instruction to be confirmed is received; when no instruction to be confirmed has been received, a gesture control sleep state may be entered and the gesture decision control function closed. This not only avoids interfering with the driver's normal driving operations but also saves resources.
As an example, the steps S11 to S12 include: after the vehicle is powered on and running, instructions to be confirmed may be monitored. After an instruction to be confirmed is received, vibration prompt information may be output through the steering wheel, reminding the user by vibration that an instruction to be confirmed has been received. Then, the first gesture touch information of the first gesture to be recognized, in which the user interacts with the steering wheel, collected by the first sensor after the vibration prompt information is output, is acquired.
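Steps S11 to S12 can be sketched as a small session object that stays dormant until a pending instruction arrives. This is an illustrative sketch under assumed interfaces: the `vibrate` and `read_touch_frame` callables stand in for the steering-wheel actuator and the first sensor, which the patent does not specify at this level.

```python
# Illustrative sketch of steps S11-S12: on receiving an instruction awaiting
# confirmation, output a vibration prompt, then collect touch frames from the
# first sensor. The callable interfaces are assumptions for illustration.
class SteeringWheelGestureSession:
    def __init__(self, vibrate, read_touch_frame):
        self._vibrate = vibrate        # callable: emit the vibration prompt
        self._read = read_touch_frame  # callable: read one frame of touch data
        self.active = False            # gesture control sleeps until needed

    def on_instruction_to_confirm(self):
        # S11: prompt the driver by vibration only when a decision is pending
        self._vibrate()
        self.active = True

    def collect(self, n_frames):
        # S12: gather first-gesture touch information after the prompt;
        # while dormant, nothing is collected and resources are saved
        if not self.active:
            return []
        return [self._read() for _ in range(n_frames)]
```

Keeping collection gated on `active` mirrors the sleep-state behavior described above: outside a pending decision, the driver's ordinary grip on the wheel never reaches the recognizer.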
In one embodiment, the vehicle interaction method may further include the following step: after the step of generating the confirmation instruction or after the step of generating the cancellation instruction, entering the gesture control sleep state. This avoids interfering with the driver's normal driving operations and saves resources.
Step S20, generating a confirmation instruction when it is determined that the first gesture to be recognized matches a preset confirmation gesture based on the first gesture touch information;
step S30, generating a cancel instruction when it is determined that the first gesture to be recognized matches a preset cancel gesture based on the first gesture touch information.
In the present embodiment, the confirmation instruction is an instruction expressing an affirmative, and the cancellation instruction is an instruction expressing a negative. It should be noted that various functions of the vehicle may be classified as affirmative or negative according to actual needs, so as to map onto the confirmation instruction and the cancellation instruction. For example, volume up may be identified as affirmative and volume down as negative, so that volume control can be achieved through the confirmation gesture and the cancellation gesture.
As an example, the steps S20 to S30 include: after the first gesture touch information is collected, the first gesture touch information may be analyzed to determine whether the first gesture to be recognized matches the preset confirmation gesture or the preset cancellation gesture. A confirmation instruction is generated when the first gesture to be recognized is determined to match the preset confirmation gesture; a cancellation instruction is generated when the first gesture to be recognized is determined to match the preset cancellation gesture; and prompt information asking the user to make the gesture again is output when the first gesture to be recognized matches neither the preset confirmation gesture nor the preset cancellation gesture.
In an implementation manner, analyzing the first gesture touch information and determining whether the first gesture to be recognized matches the preset confirmation gesture or the preset cancellation gesture may be performed as follows: matching the first gesture touch information against the preset confirmation gesture or the preset cancellation gesture and judging whether the first gesture to be recognized matches either of them; or inputting the first gesture touch information into a preset first gesture recognition model, analyzing the first gesture touch information through the first gesture recognition model, determining the target gesture corresponding to the first gesture to be recognized, and judging whether the target gesture is the confirmation gesture or the cancellation gesture, where the gesture recognition model is similar to those in the prior art and is not described in detail herein.
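The two matching strategies in the paragraph above can be sketched as one dispatch function. This is an illustrative sketch under assumed interfaces: the `templates` dict of matcher callables and the optional `model` callable are placeholders for the preset-gesture templates and the trained recognition model, neither of which the patent defines concretely.

```python
# Illustrative sketch of the two matching strategies: direct template
# matching against the preset gestures, or delegating to a recognition
# model. Both interfaces are assumptions for illustration.
def match_gesture(touch_info, templates, model=None):
    """Return "confirm", "cancel", or None for the given touch information.

    templates: dict mapping gesture name -> matcher callable on touch_info
    model:     optional callable returning a gesture label directly
    """
    if model is not None:
        # Model-based recognition: trust the model's label, but only the
        # two decision gestures are actionable here.
        label = model(touch_info)
        return label if label in ("confirm", "cancel") else None
    # Template-based recognition: try each preset gesture in turn.
    for name, matches in templates.items():
        if matches(touch_info):
            return name
    return None
```

Returning `None` for anything outside the confirm/cancel pair matches the behavior above of prompting the user to make the gesture again rather than acting on an ambiguous input.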
Optionally, the step of generating the confirmation instruction includes, when it is determined based on the first gesture touch information that the first gesture to be recognized matches a preset confirmation gesture:
step S21, judging whether the first gesture to be recognized comprises a plurality of sub-gestures to be recognized, the interval time of which is smaller than a preset first time threshold value, based on the first gesture touch information;
step S22, if yes, judging whether the number of sub-gestures to be recognized matched with a preset confirmation gesture exceeds a preset first number threshold based on the first gesture touch information;
step S23, if yes, generating a confirmation instruction.
In this embodiment, it should be noted that, since a large number of interaction operations occur between the driver's hands and the steering wheel during driving, it is inevitable that the driver may perform an operation similar to the preset confirmation gesture or the preset cancellation gesture, or similar in gesture features, resulting in false detection. The probability of accidentally making a similar action once is relatively high, but the probability of making the same similar action two or more times in succession is greatly reduced. Therefore, requiring the confirmation gesture or the cancellation gesture to occur repeatedly within a preset time threshold reduces the probability of misrecognition and improves user experience.
As an example, the steps S21 to S23 include: after the first gesture touch information is collected, the first gesture touch information may be analyzed to detect whether the first gesture to be recognized includes a plurality of sub-gestures to be recognized, where the time interval between any two temporally adjacent sub-gestures exceeds a preset third time threshold but is smaller than the preset first time threshold. Exceeding the preset third time threshold allows them to be regarded as two independent sub-gestures, while remaining below the preset first time threshold allows them to be regarded as expressing the same intention and thus belonging to the same gesture. When a plurality of sub-gestures to be recognized is detected, the first gesture touch information corresponding to each sub-gesture to be recognized is analyzed to judge whether each sub-gesture matches the preset confirmation gesture, and a confirmation instruction is generated when the number of sub-gestures to be recognized matching the preset confirmation gesture is detected to exceed the preset first number threshold.
In one embodiment, assume the confirmation instruction is generated when the number of sub-gestures to be recognized matching the preset confirmation gesture is two or more. Suppose that, while making the gesture, the user encounters an emergency after completing the first sub-gesture: the middle finger, ring finger, and little finger should strike the steering wheel at the same time, but because of the emergency the middle finger fails to strike, so the second sub-gesture is incomplete or nonstandard. The user can then quickly repeat the gesture once more, so that two of the three sub-gestures still match the preset confirmation gesture; the user's intention can still be recognized, and a confirmation instruction is generated.
Optionally, the step of generating the cancellation instruction when the first gesture to be recognized is determined to match with a preset cancellation gesture based on the first gesture touch information includes:
step S31, judging whether the first gesture to be recognized comprises a plurality of sub-gestures to be recognized, the interval time of which is smaller than a preset second time threshold value, based on the first gesture touch information;
step S32, if yes, judging whether the number of sub-gestures to be recognized matched with a preset canceling gesture exceeds a preset second number threshold based on the first gesture touch information;
and step S33, if yes, generating a cancel instruction.
As an example, the steps S31 to S33 include: after the first gesture touch information is collected, the first gesture touch information may be analyzed to detect whether the first gesture to be recognized includes a plurality of sub-gestures to be recognized, where the time interval between any two temporally adjacent sub-gestures exceeds a preset fourth time threshold but is smaller than the preset second time threshold. Exceeding the preset fourth time threshold allows them to be regarded as two independent sub-gestures, while remaining below the preset second time threshold allows them to be regarded as expressing the same intention and thus belonging to the same gesture. When a plurality of sub-gestures to be recognized is detected, the first gesture touch information corresponding to each sub-gesture to be recognized is analyzed to judge whether each sub-gesture matches the preset cancellation gesture, and a cancellation instruction is generated when the number of sub-gestures to be recognized matching the preset cancellation gesture is detected to exceed the preset second number threshold.
In one embodiment, assume the cancellation instruction is generated when the number of sub-gestures to be recognized matching the preset cancellation gesture is two or more. Suppose that, while making the gesture, the user encounters an emergency after completing the first sub-gesture: the index finger and thumb should strike the steering wheel at the same time, but because of the emergency one of them fails to strike, so the second sub-gesture is incomplete or nonstandard. The user can then quickly repeat the gesture once more, so that two of the three sub-gestures still match the preset cancellation gesture; the user's intention can still be recognized, and a cancellation instruction is generated.
In this embodiment, the vehicle interaction method is applied to a vehicle on whose steering wheel a first sensor is arranged. First gesture touch information of a first gesture to be recognized, in which the user interacts with the steering wheel, is collected through the first sensor, and the user's intention to confirm or cancel, expressed through the steering wheel, is detected, so that the user can express an intention to confirm or cancel by interacting with the steering wheel without either hand leaving it. A confirmation instruction is generated when the first gesture to be recognized is determined, based on the first gesture touch information, to match the preset confirmation gesture, and a cancellation instruction is generated when it is determined to match the preset cancellation gesture, thereby realizing confirmation or cancellation control based on the detected intention; that is, the intention to confirm or cancel that the user expresses by interacting with the steering wheel can be converted into a confirmation or cancellation instruction, realizing decision control over confirmation or cancellation. Since the user should keep both hands on the steering wheel while driving, collecting the first gesture touch information through the first sensor arranged on the steering wheel allows the driver to make a decision without moving the hands; and since the hands need not move, no attention need be diverted to checking whether the decision operation of the hands is accurate.
Therefore, with both hands holding the steering wheel, the driver can convey a decision intention using gestures through interaction between the hands and the steering wheel while relying very little on vision, which improves the convenience of decision control and reduces the safety risk caused by making decisions while driving. This overcomes the technical defects in the related art, which require the driver, while driving, to take a hand off the steering wheel, move it to the vehicle-mounted terminal or other terminal device, accurately select the confirm or cancel key, and concentrate a certain amount of effort on accurately locating the confirm or cancel key in the display interface; such complicated operations lead to distraction, increase driving risk, and may even cause safety accidents.
Example two
Further, referring to fig. 3, in the second embodiment of the present application, content that is the same as or similar to the above embodiment may be understood with reference to the above description and will not be repeated. On this basis, the vehicle further includes a second sensor, and the vehicle interaction method further includes:
step A10, acquiring a gesture image through the second sensor;
In this embodiment, the higher the accuracy of detecting and recognizing the user's operation while the vehicle is running, the more convenient the operation is for the user, the less tense the user is, and the more concentrated the user's attention, so driving safety is higher. Gesture detection by a single sensor is more prone to false detection, so its accuracy is slightly worse than that of mutual verification among multiple sensors.
Image data contains more information, and the position information it contains is more accurate. However, when the image sensor collects image data, occlusion by obstacles, limited resolution, and similar problems easily make the target object in the collected image data incomplete; that is, the gesture features in the collected gesture image may be incomplete, so recognition from a single image is prone to failing to identify the target gesture. Gesture touch information, by contrast, captures the interaction between the user's hand and the steering wheel more comprehensively, but judges position less accurately, so it is less reliable in determining which finger or which part of the hand performed the interaction. Therefore, the gesture touch information and the gesture image can complement each other: the gesture touch information can make up for the incompleteness of the target object in the image data caused by occlusion, resolution, and similar problems, and the gesture image can make up for the low positional accuracy of the gesture touch information, thereby realizing comprehensive and accurate recognition of the user's micro gestures.
In an embodiment, the second sensor may be arranged at a position corresponding to the steering wheel, for example at the steering column shroud, the dashboard, or the A-pillar, to capture images of the steering wheel region. In this way, only gesture images of the first gesture to be recognized, in which the user's hand interacts with the steering wheel, are acquired, reducing the probability of detection anomalies.
As an example, the step A10 includes: after the vehicle is powered on and running, gesture images of the first gesture to be recognized, in which the user's hand interacts with the steering wheel in the steering wheel region, may be collected by the second sensor either continuously or after a preset gesture control condition is met. The gesture image may be a picture or a video, which may be determined according to actual needs; this embodiment does not limit it.
Step a20, detecting whether the first gesture to be recognized is matched with a preset confirmation gesture or a preset cancellation gesture according to the first gesture touch information and the gesture image under the condition that the gesture image is matched with the first gesture touch information.
As an example, the step a20 includes: and carrying out information matching on the gesture image and the first gesture touch information, analyzing the gesture image and the first gesture touch information under the condition that the gesture image is determined to be matched with the first gesture touch information, and detecting whether the first gesture to be recognized is matched with a preset confirmation gesture or a preset cancellation gesture or not based on the gesture image and the first gesture touch information.
In an implementation manner, the method for performing information matching between the gesture image and the first gesture touch information may be: extracting first gesture features from the gesture image through a preset first feature extractor, extracting second gesture features from the first gesture touch information through a preset second feature extractor, and judging whether the first gesture features are matched with the second gesture features.
In another implementation manner, the information matching between the gesture image and the first gesture touch information may also be performed as follows: performing gesture recognition on the gesture image through a preset first gesture recognition model to obtain a first recognized gesture, performing gesture recognition on the first gesture touch information through a preset second gesture recognition model to obtain a second recognized gesture, and judging whether the first recognized gesture matches the second recognized gesture.
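The second matching approach above can be sketched as a cross-modal consistency check. This is an illustrative sketch under assumed interfaces: `image_model` and `touch_model` stand in for the two preset gesture recognition models, which the patent does not specify.

```python
# Illustrative sketch: recognize the gesture independently from the camera
# image and from the touch information, and only proceed when the two
# modalities agree. Both recognizer callables are assumptions.
def fused_recognition(image, touch_info, image_model, touch_model):
    """Return the agreed gesture label, or None if the modalities conflict."""
    gesture_from_image = image_model(image)
    gesture_from_touch = touch_model(touch_info)
    if gesture_from_image == gesture_from_touch:
        return gesture_from_image  # mutually verified result
    # Mismatch: treat as unrecognized rather than guessing, which trades a
    # little recall for a lower false-detection probability.
    return None
```

This reflects the mutual-verification idea: each sensor's weakness (occlusion for the camera, positional ambiguity for the touch sensor) is checked against the other before any instruction is generated.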
Optionally, the step of detecting whether the first gesture to be recognized matches a preset confirmation gesture or a preset cancellation gesture according to the first gesture touch information and the gesture image includes:
step C21, determining a target gesture from a preset gesture library based on the first gesture touch information and the gesture image through a preset gesture recognition algorithm;
Step C22, judging whether the target gesture is a confirmation gesture or a cancel gesture.
As an example, the steps C21-C22 include: and carrying out gesture recognition operation based on the first gesture touch information and the gesture image through a preset gesture recognition algorithm, calculating the probability that a first gesture to be recognized, which is interacted with the steering wheel by the hand of a user, is each gesture in a preset gesture library, determining a target gesture according to the probability, and judging whether the target gesture is a confirmation gesture or a cancellation gesture.
In one implementation manner, the gesture recognition algorithm includes a first feature extraction algorithm, a second feature extraction algorithm, and a gesture classification algorithm, where each feature extraction algorithm may be chosen according to the type of data, the features actually required, and so on; this embodiment does not limit it. The step of determining, by the preset gesture recognition algorithm, the target gesture from the preset gesture library based on the first gesture touch information and the gesture image may include: extracting first gesture features from the first gesture touch information through the first feature extraction algorithm, extracting second gesture features from the gesture image through the second feature extraction algorithm, and splicing the first gesture features and the second gesture features into a gesture feature vector or gesture feature matrix; then calculating, through the preset gesture classification algorithm and based on the gesture feature vector or matrix, the probability that the first gesture to be recognized, in which the user's hand currently interacts with the steering wheel, is each gesture in the preset gesture library, and outputting the target gesture as the classification result according to these probabilities. For example, the gesture with the highest probability may be determined to be the target gesture, or the gesture with the highest probability among those whose probability exceeds a preset probability threshold may be determined to be the target gesture, which may be decided according to the actual situation.
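The fusion classifier described above can be sketched as follows. This is an illustrative sketch, not the patent's algorithm: the distance-plus-softmax scoring, the flat reference-vector gesture library, and the default threshold are all assumptions standing in for the unspecified classification algorithm.

```python
# Illustrative sketch: splice a feature vector from each modality, score
# every gesture in the preset library, and accept the top gesture only if
# its probability clears a threshold. All concrete choices (distance-based
# scoring, softmax, threshold value) are assumptions for illustration.
import math

def classify_fused(touch_feat, image_feat, gesture_library, threshold=0.5):
    """gesture_library: dict mapping gesture name -> reference feature vector."""
    fused = list(touch_feat) + list(image_feat)  # splice the two features
    # Score each preset gesture by negative Euclidean distance, then apply
    # softmax to obtain a probability per gesture in the library.
    scores = {name: -math.dist(fused, ref)
              for name, ref in gesture_library.items()}
    z = sum(math.exp(s) for s in scores.values())
    probs = {name: math.exp(s) / z for name, s in scores.items()}
    best = max(probs, key=probs.get)
    # Highest-probability gesture wins only if it clears the threshold,
    # implementing the second selection rule described above.
    return best if probs[best] >= threshold else None
```

Reference vectors in the library must have the same length as the spliced features; in practice a trained classifier would replace the distance scoring, but the accept-above-threshold structure stays the same.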
In this embodiment, the gesture image collected by the second sensor enables mutual verification and information complementation with the gesture touch information collected by the first sensor, which reduces the probability of misrecognition, improves the comprehensiveness of the collected gesture information, and improves the accuracy of gesture recognition. The target interaction control instruction meeting the user's actual needs can therefore be determined more accurately, effectively improving the convenience of user operation and driving safety, and enhancing the user experience.
Example III
Further, referring to fig. 4, an embodiment of the present application further provides a vehicle interaction device, which is characterized in that a first sensor is disposed on a steering wheel of the vehicle, and the vehicle interaction device includes:
the acquisition module 10 is configured to acquire, through the first sensor, first gesture touch information of a first gesture to be identified, which is interacted with the steering wheel by a user;
the first instruction generating module 20 is configured to generate a confirmation instruction when it is determined that the first gesture to be recognized matches a preset confirmation gesture based on the first gesture touch information;
the second instruction generating module 30 is configured to generate a cancellation instruction when it is determined that the first gesture to be recognized matches a preset cancellation gesture based on the first gesture touch information.
Optionally, the vehicle further comprises a second sensor, and the vehicle interaction device further comprises a gesture image recognition module for:
acquiring a gesture image through the second sensor;
and detecting whether the first gesture to be recognized is matched with a preset confirmation gesture or a preset cancellation gesture according to the first gesture touch information and the gesture image under the condition that the gesture image is matched with the first gesture touch information.
Optionally, the gesture image recognition module is further configured to:
determining a target gesture from a preset gesture library based on the first gesture touch information and the gesture image through a preset gesture recognition algorithm;
and judging whether the target gesture is a confirmation gesture or a cancel gesture.
Optionally, the confirmation gesture includes a gesture in which the index finger and the thumb hold the steering wheel while the middle finger, the ring finger, and the little finger strike the steering wheel;
and/or, the cancel gesture includes a gesture in which the middle finger, ring finger and little finger hold the steering wheel while the index finger and thumb strike the steering wheel.
Optionally, the acquisition module 10 is further configured to:
after receiving the instruction to be confirmed, outputting vibration prompt information through the steering wheel;
And acquiring first gesture touch information of a first gesture to be identified, which is interacted with the steering wheel by a user, through the first sensor.
Optionally, the first instruction generating module 20 is further configured to:
judging whether the first gesture to be recognized comprises a plurality of sub-gestures to be recognized, wherein the interval time of the sub-gestures is smaller than a preset first time threshold value, based on the first gesture touch information;
if yes, judging whether the number of sub-gestures to be recognized matched with a preset confirmation gesture exceeds a preset first number threshold or not based on the first gesture touch information;
if yes, generating a confirmation instruction.
Optionally, the second instruction generating module 30 is further configured to:
judging whether the first gesture to be recognized comprises a plurality of sub-gestures to be recognized, wherein the interval time of the sub-gestures is smaller than a preset second time threshold value, based on the first gesture touch information;
if yes, judging whether the number of sub-gestures to be recognized, which are matched with a preset canceling gesture, exceeds a preset second number threshold value based on the first gesture touch information;
if yes, generating a cancel instruction.
By adopting the vehicle interaction method of the above embodiment, the vehicle interaction device provided by the present invention solves the technical problem of low safety of vehicle interaction during driving in the related art. The beneficial effects of the vehicle interaction device provided by this embodiment of the present invention are the same as those of the vehicle interaction method provided by the above embodiment, and its other technical features are the same as those disclosed in the method of the above embodiment, so they are not repeated here.
Example IV
Further, an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle interaction method of the above embodiments.
Referring now to fig. 6, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as bluetooth headsets, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage means into a random access memory (RAM). The RAM also stores various programs and data required for the operation of the electronic device. The processing means, the ROM, and the RAM are connected to one another via a bus, and an input/output (I/O) interface is also connected to the bus.
In general, the following systems may be connected to the I/O interface: input devices including, for example, touch screens, touch pads, keyboards, mice, image sensors, microphones, accelerometers, gyroscopes, etc.; output devices including, for example, liquid crystal displays (LCDs), speakers, vibrators, etc.; storage devices including, for example, magnetic tape, hard disks, etc.; and a communication device. The communication device may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While electronic devices having various systems are shown in the figures, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device, or installed from a storage device, or installed from ROM. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by a processing device.
The electronic equipment provided by the invention solves the technical problem of lower safety of vehicle interaction in the driving process in the related technology by adopting the vehicle interaction method in the embodiment. Compared with the prior art, the beneficial effects of the electronic device provided by the embodiment of the invention are the same as those of the vehicle interaction method provided by the embodiment, and other technical features of the electronic device are the same as those disclosed by the method of the embodiment, so that details are not repeated.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Example five
Further, the present embodiment provides a computer-readable storage medium having computer-readable program instructions stored thereon for performing the vehicle interaction method in the above-described embodiments.
The computer readable storage medium according to the embodiments of the present invention may be, for example, but is not limited to, a USB flash drive, or an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The above-described computer-readable storage medium may be contained in an electronic device; or may exist alone without being assembled into an electronic device.
The computer-readable storage medium carries one or more programs that, when executed by an electronic device, cause the electronic device to: acquiring first gesture touch information of a first gesture to be identified interacted between a user and a steering wheel through the first sensor; generating a confirmation instruction under the condition that the first gesture to be recognized is determined to be matched with a preset confirmation gesture based on the first gesture touch information; and generating a cancellation instruction under the condition that the first gesture to be recognized is determined to be matched with a preset cancellation gesture based on the first gesture touch information.
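The stored program's three steps (acquire touch information, match against the preset confirmation gesture, match against the preset cancellation gesture) can be sketched as follows. This is an illustrative sketch only: the gesture labels, the `classify_touch` helper, and the dictionary layout of the touch information are assumptions made for demonstration, not the patent's actual implementation.

```python
# Hypothetical sketch of the stored-program logic described above.
# Gesture labels mirror the finger configurations of claim 4; the
# classification step itself is a placeholder.

CONFIRM_GESTURE = "hold_index_thumb_tap_others"   # preset confirmation gesture
CANCEL_GESTURE = "hold_others_tap_index_thumb"    # preset cancellation gesture

def classify_touch(touch_info: dict) -> str:
    """Map raw first-sensor touch information to a gesture label (placeholder)."""
    return touch_info.get("gesture", "unknown")

def handle_gesture(touch_info: dict):
    """Generate a confirmation or cancellation instruction, or None."""
    gesture = classify_touch(touch_info)
    if gesture == CONFIRM_GESTURE:
        return "CONFIRM"
    if gesture == CANCEL_GESTURE:
        return "CANCEL"
    return None  # unrecognized gestures produce no instruction
```

In this sketch, any gesture that matches neither preset produces no instruction, which mirrors the method's behavior of acting only on the two preset gestures.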
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, in some cases, constitute a limitation of the module itself.
The computer readable storage medium provided by the invention stores computer readable program instructions for executing the above vehicle interaction method, and solves the technical problem in the related art that vehicle interaction during driving is relatively unsafe. Compared with the prior art, the beneficial effects of the computer readable storage medium provided by the embodiment of the invention are the same as those of the vehicle interaction method provided by the above embodiment, and are not described in detail here.
Example six
Further, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the vehicle interaction method as described above.
The computer program product provided by the application solves the technical problem in the related art that vehicle interaction during driving is relatively unsafe. Compared with the prior art, the beneficial effects of the computer program product provided by the embodiment of the invention are the same as those of the vehicle interaction method provided by the above embodiment, and are not described here.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims.

Claims (10)

1. A vehicle interaction method, characterized in that a first sensor is provided on a steering wheel of the vehicle, the vehicle interaction method comprising the steps of:
acquiring first gesture touch information of a first gesture to be identified interacted between a user and a steering wheel through the first sensor;
generating a confirmation instruction under the condition that the first gesture to be recognized is determined to be matched with a preset confirmation gesture based on the first gesture touch information;
and generating a cancellation instruction under the condition that the first gesture to be recognized is determined to be matched with a preset cancellation gesture based on the first gesture touch information.
2. The vehicle interaction method of claim 1, wherein the vehicle further comprises a second sensor, the vehicle interaction method further comprising:
acquiring a gesture image through the second sensor;
and detecting whether the first gesture to be recognized is matched with a preset confirmation gesture or a preset cancellation gesture according to the first gesture touch information and the gesture image under the condition that the gesture image is matched with the first gesture touch information.
3. The vehicle interaction method according to claim 2, wherein the step of detecting whether the first gesture to be recognized matches a preset confirm gesture or a preset cancel gesture based on the first gesture touch information and the gesture image includes:
determining a target gesture from a preset gesture library based on the first gesture touch information and the gesture image through a preset gesture recognition algorithm;
and judging whether the target gesture is a confirmation gesture or a cancel gesture.
4. The vehicle interaction method of claim 1, wherein the confirmation gesture comprises a gesture in which the index finger and thumb hold the steering wheel while the middle finger, ring finger, and little finger strike the steering wheel;
and/or the cancel gesture comprises a gesture in which the middle finger, ring finger, and little finger hold the steering wheel while the index finger and thumb strike the steering wheel.
5. The vehicle interaction method according to claim 1, wherein the step of acquiring, through the first sensor, first gesture touch information of a first gesture to be recognized interacted between the user and the steering wheel comprises:
after receiving the instruction to be confirmed, outputting vibration prompt information through the steering wheel;
and acquiring first gesture touch information of a first gesture to be identified, which is interacted with the steering wheel by a user, through the first sensor.
6. The vehicle interaction method according to claim 1, wherein the step of generating a confirmation instruction in a case where it is determined that the first gesture to be recognized matches a preset confirmation gesture based on the first gesture touch information includes:
judging, based on the first gesture touch information, whether the first gesture to be recognized comprises a plurality of sub-gestures to be recognized whose interval time is smaller than a preset first time threshold;
if yes, judging whether the number of sub-gestures to be recognized matched with a preset confirmation gesture exceeds a preset first number threshold or not based on the first gesture touch information;
if yes, generating a confirmation instruction.
7. The vehicle interaction method according to claim 1, wherein the step of generating a cancel instruction in a case where it is determined that the first gesture to be recognized matches a preset cancel gesture based on the first gesture touch information includes:
judging, based on the first gesture touch information, whether the first gesture to be recognized comprises a plurality of sub-gestures to be recognized whose interval time is smaller than a preset second time threshold;
if yes, judging whether the number of sub-gestures to be recognized, which are matched with a preset canceling gesture, exceeds a preset second number threshold value based on the first gesture touch information;
if yes, generating a cancel instruction.
8. A vehicle interaction device, characterized in that a first sensor is provided on a steering wheel of the vehicle, the vehicle interaction device comprising:
the acquisition module is used for acquiring first gesture touch information of a first gesture to be identified, which is interacted with the steering wheel by a user, through the first sensor;
the first instruction generation module is used for generating a confirmation instruction under the condition that the first gesture to be recognized is determined to be matched with a preset confirmation gesture based on the first gesture touch information;
and the second instruction generating module is used for generating a cancel instruction under the condition that the first gesture to be recognized is determined to be matched with a preset cancel gesture based on the first gesture touch information.
9. An electronic device, the electronic device comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the vehicle interaction method of any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium is a computer-readable storage medium having stored thereon a program for realizing a vehicle interaction method, the program for realizing the vehicle interaction method being executed by a processor to realize the steps of the vehicle interaction method according to any one of claims 1 to 7.
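The sub-gesture counting of claims 6 and 7 (a burst of sub-gestures whose intervals fall below a preset time threshold, with the number of matches exceeding a preset count threshold) can be sketched as below. The data layout (timestamped gesture labels), function name, and threshold values are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of the sub-gesture counting in claims 6 and 7.
# sub_gestures: list of (timestamp_seconds, gesture_label) tuples, in order.

def generate_instruction(sub_gestures, preset_gesture,
                         time_threshold, count_threshold):
    """Return True when enough closely spaced sub-gestures match the preset gesture."""
    if len(sub_gestures) < 2:
        return False  # a single touch is not a plurality of sub-gestures
    # Step 1: every interval between consecutive sub-gestures must be
    # smaller than the preset time threshold.
    times = [t for t, _ in sub_gestures]
    if any(b - a >= time_threshold for a, b in zip(times, times[1:])):
        return False
    # Step 2: the number of sub-gestures matching the preset gesture
    # must exceed the preset count threshold.
    matched = sum(1 for _, g in sub_gestures if g == preset_gesture)
    return matched > count_threshold
```

For example, three matching taps 0.3 s apart would generate an instruction when the time threshold is 0.6 s and the count threshold is 2, while the same taps would not if the time threshold were tightened to 0.2 s. The same routine covers both the confirmation case (claim 6) and the cancellation case (claim 7) by passing the corresponding preset gesture and thresholds.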
CN202311676103.5A 2023-12-07 2023-12-07 Vehicle interaction method and device, electronic equipment and storage medium Pending CN117666794A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311676103.5A CN117666794A (en) 2023-12-07 2023-12-07 Vehicle interaction method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117666794A 2024-03-08

Family

ID=90074805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311676103.5A Pending CN117666794A (en) 2023-12-07 2023-12-07 Vehicle interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117666794A (en)

Similar Documents

Publication Publication Date Title
US8914163B2 (en) System and method for incorporating gesture and voice recognition into a single system
EP3502862A1 (en) Method for presenting content based on checking of passenger equipment and distraction
WO2023273064A1 (en) Object speaking detection method and apparatus, electronic device, and storage medium
KR20150099259A (en) Electronic device and method for recognizing biometrics information
US20220244789A1 (en) Method for operating a mobile terminal using a gesture recognition and control device, gesture recognition and control device, motor vehicle, and an output apparatus that can be worn on the head
CN106105247B (en) Display device and control method thereof
WO2013090868A1 (en) Interacting with a mobile device within a vehicle using gestures
CN104428729A (en) Enabling and disabling features of a headset computer based on real-time image analysis
WO2014022251A1 (en) Interaction with devices based on user state
CN107430856B (en) Information processing system and information processing method
EP3098692A1 (en) Gesture device, operation method for same, and vehicle comprising same
WO2014196208A1 (en) Gesture input device for car navigation device
US20190056813A1 (en) Display linking system
US20190129517A1 (en) Remote control by way of sequences of keyboard codes
KR20160133305A (en) Gesture recognition method, a computing device and a control device
WO2023273063A1 (en) Passenger speaking detection method and apparatus, and electronic device and storage medium
KR20150137827A (en) Vehicle control apparatus using steering wheel and method for controlling the same
CN108469875B (en) Control method of functional component and mobile terminal
Chen et al. Eliminating driving distractions: Human-computer interaction with built-in applications
KR101709129B1 (en) Apparatus and method for multi-modal vehicle control
CN117666794A (en) Vehicle interaction method and device, electronic equipment and storage medium
CN117666793A (en) Dominant hand selection method and device, electronic equipment and storage medium
CN117666796A (en) Gesture control method and system for navigation, electronic equipment and storage medium
CN117666791A (en) Gesture control dual authentication method and device, electronic equipment and storage medium
CN117762315A (en) Navigation route passing point adding method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination