CN113076836B - Automobile gesture interaction method - Google Patents


Info

Publication number: CN113076836B
Application number: CN202110320822.8A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN113076836A (en)
Prior art keywords: gesture, area, recognition, driver, confidence
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Inventors: 黄浩伟, 张海培, 肖晨, 董士琦, 李光
Current Assignee: Dongfeng Motor Corp
Original Assignee: Dongfeng Motor Corp
Application filed by Dongfeng Motor Corp; priority to CN202110320822.8A
Publication of CN113076836A; application granted; publication of CN113076836B
Anticipated expiration: legal-status critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/113: Recognition of static hand signs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/117: Biometrics derived from hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of automobile control methods, and in particular to an automobile gesture interaction method. A control system segments the image area used by the camera for gesture recognition into a recognition area and a non-recognition area. Gestures appearing in the recognition area are recognized, and the confidence of the gesture image appearing in the gesture detection box is obtained. The current vehicle speed is acquired to obtain the confidence threshold for gesture recognition at that speed; the threshold is a preset value that decreases as the vehicle speed increases. The confidence is compared with the threshold: if the confidence is less than or equal to the threshold, the gesture corresponding to the gesture image is judged to be invalid; if the confidence is greater than the threshold, the gesture is judged to be valid, and the control system recognizes the valid gesture to determine the driver's intention. The recognition method is extremely simple, takes different recognition measures for different scenarios, and offers better adaptability and accuracy.

Description

Automobile gesture interaction method
Technical Field
The invention relates to the technical field of automobile control methods, in particular to an automobile gesture interaction method.
Background
With the development of automotive electronics, more and more electronic devices are integrated into the passenger compartment of the automobile, and all of them need to be controlled by the driver. Traditional mechanical switches can no longer meet this need, so new interaction technologies such as touch screens, voice interaction and gesture interaction have appeared. Gesture interaction technology captures the driver's hand motions with various sensors; the motions are recognized by an algorithm and drive function applications.
General gesture interaction technology requires the recognition range to be as large as possible, but this is not fully applicable to automobile gesture interaction. First, the space inside the vehicle is limited and filled with parts; if other objects fall within the recognition range, the recognition result is affected. Second, the driver's hand activity space is limited during normal driving, so the comfortable range of hand movement is the main consideration. Furthermore, recognition requirements differ under different driving conditions; pursuing a large gesture recognition range causes unnecessary false recognition and, in severe cases, interferes with driving. Meanwhile, deep learning algorithms require a large amount of mathematical computation; the computing demand on the controller is high and directly related to the recognition range. The gesture recognition range therefore needs comprehensive consideration. The existing image recognition field usually uses a fixed recognition range, i.e., a recognition frame: inside the frame is the recognizable area and outside the frame is the unrecognizable area. Considering the in-vehicle environment and the different recognition requirements under different driving conditions, this fixed recognition mode is not actually suitable for in-vehicle gesture recognition.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an automobile gesture interaction method.
The technical scheme of the invention is as follows: an automobile gesture interaction method, characterized in that: a control system segments the image area used by the camera for gesture recognition into a recognition area and a non-recognition area; gestures appearing in the recognition area are recognized, and the confidence of the gesture image appearing in the gesture detection box is obtained; the current vehicle speed is acquired to obtain the confidence threshold of gesture recognition at that speed, the confidence threshold being a preset value that decreases as the vehicle speed increases;
comparing the confidence with a confidence threshold, and if the confidence is less than or equal to the confidence threshold, judging the gesture corresponding to the gesture image as an invalid gesture; if the confidence coefficient is larger than the confidence threshold value, the gesture corresponding to the gesture image is judged to be an effective gesture, and the control system identifies the effective gesture to judge the intention of the driver.
Further, the method for obtaining the confidence of the gesture image appearing in the gesture detection box is as follows: first obtain an initial confidence of the gesture detection box through a target detection algorithm, then calculate the distance between the position of the gesture detection box and the midpoint of the recognition area, and adjust the initial confidence according to the rule that the smaller the distance, the larger the confidence, and the larger the distance, the smaller the confidence, to obtain the confidence of the gesture image in the gesture detection box at the current position.
The method for acquiring the current vehicle speed to obtain the confidence threshold value of the gesture recognition under the current vehicle speed further comprises the following steps: if the collected current vehicle speed is less than or equal to a first set value, the confidence threshold value of the gesture recognition at the current vehicle speed is a first threshold value;
if the first set value is smaller than the current vehicle speed and smaller than the second set value, the confidence threshold value of the gesture recognition at the current vehicle speed is the difference value of the first threshold value and the product of the adjustment coefficient obtained by calibration and the current vehicle speed;
if the collected current vehicle speed is larger than or equal to a second set value, the confidence threshold value of the gesture recognition at the current vehicle speed is a second threshold value;
the first threshold > difference > second threshold; the first set value is less than the second set value.
The method for dividing the image area into the identification area and the non-identification area further comprises the following steps: dividing a region which is comfortable in hand activity of a driver and difficult to cause false recognition in an image region acquired by a camera into recognition regions; and dividing an area containing parts capable of shielding gestures of the driver and an area containing parts which are required to be operated and controlled by hands of the driver in an image area acquired by the camera into an unrecognized area.
Further, 3D modeling is performed with image software: part images in the cockpit and the camera's field-of-view range are drawn, a plane is made at the camera's maximum recognition distance, all images are projected onto this plane, and a frame is drawn in the plane according to the camera's image aspect ratio. Objects inside the frame are the objects the camera can see, and the recognition area and non-recognition area are determined within this frame.
Further, the method for dividing the area, which contains the parts capable of blocking the gestures of the driver, in the image area acquired by the camera into the unrecognized area includes: drawing a rectangular frame containing parts and parts which are easy to shelter from the gestures of the driver in the vehicle in a plane, wherein the interval between the areas in the rectangular frame is the range of the area containing the parts and parts which can shelter from the gestures of the driver.
The method for dividing the area containing the parts which are required to be manually operated by the driver in the image area acquired by the camera into the unrecognized area comprises the following steps: the dummy is used for simulating a driver, the hand joints of the dummy are adjusted under the condition that a safety belt is fastened to simulate the action of the driver for controlling the parts, the hand activity area when the driver controls the parts is obtained, and the activity area is projected into a plane, so that the range of the area containing the parts which are required to be controlled by the driver with hands can be obtained.
The range determination method of the identification area further comprises the following steps: the method comprises the steps of simulating a driver by using a dummy, adjusting the hand joint angle of the dummy under the condition that a safety belt is fastened, determining a comfortable area for the hand movement of the driver, projecting the area into a plane, and removing the part including the unidentified area to obtain the range of the identified area.
The method for calculating the distance between the gesture image appearance position in the gesture detection frame and the middle point of the recognition area further comprises the following steps: and obtaining the center coordinate of the gesture detection frame by using a target detection algorithm, and calculating the distance between the center coordinate and the middle point of the identification area to obtain the distance between the gesture image appearance position in the gesture detection frame and the middle point of the identification area.
Further, the parts capable of blocking the gestures of the driver comprise a steering wheel, a light adjusting deflector rod and a wiper adjusting deflector rod; the parts which need to be operated and controlled by hands of the driver comprise a gear shift lever, a central control screen and physical keys in the cab.
The gesture recognition method adapts to the driving conditions of the automobile: different confidence thresholds are used at different vehicle speeds, since the confidence of gesture recognition differs completely under different speed conditions. This effectively solves the problems of a high false recognition rate during low-speed driving and insufficient recognition accuracy during high-speed driving, improves driving safety and recognition effectiveness, and has great popularization value.
In this method, the area within the camera's field of view is divided into a non-recognition area and a recognition area; gestures appearing in the non-recognition area are not processed, and only gestures appearing in the recognition area are processed. This improves the correct recognition rate of gesture interaction, reduces the false recognition rate, and lowers the computing power requirement, making the method easy to deploy in an automobile and port to different vehicle models; it has great popularization value.
The recognition method is extremely simple, takes different recognition measures for different scenarios, offers better adaptability and accuracy, improves the safety of the driver, and has great popularization value.
Drawings
FIG. 1: schematic diagram of the distribution of the non-recognition areas and the recognition area according to the invention.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
The invention is described in further detail below with reference to the figures and the specific embodiments.
This embodiment introduces an automobile gesture interaction method that divides the area within the field of view of the in-vehicle gesture-interaction camera into a non-recognition area prone to misoperation and a recognition area where the driver's hand movement is comfortable and false recognition is unlikely. The control system recognizes gestures only in the recognition area, which greatly reduces the false recognition rate, lowers the computing power requirement, and improves recognition accuracy.
This embodiment uses an infrared camera containing two LED lamps that emit near-infrared light. The emitted infrared light strikes an object, is reflected, and returns to the camera lens; a CMOS chip in the camera converts the optical signal into an electrical signal, and an ISP chip processes the image signal. The image is then transmitted to the controller via LVDS. The controller first segments the image, then runs algorithmic recognition on the segmented image and resolves a given gesture into an ID. On receiving the gesture ID, the controller executes the corresponding action, such as answering/hanging up a call or turning music on/off, while the screen shows a corresponding UI prompt.
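The controller's final step, mapping a recognized gesture ID to an action, can be sketched as follows. The IDs, action names and the `dispatch` helper are illustrative assumptions; the patent only names the kinds of actions (answer/hang up a call, music on/off) without specifying IDs.

```python
# Hypothetical sketch of the gesture-ID dispatch step. All IDs and
# action names are assumed for illustration.

GESTURE_ACTIONS = {
    1: "answer_call",
    2: "hang_up_call",
    3: "music_on",
    4: "music_off",
}

def dispatch(gesture_id):
    """Return the action name for a gesture ID, or None if the ID is unknown."""
    return GESTURE_ACTIONS.get(gesture_id)
```

In a real controller the returned action would also trigger the corresponding UI prompt on the screen, as described above.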
This embodiment is implemented with CATIA software: the automobile 3D digital model is first imported into CATIA, the FOV range of the infrared camera is drawn, a plane is made at the maximum recognition distance of the infrared camera, all objects within the camera's FOV are projected onto this plane, and a frame is drawn in the plane according to the camera's image aspect ratio (e.g., 16:9). Objects inside the frame are the objects the infrared camera can see.
Then the ranges of the non-recognition area and the recognition area are determined. The non-recognition area of this embodiment includes the following two types:
1. Areas containing vehicle interior parts: when the infrared camera is arranged in the vehicle, occlusions and other objects within the FOV are avoided as much as possible, but automobile parts in certain areas (such as the steering wheel, the light control stalk and the wiper control stalk) cannot be avoided; these areas are defined as the first area (i.e., area 1 in fig. 1).
2. Areas prone to false recognition: during normal driving, the driver needs to frequently operate various components of the automobile, such as the gear shift lever, the central control screen and various physical keys; the area where these components are located is defined as the second area (i.e., area 2 in fig. 1).
The recognition area of this embodiment is the comfort zone of the driver's hand movement. With the seat belt fastened, the driver's hand reach is limited, and the driver typically chooses the most comfortable motion for gesture interaction, so a gesture comfort zone can be determined. This is the third area (i.e., area 3 in fig. 1), and it is separate from the first and second areas.
The method for specifically determining the first region, the second region and the third region is as follows:
1. Draw a rectangular frame in the plane containing the vehicle interior parts that easily occlude the driver's gestures; the space inside the rectangular frame is the range of the first area.
2. Use a dummy to simulate the driver: with the seat belt fastened, adjust the dummy's hand joints to simulate the driver's actions when operating components (including the gear shift lever, the central control screen, physical keys in the cab, etc.), obtain the hand movement area during these operations, and project this area onto the plane to obtain the range of the second area.
3. Use a dummy to simulate the driver: with the seat belt fastened, adjust the dummy's hand joint angles to determine the comfortable area for the driver's hand movement, project this area onto the plane, and remove the parts covered by the first and second areas to obtain the range of the third area; the third area is the part of the image area, outside the first and second areas, where the driver's hand movement is comfortable.
In this embodiment, a 95th-percentile male dummy and a 5th-percentile female dummy are mainly used to simulate tall and short drivers respectively, so that the simulation is more accurate and covers the conditions of most drivers.
After the ranges of the three types of regions are determined in CATIA, they must be mapped into the camera image (the scale factor of this embodiment is 1280/750 ≈ 1.7 pixels/mm; multiplying each region by this factor gives the pixel range of that region in the image). Once the pixel ranges of the first, second and third regions are determined in the image, further processing is done in the algorithm. Handling the third region is simple: the camera image can be cropped directly when setting the input image size. For the first and second regions, a logic check must be added: a target detection algorithm obtains the center coordinates of the gesture detection box, and whether those coordinates fall inside the first or second region is checked; if the center lies in a non-recognition region, the gesture is not processed.
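The logic check just described (rejecting gestures whose detection-box center falls in the first or second region) can be sketched as follows. The rectangular regions and pixel coordinates are illustrative assumptions, since the real pixel ranges come from the CATIA projection and scale factor.

```python
# Minimal sketch of the non-recognition-region check, assuming the two
# regions project to axis-aligned pixel rectangles. All coordinates here
# are made up for illustration.

SCALE = 1280 / 750  # scale factor (pixels per mm) stated in the text

# Hypothetical regions as (x_min, y_min, x_max, y_max) in image pixels
REGION_1 = (0, 0, 400, 300)       # parts that can occlude gestures
REGION_2 = (900, 400, 1280, 720)  # parts the driver operates by hand

def in_rect(x, y, rect):
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def should_process(center_x, center_y):
    """Process a gesture only if its detection-box center lies outside
    both non-recognition regions."""
    return not (in_rect(center_x, center_y, REGION_1)
                or in_rect(center_x, center_y, REGION_2))
```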
After this processing, gesture recognition computation is performed only when a gesture appears in the third area, which effectively reduces false recognition and lowers the computing power requirement. Note that the sizes, shapes and positions of the first, second and third areas in fig. 1 are merely examples and must be determined according to the specific in-vehicle environment in practical application.
In the gesture recognition method of this embodiment, recognition proceeds only after the gesture detection box detects a gesture in the third area, but whether that gesture is judged valid is not yet certain. The driver's gesture accuracy requirements differ at different vehicle speeds: during low-speed driving the driver may produce many invalid gestures, while during high-speed driving invalid gestures are few. To improve recognition accuracy, reduce the false recognition rate and improve driving safety, this embodiment therefore performs confidence evaluation on the gesture image in the gesture detection box.
First, the initial confidence of the gesture detection box is obtained by a target detection algorithm. Then the distance between the position where the gesture image appears and the midpoint of the recognition area is calculated (the midpoint of the recognition area need not be the reference point; the driver may also serve as the reference point, and the function of this embodiment can still be realized): the target detection algorithm gives the center coordinates of the gesture detection box, and the distance between these coordinates and the midpoint of the recognition area is the distance to the midpoint of the third area. A confidence adjustment value for the gesture image is then obtained according to the rule that the smaller the distance, the larger the confidence, and the larger the distance, the smaller the confidence; the sum of the initial confidence and the adjustment value is the confidence of the gesture image at the current position. In practical application, a distance coefficient may be set, the product of the distance coefficient and the distance being the confidence adjustment value.
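The distance-based adjustment can be sketched as follows. The negative distance coefficient is an assumption consistent with the stated rule that confidence shrinks as the distance grows; its magnitude would be calibrated in practice.

```python
import math

# Sketch of the distance-based confidence adjustment. K is an assumed
# (negative) distance coefficient, so confidence = initial confidence
# plus an adjustment that decreases as the detection-box center moves
# away from the reference point (here, the recognition-area midpoint).

K = -0.001  # assumed distance coefficient per pixel (calibrated in practice)

def adjusted_confidence(initial_conf, box_center, region_midpoint):
    """Confidence = initial confidence + K * distance to the midpoint."""
    dx = box_center[0] - region_midpoint[0]
    dy = box_center[1] - region_midpoint[1]
    distance = math.hypot(dx, dy)
    return initial_conf + K * distance
```

A gesture near the midpoint keeps nearly its full detector confidence, while one far from it is penalized and is more likely to fall below the speed-dependent threshold.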
The current vehicle speed is then acquired to obtain the confidence threshold of gesture recognition at that speed. If the collected current vehicle speed is less than or equal to a first set value, the confidence threshold is a first threshold; the first set value of this embodiment is 60 km/h and the first threshold is p0.
If the current vehicle speed is greater than the first set value and less than the second set value, the confidence threshold is the difference between the first threshold and the product of the calibrated adjustment coefficient and the current vehicle speed; the second set value of this embodiment is 100 km/h, and the adjustment coefficient μ is obtained by calibration.
If the collected current vehicle speed is greater than or equal to the second set value, the confidence threshold is a second threshold, which in this embodiment is p1; first threshold > difference > second threshold.
The expression is specifically given by the following formula:

P = p0, if v ≤ 60 km/h
P = p0 - μ·v, if 60 km/h < v < 100 km/h
P = p1, if v ≥ 100 km/h

wherein: P is the confidence threshold of gesture recognition at the current vehicle speed; p0 is the first threshold; p1 is the second threshold; μ is the adjustment coefficient, obtained by calibration; v is the current vehicle speed.
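The piecewise threshold and the validity decision can be sketched as follows. p0, p1 and μ are calibration values, so the numbers below are illustrative assumptions, chosen so that first threshold > difference > second threshold holds over the 60-100 km/h band.

```python
# Sketch of the speed-dependent confidence threshold. P0, P1 and MU are
# assumed calibration values; V1 and V2 are the set values from the text.

P0 = 0.90          # first threshold (low speed)
P1 = 0.60          # second threshold (high speed)
MU = 0.003         # adjustment coefficient (calibrated in practice)
V1, V2 = 60, 100   # first and second set values, km/h

def confidence_threshold(v):
    """Confidence threshold P at vehicle speed v (km/h), per the formula above."""
    if v <= V1:
        return P0
    if v < V2:
        return P0 - MU * v
    return P1

def is_valid_gesture(confidence, v):
    """A gesture is valid only if its confidence exceeds the threshold."""
    return confidence > confidence_threshold(v)
```

With these assumed values the middle branch ranges from 0.72 just above 60 km/h down toward 0.60 near 100 km/h, so low speeds demand higher confidence (suppressing false recognition) and high speeds accept gestures more readily.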
Comparing the confidence with a confidence threshold, and if the confidence is less than or equal to the confidence threshold, judging the gesture corresponding to the gesture image as an invalid gesture; if the confidence coefficient is larger than the confidence threshold value, the gesture corresponding to the gesture image is judged to be an effective gesture, and the control system identifies the effective gesture to judge the intention of the driver.
This meets the driver's recognition accuracy requirements at different vehicle speeds: at low speed recognition is made stricter so that false recognition is avoided, while at high speed recognition is made more permissive, avoiding the safety hazard of the driver repeating actions because gestures are not recognized.
In practical application, the specific set values are not unique and can be adjusted according to actual requirements.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given by way of illustration of the principles of the present invention, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. A car gesture interaction method is characterized in that: the control system is used for carrying out segmentation processing on an image area subjected to gesture recognition by the camera, and the image area is divided into a recognition area and a non-recognition area; recognizing the gesture appearing in the recognition area, and acquiring the confidence of the gesture image appearing in the gesture detection box; acquiring a current vehicle speed to obtain a confidence threshold value of gesture recognition under the current vehicle speed, wherein the confidence threshold value is a set value with a smaller numerical value when the vehicle speed is larger;
comparing the confidence with a confidence threshold, and if the confidence is less than or equal to the confidence threshold, judging the gesture corresponding to the gesture image as an invalid gesture; if the confidence coefficient is larger than the confidence threshold value, the gesture corresponding to the gesture image is judged to be an effective gesture, and the control system identifies the effective gesture to judge the intention of the driver.
2. The automobile gesture interaction method of claim 1, characterized in that: the method for acquiring the confidence level of the gesture image appearing in the gesture detection box comprises the following steps: firstly, obtaining an initial confidence coefficient of a gesture detection box through a target detection algorithm, then calculating the distance between the position of the gesture detection box and a middle point of a recognition area, and adjusting the initial confidence coefficient of the gesture detection box according to a rule that the confidence coefficient is larger when the distance is smaller and the confidence coefficient is smaller when the distance is larger to obtain the confidence coefficient of a gesture image in the gesture detection box at the current position.
3. The automobile gesture interaction method of claim 1, characterized in that: the method for acquiring the current vehicle speed to obtain the confidence threshold for gesture recognition at the current vehicle speed comprises: if the acquired current vehicle speed is less than or equal to a first set value, the confidence threshold for gesture recognition at the current vehicle speed is a first threshold;
if the current vehicle speed is greater than the first set value and less than a second set value, the confidence threshold for gesture recognition at the current vehicle speed is the first threshold minus the product of an adjustment coefficient obtained by calibration and the current vehicle speed;
if the acquired current vehicle speed is greater than or equal to the second set value, the confidence threshold for gesture recognition at the current vehicle speed is a second threshold;
the first threshold > this difference > the second threshold; the first set value is less than the second set value.
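The piecewise threshold of claim 3 can be sketched as follows. All numeric defaults (set speeds, thresholds, adjustment coefficient) are illustrative placeholders, not calibration data from the patent; they are chosen so the middle branch stays between the two thresholds as the claim requires:

```python
def speed_confidence_threshold(v: float,
                               v1: float = 20.0,   # first set value (km/h)
                               v2: float = 100.0,  # second set value (km/h)
                               t1: float = 0.9,    # first threshold
                               t2: float = 0.5,    # second threshold
                               k: float = 0.004    # calibrated coefficient
                               ) -> float:
    """Claim 3's three-branch rule: constant high threshold at low
    speed, linearly decreasing threshold in between, constant low
    threshold at high speed."""
    if v <= v1:
        return t1
    elif v < v2:
        return t1 - k * v   # first threshold minus (coefficient x speed)
    else:
        return t2
```

The linear middle branch is what makes recognition more permissive as speed rises, reducing the chance a driver must repeat a gesture at highway speed.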
4. The automobile gesture interaction method of claim 1, characterized in that: the method for dividing the image area into the recognition area and the non-recognition area comprises: dividing the part of the image area captured by the camera in which the driver's hand moves comfortably and false recognition is unlikely into the recognition area; and dividing the parts of the image area captured by the camera that contain components able to occlude the driver's gestures, as well as components the driver must operate by hand, into the non-recognition area.
5. The automobile gesture interaction method of claim 4, characterized in that: image software is used for 3D modeling to draw the components in the cockpit and the camera's field-of-view range; a plane is constructed at the camera's maximum recognition distance, and all drawn objects are projected onto this plane; a frame is drawn in the plane according to the camera image proportions, the objects within the frame being those visible to the camera; and the recognition area and the non-recognition area are determined within this frame.
6. The automobile gesture interaction method of claim 5, characterized in that: the method for dividing the part of the image area captured by the camera that contains components able to occlude the driver's gestures into the non-recognition area comprises: drawing in the plane a rectangular frame containing the in-vehicle components liable to occlude the driver's gestures, the space within the rectangular frame being the range of the area containing components that can occlude the driver's gestures.
7. The automobile gesture interaction method of claim 5, characterized in that: the method for dividing the part of the image area captured by the camera that contains components the driver must operate by hand into the non-recognition area comprises: a dummy is used to simulate the driver; with the seat belt fastened, the dummy's hand joints are adjusted to simulate the driver's actions when operating the components, so as to obtain the hand movement area while the driver operates the components; this movement area is projected onto the plane to obtain the range of the area containing components the driver must operate by hand.
8. The automobile gesture interaction method of claim 5, characterized in that: the method for determining the range of the recognition area comprises: a dummy is used to simulate the driver; with the seat belt fastened, the dummy's hand joint angles are adjusted to determine the comfortable area for the driver's hand movement; this area is projected onto the plane, and the portion belonging to the non-recognition area is removed to obtain the range of the recognition area.
9. The automobile gesture interaction method of claim 2, characterized in that: the method for calculating the distance between the position at which the gesture image appears in the gesture detection box and the midpoint of the recognition area comprises: obtaining the center coordinates of the gesture detection box using the target detection algorithm, and calculating the distance between these center coordinates and the midpoint of the recognition area to obtain the distance between the gesture image's position in the gesture detection box and the midpoint of the recognition area.
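The distance computation of claim 9 and the distance-based adjustment of claim 2 can be sketched together. The linear attenuation and the `decay` rate are assumptions for illustration; the patent only specifies that confidence must decrease monotonically as the box center moves away from the recognition-area midpoint:

```python
import math

def box_center(x_min: float, y_min: float,
               x_max: float, y_max: float) -> tuple:
    """Center coordinates of a detection box, as a target
    detection algorithm would report them (claim 9)."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def adjusted_confidence(initial_conf: float,
                        center: tuple,
                        region_midpoint: tuple,
                        decay: float = 0.001) -> float:
    """Claim 2's rule: scale the detector's initial confidence down
    with distance from the recognition-area midpoint. `decay` is a
    hypothetical per-pixel attenuation rate, not from the patent."""
    d = math.dist(center, region_midpoint)  # Euclidean distance
    return initial_conf * max(0.0, 1.0 - decay * d)
```

A gesture made dead-center in the recognition area keeps its full detector confidence, while one near the edge is penalized and is therefore more likely to fall below the speed-dependent threshold.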
10. The automobile gesture interaction method of claim 4, characterized in that: the components able to occlude the driver's gestures include the steering wheel, the light control stalk, and the wiper control stalk; the components the driver must operate by hand include the gear shift lever, the central control screen, and the physical buttons in the cab.
CN202110320822.8A 2021-03-25 2021-03-25 Automobile gesture interaction method Active CN113076836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110320822.8A CN113076836B (en) 2021-03-25 2021-03-25 Automobile gesture interaction method

Publications (2)

Publication Number Publication Date
CN113076836A (en) 2021-07-06
CN113076836B (en) 2022-04-01

Family

ID=76611613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110320822.8A Active CN113076836B (en) 2021-03-25 2021-03-25 Automobile gesture interaction method

Country Status (1)

Country Link
CN (1) CN113076836B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202530B (en) * 2022-05-26 2024-04-09 当趣网络科技(杭州)有限公司 Gesture interaction method and system of user interface
CN117789297A (en) * 2023-12-26 2024-03-29 大明电子股份有限公司 Vehicle-mounted quick-charging gesture recognition processing method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455144B (en) * 2013-08-22 2017-04-12 深圳先进技术研究院 Vehicle-mounted man-machine interaction system and method
CN103761508A (en) * 2014-01-02 2014-04-30 大连理工大学 Biological recognition method and system combining face and gestures
KR101630153B1 (en) * 2014-12-10 2016-06-24 현대자동차주식회사 Gesture recognition apparatus, vehicle having of the same and method for controlling of vehicle
DE102014226546A1 (en) * 2014-12-19 2016-06-23 Robert Bosch Gmbh Method for operating an input device, input device, motor vehicle
KR101744809B1 (en) * 2015-10-15 2017-06-08 현대자동차 주식회사 Method and apparatus for recognizing touch drag gesture on curved screen
CN106502570B (en) * 2016-10-25 2020-07-31 科世达(上海)管理有限公司 Gesture recognition method and device and vehicle-mounted system
CN106933346B (en) * 2017-01-20 2019-07-26 深圳奥比中光科技有限公司 The zoning methods and equipment in car manipulation space
CN110458095B (en) * 2019-08-09 2022-11-18 厦门瑞为信息技术有限公司 Effective gesture recognition method, control method and device and electronic equipment
CN111931579B (en) * 2020-07-09 2023-10-31 上海交通大学 Automatic driving assistance system and method using eye tracking and gesture recognition techniques

Also Published As

Publication number Publication date
CN113076836A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN113076836B (en) Automobile gesture interaction method
CN107944333B (en) Automatic driving control apparatus, vehicle having the same, and control method thereof
EP3107790B1 (en) Autonomous driving system and method for same
US7437488B2 (en) Interface for car-mounted devices
US9275274B2 (en) System and method for identifying handwriting gestures in an in-vehicle information system
WO2012014735A1 (en) External environment recognition device for vehicle, and light distribution control system using same
US20120106792A1 (en) User interface apparatus and method using movement recognition
CN102859568A (en) Video based intelligent vehicle control system
US10351087B2 (en) Protection control apparatus
JP2009237776A (en) Vehicle drive supporting apparatus
US20140133705A1 (en) Red-eye determination device
JP4645433B2 (en) Graphic center detection method, ellipse detection method, image recognition device, control device
CN113591659A (en) Gesture control intention recognition method and system based on multi-modal input
CN114821810A (en) Static gesture intention recognition method and system based on dynamic feature assistance and vehicle
CN105759955B (en) Input device
CN111738235B (en) Action detection method and device for automatically opening vehicle door
JP2003076987A (en) Preceding vehicle recognizing device
JP2010061375A (en) Apparatus and program for recognizing object
WO2013175603A1 (en) Operation input device, operation input method and operation input program
CN116311136A (en) Lane line parameter calculation method for driving assistance
KR20150067679A (en) System and method for gesture recognition of vehicle
US11262849B2 (en) User interface, a means of transportation and a method for classifying a user gesture performed freely in space
Herrmann et al. Hand-movement-based in-vehicle driver/front-seat passenger discrimination for centre console controls
McAllister et al. Towards a non-contact driver-vehicle interface
JP2016157457A (en) Operation input device, operation input method and operation input program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant