WO2021217570A1 - Control method, device, and system based on air gestures - Google Patents
Control method, device, and system based on air gestures
- Publication number
- WO2021217570A1 (PCT application PCT/CN2020/088219)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gesture
- air
- distance
- adjustment
- camera
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
- G06F1/3231—Monitoring the presence, absence or movement of users
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3265—Power saving in display device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3287—Power saving characterised by the action undertaken by switching off individual functional units in the computer system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
Definitions
- This application relates to the field of automated driving technology, and in particular to a control method, device, and system based on air gestures.
- the correspondence between air gestures and functions is preset. The user then makes an air gesture within the sensing range of the terminal device; the terminal device recognizes the gesture, determines the corresponding function, and adjusts it. For example, if a fist is preset to correspond to taking a screenshot, the terminal device automatically takes a screenshot after recognizing a fist made in the air.
- the embodiments of this application provide a control method, device, and system based on air gestures, which continuously monitor the movement distance or holding time of an air gesture made by a user and continuously adjust the function corresponding to that gesture according to the movement distance or holding time, thereby realizing control of the terminal device.
- the embodiments of the present application provide a control method based on air gestures.
- the method can be executed by a terminal device or a chip in the terminal device. The method will be described below by taking the terminal device as an example.
- the method includes: after the user makes an air gesture within the shooting range of the camera, the camera captures the air gesture. After the terminal device recognizes the target function corresponding to the air gesture, it continuously adjusts that target function according to the first distance the gesture moves within the shooting range, so that the target function changes gradually.
- Continuously adjusting the target function lets the user determine in time whether the adjustment needs to be terminated, avoiding the repeated adjustments caused by being unable to adjust in place at one time, and the operation is simple; at the same time, it reduces the time the user looks at the screen when adjusting the target function, improving driving safety.
- the target function can be accurately adjusted through remote air gesture input, without the need to get up and adjust the target function by touching, etc., which greatly improves the convenience and user experience.
- the above-mentioned first distance can be determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance the air gesture moves on the imaging surface of the camera while the gesture moves within the shooting range.
- to determine the first distance, the terminal device also determines a first position and a second position of the air gesture on the imaging surface of the camera during its movement, and determines the second distance according to the number of pixels between the first position and the second position and the pixel size.
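As a concrete illustration of the relation just described, the sketch below applies the standard pinhole-camera similar-triangles model: the second distance is the pixel count times the pixel size, and the first distance scales it by the gesture's depth over the focal length. All numeric values are assumptions for illustration, not values from the patent.

```python
# Minimal sketch (assumed pinhole-camera model, illustrative values only).

def second_distance(pixel_count: int, pixel_size_m: float) -> float:
    """Distance the gesture moves on the imaging surface: pixels * pixel pitch."""
    return pixel_count * pixel_size_m

def first_distance(focal_length_m: float, gesture_depth_m: float,
                   second_distance_m: float) -> float:
    """Real-world distance moved by the gesture, by similar triangles."""
    return second_distance_m * gesture_depth_m / focal_length_m

# Example: 120 pixels of motion at a 3 um pixel pitch, f = 4 mm, hand 0.6 m away.
d_img = second_distance(120, 3e-6)        # 0.00036 m on the imaging surface
print(first_distance(4e-3, 0.6, d_img))   # ~0.054 m of actual hand movement
```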
- the aforementioned first distance is less than or equal to a preset distance, and the preset distance is positively correlated with the length of the user's arm.
- the terminal device recognizes the user's wrist bone point and elbow bone point, determines the user's arm length according to the three-dimensional coordinates of the wrist bone point and the elbow bone point, and determines the preset distance according to the arm length.
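A minimal sketch of this step, assuming the wrist and elbow bone points come from some 3D pose estimator: the arm length is the Euclidean distance between the two points, and the preset distance is taken as proportional to it. The 0.8 scale factor and the coordinates are illustrative assumptions, not values from the patent.

```python
import math

def arm_length(wrist_xyz, elbow_xyz) -> float:
    """Euclidean distance between the wrist and elbow bone points (metres)."""
    return math.dist(wrist_xyz, elbow_xyz)

def preset_distance(wrist_xyz, elbow_xyz, scale: float = 0.8) -> float:
    """Positively correlated with arm length; the 0.8 factor is assumed."""
    return scale * arm_length(wrist_xyz, elbow_xyz)

print(preset_distance((0.10, 0.25, 0.60), (0.05, 0.00, 0.62)))  # ~0.20 m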
- from the moment the air gesture is recognized, the terminal device continuously adjusts the target function corresponding to the gesture according to a unit-distance adjustment amount, until the gesture has moved the first distance.
- the unit-distance adjustment amount is the ratio of the total adjustment amount of the target function to the preset distance. This scheme achieves continuous adjustment of the target function.
- alternatively, the terminal device continuously adjusts the target function corresponding to the air gesture according to the ratio of the first distance to the preset distance. This scheme likewise achieves continuous adjustment of the target function.
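The two spatial schemes above can be sketched as follows; the 100-step total adjustment range and the 0.2 m preset distance are assumed here for illustration. Both formulations yield the same adjustment for a given moved distance.

```python
def adjust_by_unit_distance(total_adjustment: float, preset_dist: float,
                            moved: float) -> float:
    """Scheme 1: apply (total / preset distance) per metre, up to the preset."""
    unit = total_adjustment / preset_dist
    return unit * min(moved, preset_dist)

def adjust_by_ratio(total_adjustment: float, preset_dist: float,
                    first_dist: float) -> float:
    """Scheme 2: adjustment = total * (first distance / preset distance)."""
    return total_adjustment * (first_dist / preset_dist)

# A 100-step volume range with a 0.2 m preset distance; the gesture moves 0.05 m.
print(adjust_by_unit_distance(100, 0.2, 0.05))  # 25.0 steps
print(adjust_by_ratio(100, 0.2, 0.05))          # 25.0 steps
```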
- when determining the first distance, the terminal device inputs the focal length of the camera, the distance between the air gesture and the optical center of the camera, and the second distance into a pre-trained preset model, and the preset model is used to determine the first distance.
- In this way, the first distance that the user's air gesture moves within the shooting range is determined accurately.
- the terminal device also obtains a sample data set.
- the sample data set contains multiple sets of sample data.
- One set of sample data includes the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance; the multiple sets of sample data in the sample data set are used to train the preset model. Using this scheme, the preset model is trained.
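The patent does not fix a model family for the preset model; as one hedged possibility, the sketch below fits a least-squares regressor on synthetic samples of (focal length, gesture-to-optical-centre distance, second distance) against the first distance. A linear fit only roughly approximates the underlying nonlinear pinhole relation, so this is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(3e-3, 6e-3, 200)       # sample focal lengths (m)
depth = rng.uniform(0.3, 1.0, 200)     # sample gesture-to-optical-centre distances
d2 = rng.uniform(1e-4, 1e-3, 200)      # sample second distances (m)
d1 = d2 * depth / f                    # ground-truth first distances (pinhole model)

X = np.column_stack([f, depth, d2, np.ones_like(f)])
weights, *_ = np.linalg.lstsq(X, d1, rcond=None)   # train the "preset model"

def predict_first_distance(focal, dist, second):
    """Inference: feed the three inputs to the trained model."""
    return np.array([focal, dist, second, 1.0]) @ weights

print(predict_first_distance(4e-3, 0.6, 3.6e-4))
```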
- after the terminal device recognizes the air gesture, it can also continuously adjust the target function corresponding to the gesture according to the angle through which the gesture moves in space, and the adjustment amount of the continuous adjustment is positively correlated with the angle.
- the terminal device can thus continuously adjust the target function corresponding to the air gesture according to the angle of the gesture's movement, so that the target function changes gradually; the user can determine in time whether the adjustment needs to be terminated, repeated adjustments caused by being unable to adjust in place at one time are avoided, and the operation is simple.
- the function origin is displayed on the display; the function origin indicates that the terminal device has recognized the air gesture and has woken up the air-gesture function.
- This scheme serves to remind the user.
- the continuously adjustable functions of different applications correspond to the same air gesture.
- this avoids the problem of users being unable to distinguish between too many air gestures and reduces the user's learning cost.
- Similar functions use the same air gesture, which meets the logic of human-machine interaction.
- alternatively, the continuously adjustable functions of different applications correspond to different air gestures.
- the above-mentioned target function includes any one of the following functions: volume adjustment, audio and video progress adjustment, air-conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air-conditioning air volume adjustment, and ambient light brightness adjustment.
- when recognizing the air gesture made by the user within the shooting range of the camera, the terminal device uses the camera to continuously capture the shooting range and judges whether the most recently captured image frame contains an air gesture. If it does, the air gesture made by the user within the shooting range of the camera is recognized.
- In this way, the terminal device recognizes the air gesture.
- the embodiments of the present application provide a control method based on air gestures.
- the method can be applied to a terminal device or a chip in the terminal device.
- the method is described below by taking the application to the terminal device as an example.
- the method includes: after the user makes an air gesture within the shooting range of the camera, the camera collects the air gesture.
- After identifying the target function corresponding to the air gesture, the terminal device continuously adjusts that target function according to the holding time of the gesture within the shooting range, so that the target function changes gradually and the user can determine in time whether the adjustment needs to be terminated.
- the target function can be accurately adjusted through remote air gesture input, without the need to get up and adjust the target function by touching, etc., which greatly improves the convenience and user experience.
- the above-mentioned holding time includes a first time period and a second time period; the first time period corresponds to the air gesture continuously translating, the second time period corresponds to the gesture being stationary, and the first time period precedes the second time period.
- when continuously adjusting the target function corresponding to the air gesture according to its holding time within the shooting range, the terminal device first determines an adjustment amount per unit time; then, from the moment the air gesture is recognized, it continuously adjusts the target function corresponding to the gesture according to that adjustment amount per unit time until the gesture no longer appears in the shooting range.
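A hedged sketch of this time-based scheme follows: a per-unit-time adjustment is applied on every tick for as long as the gesture stays in the shooting range. `gesture_visible` stands in for the camera/recognition pipeline and is an assumption, not an API from the patent.

```python
import time

def adjust_over_time(apply_delta, per_second, gesture_visible,
                     tick=0.1, sleep=time.sleep):
    """Adjust until the air gesture no longer appears in the shooting range."""
    while gesture_visible():
        apply_delta(per_second * tick)   # e.g. volume += 10% per second
        sleep(tick)

# Demo with a fake gesture that stays visible for 30 ticks (about 3 seconds).
ticks = iter([True] * 30 + [False])
total = []
adjust_over_time(total.append, 0.10, lambda: next(ticks), sleep=lambda s: None)
print(round(sum(total), 2))  # 0.3 -> 10% per second held for ~3 s
```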
- the embodiments of the present application provide a control method based on gestures in the air.
- the method is applied to a terminal device such as a vehicle-mounted terminal, which is installed in the vehicle and connected to the camera, display, and other components of the vehicle.
- the camera captures the air gesture;
- the vehicle-mounted terminal obtains the air gesture from the camera, recognizes the target function corresponding to it, and then continuously adjusts that target function according to the distance the gesture moves within the shooting range of the camera or its holding time, so that the target function changes gradually; the user can thus determine in time whether the adjustment needs to be terminated, avoiding repeated adjustments caused by being unable to adjust in place at one time.
- the camera can be the camera of the vehicle's driver monitoring system (DMS), collision mitigation system (CMS), etc.; the vehicle-mounted terminal can obtain the air gesture from the camera over a Bluetooth, WiFi, or similar connection; the screen can be the central control display (main screen) of the vehicle, a screen (secondary screen) installed at the rear of a vehicle seat, the instrument screen in front of the steering wheel, and so on.
- an embodiment of the present application provides a control device based on air gestures, including a display unit and a processing unit.
- the display unit is used to enable the display to display the user interface of the application;
- the processing unit is configured to obtain the air gesture collected by the camera and continuously adjust the target function corresponding to the air gesture according to the first distance that the gesture moves within the shooting range of the camera, where the air gesture is a gesture whose distance to the display is greater than a preset threshold;
- the target function is a continuously adjustable function of the application, and the adjustment amount of the continuous adjustment is positively correlated with the first distance.
- the first distance is determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance that the air gesture moves on the imaging surface of the camera while the gesture moves within the shooting range of the camera.
- the processing unit is further configured to, before continuously adjusting the target function corresponding to the air gesture according to the first distance that the gesture moves within the shooting range of the camera, determine a first position and a second position of the air gesture on the imaging surface of the camera during its movement, and determine the second distance according to the number of pixels between the first position and the second position and the pixel size.
- the first distance is less than or equal to a preset distance, and the preset distance is positively correlated with the length of the user's arm.
- the processing unit is further configured to, before continuously adjusting the target function corresponding to the air gesture according to the first distance that the gesture moves within the shooting range of the camera, recognize the user's wrist bone point and elbow bone point, determine the user's arm length according to the three-dimensional coordinates of the wrist bone point and the elbow bone point, and determine the preset distance according to the arm length.
- when continuously adjusting the target function corresponding to the air gesture according to the first distance that the gesture moves within the shooting range of the camera, the processing unit is configured to determine a unit-distance adjustment amount, which is the ratio of the total adjustment amount of the target function to the preset distance, and, from the moment the air gesture is recognized, to continuously adjust the target function corresponding to the gesture according to the unit-distance adjustment amount until the gesture has moved the first distance.
- when continuously adjusting the target function corresponding to the air gesture according to the first distance that the gesture moves within the shooting range of the camera, the processing unit is configured to determine the ratio of the first distance to the preset distance, determine the adjustment amount according to that ratio, and continuously adjust the target function corresponding to the gesture according to the adjustment amount.
- the processing unit is further configured to, before continuously adjusting the target function corresponding to the air gesture according to the first distance that the gesture moves within the shooting range of the camera, input the focal length of the camera, the distance between the air gesture and the optical center of the camera, and the second distance into a pre-trained preset model, and determine the first distance using the preset model.
- the processing unit is also used to obtain a sample data set.
- the sample data set includes multiple sets of sample data.
- one set of sample data includes the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance; the multiple sets of sample data in the sample data set are used to train the preset model.
- the display unit is used to enable the display to display the user interface of the application
- the processing unit is configured to recognize the air gesture made by the user within the shooting range of the camera of the terminal device, where the air gesture is a gesture whose distance from the display is greater than a preset threshold, and to continuously adjust the target function corresponding to the air gesture according to the length of time the gesture remains within the shooting range of the camera; the target function is a continuously adjustable function of the application, and the adjustment amount of the continuous adjustment is positively correlated with the holding time.
- the holding duration includes a first duration and a second duration; the first duration corresponds to the air gesture continuously translating, the second duration corresponds to the gesture being stationary, and the first duration precedes the second duration.
- when continuously adjusting the target function corresponding to the air gesture according to the holding time of the gesture within the shooting range of the camera, the processing unit is configured to determine an adjustment amount per unit time and, from the moment the air gesture is recognized, to continuously adjust the target function corresponding to the gesture according to that adjustment amount per unit time until the gesture no longer appears in the shooting range of the camera.
- the display unit is further configured to display a function origin on the display when the terminal device recognizes the air gesture, and the function origin indicates that the terminal device has recognized the air gesture and has woken up the air-gesture function.
- the continuously adjustable functions of different applications correspond to the same air gesture.
- the target function includes any one of the following functions: volume adjustment, audio and video progress adjustment, air-conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air-conditioning air volume adjustment, and ambient light brightness adjustment.
- when recognizing the air gesture made by the user within the shooting range of the camera, the processing unit is configured to continuously shoot the shooting range with the camera, determine whether the most recently captured image frame contains the air gesture, and, if it does, recognize the air gesture made by the user within the shooting range of the camera.
- an embodiment of the present application provides a terminal device, comprising: one or more processors; one or more memories; and one or more computer programs, where the one or more computer programs are stored in the one or more memories and include instructions which, when executed by the terminal device, cause the terminal device to execute the method according to any embodiment of the first aspect; or cause the terminal device to execute the method according to any embodiment of the second aspect; or cause the terminal device to execute the method according to any embodiment of the third aspect.
- the embodiments of the present application also provide a computer storage medium, including computer instructions which, when run on a terminal device, cause the terminal device to execute the method according to any embodiment of the first aspect; or cause the terminal device to execute the method according to any embodiment of the second aspect; or cause the terminal device to execute the method according to any embodiment of the third aspect.
- the embodiments of the present application also provide a computer program product which, when run on a terminal device, causes the terminal device to execute the method according to any embodiment of the first aspect; or causes the terminal device to execute the method according to any embodiment of the second aspect; or causes the terminal device to execute the method according to any embodiment of the third aspect.
- an embodiment of the present application provides a terminal device, including a logic circuit and an input interface, where the input interface is used to obtain data to be processed, and the logic circuit is used to perform, on the data to be processed, the method according to any embodiment of the first aspect, the second aspect, or the third aspect, to obtain processed data.
- the terminal device further includes: an output interface for outputting the processed data.
- an embodiment of the present application provides an automatic driving system, including:
- the display is used to display the user interface of the application
- the camera is configured to collect an air gesture made by a user, where the air gesture is a gesture whose distance from the display is greater than a preset threshold;
- a control device based on air gestures, used to execute the method according to any embodiment of the first aspect; or used to execute the method according to any embodiment of the second aspect; or used to execute the method according to any embodiment of the third aspect.
- the control method, device, and system based on air gestures provided by the present application continuously monitor the moving distance or holding time of the air gesture made by the user and continuously adjust the function corresponding to the gesture according to the moving distance or holding time, so as to realize control of the terminal device.
- FIG. 1 is a functional block diagram of a vehicle-mounted terminal for executing the control method based on air gestures provided by an embodiment of the present application;
- FIG. 2A is a flowchart of a control method based on air gestures provided by an embodiment of the present application
- FIG. 2B is a schematic diagram of a car cabin provided with a control device based on air gestures provided by an embodiment of the present application;
- FIG. 3 is a schematic diagram of the function origin in the air gesture-based control method provided by an embodiment of the present application.
- FIG. 4A is a schematic diagram of a process of a control method based on air gestures according to an embodiment of the present application
- FIG. 4B is another process schematic diagram of the control method based on air gestures according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of a display screen and an air gesture in a control method based on an air gesture provided by an embodiment of the present application;
- FIG. 6 is a schematic diagram of a display screen and an air gesture in a control method based on an air gesture provided by an embodiment of the present application;
- FIG. 7 is a flowchart of a control method based on air gestures according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of an air gesture in the air gesture-based control method provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of a process of setting a sustainable adjustment mode in a control method based on air gestures provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of a process of determining a first distance in a control method based on an air gesture according to an embodiment of the present application
- FIG. 11 is a schematic diagram of a process of determining F in a control method based on an air gesture provided by an embodiment of the present application;
- FIG. 12 is a schematic diagram of a process of determining a second distance in a control method based on an air gesture provided by an embodiment of the present application.
- FIG. 13 is a schematic diagram of a process of adjusting a target function in a control method based on an air gesture provided by an embodiment of the present application;
- FIG. 14 is a schematic diagram of a process of detecting arm length in a control method based on air gestures according to an embodiment of the present application
- FIG. 15 is a schematic diagram of a process of adjusting a target function in a control method based on air gestures according to an embodiment of the present application;
- FIG. 16 is a schematic diagram of a process of detecting an air gesture in the air gesture-based control method provided by an embodiment of the present application.
- FIG. 17 is a schematic structural diagram of a control device based on air gestures according to an embodiment of the present application.
- FIG. 18 is a schematic structural diagram of a terminal device provided by an embodiment of this application.
- the vehicle-mounted terminal is fixed in the center console of the vehicle, and the screen of the vehicle-mounted terminal can also be called the central control screen.
- the camera on the car, the instrument display and the screen behind the seat of the car are not integrated with the vehicle terminal.
- the passenger in the co-pilot seat can also operate the on-board terminal through the central control display, and even rear passengers can operate the on-board terminal through the secondary screen.
- the secondary screen refers to the screen installed on the back of the front seat and facing the rear passengers.
- the position of the camera in the car is relatively flexible: on some vehicles the camera is located above the central control display, on others to the left of the central control screen, and the location of a DMS or CMS camera may even be independent of the terminal device. Moreover, different users, such as the driver, the passenger in the co-pilot seat, or a rear passenger, are in different positions, which increases the difficulty of controlling the target functions of the vehicle-mounted terminal with air gestures. How to use air gestures for continuous adjustment to control terminal devices is therefore an urgent problem to be solved.
- the embodiments of the present application provide a control method, device, and system based on air gestures, which use the holding time or movement distance of the air gesture to continuously adjust the function corresponding to the gesture so that the function changes gradually, thereby realizing control of the terminal device.
- The control method based on air gestures described in the embodiments of the present application can be applied to terminal devices such as smart screens and vehicle-mounted terminals, with which users can interact over a large range.
- FIG. 1 is a functional block diagram of a vehicle-mounted terminal used to execute the control method based on air gestures provided by an embodiment of the present application.
- the vehicle-mounted terminal 100 may include various subsystems, such as a traveling system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116.
- the in-vehicle terminal 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
- each subsystem and element of the in-vehicle terminal 100 may be interconnected by wire or wireless.
- the traveling system 102 may include components that provide power movement for the vehicle on which the vehicle-mounted terminal 100 is installed.
- the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
- the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
- the engine 118 converts the energy source 119 into mechanical energy.
- Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
- the energy source 119 may also provide energy for other systems of the vehicle-mounted terminal 100.
- the transmission device 120 can transmit mechanical power from the engine 118 to the wheels 121.
- the transmission device 120 may include a gearbox, a differential, and a drive shaft.
- the transmission device 120 may also include other devices, such as a clutch.
- the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
- the sensor system 104 may include several sensors that sense information about the environment around the vehicle.
- the sensor system 104 may include a positioning system 122 (the positioning system may be a GPS system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and Camera 130.
- the sensor system 104 may also include sensors that monitor the internal systems of the vehicle (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.).
- the positioning system 122 can be used to estimate the geographic location of the vehicle.
- the IMU 124 is used to sense the position and orientation changes of the vehicle based on the inertial acceleration.
- the IMU 124 may be a combination of an accelerometer and a gyroscope.
- the radar 126 may use radio signals to sense objects in the surrounding environment of the vehicle. In some embodiments, in addition to sensing the object, the radar 126 may also be used to sense the speed and/or direction of the object.
- the laser rangefinder 128 can use laser light to sense objects in the environment where the vehicle is located.
- the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
- the camera 130 can be used to capture air gestures made by the user within the shooting range of the camera.
- the camera 130 can be a monocular camera, a binocular camera, a time-of-flight (TOF) camera, a DMS camera, a CMS camera, or the like.
- the control system 106 controls the operation of the vehicle and its components.
- the control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a sensor fusion algorithm 138, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
- the steering system 132 is operable to adjust the forward direction of the vehicle.
- it may be a steering wheel system.
- the throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the vehicle.
- the braking unit 136 is used to control the deceleration of the vehicle.
- the braking unit 136 may use friction to slow down the wheels 121.
- the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
- the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 to control the speed of the vehicle.
- the computer vision system 140 may be operable to process and analyze the images captured by the camera 130 to identify objects and/or features in the surrounding environment of the vehicle.
- the objects and/or features may include traffic signals, road boundaries and obstacles.
- the computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision technologies.
- the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
- the route control system 142 is used to determine the driving route of the vehicle.
- the route control system 142 may combine data from the sensor fusion algorithm 138, the global positioning system (GPS) 122, and one or more predetermined maps to determine the driving route for the vehicle.
- the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise surpass potential obstacles in the environment of the vehicle.
- the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
- the in-vehicle terminal 100 interacts with external sensors, other vehicles, other computer systems, or users through peripheral devices 108.
- the peripheral device 108 may include a wireless communication system 146, a display screen 148, a microphone 150, and/or a speaker 152.
- the peripheral device 108 provides a means for users in the vehicle to interact with the user interface 116.
- the display screen 148 may display information to users in the vehicle.
- the user interface 116 can also operate the on-board computer to receive user input.
- the peripheral device 108 may provide a means for the vehicle-mounted terminal 100 to communicate with other devices located in the vehicle.
- the microphone 150 may receive a voice command or other audio input of a user in the vehicle.
- the speaker 152 may output audio to the user in the vehicle.
- the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
- the wireless communication system 146 may use 3G cellular communication, such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as LTE; or 5G cellular communication.
- the wireless communication system 146 may use wireless-fidelity (wireless-fidelity, WiFi) to communicate with a wireless local area network (WLAN).
- the wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, or the ZigBee protocol, or using other wireless protocols, such as various vehicle communication systems.
- the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
- the power supply 110 may provide power to various components of the vehicle.
- the power source 110 may be a rechargeable lithium ion or lead-acid battery.
- One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle.
- the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
- the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer readable medium such as a data storage device 114.
- the computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the vehicle-mounted terminal 100 in a distributed manner.
- the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
- the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored in the same physical housing.
- the memory may be a hard disk drive or other storage medium located in a housing different from that of the computer. Therefore, a reference to a processor or computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as steering components and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
- the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
- some of the processes described herein are executed on a processor disposed in the vehicle and others are executed by a remote processor, including taking the necessary steps to perform a single manipulation.
- the data storage device 114 may include instructions 115 (for example, program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the in-vehicle terminal 100, including those described above.
- the data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the traveling system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
- the data storage device 114 may also store data, such as road maps, route information, the location, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle terminal 100 and the computer system 112 during the operation of the vehicle in autonomous, semi-autonomous, and/or manual modes.
- the user interface 116 is used to provide information to or receive information from users in the vehicle.
- the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as a wireless communication system 146, a display screen 148, a microphone 150, and a speaker 152.
- the computer system 112 may control the functions of the in-vehicle terminal 100 based on inputs received from various subsystems (for example, the traveling system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering unit 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control of many aspects of the in-vehicle terminal 100 and its subsystems.
- one or more of the above-mentioned components may be separately installed or associated with the in-vehicle terminal 100.
- the data storage device 114 may exist partially or completely separately from the in-vehicle terminal 100.
- the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
- FIG. 1 should not be construed as a limitation to the embodiments of the present application.
- computing devices associated with the vehicle-mounted terminal 100 may predict the behavior of an identified obstacle based on the characteristics of the obstacle and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.).
- the behaviors of the identified obstacles depend on one another, so all the identified obstacles can also be considered together to predict the behavior of a single identified obstacle.
- the vehicle-mounted terminal 100 can adjust its speed based on the predicted behavior of the obstacle.
- an autonomous vehicle can determine what state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop) based on the predicted behavior of the obstacle.
- other factors can also be considered to determine the speed of the vehicle, such as the lateral position of the vehicle on the road, the curvature of the road, the proximity of static and dynamic objects, and so on.
- the above-mentioned vehicles may be cars, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, recreational vehicles, playground vehicles, construction equipment, trams, golf carts, trains, trolleys, and the like, which is not particularly limited in the embodiments of the present invention.
- Fig. 2A is a flowchart of a control method based on an air gesture provided by an embodiment of the present application. This embodiment includes:
- the application may be a system application or a third-party application installed on the terminal device.
- the terminal device recognizes the operation instructions input by the user by clicking on the display, etc., and enables the display to display the user interface of the application.
- the air gesture is a gesture whose distance from the display is greater than a preset threshold.
- the air gesture may also be referred to as a 3D gesture, a three-dimensional gesture, a non-contact gesture, and the like.
- the local database or the remote database of the terminal device stores a gesture set, which contains air gestures, the functions corresponding to the air gestures, and the correspondence between the air gestures and the functions.
- the user makes an air gesture within the shooting range of the camera, for example, the user in the co-pilot position makes an air gesture in front of the camera.
- the camera captures the air gesture, and the terminal device acquires the air gesture collected by the camera and determines whether the gesture exists in the gesture set. If the air gesture exists in the gesture set, the target function corresponding to it is determined; if it does not, the gesture is considered unrecognizable.
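The gesture-set lookup can be sketched as a simple mapping; the gesture names and functions below are illustrative assumptions, not the patent's actual gesture set.

```python
# Hypothetical gesture set; in the embodiment this lives in a local or remote DB.
GESTURE_SET = {
    "two_fingers_right": "fast_forward",
    "two_fingers_up": "volume_up",
    "fist": "screenshot",
}

def target_function(air_gesture: str):
    """Return the function bound to the recognised gesture, or None if unknown."""
    return GESTURE_SET.get(air_gesture)   # None -> the gesture cannot be recognised

print(target_function("two_fingers_right"))  # fast_forward
print(target_function("wave"))               # None
```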
- the embodiments of the present application do not limit the positions of the display, camera, and terminal device.
- the terminal device is a vehicle-mounted terminal device, and a camera, a display, etc. are provided on the vehicle-mounted terminal device.
- the display, the camera, and the terminal device are integrated.
- the terminal device is a vehicle-mounted terminal device, and the camera and display are not integrated on the vehicle-mounted terminal.
- Fig. 2B is a schematic diagram of a car cabin provided with a control device based on air gestures provided by an embodiment of the present application. Please refer to FIG. 2B.
- the control device based on air gestures is integrated on the vehicle terminal, and the display can be a central control display (main screen) on the vehicle, or a screen (secondary screen) installed at the rear of the vehicle seat.
- the camera can be a vehicle DMS or CMS camera, which is not shown in the figure.
- the vehicle-mounted terminal is connected to the camera, main screen and sub-screen via WiFi and so on.
- the adjustment amount of the continuous adjustment is positively correlated with the first distance that the air gesture moves within the shooting range; or, the adjustment amount of the continuous adjustment is positively correlated with the holding time of the air gesture within the shooting range.
- the terminal device continuously adjusts the target function corresponding to the air gesture from the moment the gesture is recognized until the gesture moves out of the shooting range; that is, it continuously adjusts the target function while the air gesture remains within the shooting range.
- the amount of continuous adjustment is positively related to the duration of the air gesture within the shooting range.
- for example, the target function of the air gesture is to increase the volume, and the volume is adjusted by 10% per second; for another example, the adjustment amount of the continuous adjustment is positively correlated with the first distance: the target function corresponding to the air gesture is fast forward, and the video is fast-forwarded 10 minutes for every 3 cm the air gesture moves within the shooting range.
- the camera captures the air gesture.
- after the terminal device recognizes the target function corresponding to the air gesture, it continuously adjusts that target function according to the holding time or moving distance of the gesture within the shooting range, so that the target function changes gradually.
- This lets the user determine in time whether the adjustment needs to be terminated, avoids repeated adjustments caused by being unable to adjust in place at one time, and keeps the operation simple; at the same time, it reduces the time the user looks at the screen while adjusting the target function and improves driving safety.
- the target function can be accurately adjusted through remote air gesture input, without the need to get up and adjust the target function by touching, etc., which greatly improves the convenience and user experience.
- the display shows the function origin, which is used to indicate that the terminal device has recognized the air gesture and has woken up the air-gesture function. For example, see Figure 3.
- FIG. 3 is a schematic diagram of the function origin in the control method based on the air gesture provided by the embodiment of the present application.
- a camera is set on the upper left corner of the vehicle terminal, as shown by the black circle in the figure.
- the vehicle-mounted terminal plays the video.
- the vehicle-mounted terminal recognizes that the function of the air gesture is to adjust the video progress, pops up the progress bar with the current playback position, and displays the function origin above the progress bar, as shown by the black circle in the figure, so that the user knows the terminal device has recognized the air gesture and woken up the air-gesture function.
- Although FIG. 3 takes a camera installed in the upper left corner of the vehicle-mounted terminal as an example to describe the embodiment of the present application in detail, the embodiment of the present application is not limited thereto; in other optional implementations, the vehicle terminal manufacturer can flexibly set the position of the camera.
- the air gesture may be both a wake-up gesture and a functional gesture, where the wake-up gesture is used to wake up the air-gesture function and the functional gesture is used to adjust the target function; or, the wake-up gesture and the functional gesture are different air gestures.
- For example, please refer to Fig. 4A and Fig. 4B.
- FIG. 4A is a schematic diagram of a process of a control method based on an air gesture according to an embodiment of the present application.
- the terminal device recognizes the wake-up gesture and wakes up the air-gesture function.
- After the air-gesture function is woken up, the terminal device responds to the functional gesture made by the user.
- Adjusting the target function according to the holding time of the functional gesture may be called the time-based adjustment method, and adjusting it according to the moving distance may be called the space-based adjustment method.
- After the air-gesture function wakes up, the terminal device prompts the user through voice, animation, etc. that the air-gesture function has been awakened and that a functional gesture should be made. The user then makes a functional gesture, and the terminal device recognizes the target function corresponding to the functional gesture and adjusts the target function according to the holding time or moving distance of the functional gesture within the shooting range.
- the target function can be any one of volume adjustment, audio and video progress adjustment, air conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air conditioning air volume adjustment, and ambient light brightness adjustment.
- a dedicated wake-up gesture is set for waking up the air-gesture function, such as a gesture with five fingers held vertically upward.
- If the terminal device detects that the user makes the five-finger vertical upward gesture and holds it longer than a preset time, such as 3 seconds, it considers that the user wants to wake up the air-gesture function. The terminal device then turns on the air-gesture function and recognizes the air gestures collected by the camera.
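A sketch of the wake-gesture check under stated assumptions (a 30 fps camera and a frame-level classifier `is_wake_gesture`, neither of which comes from the patent): the air-gesture function wakes only once the dedicated gesture has been held past the threshold, 3 seconds in the example above.

```python
def detect_wake(frames, is_wake_gesture, fps=30.0, hold_s=3.0):
    """Return True once the wake gesture has persisted for `hold_s` seconds."""
    held = 0
    for frame in frames:
        held = held + 1 if is_wake_gesture(frame) else 0   # reset on any break
        if held / fps >= hold_s:
            return True
    return False

# Demo: ~1.7 s of other poses, then 4 s of the five-finger gesture at 30 fps.
frames = ["open_palm"] * 50 + ["five_fingers_up"] * 120
print(detect_wake(frames, lambda f: f == "five_fingers_up"))  # True
```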
- FIG. 4B is another process schematic diagram of the control method based on air gestures provided by an embodiment of the present application. Referring to Figure 4B, when the air gesture is both a wake-up gesture and a functional gesture, the terminal device, after recognizing the air gesture, wakes up the air-gesture function, determines the target function, and adjusts the target function according to the holding time or moving distance of the air gesture.
- the target function corresponding to the air gesture is a continuously adjustable function of the application currently displayed on the screen, and the continuously adjustable functions of different applications correspond to the same air gesture.
- since the terminal device has many functions that can be continuously adjusted, setting a different air gesture for each continuously adjustable function would produce a particularly large number of gestures, and the user would confuse them. Therefore, the same air gesture can be set for the continuously adjustable functions of different applications. For example, see Figure 5.
- Fig. 5 is a schematic diagram of a display screen and an air gesture in a control method based on air gestures provided by an embodiment of the present application.
- Figure 5 shows an air gesture of moving two fingers to the right.
- When the vehicle-mounted terminal is on the video playback interface, the target function of this air gesture is fast forward, and the floating window next to the function origin displays "fast forward" so that the user understands the target function corresponding to the gesture; when the vehicle-mounted terminal is on the air-conditioning interface, the target function of the same gesture is to increase the temperature, and the floating window next to the function origin displays "increase temperature" so that the user understands the target function corresponding to the gesture.
- the target function corresponding to the air gesture is a continuously adjustable function of the application currently displayed on the screen, and different continuously adjustable functions of the same application correspond to different air gestures.
- the sustainable adjustment functions of video playback applications include volume, progress, etc.
- the sustainable adjustment functions of air conditioners include temperature, air volume, etc.
- therefore, different air gestures can be set for different continuously adjustable functions of the same application. For example, see Figure 6.
- Fig. 6 is a schematic diagram of a display screen and an air gesture in a control method based on air gestures provided by an embodiment of the present application.
- the target function corresponding to the air gesture of moving two fingers to the right is fast forward, and the floating window next to the function origin displays "fast forward" so that the user understands the corresponding target function;
- the target function corresponding to the air gesture of moving two fingers up is to increase the volume, and the floating window next to the function origin displays "volume increase" so that the user understands the corresponding target function.
- after the terminal device recognizes the target function of the air gesture, it can determine the adjustment amount according to the holding time or movement distance of the air gesture within the shooting range, and then adjust the target function according to that adjustment amount.
- for an example, please refer to Figure 7.
- Fig. 7 is a flow chart of a control method based on an air gesture provided by an embodiment of the present application. Referring to FIG. 7, this embodiment includes the following steps:
- the terminal device detects that the user makes an air gesture.
- the camera sends the captured video stream of the shooting range to the processor of the terminal device, and the processor analyzes the video stream. If the terminal device determines that the latest captured image frame, or several consecutive image frames, contains an air gesture, the user is considered detected as making an air gesture.
- FIG. 8 is a schematic diagram of an air gesture in the control method based on air gestures provided by an embodiment of the present application.
- the user makes gestures of raising the index finger and middle finger within the shooting range.
- hand target detection technology is used to extract the location area of the air gesture from the collected image, as shown by the dashed box in the figure.
- the hand target detection technology is, for example, one based on a deep model such as a single-shot multi-box detector (SSD).
- the terminal device inputs the picture corresponding to the location area where the air gesture was determined into a hand key point detection model, and uses that model to detect the key points, shown as the black dots in the figure.
- the hand key point detection model is, for example, a model trained with an OpenPose model using hand segmentation technology, hand key point positioning technology, and the like.
- the terminal device uses the detected key points to recognize the air gesture, for example recognizing that the user makes a single-finger-up gesture, a two-finger gesture, and the like. Then, by querying a database or the like, it determines whether the air gesture is related to the application on the current interface. If the air gesture corresponds to a certain function of that application, that function is taken as the target function; if the air gesture does not correspond to any function of the application on the current interface, the terminal device does not respond, or prompts the user that the air gesture is incorrect.
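- as a rough illustration of the recognition pipeline just described (hand detection, key point extraction, gesture classification, then a lookup of the target function), the following Python sketch shows the control flow. The detector and key point model objects, the gesture-to-function table, and the toy finger-counting classifier are hypothetical placeholders, not the implementation of this application:

```python
# Minimal sketch of the recognition pipeline described above.
# `detector` and `keypoint_model` are hypothetical stand-ins for an
# SSD-based hand detector and an OpenPose-style keypoint network.

GESTURE_TO_FUNCTION = {                     # assumed per-interface gesture table
    ("video", "two_finger_right"): "fast_forward",
    ("video", "two_finger_up"): "volume_up",
    ("air_conditioner", "two_finger_right"): "raise_temperature",
}

def classify_gesture(keypoints):
    """Toy classifier: count extended fingers from keypoint geometry."""
    extended = [k for k in keypoints if k["tip_y"] < k["base_y"]]
    if len(extended) == 2:
        return "two_finger_right"           # direction would come from tracking
    return "unknown"

def recognize(frame, current_app, detector, keypoint_model):
    box = detector.detect(frame)            # step 302: locate the hand region
    if box is None:
        return None                         # no air gesture in this frame
    keypoints = keypoint_model.run(frame, box)   # step 303: hand key points
    gesture = classify_gesture(keypoints)        # recognize the gesture type
    # Step 305: look up whether the gesture maps to a function of the current
    # application; otherwise the device ignores it or prompts the user.
    return GESTURE_TO_FUNCTION.get((current_app, gesture))
```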
- step 306: select the continuous adjustment mode. If the spatial adjustment mode is selected, steps 307 to 308 are performed; if the time adjustment mode is selected, steps 309 to 310 are performed.
- the spatial adjustment mode refers to determining the adjustment amount according to the first distance that the air gesture moves within the shooting range. In this mode, the continuously adjusted amount is positively correlated with the first distance that the air gesture moves within the shooting range;
- the time adjustment mode refers to determining the adjustment amount according to the holding time of the air gesture within the shooting range. In this mode, the continuously adjusted amount is positively correlated with the holding time of the air gesture within the shooting range.
- the above spatial or time adjustment mode can be set before the terminal device leaves the factory, or it can be exposed for users to set themselves. For example, see Figure 9.
- FIG. 9 is a schematic diagram of a process of setting a sustainable adjustment mode in a control method based on an air gesture provided by an embodiment of the present application. Please refer to Figure 9.
- the user taps the settings icon on the terminal device interface to enter the settings interface, and taps the drop-down menu button of "Continuous adjustment mode" on the settings interface.
- a floating window pops up on the terminal device interface for the user to select "Adjust by time", "Adjust by space", or "Off".
- the terminal device detects the starting point and the end point of the air gesture movement, and determines the first distance that the air gesture moves within the shooting range according to the starting point and the end point.
- the terminal device continuously adjusts the target function during the air gesture movement process until the air gesture moves the first distance.
- the terminal device continuously adjusts the target function during the air gesture's movement until the air gesture moves out of the shooting range.
- the continuously adjusted adjustment amount is positively correlated with the first distance moved by the air gesture within the shooting range of the camera.
- the first distance is determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance that the air gesture imaged on the camera's imaging surface moves while the air gesture moves within the camera's shooting range. For example, see Figure 10.
- FIG. 10 is a schematic diagram of a process of determining a first distance in a control method based on an air gesture provided by an embodiment of the present application.
- the point-filled part is the imaging surface of the camera
- the plane where the line AB lies is a plane parallel to the imaging surface
- point O is the optical center of the camera
- the base of the double-dot-dash triangle represents the maximum field of view of the camera.
- the preset distance L_max lies within this field of view.
- the first distance L_shift is the distance that the user's air gesture moves within the shooting range. The first distance L_shift cannot exceed the preset distance L_max; the preset distance L_max may be a fixed value, or a value related to the user's arm length and the like.
- the focal length f of the camera is related to the model and type of the camera.
- the preset distance L_max corresponds to a maximum imaging-surface distance; that is, when the user's air gesture moves the preset distance L_max, the air gesture in the camera's imaging surface moves the maximum imaging-surface distance.
- the second distance l represents the distance that the air gesture moves on the camera's imaging surface when the air gesture moves within the camera's shooting range.
- among these parameters, the focal length f of the camera is known;
- the distance F between the air gesture and the optical center of the camera can be obtained by measurement;
- the second distance l can be obtained by tracking the air gesture;
- the first distance L_shift is the quantity to be solved. After solving for the first distance L_shift, the terminal device can determine the adjustment amount according to it.
- FIG. 10 takes the segment AB representing the preset distance L_max as an example, i.e. the air gesture moving from point A to point B; however, the embodiment of the present application is not limited to this. In other feasible implementations, the air gesture can also move from point C to point D, i.e. the segment CD represents the preset distance L_max.
- below, how the terminal device obtains the distance F between the air gesture and the optical center of the camera and the second distance l, how it calculates the first distance L_shift, and how it determines the continuous adjustment according to the first distance L_shift are described in detail.
- a sensor of the terminal device determines the distance F between the air gesture and the optical center of the camera.
- the sensor can be set separately or integrated into the camera. Below, taking a sensor integrated into the camera as an example, how the terminal device obtains the distance F between the air gesture and the optical center of the camera is described in detail.
- when the camera is a monocular camera, the terminal device first performs target recognition through image matching, so as to recognize the air gesture. Then, the terminal device estimates the distance F between the air gesture and the optical center of the camera according to the size of the air gesture in the image. Accurate recognition of the air gesture is the first step in estimating the distance F; to achieve it, a sample feature database needs to be established and continuously maintained, locally on the terminal or remotely, to ensure that the database contains all air gestures.
- when the camera is a binocular camera, the terminal device uses it to capture the shooting range and obtain two images.
- the two images are used to determine the parallax corresponding to the air gesture: the farther the air gesture is from the camera, the smaller the parallax; the closer it is, the greater the parallax. Then, the terminal device can determine the distance F between the air gesture and the optical center of the camera according to a preset correspondence between parallax and distance. For example, see Figure 11.
- FIG. 11 is a schematic diagram of a process of determining F in a control method based on an air gesture provided by an embodiment of the present application.
- similar to human eyes, in the two images captured by the binocular camera the air gesture is not located at the same position. Therefore, when the two images are overlaid, the air gestures do not coincide, and the distance between the two air gestures is called the parallax. After the terminal device obtains the parallax, the distance F can be determined from it.
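- for the binocular case, the depth can be recovered with the standard stereo-triangulation relation F = f·B/d, where B is the baseline between the two cameras and d the disparity; the baseline and the pixel-space focal length below are assumed parameters, since the application only states the parallax-distance correspondence:

```python
def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo-triangulation depth: the farther the hand, the smaller the disparity."""
    if disparity_px <= 0:
        raise ValueError("hand not matched in both images")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline 6 cm, disparity 40 px -> F = 1.2 m
print(distance_from_disparity(800, 0.06, 40))
```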
- when the camera is a TOF camera, the terminal device continuously sends light pulses toward the air gesture, uses the sensor to receive the light returned by the air gesture, and determines the distance F by detecting the flight time of the light pulses, where the flight time of a light pulse is its round-trip time.
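- the TOF branch reduces to F = c·t/2, half the round-trip flight time multiplied by the speed of light; a minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_tof(round_trip_s: float) -> float:
    """Distance F from a light pulse's round-trip flight time."""
    return SPEED_OF_LIGHT * round_trip_s / 2

# a 4 ns round trip corresponds to roughly 0.6 m
print(distance_from_tof(4e-9))
```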
- note that when the air gesture is located at any position in the plane where AB lies in FIG. 10, the distance F between the air gesture and the optical center of the camera is the same.
- the second distance l can be obtained through image processing.
- when the user's air gesture moves within the shooting range, the air gesture imaged on the camera's imaging surface also moves.
- the terminal device uses deep learning technology to obtain the hand location area.
- during the movement, the terminal device determines a first position and a second position of the air gesture on the camera's imaging surface, and determines the second distance l according to the number of pixels between the first position and the second position and the pixel size. For example, see Figure 12.
- FIG. 12 is a schematic diagram of a process of determining a second distance in a control method based on an air gesture provided by an embodiment of the present application.
- the user makes an air gesture of moving two fingers to the right within the shooting range of the camera. During the movement, the air gesture imaged on the imaging surface also moves.
- the terminal device determines the location area where the air gesture lies in the imaging plane and tracks that area.
- the dashed line in the figure shows the first position, and the solid line shows the second position.
- the terminal device determines the number of pixels between the center of the location area at the first position (shown by the gray filled circle in the figure) and the center of the location area at the second position; according to that number of pixels and the pixel size, the second distance l can be determined.
- in this way, by continuously tracking the hand's location area, the second distance l can be determined from that area.
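- the second distance is then just the pixel displacement of the tracked region's center scaled by the physical pixel size; a minimal sketch, with the pixel pitch as an assumed sensor parameter:

```python
import math

def second_distance(center_a, center_b, pixel_size_m: float) -> float:
    """Second distance l: Euclidean pixel displacement of the hand region's
    center on the imaging surface, times the sensor's physical pixel size."""
    pixels = math.dist(center_a, center_b)
    return pixels * pixel_size_m

# e.g. the region center moves 150 px on a sensor with 2 um pixels -> l = 0.3 mm
print(second_distance((400, 300), (550, 300), 2e-6))
```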
- after the terminal device determines the distance F and the second distance l, it can determine the first distance L_shift according to the formula: L_shift = F · l / f
- f represents the focal length of the camera
- F represents the distance between the air gesture and the optical center of the camera
- l represents the second distance
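- putting the three quantities together, the similar-triangles (pinhole) relation l / f = L_shift / F yields the first distance directly; a worked sketch with assumed values:

```python
def first_distance(focal_m: float, hand_distance_m: float, second_distance_m: float) -> float:
    """First distance L_shift = F * l / f, from the pinhole projection
    l / f = L_shift / F (hand plane parallel to the imaging surface)."""
    return hand_distance_m * second_distance_m / focal_m

# e.g. f = 4 mm, hand 0.6 m from the optical center, image motion l = 0.3 mm
# -> the hand moved 0.045 m (4.5 cm) within the shooting range
print(first_distance(0.004, 0.6, 0.0003))
```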
- in order to determine the first distance L_shift, a preset model can also be trained in advance; the focal length f of the camera, the distance F between the air gesture and the optical center of the camera, and the second distance l are input into the pre-trained preset model, and the preset model is used to determine the first distance L_shift.
- during training of the preset model, a sample data set is obtained. The sample data set includes multiple sets of sample data,
- where one set of sample data includes the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance; the multiple sets of sample data in the sample data set are used to train the preset model.
- the training process can be executed by the terminal device, or by a cluster server, which is not limited in the embodiment of the present application.
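- one feasible sketch of such a preset model, assuming an ordinary least-squares regressor (scikit-learn is an illustrative choice, not something the application specifies) trained on sample tuples of focal length, hand distance, and second distance:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per sample: focal length f, hand-to-optical-center distance F,
# second distance l. Targets are the measured first distances L_shift.
# The derived feature F*l/f linearizes the pinhole relation, so a linear
# model can fit it well.
samples = np.array([[0.004, 0.5, 0.0004],
                    [0.004, 0.8, 0.0003],
                    [0.006, 0.6, 0.0006]])
targets = np.array([0.050, 0.060, 0.060])

features = (samples[:, 1] * samples[:, 2] / samples[:, 0]).reshape(-1, 1)
model = LinearRegression().fit(features, targets)

# Predict L_shift for a new observation (f = 5 mm, F = 0.7 m, l = 0.5 mm).
print(model.predict(np.array([[0.7 * 0.0005 / 0.005]])))
```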
- next, the process of determining the continuous adjustment according to the first distance L_shift.
- the terminal device can continuously adjust the target function corresponding to the air gesture according to the first distance L_shift and a unit-distance adjustment amount.
- for example, a unit-distance adjustment amount may be preset on the terminal device; it represents the adjustment of the target function for each unit distance the air gesture moves.
- the unit distance may be, for example, 1 centimeter, 1 decimeter, etc., which is not limited in the embodiment of the present application. In this mode there is no need to set the preset distance L_max; in other words, there is no need to consider how far the air gesture travels within the shooting range.
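- in this mode the adjustment is simply the distance moved times the preconfigured per-unit step; a minimal sketch with assumed values:

```python
UNIT_DISTANCE_M = 0.01          # assumed unit distance: 1 cm
ADJUST_PER_UNIT = 3.0           # assumed step, e.g. 3 minutes of video per cm

def adjustment(first_distance_m: float) -> float:
    """Adjustment amount when no preset distance L_max is configured:
    proportional to the distance moved, with no cap on gesture travel."""
    return (first_distance_m / UNIT_DISTANCE_M) * ADJUST_PER_UNIT

print(adjustment(0.05))   # a 5 cm gesture fast-forwards 15 minutes
```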
- FIG. 13 is a schematic diagram of the process of adjusting the target function in the control method based on air gestures provided by an embodiment of the present application. Referring to Figure 13, after the terminal device detects the air gesture, the function origin appears on the screen. The terminal device keeps monitoring the air gesture; when it detects that the gesture moves, it determines the movement direction and the first distance, determines the target function of the air gesture according to the movement direction, and then adjusts the target function according to the first distance. The part between the dashed-line function origin and the solid-line function origin in the figure is the final adjustment amount.
- the total adjustment amount is essentially the ratio of the first distance L_shift to the preset distance L_max; that is, the total adjustment amount is the proportion, share, or percentage of the preset distance L_max that the first distance L_shift covers.
- when the unit-distance adjustment amount is determined from the preset distance L_max and the total adjustment range, the preset distance L_max can be a fixed value.
- in that case, the fixed value represents a comfortable distance for the user to move an arm. It can be obtained by measuring the distance the arm moves when users of different ages, genders, and heights swing their arms; the fixed value can be the average arm-movement distance of most users. Alternatively, the preset distance L_max can be a value positively correlated with the arm length of the user's arm.
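- when the preset distance is configured, the adjustment becomes the share of L_max that the gesture covered, applied to the function's total range; a sketch using the worked numbers from the description (a 90-minute video and a 30 cm preset distance):

```python
def ratio_adjustment(first_distance_m: float, preset_distance_m: float,
                     total_range: float) -> float:
    """Adjustment as the share of the preset distance the gesture covered:
    total_range * (L_shift / L_max), capped at the full range."""
    ratio = min(first_distance_m / preset_distance_m, 1.0)
    return total_range * ratio

# The description's example: a 90-minute video with L_max = 30 cm gives
# 3 minutes per centimeter, so a 5 cm gesture fast-forwards 15 minutes.
print(ratio_adjustment(0.05, 0.30, 90.0))
```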
- when the preset distance L_max is positively correlated with arm length, after detecting the air gesture the terminal device recognizes the user's wrist skeleton point and elbow skeleton point, determines the user's arm length from them, and determines the preset distance according to the arm length. For example, see Figure 14.
- FIG. 14 is a schematic diagram of the process of detecting the arm length in the control method based on the air gesture provided by the embodiment of the present application.
- after the user makes the air gesture, the terminal device uses deep learning to detect the user's wrist skeleton point and elbow skeleton point, shown as the black filled dots in the figure. Then, the terminal device uses a three-dimensional vision system to determine the three-dimensional coordinates of the wrist skeleton point and the elbow skeleton point; from these two coordinates, the distance between the wrist skeleton point and the elbow skeleton point can be determined, which gives the user's arm length (that is, the length of the forearm).
- the three-dimensional vision system can be a binocular camera, a multi-eye camera, a TOF camera, etc.
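- the arm length computation itself is a Euclidean distance between the two 3D skeleton points; a minimal sketch, with coordinates assumed to come from the three-dimensional vision system in metres:

```python
import math

def arm_length(wrist_xyz, elbow_xyz) -> float:
    """Forearm length: Euclidean distance between the wrist and elbow
    skeleton points returned by the 3D vision system."""
    return math.dist(wrist_xyz, elbow_xyz)

# e.g. coordinates in metres from a TOF camera -> about 0.21 m of forearm
print(arm_length((0.10, 0.25, 0.60), (0.12, 0.05, 0.55)))
```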
- after the arm length is determined, the preset distance L_max corresponding to it can be found by searching a database.
- the database here stores the mapping relationship between arm lengths and the preset distances L_max corresponding to them. For an example, see Table 1.
- Table 1 shows part of the mapping relationship between arm length and the preset distance L_max. From Table 1 it can be seen that the preset distances L_max corresponding to different arm lengths may be different.
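- a minimal sketch of the database lookup, using the arm-length-to-L_max pairs from Table 1 and nearest-entry matching as an assumed lookup policy:

```python
# Mapping from Table 1: forearm length (cm) -> preset distance L_max (cm).
ARM_TO_LMAX_CM = {35: 27.0, 36: 27.5, 37: 28.0, 38: 28.5, 39: 29.0,
                  40: 30.0, 41: 30.5, 42: 31.0, 43: 32.0, 44: 33.0}

def preset_distance(arm_length_cm: float) -> float:
    """Look up L_max for the nearest tabulated arm length."""
    nearest = min(ARM_TO_LMAX_CM, key=lambda a: abs(a - arm_length_cm))
    return ARM_TO_LMAX_CM[nearest]

print(preset_distance(41.3))   # -> 30.5 cm
```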
- next, continuous adjustment in the time adjustment mode. In this mode, the adjustment amount of the continuous adjustment is positively correlated with the holding time of the air gesture within the shooting range.
- the user can keep moving the air gesture during the holding time, or, after moving it for a while, hold it still but keep it within the shooting range. That is to say, the holding time includes a first duration and a second duration: during the first time period, corresponding to the first duration, the air gesture keeps translating; during the second time period, corresponding to the second duration, the air gesture is stationary; and the first time period is before the second time period. For example, see Figure 15.
- FIG. 15 is a schematic diagram of a process of adjusting a target function in a control method based on an air gesture provided by an embodiment of the present application.
- after the terminal device detects the air gesture, the function origin appears on the screen.
- the terminal device keeps monitoring the air gesture; when it detects that the gesture moves, it determines the movement direction and determines the target function of the air gesture according to that direction.
- afterwards, the target function is continuously adjusted according to the holding time of the air gesture. During the first duration the air gesture keeps shifting to the right, and the adjustment amount over this period is the distance between the two dashed-line function origins in the figure. If the user feels further adjustment is needed after the first duration, the gesture is kept in place: it no longer translates but stays still, yet the target function continues to be adjusted.
- the adjustment of the second duration is the distance between the second dashed-line function origin and the solid-line function origin in the figure.
- the total adjustment amount is the sum of the adjustment amount of the first duration and the adjustment amount of the second duration.
- during this process, the camera captures a frame at a preset interval, such as every 0.1 seconds, and keeps adjusting the target function as long as the frame still contains the air gesture. In other words, whether the air gesture keeps moving or stays still, the adjustment continues.
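- a minimal sketch of this time-based loop; the frame test and the adjustment callback are hypothetical stand-ins for the camera sampling and the target function update, and the 10%-per-second step is the volume example described in this embodiment:

```python
import time

FRAME_INTERVAL_S = 0.1      # the description samples one frame every 0.1 s
UNIT_TIME_ADJUST = 10.0     # assumed: +10% volume per second the gesture is held

def adjust_while_held(frame_contains_gesture, apply_adjustment):
    """Time adjustment mode: keep adjusting as long as the sampled frame
    still contains the air gesture, whether it is moving or stationary."""
    while frame_contains_gesture():
        apply_adjustment(UNIT_TIME_ADJUST * FRAME_INTERVAL_S)   # 1% per frame
        time.sleep(FRAME_INTERVAL_S)
    # The gesture left the shooting range: the user considers the target
    # function adjusted to the ideal state, so continuous adjustment ends.
```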
- here, the first duration is at least the time the terminal device needs to recognize the air gesture,
- and the second duration lasts from when the terminal device recognizes the air gesture to the point at which the user, feeling the adjustment is complete, moves the air gesture out.
- in addition, during the second duration the air gesture need not stay completely still; it may, for example, move continuously or intermittently.
- during the adjustment, the target function is continuously adjusted according to a preset unit-time adjustment amount.
- for example, for increasing the volume, if the unit-time adjustment amount is 10% per second, the volume increases by 10% for every second the air gesture is held.
- as another example, for video fast-forward, if the unit-time adjustment amount is 2% per second, the progress is adjusted by 2% for every second the air gesture is held. Afterwards, the air gesture moves out of the camera's shooting range and the continuous adjustment ends. For example, see Figure 16.
- FIG. 16 is a schematic diagram of a process of detecting an air gesture in a control method based on an air gesture provided by an embodiment of the present application.
- the camera captures a frame at a preset interval, such as every 0.1 seconds, and keeps adjusting the target function as long as the frame still contains the air gesture. If a frame is detected not to contain the air gesture, it means the user considers the target function adjusted to the ideal state and has moved the gesture out.
- FIG. 17 is a schematic structural diagram of a control device based on air gestures according to an embodiment of the present application.
- the control device based on air gestures involved in this embodiment may be a terminal device, or may be a chip applied to a terminal device.
- the control device based on air gestures can be used to perform the functions of the terminal device in the foregoing embodiments.
- the control device 100 based on air gestures may include: a display unit 11 and a processing unit 12.
- the display unit 11 is used to enable the display to display the user interface of the application;
- the processing unit 12 is used to obtain the air gesture collected by the camera and continuously adjust the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, where the air gesture is a gesture whose distance from the display is greater than a preset threshold, the target function is a continuously adjustable function of the application, and the adjustment amount of the continuous adjustment is positively correlated with the first distance.
- the first distance is determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance that the air gesture on the camera's imaging surface moves while the air gesture moves within the camera's shooting range.
- before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is also used to determine the first position and the second position during the air gesture's movement on the camera's imaging surface, and to determine the second distance according to the number of pixels between the first position and the second position and the pixel size.
- the first distance is less than or equal to a preset distance, and the preset distance is positively correlated with the arm length of the user's arm.
- before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is also used to recognize
- the user's wrist skeleton point and elbow skeleton point, determine the user's arm length according to the three-dimensional coordinates of the wrist skeleton point and the elbow skeleton point, and determine the preset distance according to the arm length.
- when continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is used to continuously adjust the target function according to a unit-distance adjustment amount from the moment the air gesture is recognized until the air gesture has moved the first distance, where the unit-distance adjustment amount is the ratio of the total adjustment range of the target function to the preset distance.
- when continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is used to continuously adjust the target function according to
- the ratio of the first distance to the preset distance.
- before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is also used to
- input the focal length of the camera, the distance between the air gesture and the optical center of the camera, and the second distance into a pre-trained preset model, and to determine the first distance using the preset model.
- the processing unit 12 is also used to obtain a sample data set.
- the sample data set includes multiple sets of sample data, where one set of sample data includes the focal length of a sample camera,
- the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance; the multiple sets of sample data in the sample data set are used to train the preset model.
- when the control device based on air gestures is used to perform the above adjustment according to time, the display unit 11 is used to enable the display to display the user interface of the application,
- and the processing unit 12 is used to obtain the air gesture collected by the camera,
- where the air gesture is a gesture whose distance from the display is greater than a preset threshold, and to continuously adjust the target function corresponding to the air gesture according to the length of time the air gesture is held within the camera's shooting range,
- where the target function is a continuously adjustable function of the application, and the adjustment amount of the continuous adjustment is positively correlated with the holding time.
- the holding duration includes a first duration and a second duration: during the first time period, corresponding to the first duration,
- the air gesture keeps translating; during the second time period, corresponding to the second duration,
- the air gesture is stationary, and the first time period is before the second time period.
- when continuously adjusting the target function corresponding to the air gesture according to the holding time of the air gesture within the camera's shooting range, the processing unit 12 is used to determine a unit-time adjustment amount and, from the moment the air gesture is recognized, continuously adjust the target function according to the adjustment amount per unit time until the air gesture no longer appears within the camera's shooting range.
- the display unit 11 is also used to display a function origin on the display when the terminal device recognizes the air gesture, where the function origin indicates that the terminal device has recognized the air gesture and woken up the air gesture function.
- the continuously adjustable functions of different applications correspond to the same air gesture.
- the target function includes any one of the following functions: volume adjustment, audio and video progress adjustment, air conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air conditioning air volume adjustment, and ambient light brightness adjustment.
- when recognizing the air gesture made by the user within the camera's shooting range, the processing unit 12 is used to continuously capture the camera's shooting range with the camera, determine whether the latest captured image frame includes the air gesture, and, if it does, recognize the air gesture made by the user within the camera's shooting range.
- FIG. 18 is a schematic structural diagram of a terminal device provided by an embodiment of this application. As shown in FIG. 18, the terminal device 200 includes:
- the memory 22 stores computer execution instructions
- the processor 21 executes the computer-executable instructions stored in the memory 22 to implement the above-mentioned method executed by the terminal device.
- the terminal device 200 further includes: a display 23 (as shown by a dashed box in FIG. 18) for displaying a user interface of the application;
- the aforementioned processor 21, memory 22, and display 23 may be connected via a bus 24.
- in the above implementation of the apparatus, the memory and the processor are electrically connected, directly or indirectly, to realize data transmission or interaction; that is, the memory and the processor can be connected through an interface or can be integrated together.
- for example, these elements may be electrically connected to each other through one or more communication buses or signal lines, such as a bus.
- the memory stores computer-executable instructions for implementing the data access control method, including at least one software function module that can be stored in the memory in the form of software or firmware.
- the processor executes various functional applications and data processing by running the software programs and modules stored in the memory.
- the memory can be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
- the memory is used to store the program, and the processor executes the program after receiving an execution instruction.
- the software programs and modules in the aforementioned memory may also include an operating system, which may include various software components and/or drivers for managing system tasks (such as memory management, storage device control, power management, etc.) and may communicate with various hardware or software components to provide an operating environment for other software components.
- the processor can be an integrated circuit chip with signal processing capabilities.
- the foregoing processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and so on;
- it can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
- the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- on the above basis, the present application also provides a chip, including a logic circuit and an input interface, where the input interface is used to obtain data to be processed, and the logic circuit is used to execute, on the data to be processed, the technical solution on the terminal device side in the foregoing method embodiments, to obtain processed data.
- the chip may further include: an output interface for outputting processed data.
- the to-be-processed data obtained by the aforementioned input interface includes the first distance moved by the air gesture within the shooting range, etc., and the processed data output by the output interface includes the continuously adjusted adjustment amount, etc.
- the present application also provides a computer-readable storage medium, where the computer-readable storage medium is used to store a program, and the program is used to execute the technical solution of the terminal device in the foregoing embodiment when the program is executed by the processor.
- the embodiments of the present application also provide a computer program product, which when the computer program product runs on a terminal device, causes the terminal device to execute the technical solutions in the foregoing embodiments.
- those of ordinary skill in the art should understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments.
- the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
- the specific medium type is not limited in this application.
Abstract
The embodiments of this application provide a control method, apparatus, and system based on air gestures, relating to the field of automated driving. After a user makes an air gesture within the shooting range of a camera, the camera collects the air gesture. After the terminal device recognizes the target function corresponding to the air gesture, it continuously adjusts that target function according to the holding time, movement distance, etc. of the air gesture within the shooting range, so that the target function changes gradually as it is adjusted. This lets the user decide in time whether to stop the adjustment, avoiding the repeated corrections caused by being unable to adjust to the right value in one step, and the operation is simple. In addition, a front passenger or rear-seat user of a smart cockpit can accurately adjust the target function through remote air gesture input without getting up to adjust it by touch, which greatly improves convenience and user experience.
Description
This application relates to the field of automated driving, and in particular to a control method, apparatus, and system based on air gestures.
At present, with the intelligent development of terminal devices, users interact with terminal devices such as mobile phones, tablet computers, and in-vehicle terminals in more and more ways; the air gesture is a form of interaction that has emerged in recent years.
In a typical air gesture operation, a correspondence between air gestures and functions is preset. Afterwards, when the user makes an air gesture within the sensing range of the terminal device, the terminal device recognizes the gesture, determines the corresponding function, and adjusts it. For example, if the function corresponding to making a fist is preset to be taking a screenshot, the terminal device automatically takes a screenshot after recognizing the user's fist gesture.
In the above process of controlling a terminal device based on air gestures, the function corresponding to an air gesture is rather single-purpose. However, as the types and number of applications on terminal devices grow, more and more functions need to be controlled by air gestures. Clearly, how to control a terminal device using air gestures is a problem that needs to be solved.
Summary of the invention
The embodiments of this application provide a control method, apparatus, and system based on air gestures, which continuously monitor the movement distance or holding time of an air gesture made by a user and continuously adjust the function corresponding to the air gesture according to that movement distance or holding time, thereby realizing control of the terminal device.
In a first aspect, an embodiment of this application provides a control method based on air gestures. The method can be executed by a terminal device or by a chip in a terminal device; it is described below as applied to a terminal device. The method includes: after a user makes an air gesture within the shooting range of a camera, the camera collects the air gesture; after the terminal device recognizes the target function corresponding to the air gesture, it continuously adjusts that target function according to the first distance the air gesture moves within the shooting range, etc., so that the target function changes gradually as it is adjusted. This lets the user decide in time whether to stop the adjustment, avoiding the repeated corrections caused by being unable to adjust to the right value in one step, and the operation is simple; it also reduces the time the user spends watching the screen while adjusting the target function, improving driving safety. In addition, a front passenger or rear-seat user of a smart cockpit can accurately adjust the target function through remote air gesture input without getting up to adjust it by touch, which greatly improves convenience and user experience.
In one feasible design, the above first distance can be determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance the air gesture in the camera's imaging surface moves while the air gesture moves within the shooting range. This scheme accurately determines the first distance the user's air gesture moves within the shooting range.
In one feasible design, to determine the first distance, the terminal device also determines a first position and a second position during the movement of the air gesture in the camera's imaging surface, and determines the second distance according to the number of pixels between the first position and the second position and the pixel size. This accurately determines the first distance from the second distance the air gesture moves in the imaging surface.
In one feasible design, the above first distance is less than or equal to a preset distance, and the preset distance is positively correlated with the arm length of the user's arm. This adapts the adjustment of the target function flexibly to different users.
In one feasible design, the terminal device recognizes the user's wrist skeleton point and elbow skeleton point, determines the user's arm length according to the three-dimensional coordinates of the wrist skeleton point and the three-dimensional coordinates of the elbow skeleton point, and determines the preset distance according to the arm length. This adapts the adjustment of the target function flexibly to different users.
In one feasible design, during the continuous adjustment, from the moment the air gesture is recognized, the terminal device continuously adjusts the target function corresponding to the air gesture according to a unit adjustment amount until the air gesture has moved the first distance, where the unit-distance adjustment amount is the ratio of the total adjustment range of the target function to the preset distance. This achieves continuous adjustment of the target function.
In one feasible design, during the continuous adjustment, the terminal device continuously adjusts the target function corresponding to the air gesture according to the ratio of the first distance to the preset distance. This achieves continuous adjustment of the target function.
In one feasible design, when determining the first distance, the terminal device inputs the focal length of the camera, the distance between the air gesture and the optical center of the camera, and the second distance into a pre-trained preset model, and uses the preset model to determine the first distance. This accurately determines the first distance the user's air gesture moves within the shooting range.
In one feasible design, the terminal device also obtains a sample data set containing multiple sets of sample data, where one set of sample data contains the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance; the multiple sets of sample data in the sample data set are used to train the preset model. This achieves training of the preset model.
In one feasible design, after recognizing the air gesture, the terminal device can also continuously adjust the target function corresponding to the air gesture according to the angle through which the air gesture moves in space, the adjustment amount of the continuous adjustment being positively correlated with the angle. With this scheme, the terminal device can continuously adjust the target function according to the angle of movement of the air gesture, so that the target function changes gradually, letting the user decide in time whether to stop the adjustment and avoiding the repeated corrections caused by being unable to adjust to the right value in one step; the operation is simple.
In one feasible design, when the terminal device recognizes the air gesture, a function origin is displayed on the display, the function origin indicating that the terminal device has recognized the air gesture and woken up the air gesture function. This serves to remind the user.
In one feasible design, the continuously adjustable functions of different applications correspond to the same air gesture. Setting the same air gesture for different applications avoids the problem that too many air gestures leave the user unable to tell them apart and reduces the user's learning cost; using one air gesture for similar functions also fits human-computer interaction logic.
In one feasible design, different continuously adjustable functions of an application correspond to different air gestures. Setting different air gestures for different continuously adjustable functions of the same application makes adjusting those functions convenient and quick.
In one feasible design, the above target function includes any one of the following: volume adjustment, audio and video progress adjustment, air conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air conditioning air volume adjustment, and ambient light brightness adjustment. This lets the terminal device flexibly adjust any continuously adjustable function.
In one feasible design, when recognizing an air gesture made by the user within the camera's shooting range, the terminal device uses the camera to continuously capture the shooting range and determines whether the latest captured image frame contains an air gesture; if it does, the air gesture made by the user within the camera's shooting range is recognized. This achieves recognition of air gestures by the terminal device.
In a second aspect, an embodiment of this application provides a control method based on air gestures. The method can be applied to a terminal device or to a chip in a terminal device; it is described below as applied to a terminal device. The method includes: after a user makes an air gesture within the shooting range of a camera, the camera collects the air gesture; after the terminal device recognizes the target function corresponding to the air gesture, it continuously adjusts that target function according to the holding time of the air gesture within the shooting range, etc., so that the target function changes gradually as it is adjusted. This lets the user decide in time whether to stop the adjustment, avoiding repeated corrections, with simple operation; it also reduces the time the user spends watching the screen while adjusting the target function, improving driving safety. In addition, a front passenger or rear-seat user of a smart cockpit can accurately adjust the target function through remote air gesture input without getting up to adjust it by touch, greatly improving convenience and user experience.
In one feasible design, the above holding duration includes a first duration and a second duration: during the first time period, corresponding to the first duration, the air gesture keeps translating; during the second time period, corresponding to the second duration, the air gesture is stationary; and the first time period is before the second time period. This achieves adjusting the target function according to the holding time of the air gesture within the shooting range.
In one feasible design, when continuously adjusting the target function corresponding to the air gesture according to its holding time within the shooting range, the terminal device first determines a unit-time adjustment amount; then, from the moment the air gesture is recognized, it continuously adjusts the target function according to the adjustment amount per unit time until the air gesture no longer appears within the shooting range. This achieves adjusting the target function according to the holding time.
In a third aspect, an embodiment of this application provides a control method based on air gestures. The method runs on a terminal device such as an in-vehicle terminal, where the in-vehicle terminal is installed in a vehicle and connected to the vehicle's camera, display, etc. After a user makes an air gesture at the vehicle's screen, the camera collects the air gesture; the in-vehicle terminal obtains the air gesture from the camera, recognizes the corresponding target function, and continuously adjusts it according to the distance the air gesture moves within the camera's shooting range or its holding time, so that the target function changes gradually, letting the user decide in time whether to stop the adjustment and avoiding the repeated corrections caused by being unable to adjust to the right value in one step; the operation is simple. The camera can be the camera of the vehicle's driver monitoring system (DMS), collision mitigation system (CMS), etc.; the in-vehicle terminal can obtain the air gesture from the camera through a Bluetooth or WiFi connection; and the screen can be the vehicle's central control display (main screen), a screen set at the rear of a vehicle seat (secondary screen), the instrument screen in front of the steering wheel, etc.
In a fourth aspect, an embodiment of this application provides a control apparatus based on air gestures, including a display unit and a processing unit.
When the control apparatus based on air gestures is used to perform the above adjustment according to space, the display unit is used to enable the display to display the user interface of the application; the processing unit is used to obtain the air gesture collected by the camera and continuously adjust the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, where the air gesture is a gesture whose distance from the display is greater than a preset threshold, the target function is a continuously adjustable function of the application, and the adjustment amount of the continuous adjustment is positively correlated with the first distance.
In one feasible design, the first distance is determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance the air gesture in the camera's imaging surface moves while the air gesture moves within the camera's shooting range.
In one feasible design, before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit is also used to determine a first position and a second position during the movement of the air gesture in the camera's imaging surface, and to determine the second distance according to the number of pixels between the first position and the second position and the pixel size.
In one feasible design, the first distance is less than or equal to a preset distance, and the preset distance is positively correlated with the arm length of the user's arm.
In one feasible design, before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit is also used to recognize the user's wrist skeleton point and elbow skeleton point, determine the user's arm length according to the three-dimensional coordinates of the wrist skeleton point and the elbow skeleton point, and determine the preset distance according to the arm length.
In one feasible design, when continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit is used to determine a unit-distance adjustment amount, which is the ratio of the total adjustment range of the target function to the preset distance, and, from the moment the air gesture is recognized, continuously adjust the target function according to the unit adjustment amount until the air gesture has moved the first distance.
In one feasible design, when continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit is used to determine the ratio of the first distance to the preset distance, determine the adjustment amount according to that ratio, and continuously adjust the target function according to the adjustment amount.
In one feasible design, before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit is also used to input the focal length of the camera, the distance between the air gesture and the optical center of the camera, and the second distance into a pre-trained preset model, and to determine the first distance using the preset model.
In one feasible design, the processing unit is also used to obtain a sample data set containing multiple sets of sample data, where one set of sample data contains the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance, and to train the preset model using the multiple sets of sample data in the sample data set.
When the control apparatus based on air gestures is used to perform the above adjustment according to time, the display unit is used to enable the display to display the user interface of the application, and the processing unit is used to recognize the air gesture made by the user within the shooting range of the terminal device's camera, where the air gesture is a gesture whose distance from the display is greater than a preset threshold, and to continuously adjust the target function corresponding to the air gesture according to its holding time within the camera's shooting range, where the target function is a continuously adjustable function of the application and the adjustment amount of the continuous adjustment is positively correlated with the holding time.
In one feasible design, the holding duration includes a first duration and a second duration: during the first time period, corresponding to the first duration, the air gesture keeps translating; during the second time period, corresponding to the second duration, the air gesture is stationary; and the first time period is before the second time period.
In one feasible design, when continuously adjusting the target function corresponding to the air gesture according to its holding time within the camera's shooting range, the processing unit is used to determine a unit-time adjustment amount and, from the moment the air gesture is recognized, continuously adjust the target function according to the adjustment amount per unit time until the air gesture no longer appears within the camera's shooting range.
In one feasible design, the display unit is also used to display a function origin on the display when the terminal device recognizes the air gesture, the function origin indicating that the terminal device has recognized the air gesture and woken up the air gesture function.
In one feasible design, the continuously adjustable functions of different applications correspond to the same air gesture.
In one feasible design, different continuously adjustable functions of the same application correspond to different air gestures.
In one feasible design, the target function includes any one of the following: volume adjustment, audio and video progress adjustment, air conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air conditioning air volume adjustment, and ambient light brightness adjustment.
In one feasible design, when recognizing an air gesture made by the user within the camera's shooting range, the processing unit is used to continuously capture the camera's shooting range with the camera, determine whether the latest captured image frame contains the air gesture, and, if it does, recognize the air gesture made by the user within the camera's shooting range.
In a fifth aspect, an embodiment of this application provides a terminal device, characterized by including: one or more processors; one or more memories; and one or more computer programs, where the one or more computer programs are stored in the one or more memories and include instructions which, when executed by the terminal device, cause the terminal device to perform the method of any embodiment of the first aspect, or the method of any embodiment of the second aspect, or the method of any embodiment of the third aspect.
In a sixth aspect, an embodiment of this application further provides a computer storage medium including computer instructions which, when run on a terminal device, cause the terminal device to perform the method of any of the foregoing implementations, or the method of any embodiment of the second aspect, or the method of any embodiment of the third aspect.
In a seventh aspect, an embodiment of this application further provides a computer program product which, when run on a terminal device, causes the terminal device to perform the method of any of the foregoing implementations, or the method of any embodiment of the second aspect, or the method of any embodiment of the third aspect.
In an eighth aspect, an embodiment of this application provides a terminal device including a logic circuit and an input interface, where the input interface is used to obtain data to be processed and the logic circuit is used to perform, on the data to be processed, the method of any item of the first aspect, or the method of any item of the second aspect, or the method of any item of the third aspect, to obtain processed data.
In one feasible design, the terminal device further includes an output interface for outputting the processed data.
In a ninth aspect, an embodiment of this application provides an automated driving system, including:
a vehicle body, and a camera, a display, and a control apparatus based on air gestures arranged on the vehicle body, the camera and the display each being connected to the control apparatus based on air gestures, where:
the display is used to display the user interface of the application;
the camera is used to collect the air gesture made by the user, the air gesture being a gesture whose distance from the display is greater than a preset threshold;
and the control apparatus based on air gestures is used to perform the method of any embodiment of the first aspect, or the method of any embodiment of the second aspect, or the method of any embodiment of the third aspect.
In summary, the control method, apparatus, and system based on air gestures provided by this application can continuously monitor the movement distance or holding time of an air gesture made by the user and continuously adjust the function corresponding to the air gesture according to that movement distance or holding time, thereby realizing control of the terminal device.
FIG. 1 is a functional block diagram of an in-vehicle terminal for executing the control method based on air gestures provided by an embodiment of this application;
FIG. 2A is a flowchart of the control method based on air gestures provided by an embodiment of this application;
FIG. 2B is a schematic diagram of a car cockpit equipped with the control apparatus based on air gestures provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the function origin in the control method based on air gestures provided by an embodiment of this application;
FIG. 4A is a schematic diagram of one process of the control method based on air gestures provided by an embodiment of this application;
FIG. 4B is a schematic diagram of another process of the control method based on air gestures provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a display screen and an air gesture in the control method based on air gestures provided by an embodiment of this application;
FIG. 6 is a schematic diagram of a display screen and air gestures in the control method based on air gestures provided by an embodiment of this application;
FIG. 7 is a flowchart of the control method based on air gestures provided by an embodiment of this application;
FIG. 8 is a schematic diagram of an air gesture in the control method based on air gestures provided by an embodiment of this application;
FIG. 9 is a schematic diagram of the process of setting the continuous adjustment mode in the control method based on air gestures provided by an embodiment of this application;
FIG. 10 is a schematic diagram of the process of determining the first distance in the control method based on air gestures provided by an embodiment of this application;
FIG. 11 is a schematic diagram of the process of determining F in the control method based on air gestures provided by an embodiment of this application;
FIG. 12 is a schematic diagram of the process of determining the second distance in the control method based on air gestures provided by an embodiment of this application;
FIG. 13 is a schematic diagram of the process of adjusting the target function in the control method based on air gestures provided by an embodiment of this application;
FIG. 14 is a schematic diagram of the process of detecting arm length in the control method based on air gestures provided by an embodiment of this application;
FIG. 15 is a schematic diagram of the process of adjusting the target function in the control method based on air gestures provided by an embodiment of this application;
FIG. 16 is a schematic diagram of the process of detecting an air gesture in the control method based on air gestures provided by an embodiment of this application;
FIG. 17 is a schematic structural diagram of a control apparatus based on air gestures provided by an embodiment of this application;
FIG. 18 is a schematic structural diagram of a terminal device provided by an embodiment of this application.
At present, with the rapid advance of vehicle technology, vehicles are increasingly common and have become one of the important means of transport in people's daily lives. Meanwhile, as in-vehicle terminal screens multiply, users' use of a car is no longer limited to driving. In the smart cockpit of future cars, intelligent human-machine interaction is a very important basic capability, and within human-machine interaction, air gesture interaction is a form that has emerged in recent years.
Compared with handheld mobile terminals, in the automotive field the in-vehicle terminal is fixed at the car's center console, and its screen is also called the central control screen. The vehicle's cameras, instrument display, and the screens at the rear of the seats are not integrated with the in-vehicle terminal. With the growth of in-car entertainment, not only should the driver be able to operate the in-vehicle terminal, but the front passenger should also be able to operate it through the central control display, and even rear passengers should be able to operate target functions of the in-vehicle terminal through the secondary screen, where the secondary screen is a screen set on the back of a front seat facing the rear passengers. The position of the camera on a car is quite flexible: on some vehicles the camera sits above the central control display, on others to its left, and the position of a DMS or CMS camera may even be unrelated to the terminal device. Moreover, different users, such as the driver, the front passenger, or rear passengers, sit in different positions, which increases the difficulty of controlling the target functions of the in-vehicle terminal with air gestures. Clearly, how to perform continuous adjustment with air gestures to control a terminal device is a problem to be solved urgently.
In view of this, the embodiments of this application provide a control method, apparatus, and system based on air gestures, which use the holding time, movement distance, etc. of an air gesture to continuously adjust the function corresponding to that gesture so that the function changes gradually, thereby realizing control of the terminal device.
The control method based on air gestures described in the embodiments of this application can be applied to terminal devices such as smart screens and in-vehicle terminals, with which users can interact over a relatively large range.
FIG. 1 is a functional block diagram of an in-vehicle terminal for executing the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 1, the in-vehicle terminal 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, as well as a power supply 110, a computer system 112, and a user interface 116. Optionally, the in-vehicle terminal 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the in-vehicle terminal 100 may be interconnected by wire or wirelessly.
The travel system 102 may include components that power the vehicle in which the in-vehicle terminal 100 is installed. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121. The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid of a gasoline engine and an electric motor, or a hybrid of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy.
Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 119 may also provide energy to other systems of the in-vehicle terminal 100.
The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft; in one embodiment it may also include other components, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more wheels 121.
The sensor system 104 may include several sensors that sense information about the environment around the vehicle. For example, the sensor system 104 may include a positioning system 122 (which may be GPS, BeiDou, or another positioning system), an inertial measurement unit (IMU) 124, radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors of internal systems of the monitored vehicle (e.g., an in-car air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, speed, etc.).
The positioning system 122 can be used to estimate the geographic position of the vehicle. The IMU 124 is used to sense changes in the vehicle's position and orientation based on inertial acceleration; in one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope.
The radar 126 may use radio signals to sense objects in the vehicle's surroundings. In some embodiments, in addition to sensing objects, the radar 126 may also be used to sense their speed and/or heading.
The laser rangefinder 128 may use laser light to sense objects in the environment where the vehicle is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
The camera 130 may be used to capture air gestures made by the user within its shooting range. The camera 130 may be a monocular camera, a binocular camera, a time-of-flight (TOF) camera, a DMS camera, a CMS camera, etc.
The control system 106 controls the operation of the vehicle and its components. The control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a sensor fusion algorithm 138, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the vehicle; for example, in one embodiment it may be a steering wheel system.
The throttle 134 is used to control the operating speed of the engine 118 and thereby the speed of the vehicle.
The braking unit 136 is used to decelerate the vehicle. The braking unit 136 may use friction to slow the wheels 121; in other embodiments, it may convert the kinetic energy of the wheels 121 into electric current, or take other forms to slow the wheels and thereby control the vehicle's speed.
The computer vision system 140 is operable to process and analyze images captured by the camera 130 in order to recognize objects and/or features in the vehicle's surroundings, which may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, it can be used to map the environment, track objects, estimate object speeds, and so on.
The route control system 142 is used to determine the vehicle's driving route. In some embodiments, the route control system 142 may combine data from the sensors 138, the global positioning system (GPS) 122, and one or more predefined maps to determine a driving route for the vehicle.
The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the vehicle's environment.
Of course, in one example the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be removed.
The in-vehicle terminal 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, a display screen 148, a microphone 150, and/or a speaker 152.
In some embodiments, the peripheral devices 108 provide means for a user in the vehicle to interact with the user interface 116. For example, the display screen 148 can present information to the user in the vehicle, and the user interface 116 can also operate the on-board computer to receive user input. In other cases, the peripheral devices 108 may provide means for the in-vehicle terminal 100 to communicate with other devices in the vehicle; for example, the microphone 150 may receive voice commands or other audio input from the user, and the speaker 152 may output audio to the user.
The wireless communication system 146 may communicate wirelessly with one or more devices directly or via a communication network. For example, it may use 3G cellular communication such as code division multiple access (CDMA), EVD0, or global system for mobile communications (GSM)/general packet radio service (GPRS), 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system 146 may communicate with a wireless local area network (WLAN) using wireless fidelity (WiFi). In some embodiments, it may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or use other wireless protocols such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communication between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the vehicle. In one embodiment, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery, one or more battery packs of which may be configured as a power source to supply the vehicle's components. In some embodiments, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric cars.
Some or all of the functions of the in-vehicle terminal 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as the data storage device 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the in-vehicle terminal 100 in a distributed manner.
The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or another hardware-based processor. Those of ordinary skill in the art will understand that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be housed in the same physical enclosure. For example, the memory may be a hard drive or another storage medium located in a different enclosure from the computer. Accordingly, references to a processor or computer are understood to include references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering and deceleration components, may each have their own processor that performs only the computations related to that component's specific function.
In the various aspects described here, the processor may be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described here are executed on a processor arranged inside the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the data storage device 114 may contain instructions 115 (e.g., program logic) executable by the processor 113 to perform various functions of the in-vehicle terminal 100, including those described above. The data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the propulsion system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
In addition to the instructions 115, the data storage device 114 may store data such as road maps, route information, the vehicle's position, direction, and speed, other such vehicle data, and other information. Such information may be used by the in-vehicle terminal 100 and the computer system 112 while the vehicle operates in autonomous, semi-autonomous, and/or manual modes.
The user interface 116 is used to provide information to or receive information from a user in the vehicle. Optionally, the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as the wireless communication system 146, the display screen 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the in-vehicle terminal 100 based on input received from the various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering unit 132 so as to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the in-vehicle terminal 100 and its subsystems.
Optionally, one or more of the above components may be installed separately from or associated with the in-vehicle terminal 100. For example, the data storage device 114 may exist partially or completely separate from the in-vehicle terminal 100. The above components may be communicatively coupled together in wired and/or wireless fashion.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 1 should not be understood as limiting the embodiments of this application.
Optionally, a computing device associated with the in-vehicle terminal 100 (such as the computer system 112, the computer vision system 140, or the data storage device 114 of FIG. 1) may predict the behavior of an identified obstacle based on the characteristics of the obstacle and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, since the identified obstacles depend on one another's behavior, all identified obstacles may also be considered together to predict the behavior of a single identified obstacle. The in-vehicle terminal 100 can adjust the vehicle's speed based on the predicted behavior of the obstacle; in other words, the autonomous car can determine what state the vehicle needs to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior. In this process, other factors may also be considered to determine the vehicle's speed, such as the vehicle's lateral position on the road it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
The above vehicle may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, train, handcart, etc.; the embodiments of the present invention impose no particular limitation.
Below, taking the terminal device to be the in-vehicle terminal shown in FIG. 1 as an example, the control method based on air gestures described in the embodiments of this application is explained in detail. For an example, see FIG. 2A.
FIG. 2A is a flowchart of the control method based on air gestures provided by an embodiment of this application. This embodiment includes:
201. Enable the display to display the user interface of an application.
For example, the application may be a system application or a third-party application installed on the terminal device. The terminal device recognizes an operation instruction input by the user, such as tapping the display, and enables the display to display the application's user interface.
202. Obtain the air gesture collected by the camera, the air gesture being a gesture whose distance from the display is greater than a preset threshold.
Here, the air gesture is a gesture whose distance from the display is greater than a preset threshold.
In the embodiments of this application, an air gesture may also be called a 3D gesture, a three-dimensional gesture, a contactless gesture, etc. A local or remote database of the terminal device stores a gesture set containing air gestures, the functions corresponding to them, and the correspondence between air gestures and functions. When a user, for example the user in the front passenger seat, makes an air gesture within the camera's shooting range, the camera collects it; the terminal device obtains the air gesture collected by the camera and determines whether it exists in the gesture set. If it does, the corresponding target function is determined; if not, the gesture is considered unrecognizable.
The embodiments of this application do not limit the positions of the display, the camera, and the terminal device. For example, the terminal device may be an in-vehicle terminal device on which a camera, a display, etc. are provided, in which case the display, camera, and terminal device are integrated. Alternatively, the terminal device is an in-vehicle terminal device while the camera and display are not integrated on it. FIG. 2B is a schematic diagram of a car cockpit equipped with the control apparatus based on air gestures provided by an embodiment of this application. Referring to FIG. 2B, the control apparatus based on air gestures is integrated on the in-vehicle terminal, while the display may be the vehicle's central control display (main screen) or a screen set at the rear of a vehicle seat (secondary screen), etc. The camera may be the camera of the car's DMS or CMS, not shown in the figure. The in-vehicle terminal is connected to the camera, the main screen, and the secondary screen via WiFi or the like.
203. Continuously adjust the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the adjustment amount of the continuous adjustment being positively correlated with that first distance; or, the adjustment amount of the continuous adjustment is positively correlated with the holding time of the air gesture within the shooting range.
For example, from the moment the terminal device recognizes the air gesture until the gesture moves out of the shooting range, the terminal device continuously adjusts the target function corresponding to the air gesture; that is, the target function is adjusted continuously while the air gesture is held within the shooting range. For instance, when the adjustment amount is positively correlated with the holding time, if the target function corresponding to the air gesture is increasing the volume, it is adjusted by 10% per second; when the adjustment amount is positively correlated with the first distance, if the target function is fast-forward, the video fast-forwards 10 minutes for every 3 centimeters the air gesture moves within the shooting range.
With the control method based on air gestures provided by the embodiments of this application, after a user makes an air gesture within the camera's shooting range, the camera collects it. After the terminal device recognizes the corresponding target function, it continuously adjusts that target function according to the holding time, movement distance, etc. of the air gesture within the shooting range, so that the target function changes gradually as it is adjusted. This lets the user decide in time whether to stop the adjustment, avoiding the repeated corrections caused by being unable to adjust to the right value in one step, with simple operation; it also reduces the time the user spends watching the screen while adjusting, improving driving safety. In addition, a front passenger or rear-seat user can accurately adjust the target function through remote air gesture input without getting up to adjust it by touch, greatly improving convenience and user experience.
Optionally, to remind the user that the terminal device has recognized the air gesture and woken up the air gesture function, the display shows a function origin, the function origin indicating that the terminal device has recognized the air gesture and woken up the air gesture function. For an example, see FIG. 3.
FIG. 3 is a schematic diagram of the function origin in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 3, a camera is set at the upper left corner of the in-vehicle terminal, shown as the black circle in the figure. Initially, the in-vehicle terminal plays a video. When the user makes an air gesture of translating two fingers to the right within the camera's shooting range, the in-vehicle terminal recognizes that the function of this gesture is adjusting the video progress; it therefore pops up the progress bar with the current playback position and displays the function origin on the progress bar, shown as the black ring in the figure, so that the user knows the terminal device has recognized the air gesture and woken up the air gesture function.
It should be noted that although FIG. 3 illustrates the embodiment with the camera set at the upper left corner of the in-vehicle terminal, the embodiments of this application are not limited to this; in other optional implementations, the in-vehicle terminal manufacturer can position the camera flexibly.
In the above embodiments, an air gesture can be both a wake-up gesture and a functional gesture, where the wake-up gesture wakes up the air gesture function and the functional gesture adjusts the target function; or, the wake-up gesture and the functional gesture are different air gestures. For examples, see FIG. 4A and FIG. 4B.
FIG. 4A is a schematic diagram of one process of the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 4A, when the wake-up gesture and the functional gesture are different air gestures, after the terminal device recognizes the wake-up gesture it wakes up the air gesture function, meaning that the terminal device responds to the functional gesture made by the user and adjusts the target function according to that gesture's holding time or movement distance; the adjustment mode based on holding time may be called the time adjustment mode, and the mode based on movement distance the spatial adjustment mode.
After the air gesture function is woken up, the terminal device prompts the user by voice, animation, etc.: the air gesture function has been woken up, please make a functional gesture. The user then makes a functional gesture; the terminal device recognizes the corresponding target function and adjusts it according to the gesture's holding time, movement distance, etc. within the shooting range. The target function may be any of volume adjustment, audio and video progress adjustment, air conditioning temperature adjustment, seat back height adjustment, 360° surround view angle adjustment, window height adjustment, sunroof size adjustment, air conditioning air volume adjustment, and ambient light brightness adjustment. With this scheme, the terminal device can flexibly adjust any continuously adjustable function.
In addition, in a smart cockpit environment the user's body may sway due to braking, bumpy roads, etc., and the camera's shooting range in the cockpit may be rather large. To prevent the terminal device from misjudging actions such as the user reaching for a cup in front of the central control screen or a tissue in front of the windshield as an intended air gesture, the embodiment of FIG. 4A sets a dedicated wake-up gesture for the air gesture function, such as an air gesture with five fingers held vertically upward. If the terminal device detects that the user makes this gesture and holds it longer than a preset time, such as 3 seconds, it considers that the user wants to wake up the air gesture function; the terminal device then turns on the air gesture function and recognizes the air gestures collected by the camera.
FIG. 4B is a schematic diagram of another process of the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 4B, when the air gesture is both a wake-up gesture and a functional gesture, after the terminal device recognizes the air gesture it wakes up the air gesture function, determines the target function, and adjusts the target function according to the gesture's holding time or movement distance.
In the above embodiments, the target function corresponding to the air gesture is a continuously adjustable function of the application currently displayed on the screen, and the continuously adjustable functions of different applications correspond to the same air gesture.
When the terminal device has many continuously adjustable functions, setting a different air gesture for each would make the gestures especially numerous and lead users to confuse them. Therefore, the same air gesture can be set for the continuously adjustable functions of different applications. For an example, see FIG. 5.
FIG. 5 is a schematic diagram of a display screen and an air gesture in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 5, for the same air gesture of moving two fingers to the right: when the in-vehicle terminal is on the video playback interface, the target function of this gesture is fast-forward, and the floating window next to the function origin displays "fast forward" so the user knows the gesture's target function; when the in-vehicle terminal is on the air conditioning interface, the target function is raising the temperature, and the floating window next to the function origin displays "increase temperature" so the user knows the gesture's target function. With this scheme, setting the same air gesture for different applications avoids the problem of users being unable to distinguish too many gestures and reduces the learning cost; using one gesture for similar functions also fits human-computer interaction logic.
In the above embodiments, the target function corresponding to the air gesture is a continuously adjustable function of the application currently displayed on the screen, and different continuously adjustable functions of the same application correspond to different air gestures.
A single application on the terminal device may have several continuously adjustable functions; for example, those of a video playback application include volume, progress, etc., and those of the air conditioner include temperature, air volume, etc. To make adjusting the different continuously adjustable functions of one application convenient, different air gestures can be set for them. For an example, see FIG. 6.
FIG. 6 is a schematic diagram of a display screen and air gestures in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 6, when the in-vehicle terminal is on the video playback interface, the target function of the two-fingers-right air gesture is fast-forward, and the floating window next to the function origin displays "fast forward" so the user knows that gesture's target function; also on the video playback interface, the target function of the two-fingers-up air gesture is increasing the volume, and the floating window displays "increase volume" so the user knows that gesture's target function. With this scheme, setting different air gestures for different continuously adjustable functions of the same application makes adjusting them convenient and quick.
In the above embodiments, after recognizing the target function of the air gesture, the terminal device can determine the adjustment amount according to the gesture's holding time, movement distance, etc. within the shooting range, and then adjust the target function according to that adjustment amount. For an example, refer to FIG. 7.
FIG. 7 is a flowchart of the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 7, this embodiment includes the following steps:
301. The terminal device detects that the user makes an air gesture.
For example, the camera sends the captured video stream of the shooting range to the processor of the terminal device, which analyzes the stream; if the terminal device determines that the latest captured image frame, or several consecutive frames, contains an air gesture, the user is considered detected as making an air gesture.
302. Determine the location area of the air gesture from the image frame.
For an example, see FIG. 8, which is a schematic diagram of an air gesture in the control method based on air gestures provided by an embodiment of this application.
Referring to FIG. 8, the user makes a gesture of raising the index and middle fingers within the shooting range. After the camera collects the gesture, hand target detection technology is used to extract the gesture's location area from the collected image, shown by the dashed box in the figure. The hand target detection technology is, for example, one based on a deep model such as a single-shot multi-box detector (SSD).
303. Extract the key points of the air gesture from the location area.
Referring again to FIG. 8, after the location area is determined, the terminal device inputs the picture corresponding to it into a hand key point detection model and uses that model to detect the key points, shown as the black dots in the figure. Here the hand key point detection model is, for example, a model trained with an OpenPose model using hand segmentation technology, hand key point positioning technology, and the like.
304. Activate the air gesture function.
305. Determine the target function of the air gesture.
For example, the terminal device uses the detected key points to recognize the air gesture, e.g., recognizing that the user makes a single-finger-up gesture or a two-finger gesture. Then, by querying a database or the like, it determines whether the gesture is related to the application on the current interface; if the gesture corresponds to a certain function of that application, that function becomes the target function; if it corresponds to no function of the application, the terminal device does not respond, or prompts the user that the air gesture is incorrect.
306. Select the continuous adjustment mode; if the spatial adjustment mode is selected, perform steps 307 to 308; if the time adjustment mode is selected, perform steps 309 to 310.
For example, the spatial adjustment mode refers to determining the adjustment amount according to the first distance the air gesture moves within the shooting range; in this mode, the continuously adjusted amount is positively correlated with that first distance. The time adjustment mode refers to determining the adjustment amount according to the holding time of the air gesture within the shooting range; in this mode, the continuously adjusted amount is positively correlated with that holding time.
The spatial or time adjustment mode can be set before the terminal device leaves the factory, or it can be exposed for users to set themselves. For an example, see FIG. 9.
FIG. 9 is a schematic diagram of the process of setting the continuous adjustment mode in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 9, the user taps the settings icon on the terminal device's interface to enter the settings interface and taps the drop-down menu button of "Continuous adjustment mode"; a floating window pops up on the terminal device interface for the user to select "Adjust by time", "Adjust by space", or "Off".
307. Determine the first distance.
For example, the terminal device detects the start point and end point of the air gesture's movement and determines from them the first distance the gesture moves within the shooting range.
308. Adjust the target function according to the first distance.
For example, the terminal device continuously adjusts the target function while the air gesture moves, until the gesture has moved the first distance.
309. Determine the holding time of the air gesture within the shooting range.
310. Adjust the target function according to the holding time.
For example, the terminal device continuously adjusts the target function while the air gesture moves, until the gesture moves out of the shooting range.
Below, how the terminal device performs continuous adjustment in the spatial adjustment mode and in the time adjustment mode in the above embodiments is described in detail.
First, continuous adjustment in the spatial adjustment mode.
In the spatial adjustment mode, the adjustment amount of the continuous adjustment is positively correlated with the first distance the air gesture moves within the camera's shooting range. The first distance is determined at least according to the camera's focal length, the distance between the air gesture and the camera's optical center, and a second distance, where the second distance indicates the distance the air gesture in the camera's imaging surface moves while the air gesture moves within the camera's shooting range. For an example, see FIG. 10.
FIG. 10 is a schematic diagram of the process of determining the first distance in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 10, the dot-filled part is the camera's imaging surface, the plane of the line AB is a plane parallel to the imaging surface, point O is the camera's optical center, and the base of the double-dot-dash triangle represents the camera's maximum field of view; the preset distance L_max lies within this field of view. The first distance L_shift is the distance the user's air gesture moves within the shooting range; the first distance L_shift cannot exceed the preset distance L_max, which may be a fixed value or a value related to the user's arm length and the like. The focal length f of the camera depends on the camera's model and type. The preset distance L_max corresponds to a maximum imaging-surface distance; that is, when the user's air gesture moves the preset distance L_max, the air gesture in the camera's imaging surface moves the maximum imaging-surface distance.
The second distance l represents the distance the air gesture in the camera's imaging surface moves while the air gesture moves within the camera's shooting range. For the terminal device, among these parameters the focal length f of the camera is known, the distance F between the air gesture and the camera's optical center can be obtained by measurement, the second distance l can be obtained by tracking the air gesture, and the first distance L_shift is the quantity to be solved. After solving for the first distance L_shift, the terminal device can determine the adjustment amount from it.
It should be noted that although FIG. 10 illustrates the embodiment with the segment AB representing the preset distance L_max, i.e., the air gesture moving from point A to point B, the embodiments of this application are not limited to this; in other feasible implementations, the air gesture can also move from point C to point D, i.e., the segment CD represents the preset distance L_max.
Below, how the terminal device obtains the distance F between the air gesture and the camera's optical center and the second distance l, how it calculates the first distance L_shift, and how it determines the continuous adjustment according to the first distance L_shift are described in detail.
First, the process of obtaining the distance F between the air gesture and the camera's optical center.
For example, a sensor of the terminal device determines the distance F between the air gesture and the camera's optical center. The sensor can be set separately or integrated into the camera. Below, taking a sensor integrated into the camera as an example, how the terminal device obtains the distance F is described in detail.
When the camera is a monocular camera, the terminal device first performs target recognition through image matching, so as to recognize the air gesture. Then, the terminal device estimates the distance F between the air gesture and the camera's optical center according to the size of the air gesture in the image. Accurate recognition of the air gesture is the first step in estimating the distance F; to achieve it, a sample feature database needs to be established and continuously maintained, locally on the terminal or remotely, to ensure that the database contains all air gestures.
When the camera is a binocular camera, the terminal device uses it to capture the shooting range and obtain two images, and determines from the two images the parallax corresponding to the air gesture: the farther the air gesture is from the camera, the smaller the parallax; the closer it is, the greater the parallax. Then, according to a preset correspondence between parallax and distance, the terminal device can determine the distance F between the air gesture and the camera's optical center. For an example, see FIG. 11.
FIG. 11 is a schematic diagram of the process of determining F in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 11, similar to human eyes, in the two images captured by the binocular camera the air gesture is not located at the same position; therefore, when the two images are overlaid, the air gestures do not coincide, and the distance between the two air gestures is called the parallax. After the terminal device obtains the parallax, it can determine the distance F from it.
When the camera is a TOF camera, the terminal device continuously sends light pulses toward the air gesture, uses the sensor to receive the light returned by the air gesture, and determines the distance F by detecting the flight time of the light pulses, where the flight time of a light pulse is its round-trip time.
It should be noted that when the air gesture is located at any position in the plane of AB in FIG. 10, the distance F between the air gesture and the camera's optical center is the same.
Second, the process of obtaining the second distance l.
For example, the second distance l can be obtained through image processing. While the user's air gesture moves within the shooting range, the gesture imaged on the camera's imaging surface also moves. The terminal device obtains the hand location area using deep learning technology. During the movement, the terminal device determines a first position and a second position of the air gesture in the camera's imaging surface, and determines the second distance l according to the number of pixels between the two positions and the pixel size. For an example, see FIG. 12.
FIG. 12 is a schematic diagram of the process of determining the second distance in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 12, the user makes a two-fingers-right air gesture within the camera's shooting range; during the movement, the gesture imaged on the imaging surface also moves. The terminal device determines the location area of the air gesture in the imaging surface and tracks it; the dashed line in the figure shows the first position and the solid line the second position. The terminal device determines the number of pixels between the center of the location area at the first position (shown by the gray filled circle in the figure) and the center of the location area at the second position; from that number of pixels and the pixel size, the second distance l can be determined.
With this scheme, by continuously tracking the hand location area, the second distance l can be determined from that area.
Third, determining the first distance L_shift.
After the terminal device determines the distance F and the second distance l, it can determine the first distance L_shift according to the following formula:
L_shift = F · l / f
In this formula, f represents the focal length of the camera, F represents the distance between the air gesture and the camera's optical center, and l represents the second distance.
In the embodiments of this application, to determine the first distance L_shift, a preset model can also be trained in advance; the camera's focal length f, the distance F between the air gesture and the camera's optical center, and the second distance l are input into the pre-trained preset model, which is used to determine the first distance L_shift.
During training of the preset model, a sample data set is obtained; it contains multiple sets of sample data, where one set of sample data contains the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance. The multiple sets of sample data in the sample data set are used to train the preset model. The training process can be executed by the terminal device or by a cluster server, which the embodiments of this application do not limit.
Fourth, the process of determining the continuous adjustment according to the first distance L_shift.
The terminal device can continuously adjust the target function corresponding to the air gesture according to the first distance L_shift and a unit-distance adjustment amount.
For example, a unit-distance adjustment amount may be preset on the terminal device, representing the adjustment of the target function per unit distance the air gesture moves; the unit distance may be, for example, 1 centimeter or 1 decimeter, which is not limited in the embodiments of this application. In this mode there is no need to set the preset distance L_max; that is, there is no need to consider how far the air gesture travels within the shooting range.
As another example, a preset distance L_max may be preset on the terminal device, and a unit-distance adjustment amount determined from L_max and the total adjustment range of the target function, where the unit-distance adjustment amount is the ratio of the total adjustment range to the preset distance. Suppose the target function is fast-forward, the video is 90 minutes long (i.e., the total adjustment range is 90 minutes), and the preset distance L_max is 30 centimeters; then the unit-distance adjustment amount is 90/30 = 3. That is, the video fast-forwards 3 minutes for every centimeter the air gesture moves. If the first distance L_shift is 5 centimeters, the video fast-forwards 15 minutes as the gesture moves 5 centimeters from its initial position. This adjustment process is shown in FIG. 13.
FIG. 13 is a schematic diagram of the process of adjusting the target function in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 13, after the terminal device detects the air gesture, the function origin appears on the screen. The terminal device keeps monitoring the gesture; when it detects that the gesture moves, it determines the movement direction and the first distance, determines the target function from the movement direction, and then adjusts the target function according to the first distance. The part between the dashed-line function origin and the solid-line function origin in the figure is the final adjustment amount.
From the above it can be seen that the total adjustment amount is essentially the ratio of the first distance L_shift to the preset distance L_max; that is, the total adjustment amount is the proportion, share, or percentage of the preset distance L_max that the first distance L_shift covers.
When the unit-distance adjustment amount is determined from the preset distance L_max and the total adjustment range, L_max can be a fixed value. In that case the fixed value represents a comfortable distance for the user to move an arm; it can be obtained by measuring the arm-movement distance of users of different ages, genders, and heights when they swing their arms, and can be the average of those distances for most users. Alternatively, the preset distance L_max can be a value positively correlated with the arm length of the user's arm.
When the preset distance L_max is positively correlated with arm length, after detecting the air gesture the terminal device recognizes the user's wrist skeleton point and elbow skeleton point, determines the user's arm length from them, and determines the preset distance according to the arm length. For an example, see FIG. 14.
FIG. 14 is a schematic diagram of the process of detecting arm length in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 14, after the user makes an air gesture, the terminal device detects the user's wrist and elbow skeleton points using deep learning or the like; the two skeleton points are shown as black filled dots in the figure. Then the terminal device uses a three-dimensional vision system to determine the three-dimensional coordinates of the wrist skeleton point and the elbow skeleton point; from these two coordinates the distance between the two points can be determined, giving the user's arm length (i.e., the length of the forearm). The three-dimensional vision system can be a binocular camera, a multi-camera rig, a TOF camera, etc. Once the arm length is determined, the corresponding preset distance L_max can be found by searching the database, where the database stores the mapping relationship between arm lengths and the preset distances L_max corresponding to them. For an example, see Table 1.
Table 1
| Arm length (cm) | Preset distance L_max (cm) |
| 35 | 27 |
| 36 | 27.5 |
| 37 | 28 |
| 38 | 28.5 |
| 39 | 29 |
| 40 | 30 |
| 41 | 30.5 |
| 42 | 31 |
| 43 | 32 |
| 44 | 33 |
Table 1 shows part of the mapping relationship between arm length and the preset distance L_max. From Table 1 it can be seen that the preset distances L_max corresponding to different arm lengths may be different.
Next, continuous adjustment in the time adjustment mode.
In the time adjustment mode, the adjustment amount of the continuous adjustment is positively correlated with the holding time of the air gesture within the shooting range. The user can keep moving the air gesture during the holding time, or, after moving it for a while, hold it still but keep it within the shooting range. That is, the holding duration includes a first duration and a second duration: during the first time period, corresponding to the first duration, the air gesture keeps translating; during the second time period, corresponding to the second duration, the air gesture is stationary; and the first time period is before the second time period. For an example, see FIG. 15.
FIG. 15 is a schematic diagram of the process of adjusting the target function in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 15, after the terminal device detects the air gesture, the function origin appears on the screen. The terminal device keeps monitoring the gesture; when it detects movement, it determines the direction and from it the target function. Afterwards, it keeps adjusting the target function according to the gesture's holding time. During the first duration the gesture keeps translating to the right; the adjustment over this period is the distance between the two dashed-line function origins in the figure. After the first duration, if the user feels further adjustment is needed, the gesture is kept in place: it no longer translates but stays still, yet the target function continues to be adjusted. The adjustment of the second duration is the distance between the second dashed-line function origin and the solid-line function origin in the figure. The total adjustment is the sum of the adjustments of the first and second durations. During this process the camera captures a frame at a preset interval, e.g., every 0.1 seconds, and keeps adjusting the target function as long as the frame still contains the air gesture; that is, whether the gesture keeps moving or stays still, the adjustment continues. Here the first duration is at least the time the terminal device needs to recognize the air gesture, and the second duration lasts from when the terminal device recognizes the gesture until the user, feeling the adjustment is complete, moves the gesture out. In addition, during the second duration the gesture need not stay completely still; it may move continuously or intermittently.
During the adjustment, the target function is continuously adjusted according to a preset unit-time adjustment amount. For example, for increasing the volume, if the unit-time adjustment amount is 10% per second, the volume increases by 10% for every second the gesture is held. As another example, for video fast-forward, if the unit-time adjustment amount is 2% per second, the progress is adjusted by 2% for every second the gesture is held. Afterwards, the gesture moves out of the camera's shooting range and the continuous adjustment ends. For an example, see FIG. 16.
FIG. 16 is a schematic diagram of the process of detecting an air gesture in the control method based on air gestures provided by an embodiment of this application. Referring to FIG. 16, the camera captures a frame at a preset interval, e.g., every 0.1 seconds, and keeps adjusting the target function as long as the frame still contains the air gesture. If a frame is detected not to contain the gesture, it means the user considers the target function adjusted to the ideal state and has moved the gesture out.
FIG. 17 is a schematic structural diagram of a control apparatus based on air gestures provided by an embodiment of this application. The control apparatus involved in this embodiment may be a terminal device or a chip applied to a terminal device, and can be used to perform the functions of the terminal device in the foregoing embodiments. As shown in FIG. 17, the control apparatus 100 based on air gestures may include a display unit 11 and a processing unit 12.
When the control apparatus 100 is used to perform the above adjustment according to space, the display unit 11 is used to enable the display to display the application's user interface; the processing unit 12 is used to obtain the air gesture collected by the camera and continuously adjust the corresponding target function according to the first distance the gesture moves within the camera's shooting range, where the air gesture is a gesture whose distance from the display is greater than a preset threshold, the target function is a continuously adjustable function of the application, and the adjustment amount of the continuous adjustment is positively correlated with the first distance.
In one feasible design, the first distance is determined at least according to the focal length of the camera, the distance between the air gesture and the optical center of the camera, and a second distance, where the second distance indicates the distance the air gesture in the camera's imaging surface moves while the air gesture moves within the camera's shooting range.
In one feasible design, before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is also used to determine a first position and a second position during the movement of the air gesture in the camera's imaging surface, and to determine the second distance according to the number of pixels between the first position and the second position and the pixel size.
In one feasible design, the first distance is less than or equal to a preset distance, and the preset distance is positively correlated with the arm length of the user's arm.
In one feasible design, before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is also used to recognize the user's wrist skeleton point and elbow skeleton point, determine the user's arm length according to the three-dimensional coordinates of the wrist skeleton point and the elbow skeleton point, and determine the preset distance according to the arm length.
In one feasible design, when continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is used to continuously adjust the target function according to the unit adjustment amount from the moment the air gesture is recognized until the air gesture has moved the first distance, where the unit-distance adjustment amount is the ratio of the total adjustment range of the target function to the preset distance.
In one feasible design, when continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is used to continuously adjust the target function according to the ratio of the first distance to the preset distance.
In one feasible design, before continuously adjusting the target function corresponding to the air gesture according to the first distance the air gesture moves within the camera's shooting range, the processing unit 12 is also used to input the focal length of the camera, the distance between the air gesture and the optical center of the camera, and the second distance into a pre-trained preset model, and to determine the first distance using the preset model.
In one feasible design, the processing unit 12 is also used to obtain a sample data set containing multiple sets of sample data, where one set of sample data contains the focal length of a sample camera, the distance between a sample air gesture and the optical center of the sample camera, and a sample second distance, and to train the preset model using the multiple sets of sample data in the sample data set.
When the air-gesture-based control apparatus 100 is used to perform the foregoing duration-based continuous adjustment of the target function, the display unit 11 is configured to enable a display to display a user interface of an application, and the processing unit 12 is configured to obtain an air gesture captured by a camera, where the air gesture is a gesture whose distance from the display is greater than a preset threshold, and to continuously adjust the target function corresponding to the air gesture according to the holding duration of the air gesture within the shooting range of the camera, where the target function is a continuously adjustable function of the application and the amount of the continuous adjustment is positively correlated with the holding duration.

In a possible design, the holding duration includes a first duration and a second duration; during the first time period corresponding to the first duration the air gesture keeps translating, during the second time period corresponding to the second duration the air gesture is stationary, and the first time period precedes the second time period.

In a possible design, when continuously adjusting the target function corresponding to the air gesture according to the holding duration of the air gesture within the shooting range of the camera, the processing unit 12 is configured to determine a per-unit-time adjustment amount, and, from the moment the air gesture is recognized, to continuously adjust the target function corresponding to the air gesture according to the per-unit-time adjustment amount until the air gesture no longer appears within the shooting range of the camera.

In a possible design, the display unit 11 is further configured to display a function origin on the display when the terminal device recognizes the air gesture, where the function origin indicates that the terminal device has recognized the air gesture and woken up the air-gesture function.

In a possible design, the continuously adjustable functions of different applications correspond to the same air gesture.

In a possible design, different continuously adjustable functions of the same application correspond to different air gestures.

In a possible design, the target function includes any one of the following functions: volume adjustment, audio/video progress adjustment, air-conditioning temperature adjustment, seat-back height adjustment, 360° surround-view angle adjustment, window height adjustment, sunroof size adjustment, air-conditioning airflow adjustment, and ambient-light brightness adjustment.

In a possible design, when identifying an air gesture made by the user within the shooting range of the camera, the processing unit 12 is configured to continuously photograph the shooting range with the camera and determine whether the most recently captured image frame contains the air gesture; if the most recently captured image frame contains the air gesture, the air gesture made by the user within the shooting range of the camera is identified.
FIG. 18 is a schematic structural diagram of a terminal device provided by an embodiment of this application. As shown in FIG. 18, the terminal device 200 includes:

a processor 21 and a memory 22;

the memory 22 stores computer-executable instructions;

the processor 21 executes the computer-executable instructions stored in the memory 22 to implement the method performed by the terminal device described above.

Optionally, the terminal device 200 further includes a display 23 (shown by the dashed box in FIG. 18) configured to display a user interface of an application.

The processor 21, the memory 22, and the display 23 may be connected via a bus 24.
In the implementation of the above apparatus, the memory and the processor are electrically connected, directly or indirectly, to enable data transmission or interaction; that is, the memory and the processor may be connected through an interface or may be integrated together. For example, these elements may be electrically connected to one another through one or more communication buses or signal lines, such as through a bus. The memory stores computer-executable instructions for implementing the foregoing control method, including at least one software functional module that may be stored in the memory in the form of software or firmware, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory.

The memory may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), or an electrically erasable programmable read-only memory (EEPROM). The memory is configured to store a program, and the processor executes the program after receiving an execution instruction. Further, the software programs and modules in the memory may also include an operating system, which may include various software components and/or drivers for managing system tasks (such as memory management, storage device control, and power management) and may communicate with various hardware or software components, thereby providing a running environment for other software components.

The processor may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
On the above basis, this application further provides a chip, including a logic circuit and an input interface, where the input interface is configured to obtain data to be processed, and the logic circuit is configured to perform, on the data to be processed, the technical solution on the terminal device side in the foregoing method embodiments to obtain processed data.

Optionally, the chip may further include an output interface configured to output the processed data.

The data to be processed obtained by the input interface includes the first movement distance of the air gesture within the shooting range and the like, and the processed data output by the output interface includes the continuously adjusted adjustment amount and the like.

This application further provides a computer-readable storage medium for storing a program, and the program, when executed by a processor, performs the technical solution of the terminal device in the foregoing embodiments.

An embodiment of this application further provides a computer program product that, when run on a terminal device, causes the terminal device to perform the technical solutions in the foregoing embodiments.

A person of ordinary skill in the art should understand that all or some of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc; the specific medium type is not limited in this application.
Claims (19)
- A control method based on air gestures, characterized by comprising: enabling a display to display a user interface of an application; obtaining an air gesture captured by a camera, the air gesture being a gesture whose distance from the display is greater than a preset threshold; and continuously adjusting a target function corresponding to the air gesture according to a first distance that the air gesture moves within a shooting range of the camera, the target function being a continuously adjustable function of the application, and an amount of the continuous adjustment being positively correlated with the first distance.
- The method according to claim 1, characterized in that the first distance is determined at least from a focal length of the camera, a distance between the air gesture and an optical center of the camera, and a second distance, the second distance indicating a distance that the air gesture moves on an imaging plane of the camera while the air gesture moves within the shooting range of the camera.
- The method according to claim 1 or 2, characterized in that the first distance is less than or equal to a preset distance, and the preset distance is positively correlated with a length of the user's arm.
- The method according to claim 3, characterized in that the continuously adjusting, according to the first distance that the air gesture moves within the shooting range of the camera, the target function corresponding to the air gesture comprises: from the moment the air gesture is recognized, continuously adjusting the target function corresponding to the air gesture according to a per-unit-distance adjustment amount until the air gesture has moved the first distance, the per-unit-distance adjustment amount being a ratio of a total adjustment amount of the target function to the preset distance.
- The method according to claim 3, characterized in that the continuously adjusting, according to the first distance that the air gesture moves within the shooting range of the camera, the target function corresponding to the air gesture comprises: continuously adjusting the target function corresponding to the air gesture according to a ratio of the first distance to the preset distance.
- The method according to any one of claims 1 to 5, characterized by further comprising: when a terminal device recognizes the air gesture, enabling the display to display a function origin, the function origin indicating that the terminal device has recognized the air gesture and woken up the air-gesture function.
- The method according to any one of claims 1 to 6, characterized in that continuously adjustable functions of different applications correspond to the same air gesture.
- The method according to any one of claims 1 to 7, characterized in that different continuously adjustable functions of the same application correspond to different air gestures.
- The method according to any one of claims 1 to 8, characterized in that the target function comprises any one of the following functions: volume adjustment, audio/video progress adjustment, air-conditioning temperature adjustment, seat-back height adjustment, 360° surround-view angle adjustment, window height adjustment, sunroof size adjustment, air-conditioning airflow adjustment, and ambient-light brightness adjustment.
- A control method based on air gestures, characterized by comprising: enabling a display to display a user interface of an application; obtaining an air gesture captured by a camera, the air gesture being a gesture whose distance from the display is greater than a preset threshold; and continuously adjusting a target function corresponding to the air gesture according to a holding duration of the air gesture within a shooting range of the camera, the target function being a continuously adjustable function of the application, and an amount of the continuous adjustment being positively correlated with the holding duration.
- The method according to claim 10, characterized in that the holding duration comprises a first duration and a second duration; during a first time period corresponding to the first duration the air gesture keeps translating, during a second time period corresponding to the second duration the air gesture is stationary, and the first time period precedes the second time period.
- The method according to claim 10 or 11, characterized by further comprising: when a terminal device recognizes the air gesture, enabling the display to display a function origin, the function origin indicating that the terminal device has recognized the air gesture and woken up the air-gesture function.
- The method according to any one of claims 10 to 12, characterized in that continuously adjustable functions of different applications correspond to the same air gesture.
- The method according to any one of claims 10 to 13, characterized in that different continuously adjustable functions of the same application correspond to different air gestures.
- The method according to any one of claims 10 to 14, characterized in that the target function comprises any one of the following functions: volume adjustment, audio/video progress adjustment, air-conditioning temperature adjustment, seat-back height adjustment, 360° surround-view angle adjustment, window height adjustment, sunroof size adjustment, air-conditioning airflow adjustment, and ambient-light brightness adjustment.
- An electronic device, characterized by comprising: one or more processors; one or more memories; and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the method according to any one of claims 1 to 9, or, when executed by the electronic device, cause the electronic device to perform the method according to any one of claims 10 to 15.
- A computer-readable storage medium storing instructions, characterized in that the instructions, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1 to 9, or, when executed by the electronic device, cause the electronic device to perform the method according to any one of claims 10 to 15.
- A chip, characterized in that the chip comprises a programmable logic circuit and an input interface, the input interface being configured to obtain data to be processed, and the logic circuit being configured to perform, on the data to be processed, the method according to any one of claims 1 to 9, or the method according to any one of claims 10 to 15.
- An automated driving system, characterized by comprising: a vehicle body, and a camera, a display, and an air-gesture-based control apparatus arranged on the vehicle body, the camera and the display each being connected to the air-gesture-based control apparatus, wherein: the display is configured to display a user interface of an application; the camera is configured to capture an air gesture made by a user, the air gesture being a gesture whose distance from the display is greater than a preset threshold; and the air-gesture-based control apparatus is configured to perform the method according to any one of claims 1 to 9, or the method according to any one of claims 10 to 15.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/088219 WO2021217570A1 (zh) | 2020-04-30 | 2020-04-30 | Air gesture-based control method, apparatus, and system |
EP20933855.7A EP4137914A4 (en) | 2020-04-30 | 2020-04-30 | METHOD AND APPARATUS FOR IN-AIR GESTURE-BASED CONTROL, AND SYSTEM |
CN202080004890.7A CN112639689A (zh) | 2020-04-30 | 2020-04-30 | Air gesture-based control method, apparatus, and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/088219 WO2021217570A1 (zh) | 2020-04-30 | 2020-04-30 | Air gesture-based control method, apparatus, and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021217570A1 true WO2021217570A1 (zh) | 2021-11-04 |
Family
ID=75291259
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/088219 WO2021217570A1 (zh) | 2020-04-30 | 2020-04-30 | 基于隔空手势的控制方法、装置及系统 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4137914A4 (zh) |
CN (1) | CN112639689A (zh) |
WO (1) | WO2021217570A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115480662A (zh) * | 2021-05-28 | 2022-12-16 | Huawei Technologies Co., Ltd. | Method and apparatus for separating sensing regions during hover control, and hover-control remote controller |
CN113325987A (zh) * | 2021-06-15 | 2021-08-31 | Shenzhen Horizon Robotics Technology Co., Ltd. | Method and apparatus for guiding an operating body to perform air operations |
CN113542832B (zh) * | 2021-07-01 | 2023-07-04 | Shenzhen Skyworth-RGB Electronic Co., Ltd. | Display control method, display apparatus, and computer-readable storage medium |
CN116710979A (zh) * | 2021-12-31 | 2023-09-05 | Huawei Technologies Co., Ltd. | Human-computer interaction method, system, and processing apparatus |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8726194B2 (en) * | 2007-07-27 | 2014-05-13 | Qualcomm Incorporated | Item selection using enhanced control |
CN103782255B (zh) * | 2011-09-09 | 2016-09-28 | Thales Avionics, Inc. | Eye-tracking control of vehicle entertainment systems |
US9141198B2 (en) * | 2013-01-08 | 2015-09-22 | Infineon Technologies Ag | Control of a control parameter by gesture recognition |
US10514768B2 (en) * | 2016-03-15 | 2019-12-24 | Fisher-Rosemount Systems, Inc. | Gestures and touch in operator interface |
CN106055098B (zh) * | 2016-05-24 | 2019-03-15 | Beijing Xiaomi Mobile Software Co., Ltd. | Air gesture operation method and apparatus |
CN110045819B (zh) * | 2019-03-01 | 2021-07-09 | Huawei Technologies Co., Ltd. | Gesture processing method and device |
CN110058777B (zh) * | 2019-03-13 | 2022-03-29 | Huawei Technologies Co., Ltd. | Method for launching a shortcut function and electronic device |
Application events (2020):
- 2020-04-30: CN application CN202080004890.7A filed, published as CN112639689A (active, pending)
- 2020-04-30: PCT application PCT/CN2020/088219 filed, published as WO2021217570A1 (status unknown)
- 2020-04-30: EP application EP20933855.7A filed, published as EP4137914A4 (active, pending)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102262438A (zh) * | 2010-05-18 | 2011-11-30 | Microsoft Corporation | Gestures and gesture recognition for manipulating a user interface |
US20130159940A1 (en) * | 2011-08-22 | 2013-06-20 | International Technological University | Gesture-Controlled Interactive Information Board |
CN102457688A (zh) * | 2011-12-30 | 2012-05-16 | Sichuan Changhong Electric Co., Ltd. | Intelligent television volume and channel adjustment method |
CN103914126A (zh) * | 2012-12-31 | 2014-07-09 | Tencent Technology (Shenzhen) Co., Ltd. | Multimedia player control method and apparatus |
CN106507201A (zh) * | 2016-10-09 | 2017-03-15 | LeTV Holdings (Beijing) Co., Ltd. | Video playback control method and apparatus |
CN109947249A (zh) * | 2019-03-15 | 2019-06-28 | Nubia Technology Co., Ltd. | Interaction method for wearable device, wearable device, and computer storage medium |
CN110058682A (zh) * | 2019-03-15 | 2019-07-26 | Nubia Technology Co., Ltd. | Wearable device control method, wearable device, and computer-readable storage medium |
Non-Patent Citations (1)
Title |
---|
See also references of EP4137914A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117032447A (zh) * | 2022-05-31 | 2023-11-10 | Honor Device Co., Ltd. | Air gesture interaction method and apparatus, electronic chip, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN112639689A (zh) | 2021-04-09 |
EP4137914A1 (en) | 2023-02-22 |
EP4137914A4 (en) | 2023-05-24 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20933855; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2020933855; Country of ref document: EP; Effective date: 20221118 |
| NENP | Non-entry into the national phase | Ref country code: DE |