CN117827075A - Gesture operation method and device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN117827075A
CN117827075A
Authority
CN
China
Prior art keywords: interface, gesture operation, application, split screen, display
Legal status: Pending
Application number: CN202410006548.0A
Other languages: Chinese (zh)
Inventor: 吴晓庆
Current Assignee: Chongqing Changan Automobile Co Ltd
Original Assignee: Chongqing Changan Automobile Co Ltd
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202410006548.0A
Publication of CN117827075A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a gesture operation method and apparatus, a device, and a computer-readable storage medium. The gesture operation method includes: receiving a first gesture operation on a display interface while a gesture operation switch is on, where the first gesture operation may be performed at any position within the full screen range of the display screen, and the display interface is either a full screen interface or a split screen interface, the split screen interface meaning that the display screen simultaneously displays at least two application interfaces; if the display interface is a split screen interface, identifying and screening the split screen interface to determine a target application interface, which is an application interface displayed in the split screen interface; and controlling the target application interface in response to the first gesture operation performed on the target application interface. With this scheme, an operation can be responded to at any position within the full screen range, so the triggering accuracy of the in-vehicle system can be improved.

Description

Gesture operation method and device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of vehicle-mounted system operation, and in particular to a gesture operation method and apparatus, a device, and a computer-readable storage medium.
Background
With the arrival of the era of intelligent vehicles, the in-vehicle system has become one of the important pieces of equipment in modern automobiles. However, while the vehicle is running, functions of the in-vehicle system often fail to respond because of inaccurate touches, and looking at the screen while operating can lead to driving safety accidents, which has become a troublesome problem.
The prior art addresses the inconvenience of touch operation while driving by making the operation buttons larger and fewer. However, even a large button responds only to an accurate touch within a certain range and cannot be triggered from an arbitrary position, so the triggering accuracy when operating the in-vehicle system tends to remain low.
Disclosure of Invention
Embodiments of the present application provide a gesture operation method and apparatus, a device, and a computer-readable storage medium, which can improve the triggering accuracy of operating the in-vehicle system.
The technical solution of the present application is implemented as follows:
An embodiment of the present application provides a gesture operation method, which includes the following steps:
receiving a first gesture operation on a display interface while a gesture operation switch is on; the first gesture operation may be performed at any position within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface; the split screen interface means that the display screen simultaneously displays at least two application interfaces;
if the display interface is the split screen interface, identifying and screening the split screen interface, and determining a target application interface corresponding to the split screen interface; the target application interface is an application interface displayed in the split screen interface;
and controlling the target application interface in response to the first gesture operation performed on the target application interface.
In the above scheme, if the display interface is the split screen interface, identifying and screening the split screen interface and determining a target application interface from the at least two application interfaces includes:
if the display interface is the split screen interface, identifying the split screen interface and determining the at least two application interfaces corresponding to the split screen interface;
and performing priority screening based on the at least two application interfaces to determine the target application interface corresponding to the split screen interface.
It can be understood that if the display interface is a split screen interface, the vehicle-mounted terminal identifies the split screen interface, determines the at least two application interfaces corresponding to it, and performs priority screening on them to determine the target application interface. This facilitates the subsequent response to the first gesture operation and control of the target application interface, thereby improving the triggering accuracy of the in-vehicle system.
In the above scheme, performing priority screening based on the at least two application interfaces to determine the target application interface corresponding to the split screen interface includes:
determining the priority value corresponding to each of the at least two application interfaces, where the priority value characterizes the priority of displaying the application interface on the display screen;
and determining the target application interface corresponding to the split screen interface from the at least two application interfaces based on their respective priority values.
It can be understood that the vehicle-mounted terminal determines the priority values corresponding to the at least two application interfaces, and selects the target application interface from them based on those values. Because the target application interface is determined according to the priority values of the application interfaces, the function control performed in response to the first gesture operation can be more accurate.
In the above solution, determining the priority value corresponding to each of the at least two application interfaces includes:
acquiring a first sub-priority value for each of the at least two application interfaces, where the first sub-priority value characterizes the importance of an application interface in the system to the user;
acquiring a second sub-priority value for each of the at least two application interfaces, where the second sub-priority value characterizes the user's operating frequency for an application interface;
acquiring a third sub-priority value for each of the at least two application interfaces, where the third sub-priority value characterizes the proportion of the display area of the display screen occupied by the application interface;
and computing the priority values of the at least two application interfaces based on the first, second, and third sub-priority values.
It can be understood that the vehicle-mounted terminal acquires the first, second, and third sub-priority values for each of the at least two application interfaces, and computes their priority values from these sub-priority values, so that the target application interface can then be determined from the priority values.
In the above solution, after receiving the first gesture operation on the display interface while the gesture operation switch is on, the method further includes:
if the display interface is the full screen interface, identifying the full screen interface and determining the target application interface it displays, where the full screen interface means that the display screen displays only one application interface in full screen;
and controlling the target application interface in response to the first gesture operation for the target application interface.
It can be understood that if the display interface is a full screen interface, the vehicle-mounted terminal identifies the full screen interface, determines the target application interface it displays, and controls that interface in response to the first gesture operation. Because the first gesture operation can trigger the target application interface's functions at any position within the full screen range of the display screen, the user does not need to look at the screen to operate, which improves driving safety.
In the above solution, the first gesture operation includes a default gesture operation and a custom gesture operation; the default gesture operation is generated by system settings, while the custom gesture operation is preset by the user;
the default gesture operation includes: a two-finger left swipe, a two-finger right swipe, a two-finger down swipe, and a two-finger up swipe; these swipes have different response effects on different application interfaces.
It can be understood that the default gestures in the vehicle-mounted terminal include multiple different gesture actions, and different gesture actions produce different functional effects, which enriches the diversity of gesture actions.
In the above scheme, the method further includes:
receiving an initial gesture operation in a custom trigger response editing mode, where the initial gesture operation may be any one or more gesture operations;
and determining the custom gesture operation based on the initial gesture operation.
It can be understood that the vehicle-mounted terminal receives the initial gesture operation in the custom trigger response editing mode and determines the custom gesture operation from it, which facilitates personalized operation and improves the user experience.
In the above solution, determining the custom gesture operation based on the initial gesture operation includes:
if the default gesture operation differs from the initial gesture operation in gesture action and also differs in effect, determining the initial gesture operation as the custom gesture operation;
if the default gesture operation is consistent with the initial gesture operation in gesture action but differs in effect, generating prompt information;
and, based on the prompt information, modifying the gesture action of the initial gesture operation or deleting the default gesture operation whose gesture action is consistent with it, and then determining the custom gesture operation.
It can be understood that when the default gesture operation differs from the initial gesture operation in both gesture action and effect, the initial gesture operation is directly determined as the custom gesture operation; when the two share the same gesture action but differ in effect, prompt information is generated, and the user either modifies the gesture action of the initial gesture operation or deletes the conflicting default gesture operation. This resolves conflicts between custom and default gesture operations and ensures that the first gesture operation is responded to accurately.
In the above aspect, after determining the initial gesture operation as the custom gesture operation, the method further includes:
correcting the gesture according to operation habits based on the custom gesture operation to obtain an updated custom gesture operation.
It can be understood that the vehicle-mounted terminal corrects the gesture according to operation habits based on the custom gesture operation, obtaining an updated custom gesture operation that facilitates the user's subsequent gesture operations.
An embodiment of the present application provides a gesture operation apparatus, which includes: a receiving unit, a determining unit, and a control unit, wherein
the receiving unit is configured to receive a first gesture operation on the display interface while the gesture operation switch is on; the first gesture operation may be performed at any position within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface; the split screen interface means that the display screen simultaneously displays at least two application interfaces;
the determining unit is configured to, if the display interface is the split screen interface, identify and screen the split screen interface and determine a target application interface corresponding to the split screen interface; the target application interface is an application interface displayed in the split screen interface;
and the control unit is configured to control the target application interface in response to the first gesture operation performed on the target application interface.
An embodiment of the present application provides a gesture operation device, which includes:
a memory for storing executable instructions;
and a processor for executing the executable instructions stored in the memory; when the executable instructions are executed, the processor performs the gesture operation method described above.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by one or more processors, cause the gesture operation method described above to be performed.
Embodiments of the present application provide a gesture operation method and apparatus, a device, and a computer-readable storage medium. The gesture operation method includes: receiving a first gesture operation on a display interface while a gesture operation switch is on, where the first gesture operation may be performed at any position within the full screen range of the display screen, and the display interface is either a full screen interface or a split screen interface, the split screen interface meaning that the display screen simultaneously displays at least two application interfaces; if the display interface is the split screen interface, identifying and screening the split screen interface to determine a target application interface, which is an application interface displayed in the split screen interface; and controlling the target application interface in response to the first gesture operation performed on it. In this scheme, while the gesture operation switch is on, the first gesture operation can trigger display interface functions at any position within the full screen range of the display screen, so the user does not need to look at the screen to operate, which improves driving safety; at the same time, because an operation can be responded to at any position within the full screen range, the triggering accuracy of the in-vehicle system can be improved.
Drawings
FIG. 1 is a schematic flow chart of an alternative gesture operation method according to an embodiment of the present application;
FIG. 2 is a second flowchart of an alternative gesture operation method according to the embodiment of the present application;
FIG. 3 is a third flowchart illustrating an alternative gesture operation method according to the embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative method for gesture operation according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a gesture operation apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a gesture operation device according to an embodiment of the present application.
Detailed Description
For the purposes, technical solutions and advantages of the embodiments of the present application to be more apparent, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are illustrative of the present application, but are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In the following description, references to "some embodiments", "this embodiment", and examples describe a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with one another when no conflict arises.
Where a description such as "first/second" appears in this application, the following note applies: the terms "first/second/third" merely distinguish similar objects and do not represent a particular ordering of the objects. It should be understood that, where permitted, "first/second/third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
While the vehicle is running, the conventional operations on the in-vehicle system interface are as follows:
1. For example, in a music interface, the user wants to switch to the previous or next track, or pause playback. A button at a specific location must be clicked to get a response.
2. For example, in a navigation interface, the user realizes while navigating that the route does not meet expectations, and must re-select the navigation route by entering the destination in a search box or by voice.
3. For example, during an incoming call, the call can be answered or rejected only by clicking the answer or reject button; likewise, to hang up during a call, the hang-up button must be clicked.
4. For example, in an air conditioning interface, the user feels that the current temperature at the driver or passenger seat is too high or too low and wants to adjust it, and must click the corresponding button.
5. For example, in a theme mall interface, the user wants to view various styles and must click different buttons before a preview appears.
In short, the user must click the corresponding button to trigger a given operation. While the vehicle is running, it is easy to click the wrong spot and get no response, or the user must look at the screen instead of the road ahead, which can cause accidents. For the driver, triggering accurately while staying safe is clearly a problem. Therefore, embodiments of the present application provide a gesture operation method with which interface functions can be triggered by blind gesture operation at any position while driving, so the user does not need to look at the screen, improving driving safety.
An embodiment of the present application provides a gesture operation method, and fig. 1 is a schematic flowchart of an alternative gesture operation method provided in the embodiment of the present application, and will be described with reference to the steps shown in fig. 1.
S101, receiving a first gesture operation on a display interface while a gesture operation switch is on; the first gesture operation may be performed at any position within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface; the split screen interface means that the display screen simultaneously displays at least two application interfaces.
In some embodiments of the present application, the first gesture operation includes a default gesture operation and a custom gesture operation; the default gesture operation is generated by system settings, while the custom gesture operation is preset by the user.
In some embodiments of the present application, the default gesture operation includes: a two-finger left swipe, a two-finger right swipe, a two-finger down swipe, and a two-finger up swipe; these swipes have different response effects on different application interfaces.
For example, in a music interface, at any position on the screen, a two-finger left swipe switches to the next track, a two-finger right swipe switches to the previous track, a two-finger down swipe pauses playback, and a two-finger up swipe resumes it. In a navigation interface, at any position on the screen, a two-finger left swipe switches to the next route, a two-finger right swipe switches to the previous route, and a two-finger up swipe exits navigation. In an air conditioning interface, at any position on the screen, a two-finger left swipe raises the air conditioning temperature, a two-finger right swipe lowers it, a two-finger down swipe turns the air conditioner off, and a two-finger up swipe turns it on. In a call interface, at any position on the screen, a two-finger down swipe hangs up the call; in an incoming-call interface, a two-finger down swipe rejects the call and a two-finger up swipe answers it. In a theme mall interface, at any position on the screen, a two-finger left swipe switches to the next style, a two-finger right swipe switches to the previous style, a two-finger down swipe applies the current style, and a two-finger up swipe restores the previously set style. In a WeChat chat interface, a two-finger up swipe sends a message. The embodiments of the present application are not limited to these scene interfaces and cover all application interfaces.
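The per-interface response effects described above amount to a lookup from an application interface and a gesture to an action. The following is a minimal Python sketch of such a dispatch table; the interface, gesture, and action names are illustrative assumptions, not identifiers from this application.

```python
# Minimal sketch of per-interface dispatch for the default two-finger swipes.
# A full-screen gesture is resolved against the current target interface.

DEFAULT_GESTURE_MAP = {
    "music": {
        "two_finger_left":  "next_track",
        "two_finger_right": "previous_track",
        "two_finger_down":  "pause",
        "two_finger_up":    "resume",
    },
    "navigation": {
        "two_finger_left":  "next_route",
        "two_finger_right": "previous_route",
        "two_finger_up":    "exit_navigation",
    },
    "air_conditioner": {
        "two_finger_left":  "temperature_up",
        "two_finger_right": "temperature_down",
        "two_finger_down":  "ac_off",
        "two_finger_up":    "ac_on",
    },
    "incoming_call": {
        "two_finger_down":  "reject",
        "two_finger_up":    "answer",
    },
}

def dispatch(interface: str, gesture: str) -> str | None:
    """Resolve a gesture performed anywhere on the screen to an action."""
    return DEFAULT_GESTURE_MAP.get(interface, {}).get(gesture)

# e.g. dispatch("music", "two_finger_left") -> "next_track"
```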
In some embodiments of the present application, the full screen interface means that the display screen displays only one application interface in full screen, while the split screen interface means that the display screen simultaneously displays at least two application interfaces.
In some embodiments of the present application, the display interface may be a music interface, a navigation page, a theme mall page, a call/answer status page, an air conditioning page, a vehicle settings interface, or a user center interface; the embodiments of the present application are not limited to these pages, and the display interface may be any page.
In some embodiments of the present application, gesture operation under the vehicle-mounted operating system is applicable while the vehicle is being driven.
In some embodiments of the present application, the execution subject of the gesture operation method is a vehicle-mounted terminal.
In some embodiments of the present application, while the gesture operation switch is on, the vehicle-mounted terminal can receive a first gesture operation performed on the display interface anywhere within the full screen range of the display screen.
In some embodiments of the present application, the gesture operation switch is off by default. The user may turn it on manually, or it may be turned on automatically when the vehicle is in a driving state. When the user actively turns the switch off, or the current state changes from driving to stationary, the gesture operation switch is turned off automatically.
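As a rough illustration of this switch policy, the sketch below models it in Python; the class and method names are assumptions, and a real implementation would be driven by the vehicle's actual signals.

```python
from dataclasses import dataclass

# Sketch of the gesture operation switch policy described above: off by
# default, on manually or automatically while driving, and off again when
# the user disables it or the vehicle returns to a stationary state.

@dataclass
class GestureSwitch:
    on: bool = False                    # turned off by default

    def user_toggle(self, enable: bool) -> None:
        # The user may manually turn the switch on or actively turn it off.
        self.on = enable

    def driving_state_changed(self, driving: bool) -> None:
        if driving:
            self.on = True              # automatically on in the driving state
        else:
            self.on = False             # driving -> stationary: automatically off
```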
S102, if the display interface is a split screen interface, identifying and screening the split screen interface, and determining a target application interface corresponding to the split screen interface; the target application interface is an application interface displayed in the split screen interface.
In some embodiments of the present application, if the display interface is a split screen interface, this indicates that the display screen currently displays at least two application interfaces, and one of them needs to be determined as the target application interface.
In some embodiments of the application, the vehicle-mounted terminal identifies the split screen interface and determines at least two application interfaces corresponding to the split screen interface; and based on at least two application interfaces, priority screening is carried out, and a target application interface corresponding to the split screen interface is determined.
In some embodiments of the present application, the vehicle terminal may calculate priorities of at least two application interfaces, and determine priority values corresponding to the at least two application interfaces respectively; and determining a target application interface corresponding to the split screen interface from the at least two application interfaces based on the priority values corresponding to the at least two application interfaces.
S103, responding to the first gesture operation aiming at the target application interface, and controlling the target application interface.
In some embodiments of the present application, after obtaining the target application interface, the vehicle-mounted terminal controls the target application interface in response to the first gesture operation for the target application interface.
For example, when the target application interface is determined to be the air conditioning interface, the air conditioning temperature is raised in response to a two-finger left swipe on it, lowered in response to a two-finger right swipe, the air conditioner is turned off in response to a two-finger down swipe, and turned on in response to a two-finger up swipe. When the target application interface is determined to be a music interface, the music is switched to the next track in response to a two-finger left swipe, switched to the previous track in response to a two-finger right swipe, paused in response to a two-finger down swipe, or resumed in response to a two-finger up swipe.
It can be understood that the vehicle-mounted terminal receives the first gesture operation on the display interface while the gesture operation switch is on; the first gesture operation may be performed at any position within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface; and the full screen interface or split screen interface is controlled in response to the first gesture operation for it. In this scheme, while the gesture operation switch is on, the first gesture operation can trigger display interface functions at any position within the full screen range of the display screen, so the user does not need to look at the screen to operate, which improves driving safety; at the same time, because an operation can be responded to at any position within the full screen range, the triggering accuracy of the in-vehicle system can be improved.
In some embodiments of the present application, fig. 2 is a second flowchart of an alternative gesture operation method provided in the embodiments of the present application, as shown in fig. 2, S102 may be implemented by S1021 and S1022 as follows:
S1021, identifying the split screen interface and determining the at least two application interfaces corresponding to the split screen interface.
In some embodiments of the present application, the vehicle terminal performs interface identification on the split screen interface, and determines at least two application interfaces corresponding to the split screen interface.
It should be noted that at least two application interfaces are interfaces of different applications.
S1022, performing priority screening based on the at least two application interfaces and determining the target application interface corresponding to the split screen interface.
In some embodiments of the present application, the vehicle terminal may calculate priorities of at least two application interfaces, and determine priority values corresponding to the at least two application interfaces respectively; and determining a target application interface corresponding to the split screen interface from the at least two application interfaces based on the priority values corresponding to the at least two application interfaces.
In some embodiments of the present application, the vehicle-mounted terminal may sort the priority values corresponding to the at least two application interfaces, select the application interface with the largest priority value, and determine it as the target application interface.
It can be understood that the vehicle-mounted terminal identifies the split screen interface, determines the at least two application interfaces corresponding to it, and performs priority screening based on them to determine the target application interface corresponding to the split screen interface, which makes it convenient to control the target application interface in response to the first gesture operation.
In some embodiments of the present application, after S101 is executed, the gesture operation method may further identify the full screen interface or the split screen interface and determine the corresponding target application interface; the full screen interface means that the display screen displays only one application interface in full screen, while the split screen interface means that the display screen simultaneously displays at least two application interfaces.
In some embodiments of the present application, a full screen interface means that the current display screen displays only one application interface, in full screen; a split screen interface means that the current display screen displays at least two application interfaces, which do not overlap.
In some embodiments of the application, the vehicle-mounted terminal identifies a full-screen interface and determines a target application interface where the full-screen interface is located; or identifying the split screen interface, determining at least two application interfaces corresponding to the split screen interface, screening priority based on the at least two application interfaces, and determining a target application interface corresponding to the split screen interface.
It can be understood that the vehicle-mounted terminal identifies the full screen interface and determines the target application interface, so that the target application interface is controlled in response to the first gesture operation. Because the first gesture operation can trigger the target application interface's functions at any position within the full screen range of the display screen, the user does not need to look at the screen to operate, which improves driving safety.
In some embodiments of the present application, S1022 may be implemented by S10221 and S10222 as follows:
S10221, determining the priority value corresponding to each of the at least two application interfaces; the priority value characterizes the priority of displaying the application interface on the display screen.
In some embodiments of the present application, the vehicle-mounted terminal obtains first sub-priority values corresponding to at least two application interfaces respectively; acquiring second sub-priority values corresponding to at least two application interfaces respectively; acquiring a third sub-priority value corresponding to each of at least two application interfaces; and based on the first sub-priority value, the second sub-priority value and the third sub-priority value, carrying out operation and determining the priority values corresponding to at least two application interfaces.
In some embodiments of the present application, the first sub-priority value characterizes the importance of an application interface in the system to the user; the second sub-priority value characterizes the user's operating frequency for the application interface; and the third sub-priority value characterizes the proportion of the display area of the display screen occupied by the application interface.
In some embodiments of the present application, the priorities represented by the first, second, and third sub-priority values rank from high to low in that order. The first sub-priority value is the system default (p0), the second sub-priority value is the user habit (p1), and the third sub-priority value is the area ratio (p2).
In some embodiments of the present application, for the first sub-priority value (system default, p0): among all interfaces in the system, different interfaces have different importance to the user. For example, telephone-related interfaces have a particularly high priority and may be set to, say, 1000, whereas a game interface during travel has a particularly low priority for the driver and may be set to 0. Each interface thus has a corresponding default p0 value. For the second sub-priority value (user habit, p1): default priorities are usually rankings derived from product and market research, and such a general result may not suit an individual user. A user's habits can be identified by VIN (the unique identifier of each vehicle) plus the user account (each vehicle may have multiple logged-in users: u0, u1, ...). For example, while the current vehicle is running, each time the music interface is operated, its p1 value is incremented by 1; the more the user operates it, the higher its p1 value. For the third sub-priority value (area ratio, p2): suppose the current screen shows five applications (navigation, music, telephone, vehicle settings, and games) occupying 4/10, 2/10, 2/10, 1/10, and 1/10 of the display area, respectively. Then the p2 priority of navigation is 4/10, i.e., 2/5, and the p2 priority of music is 2/10, i.e., 1/5.
For the sub-priorities p0, p1, and p2, the system acquires them when the user operates an interface in the split screen state, and computes a final priority from the three; the application whose final priority is the largest is the one whose operation is responded to preferentially. The priority value can be calculated by formula (1), as follows:
P = (p0 + p1) * p2    (1)
where P is the priority value, p0 is the first sub-priority value, p1 is the second sub-priority value, and p2 is the third sub-priority value.
It should be noted that p0 is already fixed when the system ships. When the user first starts using the system, the default p0 dominates; as the number of uses increases, p1 grows, gradually moving the ranking closer to the current user's actual habits on the current vehicle.
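Putting formula (1) together with the selection step, the following sketch computes P = (p0 + p1) * p2 for each application interface in a split screen and picks the largest; the sample values are invented for illustration (in the test state described later, p1 would be replaced by p1_test).

```python
# Sketch of priority screening with formula (1): P = (p0 + p1) * p2.
# p0: system default, p1: user habit, p2: display-area ratio.

def priority(p0: float, p1: float, p2: float) -> float:
    return (p0 + p1) * p2

# Illustrative sub-priority values for a three-way split screen.
interfaces = {
    #              p0    p1   p2 (area ratio)
    "navigation": (500,  12,  4 / 10),
    "music":      (300,  40,  2 / 10),
    "telephone":  (1000,  3,  2 / 10),
}

target = max(interfaces, key=lambda name: priority(*interfaces[name]))
# navigation: (500 + 12) * 0.4 = 204.8, the largest P here, so navigation
# becomes the target application interface that responds to the gesture.
```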
S10222, determining a target application interface corresponding to the split screen interface from at least two application interfaces based on the priority values corresponding to at least two application interfaces.
In some embodiments of the present application, the vehicle-mounted terminal sorts the priority values of the at least two application interfaces to obtain a ranking, selects the application interface with the largest priority value from the ranking, and determines it as the target application interface corresponding to the split screen interface.
It can be understood that the vehicle-mounted terminal determines the priority values corresponding to the at least two application interfaces and selects the target application interface from them based on those values. Because the target application interface is determined according to the priority values of the application interfaces, the function control performed in response to the first gesture operation can be more accurate.
In some embodiments of the present application, after S101 is performed, the gesture operation method further includes:
if the display interface is a full screen interface, identifying the full screen interface and determining the target application interface it displays;
and controlling the target application interface in response to the first gesture operation for the target application interface.
In some embodiments of the application, the vehicle-mounted terminal identifies a full-screen interface and determines a target application interface displayed by the full-screen interface.
It should be noted that the target application interface may be any application interface.
It can be understood that if the display interface is a full screen interface, the vehicle-mounted terminal identifies it and determines the target application interface, so that the target application interface is controlled in response to the first gesture operation. Because the first gesture operation can trigger the target application interface's functions at any position within the full screen range of the display screen, the user does not need to look at the screen, which improves driving safety.
In some embodiments of the present application, fig. 3 is a schematic diagram three of an alternative flow chart of a gesture operation method provided in the embodiment of the present application, as shown in fig. 3, where the gesture operation method further includes: s301 and S302 are as follows:
S301, receiving an initial gesture operation in a custom trigger response editing mode; the initial gesture operation may be any one or more gesture operations.
In some embodiments of the present application, in the custom trigger response editing mode, the vehicle-mounted terminal receives an initial custom gesture input in response to a control operation and stores the gesture.
Illustratively, the system provides default two-finger left-swipe, right-swipe, up-swipe, and down-swipe response effects for each interface. However, since different people have different habits, custom gestures or custom trigger responses may be used. Users can define gestures according to their own habits, such as three-finger, four-finger, or five-finger swipes. When not driving, the user can enter the system settings to enable the custom gesture / custom trigger response editing mode, enter the custom editing mode, click a control (such as the dashed-box controls in FIG. 4, all of which can be assigned custom gestures or custom responses), input the custom gesture, and save it. Once the custom gesture is set and saved, it triggers the response. For example, the music playing interface has not only previous, next, and pause/play buttons but also a progress bar. While driving, the user may want not only to switch tracks or pause/play but also to fast-forward or rewind to a certain point in the music; conventionally this requires sliding the progress bar. If the user wants this response to work from any position on the screen, just like switching tracks, the user can click the progress bar in the custom trigger response editing mode, input a custom gesture (such as a three-finger left swipe), and save it. The next time the user is driving and in the music interface, the music can be fast-forwarded or rewound with a three-finger left or right swipe.
In some embodiments of the present application, after the custom gesture / custom trigger response is set, the system can enter a test state in which the user checks whether the configured gesture response effect matches expectations. (During testing, the p1 value is denoted p1_test, and the priority is still calculated: P, the final priority in the test state, equals (p0 + p1_test) * p2. After exiting the test state, the user-habit value p1 is used again to calculate the priority.)
S302, determining the user-defined gesture operation based on the initial gesture operation.
In some embodiments of the present application, if the default gesture operation differs from the initial gesture operation in gesture action and also in effect, the initial gesture operation is determined as the custom gesture operation; if the two share the same gesture action but differ in effect, prompt information is generated, and, based on the prompt information, the gesture action of the initial gesture operation is modified or the conflicting default gesture operation is deleted, after which the custom gesture operation is determined.
In some embodiments of the present application, after the initial gesture operation is obtained, if the default gesture operation differs from it in both gesture action and effect, there is no conflict between them, and the initial gesture operation is determined as the custom gesture operation.
In some embodiments of the present application, if the default gesture operation is consistent with the initial gesture operation in gesture action but differs in effect, the two conflict, and the vehicle-mounted terminal generates prompt information; based on the prompt information, the gesture action of the initial gesture operation is modified, and the modified initial gesture operation is determined as the custom gesture operation.
In some embodiments of the present application, when a default gesture operation conflicts with the initial gesture operation and prompt information is generated, the initial gesture operation is determined as the custom gesture operation once the default gesture operation with the same gesture action is deleted.
For example, if, in the custom trigger response editing mode, the user clicks the music progress bar and then inputs a custom gesture such as a two-finger left swipe, it conflicts with switching to the next track (a default trigger response of the system in this application). The system then prompts that the switch-to-next-track trigger response can be deleted, or that its gesture can first be modified (for example, to a five-finger swipe), after which the progress-bar custom gesture can be entered and saved.
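A minimal sketch of this conflict check follows, assuming gesture bindings are stored as simple gesture-to-effect mappings; the data model and names are assumptions for illustration.

```python
# Sketch of the custom gesture conflict check: a gesture action that matches
# a default binding but carries a different effect triggers a prompt instead
# of being saved silently.

def register_custom_gesture(defaults: dict[str, str],
                            customs: dict[str, str],
                            gesture: str, effect: str) -> str:
    default_effect = defaults.get(gesture)
    if default_effect is None or default_effect == effect:
        customs[gesture] = effect   # no conflicting default: save the gesture
        return "saved"
    # Same gesture action as a default but a different effect: prompt the user
    # to delete the default binding or modify the new gesture action first.
    return (f"conflict: '{gesture}' already triggers '{default_effect}'; "
            "delete the default or modify the gesture action")

defaults = {"two_finger_left": "next_track"}
customs: dict[str, str] = {}
print(register_custom_gesture(defaults, customs,
                              "two_finger_left", "seek_progress_bar"))
# -> conflict; retrying with "three_finger_left" would save successfully
```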
It can be understood that the vehicle terminal receives the initial gesture operation in the custom trigger response editing mode; based on the initial gesture operation, the user-defined gesture operation is determined, so that personalized operation of a user is facilitated, and user experience is improved.
In some embodiments of the present application, after S302 is performed, S303 is also performed as follows:
S303, based on the custom gesture operation, correcting the gesture according to operation habits to obtain an updated custom gesture operation.
In some embodiments of the present application, the vehicle-mounted terminal may, according to the function of the custom gesture operation, correct the gesture operations related to that function based on the user's operation habits, obtaining the updated custom gesture operation.
For example, suppose that on the music interface the system defaults a two-finger left swipe to the next track and a two-finger right swipe to the previous track, and the user redefines next track as a three-finger left swipe and rewind as a two-finger left swipe. Then a two-finger left swipe rewinds the music while a two-finger right swipe still switches to the previous track, and a three-finger left swipe switches to the next track while a three-finger right swipe fast-forwards the music, which feels inconsistent to operate. For this case, when the two-finger left swipe is set to rewind, the system automatically calibrates and saves the two-finger right swipe as fast-forward; when the three-finger left swipe is set to the next track, the system automatically calibrates and saves the three-finger right swipe as the previous track.
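The automatic calibration in this example can be pictured as mirroring a reassigned binding onto the opposite swipe direction. A sketch under that assumption follows; the opposite-gesture and opposite-effect tables are invented for illustration.

```python
# Sketch of gesture correction by operation habit: when one direction of a
# left/right pair is rebound, the opposite direction is rebound to the
# opposite effect so the pair stays consistent.

OPPOSITE_GESTURE = {"two_finger_left": "two_finger_right",
                    "three_finger_left": "three_finger_right"}
OPPOSITE_EFFECT = {"rewind": "fast_forward",
                   "next_track": "previous_track"}

def calibrate(bindings: dict[str, str], gesture: str, effect: str) -> None:
    bindings[gesture] = effect
    mirror_gesture = OPPOSITE_GESTURE.get(gesture)
    mirror_effect = OPPOSITE_EFFECT.get(effect)
    if mirror_gesture and mirror_effect:
        # The system automatically calibrates and saves the mirrored binding.
        bindings[mirror_gesture] = mirror_effect

bindings: dict[str, str] = {}
calibrate(bindings, "two_finger_left", "rewind")        # also: right -> fast_forward
calibrate(bindings, "three_finger_left", "next_track")  # also: right -> previous_track
```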
It can be understood that the vehicle-mounted terminal corrects the gesture according to operation habits based on the custom gesture operation, obtaining an updated custom gesture operation that makes subsequent gesture operation easier for the user.
An embodiment of the present application further provides a gesture operation apparatus. As shown in fig. 5, fig. 5 is a schematic structural diagram of a gesture operation apparatus according to an embodiment of the present application. The gesture operation apparatus 5 includes: a receiving unit 501, a determining unit 502, and a control unit 503, wherein
the receiving unit 501 is configured to receive a first gesture operation on the display interface while the gesture operation switch is on; the first gesture operation may be performed at any position within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface;
the determining unit 502 is configured to identify and screen the split screen interface if the display interface is the split screen interface, and determine a target application interface corresponding to the split screen interface; the target application interface is an application interface displayed in the split screen interface;
the control unit 503 is configured to control the target application interface in response to the first gesture operation performed on the target application interface.
In some embodiments of the present application, the determining unit 502 is further configured to identify the split screen interface, and determine at least two application interfaces corresponding to the split screen interface; and based on the at least two application interfaces, priority screening is carried out, and a target application interface corresponding to the split screen interface is determined.
In some embodiments of the present application, the determining unit 502 is further configured to determine a priority value corresponding to each of the at least two application interfaces; the priority value characterizes the priority of displaying the application interface on the display screen; and determining a target application interface corresponding to the split screen interface from the at least two application interfaces based on the priority values corresponding to the at least two application interfaces.
In some embodiments of the present application, the gesture operation apparatus 5 further includes: an acquisition unit 504;
in some embodiments of the present application, the obtaining unit 504 is configured to obtain a first sub-priority value for each of the at least two application interfaces, where the first sub-priority value characterizes the importance of an application interface in the system to the user; obtain a second sub-priority value for each of the at least two application interfaces, where the second sub-priority value characterizes the user's operating frequency for an application interface; and obtain a third sub-priority value for each of the at least two application interfaces, where the third sub-priority value characterizes the proportion of the display area of the display screen occupied by the application interface;
the determining unit 502 is further configured to perform a calculation based on the first sub-priority value, the second sub-priority value, and the third sub-priority value to determine the priority values corresponding to the at least two application interfaces.
In some embodiments of the present application, the determining unit 502 is further configured to identify the full-screen interface if the display interface is the full-screen interface, and determine a target application interface where the full-screen interface is located;
the control unit 503 is further configured to control the target application interface in response to the first gesture operation for the target application interface.
In some embodiments of the present application, the first gesture operation includes a default gesture operation and a custom gesture operation; the default gesture operation is generated by system settings, while the custom gesture operation is preset by the user;
the default gesture operation includes: a two-finger left swipe, a two-finger right swipe, a two-finger down swipe, and a two-finger up swipe; these swipes have different response effects on different application interfaces.
In some embodiments of the present application, the receiving unit 501 is further configured to receive an initial gesture operation in a custom trigger-response editing mode; the initial gesture operation characterizes at least one of any gesture operations;
the determining unit 502 is further configured to determine the custom gesture operation based on the initial gesture operation.
In some embodiments of the present application, the determining unit 502 is further configured to determine the initial gesture operation as the custom gesture operation if the default gesture operation differs from the initial gesture operation in gesture action and also differs from it in effect;
the obtaining unit 504 is further configured to generate a prompt message if the default gesture operation is consistent with the gesture action of the initial gesture operation and the default gesture operation is different from the initial gesture operation in effect;
the determining unit 502 is further configured to modify a gesture action of the initial gesture operation or delete the default gesture operation consistent with the gesture action of the initial gesture operation based on the prompt information, and determine the custom gesture operation.
In some embodiments of the present application, the obtaining unit 504 is further configured to, after determining the initial gesture operation as the custom gesture operation, correct the gesture according to an operation habit based on the custom gesture operation, and obtain an updated custom gesture operation.
Based on the gesture operation method of the foregoing embodiments, an embodiment of the present application further provides a gesture operation device. As shown in fig. 6, fig. 6 is a schematic structural diagram of the gesture operation device provided in an embodiment of the present application. The gesture operation device 6 includes: a processor 601 and a memory 602. The memory 602 is used for storing a computer program; the processor 601 is configured to call and run the computer program from the memory to perform the gesture operation method of the above embodiments.
In an embodiment of the present application, the processor 601 may be at least one of an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field-programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, or a microprocessor. It can be understood that, for different apparatuses, the electronic device implementing the above processor function may be something else; the embodiments of the present application impose no specific limitation.
An embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the gesture operation method of any of the above embodiments.
For example, the program instructions corresponding to the gesture operation method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive; when the program instructions corresponding to the gesture operation method on the storage medium are read and executed by an electronic device, the gesture operation method of any of the above embodiments can be implemented.
In addition, each functional module in the embodiments of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware or in the form of software functional modules.
If the integrated units are implemented in the form of software functional modules and are not sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment may be embodied, in essence or in part, in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
It should be appreciated that reference throughout this specification to "one embodiment", "an embodiment" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of "in one embodiment", "in an embodiment" or "in some embodiments" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution should be determined by the functions and internal logic of the processes and should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers are merely for description and do not represent the relative merits of the embodiments.
The foregoing description of the various embodiments is intended to highlight the differences between them; for the sake of brevity, their identical or similar parts are not repeated herein.
The modules described above as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, each module may be used separately as one unit, or two or more modules may be integrated in one unit; the integrated modules may be implemented in the form of hardware or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments or device embodiments.
The foregoing is merely an implementation of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any changes or substitutions that can readily be conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A gesture operation method, comprising:
receiving a first gesture operation on a display interface under the condition that a gesture operation switch is on; the first gesture operation can be performed anywhere within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface; the split screen interface indicates that the display screen simultaneously displays at least two application interfaces;
if the display interface is the split screen interface, identifying and screening the split screen interface, and determining a target application interface corresponding to the split screen interface; the target application interface is an application interface displayed in the split screen interface;
and controlling the target application interface in response to the first gesture operation performed on the target application interface.
2. The method of claim 1, wherein, if the display interface is the split screen interface, the identifying and screening the split screen interface and determining a target application interface corresponding to the split screen interface comprises:
if the display interface is the split screen interface, identifying the split screen interface and determining the at least two application interfaces corresponding to the split screen interface; and
performing priority screening based on the at least two application interfaces, and determining the target application interface corresponding to the split screen interface.
3. The method according to claim 2, wherein the performing priority screening based on the at least two application interfaces, and determining the target application interface corresponding to the split screen interface, comprises:
determining a priority value corresponding to each of the at least two application interfaces; the priority value indicates the priority with which the application interface is displayed on the display screen; and
determining the target application interface corresponding to the split screen interface from the at least two application interfaces based on the priority values corresponding to the at least two application interfaces.
4. The method according to claim 3, wherein the determining a priority value corresponding to each of the at least two application interfaces comprises:
acquiring a first sub-priority value corresponding to each of the at least two application interfaces; the first sub-priority value indicates how important the application interface in the system is to the user;
acquiring a second sub-priority value corresponding to each of the at least two application interfaces; the second sub-priority value indicates how frequently the user operates the application interface;
acquiring a third sub-priority value corresponding to each of the at least two application interfaces; the third sub-priority value indicates the proportion of the display area of the display screen occupied by the application interface; and
performing a calculation based on the first sub-priority values, the second sub-priority values and the third sub-priority values to determine the priority values corresponding to the at least two application interfaces.
5. The method of claim 1, wherein, after the receiving a first gesture operation on a display interface under the condition that a gesture operation switch is on, the method further comprises:
if the display interface is the full screen interface, identifying the full screen interface and determining the target application interface in which the full screen interface is located; the full screen interface indicates that the display screen displays only one application interface in full screen; and
controlling the target application interface in response to the first gesture operation performed on the target application interface.
6. The method of claim 1 or 5, wherein the first gesture operation comprises a default gesture operation and a custom gesture operation; the default gesture operation is preset by the system, and the custom gesture operation is preset by the user;
the default gesture operation comprises a two-finger left swipe, a two-finger right swipe, a two-finger down swipe and a two-finger up swipe; each of these swipes produces a different response effect on different application interfaces.
7. The method of claim 6, wherein the method further comprises:
receiving an initial gesture operation in a custom trigger-response editing mode; the initial gesture operation represents at least one arbitrary gesture operation; and
determining the custom gesture operation based on the initial gesture operation.
8. The method of claim 7, wherein the determining the custom gesture operation based on the initial gesture operation comprises:
if the gesture action of the default gesture operation differs from that of the initial gesture operation and their effects also differ, determining the initial gesture operation as the custom gesture operation;
if the gesture action of the default gesture operation is the same as that of the initial gesture operation but their effects differ, generating prompt information; and
based on the prompt information, modifying the gesture action of the initial gesture operation or deleting the default gesture operation whose gesture action is the same as that of the initial gesture operation, and determining the custom gesture operation.
9. The method of claim 7, wherein, after the determining the initial gesture operation as the custom gesture operation, the method further comprises:
performing gesture correction according to the user's operation habits based on the custom gesture operation, to obtain an updated custom gesture operation.
10. A gesture operation apparatus, comprising: a receiving unit, a determining unit and a control unit, wherein,
the receiving unit is configured to receive a first gesture operation on a display interface under the condition that a gesture operation switch is on; the first gesture operation can be performed anywhere within the full screen range of the display screen; the display interface is a full screen interface or a split screen interface; the split screen interface indicates that the display screen simultaneously displays at least two application interfaces;
the determining unit is configured to, if the display interface is the split screen interface, identify and screen the split screen interface and determine a target application interface corresponding to the split screen interface; the target application interface is an application interface displayed in the split screen interface; and
the control unit is configured to control the target application interface in response to the first gesture operation performed on the target application interface.
11. A gesture operation device, comprising:
a memory for storing executable instructions; and
a processor for implementing the gesture operation method according to any one of claims 1 to 9 when executing the executable instructions stored in the memory.
12. A computer readable storage medium storing executable instructions for causing a processor to perform the gesture operation method of any one of claims 1 to 9.
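As an editorial illustration only (not part of the claims), the Kotlin sketch below shows one possible realization of the dispatch flow of claim 1 combined with the priority screening of claims 2 to 4. The type names, the weighted-sum calculation and the 0-to-1 value ranges are assumptions; the claims do not fix a particular calculation.

    // Editorial sketch: route a full-screen gesture to the target application
    // interface; for a split screen, screen the interfaces by priority.
    data class AppInterface(
        val name: String,
        val importance: Double, // first sub-priority: importance of the interface to the user
        val frequency: Double,  // second sub-priority: how often the user operates it
        val areaShare: Double   // third sub-priority: share of the display area it occupies
    )

    sealed class DisplayInterface {
        data class FullScreen(val app: AppInterface) : DisplayInterface()
        data class SplitScreen(val apps: List<AppInterface>) : DisplayInterface()
    }

    // Claim 4's calculation, assumed here to be a weighted sum with arbitrary weights.
    fun priority(a: AppInterface): Double =
        0.5 * a.importance + 0.3 * a.frequency + 0.2 * a.areaShare

    // Claims 2-3: identify the split screen's interfaces and screen them by priority value.
    fun screen(apps: List<AppInterface>): AppInterface =
        apps.maxByOrNull { priority(it) } ?: error("a split screen shows at least two interfaces")

    // Claim 1: with the gesture operation switch on, respond to a full-screen gesture
    // by controlling the target application interface of the current display interface.
    fun handleGesture(switchOn: Boolean, display: DisplayInterface, gesture: String): String {
        if (!switchOn) return "ignored: gesture operation switch is off"
        val target = when (display) {
            is DisplayInterface.FullScreen -> display.app
            is DisplayInterface.SplitScreen -> screen(display.apps)
        }
        return "apply '$gesture' to ${target.name}"
    }

    fun main() {
        val split = DisplayInterface.SplitScreen(listOf(
            AppInterface("navigation", importance = 0.9, frequency = 0.6, areaShare = 0.7),
            AppInterface("music", importance = 0.6, frequency = 0.8, areaShare = 0.3)
        ))
        println(handleGesture(true, split, "two-finger left swipe"))
        // apply 'two-finger left swipe' to navigation
    }
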
CN202410006548.0A 2024-01-02 2024-01-02 Gesture operation method and device, equipment and computer readable storage medium Pending CN117827075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410006548.0A CN117827075A (en) 2024-01-02 2024-01-02 Gesture operation method and device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410006548.0A CN117827075A (en) 2024-01-02 2024-01-02 Gesture operation method and device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117827075A true CN117827075A (en) 2024-04-05

Family

ID=90509508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410006548.0A Pending CN117827075A (en) 2024-01-02 2024-01-02 Gesture operation method and device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117827075A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination