CN117501221A - Touch detection method, device and storage medium - Google Patents

Touch detection method, device and storage medium

Info

Publication number
CN117501221A
CN117501221A (application CN202280004620.5A)
Authority
CN
China
Prior art keywords
probability
touch
determining
shoulder key
virtual shoulder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280004620.5A
Other languages
Chinese (zh)
Inventor
张逸帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Publication of CN117501221A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0354: Pointing devices displaced or positioned by the user, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/044: Digitisers, e.g. for touch screens or touch pads, characterised by capacitive transducing means

Abstract

The disclosure relates to a touch detection method, a touch detection device, and a storage medium. The touch detection method comprises the following steps: in response to detecting a touch input in the frame area, determining that the touch input satisfies a first condition, and determining touch data corresponding to the touch input; determining, according to the touch data, a first probability of the user touching the virtual shoulder key by using a first neural network model, the first neural network model being a neural network model for determining the probability of a user touching the virtual shoulder key based on touch data; acquiring inertial data measured by an inertial measurement unit; determining, according to the inertial data, a second probability of the user touching the virtual shoulder key by using a second neural network model, the second neural network model being a neural network model for determining the probability of a user touching the virtual shoulder key based on inertial data; and detecting a touch of the virtual shoulder key according to the first probability and the second probability. By detecting the terminal's touch data and sensor data at the same time, the method improves the accuracy of distinguishing an intentional press from a false touch.

Description

Touch detection method, device and storage medium

Technical Field
The disclosure relates to the technical field of terminals, and in particular to a touch detection method, a touch detection device, and a storage medium.
Background
Terminal technology is increasingly widespread in daily life, and the use of virtual keys in terminals is an important area of study.
In the related art, physical keys in a terminal bring great convenience to some users and facilitate their operations, such as the shoulder key design in a gaming terminal. However, the physical key design has limitations and cannot be extended to all users.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a touch detection method, a touch detection device and a storage medium.
According to a first aspect of embodiments of the present disclosure, a touch detection method is provided, applied to a terminal in which a virtual shoulder key is provided in the frame area of a full screen and an inertial measurement unit is provided. The method includes: in response to detecting a touch input in the frame area, determining that the touch input satisfies a first condition, and determining touch data corresponding to the touch input; determining, according to the touch data corresponding to the touch input, a first probability of the user touching the virtual shoulder key by using a first neural network model, the first neural network model being a neural network model for determining the probability of a user touching a virtual shoulder key based on touch data; acquiring inertial data measured by the inertial measurement unit; determining, according to the inertial data, a second probability of the user touching the virtual shoulder key by using a second neural network model, the second neural network model being a neural network model for determining the probability of a user touching the virtual shoulder key based on inertial data; and detecting a touch of the virtual shoulder key according to the first probability and the second probability.
In one embodiment, the detecting the touch of the virtual shoulder key according to the first probability and the second probability includes: weighting the first probability and the second probability to obtain a third probability of the user touching the virtual shoulder key; determining that the virtual shoulder key is touched under the condition that the third probability is larger than a probability threshold; and under the condition that the third probability is smaller than or equal to the probability threshold value, determining that the virtual shoulder key is not touched.
In yet another embodiment, the determining that the touch input satisfies the first condition includes: determining that the touch position corresponding to the touch input is located in the area range of the virtual shoulder key; or determining an anchor frame area corresponding to the touch position in the frame area, and determining that the virtual shoulder key is included in the anchor frame area.
In yet another embodiment, the method further comprises: controlling a motor corresponding to the virtual shoulder key to vibrate under the condition that the virtual shoulder key is determined to be touched; wherein, different virtual shoulder keys correspond to different motors.
In yet another embodiment, controlling the motor corresponding to the virtual shoulder key to vibrate includes: determining the target touch intensity with which the virtual shoulder key is touched; determining the target number of motors corresponding to the target touch intensity according to the correspondence between touch intensity and number of motors; and controlling the target number of motors to vibrate.
In yet another embodiment, the vibrating motors are selected in order of increasing distance from the virtual shoulder key.
According to a second aspect of the embodiments of the present disclosure, a touch detection device is provided, applied to a terminal in which a virtual shoulder key is provided in the frame area of a full screen and an inertial measurement unit is provided. The device includes: a determining unit configured to, in response to detecting a touch input in the frame area, determine that the touch input satisfies a first condition and determine touch data corresponding to the touch input; a processing unit configured to determine, according to the touch data corresponding to the touch input, a first probability of the user touching the virtual shoulder key by using a first neural network model, the first neural network model being a neural network model for determining the probability of a user touching a virtual shoulder key based on touch data; an acquisition unit configured to acquire inertial data measured by the inertial measurement unit; the processing unit being further configured to determine, according to the inertial data, a second probability of the user touching the virtual shoulder key by using a second neural network model, the second neural network model being a neural network model for determining the probability of a user touching the virtual shoulder key based on inertial data; and a detection unit configured to detect a touch of the virtual shoulder key according to the first probability and the second probability.
In one embodiment, the detection unit detects the touch of the virtual shoulder key according to the first probability and the second probability in the following manner: weighting the first probability and the second probability to obtain a third probability of the user touching the virtual shoulder key; determining that the virtual shoulder key is touched under the condition that the third probability is larger than a probability threshold; and under the condition that the third probability is smaller than or equal to the probability threshold value, determining that the virtual shoulder key is not touched.
In another embodiment, the determining unit determines that the touch input meets a first condition in the following manner: determining that the touch position corresponding to the touch input is located in the area range of the virtual shoulder key; or determining an anchor frame area corresponding to the touch position in the frame area, and determining that the virtual shoulder key is included in the anchor frame area.
In another embodiment, the apparatus further comprises: the control unit is used for controlling the motor corresponding to the virtual shoulder key to vibrate under the condition that the virtual shoulder key is determined to be touched; wherein, different virtual shoulder keys correspond to different motors.
In another embodiment, the control unit controls the motor corresponding to the virtual shoulder key to vibrate in the following manner: determining the target touch intensity with which the virtual shoulder key is touched; determining the target number of motors corresponding to the target touch intensity according to the correspondence between touch intensity and number of motors; and controlling the target number of motors to vibrate.
In another embodiment, the vibrating motors are selected in order of increasing distance from the virtual shoulder key.
According to a third aspect of the embodiments of the present disclosure, there is provided a touch detection device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the touch detection method of the first aspect or any implementation manner of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of a terminal, enable the terminal to perform the touch detection method of the first aspect or any implementation manner of the first aspect.
The technical scheme provided by the embodiments of the disclosure may have the following beneficial effects: touch data in the frame area of the terminal and inertial data of the terminal are detected to obtain a first probability and a second probability of the user touching the virtual shoulder key, and the two probabilities are fused to obtain the overall probability of the user touching the virtual shoulder key. Using touch and sensor data together improves the accuracy of distinguishing a press from a false touch, improves the touch detection accuracy of the virtual shoulder key, makes false touches less likely, and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a touch detection method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of detecting touch of a virtual shoulder key according to a first probability and a second probability, according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of determining that a touch input satisfies a first condition according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a method of touch detection according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating a method of controlling motor vibration corresponding to a virtual shoulder key according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a method of controlling motor vibration corresponding to a virtual shoulder key according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating an area of a virtual key in a mobile terminal according to an exemplary embodiment.
Fig. 8 is a schematic diagram of a sensor in a mobile terminal shown in an exemplary embodiment.
Fig. 9 is a schematic diagram illustrating a mobile terminal in which a screen side is pressed in accordance with an exemplary embodiment.
Fig. 10 is a block diagram of a touch detection device according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating an apparatus 200 for touch detection according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure.
In the related art, the shoulder key design in a terminal is convenient for many game users, but it is limited to gaming terminals, has a small audience, and cannot be extended to all users.
The touch detection method of the present disclosure simulates a virtual shoulder key on a full-screen terminal, helping users perform multi-finger operations, reducing game difficulty, expanding the audience for shoulder keys, and improving users' gaming experience.
Fig. 1 is a flowchart of a touch detection method according to an exemplary embodiment, and as shown in fig. 1, the touch detection method is applied to a terminal, and includes the following steps.
In step S11, in response to detecting the touch input in the frame area, it is determined that the touch input satisfies the first condition, and touch data corresponding to the touch input is determined.
In the embodiment of the disclosure, a detection unit disposed in the frame area of the terminal detects touch input by measuring changes in capacitance and voltage at the terminal frame. When the touch input is determined to satisfy the first condition, the touch data of the touch input is acquired.
In step S12, according to touch data corresponding to the touch input, determining a first probability of the user touching the virtual shoulder key by using the first neural network model; the first neural network model is a neural network model for determining the probability of a user touching the virtual shoulder key based on touch data.
In the embodiment of the disclosure, a neural network model is a complex network system formed by a large number of simple, widely interconnected processing units; it reflects several basic characteristics of human brain function and is a highly complex nonlinear dynamic learning system. The touch data is input into the first neural network model, which outputs the first probability of the user touching the virtual shoulder key.
In the embodiment of the disclosure, the first neural network model is trained in advance; its input is touch data and its output is the probability of a touch.
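The patent does not specify the first model's architecture, so as a minimal sketch, a stand-in classifier with a logistic output layer (all weights and features hypothetical) illustrates how touch data from the frame area could be mapped to a touch probability:

```python
import math

def first_model_probability(touch_features, weights, bias):
    """Stand-in for the first neural network model: maps a vector of
    touch features from the frame area (e.g. capacitance and voltage
    deltas) to a probability in [0, 1] via a logistic output."""
    z = sum(w * x for w, x in zip(weights, touch_features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights; a real model would be trained on labeled presses.
pr1 = first_model_probability([0.8, 0.6], weights=[2.0, 1.5], bias=-1.0)
```

A trained network would replace this single layer, but the interface (touch features in, probability out) is the same as described in step S12.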
In step S13, inertial data measured by the inertial measurement unit is acquired.
In the embodiments of the present disclosure, the inertial measurement unit is a device that measures the three-axis attitude angles (or angular rates) and acceleration of an object. It comprises three accelerometers and three gyroscopes: the accelerometers detect acceleration signals of the terminal on the three independent axes of the carrier coordinate system, and the gyroscopes detect angular velocity signals of the carrier relative to the navigation coordinate system. The angular velocity and acceleration of the terminal in three-dimensional space are thus measured, and the attitude of the terminal is calculated from these signals. The inertial data includes the three-axis acceleration of the terminal and the yaw angle.
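One way to organize an IMU reading as described above is a small record type; field names and units here are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class InertialSample:
    """One IMU reading: three-axis acceleration, three-axis angular
    rate, and the derived yaw angle. Names are illustrative."""
    accel: tuple   # (ax, ay, az) in m/s^2
    gyro: tuple    # (gx, gy, gz) in rad/s
    yaw: float     # yaw angle in radians

    def as_features(self):
        """Flatten into the feature vector fed to the second model."""
        return [*self.accel, *self.gyro, self.yaw]

sample = InertialSample(accel=(0.1, 9.8, 0.0), gyro=(0.0, 0.02, 0.0), yaw=0.05)
```

The flattened vector is what step S14 would pass to the second neural network model.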
In step S14, determining a second probability of the user touching the virtual shoulder key according to the inertial data using a second neural network model; the second neural network model is used for determining the probability of the user touching the virtual shoulder key based on the inertia data.
In the embodiment of the disclosure, the inertial data is input into the second neural network model, which outputs the second probability of the user touching the virtual shoulder key. The second neural network model is trained in advance; its input is inertial data and its output is the probability of a touch.
In step S15, touch control of the virtual shoulder key is detected according to the first probability and the second probability.
In the embodiment of the disclosure, the first probability and the second probability are fused to detect whether the virtual shoulder key is touched.
According to the touch detection method described above, the first probability and the second probability are obtained by detecting the user's touch data and the terminal's inertial data, and the touch of the virtual shoulder key is detected from both. Using touch and sensor data together improves the accuracy of distinguishing a press from a false touch, improves the touch detection accuracy of the virtual shoulder key, and makes false touches less likely.
The following embodiments of the present disclosure further explain the method for detecting a touch of the virtual shoulder key according to the first probability and the second probability.
Fig. 2 is a flowchart illustrating a method of detecting a touch of a virtual shoulder key according to the first probability and the second probability, according to an exemplary embodiment. As shown in fig. 2, the method includes the following steps.
In step S21, the first probability and the second probability are weighted, so as to obtain a third probability of the user touching the virtual shoulder key.
In the embodiment of the disclosure, weighting the first probability and the second probability may be expressed by the following formula: Pr = A·Pr1 + B·Pr2 + C, where the third probability is denoted Pr, the first probability Pr1, the second probability Pr2, and A, B, C are constants.
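The weighted fusion and the threshold comparison of steps S22 and S23 can be sketched directly; the constants A, B, C are left open by the patent, so the defaults below are assumptions:

```python
def fuse_probabilities(pr1, pr2, a=0.5, b=0.5, c=0.0):
    """Weighted fusion Pr = A*Pr1 + B*Pr2 + C, clamped to [0, 1].
    The constants a, b, c are hypothetical defaults."""
    pr = a * pr1 + b * pr2 + c
    return min(max(pr, 0.0), 1.0)

def shoulder_key_touched(pr3, threshold=0.8):
    """Touched only if the fused (third) probability exceeds the
    preset threshold (80% in the patent's example)."""
    return pr3 > threshold
```

For example, with Pr1 = 0.9 and Pr2 = 0.85 the fused probability is 0.875, which exceeds the 80% threshold, so a touch is reported; with both probabilities at 0.5 the fused value 0.5 does not, so no touch is reported.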
In step S22, in the case where the third probability is greater than the probability threshold, it is determined that the virtual shoulder key is touched.
In the embodiment of the disclosure, a probability threshold is preset, for example, the probability threshold is set to 80%, and when the third probability is greater than 80%, it is determined that the virtual shoulder key is touched.
In step S23, in the case where the third probability is less than or equal to the probability threshold, it is determined that the virtual shoulder key is not touched.
In the embodiment of the disclosure, for example, the probability threshold is set to 80%, and when the third probability is less than or equal to 80%, it is determined that the virtual shoulder key is not touched.
The following embodiments of the present disclosure further explain the method for determining that a touch input satisfies the first condition.
Fig. 3 is a flowchart illustrating a method of determining that a touch input satisfies a first condition according to an exemplary embodiment, and the method of determining that the touch input satisfies the first condition as shown in fig. 3 includes the following steps.
In step S31, it is determined that the touch position corresponding to the touch input is located within the area range of the virtual shoulder key.
In the embodiment of the present disclosure, a virtual shoulder key region is preset in the terminal frame area; its length may equal the length of the terminal and its width may be a preset width. For example, if the terminal is 20 cm long and 10 cm wide, the virtual shoulder key region may be 20 cm long and 2 cm wide. Whether the touch position of the touch input falls within this preset region is then determined.
Alternatively, in step S32, an anchor frame area corresponding to the touch position is determined in the frame area, and it is determined that the anchor frame area includes the virtual shoulder key.
In the embodiment of the disclosure, anchor frames are candidate pixel boxes used in object detection: all possible pixel boxes on the input image are traversed, a correct target box is selected, and its position and size are adjusted to complete the detection task.
In the embodiment of the disclosure, an anchor frame area corresponding to a touch position is determined, and the anchor frame area is determined to contain a virtual shoulder key.
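The two alternatives of steps S31 and S32 reduce to simple geometric tests. The sketch below uses axis-aligned rectangles; the coordinate convention and helper names are assumptions, not from the patent:

```python
def point_in_rect(x, y, rect):
    """rect = (left, top, width, height), in screen coordinates."""
    left, top, w, h = rect
    return left <= x <= left + w and top <= y <= top + h

def rect_contains(outer, inner):
    """True if `outer` fully contains `inner` (both (left, top, w, h))."""
    ol, ot, ow, oh = outer
    il, it, iw, ih = inner
    return ol <= il and ot <= it and il + iw <= ol + ow and it + ih <= ot + oh

def first_condition(touch_xy, key_rect, anchor_rect=None):
    """Step S31 or S32: the touch lies inside the virtual shoulder key
    region, or an anchor frame around the touch contains the key region."""
    x, y = touch_xy
    if point_in_rect(x, y, key_rect):
        return True
    return anchor_rect is not None and rect_contains(anchor_rect, key_rect)
```

With the 20 cm by 2 cm key region from the example above, `key_rect = (0, 0, 20, 2)`, a touch at (5, 1) satisfies the condition by step S31 alone.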
The following embodiments of the present disclosure further explain the touch detection method.
Fig. 4 is a flowchart illustrating a method of touch detection according to an exemplary embodiment, and as shown in fig. 4, the method of touch detection includes the following steps.
In step S41, in the case where it is determined that the virtual shoulder key is touched, the motor corresponding to the virtual shoulder key is controlled to vibrate.
In the embodiment of the disclosure, it is determined that the virtual shoulder key is touched, and the preset corresponding motor vibration is controlled. For example, the first virtual shoulder key corresponds to the first motor, and when the first virtual shoulder key is touched, the first motor is controlled to vibrate.
In step S42, different virtual shoulder keys correspond to different motors.
In the embodiment of the disclosure, for example, the first virtual shoulder key corresponds to the first motor, and the second virtual shoulder key corresponds to the second motor.
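The one-to-one correspondence between virtual shoulder keys and motors can be kept in a lookup table; the key and motor names below are hypothetical:

```python
# Hypothetical one-to-one mapping: each virtual shoulder key has its
# own motor, as in the example (first key -> first motor).
KEY_TO_MOTOR = {
    "first_virtual_shoulder_key": "first_motor",
    "second_virtual_shoulder_key": "second_motor",
}

def motor_for_key(key):
    """Return the motor to vibrate when `key` is determined touched."""
    return KEY_TO_MOTOR[key]
```

On a detected touch, the terminal would look up the motor and drive it, so each key produces a distinct haptic response.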
The following embodiments of the present disclosure further explain the method of controlling the motor corresponding to the virtual shoulder key to vibrate.
Fig. 5 is a flowchart illustrating a method of controlling motor vibration corresponding to a virtual shoulder key according to an exemplary embodiment, and as shown in fig. 5, the method of controlling motor vibration corresponding to a virtual shoulder key includes the following steps.
In step S51, the target touch intensity at which the virtual shoulder key is touched is determined.
In the embodiment of the disclosure, the target touch intensity may correspond to a long press or a tap of the virtual shoulder key.
In step S52, the number of motors corresponding to the target touch intensity is determined according to the correspondence between the touch intensity and the number of motors.
In the embodiment of the disclosure, a long press of the virtual shoulder key (high touch intensity) may correspond to 2 motors, and a tap (low touch intensity) may correspond to 1 motor.
In step S53, the motor vibration of the target motor number is controlled.
In the embodiment of the disclosure, for example, when the virtual shoulder key is long-pressed, the 2 motors corresponding to it are controlled to vibrate; when the virtual shoulder key is tapped, the 1 motor corresponding to it is controlled to vibrate.
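Steps S51 through S53 amount to a lookup from touch intensity to motor count; the table below follows the example in the text (long press to 2 motors, tap to 1), with intensity labels as assumptions:

```python
# Correspondence between touch intensity and number of motors,
# following the example above; the labels are hypothetical.
MOTORS_PER_INTENSITY = {"long_press": 2, "tap": 1}

def target_motor_count(intensity):
    """Look up how many motors should vibrate for this intensity,
    defaulting to a single motor for unrecognized intensities."""
    return MOTORS_PER_INTENSITY.get(intensity, 1)
```

The returned count then feeds the motor-selection step described next.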
The following embodiments of the present disclosure further explain how the vibrating motors are selected.
Fig. 6 is a flowchart illustrating a method of controlling motor vibration corresponding to a virtual shoulder key according to an exemplary embodiment, and as shown in fig. 6, the method of controlling motor vibration corresponding to a virtual shoulder key includes the following steps.
In step S61, the vibrating motor is a motor determined from near to far based on the distance from the virtual shoulder key.
In the embodiment of the disclosure, touching the virtual shoulder key can be set to preferentially vibrate the motors closest to it.
In step S62, the motor vibration determined from near to far is controlled.
In the embodiment of the disclosure, when the first virtual shoulder key is tapped, the motor nearest to it may be controlled to vibrate. When the first virtual shoulder key is long-pressed, the two motors nearest to it are controlled to vibrate.
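The near-to-far selection is a sort by distance; the 1-D coordinates along the screen edge below are illustrative values, not from the patent:

```python
def select_nearest_motors(key_pos, motor_positions, count):
    """Pick `count` motor indices in order of increasing distance
    from the virtual shoulder key. Positions are 1-D coordinates
    along the screen edge (units illustrative)."""
    order = sorted(range(len(motor_positions)),
                   key=lambda i: abs(motor_positions[i] - key_pos))
    return order[:count]

# A long press (2 motors) on a key at position 1.0, with motors at
# 0.0, 5.0 and 10.0, selects the two nearest motors:
nearest = select_nearest_motors(1.0, [0.0, 5.0, 10.0], 2)
```

Combined with the intensity lookup above, a tap selects one motor and a long press selects two, always starting from the closest.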
In the following, a mobile terminal is taken as an example to illustrate a practical application of the touch detection method of the above embodiments.
Fig. 7 is a schematic diagram illustrating an area of a virtual key in a mobile terminal according to an exemplary embodiment.
Referring to fig. 7, in the embodiment of the present disclosure, the touch area of the mobile terminal may be an area of 10 cm by 5 cm, the touch area may be an area near the edge of any one of two long sides of the mobile terminal, and the touch areas a and b may be small areas of 2 cm by 3 cm.
In the embodiment of the disclosure, there may be two ways to judge whether the mobile terminal is pressed. In the first way, touch areas a and b are detected: if the press data in area a or b is greater than a preset threshold, for example 50, a press operation is reported; if the press data in both areas is less than or equal to the threshold, no press is reported.
In the embodiment of the present disclosure, the second way is to detect the virtual shoulder key area of the mobile terminal, which may be an area of 10 cm by 2 cm. Anchor frames are drawn by an object detection method: all possible pixel boxes on the input image are traversed, a correct target box is selected, and its position and size are adjusted to complete detection. An anchor frame containing area a or b (with an area of at least 2 cm by 3 cm) is sought and judged; if none is found, no press is reported. This way captures global changes: when the user presses the virtual shoulder key, more than just areas a and b change, so the whole virtual shoulder key area is judged.
In the embodiment of the disclosure, touch data is input into a first neural network model to obtain a touch probability, which is recorded as a first probability. The first neural network model is preset to input touch data and output as touch probability.
Fig. 8 is a schematic diagram of a sensor in a mobile terminal shown in an exemplary embodiment. Fig. 9 is a schematic diagram illustrating a mobile terminal in which a screen side is pressed in accordance with an exemplary embodiment.
Referring to fig. 8, in the embodiment of the present disclosure, when a virtual shoulder key is pressed, the acceleration on the x and y axes of the mobile terminal and the gyroscope readings change, and the yaw angle is calculated by the inertial measurement unit.
In the embodiment of the disclosure, inertial data including the yaw angle, the terminal acceleration, and the like is input into the second neural network model to obtain an inertial probability, recorded as the second probability. The second neural network model is preset so that its input is inertial data and its output is the inertial probability.
In the embodiment of the disclosure, the first probability and the second probability are fused to obtain the probability of the virtual shoulder key of the mobile terminal being pressed, as the third probability. This can be expressed by the formula Pr = A·Pr1 + B·Pr2 + C, where the first probability is denoted Pr1, the second probability Pr2, the third probability Pr, and A, B, C are constants. The preset press threshold may be, for example, 80%; if the third probability Pr is greater than 80%, one press is considered to have occurred.
In the embodiment of the present disclosure, since a press may be a single press or a long press, the current press is considered ended when a value below the threshold occurs in the press area.
In the embodiment of the disclosure, one or more linear motors are added at the edge of the screen of the mobile terminal, and different keys are represented by different motor vibrations. For example, when the screen is held horizontally, the shoulder keys are divided into left and right, named the first virtual shoulder key and the second virtual shoulder key respectively. It can be set that tapping a virtual shoulder key vibrates one linear motor, while long-pressing it vibrates two linear motors.
In the embodiment of the disclosure, when the first virtual shoulder key is clicked, one linear motor close to the first virtual shoulder key vibrates; when the second virtual shoulder key is long-pressed, two linear motors close to the second virtual shoulder key vibrate. The key response can be set by the user and can be distinguished through the vibration intensity.
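A minimal sketch of this haptic scheme follows. The 0.5 s long-press cutoff and the table of motor distances are hypothetical values chosen for illustration:

```python
LONG_PRESS_SECONDS = 0.5  # assumed long-press cutoff, not from the patent

def motors_to_vibrate(press_duration, motor_distances):
    """Click -> vibrate the one motor nearest the shoulder key;
    long press -> vibrate the two nearest motors.
    motor_distances maps motor id -> distance from the shoulder key."""
    count = 2 if press_duration >= LONG_PRESS_SECONDS else 1
    ranked = sorted(motor_distances, key=motor_distances.get)
    return ranked[:count]

distances = {"m0": 3.0, "m1": 1.0, "m2": 2.0}
assert motors_to_vibrate(0.1, distances) == ["m1"]        # click
assert motors_to_vibrate(0.8, distances) == ["m1", "m2"]  # long press
```

Keeping a separate distance table per shoulder key lets the same routine serve both the first (left) and second (right) virtual shoulder keys.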
It should be understood by those skilled in the art that the various implementations/embodiments of the present disclosure may be used in combination with the foregoing embodiments or independently. Whether used alone or in combination with the foregoing embodiments, the implementation principles are similar. In the practice of the present disclosure, some examples are described as implementations used together. Of course, those skilled in the art will appreciate that such illustration does not limit the embodiments of the disclosure.
Based on the same concept, an embodiment of the disclosure further provides a touch detection device.
It can be understood that, in order to implement the above-mentioned functions, the touch detection device provided in the embodiments of the present disclosure includes a hardware structure and/or a software module that performs each function. Combining the example units and algorithm steps disclosed in the embodiments of the disclosure, the disclosed embodiments may be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints of the technical solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as beyond the scope of the embodiments of the present disclosure.
Fig. 10 is a block diagram of a touch detection device according to an exemplary embodiment. Referring to fig. 10, the apparatus includes a determination unit 101, a processing unit 102, an acquisition unit 103, and a detection unit 104.
The determining unit 101 is configured to, in response to detecting a touch input in the frame area and determining that the touch input meets a first condition, determine touch data corresponding to the touch input.
The processing unit 102 is configured to determine, according to touch data corresponding to the touch input, a first probability of the user touching the virtual shoulder key using a first neural network model; the first neural network model is a neural network model for determining the probability of a user touching a virtual shoulder key based on touch data.
The acquiring unit 103 is configured to acquire inertial data measured by the inertial measurement unit.
The processing unit 102 is further configured to determine, according to the inertial data, a second probability of the user touching the virtual shoulder key using a second neural network model; the second neural network model is used for determining the probability of a user touching the virtual shoulder key based on inertial data.
The detecting unit 104 is configured to detect touch control on the virtual shoulder key according to the first probability and the second probability.
In one embodiment, the detecting unit 104 detects the touch of the virtual shoulder key according to the first probability and the second probability in the following manner: weighting the first probability and the second probability to obtain a third probability of the user touching the virtual shoulder key; determining that the virtual shoulder key is touched under the condition that the third probability is larger than a probability threshold; and under the condition that the third probability is smaller than or equal to the probability threshold value, determining that the virtual shoulder key is not touched.
In another embodiment, the determining unit 101 determines that the touch input meets the first condition in the following manner: determining that the touch position corresponding to the touch input is located in the area range of the virtual shoulder key; or determining an anchor frame area corresponding to the touch position in the frame area, and determining that the virtual shoulder key is included in the anchor frame area.
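The two ways of satisfying the first condition can be sketched as follows. The rectangle representation of key and anchor-frame areas and the fixed anchor-frame half-size are illustrative assumptions:

```python
def rect_contains(rect, x, y):
    """rect = (left, top, right, bottom) in screen coordinates."""
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def rects_overlap(a, b):
    """True when axis-aligned rectangles a and b intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def meets_first_condition(touch, key_rect, anchor_half=20):
    """Condition 1: the touch position lies within the virtual shoulder
    key's area range. Condition 2: an anchor frame area around the touch
    position includes (reaches) the virtual shoulder key."""
    x, y = touch
    if rect_contains(key_rect, x, y):
        return True
    anchor = (x - anchor_half, y - anchor_half,
              x + anchor_half, y + anchor_half)
    return rects_overlap(anchor, key_rect)

key = (0, 0, 50, 20)
assert meets_first_condition((10, 10), key)       # inside the key area
assert meets_first_condition((60, 10), key)       # anchor frame reaches key
assert not meets_first_condition((200, 200), key)
```

Only touches passing this gate are forwarded to the first neural network model, which keeps ordinary bezel grips from triggering inference.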
In another embodiment, the apparatus further comprises: a control unit 105, configured to control motor vibration corresponding to the virtual shoulder key when it is determined that the virtual shoulder key is touched; wherein, different virtual shoulder keys correspond to different motors.
In another embodiment, the control unit 105 controls the motor vibration corresponding to the virtual shoulder key in the following manner: determining the target touch intensity with which the virtual shoulder key is touched; determining the number of target motors corresponding to the target touch intensity according to the correspondence between touch intensity and motor number; and controlling that number of target motors to vibrate.
In another embodiment, the vibrating motors are selected in order from near to far based on their distance from the virtual shoulder key.
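The intensity-to-motor-count correspondence and the near-to-far selection can be sketched together as follows. The three intensity levels and their motor counts are hypothetical; the patent leaves the actual correspondence configurable:

```python
# Assumed correspondence between touch intensity and motor count.
INTENSITY_TO_MOTOR_COUNT = {"light": 1, "medium": 2, "strong": 3}

def select_target_motors(intensity, motor_distances):
    """Look up the target motor number for the touch intensity, then
    pick that many motors in order of increasing distance from the
    virtual shoulder key (near to far)."""
    count = INTENSITY_TO_MOTOR_COUNT[intensity]
    ranked = sorted(motor_distances, key=motor_distances.get)
    return ranked[:count]

distances = {"m0": 4.0, "m1": 1.5, "m2": 2.5, "m3": 3.0}
assert select_target_motors("light", distances) == ["m1"]
assert select_target_motors("medium", distances) == ["m1", "m2"]
```

Stronger touches thus recruit more motors, while the near-to-far ordering keeps the haptic response spatially anchored to the key that was pressed.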
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be elaborated here.
Fig. 11 is a block diagram illustrating an apparatus 200 for touch detection according to an example embodiment. For example, apparatus 200 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 11, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the apparatus 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 202 may include one or more processors 220 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interactions between the processing component 202 and other components. For example, the processing component 202 may include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and the like. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 206 provides power to the various components of the device 200. The power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 208 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 200 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a Microphone (MIC) configured to receive external audio signals when the device 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 further includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor assembly 214 includes one or more sensors for providing status assessments of various aspects of the apparatus 200. For example, the sensor assembly 214 may detect the on/off state of the device 200 and the relative positioning of components, such as the display and keypad of the device 200; it may also detect a change in position of the device 200 or a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and a change in temperature of the device 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate communication between the apparatus 200 and other devices in a wired or wireless manner. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 216 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 204, including instructions executable by the processor 220 of the apparatus 200 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that the term "plurality" in this disclosure means two or more, and other quantifiers are similar thereto. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is further understood that the terms "first," "second," and the like are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the scope of the appended claims.

Claims (14)

  1. A touch detection method, characterized by being applied to a terminal, wherein a virtual shoulder key is arranged in a full-screen frame area of the terminal, and an inertial measurement unit is arranged in the terminal, the method comprising the following steps:
    in response to detecting touch input in the frame area, determining that the touch input meets a first condition, and determining touch data corresponding to the touch input;
    determining a first probability of a user touching the virtual shoulder key by using a first neural network model according to touch data corresponding to the touch input; the first neural network model is a neural network model for determining the probability of a user touching a virtual shoulder key based on touch data;
    acquiring inertial data measured by the inertial measurement unit;
    determining a second probability of a user touching the virtual shoulder key by using a second neural network model according to the inertial data; the second neural network model is used for determining the probability of a user touching the virtual shoulder key based on inertial data;
    and detecting touch control of the virtual shoulder key according to the first probability and the second probability.
  2. The method of claim 1, wherein detecting the touch of the virtual shoulder key based on the first probability and the second probability comprises:
    weighting the first probability and the second probability to obtain a third probability of the user touching the virtual shoulder key;
    determining that the virtual shoulder key is touched under the condition that the third probability is larger than a probability threshold;
    and under the condition that the third probability is smaller than or equal to the probability threshold value, determining that the virtual shoulder key is not touched.
  3. The method of claim 1, wherein the determining that the touch input satisfies a first condition comprises:
    determining that the touch position corresponding to the touch input is located within the area range of the virtual shoulder key; or,
    and determining an anchor frame area corresponding to the touch position in the frame area, and determining that the virtual shoulder key is included in the anchor frame area.
  4. The method according to claim 1, wherein the method further comprises:
    controlling a motor corresponding to the virtual shoulder key to vibrate under the condition that the virtual shoulder key is determined to be touched;
    wherein, different virtual shoulder keys correspond to different motors.
  5. The method of claim 4, wherein controlling the motor vibration corresponding to the virtual shoulder key comprises:
    determining the target touch intensity of the virtual shoulder key touched;
    determining the number of target motors corresponding to the target touch intensity according to the corresponding relation between the touch intensity and the number of motors;
    controlling the motor vibration of the target motor number.
  6. The method of any one of claims 4 to 5, wherein the vibrating motors are motors selected in order from near to far based on their distance from the virtual shoulder key.
  7. A touch detection device, characterized by being applied to a terminal, wherein a virtual shoulder key is arranged in a full-screen frame area of the terminal, and an inertial measurement unit is arranged in the terminal, the device comprising:
    the determining unit is used for responding to detection of touch input in the frame area, determining that the touch input meets a first condition and determining touch data corresponding to the touch input;
    the processing unit is used for determining a first probability of a user touching the virtual shoulder key by using a first neural network model according to touch data corresponding to the touch input; the first neural network model is a neural network model for determining the probability of a user touching a virtual shoulder key based on touch data;
    an acquisition unit configured to acquire inertial data measured by the inertial measurement unit;
    the processing unit is further used for determining a second probability of the user touching the virtual shoulder key by utilizing a second neural network model according to the inertial data; the second neural network model is used for determining the probability of a user touching the virtual shoulder key based on inertial data;
    and the detection unit is used for detecting touch control of the virtual shoulder key according to the first probability and the second probability.
  8. The apparatus according to claim 7, wherein the detection unit detects the touch of the virtual shoulder key according to the first probability and the second probability by:
    weighting the first probability and the second probability to obtain a third probability of the user touching the virtual shoulder key;
    determining that the virtual shoulder key is touched under the condition that the third probability is larger than a probability threshold;
    and under the condition that the third probability is smaller than or equal to the probability threshold value, determining that the virtual shoulder key is not touched.
  9. The apparatus according to claim 7, wherein the determining unit determines that the touch input satisfies a first condition by:
    determining that the touch position corresponding to the touch input is located within the area range of the virtual shoulder key; or,
    and determining an anchor frame area corresponding to the touch position in the frame area, and determining that the virtual shoulder key is included in the anchor frame area.
  10. The apparatus of claim 7, wherein the apparatus further comprises:
    the control unit is used for controlling the motor corresponding to the virtual shoulder key to vibrate under the condition that the virtual shoulder key is determined to be touched;
    wherein, different virtual shoulder keys correspond to different motors.
  11. The apparatus of claim 10, wherein the control unit controls the motor vibration corresponding to the virtual shoulder key in the following manner:
    determining the target touch intensity of the virtual shoulder key touched;
    determining the number of target motors corresponding to the target touch intensity according to the corresponding relation between the touch intensity and the number of motors;
    controlling the motor vibration of the target motor number.
  12. The apparatus of any one of claims 10 to 11, wherein the vibrating motors are motors selected in order from near to far based on their distance from the virtual shoulder key.
  13. A touch detection device, comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to perform the touch detection method of any one of claims 1 to 6.
  14. A storage medium having instructions stored therein which, when executed by a processor of a terminal, enable the terminal comprising the processor to perform the method of touch detection of any one of claims 1 to 6.
CN202280004620.5A 2022-05-31 2022-05-31 Touch detection method, device and storage medium Pending CN117501221A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/096207 WO2023230829A1 (en) 2022-05-31 2022-05-31 Touch detection method, apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN117501221A true CN117501221A (en) 2024-02-02

Family

ID=89026535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280004620.5A Pending CN117501221A (en) 2022-05-31 2022-05-31 Touch detection method, device and storage medium

Country Status (2)

Country Link
CN (1) CN117501221A (en)
WO (1) WO2023230829A1 (en)


Also Published As

Publication number Publication date
WO2023230829A1 (en) 2023-12-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination