WO2023230829A1 - Touch detection method, device and storage medium - Google Patents

Touch detection method, device and storage medium Download PDF

Info

Publication number
WO2023230829A1
WO2023230829A1 (PCT/CN2022/096207, CN2022096207W)
Authority
WO
WIPO (PCT)
Prior art keywords
probability
touch
virtual shoulder
shoulder key
virtual
Prior art date
Application number
PCT/CN2022/096207
Other languages
English (en)
French (fr)
Inventor
张逸帆
Original Assignee
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to PCT/CN2022/096207 priority Critical patent/WO2023230829A1/zh
Priority to CN202280004620.5A priority patent/CN117501221A/zh
Publication of WO2023230829A1 publication Critical patent/WO2023230829A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0354 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means

Definitions

  • the present disclosure relates to the field of terminal technology, and in particular, to a touch detection method, device and storage medium.
  • Terminal technology is increasingly used in daily life, and the application of virtual keys in terminals is also an important research topic.
  • in the related art, physical buttons in terminals, such as the shoulder keys on gaming terminals, bring great convenience to some users and facilitate their operations.
  • however, the design of physical buttons has limitations and cannot be extended to all users.
  • the present disclosure provides a touch detection method, device and storage medium.
  • a touch detection method is provided, applied to a terminal whose full-screen frame area is provided with a virtual shoulder key and in which an inertial measurement component is provided, including: in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, determining touch data corresponding to the touch input; determining, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that the user touches the virtual shoulder key, wherein the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data; obtaining inertial data measured by the inertial measurement component; determining, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key, wherein the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data; and detecting a touch on the virtual shoulder key according to the first probability and the second probability.
  • in one embodiment, detecting a touch on the virtual shoulder key according to the first probability and the second probability includes: weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key; when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched; and when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key has not been touched.
  • in one embodiment, determining that the touch input satisfies the first condition includes: determining that the touch position corresponding to the touch input is located within the region of the virtual shoulder key; or determining, in the frame area, an anchor box region corresponding to the touch position, wherein the virtual shoulder key is included in the anchor box region.
  • in one embodiment, the method further includes: when it is determined that the virtual shoulder key is touched, controlling a motor corresponding to the virtual shoulder key to vibrate; wherein different virtual shoulder keys correspond to different motors.
  • in one embodiment, controlling the vibration of the motor corresponding to the virtual shoulder key includes: determining the target touch intensity at which the virtual shoulder key is touched; determining, according to the correspondence between touch intensity and the number of motors, the target number of motors corresponding to the target touch intensity; and controlling the target number of motors to vibrate.
  • in one embodiment, the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
  • a touch detection device is provided, applied to a terminal; a virtual shoulder key is provided in a full-screen frame area of the terminal, and an inertial measurement component is provided in the terminal.
  • the device includes: a determining unit, configured to determine, in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, touch data corresponding to the touch input; a processing unit, configured to determine, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that the user touches the virtual shoulder key, wherein the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data; an acquisition unit, configured to acquire inertial data measured by the inertial measurement component; the processing unit being further configured to determine, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key, wherein the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data; and a detection unit, configured to detect a touch on the virtual shoulder key according to the first probability and the second probability.
  • in one embodiment, the detection unit detects the touch on the virtual shoulder key according to the first probability and the second probability in the following manner: weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key; when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched; and when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key has not been touched.
  • in one embodiment, the determining unit determines that the touch input satisfies the first condition in the following manner: determining that the touch position corresponding to the touch input is located within the region of the virtual shoulder key; or determining, in the frame area, an anchor box region corresponding to the touch position, wherein the virtual shoulder key is included in the anchor box region.
  • in one embodiment, the device further includes: a control unit, configured to control the vibration of the motor corresponding to the virtual shoulder key when it is determined that the virtual shoulder key is touched; wherein different virtual shoulder keys correspond to different motors.
  • in one embodiment, the control unit controls the vibration of the motor corresponding to the virtual shoulder key in the following manner: determining the target touch intensity at which the virtual shoulder key is touched; determining, according to the correspondence between touch intensity and the number of motors, the target number of motors corresponding to the target touch intensity; and controlling the target number of motors to vibrate.
  • in one embodiment, the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
  • a touch detection device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the touch detection method in the first aspect or any embodiment of the first aspect.
  • a storage medium is provided, in which instructions are stored; when the instructions in the storage medium are executed by a processor of a terminal, the terminal including the processor is enabled to perform the touch detection method in the first aspect or any embodiment of the first aspect.
  • the technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: touch data in the frame area of the terminal and inertial data of the terminal are detected to obtain a first probability and a second probability that the user touches the virtual shoulder key, and the two probabilities are fused to obtain the probability that the user touches the virtual shoulder key. Using both touch and sensor data improves the accuracy of distinguishing press scenarios from accidental touches, so that touch detection of the virtual shoulder key is more accurate, accidental touches are less likely to occur, and the user experience is improved.
  • FIG. 1 is a flow chart of a touch detection method according to an exemplary embodiment.
  • FIG. 2 is a flowchart of a method for detecting a touch on a virtual shoulder key according to a first probability and a second probability according to an exemplary embodiment.
  • FIG. 3 is a flowchart of a method for determining that a touch input satisfies a first condition according to an exemplary embodiment.
  • FIG. 4 is a flowchart of a touch detection method according to an exemplary embodiment.
  • FIG. 5 is a flow chart illustrating a method of controlling vibration of a motor corresponding to a virtual shoulder key according to an exemplary embodiment.
  • FIG. 6 is a flowchart illustrating a method of controlling vibration of a motor corresponding to a virtual shoulder key according to an exemplary embodiment.
  • FIG. 7 is a schematic diagram of a virtual key area in a mobile terminal according to an exemplary embodiment.
  • FIG. 8 is a schematic diagram of a sensor in a mobile terminal according to an exemplary embodiment.
  • FIG. 9 is a schematic diagram of a mobile terminal when there is pressure on the side of the screen according to an exemplary embodiment.
  • Figure 10 is a block diagram of a touch detection device according to an exemplary embodiment.
  • FIG. 11 is a block diagram of a device 200 for touch detection according to an exemplary embodiment.
  • in the related art, the shoulder key design in terminals has brought convenience to many game users.
  • however, applying the shoulder key design to game terminals has limitations: the audience is small and it cannot be promoted to all users.
  • the present disclosure provides a touch detection method that simulates virtual shoulder keys on a full-screen terminal, helping users perform multi-finger operations and reducing game difficulty, while expanding the audience of virtual shoulder keys and improving the user's gaming experience.
  • FIG. 1 is a flow chart of a touch detection method according to an exemplary embodiment. As shown in Figure 1, the touch detection method is applied in a terminal and includes the following steps.
  • step S11 in response to detecting the touch input in the frame area and determining that the touch input satisfies the first condition, touch data corresponding to the touch input is determined.
  • a detection unit is provided in the frame area of the terminal for detecting touch input.
  • the detection of touch input is to detect changes in capacitance and voltage of the terminal frame. It is determined that the touch input satisfies the first condition, and the touch data of the touch input is obtained.
  • in step S12, a first neural network model is used to determine, based on the touch data corresponding to the touch input, the first probability that the user touches the virtual shoulder key; the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data.
  • the neural network model is a complex network system model formed by a large number of simple processing units that are widely connected to each other. It reflects many basic characteristics of human brain functions and is a highly complex nonlinear dynamic learning system model.
  • the touch data is input into the first neural network model, and through the first neural network model, the output is the first probability of the user touching the virtual shoulder key.
  • the first neural network model is pre-trained, the input is touch data, and the output is the probability of touch.
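  • the disclosure does not specify the architecture, features or training of the first neural network model; purely as an illustration, the following is a minimal sketch of such a touch-data classifier written in Python with PyTorch (both the framework choice and the feature/layer sizes are assumptions made here, not part of the source):

```python
import torch
import torch.nn as nn

class TouchProbabilityModel(nn.Module):
    """Hypothetical stand-in for the 'first neural network model': it maps a
    flattened vector of frame-area touch data (e.g. capacitance/voltage
    changes) to the probability that the virtual shoulder key is touched."""

    def __init__(self, num_touch_features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_touch_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # output in [0, 1], read as the first probability Pr1
        )

    def forward(self, touch_features: torch.Tensor) -> torch.Tensor:
        return self.net(touch_features)

# One frame of (placeholder) touch data -> first probability Pr1.
model = TouchProbabilityModel()
pr1 = model(torch.zeros(1, 32)).item()
```

  • a second model of the same shape, fed with inertial features instead of touch features, would play the role of the second neural network model described below.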
  • step S13 the inertial data measured by the inertial measurement component is obtained.
  • the inertial measurement component is a device that measures the three-axis attitude angle (or angular rate) and acceleration of an object. It contains three accelerometers and three gyroscopes.
  • the accelerometers detect the acceleration signals of the terminal along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system; the angular velocity and acceleration of the terminal in three-dimensional space are measured and used to calculate the attitude of the terminal.
  • the inertial data includes the acceleration of the three axes of the terminal and the yaw angle.
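  • how the yaw angle is derived from the gyroscope is not detailed in the disclosure; as a rough, hypothetical sketch (rectangular integration of the z-axis angular rate, with the names and sampling interval invented here for illustration), the inertial feature vector could be assembled as follows:

```python
import numpy as np

def build_inertial_features(accel_xyz, gyro_z_rates, dt=0.01):
    """Package IMU readings into the 'inertial data' the disclosure mentions:
    three-axis acceleration plus a yaw angle, here approximated by integrating
    the z-axis angular rate (rad/s) over fixed time steps of length dt."""
    yaw = float(np.sum(np.asarray(gyro_z_rates) * dt))
    ax, ay, az = accel_xyz
    return np.array([ax, ay, az, yaw], dtype=np.float32)

# Example: a terminal lying roughly flat, with a small rotation around z.
features = build_inertial_features(accel_xyz=(0.02, -0.15, 9.81),
                                   gyro_z_rates=[0.00, 0.01, 0.03])
```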
  • a second neural network model is used to determine the second probability that the user touches the virtual shoulder key based on the inertial data; wherein the second neural network model is a neural network model that determines the probability that the user touches the virtual shoulder key based on the inertial data. network model.
  • the inertial data is input into the second neural network model, and through the second neural network model, the output is the second probability of the user touching the virtual shoulder key.
  • the second neural network model is pre-trained, the input is inertial data, and the output is the probability of touch.
  • step S15 a touch on the virtual shoulder key is detected according to the first probability and the second probability.
  • the first probability and the second probability are combined to detect the probability of touching the virtual shoulder key.
  • the touch detection method provided by the embodiment of the present disclosure obtains the first probability and the second probability by detecting the data of the user touching the terminal and the inertial data of the terminal, and detects the touch on the virtual shoulder key according to the first probability and the second probability. Using touch and sensor data at the same time improves the accuracy of distinguishing presses from accidental touches, so that touch detection of the virtual shoulder key is more accurate and accidental touches are less likely to occur.
  • Figure 2 is a flow chart of a method for detecting a touch on a virtual shoulder key according to a first probability and a second probability, according to an exemplary embodiment. As shown in Figure 2, the method of detecting the touch on the virtual shoulder key according to the first probability and the second probability includes the following steps.
  • step S21 the first probability and the second probability are weighted to obtain a third probability that the user touches the virtual shoulder key.
  • the weighting of the first probability and the second probability can be expressed as Pr = A*Pr1 + B*Pr2 + C, where Pr denotes the third probability, Pr1 the first probability, Pr2 the second probability, and A, B and C are constants.
  • step S22 if the third probability is greater than the probability threshold, it is determined that the virtual shoulder key is touched.
  • the probability threshold is set in advance, for example to 80%; when the third probability is greater than 80%, it is determined that the virtual shoulder key is touched.
  • step S23 if the third probability is less than or equal to the probability threshold, it is determined that the virtual shoulder key has not been touched.
  • for example, with the probability threshold set to 80%, when the third probability is less than or equal to 80%, it is determined that the virtual shoulder key has not been touched.
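  • reading steps S21 to S23 together, the fused decision can be sketched as below; Python, the function name and the concrete values of A, B and C are choices made here for illustration (the disclosure only states that A, B and C are constants and gives 80% as an example threshold):

```python
def detect_shoulder_key_touch(pr1: float, pr2: float,
                              a: float = 0.5, b: float = 0.5, c: float = 0.0,
                              threshold: float = 0.8) -> bool:
    """Weight the two model outputs (Pr = A*Pr1 + B*Pr2 + C) and compare the
    third probability Pr against the probability threshold (e.g. 80%)."""
    pr = a * pr1 + b * pr2 + c
    return pr > threshold  # True: key touched; False: key not touched

# The touch model is confident and the inertial model agrees: 0.875 > 0.8.
detect_shoulder_key_touch(pr1=0.90, pr2=0.85)  # -> True
```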
  • FIG. 3 is a flowchart of a method for determining that a touch input satisfies a first condition according to an exemplary embodiment. As shown in FIG. 3 , the method for determining that a touch input satisfies the first condition includes the following steps.
  • step S31 it is determined that the touch position corresponding to the touch input is located within the area of the virtual shoulder key.
  • a virtual shoulder key area is preset in the terminal frame area, where the length of the virtual shoulder key area may be equal to the length of the terminal, and the width may be a preset width.
  • for example, if the terminal is 20 cm long and 10 cm wide, the virtual shoulder key area can be 20 cm long and 2 cm wide. It is then determined whether the touch position of the touch input falls within the preset virtual shoulder key area.
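  • using the example dimensions above, the step S31 check reduces to a point-in-rectangle test; the sketch below assumes a coordinate system measured in centimetres from a corner of the frame area, which is not specified in the disclosure:

```python
def touch_in_shoulder_key_region(x_cm: float, y_cm: float,
                                 key_length_cm: float = 20.0,
                                 key_width_cm: float = 2.0) -> bool:
    """Return True if the touch position lies inside the preset virtual
    shoulder key strip (20 cm x 2 cm in the example)."""
    return 0.0 <= x_cm <= key_length_cm and 0.0 <= y_cm <= key_width_cm

touch_in_shoulder_key_region(12.5, 1.2)  # True: inside the strip
touch_in_shoulder_key_region(12.5, 4.0)  # False: too far from the edge
```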
  • step S32 alternatively, determine the anchor frame area corresponding to the touch position in the frame area, and determine that the anchor frame area includes the virtual shoulder key.
  • the anchor box region can be a pixel box used to complete the object detection task: all possible pixel boxes on the input image are traversed, the correct target box is selected, and its position and size are adjusted.
  • the anchor frame area corresponding to the touch position is determined, and the anchor frame area is determined to include the virtual shoulder key.
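  • the anchor-box variant of the first condition can be read as a containment test: at least one anchor box derived from the touch position must fully contain the virtual shoulder key rectangle. The sketch below shows only that final test; how the candidate anchor boxes are generated and scored is not reproduced, and the (x_min, y_min, x_max, y_max) box convention is an assumption:

```python
def box_contains(outer, inner) -> bool:
    """Axis-aligned test: does the anchor box (outer) fully contain the
    virtual shoulder key rectangle (inner)?"""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    return ox1 <= ix1 and oy1 <= iy1 and ox2 >= ix2 and oy2 >= iy2

def first_condition_met(anchor_boxes, shoulder_key_box) -> bool:
    """Step S32 reading: the condition holds if any anchor box corresponding
    to the touch position contains the virtual shoulder key."""
    return any(box_contains(box, shoulder_key_box) for box in anchor_boxes)
```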
  • FIG. 4 is a flow chart of a touch detection method according to an exemplary embodiment. As shown in FIG. 4 , the touch detection method includes the following steps.
  • step S41 when it is determined that the virtual shoulder key is touched, the motor corresponding to the virtual shoulder key is controlled to vibrate.
  • in the embodiment of the present disclosure, when it is determined that the virtual shoulder key is touched, the motor preset to correspond to that virtual shoulder key is controlled to vibrate.
  • the first virtual shoulder key corresponds to the first motor, and when the first virtual shoulder key is touched, the first motor is controlled to vibrate.
  • step S42 different virtual shoulder keys correspond to different motors.
  • the first virtual shoulder key corresponds to the first motor
  • the second virtual shoulder key corresponds to the second motor
  • Figure 5 is a flow chart of a method of controlling vibration of a motor corresponding to a virtual shoulder key according to an exemplary embodiment. As shown in Figure 5, the method of controlling vibration of a motor corresponding to a virtual shoulder key includes the following steps.
  • step S51 the target touch intensity at which the virtual shoulder key is touched is determined.
  • the target touch intensity may be a long press on the virtual shoulder key or a tap on the virtual shoulder key.
  • step S52 a target number of motors corresponding to the target touch intensity is determined based on the corresponding relationship between the touch intensity and the number of motors.
  • long-pressing the virtual shoulder key with high touch intensity can correspond to two motors, and clicking the virtual shoulder key with low touch intensity can correspond to one motor.
  • step S53 motor vibration of the target number of motors is controlled.
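  • the correspondence between touch intensity and motor count used in steps S51 to S53 can be kept in a simple lookup table; the string labels and the fallback below are illustrative only (the disclosure's example maps a long press to two motors and a tap to one):

```python
# Example correspondence between touch intensity and number of motors.
MOTOR_COUNT_BY_INTENSITY = {"tap": 1, "long_press": 2}

def target_motor_count(touch_intensity: str) -> int:
    """Step S52: number of motors to vibrate for the detected target touch
    intensity; unknown intensities fall back to a single motor here."""
    return MOTOR_COUNT_BY_INTENSITY.get(touch_intensity, 1)
```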
  • Figure 6 is a flow chart of a method of controlling vibration of a motor corresponding to a virtual shoulder key according to an exemplary embodiment. As shown in Figure 6, the method of controlling vibration of a motor corresponding to a virtual shoulder key includes the following steps.
  • step S61 the vibrating motors are determined in order of distance from the virtual shoulder key, from near to far.
  • touching a virtual shoulder key can be set to preferentially vibrate the motors closest to that virtual shoulder key.
  • step S62 the motors determined from near to far are controlled to vibrate.
  • for example, it can be configured so that when the first virtual shoulder key is tapped, the first motor closest to the first virtual shoulder key vibrates.
  • when the first virtual shoulder key is long-pressed, the first motor and the second motor closest to the first virtual shoulder key both vibrate.
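  • selecting the vibrating motors from near to far, as in steps S61 and S62, amounts to sorting the motors by their distance to the touched key; the 1-D edge coordinates below are an assumed layout used only for illustration:

```python
def motors_to_vibrate(motor_positions_cm, key_position_cm, count):
    """Pick `count` motor indices in order of increasing distance from the
    touched virtual shoulder key (near to far)."""
    ranked = sorted(range(len(motor_positions_cm)),
                    key=lambda i: abs(motor_positions_cm[i] - key_position_cm))
    return ranked[:count]

# Motors at 2 cm, 9 cm and 17 cm along the screen edge; key tapped near 3 cm.
motors_to_vibrate([2.0, 9.0, 17.0], key_position_cm=3.0, count=2)  # -> [0, 1]
```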
  • FIG. 7 is a schematic diagram of a virtual key area in a mobile terminal according to an exemplary embodiment.
  • the touch area of the mobile terminal can be an area of 10 cm * 5 cm, and the touch area can be an area close to the edge of any of the two long sides of the mobile terminal.
  • touch areas a and b can each be a small area of 2 cm * 3 cm.
  • the first way is to detect touch areas a and b: if the press data in touch area a or b is greater than a preset threshold, for example 50, a press operation is reported; if the press data in both touch areas a and b is less than or equal to the preset threshold, no press operation is reported.
  • the second way to determine whether the mobile terminal is pressed is to detect the virtual shoulder key area of the mobile terminal.
  • the virtual shoulder key area can be an area of 10 cm * 2 cm.
  • an anchor box is drawn using the object detection method.
  • the anchor box region can be a pixel box used to complete the object detection task: all possible pixel boxes on the input image are traversed, the correct target box is selected, and its position and size are adjusted. The anchor box containing area a or b is found and evaluated (its area is greater than or equal to 2 cm * 3 cm); if no such anchor box is found, no press is reported. This approach looks at global changes: a press usually changes more than just areas a and b, so the entire virtual shoulder key area is evaluated.
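  • the first, threshold-based way can be sketched as a simple comparison against the example threshold of 50; the representation of the press data (here a list of raw readings per area) is an assumption:

```python
PRESS_THRESHOLD = 50  # example threshold from the description

def report_press(area_a_samples, area_b_samples) -> bool:
    """Report a press if the press data in touch area a or b exceeds the
    preset threshold; report no press if every reading in both areas stays
    at or below it."""
    return any(v > PRESS_THRESHOLD
               for v in list(area_a_samples) + list(area_b_samples))

report_press([12, 55, 20], [5, 8, 3])  # True: area a crossed the threshold
report_press([12, 30, 20], [5, 8, 3])  # False: nothing exceeded 50
```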
  • the touch data is input into the first neural network model to obtain the touch probability, which is recorded as the first probability.
  • the first neural network model is preset to input touch data and output touch probability.
  • FIG. 8 is a schematic diagram of a sensor in a mobile terminal according to an exemplary embodiment.
  • FIG. 9 is a schematic diagram of a mobile terminal when there is pressure on the side of the screen according to an exemplary embodiment.
  • referring to FIG. 8, when the virtual shoulder key is pressed, the acceleration of the mobile terminal along the x and y axes and the gyroscope readings should change, and the yaw angle is calculated by the inertial measurement component.
  • inertial data including the yaw angle and the acceleration of the terminal is input into the second neural network model to obtain an inertial probability, which is recorded as the second probability.
  • the second neural network model is preset to take inertial data as input and output an inertial probability.
  • the preset press threshold may be, for example, 80%; if the third probability Pr is greater than the press threshold of 80%, it is considered that a press has occurred.
  • since a press can be a single press or a long press, the current press is considered finished only when a value below the threshold appears in the pressed area.
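  • that press-lifecycle rule can be captured by a tiny state machine that reports the start of a press when the fused value rises above the threshold and reports its end only once a value below the threshold appears again, so a long press counts as one press; the class below is a sketch under that reading, with the threshold and the use of the fused probability as the tracked value both being assumptions:

```python
class PressTracker:
    """Track whether a press on the virtual shoulder key is in progress."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.pressing = False

    def update(self, value: float) -> str:
        if not self.pressing and value > self.threshold:
            self.pressing = True
            return "press_started"
        if self.pressing and value <= self.threshold:
            self.pressing = False
            return "press_ended"
        return "holding" if self.pressing else "idle"

tracker = PressTracker()
[tracker.update(v) for v in (0.2, 0.9, 0.95, 0.93, 0.3)]
# -> ['idle', 'press_started', 'holding', 'holding', 'press_ended']
```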
  • one or several linear motors are added at the edge of the mobile terminal screen, and different keys are indicated by different motor vibrations.
  • for example, when the screen is held in landscape orientation there are left and right virtual shoulder keys: the left virtual shoulder key is named the first virtual shoulder key and the right virtual shoulder key is named the second virtual shoulder key.
  • it can be set so that tapping a virtual shoulder key causes one linear motor to vibrate and long-pressing a virtual shoulder key causes two linear motors to vibrate.
  • when the first virtual shoulder key is tapped, a linear motor close to the first virtual shoulder key vibrates; when the second virtual shoulder key is long-pressed, two linear motors close to the second virtual shoulder key vibrate. This can be configured by the user, or the keys can be distinguished by vibration intensity.
  • embodiments of the present disclosure also provide a touch detection device.
  • the touch detection device provided by the embodiment of the present disclosure includes hardware structures and/or software modules corresponding to each function.
  • the embodiments of the present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the technical solutions of the embodiments of the present disclosure.
  • Figure 10 is a block diagram of a touch detection device according to an exemplary embodiment.
  • the device includes a determination unit 101 , a processing unit 102 , an acquisition unit 103 and a detection unit 104 .
  • the determining unit 101 determines the touch data corresponding to the touch input in response to detecting the touch input in the frame area and determining that the touch input satisfies the first condition.
  • the processing unit 102 is configured to use a first neural network model to determine the first probability that the user touches the virtual shoulder key according to the touch data corresponding to the touch input; wherein the first neural network model is A neural network model that determines the probability of a user touching a virtual shoulder button based on touch data.
  • the acquisition unit 103 is used to acquire inertial data measured by the inertial measurement component.
  • the processing unit 102 is also configured to use a second neural network model to determine, based on the inertial data, the second probability that the user touches the virtual shoulder key; wherein the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data.
  • the detection unit 104 is configured to detect a touch on the virtual shoulder key according to the first probability and the second probability.
  • the detection unit 104 detects the touch on the virtual shoulder key according to the first probability and the second probability in the following manner: weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key; when the third probability is greater than the probability threshold, determining that the virtual shoulder key is touched; and when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key has not been touched.
  • the determining unit 101 determines that the touch input satisfies the first condition in the following manner: determines that the touch position corresponding to the touch input is located within the area of the virtual shoulder key; or , determine the anchor frame area corresponding to the touch position in the frame area, and determine that the virtual shoulder key is included in the anchor frame area.
  • the device further includes: a control unit 105, configured to control the vibration of the motor corresponding to the virtual shoulder key when it is determined that the virtual shoulder key is touched; wherein different virtual shoulder keys correspond to different motors.
  • the control unit 105 controls the vibration of the motor corresponding to the virtual shoulder key in the following manner: determining the target touch intensity at which the virtual shoulder key is touched; determining, according to the correspondence between touch intensity and the number of motors, the target number of motors corresponding to the target touch intensity; and controlling the target number of motors to vibrate.
  • the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
  • FIG. 11 is a block diagram of a device 200 for touch detection according to an exemplary embodiment.
  • the device 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • device 200 may include one or more of the following components: processing component 202, memory 204, power component 206, multimedia component 208, audio component 210, input/output (I/O) interface 212, sensor component 214, and Communication component 216.
  • Processing component 202 generally controls the overall operations of device 200, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 202 may include one or more processors 220 to execute instructions to complete all or part of the steps of the above method.
  • processing component 202 may include one or more modules that facilitate interaction between processing component 202 and other components.
  • processing component 202 may include a multimedia module to facilitate interaction between multimedia component 208 and processing component 202.
  • Memory 204 is configured to store various types of data to support operations at device 200 . Examples of such data include instructions for any application or method operating on device 200, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 204 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 206 provides power to various components of device 200 .
  • Power components 206 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 200 .
  • Multimedia component 208 includes a screen that provides an output interface between the device 200 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 208 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have a focal length and optical zoom capabilities.
  • Audio component 210 is configured to output and/or input audio signals.
  • audio component 210 includes a microphone (MIC) configured to receive external audio signals when device 200 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signals may be further stored in memory 204 or sent via communications component 216 .
  • audio component 210 also includes a speaker for outputting audio signals.
  • the I/O interface 212 provides an interface between the processing component 202 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 214 includes one or more sensors for providing various aspects of status assessment for device 200 .
  • the sensor component 214 can detect the open/closed state of the device 200 and the relative positioning of components, for example the display and keypad of the device 200; the sensor component 214 can also detect a change in position of the device 200 or of a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and temperature changes of the device 200.
  • Sensor assembly 214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 216 is configured to facilitate wired or wireless communication between apparatus 200 and other devices.
  • Device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 216 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 216 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • apparatus 200 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above method.
  • a non-transitory computer-readable storage medium including instructions, such as the memory 204 including instructions executable by the processor 220 of the device 200 to complete the above method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • “plurality” in this disclosure refers to two or more, and other quantifiers are similar.
  • “And/or” describes the relationship between related objects, indicating that there can be three relationships.
  • A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character “/” generally indicates that the related objects are in an “or” relationship.
  • the singular forms "a", "said" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
  • first, second, etc. are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other and do not imply a specific order or importance. In fact, expressions such as “first” and “second” can be used interchangeably.
  • first information may also be called second information, and similarly, the second information may also be called first information.

Abstract

The present disclosure relates to a touch detection method, device and storage medium. The touch detection method includes: in response to detecting a touch input in a frame area and determining that the touch input satisfies a first condition, determining touch data corresponding to the touch input; determining, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that the user touches a virtual shoulder key, the first neural network model being a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data; obtaining inertial data measured by an inertial measurement component; determining, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key, the second neural network model being a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data; and detecting a touch on the virtual shoulder key according to the first probability and the second probability. Through the present disclosure, the touch data and the sensor data of the terminal are detected at the same time, improving the accuracy of distinguishing presses from accidental touches.

Description

A touch detection method, device and storage medium
Technical Field
The present disclosure relates to the field of terminal technology, and in particular to a touch detection method, device and storage medium.
Background
Terminal technology is used more and more widely in daily life, and the application of virtual keys in terminals is an important research topic.
In the related art, physical keys in terminals bring great convenience to some users and facilitate their operations, for example the shoulder key design in gaming terminals. However, the design of physical keys has limitations and cannot be extended to all users.
Summary
To overcome the problems in the related art, the present disclosure provides a touch detection method, device and storage medium.
According to a first aspect of embodiments of the present disclosure, a touch detection method is provided, applied to a terminal, where a full-screen frame area of the terminal is provided with a virtual shoulder key and an inertial measurement component is provided in the terminal. The method includes: in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, determining touch data corresponding to the touch input; determining, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that the user touches the virtual shoulder key, where the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data; obtaining inertial data measured by the inertial measurement component; determining, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key, where the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data; and detecting a touch on the virtual shoulder key according to the first probability and the second probability.
In one embodiment, detecting the touch on the virtual shoulder key according to the first probability and the second probability includes: weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key; when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched; and when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key is not touched.
In another embodiment, determining that the touch input satisfies the first condition includes: determining that a touch position corresponding to the touch input is located within the region of the virtual shoulder key; or determining, in the frame area, an anchor box region corresponding to the touch position, and determining that the anchor box region contains the virtual shoulder key.
In another embodiment, the method further includes: when it is determined that the virtual shoulder key is touched, controlling a motor corresponding to the virtual shoulder key to vibrate, where different virtual shoulder keys correspond to different motors.
In another embodiment, controlling the motor corresponding to the virtual shoulder key to vibrate includes: determining a target touch intensity at which the virtual shoulder key is touched; determining, according to a correspondence between touch intensity and the number of motors, a target number of motors corresponding to the target touch intensity; and controlling the target number of motors to vibrate.
In another embodiment, the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
According to a second aspect of embodiments of the present disclosure, a touch detection device is provided, applied to a terminal, where a full-screen frame area of the terminal is provided with a virtual shoulder key and an inertial measurement component is provided in the terminal. The device includes: a determining unit, configured to determine, in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, touch data corresponding to the touch input; a processing unit, configured to determine, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that the user touches the virtual shoulder key, where the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data; an acquisition unit, configured to acquire inertial data measured by the inertial measurement component; the processing unit being further configured to determine, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key, where the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data; and a detection unit, configured to detect a touch on the virtual shoulder key according to the first probability and the second probability.
In one embodiment, the detection unit detects the touch on the virtual shoulder key according to the first probability and the second probability in the following manner: weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key; when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched; and when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key is not touched.
In another embodiment, the determining unit determines that the touch input satisfies the first condition in the following manner: determining that a touch position corresponding to the touch input is located within the region of the virtual shoulder key; or determining, in the frame area, an anchor box region corresponding to the touch position, and determining that the anchor box region contains the virtual shoulder key.
In another embodiment, the device further includes: a control unit, configured to control, when it is determined that the virtual shoulder key is touched, a motor corresponding to the virtual shoulder key to vibrate, where different virtual shoulder keys correspond to different motors.
In another embodiment, the control unit controls the motor corresponding to the virtual shoulder key to vibrate in the following manner: determining a target touch intensity at which the virtual shoulder key is touched; determining, according to a correspondence between touch intensity and the number of motors, a target number of motors corresponding to the target touch intensity; and controlling the target number of motors to vibrate.
In another embodiment, the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
According to a third aspect of embodiments of the present disclosure, a touch detection device is provided, including: a processor; and a memory for storing instructions executable by the processor, where the processor is configured to execute the touch detection method in the first aspect or any embodiment of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, a storage medium is provided, where instructions are stored in the storage medium, and when the instructions in the storage medium are executed by a processor of a terminal, the terminal including the processor is enabled to execute the touch detection method in the first aspect or any embodiment of the first aspect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: touch data in the frame area of the terminal and inertial data of the terminal are detected to obtain a first probability and a second probability that the user touches the virtual shoulder key, and the first probability and the second probability are fused to obtain the probability that the user touches the virtual shoulder key. Using both touch and sensor data improves the accuracy of distinguishing press scenarios from accidental touches, so that touch detection of the virtual shoulder key is more accurate, accidental touches are less likely to occur, and the user experience is improved.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
FIG. 1 is a flowchart of a touch detection method according to an exemplary embodiment.
FIG. 2 is a flowchart of a method for detecting a touch on a virtual shoulder key according to a first probability and a second probability, according to an exemplary embodiment.
FIG. 3 is a flowchart of a method for determining that a touch input satisfies a first condition, according to an exemplary embodiment.
FIG. 4 is a flowchart of a touch detection method according to an exemplary embodiment.
FIG. 5 is a flowchart of a method for controlling vibration of a motor corresponding to a virtual shoulder key, according to an exemplary embodiment.
FIG. 6 is a flowchart of a method for controlling vibration of a motor corresponding to a virtual shoulder key, according to an exemplary embodiment.
FIG. 7 is a schematic diagram of a virtual key area in a mobile terminal according to an exemplary embodiment.
FIG. 8 is a schematic diagram of sensors in a mobile terminal according to an exemplary embodiment.
FIG. 9 is a schematic diagram of a mobile terminal when the side of the screen is pressed, according to an exemplary embodiment.
FIG. 10 is a block diagram of a touch detection device according to an exemplary embodiment.
FIG. 11 is a block diagram of a device 200 for touch detection according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail herein, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure.
In the related art, the shoulder key design in terminals has benefited many game users, but applying the shoulder key design to gaming terminals has limitations: the audience is small and it cannot be promoted to all users.
The present disclosure provides a touch detection method to simulate virtual shoulder keys on a full-screen terminal, helping users perform multi-finger operations and reducing game difficulty, while expanding the audience of virtual shoulder keys and improving the user's gaming experience.
FIG. 1 is a flowchart of a touch detection method according to an exemplary embodiment. As shown in FIG. 1, the touch detection method is applied to a terminal and includes the following steps.
In step S11, in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, touch data corresponding to the touch input is determined.
In the embodiment of the present disclosure, a detection unit is provided in the frame area of the terminal for detecting the touch input. Detecting the touch input means detecting changes in the capacitance and voltage of the terminal frame. It is determined that the touch input satisfies the first condition, and the touch data of the touch input is obtained.
In step S12, according to the touch data corresponding to the touch input, a first neural network model is used to determine a first probability that the user touches the virtual shoulder key, where the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data.
In the embodiment of the present disclosure, a neural network model is a complex network system model formed by a large number of simple processing units that are widely interconnected; it reflects many basic characteristics of human brain functions and is a highly complex nonlinear dynamic learning system model. The touch data is input into the first neural network model, and the first neural network model outputs the first probability that the user touches the virtual shoulder key.
In the embodiment of the present disclosure, the first neural network model is pre-trained; its input is touch data and its output is the probability of a touch.
In step S13, inertial data measured by the inertial measurement component is obtained.
In the embodiment of the present disclosure, the inertial measurement component is a device that measures the three-axis attitude angles (or angular rates) and acceleration of an object. It contains three accelerometers and three gyroscopes: the accelerometers detect the acceleration signals of the terminal along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system; the angular velocity and acceleration of the terminal in three-dimensional space are measured, and the attitude of the terminal is calculated from them. The inertial data includes the acceleration of the terminal along the three axes and the yaw angle.
In step S14, according to the inertial data, a second neural network model is used to determine a second probability that the user touches the virtual shoulder key, where the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data.
In the embodiment of the present disclosure, the inertial data is input into the second neural network model, and the second neural network model outputs the second probability that the user touches the virtual shoulder key. The second neural network model is pre-trained; its input is inertial data and its output is the probability of a touch.
In step S15, a touch on the virtual shoulder key is detected according to the first probability and the second probability.
In the embodiment of the present disclosure, the first probability and the second probability are fused to detect the probability of a touch on the virtual shoulder key.
In the touch detection method provided by the embodiment of the present disclosure, the first probability and the second probability are obtained by detecting the data of the user touching the terminal and the inertial data of the terminal, and the touch on the virtual shoulder key is detected according to the first probability and the second probability. Using touch and sensor data at the same time improves the accuracy of distinguishing presses from accidental touches, so that touch detection of the virtual shoulder key is more accurate and accidental touches are less likely to occur.
The following embodiments of the present disclosure further explain and illustrate the method, in the above embodiments of the present disclosure, for detecting a touch on the virtual shoulder key according to the first probability and the second probability.
FIG. 2 is a flowchart of a method for detecting a touch on a virtual shoulder key according to a first probability and a second probability, according to an exemplary embodiment. As shown in FIG. 2, the method for detecting a touch on the virtual shoulder key according to the first probability and the second probability includes the following steps.
In step S21, the first probability and the second probability are weighted to obtain a third probability that the user touches the virtual shoulder key.
In the embodiment of the present disclosure, the weighting of the first probability and the second probability can be expressed by the following formula: Pr = A*Pr1 + B*Pr2 + C, where the third probability is denoted Pr, the first probability is denoted Pr1, the second probability is denoted Pr2, and A, B and C are constants.
In step S22, when the third probability is greater than a probability threshold, it is determined that the virtual shoulder key is touched.
In the embodiment of the present disclosure, the probability threshold is set in advance, for example to 80%; when the third probability is greater than 80%, it is determined that the virtual shoulder key is touched.
In step S23, when the third probability is less than or equal to the probability threshold, it is determined that the virtual shoulder key is not touched.
In the embodiment of the present disclosure, for example with the probability threshold set to 80%, when the third probability is less than or equal to 80%, it is determined that the virtual shoulder key is not touched.
The following embodiments of the present disclosure further explain and illustrate the method, in the above embodiments of the present disclosure, for determining that the touch input satisfies the first condition.
FIG. 3 is a flowchart of a method for determining that a touch input satisfies a first condition, according to an exemplary embodiment. As shown in FIG. 3, the method for determining that the touch input satisfies the first condition includes the following steps.
In step S31, it is determined that the touch position corresponding to the touch input is located within the region of the virtual shoulder key.
In the embodiment of the present disclosure, a virtual shoulder key region is preset in the frame area of the terminal, where the length of the virtual shoulder key region may be equal to the length of the terminal and the width may be a preset width. For example, if the terminal is 20 cm long and 10 cm wide, the virtual shoulder key region may be 20 cm long and 2 cm wide. It is determined whether the touch position of the touch input is within the preset virtual shoulder key region.
In step S32, alternatively, an anchor box region corresponding to the touch position is determined in the frame area, and it is determined that the anchor box region contains the virtual shoulder key.
In the embodiment of the present disclosure, the anchor box region can be a pixel box used to complete an object detection task by traversing all possible pixel boxes on the input image, selecting the correct target box, and adjusting its position and size.
In the embodiment of the present disclosure, the anchor box region corresponding to the touch position is determined, and it is determined that the anchor box region contains the virtual shoulder key.
The following embodiments of the present disclosure further explain and illustrate the touch detection method in the above embodiments of the present disclosure.
FIG. 4 is a flowchart of a touch detection method according to an exemplary embodiment. As shown in FIG. 4, the touch detection method includes the following steps.
In step S41, when it is determined that the virtual shoulder key is touched, the motor corresponding to the virtual shoulder key is controlled to vibrate.
In the embodiment of the present disclosure, when it is determined that the virtual shoulder key is touched, the motor preset to correspond to it is controlled to vibrate. For example, a first virtual shoulder key corresponds to a first motor; when the first virtual shoulder key is touched, the first motor is controlled to vibrate.
In step S42, different virtual shoulder keys correspond to different motors.
In the embodiment of the present disclosure, for example, the first virtual shoulder key corresponds to the first motor and the second virtual shoulder key corresponds to the second motor.
The following embodiments of the present disclosure further explain and illustrate the method, in the above embodiments of the present disclosure, for controlling vibration of the motor corresponding to the virtual shoulder key.
FIG. 5 is a flowchart of a method for controlling vibration of a motor corresponding to a virtual shoulder key, according to an exemplary embodiment. As shown in FIG. 5, the method for controlling vibration of the motor corresponding to the virtual shoulder key includes the following steps.
In step S51, a target touch intensity at which the virtual shoulder key is touched is determined.
In the embodiment of the present disclosure, the target touch intensity may be a long press on the virtual shoulder key or a tap on the virtual shoulder key.
In step S52, a target number of motors corresponding to the target touch intensity is determined according to the correspondence between touch intensity and the number of motors.
In the embodiment of the present disclosure, a long press on the virtual shoulder key, with high touch intensity, can correspond to two motors, and a tap on the virtual shoulder key, with low touch intensity, can correspond to one motor.
In step S53, the target number of motors is controlled to vibrate.
In the embodiment of the present disclosure, for example, when the virtual shoulder key is long-pressed, the two motors corresponding to the long press are controlled to vibrate; when the virtual shoulder key is tapped, the one motor corresponding to the tap is controlled to vibrate.
The following embodiments of the present disclosure further explain and illustrate the method, in the above embodiments of the present disclosure, for controlling vibration of the motor corresponding to the virtual shoulder key.
FIG. 6 is a flowchart of a method for controlling vibration of a motor corresponding to a virtual shoulder key, according to an exemplary embodiment. As shown in FIG. 6, the method for controlling vibration of the motor corresponding to the virtual shoulder key includes the following steps.
In step S61, the vibrating motors are determined in order of distance from the virtual shoulder key, from near to far.
In the embodiment of the present disclosure, touching the virtual shoulder key can be set to preferentially vibrate the motors close to the virtual shoulder key.
In step S62, the motors determined from near to far are controlled to vibrate.
In the embodiment of the present disclosure, it can be set that when the first virtual shoulder key is tapped, the first motor closest to the first virtual shoulder key is controlled to vibrate, and when the first virtual shoulder key is long-pressed, the first motor and the second motor closest to the first virtual shoulder key are controlled to vibrate.
Taking a mobile terminal as an example, the following illustrates the touch detection method involved in the above embodiments of the present disclosure and its practical application.
FIG. 7 is a schematic diagram of a virtual key area in a mobile terminal according to an exemplary embodiment.
Referring to FIG. 7, in the embodiment of the present disclosure, the touch area of the mobile terminal can be a 10 cm * 5 cm area; the touch area can be an area close to the edge of either of the two long sides of the mobile terminal, and touch areas a and b can each be a small 2 cm * 3 cm area.
In the embodiment of the present disclosure, there can be two ways to determine whether the mobile terminal is pressed. In the first way, touch areas a and b are detected: if the press data in touch areas a and b is greater than a preset threshold, for example 50, a press operation is reported; if the press data in both touch areas a and b is less than or equal to the preset threshold, no press operation is reported.
In the embodiment of the present disclosure, the second way to determine whether the mobile terminal is pressed is to detect the virtual shoulder key area of the mobile terminal; the virtual shoulder key area can be a 10 cm * 2 cm area. An anchor box is drawn by an object detection method; the anchor box region can be a pixel box used to complete the object detection task by traversing all possible pixel boxes on the input image, selecting the correct target box, and adjusting its position and size. The anchor box containing area a or b is found, and this anchor box (with an area of at least 2 cm * 3 cm) is evaluated; if it is not found, no press is reported. This approach looks at global changes: a press usually changes more than just areas a and b, so the entire virtual shoulder key area is evaluated.
In the embodiment of the present disclosure, the touch data is input into the first neural network model to obtain a touch probability, recorded as the first probability. The first neural network model is preset to take touch data as input and output a touch probability.
FIG. 8 is a schematic diagram of sensors in a mobile terminal according to an exemplary embodiment. FIG. 9 is a schematic diagram of a mobile terminal when the side of the screen is pressed, according to an exemplary embodiment.
Referring to FIG. 8, in the embodiment of the present disclosure, when the virtual shoulder key is pressed, the acceleration of the mobile terminal along the x and y axes and the gyroscope readings should change, and the yaw angle is calculated by the inertial measurement component.
In the embodiment of the present disclosure, inertial data including the yaw angle and the acceleration of the terminal is input into the second neural network model to obtain an inertial probability, recorded as the second probability. The second neural network model is preset to take inertial data as input and output an inertial probability.
In the embodiment of the present disclosure, the first probability and the second probability are fused to obtain the probability of a press on the virtual shoulder key of the mobile terminal as the third probability, which can be expressed by the following formula: Pr = A*Pr1 + B*Pr2 + C, where the first probability is denoted Pr1, the second probability is denoted Pr2, the third probability is denoted Pr, and A, B and C denote arbitrary constants. A press threshold is preset, for example 80%; if the third probability Pr is greater than the press threshold of 80%, it is considered that a press has occurred.
In the embodiment of the present disclosure, since a press can be a single press or a long press, the current press is considered finished only when a value below the threshold appears in the pressed area.
In the embodiment of the present disclosure, one or several linear motors are added at the edge of the screen of the mobile terminal, and different keys are indicated by different motor vibrations. For example, when the screen is held in landscape orientation, there are left and right virtual shoulder keys; the left virtual shoulder key is named the first virtual shoulder key and the right virtual shoulder key is named the second virtual shoulder key. It can be set so that tapping a virtual shoulder key causes one linear motor to vibrate and long-pressing a virtual shoulder key causes two linear motors to vibrate.
In the embodiment of the present disclosure, when the first virtual shoulder key is tapped, one linear motor close to the first virtual shoulder key vibrates; when the second virtual shoulder key is long-pressed, two linear motors close to the second virtual shoulder key vibrate. This can be configured by the user, or the keys can be distinguished by vibration intensity.
It should be noted that those skilled in the art can understand that the various implementations/embodiments described above in the embodiments of the present disclosure can be used together with the foregoing embodiments or used independently. Whether used alone or together with the foregoing embodiments, the implementation principles are similar. In the implementation of the present disclosure, some embodiments are described as being used together. Of course, those skilled in the art can understand that such illustration does not limit the embodiments of the present disclosure.
Based on the same concept, an embodiment of the present disclosure also provides a touch detection device.
It can be understood that, in order to implement the above functions, the touch detection device provided by the embodiment of the present disclosure includes hardware structures and/or software modules corresponding to each function. In combination with the units and algorithm steps of the examples disclosed in the embodiments of the present disclosure, the embodiments of the present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the technical solutions of the embodiments of the present disclosure.
FIG. 10 is a block diagram of a touch detection device according to an exemplary embodiment. Referring to FIG. 10, the device includes a determining unit 101, a processing unit 102, an acquisition unit 103 and a detection unit 104.
The determining unit 101 is configured to determine, in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, touch data corresponding to the touch input.
The processing unit 102 is configured to determine, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that the user touches the virtual shoulder key, where the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data.
The acquisition unit 103 is configured to acquire inertial data measured by the inertial measurement component.
The processing unit 102 is further configured to determine, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key, where the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data.
The detection unit 104 is configured to detect a touch on the virtual shoulder key according to the first probability and the second probability.
In one embodiment, the detection unit 104 detects the touch on the virtual shoulder key according to the first probability and the second probability in the following manner: weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key; when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched; and when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key is not touched.
In another embodiment, the determining unit 101 determines that the touch input satisfies the first condition in the following manner: determining that the touch position corresponding to the touch input is located within the region of the virtual shoulder key; or determining, in the frame area, an anchor box region corresponding to the touch position, and determining that the anchor box region contains the virtual shoulder key.
In another embodiment, the device further includes: a control unit 105, configured to control, when it is determined that the virtual shoulder key is touched, the motor corresponding to the virtual shoulder key to vibrate, where different virtual shoulder keys correspond to different motors.
In another embodiment, the control unit 105 controls the motor corresponding to the virtual shoulder key to vibrate in the following manner: determining a target touch intensity at which the virtual shoulder key is touched; determining, according to the correspondence between touch intensity and the number of motors, a target number of motors corresponding to the target touch intensity; and controlling the target number of motors to vibrate.
In another embodiment, the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
With respect to the device in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiments of the method, and will not be elaborated here.
FIG. 11 is a block diagram of a device 200 for touch detection according to an exemplary embodiment. For example, the device 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 11, the device 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls the overall operations of the device 200, such as operations associated with display, phone calls, data communication, camera operations, and recording operations. The processing component 202 may include one or more processors 220 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 202 may include one or more modules to facilitate interaction between the processing component 202 and other components. For example, the processing component 202 may include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the device 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and so on. The memory 204 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 206 provides power to the various components of the device 200. The power component 206 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 208 includes a front-facing camera and/or a rear-facing camera. When the device 200 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a microphone (MIC) configured to receive external audio signals when the device 200 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 204 or sent via the communication component 216. In some embodiments, the audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 214 includes one or more sensors for providing status assessments of various aspects of the device 200. For example, the sensor component 214 can detect the open/closed state of the device 200 and the relative positioning of components, for example the display and keypad of the device 200; the sensor component 214 can also detect a change in position of the device 200 or of a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and temperature changes of the device 200. The sensor component 214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the device 200 and other devices. The device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 200 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for executing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 204 including instructions executable by the processor 220 of the device 200 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It can be further understood that in the present disclosure, "plurality" means two or more, and other quantifiers are similar. "And/or" describes the association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. The singular forms "a", "said" and "the" are also intended to include the plural forms, unless the context clearly indicates otherwise.
It can be further understood that the terms "first", "second", etc. are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another and do not imply a particular order or degree of importance. In fact, expressions such as "first" and "second" can be used interchangeably. For example, without departing from the scope of the present disclosure, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
It can be further understood that although operations are described in a specific order in the drawings in the embodiments of the present disclosure, this should not be understood as requiring that these operations be performed in the specific order shown or in serial order, or that all of the operations shown be performed to obtain the desired result. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the technical field not disclosed by the present disclosure.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

  1. A touch detection method, characterized in that it is applied to a terminal, a full-screen frame area of the terminal is provided with a virtual shoulder key, and an inertial measurement component is provided in the terminal, the method comprising:
    in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, determining touch data corresponding to the touch input;
    determining, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that a user touches the virtual shoulder key; wherein the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data;
    obtaining inertial data measured by the inertial measurement component;
    determining, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key; wherein the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data;
    detecting a touch on the virtual shoulder key according to the first probability and the second probability.
  2. The method according to claim 1, characterized in that detecting the touch on the virtual shoulder key according to the first probability and the second probability comprises:
    weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key;
    when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched;
    when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key is not touched.
  3. The method according to claim 1, characterized in that determining that the touch input satisfies the first condition comprises:
    determining that a touch position corresponding to the touch input is located within the region of the virtual shoulder key; or,
    determining, in the frame area, an anchor box region corresponding to the touch position, and determining that the anchor box region contains the virtual shoulder key.
  4. The method according to claim 1, characterized in that the method further comprises:
    when it is determined that the virtual shoulder key is touched, controlling a motor corresponding to the virtual shoulder key to vibrate;
    wherein different virtual shoulder keys correspond to different motors.
  5. The method according to claim 4, characterized in that controlling the motor corresponding to the virtual shoulder key to vibrate comprises:
    determining a target touch intensity at which the virtual shoulder key is touched;
    determining, according to a correspondence between touch intensity and the number of motors, a target number of motors corresponding to the target touch intensity;
    controlling the target number of motors to vibrate.
  6. The method according to any one of claims 4 to 5, characterized in that the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
  7. A touch detection device, characterized in that it is applied to a terminal, a full-screen frame area of the terminal is provided with a virtual shoulder key, and an inertial measurement component is provided in the terminal, the device comprising:
    a determining unit, configured to determine, in response to detecting a touch input in the frame area and determining that the touch input satisfies a first condition, touch data corresponding to the touch input;
    a processing unit, configured to determine, according to the touch data corresponding to the touch input and using a first neural network model, a first probability that a user touches the virtual shoulder key; wherein the first neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on touch data;
    an acquisition unit, configured to acquire inertial data measured by the inertial measurement component;
    the processing unit being further configured to determine, according to the inertial data and using a second neural network model, a second probability that the user touches the virtual shoulder key; wherein the second neural network model is a neural network model that determines the probability of the user touching the virtual shoulder key based on inertial data;
    a detection unit, configured to detect a touch on the virtual shoulder key according to the first probability and the second probability.
  8. The device according to claim 7, characterized in that the detection unit detects the touch on the virtual shoulder key according to the first probability and the second probability in the following manner:
    weighting the first probability and the second probability to obtain a third probability that the user touches the virtual shoulder key;
    when the third probability is greater than a probability threshold, determining that the virtual shoulder key is touched;
    when the third probability is less than or equal to the probability threshold, determining that the virtual shoulder key is not touched.
  9. The device according to claim 7, characterized in that the determining unit determines that the touch input satisfies the first condition in the following manner:
    determining that a touch position corresponding to the touch input is located within the region of the virtual shoulder key; or,
    determining, in the frame area, an anchor box region corresponding to the touch position, and determining that the anchor box region contains the virtual shoulder key.
  10. The device according to claim 7, characterized in that the device further comprises:
    a control unit, configured to control, when it is determined that the virtual shoulder key is touched, a motor corresponding to the virtual shoulder key to vibrate;
    wherein different virtual shoulder keys correspond to different motors.
  11. The device according to claim 10, characterized in that the control unit controls the motor corresponding to the virtual shoulder key to vibrate in the following manner:
    determining a target touch intensity at which the virtual shoulder key is touched;
    determining, according to a correspondence between touch intensity and the number of motors, a target number of motors corresponding to the target touch intensity;
    controlling the target number of motors to vibrate.
  12. The device according to any one of claims 10 to 11, characterized in that the vibrating motors are motors determined in order of distance from the virtual shoulder key, from near to far.
  13. A touch detection device, characterized by comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the touch detection method according to any one of claims 1 to 6.
  14. A storage medium, characterized in that instructions are stored in the storage medium, and when the instructions in the storage medium are executed by a processor of a terminal, the terminal including the processor is enabled to execute the touch detection method according to any one of claims 1 to 6.
PCT/CN2022/096207 2022-05-31 2022-05-31 Touch detection method, device and storage medium WO2023230829A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/096207 WO2023230829A1 (zh) 2022-05-31 2022-05-31 Touch detection method, device and storage medium
CN202280004620.5A CN117501221A (zh) 2022-05-31 2022-05-31 Touch detection method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/096207 WO2023230829A1 (zh) 2022-05-31 2022-05-31 Touch detection method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2023230829A1 true WO2023230829A1 (zh) 2023-12-07

Family

ID=89026535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/096207 WO2023230829A1 (zh) 2022-05-31 2022-05-31 Touch detection method, device and storage medium

Country Status (2)

Country Link
CN (1) CN117501221A (zh)
WO (1) WO2023230829A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213419A (zh) * 2018-10-19 2019-01-15 北京小米移动软件有限公司 Touch operation processing method, device and storage medium
US20190265781A1 (en) * 2018-02-28 2019-08-29 Logitech Europe S.A. Precision tracking of user interaction with a virtual input device
CN111930274A (zh) * 2020-08-10 2020-11-13 Oppo(重庆)智能科技有限公司 Virtual key, electronic device, and touch operation detection method
US20200387245A1 (en) * 2019-06-05 2020-12-10 Apple Inc. Systems, methods, and computer-readable media for handling user input gestures on an extended trackpad of an electronic device
US20210311621A1 (en) * 2020-04-02 2021-10-07 Qualcomm Incorporated Swipe gestures on a virtual keyboard with motion compensation
WO2021254293A1 (zh) * 2020-06-16 2021-12-23 华为技术有限公司 Method for displaying a notification, and terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190265781A1 (en) * 2018-02-28 2019-08-29 Logitech Europe S.A. Precision tracking of user interaction with a virtual input device
CN109213419A (zh) * 2018-10-19 2019-01-15 北京小米移动软件有限公司 Touch operation processing method, device and storage medium
US20200387245A1 (en) * 2019-06-05 2020-12-10 Apple Inc. Systems, methods, and computer-readable media for handling user input gestures on an extended trackpad of an electronic device
US20210311621A1 (en) * 2020-04-02 2021-10-07 Qualcomm Incorporated Swipe gestures on a virtual keyboard with motion compensation
WO2021254293A1 (zh) * 2020-06-16 2021-12-23 华为技术有限公司 Method for displaying a notification, and terminal
CN111930274A (zh) * 2020-08-10 2020-11-13 Oppo(重庆)智能科技有限公司 Virtual key, electronic device, and touch operation detection method

Also Published As

Publication number Publication date
CN117501221A (zh) 2024-02-02

Similar Documents

Publication Publication Date Title
US10498873B2 (en) Screen control method, apparatus, and non-transitory tangible computer readable storage medium
WO2017124773A1 (zh) 手势识别方法及装置
US20170344192A1 (en) Method and device for playing live videos
US10331231B2 (en) Mobile terminal and method for determining scrolling speed
US20160210034A1 (en) Method and apparatus for switching display mode
EP3115982A1 (en) Method and apparatus for road condition prompt
US10612918B2 (en) Mobile computing device and method for calculating a bending angle
KR101788496B1 (ko) Apparatus and method for controlling a terminal and a video image
US10444953B2 (en) View angle switching method and apparatus
US10248855B2 (en) Method and apparatus for identifying gesture
EP3125512A1 (en) Silent ring indication while listening music over a headset
EP3147802A1 (en) Method and apparatus for processing information
CN107102801A (zh) Terminal screen rotation method and device
US20180139790A1 (en) Methods, apparatuses and storage medium for controlling a wireless connection
US20160313969A1 (en) Electronic apparatus, image display system, and recording medium
WO2016095395A1 (zh) Method and device for activating an operating state of a mobile terminal
US10846513B2 (en) Method, device and storage medium for processing picture
US9986075B2 (en) Mobile device including a substantially centrally located earpiece
CN107948876B (zh) Method, device and medium for controlling a speaker device
WO2023230829A1 (zh) Touch detection method, device and storage medium
CN108962189A (zh) Brightness adjustment method and device
CN108509863A (zh) Information prompting method and device, and electronic device
CN107203315A (zh) Click event processing method and device, and terminal
EP2924568A1 (en) Execution method and device for program string
CN112596696A (zh) Song recording method and device, terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22944167

Country of ref document: EP

Kind code of ref document: A1