CN111507144A - Touch area acquisition method and device, intelligent device and storage medium - Google Patents


Info

Publication number
CN111507144A
CN111507144A (application CN201910101447.0A; granted publication CN111507144B)
Authority
CN
China
Prior art keywords
touch
triggered
sensors
determining
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910101447.0A
Other languages
Chinese (zh)
Other versions
CN111507144B (en)
Inventor
Yang Kun (杨坤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910101447.0A
Publication of CN111507144A
Application granted
Publication of CN111507144B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/13 Sensors therefor
    • G06V40/1324 Sensors therefor by using geometrical optics, e.g. using prisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers

Abstract

The disclosure provides a touch area acquisition method and apparatus, a smart device, and a storage medium. The method includes: determining the number of triggered sensors on a touch module after a user performs a touch action; and determining the touch area based on the number of triggered sensors and a pre-stored sensor size. Only one parameter, the number of triggered sensors, needs to be obtained from the touch module, which reduces the amount of information the touch module must transmit, and a touch area computed from the sensor count and the sensor size is more accurate.

Description

Touch area acquisition method and device, intelligent device and storage medium
Technical Field
The present disclosure relates to the field of network communication technologies, and in particular, to a method and an apparatus for acquiring a touch area, an intelligent device, and a storage medium.
Background
With the continuous development of network communication technology, smart devices with touch screens have come into wide use. On such a device, when the touch module detects a user's touch action on the touch screen, it typically acquires the corresponding touch coordinates and reports them to a processor, which then responds to those coordinates. In some touch schemes, the touch module obtains touch area information along with the touch coordinates, so that the processor can act on both or pass both on to other modules. In the related art, the touch area is usually represented either by a resolution or by a default ratio range (for example, the range 0 to 1).
However, in implementing the invention, the inventors found the following problems. First, when a resolution is used as the touch area information, the touch module reports it (for example, a 120 × 120 resolution) together with every touch coordinate; if several touch points exist at once, the total data volume grows large and increases the processor's read load. Second, when a default ratio range (such as 0 to 1) is used as the touch area information, the reported data volume shrinks, but the value does not correspond to the actual physical touch area and cannot represent its real size, so the resulting area data performs poorly in certain touch schemes.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a touch area acquisition method, apparatus, smart device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, a touch area obtaining method is provided, which is applied to an intelligent device, where the intelligent device includes a touch module, and the touch module includes a plurality of sensors; the method comprises the following steps:
determining the number of triggered sensors on the touch module after a user performs a touch action;
determining a touch area based on the number of triggered sensors and a pre-stored sensor size.
Optionally, a triggered sensor is a sensor that generates a touch signal based on the touch action, where the magnitude of the touch signal is greater than a preset threshold.
Optionally, the number of triggered sensors includes a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction in a preset two-dimensional coordinate system.
Optionally, determining the number of triggered sensors on the touch module comprises:
receiving, from the touch module, the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system;
determining the number of triggered sensors based on the count of arrangement numbers or coordinates.
Optionally, the method further comprises:
determining the position of each triggered sensor based on its arrangement number or its coordinates in the preset two-dimensional coordinate system.
Optionally, the smart device further comprises an under-screen fingerprint module;
the determining of a touch area based on the number of triggered sensors and a pre-stored sensor size specifically includes:
determining the area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and performing under-screen fingerprint acquisition; the fingerprint light spot provides exposure for the under-screen fingerprint module.
Optionally, before the performing of the under-screen fingerprint acquisition, the method further includes:
determining the location of the fingerprint spot based on the location of the respective triggered sensor.
Optionally, the under-screen fingerprint module comprises a plurality of light-emitting sub-modules; the fingerprint light spot is generated based on the light-emitting sub-module;
then said determining the area of the fingerprint spot based on the number of triggered sensors and the pre-stored sensor size comprises:
determining the trigger number of the light-emitting sub-modules based on the number of triggered sensors.
The determining of the location of the fingerprint spot based on the positions of the respective triggered sensors comprises:
determining the trigger positions of the light-emitting sub-modules based on the positions of the triggered sensors, so as to adjust the position of the fingerprint light spot.
Optionally, the method further comprises:
if the touch area is greater than or equal to a preset area threshold, determining that the touch is a palm touch, and executing the function corresponding to palm touch;
if the touch area is smaller than the preset area threshold, determining that the touch is a finger touch, and executing the function corresponding to finger touch.
Optionally, the method further comprises:
determining a corresponding trigger event based on the touch action and the touch area, and executing the trigger event.
Optionally, the method further comprises:
acquiring a matched input code according to the touch area, and unlocking.
Optionally, the obtaining the matched input code according to the touch area includes:
determining a touch region based on the positions of the respective triggered sensors;
acquiring a matched input code according to the touch region and the touch area.
According to a second aspect of the embodiments of the present disclosure, a touch area obtaining apparatus is provided, which is applied to an intelligent device, where the intelligent device includes a touch module, and the touch module includes a plurality of sensors; the device comprises:
the quantity determining unit is used for determining the quantity of the triggered sensors on the touch module after a user performs a touch action;
and the area determining unit is used for determining the touch area based on the number of the triggered sensors and the prestored sensor size.
Optionally, a triggered sensor is a sensor that generates a touch signal based on the touch action, where the magnitude of the touch signal is greater than a preset threshold.
Optionally, the number of triggered sensors includes a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction in a preset two-dimensional coordinate system.
Optionally, the number determination unit comprises a sensor determination subunit, a transmission subunit and a number determination subunit;
the sensor determining subunit is configured to determine a triggered sensor on the touch module after a user performs a touch action;
the transmission subunit is configured to receive, from the touch module, the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system;
the number determination subunit is configured to determine the number of triggered sensors based on the arrangement number or the number of coordinates.
Optionally, the method further comprises:
the position determining unit is configured to determine the position of each triggered sensor based on its arrangement number or its coordinates in the preset two-dimensional coordinate system.
Optionally, the smart device further comprises an under-screen fingerprint module;
the area determining unit specifically includes:
the area acquisition subunit is configured to determine the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, the fingerprint light spot providing exposure for the under-screen fingerprint module; and
the fingerprint acquisition subunit is configured to perform under-screen fingerprint acquisition.
Optionally, the apparatus further comprises, upstream of the fingerprint acquisition subunit:
a position determining subunit configured to determine the position of the fingerprint spot based on the positions of the respective triggered sensors.
Optionally, the under-screen fingerprint module comprises a plurality of light-emitting sub-modules; the fingerprint light spot is generated based on the light-emitting sub-module;
the area obtaining subunit includes:
determining the trigger number of the light-emitting sub-modules based on the number of triggered sensors.
The position determining subunit includes:
determining the trigger positions of the light-emitting sub-modules based on the positions of the triggered sensors, so as to adjust the position of the fingerprint light spot.
Optionally, the method further comprises:
the execution unit is configured to: when the touch area is greater than or equal to a preset area threshold, determine that the touch is a palm touch and execute the function corresponding to palm touch; and when the touch area is smaller than the preset area threshold, determine that the touch is a finger touch and execute the function corresponding to finger touch.
Optionally, the method further comprises:
the execution unit is further configured to determine a corresponding trigger event based on the touch action and the touch area, and execute the trigger event.
Optionally, the method further comprises:
the unlocking unit is configured to acquire a matched input code according to the touch area and to perform unlocking.
Optionally, the unlocking unit includes:
a touch region determination subunit configured to determine a touch region based on the positions of the respective triggered sensors;
an input code acquisition subunit configured to acquire a matched input code according to the touch region and the touch area; and
an unlocking subunit configured to unlock based on the input code.
According to a third aspect of the embodiments of the present disclosure, there is provided a smart device, including a touch module, a processor, and a memory;
the memory is used for storing a computer program executed by the processor;
the touch module is configured to acquire touch signals and, according to the touch signals, to notify the processor of the number of sensors triggered;
the processor is configured to: determining a number of triggered sensors on the touch module, determining a touch area based on the number of triggered sensors and a pre-stored sensor size.
Optionally, the smart device further comprises an under-screen fingerprint module;
the touch module is further configured to notify the under-screen fingerprint module, according to the touch signals, of the number of sensors triggered;
the under-screen fingerprint module is configured to determine the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and to perform under-screen fingerprint acquisition; the fingerprint light spot provides exposure for the under-screen fingerprint module.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the intelligent device, after a user performs a touch action, the number of the triggered sensors on the touch module is determined, then the touch area is determined based on the number of the triggered sensors and the size of the prestored sensors, and only the parameter of the number of the triggered sensors needs to be obtained from the touch module, so that the information quantity needing to be transmitted by the touch module is reduced, and the obtained touch area is more accurate based on the number of the sensors and the size of the sensors.
Further, the triggered sensors are those whose touch-signal magnitude, generated by the touch action, is greater than a preset threshold; screening the sensors that generate touch signals in this way determines the number of triggered sensors more accurately and improves detection accuracy.
Further, the present disclosure provides a data transmission format containing two parameters, the maximum number of sensors triggered in the x-axis direction and the maximum number triggered in the y-axis direction, thereby reducing the amount of information transmitted.
Further, the present disclosure provides another data transmission format: the touch module may transmit the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system, so that the smart device determines the number of triggered sensors from the count of arrangement numbers or coordinates, again reducing the amount of information transmitted.
Further, the present disclosure may also determine the position of each triggered sensor based on its arrangement number or coordinates.
Further, the smart device of the present disclosure also includes an under-screen fingerprint module, which can perform fingerprint acquisition based on the determined area of the fingerprint light spot, so that the acquired fingerprint has high recognition accuracy.
Further, the under-screen fingerprint module can also determine the position of the fingerprint light spot based on the positions of the triggered sensors, so that the fingerprint identification precision is further improved.
Further, the under-screen fingerprint module can determine the triggering number of the light-emitting sub-modules based on the number of the triggered sensors, so that the area of the fingerprint light spot can be determined simply, conveniently and quickly; and determining the triggering position of the light-emitting sub-module based on the position of each triggered sensor, thereby conveniently and quickly determining the position of the fingerprint light spot.
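As a rough sketch of the sub-module selection just described, assuming a hypothetical one-to-one mapping between sensor cells and light-emitting sub-modules (the disclosure does not fix this correspondence):

```python
def light_spot_plan(triggered_positions):
    """Choose which light-emitting sub-modules to trigger: one per triggered
    sensor cell, so the spot's size and position track the touch directly."""
    cells = sorted(set(triggered_positions))  # deduplicated sub-module cells
    return cells, len(cells)                  # positions to light, and how many

# Hypothetical triggered positions taken from the fig. 2 style example
cells, n = light_spot_plan([(4, 3), (4, 4), (5, 4), (6, 4)])
```

The count `n` would size the spot and `cells` would place it; a real driver would also clamp the spot to the panel bounds.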
Furthermore, the method and the device can execute different functions based on the size relation between the touch area and the preset area threshold value, provide more operation choices and improve the convenience of a user when using the intelligent device.
Furthermore, different trigger events can be executed based on the touch action and the touch area, more operation options are provided, and convenience of a user in using the intelligent device is improved.
Furthermore, the intelligent device disclosed by the invention can match the input code based on the touch area, so that the unlocking function is realized under the condition that the password is not exposed, and the unlocking safety is improved.
Furthermore, the smart device can acquire a matched input code according to the touch region and the touch area, providing more forms of input code acquisition and improving unlocking security.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flow chart illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating the touch module determining coordinates of a triggered sensor according to an exemplary embodiment of the present disclosure;
FIG. 3 is another flow chart diagram illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a third flowchart illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating touch area division according to an exemplary embodiment of the present disclosure;
FIG. 6 is a fourth flowchart illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram illustrating a touch area acquisition device according to an exemplary embodiment of the present disclosure;
fig. 8 is a block diagram illustrating the structure of a smart device according to an exemplary embodiment of the present disclosure.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
To solve the problems in the related art that representing the touch area by a resolution transmits a large amount of information, and that representing it by a preset ratio range cannot convey the actual touch area, the embodiments of the present disclosure provide a touch area acquisition method. The method can be applied to a smart device, meaning any device, instrument, or machine with computing capability, for example a device with a touch screen such as a smartphone, smart tablet, computer, electronic whiteboard, or smart home product. In the present disclosure, the smart device includes a touch module containing a plurality of sensors; the sensors have the same size and are arranged uniformly in sequence to form a sensor matrix, and the matrix corresponds to the touch screen, so that the touch module can respond to a user's touch actions on the touch screen.
As shown in fig. 1, fig. 1 is a flowchart illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure, where the smart device includes a processor, and the method may be executed by the processor and includes:
in step S101, after the user performs a touch action, the number of triggered sensors on the touch module is determined.
In step S102, a touch area is determined based on the number of triggered sensors and a pre-stored sensor size.
In step S101, when a user performs a touch action such as a tap or a slide on the touch screen of the smart device, the sensors on the touch module generate touch signals based on that action. The touch module acquires the touch signals and, according to them, notifies the processor of the smart device of the number of sensors triggered; that is, the touch module can derive the number of triggered sensors from the number of touch signals generated, so that the processor learns how many sensors were triggered after the user's touch action. In this step, only one parameter, the number of triggered sensors, needs to be obtained from the touch module, which markedly reduces the amount of information the touch module must transmit. The specific type of sensor is not limited in any way: any sensor that can generate a touch signal from a user's touch action may be used, and the type can be chosen according to the actual situation.
In one possible implementation, the touch module may transmit data to the processor of the smart device in a data transmission format carrying the maximum number of sensors triggered in the x-axis direction and the maximum number triggered in the y-axis direction; that is, the number of triggered sensors determined by the processor includes these two maxima in a preset two-dimensional coordinate system. In one example, the touch module transmits data in the format (1; 2). The meaning of each parameter can be set according to the actual situation: in one case, the first parameter represents the x-axis direction and the second the y-axis direction; in another case, the first parameter represents the y-axis direction and the second the x-axis direction. Under the first convention, (1; 2) means that at most 1 sensor is triggered in the x-axis direction and at most 2 in the y-axis direction. Because only these two parameters are transmitted, this embodiment reduces the amount of information transmitted.
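A minimal decoder for this two-parameter format might look as follows; the payload string, the ";" delimiter, and the field order are assumptions of this sketch, since the disclosure leaves the concrete encoding open:

```python
def parse_max_counts(payload: str) -> tuple[int, int]:
    """Decode a payload such as "1;2" into (x_max, y_max): the maximum
    number of sensors triggered along each axis.  Treating the first
    field as the x-axis is a convention chosen for this sketch; the
    disclosure permits either ordering."""
    first, second = payload.split(";")
    return int(first), int(second)
```

For the (1; 2) example above, `parse_max_counts("1;2")` yields `(1, 2)`.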
In another possible implementation, the touch module may transmit the arrangement numbers of the triggered sensors, or their coordinates in a preset two-dimensional coordinate system, to the processor of the smart device, so that the processor can determine the number of triggered sensors from the count of arrangement numbers or coordinates. Referring to fig. 2, which illustrates the case of transmitted coordinates: suppose a 7 × 5 sensor matrix is formed on the touch module, and the sensors triggered by the user's touch action are those in the black area. With the abscissa as the first parameter and the ordinate as the second, the touch module transmits the coordinate information (4, 3; 4, 4; 5, 4; 6, 4) to the smart device, indicating that 4 sensors are triggered, so the smart device can determine the number of triggered sensors from the number of coordinates.
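The coordinate-list format of the fig. 2 example can be sketched the same way; the textual payload and its delimiters are again assumptions of this sketch, not a wire format fixed by the disclosure:

```python
def parse_coordinates(payload: str):
    """Decode a payload such as "4,3;4,4;5,4;6,4" into a list of (x, y)
    sensor coordinates; the triggered-sensor count is just its length."""
    return [tuple(int(v) for v in pair.split(","))
            for pair in payload.split(";")]

coords = parse_coordinates("4,3;4,4;5,4;6,4")
num_triggered = len(coords)  # 4 triggered sensors, matching fig. 2
```

The same list also preserves each sensor's position, which the later fingerprint-spot placement relies on.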
In one embodiment, when the user's touch part contacts the touch screen, the sensors corresponding to the touch position generate touch signals. There is, however, a boundary case: sensors near the edge of the touch part, although not actually contacted, may still generate weak touch signals because the touch part comes close to them. In the present disclosure, the touch module therefore compares the magnitude of each generated touch signal with a preset threshold and counts as triggered only the sensors whose signal magnitude exceeds that threshold, so that the number of actually triggered sensors is determined accurately and detection accuracy improves. The specific value of the preset threshold is not limited and can be set according to the actual situation.
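The threshold screening described above can be sketched as a simple filter; the dictionary layout and the magnitude values are illustrative assumptions:

```python
def triggered_sensors(signals, threshold):
    """Keep only sensors whose touch-signal magnitude exceeds the preset
    threshold, discarding the weak signals from sensors the touch part
    merely came near without contacting."""
    return {pos: mag for pos, mag in signals.items() if mag > threshold}

# Hypothetical raw readings: two contacted cells, one near-miss cell
readings = {(4, 3): 0.9, (4, 4): 0.8, (3, 3): 0.1}
actual = triggered_sensors(readings, threshold=0.5)
```

Only the entries above the threshold survive, so `len(actual)` is the accurate triggered-sensor count.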
In step S102, after the number of triggered sensors is determined, the processor determines the touch area based on that number and the pre-stored sensor size, where the sensor size depends on the type of sensor chosen, which the present disclosure does not limit. For example, if a single sensor measures 1 mm × 1 mm and 2 sensors are determined to be triggered, the touch area is 2 mm². The actual touch area is thereby determined, making the acquired touch area more accurate.
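The area computation of step S102 reduces to one multiplication; the 1 mm × 1 mm default mirrors the example above and is only an assumed sensor size:

```python
def touch_area_mm2(num_triggered, sensor_w_mm=1.0, sensor_h_mm=1.0):
    """Touch area = triggered-sensor count x per-sensor area (step S102).
    The 1 mm x 1 mm default is the illustrative size from the example."""
    return num_triggered * sensor_w_mm * sensor_h_mm
```

With the example's figures, `touch_area_mm2(2)` gives 2.0 mm²; a finer 0.5 mm pitch would be passed in explicitly.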
The smart device determines the number of triggered sensors on the touch module after the user performs a touch action and then determines the touch area based on that number and the pre-stored sensor size. Only one parameter, the number of triggered sensors, needs to be obtained from the touch module, which reduces the amount of information the touch module must transmit, and the touch area obtained from the sensor count and sensor size is more accurate.
FIG. 3 is another flow chart diagram illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure. Fig. 3 depicts aspects of the present disclosure in more detail with respect to fig. 1.
As shown in fig. 3, the method may be applied to the smart device and executed by a processor of the smart device, and includes the steps of:
in step S301, after a user performs a touch action, determining the number of triggered sensors on the touch module; similar to step S101 in fig. 1, the description is omitted here.
In step S302, determining a touch area based on the number of triggered sensors and a pre-stored sensor size; similar to step S102 in fig. 1, the description is omitted here.
In step S303, different functions are executed depending on whether the touch area is greater than or equal to a preset area threshold: if it is, the touch is determined to be a palm touch and the function corresponding to palm touch is executed; if it is smaller than the threshold, the touch is determined to be a finger touch and the function corresponding to finger touch is executed.
In step S303, after obtaining the touch area corresponding to the user's touch action, the smart device further checks whether the touch area is greater than or equal to the preset area threshold. If so, it determines that the user touched the device with a palm and executes the function corresponding to palm touch; otherwise it determines that the user touched with a finger and executes the function corresponding to finger touch.
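The palm-versus-finger decision of step S303 can be sketched as a one-line classifier; the threshold value passed in is an assumption to be tuned per device:

```python
def classify_touch(area_mm2, threshold_mm2):
    """Palm touch if the area reaches the preset threshold, otherwise
    finger touch (the step S303 comparison)."""
    return "palm" if area_mm2 >= threshold_mm2 else "finger"
```

The caller would then dispatch to the function bound to the returned touch type.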
In an embodiment, the smart device may further determine a corresponding trigger event based on both the touch action and the touch area. A touch action can be of several types, such as a tap, a slide, or a long press, and the trigger event is chosen from the combination of the action type and the size of the touch area. The specific function bound to each trigger event is not limited in any way and can be set according to the actual situation. For example, when the touch area is greater than or equal to the preset area threshold and the touch is judged to be a palm touch, the trigger event is further selected by the action type: a palm slide may execute a screenshot function, and a palm long press may execute a sharing function. When the touch area is smaller than the threshold and the touch is judged to be a finger touch, a finger slide may execute a page-turning function, and a finger tap may execute a link-jump function. This embodiment provides more operation options and makes the smart device more convenient to use.
It can be seen that once the touch area is obtained, trigger events can be set based on its size, offering the user more convenient operation choices, reducing excessive key-press steps, and improving the convenience of using the smart device.
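For illustration only (the patent specifies no code), the palm/finger decision of step S303 and the action-dependent trigger events described above can be sketched as follows; the function names, the action labels, and the threshold value are assumptions, not part of the disclosed embodiments:

```python
# Hypothetical sketch of the palm/finger classification and trigger-event
# selection; the threshold and event names are illustrative assumptions.
AREA_THRESHOLD_MM2 = 400.0  # preset area threshold (assumed value)

def classify_touch(touch_area_mm2: float, action: str) -> str:
    """Return the trigger event for a touch, based on area and action type."""
    if touch_area_mm2 >= AREA_THRESHOLD_MM2:
        # touch area at or above the threshold: treated as a palm touch
        events = {"slide": "screenshot", "long_press": "share"}
    else:
        # touch area below the threshold: treated as a finger touch
        events = {"slide": "page_turn", "click": "link_jump"}
    return events.get(action, "none")

print(classify_touch(500.0, "slide"))  # palm + slide -> screenshot
print(classify_touch(50.0, "click"))   # finger + click -> link_jump
```

The same structure extends to any other action/area combinations the user or program configures.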
FIG. 4 is another flowchart illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure. Fig. 4 describes aspects of the present disclosure in more detail than fig. 1.
As shown in fig. 4, the method may be applied to the smart device and executed by a processor of the smart device, and includes the steps of:
in step S401, after the user performs a touch action, the number of triggered sensors on the touch module is determined; this step is similar to step S101 in fig. 1 and is not described again here.
In step S402, the touch area is determined based on the number of triggered sensors and the pre-stored sensor size; this step is similar to step S102 in fig. 1 and is not described again here.
In step S403, a matched input code is obtained according to the touch area, and unlocking is performed.
In step S403, the user may first preset a correspondence between touch areas and input codes; the present disclosure places no limitation on this. In one example, an input code "1" is assigned to a first area range and an input code "2" to a second area range, where the ranges may be set according to the actual situation, e.g. the first area range smaller than the second. After obtaining the touch area, the unlocking module obtains the matched input code according to the touch area and the preset correspondence; for example, when the touch area falls within the first area range, the corresponding input code "1" is obtained. The unlocking module then unlocks based on the obtained input code, realizing the unlocking function without exposing the password and improving unlocking security.
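A minimal sketch of this area-to-input-code unlock mapping, for illustration only; the area ranges, the codes, and the assumption that successive touches are concatenated into a password are all illustrative, not specified by the patent:

```python
# Hypothetical area-range-to-code table: first range smaller than second,
# as in the example above. Units and bounds are assumed.
AREA_RANGES = [
    (0.0, 100.0, "1"),    # first area range -> input code "1"
    (100.0, 300.0, "2"),  # second area range -> input code "2"
]

def code_for_area(area_mm2: float):
    """Return the input code matched by a touch area, or None."""
    for low, high, code in AREA_RANGES:
        if low <= area_mm2 < high:
            return code
    return None

def unlock(touch_areas, password: str) -> bool:
    """Concatenate the code of each successive touch and compare (assumed scheme)."""
    entered = "".join(code_for_area(a) or "?" for a in touch_areas)
    return entered == password

print(unlock([50.0, 150.0], "12"))  # two touches entering "1" then "2" -> True
```

Because an onlooker sees only touches of varying size, the code sequence itself is never displayed.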
In an embodiment, the processor of the smart device may further determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system, and then determine the touch region from those positions. As shown in fig. 5, the processor may divide the touch screen into several touch regions in advance, based on a program setting or a user selection; the number of regions may be set according to the actual situation. The correspondences between touch regions, touch areas, and input codes may be preset by the user. In one example, the first area range represents input code "1", the second area range represents input code "2", touch region 1 represents input code "1A", and touch region 2 represents input code "2B". After determining both the touch region and the touch area, the processor obtains a matching input code from the two and unlocks based on it: for example, if the determined touch region is touch region 2 and the touch area falls within the first area range, then, with the region code ordered before the area code, the matching input code is "2B1". This embodiment provides more forms of input-code acquisition, realizing personalized password setting for the user and further improving unlocking security.
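For illustration, the region-plus-area composition yielding codes like "2B1" can be sketched as follows; the 1×2 screen grid, the region/area tables, and the "region code before area code" ordering are illustrative assumptions:

```python
# Hypothetical tables matching the example above.
REGION_CODES = {1: "1A", 2: "2B"}                     # touch region -> code
AREA_CODES = [(0.0, 100.0, "1"), (100.0, 300.0, "2")]  # area range -> code

def region_of(x: float, y: float, screen_h: float = 144.0) -> int:
    """Assumed 1x2 grid: region 1 is the upper half, region 2 the lower half."""
    return 1 if y < screen_h / 2 else 2

def input_code(x: float, y: float, area_mm2: float) -> str:
    region_code = REGION_CODES[region_of(x, y)]
    area_code = next(c for lo, hi, c in AREA_CODES if lo <= area_mm2 < hi)
    return region_code + area_code  # region code ordered before area code

# A small touch (first area range) in the lower half of the screen:
print(input_code(10.0, 100.0, 50.0))  # -> "2B1"
```

One touch thus contributes both where it landed and how large it was to the entered code.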
In one possible implementation, when the touch module transmits data to the processor of the smart device in a data transmission format consisting of the maximum number of triggered sensors in the x-axis direction and the maximum number of triggered sensors in the y-axis direction, the processor may acquire the touch shape and the touch angle relative to a specified reference position from those two numbers.
It can be understood that the present disclosure does not limit the reference position for the touch angle: the angle may be described relative to, for example, the upper left corner of the touch screen or its upper edge. Nor are the touch shape and touch angle limited in any way. For example, if the maximum number of sensors triggered in the x-axis direction equals the maximum number triggered in the y-axis direction, the touch shape may be a square or a circle; if the two numbers differ, the touch shape may be a rectangle or an ellipse. Furthermore, if the maximum number triggered in the x-axis direction is greater than or equal to that in the y-axis direction, the touch is angled toward the x-axis relative to the specified reference position; if it is smaller, the touch is angled toward the y-axis. In one example, the touch module transmits data to the smart device in the format (1; 2). If the first parameter represents the x-axis direction and the second the y-axis direction, then 1 sensor was triggered along the x axis and 2 along the y axis, so it can be determined that the touch shape is a rectangle and the touch is angled toward the y-axis. If instead the first parameter represents the y-axis direction and the second the x-axis direction, then 1 sensor was triggered along the y axis and 2 along the x axis, so it can be determined that the touch shape is a rectangle and the touch is angled toward the x-axis.
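The shape/angle rules above can be sketched directly; the return labels are illustrative names, not terms from the patent:

```python
# Hypothetical sketch: derive touch shape and tilt direction from the
# (x_max; y_max) transmission format described above.
def shape_and_angle(x_max: int, y_max: int):
    """x_max/y_max: maximum number of triggered sensors along each axis."""
    # equal maxima -> square or circle; unequal -> rectangle or ellipse
    shape = "square_or_circle" if x_max == y_max else "rectangle_or_ellipse"
    # tilt toward the axis with more triggered sensors, relative to the
    # specified reference position (x wins ties, per the rule above)
    angle = "x_axis" if x_max >= y_max else "y_axis"
    return shape, angle

print(shape_and_angle(1, 2))  # ('rectangle_or_ellipse', 'y_axis')
print(shape_and_angle(3, 3))  # ('square_or_circle', 'x_axis')
```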
In another possible implementation, the smart device may determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system, determine the touch shape from those positions, and determine the touch angle relative to a specified reference position from the relationships between the positions. Referring to fig. 2, taking coordinate transmission as the example, the touch module transmits the coordinate information (4, 3; 4, 4; 5, 4; 6, 4) to the smart device. The smart device can thereby determine the positions of the 4 triggered sensors, determine the touch shape more accurately from those positions, and accurately determine the touch angle relative to the specified reference position from the positional relationships; for example, with the upper edge of the touch screen as the reference, the touch is inclined by a certain angle to the y-axis (vertical) direction.
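For illustration, parsing the coordinate payload and estimating a tilt angle could look like the following; the payload format mirrors the (4, 3; 4, 4; 5, 4; 6, 4) example, while the first-to-last-sensor angle heuristic is an assumption:

```python
import math

def parse_coords(payload: str):
    """Parse '4,3; 4,4; 5,4; 6,4' into a list of (x, y) tuples."""
    return [tuple(int(v) for v in pair.split(",")) for pair in payload.split(";")]

def touch_angle_deg(coords) -> float:
    """Angle of the touch's principal direction relative to the x axis,
    estimated (assumed heuristic) from the first and last triggered sensor."""
    (x0, y0), (x1, y1) = coords[0], coords[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

coords = parse_coords("4,3; 4,4; 5,4; 6,4")
print(len(coords))              # 4 triggered sensors, as in fig. 2
print(touch_angle_deg(coords))  # tilt relative to the x axis, in degrees
```

A real implementation might fit a principal axis through all sensor positions rather than using only the endpoints.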
In an embodiment, the correspondences between the touch area, the touch shape, the touch angle, and input codes may be preset by the user; of course, the present disclosure places no limitation on this. In one example, input code "1" is set for the first area range and input code "2" for the second area range; input code "a" is set for an oval shape and input code "B" for a circular shape; input code "a" is set for the horizontal direction and input code "B" for the vertical direction; and so on. After determining the touch shape and the touch angle relative to the designated reference position, the processor of the smart device obtains the matched input code according to the touch area, the touch shape, and the touch angle, and performs unlocking. This embodiment provides more forms of input-code acquisition and further improves unlocking security.
In an embodiment, the user may further preset a correspondence between combinations of touch area, touch shape, and touch angle and input codes. After the user performs a touch action, the processor determines a matched input code based on the determined touch area, touch shape, touch angle, and the correspondence. In this embodiment, a single touch action by the user suffices to obtain a higher-security input code for the unlocking operation, which facilitates the user's operation and further improves unlocking security.
By acquiring and matching the input code based on the touch area, the unlocking function can be realized without exposing the password, improving unlocking security.
In addition, in the related art, under-screen fingerprint acquisition usually exposes the under-screen fingerprint module for a fixed time under a fingerprint light spot of fixed brightness, so as to meet the light energy required by the under-screen fingerprint sensor, which then acquires the fingerprint based on that light energy; such a fixed light spot cannot adapt to the actual touch. In view of this, the present application provides a touch area acquisition method. Please refer to fig. 6, which is a flowchart illustrating a touch area acquisition method according to an exemplary embodiment of the present disclosure. The method of this embodiment may be applied to a smart device that further includes an under-screen fingerprint module; furthermore, the method may be executed by the under-screen fingerprint module and includes:
in step S601, after the user performs a touch action, the number of triggered sensors on the touch module is determined.
In step S602, the area of the fingerprint light spot is determined based on the number of triggered sensors and the pre-stored sensor size, and under-screen fingerprint acquisition is performed; the fingerprint light spot is used to expose the under-screen fingerprint module.
In step S601, when the user performs a touch action such as a click or slide on the touch screen of the smart device, the sensors on the touch module generate touch signals based on the touch action. The touch module acquires these signals and, according to them, notifies the under-screen fingerprint module of the number of triggered sensors; that is, the touch module can obtain the number of triggered sensors from the number of generated touch signals, so that the under-screen fingerprint module learns, after the user's touch action, how many sensors on the touch module were triggered. In this step, only the single parameter of the triggered-sensor count needs to be obtained from the touch module, which significantly reduces the amount of information the touch module must transmit.
In one possible implementation, the touch module may transmit data to the under-screen fingerprint module of the smart device in a data transmission format consisting of the maximum number of sensors triggered in the x-axis direction and the maximum number of sensors triggered in the y-axis direction; that is, the number of triggered sensors determined by the under-screen fingerprint module includes both maxima in a preset two-dimensional coordinate system. In one example, the touch module transmits data to the smart device in the format (1; 2). It can be understood that the specific meaning of each parameter may be set according to the actual situation: in one case, the first parameter represents the x-axis direction and the second the y-axis direction, indicating that 1 sensor was triggered along the x axis and 2 along the y axis; in another case, the first parameter represents the y-axis direction and the second the x-axis direction, indicating that 1 sensor was triggered along the y axis and 2 along the x axis. With this transmission format, the under-screen fingerprint module can quickly obtain the triggered-sensor counts while keeping the amount of transmitted information small.
In another possible implementation, the touch module may transmit the arrangement numbers of the triggered sensors, or their coordinates in a preset two-dimensional coordinate system, to the under-screen fingerprint module of the smart device, so that the module can determine the number of triggered sensors from the count of arrangement numbers or coordinates.
In one embodiment, when the user's touch part contacts the touch screen, the sensors corresponding to the touch position generate touch signals. There is, however, a case in which sensors at the edge of the touched region, although not actually contacted, generate weak touch signals merely because the touch part is near them. In the present disclosure, the touch module therefore compares the signal magnitude of each sensor's touch signal with a preset threshold and counts as triggered only those sensors whose signal magnitude exceeds the threshold, so that the number of triggered sensors can be determined accurately and detection accuracy is improved. The specific value of the preset threshold is not limited in any way and can be set according to the actual situation.
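The threshold filtering can be sketched as follows; the signal values and the threshold are illustrative assumptions:

```python
# Hypothetical sketch: only sensors whose signal magnitude exceeds the
# preset threshold count as triggered, excluding weak "proximity" signals.
PRESET_THRESHOLD = 0.5  # assumed signal-magnitude threshold

def count_triggered(signals) -> int:
    """signals: one signal magnitude per sensor."""
    return sum(1 for s in signals if s > PRESET_THRESHOLD)

# Edge sensors near (but not under) the finger report only weak signals:
signals = [0.9, 0.8, 0.7, 0.85, 0.1, 0.05]
print(count_triggered(signals))  # 4 sensors actually triggered
```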
In step S602, the under-screen fingerprint module determines the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size. The position of the light spot may be a preset position; the module then performs fingerprint acquisition based on the spot's area and that position. This prevents a light spot that is too large or too small from degrading the final fingerprint identification, ensures the exposure of the under-screen fingerprint module is appropriate so that the acquired fingerprint image has clearer texture, and improves fingerprint identification accuracy.
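For illustration, the spot-area computation of step S602 reduces to multiplying the triggered-sensor count by the per-sensor area; the sensor dimensions below are assumed values:

```python
# Hypothetical pre-stored sensor size (assumed values, in millimetres).
SENSOR_W_MM = 4.0
SENSOR_H_MM = 4.0

def spot_area_mm2(n_triggered: int) -> float:
    """Match the fingerprint light spot to the touched area:
    one sensor's area times the number of triggered sensors."""
    return n_triggered * SENSOR_W_MM * SENSOR_H_MM

print(spot_area_mm2(4))  # spot area for a 4-sensor touch, as in fig. 2
```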
In an embodiment, the under-screen fingerprint module may further determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system, and then determine the position of the fingerprint light spot from those sensor positions. The module can then perform under-screen fingerprint acquisition based on both the determined spot area and spot position, adjusting the light spot to the current touch. The acquired fingerprint image is thereby clearer and more accurate, further improving fingerprint identification accuracy.
In an embodiment, the under-screen fingerprint module includes a plurality of light-emitting sub-modules, and the fingerprint light spot is generated by these sub-modules. The module may determine the number of sub-modules to trigger from the number of triggered sensors, thereby determining the area of the generated light spot; it may also quickly determine the trigger positions of the sub-modules from the positions of the triggered sensors, thereby determining the position of the light spot. This embodiment provides a simpler and quicker way to determine the area and position of the fingerprint light spot, which helps improve the response speed of the under-screen fingerprint module. It should be noted that the present application does not limit the light-emitting sub-module; it may be, for example, a light-emitting diode.
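As an illustration only, driving the light-emitting sub-modules from the triggered sensors could look like this; the one-to-one sensor-to-sub-module mapping is an assumption, not stated by the patent:

```python
# Hypothetical sketch: one light-emitting sub-module (e.g. LED) per triggered
# sensor, so the count fixes the spot area and the coordinates fix its position.
def leds_to_trigger(triggered_sensor_coords):
    """Return how many sub-modules to light and where (assumed 1:1 mapping)."""
    count = len(triggered_sensor_coords)
    positions = sorted(triggered_sensor_coords)
    return count, positions

count, positions = leds_to_trigger([(4, 3), (4, 4), (5, 4), (6, 4)])
print(count)      # 4 sub-modules lit -> spot area follows the sensor count
print(positions)  # sub-module positions follow the triggered-sensor positions
```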
In the present disclosure, the area of the fingerprint light spot is determined based on the triggered-sensor count received from the touch module and the pre-stored sensor size information, so that the under-screen fingerprint module is exposed within an appropriate range, a fingerprint image with clear texture is acquired, and fingerprint identification accuracy is improved.
Corresponding to the foregoing embodiments of the method, the present disclosure further provides a touch area acquisition apparatus, which can be applied to the smart device. In the present disclosure, the smart device includes a touch module comprising a plurality of sensors of the same size, arranged uniformly in sequence to form a sensor matrix. The sensor matrix corresponds to the touch screen, so that the touch module can respond to a user's touch actions on the touch screen.
FIG. 7 is a block diagram illustrating a touch area acquisition apparatus according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the apparatus 700 comprises:
a number determination unit 701, configured to determine the number of triggered sensors on the touch module after a user performs a touch action.
An area determination unit 702, configured to determine a touch area based on the number of triggered sensors and a pre-stored sensor size.
In an embodiment, a triggered sensor is a sensor that generates a touch signal based on the touch action, with the signal magnitude of the touch signal greater than a preset threshold.
In one embodiment, the number of triggered sensors includes a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction in a preset two-dimensional coordinate system.
In an embodiment, the number determination unit 701 includes a sensor determination subunit, a transmission subunit, and a number determination subunit.
The sensor determining subunit is configured to determine a triggered sensor on the touch module after a user performs a touch action.
The transmission subunit is configured to receive the arrangement numbers of the triggered sensors, or their coordinates in a preset two-dimensional coordinate system, as transmitted by the touch module.
The number determination subunit is configured to determine the number of triggered sensors based on the arrangement number or the number of coordinates.
In one embodiment, the apparatus further comprises:
a position determining unit, configured to determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system.
In an embodiment, the smart device further comprises an off-screen fingerprint module.
The area determining unit 702 specifically includes:
the area acquisition subunit is used for determining the area of the fingerprint light spot based on the number of the triggered sensors and the size of the prestored sensors; the fingerprint light spot is used for exposing the under-screen fingerprint module; and
and the fingerprint acquisition subunit is used for acquiring fingerprints under the screen.
In an embodiment, the apparatus further includes, before the fingerprint acquisition subunit:
a position determining subunit for determining the position of the fingerprint spot based on the position of the respective triggered sensor.
In one embodiment, the under-screen fingerprint module comprises a plurality of light-emitting sub-modules; the fingerprint light spot is generated based on the light emitting sub-module.
The area obtaining subunit includes:
determining a number of activations of the light emitting sub-modules based on the number of activated sensors.
The position determining subunit includes:
and determining the triggering position of the light-emitting sub-module based on the position of each triggered sensor, so as to adjust the position of the fingerprint light spot.
In one embodiment, the apparatus further comprises:
the execution unit is used for judging that the palm touches the touch screen when the touch area is larger than or equal to a preset area threshold value, and executing a function corresponding to the palm touch; and when the touch area is smaller than a preset area threshold, judging that the finger touches, and executing a function corresponding to the finger touch.
In one embodiment, the apparatus further comprises:
the execution unit is further configured to determine a corresponding trigger event based on the touch action and the touch area, and execute the trigger event.
In one embodiment, the apparatus further comprises:
and the unlocking unit is used for acquiring the matched input code according to the touch area and unlocking.
In one embodiment, the unlocking unit includes:
a touch area determination subunit configured to determine a touch area based on the positions of the respective triggered sensors.
And the input code acquisition subunit is used for acquiring the matched input code according to the touch area and the touch area.
And the unlocking subunit is used for unlocking based on the input code.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units.
Corresponding to the foregoing embodiments of the method, the present disclosure further provides a smart device, which includes a touch module, a processor, and a memory.
The memory is used for storing computer programs executed by the processor.
The touch module is configured to acquire touch signals and, according to them, notify the processor of the number of triggered sensors.
The processor is configured to: determining a number of triggered sensors on the touch module, determining a touch area based on the number of triggered sensors and a pre-stored sensor size.
In an embodiment, the smart device further comprises an off-screen fingerprint module.
The touch module is further used for informing the under-screen fingerprint module of the number of the sensors triggered by the touch signal according to the touch signal.
The under-screen fingerprint module is configured to determine the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and to perform under-screen fingerprint acquisition; the fingerprint light spot is used to expose the under-screen fingerprint module. It should be further noted that, for the other programs stored in the memory 703, reference may be made to the description in the foregoing method flow, which is not repeated here; the processor 702 is further configured to execute those other programs.
As shown in fig. 8, fig. 8 is another structural block diagram of the smart device shown in the present disclosure according to an exemplary embodiment. The smart device 800 may include one or more of the following components: a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, an input/output (I/O) interface 806, a sensor component 807, and a communication component 808.
The processing component 801 generally controls overall operations of the smart device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 801 may include one or more processors 809 for executing instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 801 may include one or more modules that facilitate interaction between the processing component 801 and other components. For example, the processing component 801 may include a multimedia module to facilitate interaction between the multimedia component 804 and the processing component 801.
The memory 802 is configured to store various types of data to support operations at the smart device 800. Examples of such data include instructions for any application or method operating on the smart device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 802 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 803 provides power to the various components of the smart device 800. The power components 803 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the smart device 800.
The multimedia component 804 includes a screen that provides an output interface between the smart device 800 and a user. The screen may display corresponding contents, and the screen may include a Touch Panel (TP) implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 804 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the smart device 800 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability. In some embodiments, the multimedia component 804 may include an off-screen fingerprint module, which may include one or more sensors, may perform off-screen fingerprint acquisition based on a touch action of a user, perform corresponding operations, and so on.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a Microphone (MIC) configured to receive external audio signals when the smart device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 802 or transmitted via the communication component 808. In some embodiments, the audio component 805 also includes a speaker for outputting audio signals.
The I/O interface 806 provides an interface between the processing component 801 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 807 includes one or more sensors for providing status assessment of various aspects of the smart device 800. For example, the sensor component 807 may detect the open/closed status of the smart device 800, the relative positioning of components, such as a display and keypad of the smart device 800, the sensor component 807 may also detect a change in the position of the smart device 800 or a component of the smart device 800, the presence or absence of user contact with the smart device 800, orientation or acceleration/deceleration of the smart device 800, and a change in temperature of the smart device 800. Sensor assembly 807 may comprise a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 807 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 807 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a temperature sensor, a photoelectric sensor, or a GPS sensor.
The smart device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR (5G New Radio), or a combination thereof. In one exemplary embodiment, the communication component 808 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 808 further includes a Near Field Communication (NFC) module to facilitate short-range communication.
In an exemplary embodiment, the smart device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
The implementation process of the functions and actions of each component in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the apparatus embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described device embodiments are merely illustrative, wherein the components described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Corresponding to the foregoing embodiment of the application function implementation method, the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor of the foregoing smart device, implements the steps of the touch area acquisition method described in any of the foregoing embodiments.
The present disclosure may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (27)

1. A touch area acquisition method, applied to a smart device, wherein the smart device comprises a touch module and the touch module comprises a plurality of sensors, the method comprising:
determining the number of triggered sensors on the touch module after a user performs a touch action;
determining a touch area based on the number of triggered sensors and a pre-stored sensor size.
2. The method of claim 1, wherein a triggered sensor is a sensor that generates a touch signal based on the touch action, the touch signal having a signal magnitude greater than a preset threshold.
3. The method of claim 1, wherein the number of triggered sensors comprises a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction in a preset two-dimensional coordinate system.
4. The method of claim 1, wherein determining the number of triggered sensors on the touch module comprises:
receiving, from the touch module, the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system; and
determining the number of triggered sensors based on the number of arrangement numbers or coordinates received.
5. The method of claim 4, further comprising:
determining the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in the preset two-dimensional coordinate system.
6. The method of claim 5, wherein the smart device further comprises an under-screen fingerprint module;
the determining a touch area based on the number of triggered sensors and a pre-stored sensor size specifically comprises:
determining the area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and performing under-screen fingerprint acquisition; wherein the fingerprint light spot is used to expose the under-screen fingerprint module.
7. The method of claim 6, further comprising, prior to the under-screen fingerprint acquisition:
determining the position of the fingerprint light spot based on the position of each triggered sensor.
8. The method of claim 7, wherein the under-screen fingerprint module comprises a number of light-emitting sub-modules, and the fingerprint light spot is generated by the light-emitting sub-modules;
the determining the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size comprises:
determining the number of light-emitting sub-modules to trigger based on the number of triggered sensors; and
the determining the position of the fingerprint light spot based on the position of each triggered sensor comprises:
determining the trigger positions of the light-emitting sub-modules based on the positions of the respective triggered sensors, thereby determining the position of the fingerprint light spot.
9. The method of claim 1, further comprising:
if the touch area is greater than or equal to a preset area threshold, determining that the touch is a palm touch and executing a function corresponding to the palm touch; and
if the touch area is smaller than the preset area threshold, determining that the touch is a finger touch and executing a function corresponding to the finger touch.
10. The method of claim 9, further comprising:
determining a corresponding trigger event based on the touch action and the touch area, and executing the trigger event.
11. The method of claim 5, further comprising:
acquiring a matched input code according to the touch area, and performing unlocking.
12. The method of claim 11, wherein the acquiring a matched input code according to the touch area comprises:
determining a touch region based on the positions of the respective triggered sensors; and
acquiring a matched input code according to the touch region and the touch area.
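As a non-normative illustration of the method of claims 1-12 above, the area computation and the palm/finger decision can be sketched in Python. All function names are hypothetical, the grid-span reading of claim 3 is one plausible interpretation, and the 400 mm² threshold is a made-up example value, not one stated in this disclosure:

```python
def compute_touch_area(triggered, sensor_w, sensor_h):
    """Estimate the touch area from triggered sensor coordinates.

    triggered: list of (x, y) grid coordinates of triggered sensors
    sensor_w, sensor_h: pre-stored per-sensor physical size (e.g. in mm)
    """
    if not triggered:
        return 0.0
    xs = [x for x, _ in triggered]
    ys = [y for _, y in triggered]
    # Maximum number of sensor positions spanned along each axis,
    # per the x-/y-direction counts of claim 3 (one plausible reading).
    nx = max(xs) - min(xs) + 1
    ny = max(ys) - min(ys) + 1
    return (nx * sensor_w) * (ny * sensor_h)


def classify_touch(area, area_threshold=400.0):
    """Palm if area >= threshold, otherwise finger (claim 9).

    The 400 mm^2 default threshold is purely illustrative.
    """
    return "palm" if area >= area_threshold else "finger"
```

For example, three triggered sensors at (2, 3), (3, 3), (2, 4) on a grid of 1.5 mm sensors span 2 positions in each direction, giving an area of 2 × 1.5 × 2 × 1.5 = 9.0 mm², which the threshold classifies as a finger touch.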
13. A touch area acquisition device, applied to a smart device, wherein the smart device comprises a touch module and the touch module comprises a plurality of sensors, the device comprising:
a quantity determining unit, configured to determine the number of triggered sensors on the touch module after a user performs a touch action; and
an area determining unit, configured to determine a touch area based on the number of triggered sensors and a pre-stored sensor size.
14. The apparatus of claim 13, wherein a triggered sensor is a sensor that generates a touch signal based on the touch action, the touch signal having a signal magnitude greater than a preset threshold.
15. The apparatus of claim 13, wherein the number of triggered sensors comprises a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction in a preset two-dimensional coordinate system.
16. The apparatus of claim 13, wherein the quantity determining unit comprises a sensor determining subunit, a transmission subunit, and a number determining subunit;
the sensor determining subunit is configured to determine the triggered sensors on the touch module after a user performs a touch action;
the transmission subunit is configured to receive, from the touch module, the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system; and
the number determining subunit is configured to determine the number of triggered sensors based on the number of arrangement numbers or coordinates received.
17. The apparatus of claim 16, further comprising:
a position determining unit, configured to determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system.
18. The apparatus of claim 17, wherein the smart device further comprises an under-screen fingerprint module;
the area determining unit specifically comprises:
an area acquisition subunit, configured to determine the area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, wherein the fingerprint light spot is used to expose the under-screen fingerprint module; and
a fingerprint acquisition subunit, configured to perform under-screen fingerprint acquisition.
19. The apparatus of claim 18, further comprising:
a position determining subunit, configured to determine the position of the fingerprint light spot based on the position of each triggered sensor before the fingerprint acquisition subunit performs acquisition.
20. The apparatus of claim 19, wherein the under-screen fingerprint module comprises a number of light-emitting sub-modules, and the fingerprint light spot is generated by the light-emitting sub-modules;
the area acquisition subunit is configured to determine the number of light-emitting sub-modules to trigger based on the number of triggered sensors; and
the position determining subunit is configured to determine the trigger positions of the light-emitting sub-modules based on the positions of the respective triggered sensors, thereby adjusting the position of the fingerprint light spot.
21. The apparatus of claim 13, further comprising:
an execution unit, configured to determine that the touch is a palm touch when the touch area is greater than or equal to a preset area threshold and execute a function corresponding to the palm touch, and to determine that the touch is a finger touch when the touch area is smaller than the preset area threshold and execute a function corresponding to the finger touch.
22. The apparatus of claim 21, further comprising:
the execution unit is further configured to determine a corresponding trigger event based on the touch action and the touch area, and to execute the trigger event.
23. The apparatus of claim 17, further comprising:
an unlocking unit, configured to acquire a matched input code according to the touch area and perform unlocking.
24. The apparatus of claim 23, wherein the unlocking unit comprises:
a touch region determining subunit, configured to determine a touch region based on the positions of the respective triggered sensors;
an input code acquisition subunit, configured to acquire a matched input code according to the touch region and the touch area; and
an unlocking subunit, configured to perform unlocking based on the input code.
25. A smart device, comprising a touch module, a processor, and a memory, wherein:
the memory is used for storing a computer program executed by the processor;
the touch module is used for acquiring touch signals and informing the processor of the number of sensors triggered by the touch signals according to the touch signals;
the processor is configured to: determining a number of triggered sensors on the touch module, determining a touch area based on the number of triggered sensors and a pre-stored sensor size.
26. The smart device of claim 25, further comprising an under-screen fingerprint module;
the touch module is further configured to notify the under-screen fingerprint module, according to the touch signals, of the number of sensors triggered; and
the under-screen fingerprint module is configured to determine the area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size and to perform under-screen fingerprint acquisition, wherein the fingerprint light spot is used to expose the under-screen fingerprint module.
27. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5 and 9 to 12.
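The fingerprint-spot sizing of claims 6-8 and 26 can likewise be sketched: the number of light-emitting sub-modules to trigger scales with the number of triggered sensors, and the spot is placed from the triggered sensor positions. The 2:1 sensor-to-sub-module ratio, the centroid placement, and all names here are illustrative assumptions, not details given in this disclosure:

```python
def fingerprint_spot(triggered, sensors_per_led=2):
    """Map triggered touch sensors to the light-emitting sub-modules
    forming the fingerprint light spot (claims 6-8, illustrative only).

    triggered: non-empty list of (x, y) triggered sensor coordinates
    Returns (number of sub-modules to light, spot centre position).
    """
    # Number of sub-modules to trigger, proportional to triggered sensors;
    # the 2:1 ratio is a hypothetical mapping.
    n_leds = max(1, len(triggered) // sensors_per_led)
    # Centre the spot on the centroid of the triggered sensors.
    cx = sum(x for x, _ in triggered) / len(triggered)
    cy = sum(y for _, y in triggered) / len(triggered)
    return n_leds, (cx, cy)
```

For instance, three triggered sensors at (0, 0), (2, 0), (1, 3) would light one sub-module centred at (1.0, 1.0); five triggered sensors would light two.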
CN201910101447.0A 2019-01-31 2019-01-31 Touch area acquisition method and device, intelligent equipment and storage medium Active CN111507144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910101447.0A CN111507144B (en) 2019-01-31 2019-01-31 Touch area acquisition method and device, intelligent equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111507144A true CN111507144A (en) 2020-08-07
CN111507144B CN111507144B (en) 2024-02-09

Family

ID=71874014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910101447.0A Active CN111507144B (en) 2019-01-31 2019-01-31 Touch area acquisition method and device, intelligent equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111507144B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257777A (en) * 2013-05-14 2013-08-21 深圳市汇顶科技股份有限公司 Control method of state of touch screen, touch screen and portable touch terminal
US20130215060A1 (en) * 2010-10-13 2013-08-22 Nec Casio Mobile Communications Ltd. Mobile terminal apparatus and display method for touch panel in mobile terminal apparatus
JP2014119931A (en) * 2012-12-14 2014-06-30 Fujitsu Ltd Electronic apparatus and control program for touch sensor
CN104090723A (en) * 2014-07-22 2014-10-08 Tcl通讯(宁波)有限公司 Unlocking method and system based on touch area of touch screen
CN105094443A (en) * 2015-08-21 2015-11-25 深圳市汇顶科技股份有限公司 Touch pressure detecting device and method
US20160170530A1 (en) * 2014-12-10 2016-06-16 G2Touch Co., Ltd Touch detection sensor structure of capacitive type touch screen panel
CN108037889A (en) * 2017-12-07 2018-05-15 广东美晨通讯有限公司 Location regulation method, terminal and the computer-readable recording medium of virtual key
CN108496184A (en) * 2018-04-17 2018-09-04 深圳市汇顶科技股份有限公司 Image processing method, device and electronic equipment
CN108563980A (en) * 2017-12-29 2018-09-21 昆山国显光电有限公司 A kind of method for controlling fingerprint identification and touch panel, touch control display apparatus


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286386A (en) * 2020-10-27 2021-01-29 北京小米移动软件有限公司 Processing method, system, device and medium for touch screen point stabilization
CN112286386B (en) * 2020-10-27 2024-04-02 北京小米移动软件有限公司 Touch screen point stabilization processing method, system, device and medium

Also Published As

Publication number Publication date
CN111507144B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
EP2975838B1 (en) Image shooting parameter adjustment method and device
KR102045232B1 (en) Gesture identification methods, devices, programs and recording media
RU2661289C1 (en) Method and device for preventing random taping of terminal with touch screen
CN105260117B (en) Application program control method and device
EP3179711B1 (en) Method and apparatus for preventing photograph from being shielded
RU2640733C2 (en) Method and device for application management
US10095377B2 (en) Method and device for displaying icon badge
EP3163884A1 (en) Image acquiring method and apparatus, computer program and recording medium
JP2020530631A (en) Interaction locating methods, systems, storage media, and smart devices
EP3173970A1 (en) Image processing method and apparatus
RU2652452C2 (en) Device and method for network status information representation
EP3196751A1 (en) Gesture identification method and device
US11222223B2 (en) Collecting fingerprints
CN109639896A (en) Block object detecting method, device, storage medium and mobile terminal
EP3232301B1 (en) Mobile terminal and virtual key processing method
CN105208284B (en) Shoot based reminding method and device
CN112905136A (en) Screen projection control method and device and storage medium
CN106599858B (en) Fingerprint identification method and device and electronic equipment
EP3246805B1 (en) Gesture operation response method and device
CN106445122B (en) Equipment state control method and device
CN111507144B (en) Touch area acquisition method and device, intelligent equipment and storage medium
CN106020694B (en) Electronic equipment, and method and device for dynamically adjusting selected area
CN108491834B (en) Fingerprint identification method and device
US10613622B2 (en) Method and device for controlling virtual reality helmets
US20220147244A1 (en) Method and device for touch operation, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant