CN111507144B - Touch area acquisition method and device, intelligent equipment and storage medium - Google Patents


Info

Publication number: CN111507144B
Application number: CN201910101447.0A
Authority: CN (China)
Prior art keywords: touch, triggered, sensors, area, determining
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111507144A
Inventor: 杨坤 (Yang Kun)
Current Assignee: Beijing Xiaomi Mobile Software Co Ltd
Original Assignee: Beijing Xiaomi Mobile Software Co Ltd
Application filed by Beijing Xiaomi Mobile Software Co Ltd; priority to CN201910101447.0A
Publication of CN111507144A, followed by grant and publication of CN111507144B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/13Sensors therefor
    • G06V40/1324Sensors therefor by using geometrical optics, e.g. using prisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers

Abstract

The disclosure provides a touch area acquisition method and device, an intelligent device, and a storage medium, wherein the method comprises the following steps: after a user performs a touch action, determining the number of triggered sensors on the touch module; and determining the touch area based on the number of triggered sensors and a pre-stored sensor size. Only one parameter, the number of triggered sensors, is obtained from the touch module, which reduces the amount of information the touch module must transmit; and because the area is calculated from the sensor count and the sensor size, the acquired touch area is more accurate.

Description

Touch area acquisition method and device, intelligent equipment and storage medium
Technical Field
The disclosure relates to the technical field of network communication, and in particular to a touch area acquisition method and device, an intelligent device, and a storage medium.
Background
With the continuous development of network communication technology, intelligent devices with touch screens are widely used. For an intelligent device with a touch screen, when the touch module on the device detects a user's touch action on the touch screen, it acquires the corresponding touch coordinates and reports them to a processor, so that the processor can respond according to the touch coordinates. In some touch screen touch schemes, the touch module acquires both touch coordinates and touch area information, so that the processor can respond according to the touch coordinates and touch area information or pass them on to other modules; in the related art, the touch area is usually represented by a resolution or by a default proportional range (such as the range 0-1).
However, in carrying out the invention, the inventors found the following problems. First, when a resolution (e.g., 120×120) is used as the touch area information, the touch module reports each touch coordinate together with the touch area information; if several touch points exist at the same time, the total data volume becomes large and the reading load of the processor increases. Second, using a default proportional range (such as the range 0-1) as the touch area information can reduce the reported data volume, but the value does not correspond exactly to the actual physical touch area and cannot represent it, so the acquired area data performs poorly in some touch screen touch schemes.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a touch area acquisition method, a device, an intelligent apparatus, and a storage medium.
According to a first aspect of embodiments of the present disclosure, a touch area obtaining method is provided and applied to an intelligent device, where the intelligent device includes a touch module, and the touch module includes a plurality of sensors; the method comprises the following steps:
after a user performs a touch action, determining the number of triggered sensors on the touch module;
and determining the touch area based on the number of triggered sensors and the pre-stored sensor size.
Optionally, the triggered sensor is a sensor that generates a touch signal based on the touch action, and the signal quantity of the touch signal is greater than a preset threshold value.
Optionally, the number of triggered sensors includes a maximum number of triggered sensors in an x-axis direction and a maximum number of triggered sensors in a y-axis direction in a preset two-dimensional coordinate system.
Optionally, determining the number of triggered sensors on the touch module includes:
receiving the arrangement number of the triggered sensor or the coordinates in a preset two-dimensional coordinate system transmitted by the touch module;
the number of triggered sensors is determined based on the arrangement number or the number of coordinates.
Optionally, the method further comprises:
and determining the position of each triggered sensor based on the arrangement number of the triggered sensors or coordinates in a preset two-dimensional coordinate system.
Optionally, the intelligent device further comprises an off-screen fingerprint module;
the determining a touch area based on the number of triggered sensors and a pre-stored sensor size specifically includes:
Determining the area of a fingerprint light spot based on the number of the triggered sensors and the pre-stored sensor size, and collecting the fingerprints under the screen; the fingerprint light spots are used for exposing the under-screen fingerprint module.
Optionally, before the off-screen fingerprint acquisition, the method further comprises:
the location of the fingerprint spot is determined based on the location of the respective triggered sensor.
Optionally, the under-screen fingerprint module comprises a plurality of light emitting sub-modules; the fingerprint light spots are generated based on the light emitting submodule;
the determining the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size comprises:
determining the triggering number of the light emitting sub-modules based on the number of the triggered sensors.
The determining the location of the fingerprint spot based on the locations of the respective triggered sensors comprises:
and determining the triggering position of the luminous submodule based on the positions of the triggered sensors, so as to adjust the position of the fingerprint light spot.
Optionally, the method further comprises:
if the touch area is greater than or equal to a preset area threshold, determining a palm touch and executing the function corresponding to palm touch;
and if the touch area is smaller than the preset area threshold, determining a finger touch and executing the function corresponding to finger touch.
Optionally, the method further comprises:
and determining a corresponding trigger event based on the touch action and the touch area, and executing the trigger event.
Optionally, the method further comprises:
and acquiring a matched input code according to the touch area, and unlocking based on it.
Optionally, the acquiring the matched input code according to the touch area includes:
determining a touch region based on the position of each triggered sensor;
and acquiring a matched input code according to the touch region and the touch area.
According to a second aspect of embodiments of the present disclosure, there is provided a touch area acquisition apparatus applied to an intelligent device, where the intelligent device includes a touch module, and the touch module includes a plurality of sensors; the device comprises:
the number determining unit is used for determining the number of the triggered sensors on the touch module after the user executes the touch action;
and the area determining unit is used for determining the touch area based on the number of the triggered sensors and the pre-stored sensor size.
Optionally, the triggered sensor is a sensor that generates a touch signal based on the touch action, and the signal quantity of the touch signal is greater than a preset threshold value.
Optionally, the number of triggered sensors includes a maximum number of triggered sensors in an x-axis direction and a maximum number of triggered sensors in a y-axis direction in a preset two-dimensional coordinate system.
Optionally, the number determining unit includes a sensor determining subunit, a transmitting subunit, and a number determining subunit;
the sensor determining subunit is used for determining a triggered sensor on the touch module after the user executes the touch action;
the transmission subunit is used for receiving the arrangement number of the triggered sensor or the coordinates in a preset two-dimensional coordinate system transmitted by the touch module;
the number determination subunit is configured to determine the number of triggered sensors based on the arrangement number or the number of coordinates.
Optionally, the device further comprises:
a position determining unit, configured to determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system.
Optionally, the intelligent device further comprises an off-screen fingerprint module;
The area determining unit specifically includes:
an area acquisition subunit, configured to determine an area of a fingerprint light spot based on the number of triggered sensors and a pre-stored sensor size; the fingerprint light spots are used for exposing the under-screen fingerprint module; and
and the fingerprint acquisition subunit is used for carrying out off-screen fingerprint acquisition.
Optionally, the area determining unit further comprises, before the fingerprint acquisition subunit:
a position determining subunit, configured to determine the position of the fingerprint light spot based on the positions of the respective triggered sensors.
Optionally, the under-screen fingerprint module comprises a plurality of light emitting sub-modules; the fingerprint light spots are generated based on the light emitting submodule;
the area acquisition subunit includes:
determining the triggering number of the light emitting sub-modules based on the number of the triggered sensors.
The position determination subunit comprises:
and determining the triggering position of the luminous submodule based on the positions of the triggered sensors, so as to adjust the position of the fingerprint light spot.
Optionally, the device further comprises:
an execution unit, configured to determine a palm touch when the touch area is greater than or equal to a preset area threshold and execute the function corresponding to palm touch, and to determine a finger touch when the touch area is smaller than the preset area threshold and execute the function corresponding to finger touch.
Optionally, the execution unit is further configured to determine a corresponding trigger event based on the touch action and the touch area, and to execute the trigger event.
Optionally, the device further comprises:
an unlocking unit, configured to acquire a matched input code according to the touch area and unlock based on the input code.
Optionally, the unlocking unit includes:
a touch region determination subunit, configured to determine a touch region based on the positions of the respective triggered sensors;
an input code acquisition subunit, configured to acquire a matched input code according to the touch region and the touch area;
and the unlocking subunit is used for unlocking based on the input code.
According to a third aspect of embodiments of the present disclosure, there is provided an intelligent device, including a touch module, a processor, and a memory;
the memory is used for storing a computer program executed by the processor;
the touch module is used for collecting touch signals and notifying the processor, according to the touch signals, of the number of sensors triggered;
the processor is configured to: determining the number of triggered sensors on the touch module, and determining the touch area based on the number of triggered sensors and the pre-stored sensor size.
Optionally, the intelligent device further comprises an off-screen fingerprint module;
the touch module is also used for informing the under-screen fingerprint module of the number of sensors triggered by the touch signal according to the touch signal;
the under-screen fingerprint module is used for determining the area of fingerprint light spots based on the number of the triggered sensors and the pre-stored sensor size and collecting under-screen fingerprints; the fingerprint light spots are used for exposing the under-screen fingerprint module.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
after the user executes a touch action, the intelligent device determines the number of triggered sensors on the touch module and then determines the touch area based on that number and the pre-stored sensor size. Only one parameter, the number of triggered sensors, is obtained from the touch module, which reduces the amount of information the touch module must transmit; and calculating the area from the sensor count and sensor size makes the acquired touch area more accurate.
Further, the triggered sensors are sensors with the signal quantity of the touch signal generated based on the touch action being larger than a preset threshold value, and the number of the triggered sensors can be determined more accurately by screening the sensors generating the touch signal, so that the detection precision is improved.
Further, the present disclosure provides a data transmission format in which the touch module can perform data transmission in a data transmission format including two parameters of a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction, thereby reducing an information transmission amount.
Further, the disclosure provides another data transmission format, and the touch module may further transmit the arrangement number of the triggered sensors or the coordinates in a preset two-dimensional coordinate system, so that the intelligent device determines the number of the triggered sensors based on the number of the arrangement number or the coordinates, thereby reducing the information transmission amount.
Further, the present disclosure may also determine the location of each triggered sensor based on the arrangement number or coordinates of the triggered sensors.
Further, the intelligent device of the present disclosure further includes an under-screen fingerprint module that can perform fingerprint collection based on the determined area of the fingerprint light spot, so that the collected fingerprint has high recognition accuracy.
Furthermore, the under-screen fingerprint module can also determine the positions of fingerprint spots based on the positions of the triggered sensors, so that fingerprint identification accuracy is further improved.
Further, the under-screen fingerprint module can determine the triggering quantity of the luminous submodules based on the quantity of the triggered sensors, so that the area of the fingerprint light spots can be simply, conveniently and quickly determined; and determining the triggering position of the luminous submodule based on the positions of the triggered sensors, so that the position of the fingerprint light spot is simply, conveniently and quickly determined.
Further, different functions can be executed according to how the touch area compares with a preset area threshold, providing more operation choices and improving the user's convenience when using the intelligent device.
Further, the touch control method and the touch control device can execute different trigger events based on the touch actions and the touch areas, provide more operation choices and improve convenience when a user uses the intelligent device.
Furthermore, the intelligent device can match an input code based on the touch area, realizing the unlocking function without exposing the password and improving unlocking security.
Furthermore, the intelligent device can acquire the matched input code according to both the touch region and the touch area, providing more ways of acquiring input codes and improving unlocking security.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flowchart of a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the touch module determining coordinates of a triggered sensor according to an exemplary embodiment of the present disclosure;
FIG. 3 is another flow chart of a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a third flow chart of a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of touch area partitioning according to an exemplary embodiment of the present disclosure;
FIG. 6 is a fourth flowchart of a touch area acquisition method according to an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of a touch area acquisition device according to an exemplary embodiment of the present disclosure;
Fig. 8 is a block diagram of a configuration of a smart device according to an exemplary embodiment of the present disclosure.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when," "upon," or "in response to determining," depending on the context.
In order to solve the problem that the amount of information transmitted is large due to the fact that the touch area is represented by the resolution in the related art and the problem that the actual touch area cannot be represented due to the fact that the touch area is represented by the preset proportion range, the embodiment of the disclosure provides a touch area obtaining method. The touch area obtaining method of the embodiment of the disclosure can be applied to an intelligent device, wherein the intelligent device refers to any device, instrument or machine with calculation processing capability, and for example, the intelligent device can be a smart phone, an intelligent tablet, a computer, an electronic whiteboard or a device with a touch screen, such as an intelligent home product. In the disclosure, the intelligent device comprises a touch module, the touch module comprises a plurality of sensors, the sensors are identical in size and are uniformly distributed in sequence to form a sensor matrix, and the formed sensor matrix corresponds to the touch screen, so that the touch module can respond based on touch actions of a user on the touch screen.
As shown in fig. 1, fig. 1 is a flowchart of a touch area acquisition method according to an exemplary embodiment of the disclosure, where the smart device includes a processor, and the method may be executed by the processor and includes:
in step S101, after the user performs a touch action, the number of sensors triggered on the touch module is determined.
In step S102, a touch area is determined based on the number of triggered sensors and a pre-stored sensor size.
In step S101, when the user performs a touch action such as a click or a slide on the touch screen of the intelligent device, the sensors on the touch module generate touch signals based on the touch action. The touch module collects the touch signals and, according to them, notifies the processor of the intelligent device of the number of triggered sensors; that is, the touch module obtains the number of triggered sensors from the number of generated touch signals, so that the processor determines the number of triggered sensors after the user performs the touch action. In this step only one parameter, the number of triggered sensors, is obtained from the touch module, which markedly reduces the amount of information the touch module must transmit. The specific type of sensor is not limited, provided it can generate a touch signal based on a user's touch action; the type can be set according to the actual situation.
In one possible implementation, the touch module may transmit data to the processor of the intelligent device in a data transmission format containing the maximum number of sensors triggered in the x-axis direction and the maximum number triggered in the y-axis direction; that is, the number of triggered sensors determined by the processor includes the maximum number of sensors triggered in the x-axis direction and in the y-axis direction of a preset two-dimensional coordinate system. In one example, the touch module transmits data to the intelligent device in the format (1; 2). The concrete meaning of each parameter may be set according to the actual situation: in one case the first parameter represents the x-axis direction and the second parameter the y-axis direction; in another case the first parameter represents the y-axis direction and the second the x-axis direction. In the first case, 1 sensor is triggered in the x-axis direction and 2 sensors in the y-axis direction; in the second case, 1 sensor is triggered in the y-axis direction and 2 sensors in the x-axis direction. Either way, the number of triggered sensors finally acquired is 2 (1×2=2). Transmitting information in this way markedly reduces the amount of information transmitted.
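As an illustration of this first format, the count could be recovered from such a message as follows (the function name and the "(max_x; max_y)" string encoding are assumptions for this sketch; the disclosure only specifies that the two maximum counts are transmitted):

```python
def triggered_count_from_max_format(message: str) -> int:
    """Parse a '(max_x; max_y)' style message from the touch module and
    return the number of triggered sensors as the product of the two counts.

    The exact wire format is an illustrative assumption.
    """
    # Strip the surrounding parentheses, then split on the separator.
    first, second = message.strip("() ").split(";")
    # Whichever axis each parameter denotes, the product is the same,
    # as the description notes (1 x 2 == 2 x 1).
    return int(first) * int(second)
```

For the example message "(1; 2)" this yields 2 triggered sensors.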
In another possible implementation, the touch module may transmit the arrangement numbers of the triggered sensors, or their coordinates in a preset two-dimensional coordinate system, to the processor of the intelligent device, so that the processor can determine the number of triggered sensors from the count of arrangement numbers or coordinates. Referring to fig. 2, take coordinate transmission as an example: a (7×5) sensor matrix is formed on the touch module, and the sensors triggered by the user's touch action are shown in black. With the first parameter representing the abscissa and the second the ordinate, the touch module transmits the coordinate information (4,3; 4,4; 5,4; 6,4) to the intelligent device, indicating that 4 sensors are triggered, so the intelligent device can determine the number of triggered sensors from the number of coordinates.
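A minimal sketch of this second format (the list-of-tuples representation is an assumption; the disclosure only requires that per-sensor arrangement numbers or coordinates be enumerated):

```python
def count_and_positions(coords):
    """Given the coordinate list reported by the touch module, return the
    number of triggered sensors together with their positions.

    Counting the coordinates gives the sensor count; the same list also
    supplies each triggered sensor's position for later steps.
    """
    positions = list(coords)
    return len(positions), positions

# The example from fig. 2: four triggered sensors in a 7x5 matrix.
n, pos = count_and_positions([(4, 3), (4, 4), (5, 4), (6, 4)])
```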
In one embodiment, when a user's touch part contacts the touch screen, the sensors corresponding to the touch position generate touch signals. There is, however, one case to guard against: sensors near the edge of the touch part may generate weak touch signals merely because the touch part comes close to them, even though it never actually contacts them. In the present disclosure, the touch module therefore compares the signal quantity of each generated touch signal with a preset threshold and treats only sensors whose signal quantity exceeds the threshold as triggered. In this way the number of actually triggered sensors can be determined accurately, improving detection precision. The disclosure places no limitation on the concrete value of the preset threshold, which can be set according to the actual situation.
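This screening step can be sketched as follows (the mapping from sensor position to signal quantity and the concrete threshold value are assumptions for illustration):

```python
def triggered_sensors(signal_levels, threshold):
    """Treat as triggered only those sensors whose touch-signal quantity
    exceeds the preset threshold, discarding the weak signals produced by
    sensors the touch part merely came close to.
    """
    return [pos for pos, level in signal_levels.items() if level > threshold]

# A hypothetical frame: two firm contacts and one weak edge signal.
levels = {(4, 3): 0.9, (4, 4): 0.8, (3, 3): 0.1}
actually_triggered = triggered_sensors(levels, threshold=0.5)
```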
In step S102, after determining the number of triggered sensors, the intelligent device determines the touch area based on the number of triggered sensors and the pre-stored sensor size. The sensor size depends on the type of sensor selected, which the disclosure does not limit. For example, with a single sensor size of 1 mm × 1 mm and a determined count of 2 triggered sensors, the touch area is 2 mm². This not only clarifies the actual touch area but also makes the acquired touch area more accurate.
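The calculation in this step reduces to multiplying the triggered-sensor count by the area of a single sensor; a sketch, using the 1 mm × 1 mm example size from above (real dimensions would be pre-stored on the device):

```python
def touch_area_mm2(num_triggered, sensor_w_mm=1.0, sensor_h_mm=1.0):
    """Touch area = (number of triggered sensors) x (single-sensor area).

    The default 1 mm x 1 mm size mirrors the example in the description.
    """
    return num_triggered * sensor_w_mm * sensor_h_mm
```

With 2 triggered sensors of 1 mm × 1 mm each, the touch area is 2 mm².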
It can be seen that the intelligent device determines the number of triggered sensors on the touch module after the user executes a touch action and then determines the touch area based on that number and the pre-stored sensor size. Only one parameter, the number of triggered sensors, is obtained from the touch module, which both reduces the amount of information the touch module must transmit and, through the calculation from the sensor count and sensor size, makes the acquired touch area more accurate.
Fig. 3 is another flowchart of a touch area acquisition method according to an exemplary embodiment of the present disclosure. Fig. 3 depicts an aspect of the present disclosure in more detail with respect to fig. 1.
As shown in fig. 3, the method can be applied to the intelligent device and executed by a processor of the intelligent device, and comprises the following steps:
in step S301, after the user performs a touch action, determining the number of triggered sensors on the touch module; similar to step S101 in fig. 1, a detailed description is omitted here.
In step S302, determining a touch area based on the number of triggered sensors and a pre-stored sensor size; similar to step S102 in fig. 1, a detailed description is omitted here.
In step S303, a different function is executed according to whether the touch area is greater than or equal to a preset area threshold. Specifically, after the intelligent device obtains the touch area corresponding to the user's touch action, it judges whether the area is greater than or equal to the preset area threshold: if so, the touch is determined to be a palm touch and the function corresponding to palm touch is executed; if not, the touch is determined to be a finger touch and the function corresponding to finger touch is executed.
In an embodiment, the intelligent device may further determine a corresponding trigger event based on both the touch action and the touch area. The touch action may be of several types, such as a click, a slide, or a long press, and the intelligent device determines the trigger event from the type of action and the size of the area. The concrete function bound to each trigger event is not limited and can be set according to the actual situation. For example, when the touch area is greater than or equal to the preset area threshold, a palm touch is determined, and the trigger event is then chosen by the action type: a palm slide performs a screen-capture function, and a palm long press performs a sharing function. When the touch area is smaller than the preset area threshold, a finger touch is determined, and likewise: a finger slide performs a page-turning function, and a finger click performs a link-jump function. This embodiment provides more operation choices and improves the user's convenience when using the intelligent device.
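The dispatch just described can be sketched as a lookup on (palm-or-finger, action type); the threshold value, action names, and bound functions are illustrative assumptions taken from the examples above:

```python
PALM_AREA_THRESHOLD_MM2 = 100.0  # assumed value; set per device in practice

def trigger_event(area_mm2, action):
    """Map touch area and action type to a trigger event, following the
    example bindings: palm slide -> screenshot, palm long press -> share,
    finger slide -> page turn, finger click -> link jump.
    """
    is_palm = area_mm2 >= PALM_AREA_THRESHOLD_MM2
    bindings = {
        (True, "slide"): "screenshot",
        (True, "long_press"): "share",
        (False, "slide"): "page_turn",
        (False, "click"): "link_jump",
    }
    return bindings.get((is_palm, action), "no_op")
```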
It can be seen that, after the touch area is obtained, corresponding trigger events can be set based on its size, providing the user with more convenient operation choices, reducing redundant key presses, and improving convenience when using the smart device.
Fig. 4 is another flowchart of a touch area acquisition method according to an exemplary embodiment of the present disclosure. Fig. 4 depicts an aspect of the present disclosure in more detail with respect to fig. 1.
As shown in fig. 4, the method may be applied to the smart device and executed by a processor of the smart device, and includes the steps of:
In step S401, after the user performs a touch action, the number of triggered sensors on the touch module is determined; this is similar to step S101 in fig. 1, and a detailed description is omitted here.
In step S402, a touch area is determined based on the number of triggered sensors and a pre-stored sensor size; this is similar to step S102 in fig. 1, and a detailed description is omitted here.
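The area computation referenced here (and in step S102) reduces to multiplying the triggered-sensor count by the pre-stored per-sensor size. A minimal sketch, assuming a uniform sensor matrix; the size value is an assumed example:

```python
# Minimal sketch of the area computation (steps S102/S402): touch area =
# triggered-sensor count x pre-stored per-sensor size.
SENSOR_SIZE = 4.0  # assumed pre-stored size of one sensor cell (e.g. mm^2)

def touch_area(triggered: int, sensor_size: float = SENSOR_SIZE) -> float:
    return triggered * sensor_size
```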
In step S403, a matched input code is acquired according to the touch area, and unlocking is performed.
In step S403, the user may preset the correspondence between touch areas and input codes; of course, this correspondence is not limited in any way. In one example, a touch area within a first area range represents the input code "1", a touch area within a second area range represents the input code "2", and so on, where the area ranges may be set according to the actual situation, e.g., the first area range is smaller than the second. After the unlocking module obtains the touch area, it obtains the matched input code according to the touch area and the preset correspondence; for example, a touch area within the first area range yields the input code "1". The unlocking module then unlocks based on the obtained input code, so the unlocking function is realized without exposing the password, improving unlocking security.
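The area-to-code matching can be sketched as a range lookup. The range bounds below are assumed examples, following only the constraint that the first area range is smaller than the second:

```python
# Illustrative sketch of step S403: map the touch area to an input code via a
# user-preset correspondence of area ranges to codes. Range bounds are assumed.
AREA_CODE_RANGES = [
    ((0.0, 50.0), "1"),    # first area range -> input code "1"
    ((50.0, 150.0), "2"),  # second area range -> input code "2"
]

def match_input_code(touch_area):
    for (low, high), code in AREA_CODE_RANGES:
        if low <= touch_area < high:
            return code
    return None  # no matching range -> no code, unlocking does not proceed
```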
In an embodiment, the processor of the smart device may further determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system, and then determine the touched region from those positions. As shown in fig. 5, the processor may divide the touch screen into several regions in advance, based on a program setting or a user selection, where the number of regions may be set according to the actual situation. The correspondence between regions, area ranges, and input codes may be preset by the user; in one example, the first area range represents the input code "1" and the second represents "2", region 1 represents "A" and region 2 represents "B", so that a touch in region 1 whose area falls in the first area range yields the input code "1A", and a touch in region 2 whose area falls in the second area range yields "2B". After the touched region and the touch area are determined, the matched input code is obtained according to the preset correspondence, and unlocking is performed based on it. This embodiment provides more ways of obtaining input codes, enabling personalized password settings and further improving unlocking security.
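The region determination and the combined region-plus-area lookup can be sketched as follows. The region bounds and code table are hypothetical examples, not values from the disclosure:

```python
# Hypothetical sketch: locate which pre-divided screen region contains the
# triggered sensors, then combine the region with the area range to form the
# input code (e.g. region 1 + first range -> "1A").
REGION_BOUNDS = {1: (0, 0, 5, 5), 2: (6, 0, 10, 5)}  # idx -> (x0, y0, x1, y1)

def touched_region(positions):
    """Return the first region containing every triggered-sensor coordinate."""
    for idx, (x0, y0, x1, y1) in REGION_BOUNDS.items():
        if all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in positions):
            return idx
    return None

# Assumed user-preset correspondence: (region, area range) -> input code.
REGION_AREA_CODES = {(1, "first"): "1A", (2, "second"): "2B"}
```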
In one possible implementation, when the touch module transmits data to the processor of the smart device in a data transmission format of a maximum number of sensors triggered in an x-axis direction and a maximum number of sensors triggered in a y-axis direction, the processor may acquire a touch shape and a touch angle with respect to a designated reference position based on the maximum number of sensors triggered in the x-axis direction and the maximum number of sensors triggered in the y-axis direction.
It will be appreciated that the disclosure does not limit how the reference position for the touch angle is chosen; for example, the touch angle may be described relative to the upper-left corner of the touch screen, or relative to the upper edge of the touch screen. Nor does the disclosure limit the rules for determining the touch shape and touch angle. For example, if the maximum number of triggered sensors in the x-axis direction equals the maximum number in the y-axis direction, the touch shape may be determined to be square or round; if they are unequal, the touch shape may be determined to be rectangular or elliptical. Further, it may be set that if the maximum number of triggered sensors in the x-axis direction is greater than or equal to that in the y-axis direction, the touch angle is toward the x-axis direction relative to the designated reference position, and if it is less, the touch angle is toward the y-axis direction. In one example, the touch module transmits data in the format (1; 2). If the first parameter indicates the x-axis direction and the second the y-axis direction, then 1 sensor is triggered along x and 2 along y, the touch shape may be determined to be rectangular, and the touch angle is toward the y-axis direction. If the first parameter indicates the y-axis direction and the second the x-axis direction, then 1 sensor is triggered along y and 2 along x, the touch shape may again be determined to be rectangular, and the touch angle is toward the x-axis direction.
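The rules above can be sketched directly from the two maxima. The shape and direction labels are illustrative; as the text notes, the actual determination rules are not limited:

```python
# Assumed sketch of interpreting the (x_max; y_max) transmission format:
# equal maxima suggest a square or round contact, unequal maxima suggest a
# rectangular or elliptical one, and the longer axis gives the touch-angle
# direction relative to the designated reference position.
def interpret_counts(x_max: int, y_max: int):
    shape = "square_or_round" if x_max == y_max else "rect_or_ellipse"
    toward = "x" if x_max >= y_max else "y"
    return shape, toward
```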
In another possible implementation, the smart device may determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system, determine the touch shape from those positions, and determine the touch angle relative to the designated reference position from the relationships between them. Referring to fig. 2, taking coordinate transmission as an example, the touch module transmits the coordinate information (4,3; 4,4; 5,4; 6,4) to the smart device. The smart device can then determine the positions of the 4 triggered sensors, determine the touch shape more accurately from those positions, and determine the touch angle relative to the designated reference position from their relationships. For example, if the designated reference position is the upper edge of the touch screen, the touch angle may be found to be inclined by a certain angle toward the y-axis (vertical) direction relative to that edge.
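A coordinate-based variant can be sketched from the Fig. 2 example. The bounding-box inference rule below is an assumption used for illustration:

```python
# Hypothetical sketch: from the triggered sensors' coordinates (the Fig. 2
# example (4,3),(4,4),(5,4),(6,4)), derive the bounding-box spans and use
# them to estimate the touch shape and tilt direction.
def shape_from_coords(coords):
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    w = max(xs) - min(xs) + 1  # sensor cells spanned along x
    h = max(ys) - min(ys) + 1  # sensor cells spanned along y
    shape = "square_or_round" if w == h else "rect_or_ellipse"
    toward = "x" if w >= h else "y"
    return shape, toward, (w, h)
```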
In an embodiment, the user may also preset a correspondence between the touch area, the touch shape, the touch angle, and the input codes; of course, this correspondence is not limited in any way. In one example, the first area range represents the input code "1" and the second represents "2"; an elliptical shape represents the input code "A" and a circular shape represents "B"; a horizontal touch angle represents the input code "a" and a vertical one represents "b"; and so on. After determining the touch area, the touch shape, and the touch angle relative to the designated reference position, the processor of the smart device obtains the matched input code according to all three and unlocks. This embodiment provides more ways of obtaining input codes and further improves unlocking security.
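One touch action thus yields three attributes whose codes can be composed into a single multi-character input code. The code tables are assumed examples (the shape and angle codes here are disambiguated for illustration):

```python
# Illustrative composition of an input code from (area range, shape, angle).
# The tables are hypothetical user presets, not fixed by the disclosure.
SHAPE_CODES = {"ellipse": "A", "circle": "B"}
ANGLE_CODES = {"horizontal": "a", "vertical": "b"}

def compose_input_code(area_code: str, shape: str, angle: str) -> str:
    return area_code + SHAPE_CODES[shape] + ANGLE_CODES[angle]
```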
In an embodiment, since the user may preset the correspondence between the touch area, the touch shape, the touch angle, and the input codes in advance, after the user performs a single touch action the processor can determine the matched input code from the determined touch area, touch shape, touch angle, and that correspondence. In this embodiment, only one touch action is required to obtain an input code with higher security and perform the unlocking operation, which is convenient for the user and further improves unlocking security.
It can be seen that, with the technical feature of obtaining a matched input code based on the touch area, the unlocking function can be realized without exposing the password, improving unlocking security.
In addition, in the related art, an under-screen fingerprint acquisition technique generally exposes the under-screen fingerprint module for a fixed time under the irradiation of a fingerprint light spot of fixed brightness, so as to supply the light energy required by the under-screen fingerprint sensor for fingerprint acquisition. However, when the contact area between the user and the touch screen is small, the fixed brightness of the fingerprint light spot causes over-exposure, so that the acquired fingerprint image is too bright to be recognized. In view of the foregoing, the present application provides a touch area acquisition method; referring to fig. 6, fig. 6 is a flowchart of a touch area acquisition method according to an exemplary embodiment of the present disclosure. The method of this embodiment can be applied to a smart device that further includes an under-screen fingerprint module, and accordingly the method can be executed by the under-screen fingerprint module and includes the following steps:
In step S601, after the user performs a touch action, the number of sensors triggered on the touch module is determined.
In step S602, the area of a fingerprint light spot is determined based on the number of triggered sensors and a pre-stored sensor size, and under-screen fingerprint acquisition is performed; the fingerprint light spot is used to expose the under-screen fingerprint module.
In step S601, when the user performs a touch action such as a click or a slide on the touch screen of the smart device, the sensors on the touch module generate touch signals based on the action. The touch module collects the touch signals and notifies the under-screen fingerprint module of the number of sensors triggered, which it obtains from the number of touch signals generated, so that the under-screen fingerprint module determines the number of triggered sensors on the touch module after the user performs the touch action. In this step, only the single parameter of the triggered-sensor count is obtained from the touch module, which significantly reduces the amount of information the touch module must transmit.
In one possible implementation, the touch module may transmit data to the under-screen fingerprint module of the smart device in a data transmission format of (maximum number of sensors triggered in the x-axis direction; maximum number of sensors triggered in the y-axis direction); that is, the number of triggered sensors determined by the under-screen fingerprint module comprises the maximum numbers triggered in the x-axis and y-axis directions of a preset two-dimensional coordinate system. In one example, the touch module transmits data in the format (1; 2). It may be understood that the specific meaning of each parameter can be set according to the actual situation: in one case, the first parameter represents the x-axis direction and the second the y-axis direction; in another case, the first parameter represents the y-axis direction and the second the x-axis direction. In the first case, 1 sensor is triggered along x and 2 along y; in the second case, 1 sensor is triggered along y and 2 along x; in both cases the number of triggered sensors finally obtained is 2 (1 × 2 = 2). Transmitting the information in this way can significantly reduce the amount of data transmitted.
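Recovering the triggered-sensor count from this format follows the worked example, where (1; 2) yields 1 × 2 = 2 regardless of which parameter is the x-axis:

```python
# Sketch of recovering the triggered-sensor count from the (x_max; y_max)
# format, matching the text's example (1; 2) -> 1 x 2 = 2. The product is
# order-independent, so the axis meaning of each parameter does not matter.
def triggered_count(x_max: int, y_max: int) -> int:
    return x_max * y_max
```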
In another possible implementation, the touch module may transmit the arrangement numbers of the triggered sensors, or their coordinates in a preset two-dimensional coordinate system, to the under-screen fingerprint module of the smart device, so that the under-screen fingerprint module can determine the number of triggered sensors by counting the arrangement numbers or coordinates.
In one embodiment, when a touch part of the user contacts the touch screen, the sensors corresponding to the touched position generate touch signals. There is, however, one case to handle: sensors near the edge of the touch part, although not actually contacted, may generate weak touch signals merely because the touch part is close to them. In the present disclosure, the touch module therefore compares the signal amount of each generated touch signal with a preset threshold, and treats only the sensors whose signal amount exceeds the threshold as triggered, so that the number of triggered sensors can be determined accurately and the detection precision is improved. The specific value of the preset threshold is not limited in any way and can be set according to the actual situation.
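The thresholding can be sketched as a simple filter over per-sensor signal amounts. The threshold value is an assumption, since the disclosure fixes no specific value:

```python
# Sketch of the thresholding described above: only sensors whose touch-signal
# amount exceeds a preset threshold count as triggered, filtering out the
# weak proximity signals from sensors the touch part never actually contacted.
SIGNAL_THRESHOLD = 0.5  # assumed preset threshold

def count_triggered(signal_amounts, threshold=SIGNAL_THRESHOLD):
    return sum(1 for s in signal_amounts if s > threshold)
```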
In step S602, the under-screen fingerprint module determines the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size; the position of the light spot may be a preset position. The under-screen fingerprint module then performs fingerprint acquisition based on the spot's area and preset position. This prevents the fingerprint light spot from being too large or too small, which would affect the final recognition result, ensures moderate exposure of the under-screen fingerprint module, and makes the ridge lines of the acquired fingerprint image clearer during under-screen fingerprint acquisition, which helps improve fingerprint recognition precision.
In an embodiment, the under-screen fingerprint module may further determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in the preset two-dimensional coordinate system, and then determine the position of the fingerprint light spot from those positions. The under-screen fingerprint module can then perform under-screen fingerprint acquisition based on the determined area and position of the light spot, adjusting the light spot accurately to the current touch condition, so that the acquired fingerprint image is clearer and more accurate and the recognition precision is further improved.
In an embodiment, the under-screen fingerprint module includes a plurality of light-emitting sub-modules, and the fingerprint light spot is generated by them. The under-screen fingerprint module can therefore determine, from the number of triggered sensors, how many light-emitting sub-modules to trigger, and thereby the area of the generated light spot; it can also quickly determine which light-emitting sub-modules to trigger from the positions of the triggered sensors, and thereby the position of the light spot. This embodiment provides a simple and fast way of determining the area and position of the fingerprint light spot, which helps improve the response speed of the under-screen fingerprint module. It should be noted that the light-emitting sub-modules are not limited in this application; for example, they may be light-emitting diodes.
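The scaling from triggered sensors to light-emitting sub-modules can be sketched as below. The one-sub-module-per-sensor ratio and per-sub-module spot area are illustrative assumptions; the disclosure fixes neither:

```python
# Hypothetical sketch: the light spot is produced by light-emitting
# sub-modules (e.g. LEDs), so its area follows from how many sub-modules
# are triggered, which in turn scales with the triggered-sensor count.
def submodules_to_trigger(triggered_sensors: int, per_sensor: int = 1) -> int:
    return triggered_sensors * per_sensor

def spot_area(submodule_count: int, per_submodule_area: float = 1.5) -> float:
    return submodule_count * per_submodule_area
```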
It can be seen that the area of the fingerprint light spot is determined from the number of triggered sensors received from the touch module and the pre-stored sensor size, so the under-screen fingerprint module is exposed within a suitable range, a fingerprint image with clear ridge lines is acquired, and fingerprint recognition precision is improved.
Corresponding to the foregoing embodiments of the application function implementing method, the present disclosure further provides a touch area acquisition apparatus. The apparatus may be applied to the smart device described above; in this disclosure, the smart device includes a touch module comprising a plurality of sensors, the sensors being identical in size and uniformly arranged to form a sensor matrix corresponding to the touch screen, so that the touch module can respond to the user's touch actions on the touch screen.
As shown in fig. 7, fig. 7 is a block diagram of a touch area acquisition device according to an exemplary embodiment of the present disclosure. The apparatus 700 includes:
the number determining unit 701 is configured to determine the number of triggered sensors on the touch module after the user performs the touch action.
An area determining unit 702 is configured to determine a touch area based on the number of triggered sensors and a pre-stored sensor size.
In an embodiment, the triggered sensor is a sensor that generates a touch signal based on the touch action with a signal amount greater than a preset threshold value.
In an embodiment, the number of triggered sensors includes a maximum number of triggered sensors in an x-axis direction and a maximum number of triggered sensors in a y-axis direction in a preset two-dimensional coordinate system.
In an embodiment, the number determining unit 701 includes a sensor determining subunit, a transmitting subunit, and a number determining subunit.
The sensor determining subunit is used for determining the triggered sensor on the touch module after the user executes the touch action.
The transmission subunit is configured to receive an arrangement number of the triggered sensor or a coordinate in a preset two-dimensional coordinate system transmitted by the touch module.
The number determination subunit is configured to determine the number of triggered sensors based on the arrangement number or the number of coordinates.
In an embodiment, further comprising:
and the position determining unit is used for determining the position of each triggered sensor based on the arrangement number of the triggered sensors or the coordinates in a preset two-dimensional coordinate system.
In an embodiment, the smart device further comprises an under-screen fingerprint module.
The area determining unit 702 specifically includes:
An area acquisition subunit, configured to determine the area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, the fingerprint light spot being used to expose the under-screen fingerprint module; and
a fingerprint acquisition subunit, configured to perform under-screen fingerprint acquisition.
In an embodiment, the apparatus further comprises, before the fingerprint acquisition subunit:
and a position determining subunit, configured to determine a position of the fingerprint light spot based on the positions of the respective triggered sensors.
In an embodiment, the under-screen fingerprint module comprises a plurality of light-emitting sub-modules; the fingerprint light spot is generated based on the light-emitting sub-modules.
The area acquisition subunit is further configured to determine the triggering number of the light-emitting sub-modules based on the number of triggered sensors.
The position determining subunit is further configured to determine the triggering positions of the light-emitting sub-modules based on the positions of the triggered sensors, so as to adjust the position of the fingerprint light spot.
In an embodiment, further comprising:
the execution unit is configured to determine a palm touch when the touch area is greater than or equal to a preset area threshold and execute the function corresponding to the palm touch; and to determine a finger touch when the touch area is less than the preset area threshold and execute the function corresponding to the finger touch.
In an embodiment, further comprising:
the execution unit is further configured to determine a corresponding trigger event based on the touch action and the touch area, and execute the trigger event.
In an embodiment, further comprising:
and the unlocking unit is used for acquiring the matched input code according to the touch area and performing unlocking based on the input code.
In an embodiment, the unlocking unit includes:
and the touch area determining subunit is used for determining the touch area based on the positions of the triggered sensors.
And the input code acquisition subunit is used for acquiring the matched input codes according to the touch area and the touch area.
And the unlocking subunit is used for unlocking based on the input code.
The implementation process of the functions and roles of each unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the apparatus embodiments, since they essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units.
Corresponding to the embodiments of the application function implementation method, the disclosure further provides a smart device comprising a touch module, a processor, and a memory.
The memory is used for storing a computer program executed by the processor.
The touch module is configured to collect touch signals and, according to the touch signals, notify the processor of the number of triggered sensors.
The processor is configured to: determining the number of triggered sensors on the touch module, and determining the touch area based on the number of triggered sensors and the pre-stored sensor size.
In an embodiment, the smart device further comprises an under-screen fingerprint module.
The touch module is further configured to notify the under-screen fingerprint module, according to the touch signals, of the number of triggered sensors.
The under-screen fingerprint module is configured to determine the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and to perform under-screen fingerprint acquisition; the fingerprint light spot is used to expose the under-screen fingerprint module. It should be noted that, for the other programs stored in the memory 703, reference may be made to the foregoing description of the method flow, which is not repeated here; the processor 702 is further configured to execute those other programs stored in the memory 703.
As shown in fig. 8, fig. 8 is another block diagram of a structure of a smart device according to an exemplary embodiment of the present disclosure. The smart device 800 may include one or more of the following components: a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, an input/output (I/O) interface 806, a sensor component 807, and a communication component 808.
The processing component 801 generally controls overall operation of the smart device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 801 may include one or more processors 809 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 801 may include one or more modules that facilitate interactions between the processing component 801 and other components. For example, processing component 801 may include multimedia modules to facilitate interactions between multimedia component 804 and processing component 801.
The memory 802 is configured to store various types of data to support operations at the smart device 800. Examples of such data include instructions for any application or method operating on the smart device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 802 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 803 provides power to the various components of the smart device 800. The power components 803 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the smart device 800.
The multimedia component 804 includes a screen between the smart device 800 and the user that provides an output interface. The screen may display corresponding contents, and the screen may include a Touch Panel (TP) implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 804 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the smart device 800 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities. In some embodiments, the multimedia component 804 may include an off-screen fingerprint module, which may include one or more sensors, may perform off-screen fingerprint acquisition based on a user's touch actions, to perform corresponding operations, and so on.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a Microphone (MIC) configured to receive external audio signals when the smart device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 802 or transmitted via the communication component 808. In some embodiments, the audio component 805 further comprises a speaker for outputting audio signals.
The I/O interface 806 provides an interface between the processing component 801 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor assembly 807 includes one or more sensors for providing status assessment of various aspects of the smart device 800. For example, sensor assembly 807 may detect the on/off state of smart device 800, the relative positioning of components, such as the display and keypad of smart device 800, the change in position of smart device 800 or a component of smart device 800, the presence or absence of user contact with smart device 800, the orientation or acceleration/deceleration of smart device 800, and the change in temperature of smart device 800. The sensor assembly 807 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 807 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 807 may also include acceleration sensors, gyroscopic sensors, magnetic sensors, pressure sensors, temperature sensors, photoelectric sensors, or GPS sensors.
The communication component 808 is configured to facilitate communication between the smart device 800 and other devices in a wired or wireless manner. The smart device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR (5G New Radio), or a combination thereof. In one exemplary embodiment, the communication component 808 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 808 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the smart device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The implementation process of the functions and roles of each component in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The above-described apparatus embodiments are merely illustrative, wherein the components illustrated as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Corresponding to the foregoing method embodiments, the present disclosure further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of the smart device, implements the steps of the touch area acquisition method described in any of the foregoing embodiments.
The present disclosure may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: Phase-change Random Access Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present disclosure.

Claims (25)

1. A touch area acquisition method, applied to a smart device, wherein the smart device comprises a touch module and the touch module comprises a plurality of sensors, the method comprising:
after a user performs a touch action, determining the number of triggered sensors on the touch module;
determining a touch area based on the number of triggered sensors and a pre-stored sensor size;
acquiring a touch shape and a touch angle relative to a specified reference position based on the number of triggered sensors, wherein the number of triggered sensors comprises a maximum number of triggered sensors in an x-axis direction and a maximum number of triggered sensors in a y-axis direction in a preset two-dimensional coordinate system; and
acquiring a matched input code according to the touch area, the touch shape, the touch angle, and a preset correspondence, and performing unlocking, wherein the correspondence is a correspondence among the touch area, the touch shape, the touch angle, and the input code.
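Purely as an illustration of the flow recited above (not the patented implementation), the sketch below derives a touch area from the triggered-sensor count and a pre-stored sensor size, estimates a shape and an angle from the maximum x/y spans of the triggered coordinates, and looks the result up in an assumed correspondence table. The sensor size, the shape/angle heuristics, and the table contents are all illustrative assumptions:

```python
# Illustrative sketch of the claimed unlock flow; all constants are assumptions.
import math

SENSOR_SIZE_MM2 = 0.25  # assumed pre-stored per-sensor area

def touch_features(triggered):
    """triggered: list of (x, y) sensor coordinates in the preset grid."""
    count = len(triggered)
    area = count * SENSOR_SIZE_MM2                  # area = count x sensor size
    xs = [p[0] for p in triggered]
    ys = [p[1] for p in triggered]
    max_x = max(xs) - min(xs) + 1                   # max triggered span on the x-axis
    max_y = max(ys) - min(ys) + 1                   # max triggered span on the y-axis
    shape = "round" if abs(max_x - max_y) <= 1 else "oblong"
    angle = math.degrees(math.atan2(max_y, max_x))  # angle vs. an x-axis reference
    return area, shape, round(angle)

# Assumed correspondence: (area, shape, angle) -> input code
CODES = {(2.25, "round", 45): "1234"}

def match_code(triggered):
    # Returns the matched input code, or None when no correspondence exists.
    return CODES.get(touch_features(triggered))
```

For example, a 3x3 block of triggered sensors yields an area of 2.25, a "round" shape, and a 45-degree angle under these assumptions.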
2. The method of claim 1, wherein a triggered sensor is a sensor that generates, based on the touch action, a touch signal whose magnitude is greater than a preset threshold.
3. The method of claim 1, wherein determining the number of triggered sensors on the touch module comprises:
receiving, from the touch module, the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system; and
determining the number of triggered sensors based on the number of arrangement numbers or coordinates.
4. The method of claim 3, further comprising:
determining the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in the preset two-dimensional coordinate system.
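The counting and positioning steps of claims 3 and 4 can be sketched as follows; this is a hypothetical illustration in which the touch module reports either arrangement numbers or (x, y) coordinates, and the grid width used to map an arrangement number to a position is an assumed parameter:

```python
# Hypothetical sketch: recover count and positions from the module's report.
GRID_W = 8  # assumed number of sensors per row in the preset grid

def positions_from_report(report):
    """report: list of arrangement numbers (ints) or (x, y) coordinate tuples."""
    if report and isinstance(report[0], int):
        # Arrangement number -> (x, y) in the preset two-dimensional coordinate system.
        positions = [(n % GRID_W, n // GRID_W) for n in report]
    else:
        positions = list(report)
    # The number of triggered sensors equals the number of reported entries.
    return len(positions), positions
```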
5. The method of claim 4, wherein the smart device further comprises an under-screen fingerprint module; and
determining the touch area based on the number of triggered sensors and the pre-stored sensor size comprises:
determining an area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and performing under-screen fingerprint acquisition, wherein the fingerprint light spot is used to expose the under-screen fingerprint module.
6. The method of claim 5, further comprising, prior to performing the under-screen fingerprint acquisition:
determining a position of the fingerprint light spot based on the positions of the respective triggered sensors.
7. The method of claim 6, wherein the under-screen fingerprint module comprises a number of light-emitting sub-modules, and the fingerprint light spot is generated by the light-emitting sub-modules;
determining the area of the fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size comprises: determining a trigger number of the light-emitting sub-modules based on the number of triggered sensors; and
determining the position of the fingerprint light spot based on the positions of the respective triggered sensors comprises: determining trigger positions of the light-emitting sub-modules based on the positions of the respective triggered sensors, thereby determining the position of the fingerprint light spot.
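The light-spot sizing and positioning of claims 5 through 7 can be illustrated with a minimal sketch: the number of light-emitting sub-modules to drive scales with the triggered-sensor count, and the spot centre follows the triggered positions. The sensor-to-submodule ratio and the centroid placement are assumptions, not the patented method:

```python
# Hedged sketch: plan the fingerprint light spot from the triggered sensors.
SUBMODULES_PER_SENSOR = 4  # assumption: 4 light-emitting sub-modules per sensor

def spot_plan(triggered_positions):
    """Return (number of sub-modules to light, spot centre) for the light spot."""
    # Spot area follows the triggered-sensor count (claim 7, trigger number).
    n = len(triggered_positions) * SUBMODULES_PER_SENSOR
    # Spot position follows the triggered positions (claim 7, trigger position);
    # here the centroid stands in for the actual placement rule.
    cx = sum(x for x, _ in triggered_positions) / len(triggered_positions)
    cy = sum(y for _, y in triggered_positions) / len(triggered_positions)
    return n, (cx, cy)
```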
8. The method of claim 1, further comprising:
if the touch area is greater than or equal to a preset area threshold, determining that the touch is a palm touch, and executing a function corresponding to the palm touch; and
if the touch area is smaller than the preset area threshold, determining that the touch is a finger touch, and executing a function corresponding to the finger touch.
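The palm-versus-finger branch above reduces to a single threshold comparison; the sketch below shows it with an assumed threshold value and hypothetical handler callbacks:

```python
# Minimal sketch of the claim-8 classification; threshold is an assumption.
AREA_THRESHOLD_MM2 = 100.0  # assumed preset area threshold

def dispatch(touch_area_mm2, on_palm, on_finger):
    if touch_area_mm2 >= AREA_THRESHOLD_MM2:
        return on_palm()    # palm touch: execute the palm-touch function
    return on_finger()      # finger touch: execute the finger-touch function
```

A caller would pass the computed touch area together with the two functions to run, e.g. a mute gesture for a palm and an unlock attempt for a finger.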
9. The method as recited in claim 8, further comprising:
determining a corresponding trigger event based on the touch action and the touch area, and executing the trigger event.
10. The method of claim 4, further comprising:
acquiring a matched input code according to the touch area, and performing unlocking.
11. The method of claim 10, wherein acquiring the matched input code according to the touch area comprises:
determining a touch region based on the positions of the respective triggered sensors; and
acquiring the matched input code according to the touch area and the touch region.
12. A touch area acquisition apparatus, applied to a smart device, wherein the smart device comprises a touch module and the touch module comprises a plurality of sensors, the apparatus comprising:
a number determining unit, configured to determine the number of triggered sensors on the touch module after a user performs a touch action;
an area determining unit, configured to determine a touch area based on the number of triggered sensors and a pre-stored sensor size; and
an unlocking unit, configured to acquire a touch shape and a touch angle relative to a specified reference position based on the number of triggered sensors, wherein the number of triggered sensors comprises a maximum number of triggered sensors in an x-axis direction and a maximum number of triggered sensors in a y-axis direction in a preset two-dimensional coordinate system; and to acquire a matched input code according to the touch area, the touch shape, the touch angle, and a preset correspondence, and perform unlocking, wherein the correspondence is a correspondence among the touch area, the touch shape, the touch angle, and the input code.
13. The apparatus of claim 12, wherein a triggered sensor is a sensor that generates, based on the touch action, a touch signal whose magnitude is greater than a preset threshold.
14. The apparatus of claim 12, wherein the number determining unit comprises a sensor determining subunit, a transmission subunit, and a number determining subunit, wherein:
the sensor determining subunit is configured to determine the triggered sensors on the touch module after the user performs the touch action;
the transmission subunit is configured to receive, from the touch module, the arrangement numbers of the triggered sensors or their coordinates in a preset two-dimensional coordinate system; and
the number determining subunit is configured to determine the number of triggered sensors based on the number of arrangement numbers or coordinates.
15. The apparatus of claim 14, further comprising:
a position determining unit, configured to determine the position of each triggered sensor based on the arrangement numbers of the triggered sensors or their coordinates in the preset two-dimensional coordinate system.
16. The apparatus of claim 15, wherein the smart device further comprises an under-screen fingerprint module, and the area determining unit comprises:
an area acquisition subunit, configured to determine an area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, wherein the fingerprint light spot is used to expose the under-screen fingerprint module; and
a fingerprint acquisition subunit, configured to perform under-screen fingerprint acquisition.
17. The apparatus of claim 16, further comprising:
a position determining subunit, configured to determine, prior to the under-screen fingerprint acquisition, a position of the fingerprint light spot based on the positions of the respective triggered sensors.
18. The apparatus of claim 17, wherein the under-screen fingerprint module comprises a number of light-emitting sub-modules, and the fingerprint light spot is generated by the light-emitting sub-modules;
the area acquisition subunit is configured to determine a trigger number of the light-emitting sub-modules based on the number of triggered sensors; and
the position determining subunit is configured to determine trigger positions of the light-emitting sub-modules based on the positions of the respective triggered sensors, thereby adjusting the position of the fingerprint light spot.
19. The apparatus of claim 12, further comprising:
an execution unit, configured to determine a palm touch when the touch area is greater than or equal to a preset area threshold and execute a function corresponding to the palm touch, and to determine a finger touch when the touch area is smaller than the preset area threshold and execute a function corresponding to the finger touch.
20. The apparatus of claim 19, wherein:
the execution unit is further configured to determine a corresponding trigger event based on the touch action and the touch area, and execute the trigger event.
21. The apparatus of claim 15, further comprising:
an unlocking unit, configured to acquire a matched input code according to the touch area and perform unlocking.
22. The apparatus of claim 21, wherein the unlocking unit comprises:
a touch region determining subunit, configured to determine a touch region based on the positions of the respective triggered sensors;
an input code acquisition subunit, configured to acquire a matched input code according to the touch area and the touch region; and
an unlocking subunit, configured to perform unlocking based on the input code.
23. A smart device, comprising a touch module, a processor, and a memory, wherein:
the memory is configured to store a computer program executed by the processor;
the touch module is configured to collect touch signals and, according to the touch signals, notify the processor of the number of sensors triggered by the touch signals; and
the processor is configured to: determine the number of triggered sensors on the touch module; determine a touch area based on the number of triggered sensors and a pre-stored sensor size; acquire a touch shape and a touch angle relative to a specified reference position based on the number of triggered sensors, wherein the number of triggered sensors comprises a maximum number of triggered sensors in an x-axis direction and a maximum number of triggered sensors in a y-axis direction in a preset two-dimensional coordinate system; and acquire a matched input code according to the touch area, the touch shape, the touch angle, and a preset correspondence, and perform unlocking, wherein the correspondence is a correspondence among the touch area, the touch shape, the touch angle, and the input code.
24. The smart device of claim 23, further comprising an under-screen fingerprint module, wherein:
the touch module is further configured to notify the under-screen fingerprint module, according to the touch signals, of the number of sensors triggered by the touch signals; and
the under-screen fingerprint module is configured to determine an area of a fingerprint light spot based on the number of triggered sensors and the pre-stored sensor size, and to perform under-screen fingerprint acquisition, wherein the fingerprint light spot is used to expose the under-screen fingerprint module.
25. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1-4 and 8-11.
CN201910101447.0A 2019-01-31 2019-01-31 Touch area acquisition method and device, intelligent equipment and storage medium Active CN111507144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910101447.0A CN111507144B (en) 2019-01-31 2019-01-31 Touch area acquisition method and device, intelligent equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111507144A CN111507144A (en) 2020-08-07
CN111507144B (en) 2024-02-09

Family

ID=71874014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910101447.0A Active CN111507144B (en) 2019-01-31 2019-01-31 Touch area acquisition method and device, intelligent equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111507144B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286386B (en) * 2020-10-27 2024-04-02 北京小米移动软件有限公司 Touch screen point stabilization processing method, system, device and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257777A (en) * 2013-05-14 2013-08-21 深圳市汇顶科技股份有限公司 Control method of state of touch screen, touch screen and portable touch terminal
JP2014119931A (en) * 2012-12-14 2014-06-30 Fujitsu Ltd Electronic apparatus and control program for touch sensor
CN104090723A (en) * 2014-07-22 2014-10-08 Tcl通讯(宁波)有限公司 Unlocking method and system based on touch area of touch screen
CN105094443A (en) * 2015-08-21 2015-11-25 深圳市汇顶科技股份有限公司 Touch pressure detecting device and method
CN108037889A (en) * 2017-12-07 2018-05-15 广东美晨通讯有限公司 Location regulation method, terminal and the computer-readable recording medium of virtual key
CN108496184A (en) * 2018-04-17 2018-09-04 深圳市汇顶科技股份有限公司 Image processing method, device and electronic equipment
CN108563980A (en) * 2017-12-29 2018-09-21 昆山国显光电有限公司 A kind of method for controlling fingerprint identification and touch panel, touch control display apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215060A1 (en) * 2010-10-13 2013-08-22 Nec Casio Mobile Communications Ltd. Mobile terminal apparatus and display method for touch panel in mobile terminal apparatus
KR101621864B1 (en) * 2014-12-10 2016-05-17 주식회사 지2터치 Touch detection sensor structure of capacitive type touch screen panel




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant