CN111552398B - Touch acceleration method and device for touch screen - Google Patents

Touch acceleration method and device for touch screen

Info

Publication number
CN111552398B
Authority
CN
China
Prior art keywords
touch
standing position
area
user
touch screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010228243.6A
Other languages
Chinese (zh)
Other versions
CN111552398A (en)
Inventor
莫善航
汪帅
左繁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Honghe Innovation Information Technology Co Ltd
Original Assignee
Shenzhen Honghe Innovation Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Honghe Innovation Information Technology Co Ltd
Priority to CN202010228243.6A
Publication of CN111552398A
Application granted
Publication of CN111552398B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment monitoring of user actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Abstract

The invention relates to the technical field of touch control, and in particular to a touch acceleration method and device for a touch screen. The method comprises the following steps: acquiring a current first standing position of a user; inputting the first standing position into a trained relative acceleration region partition model to obtain a relative acceleration region corresponding to the first standing position; and adjusting the computing capability of the touch IC in the relative acceleration region to a first value and the computing capability of the touch IC in the relative non-acceleration region to a second value, wherein the first value is greater than the second value and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region. By learning and analyzing the user's habits, the invention reallocates the touch computing capability of the touch screen, so that touch signals on large-size touch screen devices can be positioned quickly and at low cost.

Description

Touch acceleration method and device for touch screen
Technical Field
The invention relates to the technical field of touch control, in particular to a touch acceleration method and device for a touch screen.
Background
Currently, touch screen technologies are classified, according to how they locate touches on the screen surface, into acoustic pulse recognition, surface acoustic wave, capacitive, resistive, infrared optical recognition, and other technologies. A touch screen continuously locates touch signals on the screen at a certain scanning frequency, thereby enabling touch interaction between the user and the touch screen.
As users demand better display quality and larger display areas from touch screen devices, the size of touch screen devices on the market keeps increasing; correspondingly, the touch area of the touch screen also grows, and so do the touch computing requirements placed on it. To guarantee the scanning speed and positioning accuracy of touch signals on large-size touch screen devices such as large education screens and smart large screens, the related art mostly adopts faster, higher-performance hardware, which greatly increases the hardware cost of the touch screen.
Therefore, how to position touch signals on large-size touch screen devices quickly and at low cost is a technical problem that currently needs to be solved.
Disclosure of Invention
The invention aims to provide a touch acceleration method and device for a touch screen that can position touch signals on large-size touch screen devices quickly and at low cost.
The embodiment of the invention provides the following scheme:
in a first aspect, an embodiment of the present invention provides a touch acceleration method of a touch screen, where the method includes:
acquiring a current first standing position of a user;
inputting the first standing position into a trained relative acceleration region partition model to obtain a relative acceleration region corresponding to the first standing position;
adjusting the computing capability of the touch IC in the relative acceleration region to a first value, and adjusting the computing capability of the touch IC in the relative non-acceleration region to a second value; wherein the first value is greater than the second value; and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region.
Specifically, the adjustment may be performed as follows: set a first touch signal scanning frequency corresponding to the relative acceleration region to the first value, and set a second touch signal scanning frequency corresponding to the relative non-acceleration region to the second value.
In a possible embodiment, before the obtaining the current standing position of the user, the method further includes:
dividing a standing position area of the user operating the touch screen into a plurality of standing positions; wherein the plurality of standing positions includes the first standing position;
acquiring a historical touch area of the user performing historical touch operation in the touch screen at each of the plurality of standing positions;
and training the relative acceleration region partition model by using each standing position and the corresponding historical touch region thereof, so that the trained relative acceleration region partition model can determine the corresponding relative acceleration region of the user at each standing position.
In a possible embodiment, before the step of inputting the first standing position into the trained relative acceleration region partition model to obtain the relative acceleration region corresponding to the first standing position, the method further includes:
acquiring a real-time standing position of the user;
updating the first standing position using the real-time standing position.
In a possible embodiment, after the inputting the first standing position into the trained relative acceleration region partition model to obtain the relative acceleration region corresponding to the first standing position, the method further includes:
determining whether the user is using a slide in the relative acceleration region; the slide comprises a text area, a picture area, an edge area, and a blank area; the blank area is the area of the slide other than the text area, the picture area, and the edge area;
if yes, taking the display areas on the touch screen corresponding to the text area and the blank area as the relative acceleration region, so as to update the relative acceleration region.
In one possible embodiment, the obtaining the current first standing position of the user includes:
acquiring image data of the user currently standing in front of the touch screen;
calculating the relative position of the user and the touch screen in the image data;
and determining the current first standing position of the user according to the relative position.
In one possible embodiment, the acquiring a historical touch area of the user performing a historical touch operation in the touch screen at each of the plurality of standing positions includes:
acquiring video data of historical touch operation of the user in the touch screen at each standing position stored in a cloud server;
extracting a frame image of the video data;
determining a standing position of the user and a touch area for touch operation in each frame image;
and determining a historical touch area of the user performing historical touch operation in the touch screen at each standing position according to the standing position of the user and the touch area performing touch operation in each frame image.
In a second aspect, an embodiment of the present invention provides a touch acceleration device for a touch screen, where the device includes:
the first standing position acquisition module is used for acquiring the current first standing position of the user;
the relative acceleration region obtaining module is used for inputting the first standing position into a trained relative acceleration region partition model to obtain a relative acceleration region corresponding to the first standing position;
the computing capability adjusting module is used for adjusting the computing capability of the touch IC in the relative acceleration region to a first value and adjusting the computing capability of the touch IC in the relative non-acceleration region to a second value; wherein the first value is greater than the second value; and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region.
The adjustment may specifically be performed as follows: set a first touch signal scanning frequency corresponding to the relative acceleration region to the first value, and set a second touch signal scanning frequency corresponding to the relative non-acceleration region to the second value.
In one possible embodiment, the apparatus further comprises:
the standing position dividing module is used for dividing a standing position area of the touch screen controlled by the user into a plurality of standing positions; wherein the plurality of standing positions includes the first standing position;
a history touch area acquisition module, configured to acquire a history touch area of the user performing a history touch operation in the touch screen at each of the plurality of standing positions;
and the model training module is used for training the relative acceleration region partition model by using each standing position and the corresponding historical touch region thereof, so that the trained relative acceleration region partition model can determine the corresponding relative acceleration region of the user at each standing position.
In one possible embodiment, the apparatus further comprises:
the real-time standing position acquisition module is used for acquiring the real-time standing position of the user;
a first standing position updating module for updating the first standing position using the real-time standing position.
In one possible embodiment, the apparatus further comprises:
a first judgment module for judging whether the user is using a slide in the relative acceleration region; the slide comprises a text area, a picture area, an edge area, and a blank area; the blank area is the area of the slide other than the text area, the picture area, and the edge area;
and a relative acceleration region adding module for taking the display areas on the touch screen corresponding to the text area and the blank area as the relative acceleration region when the user uses a slide on the touch screen, so as to update the relative acceleration region.
In one possible embodiment, the first standing position acquisition module comprises:
the image data acquisition sub-module is used for acquiring image data of the user currently standing in front of the touch screen;
the relative position calculating sub-module is used for calculating the relative position of the user and the touch screen in the image data;
and the first standing position determining submodule is used for determining the current first standing position of the user according to the relative position.
In one possible embodiment, the historical touch region acquisition module includes:
the video data acquisition sub-module is used for acquiring video data of historical touch operation of the user in the touch screen at each standing position stored in the cloud server;
a frame image extraction sub-module for extracting a frame image of the video data;
the touch area determination submodule is used for determining the standing position of the user and the touch area for touch operation in each frame image;
and the historical touch area determination submodule is used for determining a historical touch area of the user for carrying out the historical touch operation in the touch screen at each standing position according to the standing position of the user in each frame image and the touch area for carrying out the touch operation.
In a third aspect, an embodiment of the present invention provides a touch screen device, including:
a memory for storing a computer program;
and a processor for executing the computer program to implement the steps of the touch acceleration method for a touch screen according to any implementation of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the touch acceleration method for a touch screen according to any implementation of the first aspect.
Compared with the prior art, the invention has the following advantages and beneficial effects:
After the first standing position where the user currently stands is detected, the trained relative acceleration region partition model is used to take the region the user frequently touches when standing at the first standing position as the relative acceleration region corresponding to the first standing position, and the computing capability of the touch screen in the relative acceleration region is increased relative to the relative non-acceleration region, thereby improving the scanning speed and positioning accuracy of touch signals in the relative acceleration region. By learning and analyzing the user's habits, the invention reallocates the touch computing capability of the touch screen and preferentially performs touch signal positioning and identification in the areas where the user frequently performs touch operations, so that touch signals on large-size touch screen devices can be positioned quickly and at low cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
Fig. 1 is a flowchart of a possible touch acceleration method embodiment of a touch screen according to an embodiment of the present invention;
FIG. 2 is a graph showing the relationship between the computing capability of a possible touch IC and its numerical value according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a possible touch acceleration device embodiment according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art based on the embodiments of the present invention are within the scope of protection of the embodiments of the present invention.
An embodiment of the present invention provides a possible touch acceleration method embodiment of a touch screen, please refer to fig. 1, fig. 1 is a flowchart of the method embodiment, and specifically includes steps 11 to 13.
Step 11, acquiring the current first standing position of the user.
Specifically, a pressure sensor array can be placed on the ground in front of the touch screen device, and the current standing position of the user can be obtained from the signals returned by the pressure sensor array.
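As an illustration only (not part of the patented method), the mapping from floor-sensor readings to a coarse standing position could look like the sketch below; the sensor layout, activation threshold, and position labels are assumptions.

```python
# Minimal sketch: map a row of floor pressure sensors in front of the screen
# to a coarse standing position. Threshold and labels are illustrative.
from typing import List, Optional

PRESSURE_THRESHOLD = 5.0  # arbitrary activation threshold (assumption)

def estimate_standing_position(pressures: List[float]) -> Optional[str]:
    """Return 'left' / 'middle' / 'right', or None if nobody is on the mat."""
    active = [i for i, p in enumerate(pressures) if p > PRESSURE_THRESHOLD]
    if not active:
        return None
    center = sum(active) / len(active)   # mean index of the pressed sensors
    third = len(pressures) / 3.0
    if center < third:
        return "left"
    if center < 2 * third:
        return "middle"
    return "right"
```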
And step 12, inputting the first standing position into a trained relative acceleration region partition model to obtain a relative acceleration region corresponding to the first standing position.
Specifically, the relative acceleration region partition model is a deep learning model, and the embodiment of the invention trains it on the user's habits of using the touch screen at each standing position. The trained relative acceleration region partition model can calculate and output the relative acceleration region corresponding to the first standing position according to the user's usage habits on the touch screen at that position. The relative acceleration region can be understood as the region of the touch screen, predicted by the trained relative acceleration region partition model, that the user is currently most likely to touch.
Step 13, adjusting the computing capability of the touch IC in the relative acceleration region to a first value, and adjusting the computing capability of the touch IC in the relative non-acceleration region to a second value.
Wherein the first value is greater than the second value, and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region.
Specifically, in this embodiment the computing capability of the touch IC is digitized and subdivided into six values in total. Fig. 2 shows the relationship between the computing capability of the touch IC and these values: the smaller the value, the smaller the corresponding computing capability, and the larger the value, the larger the corresponding computing capability.
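For illustration, the monotonic value-to-capability relationship of Fig. 2 could be expressed as a lookup table like the sketch below; the six concrete scan frequencies are invented for the example, since the patent does not specify them.

```python
# Illustrative only: larger value -> larger computing capability -> higher
# touch-signal scan frequency. The Hz figures are assumptions.
COMPUTE_LEVEL_TO_SCAN_HZ = {1: 30, 2: 60, 3: 90, 4: 120, 5: 180, 6: 240}

def scan_frequency_for_level(level: int) -> int:
    """Return the scan frequency associated with a computing-capability value."""
    if level not in COMPUTE_LEVEL_TO_SCAN_HZ:
        raise ValueError("computing capability value must be 1..6")
    return COMPUTE_LEVEL_TO_SCAN_HZ[level]
```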
To make the computing capability convenient to regulate, the invention also provides a preferred embodiment.
The specific scheme is as follows: set a first touch signal scanning frequency corresponding to the relative acceleration region to the first value, and set a second touch signal scanning frequency corresponding to the relative non-acceleration region to the second value.
Specifically, the touch signal scanning frequency is a basic touch parameter of the touch screen. The higher the scanning frequency, the faster the touch screen responds to touch signals, that is, the faster touch signals can be positioned and identified; correspondingly, however, a higher scanning frequency adds more workload to the touch IC per unit time and requires the touch IC to have higher computing capability.
Specifically, since the computing capability of a touch IC is fixed, the touch signal scanning frequency it can provide is usually also fixed. As the touch screen grows in size, the touch IC's scanning speed for touch signals keeps dropping if its computing capability is not increased, which affects the positioning and identification of touch signals on the touch screen. The embodiment of the invention divides the touch screen area into a relative acceleration region, which the current user is very likely to touch, and a relative non-acceleration region, which the current user is unlikely to touch, and then reallocates the computing capability of the touch IC: the touch signal scanning frequency of the relative acceleration region is raised and that of the relative non-acceleration region is lowered, which guarantees the speed and accuracy of touch signal positioning and identification without replacing the touch IC with a higher-performance one.
Specifically, the touch screen in this embodiment is provided with a standard touch signal scanning frequency, and when the relative acceleration area of the touch screen is not divided, the touch screen performs touch signal scanning within the full screen range at the standard touch signal scanning frequency.
In a preferred example, the first value is larger than the standard touch signal scanning frequency and the second value is smaller than it, so that the relative acceleration region is accelerated and the relative non-acceleration region is decelerated in hardware, and computing power is intelligently allocated to the areas the user frequently uses without changing the hardware.
Of course, in other possible examples, the first value may be set to a standard touch signal scanning frequency and the second value may be set to a value less than the standard touch signal scanning frequency.
Of course, in other possible examples, the first value and the second value may both be less than the standard touch signal scan frequency.
Specifically, when the touch screen device detects no touch signal in either the relative acceleration region or the relative non-acceleration region, the touch signal scanning frequency of the entire touch screen area may be reset to the standard touch signal scanning frequency, and the entire touch screen area scanned.
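A rough sketch of this reallocation and reset logic is given below, assuming a hypothetical touch IC driver that exposes a per-region set_scan_frequency() call; the interface and the numeric frequencies are assumptions, not a real driver API.

```python
from dataclasses import dataclass
from typing import Tuple

Rect = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in touch coordinates

@dataclass
class ScanPlan:
    accelerated: Rect        # relative acceleration region
    accelerated_hz: int      # first value, above the standard frequency
    non_accelerated_hz: int  # second value, below the standard frequency
    standard_hz: int = 120   # illustrative standard full-screen frequency

def apply_scan_plan(driver, screen: Rect, plan: ScanPlan, touch_active: bool) -> None:
    """Raise the scan frequency in the acceleration region, lower it elsewhere,
    and fall back to the standard frequency when no touches are detected."""
    if not touch_active:
        # No touch signal in either region: rescan the whole screen at the
        # standard touch signal scanning frequency.
        driver.set_scan_frequency(screen, plan.standard_hz)
        return
    driver.set_scan_frequency(screen, plan.non_accelerated_hz)        # second value
    driver.set_scan_frequency(plan.accelerated, plan.accelerated_hz)  # first value
```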
In a possible embodiment, the embodiment of the invention further provides a specific scheme for obtaining the trained relative acceleration region partition model.
Specifically, before the current standing position of the user is obtained, the method further comprises steps 1.1 to 1.3.
Step 1.1, dividing the standing area from which the user operates the touch screen into a plurality of standing positions; wherein the plurality of standing positions includes the first standing position.
Specifically, the standing positions from which the user operates the touch screen are classified in advance and the specific standing positions are determined, so that the data participating in model training and calculation can be unified. In one possible example, the plurality of standing positions may be a left position, a middle position, and a right position.
Step 1.2, acquiring a historical touch area of the user performing historical touch operation in the touch screen at each of the plurality of standing positions.
Specifically, in the embodiment of the present invention, the historical touch areas corresponding to different standing positions are recorded in advance, and as shown in table 1, a specific recording table is provided. According to the embodiment of the invention, the historical touch area of the user for performing the historical touch operation in the touch screen at each standing position can be obtained by reading the table.
TABLE 1
Sequence number | Standing position | Historical touch area coordinates            | Time
1               | Left position     | Rectangular region (x1, y1) to (x2, y2)      | 09:35:40
2               | Left position     | Rectangular region (x3, y3) to (x4, y4)      | 09:35:45
3               | Left position     | Rectangular region (x5, y5) to (x6, y6)      | 09:35:50
4               | Middle position   | Rectangular region (x7, y7) to (x8, y8)      | 09:35:55
5               | Middle position   | Rectangular region (x9, y9) to (x10, y10)    | 09:36:00
6               | Right position    | Rectangular region (x11, y11) to (x12, y12)  | 09:36:05
Step 1.3, training the relative acceleration region partition model using each standing position and its corresponding historical touch areas, so that the trained relative acceleration region partition model can determine the relative acceleration region corresponding to the user at each standing position.
Specifically, the relative acceleration region partition model may count, by occurrence count or duration, the hot-zone distribution within the touch screen area of the historical touch areas corresponding to each standing position, and determine the hot zones whose occurrence count or duration exceeds a certain value as the relative acceleration region.
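As a sketch under stated assumptions (the grid size, threshold, and record format are illustrative and not taken from the patent), the hot-zone statistic could be implemented roughly as follows.

```python
# Count how often each grid cell of the screen falls inside a historical touch
# rectangle for a given standing position; cells touched often enough form that
# position's relative acceleration region. GRID and MIN_COUNT are assumptions.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

Rect = Tuple[int, int, int, int]   # (x1, y1, x2, y2), screen coordinates
Cell = Tuple[int, int]             # (column, row) of a coarse grid

GRID = 64          # cell size in pixels
MIN_COUNT = 3      # minimum occurrence count for a cell to be "hot"

def hot_cells(history: List[Tuple[str, Rect]], position: str) -> Set[Cell]:
    """Return the grid cells touched at least MIN_COUNT times at `position`."""
    counts: Dict[Cell, int] = defaultdict(int)
    for pos, (x1, y1, x2, y2) in history:
        if pos != position:
            continue
        for cx in range(x1 // GRID, x2 // GRID + 1):
            for cy in range(y1 // GRID, y2 // GRID + 1):
                counts[(cx, cy)] += 1
    return {cell for cell, n in counts.items() if n >= MIN_COUNT}
```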
In a possible embodiment, when the user moves, the first standing position is updated in a timely manner according to the user's real-time standing position, so that the area contained in the relative acceleration region is updated and the user's interaction with the touch screen is facilitated.
Specifically, before the first standing position is input into the trained relative acceleration region partition model to obtain the relative acceleration region corresponding to the first standing position, the method further comprises steps 2.1 to 2.2.
Step 2.1, acquiring the real-time standing position of the user.
Specifically, the pressure sensor array can be placed on the ground in front of the touch screen device, and the real-time standing position of the user can be obtained according to the signals returned by the pressure sensor array.
Step 2.2, updating the first standing position using the real-time standing position.
Specifically, the updated first standing position may be used to redetermine the relative acceleration region to facilitate user interaction on the touch screen.
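For illustration, the real-time update of steps 2.1 to 2.2 could be sketched as below; get_realtime_position and partition_model are placeholders standing in for the sensor reading and the trained relative acceleration region partition model.

```python
# Sketch: when the detected standing position changes, update the first
# standing position and re-derive the relative acceleration region.
def refresh_region(first_position, get_realtime_position, partition_model, region):
    current = get_realtime_position()
    if current is not None and current != first_position:
        first_position = current                  # step 2.2: update the position
        region = partition_model(first_position)  # re-determine the region
    return first_position, region
```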
In a possible embodiment, the embodiment of the invention also considers that a user frequently performs touch operation in a text area and a blank area when using a slide, so that the specific composition of a relative acceleration area is optimized, and the user can more conveniently perform touch interaction by using a touch screen.
Specifically, after the first standing position is input into the trained relative acceleration region partition model to obtain the relative acceleration region corresponding to the first standing position, the method further comprises steps 3.1 to 3.2.
Step 3.1, judging whether the user is using a slide in the relative acceleration region; the slide comprises a text area, a picture area, an edge area, and a blank area, the blank area being the area of the slide other than the text area, the picture area, and the edge area.
Specifically, the embodiment of the invention can judge whether the user is using a slide on the touch screen by monitoring the background processes of the touch screen device, or by monitoring the active window of the touch screen device.
Specifically, common slides may include PPT, WPS presentations, and the like. The text area is an area of the slide containing textual information such as Chinese characters, English words, and numerical values; the picture area is an area of the slide into which a picture is inserted; the edge area is the area within a set distance of the slide border.
Step 3.2, if so, taking the display areas on the touch screen corresponding to the text areas and blank areas as the relative acceleration region, so as to update the relative acceleration region.
Specifically, when the user uses a slide presentation on the touch screen, this embodiment can further narrow the relative acceleration region and optimize its specific composition, so that the user can perform touch interaction on the touch screen more conveniently.
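A minimal sketch of steps 3.1 and 3.2 is given below; active_window_title is a hypothetical helper, the keyword list is an assumption, and the region is represented simply as a list of rectangles — none of this is an API of any real presentation software.

```python
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in screen coordinates
PRESENTATION_KEYWORDS = ("PowerPoint", "WPS Presentation", ".ppt", ".pptx")

def slide_in_use(active_window_title: str) -> bool:
    """Rough check mirroring the 'monitor the active window' idea (step 3.1)."""
    title = active_window_title.lower()
    return any(k.lower() in title for k in PRESENTATION_KEYWORDS)

def update_acceleration_region(active_window_title: str,
                               text_areas: List[Rect],
                               blank_areas: List[Rect],
                               current_region: List[Rect]) -> List[Rect]:
    """Step 3.2: if a slide is showing, the text and blank areas replace the
    current relative acceleration region; otherwise it is left unchanged."""
    if slide_in_use(active_window_title):
        return list(text_areas) + list(blank_areas)
    return current_region
```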
In a possible embodiment, the step of obtaining the current first standing position of the user includes step 4.1 to step 4.3.
Step 4.1, acquiring image data of the user currently standing in front of the touch screen.
Specifically, in the embodiment of the invention an image pickup device is arranged near the touch screen device, and when a user stands in front of the touch screen, the image pickup device can capture image data that includes the user.
Step 4.2, calculating the relative position of the user and the touch screen from the image data.
Specifically, the embodiment of the invention can infer the relative position of the user and the touch screen from the user's left-right position within the image data.
Step 4.3, determining the current first standing position of the user according to the relative position.
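For illustration only, and assuming the user has already been detected in the camera image as a bounding box whose horizontal axis is aligned with the screen, steps 4.2 and 4.3 could be sketched as follows.

```python
# Map the user's horizontal position in the camera frame to a coarse standing
# position. The bounding-box convention (x, y, w, h) is an assumption.
def standing_position_from_image(person_box, image_width: int) -> str:
    x, _, w, _ = person_box
    center_ratio = (x + w / 2.0) / float(image_width)  # 0.0 = far left of frame
    if center_ratio < 1.0 / 3.0:
        return "left"
    if center_ratio < 2.0 / 3.0:
        return "middle"
    return "right"
```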
In a possible embodiment, the acquiring a historical touch area of the user performing a historical touch operation in the touch screen at each of the plurality of standing positions includes steps 5.1 to 5.4.
Step 5.1, acquiring video data, stored on a cloud server, of the user's historical touch operations on the touch screen at each standing position.
Specifically, in the embodiment of the invention an image pickup device is arranged near the touch screen device, and when a user stands in front of the touch screen, the image pickup device can capture video data that includes the user and the user's operation of the touch screen.
Specifically, in this embodiment, when the user operates the touch screen, the user's identity is first identified, and the user's standing positions and touch areas are uploaded to the cloud to build a cloud learning set corresponding to that user. Learning and training on this set makes the model fit the user's usage habits more closely, so that the relative acceleration region is divided more accurately and the touch screen is more convenient for the user to use.
Step 5.2, extracting frame images from the video data.
Step 5.3, determining, in each frame image, the standing position of the user and the touch area where the touch operation is performed.
Specifically, the embodiment of the invention can infer the relative position of the user and the touch screen from the user's left-right position within the frame image, and determine the user's standing position in the frame image according to that relative position.
Specifically, the embodiment of the invention first determines the area of the touch screen in the frame image, then acquires the user's skeletal structure using Kinect technology to obtain the continuous pointing actions of the user's hand, and thereby determines the touch area of the user's touch operation in the frame image.
Step 5.4, determining, according to the standing position of the user and the touch area of the touch operation in each frame image, the historical touch areas of the user's historical touch operations on the touch screen at each standing position.
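A sketch of steps 5.2 to 5.4 under stated assumptions is given below: frames are decoded with OpenCV, while standing_position_in_frame and touch_area_in_frame are placeholders standing in for the person detection and Kinect-based hand tracking described above.

```python
# Group the touch areas found in each frame by the standing position observed
# in that frame, yielding the historical touch areas per standing position.
from collections import defaultdict
from typing import Dict, List, Tuple

import cv2  # opencv-python

Rect = Tuple[int, int, int, int]

def collect_history(video_path: str,
                    standing_position_in_frame,
                    touch_area_in_frame) -> Dict[str, List[Rect]]:
    history: Dict[str, List[Rect]] = defaultdict(list)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        position = standing_position_in_frame(frame)  # step 5.3 (placeholder)
        area = touch_area_in_frame(frame)              # step 5.3 (placeholder)
        if position is not None and area is not None:
            history[position].append(area)              # step 5.4 aggregation
    cap.release()
    return dict(history)
```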
Based on the same inventive concept as the method, the embodiment of the invention also provides a touch acceleration device of a touch screen, referring to fig. 3, fig. 3 shows a schematic structural diagram of the embodiment of the device, which specifically includes:
a first standing position acquisition module 21, configured to acquire a current first standing position of the user.
The relative acceleration region obtaining module 22 is configured to input the first standing position into a trained relative acceleration region partition model, and obtain a relative acceleration region corresponding to the first standing position.
The computing capability adjusting module 23 is configured to adjust the computing capability of the touch IC in the relative acceleration region to a first value, and adjust the computing capability of the touch IC in the relative non-acceleration region to a second value; wherein the first value is greater than the second value; and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region.
In one possible embodiment, the apparatus further comprises:
the standing position dividing module is used for dividing a standing position area of the touch screen controlled by the user into a plurality of standing positions; wherein the plurality of standing positions includes the first standing position;
a history touch area acquisition module, configured to acquire a history touch area of the user performing a history touch operation in the touch screen at each of the plurality of standing positions;
and the model training module is used for training the relative acceleration region partition model by using each standing position and the corresponding historical touch region thereof, so that the trained relative acceleration region partition model can determine the corresponding relative acceleration region of the user at each standing position.
In one possible embodiment, the apparatus further comprises:
the real-time standing position acquisition module is used for acquiring the real-time standing position of the user;
a first standing position updating module for updating the first standing position using the real-time standing position.
In one possible embodiment, the apparatus further comprises:
a first judgment module for judging whether the user is using a slide in the relative acceleration region; the slide comprises a text area, a picture area, an edge area, and a blank area; the blank area is the area of the slide other than the text area, the picture area, and the edge area;
and a relative acceleration region adding module for taking the display areas on the touch screen corresponding to the text area and the blank area as the relative acceleration region when the user uses a slide on the touch screen, so as to update the relative acceleration region.
In one possible embodiment, the first standing position acquisition module comprises:
the image data acquisition sub-module is used for acquiring image data of the user currently standing in front of the touch screen;
the relative position calculating sub-module is used for calculating the relative position of the user and the touch screen in the image data;
and the first standing position determining submodule is used for determining the current first standing position of the user according to the relative position.
In one possible embodiment, the historical touch region acquisition module includes:
the video data acquisition sub-module is used for acquiring video data of historical touch operation of the user in the touch screen at each standing position stored in the cloud server;
a frame image extraction sub-module for extracting a frame image of the video data;
the touch area determination submodule is used for determining the standing position of the user and the touch area for touch operation in each frame image;
and the historical touch area determination submodule is used for determining a historical touch area of the user for carrying out the historical touch operation in the touch screen at each standing position according to the standing position of the user in each frame image and the touch area for carrying out the touch operation.
Based on the same inventive concept as in the previous embodiments, the embodiments of the present invention further provide a touch screen device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of any of the methods described above when executing the program.
Based on the same inventive concept as in the previous embodiments, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the methods described above.
The technical scheme provided in the embodiment of the application has at least the following technical effects or advantages:
After the first standing position where the user currently stands is detected, the trained relative acceleration region partition model is used to take the region the user frequently touches when standing at the first standing position as the relative acceleration region corresponding to the first standing position, and the touch signal scanning frequency of the touch screen in the relative acceleration region is increased relative to the relative non-acceleration region, thereby improving the scanning speed and positioning accuracy of touch signals in the relative acceleration region. Without upgrading or modifying the touch screen hardware, the invention learns and analyzes the user's habits, reallocates the touch computing capability of the touch screen, and preferentially performs touch signal positioning and identification in the areas the user frequently touches, so that touch signals on large-size touch screen devices can be positioned quickly and at low cost.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (modules, systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A touch acceleration method for a touch screen, characterized by comprising the following steps:
acquiring a current first standing position of a user;
inputting the first standing position into a trained relative acceleration region partition model to obtain a relative acceleration region corresponding to the first standing position; the relative acceleration region is the region of the touch screen that the user is currently most likely to touch, wherein the relative acceleration region partition model is trained based on the first standing position;
adjusting the computing capability of the touch IC in the relative acceleration region to a first value, and adjusting the computing capability of the touch IC in the relative non-acceleration region to a second value; wherein the first value is greater than the second value; and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region;
before the current standing position of the user is obtained, the method comprises the following steps:
dividing a standing position area of the user operating the touch screen into a plurality of standing positions; wherein the plurality of standing positions includes the first standing position.
2. The method for touch acceleration of claim 1, further comprising, prior to obtaining the current standing position of the user:
acquiring a historical touch area of the user performing historical touch operation in the touch screen at each of the plurality of standing positions;
and training the relative acceleration region partition model by using each standing position and the corresponding historical touch region thereof, so that the trained relative acceleration region partition model can determine the corresponding relative acceleration region of the user at each standing position.
3. The method for touch acceleration of claim 1, wherein before inputting the first standing position into the trained relative acceleration region partition model to obtain the relative acceleration region corresponding to the first standing position, the method further comprises:
acquiring a real-time standing position of the user;
updating the first standing position using the real-time standing position.
4. The method for touch acceleration of claim 1, wherein after the first standing position is input into the trained relative acceleration region partition model to obtain the relative acceleration region corresponding to the first standing position, the method further comprises:
determining whether the user is using a slide in the relative acceleration region; the slide comprises a text area, a picture area, an edge area, and a blank area; the blank area is the area of the slide other than the text area, the picture area, and the edge area;
if yes, taking the display areas on the touch screen corresponding to the text area and the blank area as the relative acceleration region, so as to update the relative acceleration region.
5. The method for touch acceleration of claim 1, wherein the obtaining the current first standing position of the user comprises:
acquiring image data of the user currently standing in front of the touch screen;
calculating the relative position of the user and the touch screen in the image data;
and determining the current first standing position of the user according to the relative position.
6. The touch acceleration method of claim 2, wherein the acquiring a historical touch area of the user performing a historical touch operation in the touch screen at each of the plurality of standing positions comprises:
acquiring video data of historical touch operation of the user in the touch screen at each standing position stored in a cloud server;
extracting a frame image of the video data;
determining a standing position of the user and a touch area for touch operation in each frame image;
and determining a historical touch area of the user performing historical touch operation in the touch screen at each standing position according to the standing position of the user and the touch area performing touch operation in each frame image.
7. A touch acceleration device for a touch screen, the device comprising:
the first standing position acquisition module is used for acquiring the current first standing position of the user;
the relative acceleration region obtaining module is used for inputting the first standing position into a trained relative acceleration region partition model to obtain a relative acceleration region corresponding to the first standing position; the relative acceleration region is the region of the touch screen that the user is currently most likely to touch, wherein the relative acceleration region partition model is trained based on the first standing position;
the computing capability adjusting module is used for adjusting the computing capability of the touch IC in the relative acceleration region to a first value and adjusting the computing capability of the touch IC in the relative non-acceleration region to a second value; wherein the first value is greater than the second value; and the relative non-acceleration region is the region of the touch screen other than the relative acceleration region;
the apparatus further comprises:
the standing position dividing module is used for dividing a standing position area of the touch screen controlled by the user into a plurality of standing positions; wherein the plurality of standing positions includes the first standing position.
8. The touch acceleration device of claim 7, further comprising:
a history touch area acquisition module, configured to acquire a history touch area of the user performing a history touch operation in the touch screen at each of the plurality of standing positions;
and the model training module is used for training the relative acceleration region partition model by using each standing position and the corresponding historical touch region thereof, so that the trained relative acceleration region partition model can determine the corresponding relative acceleration region of the user at each standing position.
9. A touch screen device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the method of any one of claims 1 to 6.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program is executed by a processor to implement the steps of the method of any of claims 1 to 6.
CN202010228243.6A 2020-03-27 2020-03-27 Touch acceleration method and device for touch screen Active CN111552398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010228243.6A CN111552398B (en) 2020-03-27 2020-03-27 Touch acceleration method and device for touch screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010228243.6A CN111552398B (en) 2020-03-27 2020-03-27 Touch acceleration method and device for touch screen

Publications (2)

Publication Number Publication Date
CN111552398A CN111552398A (en) 2020-08-18
CN111552398B (en) 2023-08-08

Family

ID=72007269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010228243.6A Active CN111552398B (en) 2020-03-27 2020-03-27 Touch acceleration method and device for touch screen

Country Status (1)

Country Link
CN (1) CN111552398B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109542272A (en) * 2018-12-04 2019-03-29 北京集创北方科技股份有限公司 Touch screen scanning method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN111552398A (en) 2020-08-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant