CN112698739A - Positioning method and device

Positioning method and device

Info

Publication number: CN112698739A; granted as CN112698739B
Application number: CN202011578675.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 于宙
Assignee (original and current): Lenovo Beijing Ltd
Legal status: Granted, Active
Prior art keywords: display, area, target object, display area, electronic device

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0354: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F 3/03547: Touch pads, in which fingers can move on a surface
    • G06F 3/044: Digitisers, e.g. for touch screens or touch pads, characterised by capacitive transducing means
    • G06F 3/0441: Digitisers using active external devices, e.g. active pens, for receiving changes in electrical potential transmitted by the digitiser, e.g. tablet driving signals
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on GUI using specific features provided by the input device or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUI using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on GUI using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Abstract

The application provides a control method applied to a first electronic device having a display area. The method includes: if a first operation directed at the first electronic device is detected, selecting a display object in the display area based on the first operation to determine a target object; and if a second operation directed at the target object is detected, replacing the target object based on the second operation. The space or area in which the first operation and the second operation take place is different from the display area, and the first operation and the second operation are different from each other. The application also provides an electronic device.

Description

Positioning method and device
Technical Field
The present disclosure relates to a technology for positioning an object displayed on a display screen, and in particular, to a positioning method and apparatus.
Background
At present, when handwriting is input on a touch pad, the touch pad itself provides no handwriting display (the user writes "blind"); the input characters, words, symbols, or graphics are displayed only on the screen. When the input contains errors, the erroneous characters, words, symbols, or graphics cannot be located efficiently without requiring the user to change the way they write.
Disclosure of Invention
In view of this, it is desirable to provide a control method and an electronic device capable of quickly locating an erroneous object.
To achieve this purpose, the technical solution of the application is realized as follows:
According to one aspect of the present application, there is provided a control method applied to a first electronic device having a display area, the method including:
if a first operation directed at the first electronic device is detected, selecting a display object in the display area based on the first operation to determine a target object;
if a second operation directed at the target object is detected, replacing the target object based on the second operation;
wherein the space or area in which the first operation and the second operation take place is different from the display area, and the first operation and the second operation are different from each other.
In the above solution, the method further includes:
if a third operation directed at the first electronic device is detected, displaying input information in the display area based on the third operation;
wherein the space or area in which the third operation takes place is different from the display area, and the third operation is different from the first operation and the second operation.
In the above solution, the method further includes:
if a fourth operation directed at the first electronic device is detected, controlling the content display of the display area based on the fourth operation;
wherein the space or area in which the fourth operation takes place is different from the display area, and the fourth operation is different from the first operation and the second operation;
or, if it is detected that the first electronic device is switched from a first mode to a second mode, controlling the content display of the display area in the second mode;
wherein the first mode characterizes an object selection and/or object replacement mode, and the second mode characterizes a mouse mode.
In the above solution, the method further includes:
displaying the target object in a first manner such that the target object is displayed differently from the other objects in the display area;
wherein displaying the target object in the first manner includes:
enlarging the display size of the target object so that it differs from the display sizes of the other objects displayed in the display area;
or, causing the target object to jump (for example, bounce or flash) so that its display effect differs from that of the other objects displayed in the display area;
or, increasing the display brightness of the target object so that it differs from the display brightness of the other objects displayed in the display area;
or, adding a background color to the target object so that its background color differs from the background colors of the other objects displayed in the display area.
In the above solution, detecting the first operation directed at the first electronic device includes:
detecting the first operation in a first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting the first operation in a second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting the first operation in a third writing area of a second electronic device, the second electronic device being different from the first electronic device.
Accordingly, detecting the second operation directed at the first electronic device includes:
detecting the second operation in the first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting the second operation in the second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting the second operation in the third writing area of the second electronic device, the second electronic device being different from the first electronic device.
In the above solution, the method further includes:
detecting the first operation and/or the second operation directed at the first electronic device in a first manner, the first manner including at least one of a radar, an infrared sensor, an image collector, a ranging sensor, and an ultrasonic sensor.
In the above solution, selecting a display object in the display area based on the first operation to determine a target object includes:
moving a cursor in the display area in units of characters/words, based on the movement direction or movement angle information corresponding to the first operation, so as to determine the target object among the display objects in the display area.
In the above solution, replacing the target object based on the second operation includes:
determining the number of times the second operation has been detected;
determining, in sequence, a target candidate among the N candidates corresponding to the target object based on the number of detections of the second operation, where N is greater than or equal to 1;
and replacing the target object with the target candidate.
According to another aspect of the application, there is provided an electronic device including at least:
a display screen having a display area; and
a control component configured to, if a first operation directed at the electronic device is detected, select a display object in the display area based on the first operation to determine a target object; and if a second operation directed at the target object is detected, replace the target object based on the second operation;
wherein the space or area in which the first operation and the second operation take place is different from the display area, and the first operation and the second operation are different from each other.
In the above aspect, the control component includes at least:
a handwriting pad having a writing area;
wherein the control component is specifically configured to, when a first operation directed at the writing area is detected, select a display object in the display area based on the first operation to determine a target object; and if a second operation directed at the writing area is detected, replace the target object based on the second operation;
and/or a gesture recognition sensor configured to collect, in a recognition collection area, gestures directed at the electronic device;
wherein the control component is specifically configured to, when the gesture is detected to be a first operation directed at the electronic device, select a display object in the display area based on the first operation to determine a target object; and if the gesture is detected to be a second operation directed at the target object, replace the target object based on the second operation.
According to the control method and the electronic device of the present application, the electronic device is controlled to select or replace a display object through different operations performed on it. This enables an erroneous object displayed in the display area to be quickly located and replaced, improving the efficiency and accuracy of correcting it.
Drawings
FIG. 1 is a schematic flow diagram of a control method in the present application;
FIG. 2 is a schematic diagram of a writing and displaying embodiment of the present application;
FIG. 3 is a first schematic structural diagram of an electronic device in the present application;
FIG. 4 is a second schematic structural diagram of an electronic device in the present application.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments of the specification.
FIG. 1 is a schematic flow diagram of a control method in the present application. As shown in FIG. 1, the method includes:
Step 101: if a first operation directed at the first electronic device is detected, selecting a display object in the display area based on the first operation to determine a target object.
in this application, the first electronic device may specifically be a device with a display screen, such as a television, a computer, a server, and the like, and the display screen has at least one display area.
In one embodiment, the first electronic device further has a gesture recognition function and a gesture recognition area. When a user performs a first gesture operation directed at the first electronic device in the gesture recognition area, the first electronic device can detect the gesture parameter corresponding to the first gesture operation and compare it with preset gesture parameters to obtain a comparison result. If the comparison result indicates that the detected gesture parameter successfully matches the preset gesture parameter that characterizes object selection, a display object in the display area of the first electronic device is selected based on the first gesture operation to determine the target object. In this case, the first gesture operation can be understood as the first operation described above.
For example, a user makes a palm gesture in the gesture recognition area of the first electronic device. When the first electronic device detects the gesture parameter of the palm gesture, it compares that parameter with the preset gesture parameters in the gesture database. If the comparison result shows that the palm gesture successfully matches the gesture parameter in the gesture database that characterizes selection of a display object, a display object in the display area of the first electronic device is selected based on the palm gesture to determine the target object.
Here, a plurality of gesture parameters are stored in the gesture database, and each gesture parameter corresponds to a different operation. For example, a palm gesture corresponds to an object selection operation, a single-finger gesture corresponds to a text input operation, a two-finger gesture corresponds to an object determination operation, and so on.
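For illustration only, the gesture-database matching described above might be sketched as follows. This is a minimal sketch, not the patented implementation: the feature encoding (extended-finger count and palm openness), the preset vectors, and the match threshold are all assumptions of the sketch.

```python
import math

# Hypothetical preset gesture parameters: each gesture is reduced to a
# small feature vector (extended-finger count, palm openness) bound to
# the operation it characterizes, as in the gesture database above.
PRESET_GESTURES = {
    "palm":   ((5, 1.0), "select_object"),   # palm -> object selection
    "finger": ((1, 0.2), "text_input"),      # one finger -> text input
    "fist":   ((0, 0.0), "replace_object"),  # fist -> object replacement
}
MATCH_THRESHOLD = 0.5  # assumed tolerance for a "successful" match

def match_gesture(features):
    """Return the operation whose preset feature vector is closest to
    the detected features, or None if nothing is close enough."""
    best_op, best_dist = None, float("inf")
    for preset, op in PRESET_GESTURES.values():
        dist = math.dist(features, preset)
        if dist < best_dist:
            best_op, best_dist = op, dist
    return best_op if best_dist <= MATCH_THRESHOLD else None

print(match_gesture((5, 0.9)))  # -> "select_object"
```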
In another embodiment, the first electronic device further has a handwriting pad or a touch pad, and the writing area is provided on the handwriting pad or touch pad. When a user performs a hovering operation (i.e., a non-contact operation) above the writing area with a finger or a stylus, the first electronic device can detect, in a first manner, a first coordinate value of the finger or stylus relative to the writing area, and can select a display object in the display area of the first electronic device based on the first coordinate value to determine the target object. In this case, the hovering operation can be understood as the first operation described above.
Here, the first manner includes at least one of an infrared sensor, a radar, an image collector, a light sensor, a ranging sensor, and an ultrasonic sensor.
For example, the first electronic device has a touch pad with a radar disposed on one side, and an electromagnetic wave signal can be emitted outward by the radar. When a user holds a stylus and moves it above the writing area of the touch pad without contact, the first electronic device receives the echo signal returned when the stylus blocks the electromagnetic wave signal. It can then determine a first distance from the stylus (here, the input end of the stylus) to the writing area based on the degree of signal attenuation of the echo relative to the emitted signal. If the first distance is within a preset distance range, the coordinate value to which the stylus currently corresponds in the writing area is determined based on the echo signal, that coordinate value is matched against a plurality of second coordinate values in the display area, and according to the matching result, the display object corresponding to the successfully matched second coordinate value in the display area is determined as the target object.
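A minimal sketch of this hover-positioning chain follows: attenuation is mapped to a distance, the distance is checked against a preset range, and the pen coordinate is matched against the objects' second coordinate values. The attenuation-to-distance model, the hover range, and the object list are all assumptions of the sketch, not part of the disclosure.

```python
# Illustrative hover positioning: echo attenuation -> distance check
# -> pen coordinate -> nearest display object.
HOVER_RANGE = (0.005, 0.03)  # assumed valid hover band, in metres

def hover_target(echo_attenuation_db, pen_xy, display_objects):
    """Map a hover event over the writing area to a display object.

    display_objects: list of (x, y, obj) screen coordinates per object.
    Returns the object whose display coordinate is closest to the
    pen's writing-area coordinate, or None if the pen is out of range.
    """
    distance = echo_attenuation_db * 0.001   # toy attenuation model
    if not (HOVER_RANGE[0] <= distance <= HOVER_RANGE[1]):
        return None                          # pen too near or too far
    # Match the pen coordinate against the objects' second coordinates.
    return min(display_objects,
               key=lambda o: (o[0] - pen_xy[0])**2 + (o[1] - pen_xy[1])**2)[2]

objects = [(10, 5, "This"), (40, 5, "it"), (60, 5, "a"), (90, 5, "demo")]
print(hover_target(12.0, (42, 6), objects))  # -> "it"
```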
In this application, when the first electronic device determines the target object among the display objects in the display area, it may specifically move a cursor in the display area in units of characters/words, based on the movement direction or movement angle information corresponding to the first operation, so as to determine the target object among the display objects in the display area.
For example, the first operation is a gesture operation: when the user makes a palm gesture in the gesture recognition area and moves the palm to the left, the first electronic device detects that the movement direction of the palm is leftward and moves the cursor leftward from its current position in units of characters/words in the display area, so that the target object can be determined among the display objects in the display area.
For another example, the first operation is a hovering operation: when the user holds the stylus and performs a rightward-moving hovering operation above the writing area of the touch pad, the first electronic device detects that the movement direction of the stylus is rightward and moves the cursor rightward from its current position in units of characters/words in the display area, so that the target object can be determined among the display objects in the display area.
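As a sketch of this word-unit cursor movement (the tokenised sentence and the direction encoding are assumptions of the sketch):

```python
# Move the cursor one word per detected movement, clamped to the text.
def move_cursor(words, cursor, direction):
    """Return the cursor's new word index after a left/right movement."""
    step = {"left": -1, "right": 1}[direction]
    return max(0, min(len(words) - 1, cursor + step))

words = ["This", "it", "a", "demo"]
cursor = 2                          # cursor currently on "a"
cursor = move_cursor(words, cursor, "left")
print(words[cursor])                # -> "it": the candidate target object
```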
In this application, the first electronic device may have a plurality of writing areas, and these writing areas may differ. For example, the first writing area may be the writing area of a touch pad, the second writing area may be the touch area of a touch display screen, and the third writing area may be the writing area of an external device.
Specifically, the first electronic device may detect the first operation directed at it in a first writing area of the first electronic device, the first writing area being different from the display area; for example, the first writing area is the writing area of the touch pad, which differs from the display area of the screen.
The first electronic device may detect the first operation directed at it in a second writing area of the first electronic device, the second writing area being the same as the display area; for example, if the screen of the first electronic device is a touch screen, the second writing area may be the touch display area of the touch screen, which is the same as the display area.
The first electronic device may detect the first operation directed at it in a third writing area of a second electronic device, the third writing area being different from the display area and the second electronic device being different from the first electronic device; for example, the second electronic device is a writing tablet, a wall, a projection cloth, a desktop, or the like.
By selecting and positioning a display object in the display area through a gesture operation or a hovering operation, the present application can improve the efficiency and accuracy of locating erroneous characters/words among the display objects.
In this application, when the first electronic device determines a target object among the display objects in the display area, the target object may also be displayed in a first manner, so that its display manner differs from that of the other objects in the display area.
Specifically, when the first electronic device displays the target object in the first manner, it may enlarge the display size of the target object so that it differs from the display sizes of the other objects displayed in the display area; or cause the target object to jump (for example, bounce or flash) so that its display effect differs from that of the other objects displayed in the display area; or increase the display brightness of the target object so that it differs from the display brightness of the other objects displayed in the display area; or add a background color to the target object so that its background color differs from the background colors of the other objects displayed in the display area.
Because handwriting on the touch pad leaves no visible trace ("blind" writing), the user can only see the correspondence between the written characters and their actual coordinates on the screen. When the hand moves to the corresponding position, the display effect of the target object is changed so that it differs from the other objects in the display area, and the corresponding word displayed on the screen is thereby highlighted.
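A minimal sketch of these four first-manner display options follows; the style dictionary stands in for whatever rendering attributes an actual display pipeline exposes, and its field names and values are assumptions of the sketch.

```python
# Apply one of the four highlighting manners to a target object's style.
def highlight(target_style: dict, manner: str) -> dict:
    """Return a modified style so the target differs from other objects."""
    style = dict(target_style)
    if manner == "enlarge":
        style["size"] = style.get("size", 12) * 1.5
    elif manner == "jump":
        style["animation"] = "bounce"          # e.g. bounce or flash
    elif manner == "brighten":
        style["brightness"] = min(1.0, style.get("brightness", 0.5) + 0.3)
    elif manner == "background":
        style["background"] = "#ffe08a"        # distinct from neighbours
    return style

print(highlight({"size": 12}, "enlarge"))  # -> {'size': 18.0}
```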
Step 102: if a second operation directed at the target object is detected, replacing the target object based on the second operation.
In this application, the space or area in which the first operation and the second operation take place is different from the display area, and the first operation and the second operation are also different from each other.
For example, the space in which the first operation and the second operation take place is a three-dimensional space, while the space in which the display area lies is a two-dimensional plane. For another example, the area in which the first operation and the second operation take place is the writing area of a touch pad, a handwriting pad, a desktop, a wall surface, or a projection cloth, while the display area is the screen display area of the first electronic device.
Specifically, in one embodiment, when a user performs a second gesture operation directed at the first electronic device in its gesture recognition area, the first electronic device can detect the gesture parameter corresponding to the second gesture operation and compare it with preset gesture parameters to obtain a comparison result. If the comparison result indicates that the detected gesture parameter successfully matches the preset gesture parameter that characterizes object replacement, the selected target object is replaced based on the second gesture operation. In this case, the second gesture operation is the second operation described above.
For example, a user makes a fist gesture in the gesture recognition area of the first electronic device. When the first electronic device detects the gesture parameter of the fist gesture, it compares that parameter with the preset gesture parameters in the gesture database. If it determines from the comparison result that the fist gesture successfully matches the gesture parameter in the gesture database that characterizes target object replacement, it selects a target candidate from the N candidates in the candidate database based on the fist gesture and, once the target candidate is selected, replaces the selected target object in the display area with the target candidate, where N is greater than or equal to 1.
In this application, when the first electronic device detects a fist gesture for replacing the target object, it can count the number of times the fist gesture has been detected and then determine the target candidate among the N candidates in sequence according to that count. That is, each time the fist gesture is detected, the first electronic device selects the next of the N candidates to replace the current target object.
Here, the N candidates in the candidate database include, but are not limited to, English words, Chinese characters/words, figures, or symbols.
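A minimal sketch of this count-driven candidate cycling (the candidate list and the text representation are assumptions of the sketch):

```python
# Cycle through the N candidates: each detected replacement gesture
# advances to the next candidate and replaces the target word.
class Replacer:
    def __init__(self, candidates):
        self.candidates = candidates   # N >= 1 candidates for the target
        self.detections = 0            # how many times the gesture fired

    def on_replace_gesture(self, text, target_index):
        """Replace the word at target_index with the next candidate."""
        candidate = self.candidates[self.detections % len(self.candidates)]
        self.detections += 1
        text[target_index] = candidate
        return text

text = ["This", "it", "a", "demo"]
r = Replacer(["is", "its", "it's"])
print(r.on_replace_gesture(text, 1))  # -> ['This', 'is', 'a', 'demo']
```

Each further detection advances the count, so repeating the gesture walks through the N candidates in sequence, as described above.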
In another embodiment, when a user performs a hover pressing operation (i.e., a non-contact pressing operation) above the writing area of the touch pad or handwriting pad with a finger or stylus, the first electronic device can detect, in a first manner, the pressing parameter corresponding to the hover pressing operation. When the pressing parameter is within a preset parameter range, it determines a target candidate from the N candidates in the candidate database and, once the target candidate is determined, replaces the selected target object in the display area with the target candidate, where N is greater than or equal to 1.
Here, the first manner includes, but is not limited to, at least one of an infrared sensor, a radar, an image collector, a light sensor, a ranging sensor, and an ultrasonic sensor.
In this application, when the first electronic device detects the pressing parameter corresponding to the hover pressing operation, it can count the number of detections of the pressing parameter and determine the target candidate among the N candidates in sequence based on that count. That is, each time the user presses with the finger or stylus, the first electronic device selects the next of the N candidates to replace the current target object.
Here, too, the N candidates in the candidate database include, but are not limited to, English words, Chinese characters/words, figures, or symbols.
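The gating of a hover press on its pressing parameter might be sketched as follows; the normalised press-depth scale and the range are assumptions of the sketch. Each press counted here would drive the same candidate cycling shown above.

```python
# Only a hover press whose measured parameter falls inside the preset
# range counts as one detection toward candidate cycling.
PRESS_RANGE = (0.4, 0.8)  # assumed normalised press-depth band

def count_valid_presses(press_samples):
    """Count hover presses whose parameter lies in the preset range."""
    return sum(1 for p in press_samples if PRESS_RANGE[0] <= p <= PRESS_RANGE[1])

# Three hover presses; the middle one is too shallow to count.
print(count_valid_presses([0.55, 0.2, 0.6]))  # -> 2 valid detections
```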
In the application, the first electronic device may detect the second operation directed at it in a first writing area of the first electronic device, the first writing area being different from the display area; for example, the first writing area is the writing area of a touch pad on the first electronic device, which differs from the screen display area of the first electronic device.
The first electronic device may detect the second operation directed at it in a second writing area of the first electronic device, the second writing area being the same as the display area; for example, the display screen of the first electronic device is a touch display screen, the second writing area is the touch area of that touch display screen, and the touch area is the same as the display area.
The first electronic device may detect the second operation directed at it in a third writing area of a second electronic device, the third writing area being different from the display area and the second electronic device being different from the first electronic device; for example, the second electronic device is a writing tablet, a wall, a projection cloth, a desktop, or the like.
According to the present application, the correct target candidate can be selected from the N candidates through different gesture operations or different hovering operations, so that the determined target object in the display area is replaced; this effectively improves the efficiency, and reduces the complexity, of correcting erroneous characters/words in the display area.
In the application, the first electronic device further has a writing function: if the first electronic device detects a third operation directed at it, input information can be displayed in the display area based on the third operation. Here, the space or area in which the third operation takes place is different from the display area, and the third operation is different from the first operation and the second operation.
For example, the space in which the third operation takes place is a three-dimensional space, while the space in which the display area lies is a two-dimensional plane. For another example, the area in which the third operation takes place is the writing area of a touch pad, a handwriting pad, a desktop, a wall surface, or a projection cloth, while the display area is the screen display area of the first electronic device.
Specifically, in one embodiment, when a user performs a third gesture operation directed at the first electronic device in its gesture recognition area, the first electronic device can detect the gesture parameter corresponding to the third gesture operation. When the user inputs text in the gesture recognition area through the third gesture, the first electronic device can also detect the motion trajectory parameter corresponding to the third gesture operation, determine the information to be input from the motion trajectory parameter, and display the input information in the display area.
Here, if the first electronic device detects the third operation before detecting the first operation and/or the second operation, the input information displayed in the display area may contain the target object (e.g., a wrong character/word). If the first electronic device detects the third operation after detecting the first operation and/or the second operation, the written information displayed in the display area may no longer contain the target object (e.g., a wrong character/word).
For example, a user extends one finger in the gesture recognition area of the first electronic device and performs a writing operation in that area with the finger. When the first electronic device detects the gesture parameter of the finger gesture, it compares that parameter with the preset gesture parameters in the gesture database. If it determines from the comparison result that the finger gesture successfully matches the writing gesture parameter in the gesture database, it detects the motion trajectory of the finger gesture, determines the written information to be displayed from that trajectory, and displays it in the display area.
In another embodiment, the user can also input text with a finger or a stylus in the writing area of the touch pad or handwriting pad. When the touch pad or handwriting pad is capacitive, touching it with the human body generates capacitance. Thus, when a user inputs characters in the writing area of the handwriting pad with a finger, the sensing matrix in the handwriting pad can track the movement of the finger's capacitance and record the pressure positions where the finger contacts the pad, accurately locating and recording the finger's movement trajectory; the first electronic device can then display the input information in the display area according to that trajectory.
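A sketch of the sensing-matrix tracking loop described above, which accumulates pressed cells into a stroke trajectory for the recogniser to consume; the grid sample format is an assumption of the sketch.

```python
# Accumulate the pressed sensing-matrix cells into an ordered stroke.
def record_stroke(touch_events):
    """touch_events: iterable of (row, col, pressed) matrix samples.
    Returns the ordered list of contact coordinates forming the stroke."""
    trajectory = []
    for row, col, pressed in touch_events:
        if pressed:                     # finger capacitance detected here
            trajectory.append((row, col))
    return trajectory

events = [(3, 4, True), (3, 5, True), (2, 6, True), (2, 6, False)]
print(record_stroke(events))  # -> [(3, 4), (3, 5), (2, 6)]
```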
In this application, the first electronic device further has a control function for controlling the content display of the display area. If the first electronic device detects a fourth operation directed at it, the content display of its display area can be controlled based on the fourth operation. Here, the space or area in which the fourth operation takes place is different from the display area, and the fourth operation is different from the first operation, the second operation, and the third operation.
For example, the space in which the fourth operation takes place is a three-dimensional space, while the space in which the display area lies is a two-dimensional plane. For another example, the area in which the fourth operation takes place is the writing area of a touch pad, a handwriting pad, a desktop, a wall surface, or a projection cloth, while the display area is the screen display area of the first electronic device.
Specifically, in one embodiment, when a user performs a fourth gesture operation directed at the first electronic device in its gesture recognition area, the first electronic device can detect the gesture parameter corresponding to the fourth gesture operation and compare it with the preset gesture parameters in the database to obtain a comparison result. If the comparison result indicates that the detected gesture parameter successfully matches the gesture parameter that characterizes content display control, the content display of the display area is controlled based on the fourth gesture operation.
For example, a user makes a gripping gesture in the gesture recognition area of the first electronic device. When the first electronic device detects the gesture parameter of the gripping gesture, it compares that parameter with the preset gesture parameters in the gesture database; if it determines from the comparison result that the gripping gesture successfully matches the gesture parameter in the gesture database that characterizes content display control, the content display of the display area is controlled based on the gripping gesture. That is, the gripping gesture here acts as a mouse, and the displayed page in the display area can be controlled to scroll up and down according to the movement direction of the gripping gesture.
In another example, the first electronic device may further have a mode switching function: if it detects that it has been switched from the first mode to the second mode, it controls the content display of the display area in the second mode. Here, the first mode characterizes an object selection and/or object replacement mode, and the second mode characterizes a mouse mode.
Specifically, the handwriting pad may have buttons corresponding to the left and right mouse buttons. When the user presses such a button, the first electronic device receives a mouse instruction and switches from the first mode to the second mode (the mouse mode), in which it controls the content display of the display area. Of course, the first electronic device may also have a voice function: when a user issues a mode switching instruction directed at the first electronic device (for example, "switch to mouse mode"), the first electronic device can switch from the first mode to the second mode based on that instruction.
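A minimal sketch of this two-mode switch follows; the mode names and trigger strings (button event, voice instruction) are assumptions of the sketch.

```python
# Switch between the selection/replacement mode and the mouse mode on
# a mouse-button press or a recognised voice instruction.
class InputModeController:
    FIRST_MODE = "object_selection_replacement"
    SECOND_MODE = "mouse"

    def __init__(self):
        self.mode = self.FIRST_MODE

    def on_event(self, event: str) -> str:
        """Update and return the current mode for a trigger event."""
        if event in ("mouse_button_pressed", "voice:switch_to_mouse"):
            self.mode = self.SECOND_MODE
        elif event == "voice:switch_to_writing":
            self.mode = self.FIRST_MODE
        return self.mode

ctl = InputModeController()
print(ctl.on_event("mouse_button_pressed"))  # -> "mouse"
```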
FIG. 2 is a schematic diagram of writing and displaying in the present application. As shown in FIG. 2, a handwriting pad with a writing area is disposed below the keyboard area of the first electronic device. A user writes "This is a demo" in the writing area; the first electronic device recognizes the written content and displays the recognized writing in the display area. According to the present application, a wrongly recognized word (for example, "it" or "side") can be quickly located using the gesture and/or hovering operation for selecting and positioning wrong characters/words, and then quickly replaced using the object replacement operation. In the application, the gesture and/or hovering operation used to select and position wrong characters/words differs from the ordinary writing operation and from the mouse control operation: for example, one finger is used for writing while a whole palm is used to select and position a wrong character/word; or the finger or stylus writes when in contact with the writing area, while non-contact hovering selects and positions a wrong character/word. In this way, the target object can be quickly selected and located, improving the efficiency and accuracy of locating wrong characters/words.
FIG. 3 is a schematic structural diagram of an electronic device in the present application. As shown in FIG. 3, the electronic device includes at least: a display screen 301 and a control component 302, where the display screen 301 has a display area. The control component 302 is disposed inside the electronic device and is configured to, if a first operation directed at the electronic device is detected, select a display object in the display area based on the first operation to determine a target object; and if a second operation directed at the target object is detected, replace the target object based on the second operation. The space or area in which the first operation and the second operation take place is different from the display area, and the first operation and the second operation are different from each other.
In a preferred solution, the control component 302 includes at least: a handwriting pad 3021 and/or a gesture recognition sensor 3022.
Specifically, the handwriting pad 3021 has at least a writing area. The control component 302 is specifically configured to, when a first operation directed at the writing area is detected, select a display object in the display area based on the first operation to determine a target object; and if a second operation directed at the writing area is detected, replace the target object based on the second operation.
The gesture recognition sensor 3022 is configured to collect, in a recognition collection area, gestures directed at the electronic device.
The control component 302 is specifically configured to, when the gesture is detected to be a first operation directed at the electronic device, select a display object in the display area based on the first operation to determine a target object; and if the gesture is detected to be a second operation directed at the target object, replace the target object based on the second operation.
Here, the gesture recognition sensor 3022 may specifically be a radar, an infrared sensor, a light sensor, an image collector, a ranging sensor, or the like. The control component may be a control chip in the handwriting pad.
In a preferred solution, the display screen 301 is further configured to, if a third operation directed at the first electronic device is detected, display input information in the display area based on the third operation; the space or area in which the third operation takes place is different from the display area, and the third operation is different from the first operation and the second operation.
Preferably, the control component 302 is further configured to, if a fourth operation directed at the first electronic device is detected, control the content display of the display area based on the fourth operation; the space or area in which the fourth operation takes place is different from the display area, and the fourth operation is different from the first operation and the second operation.
Or, the control component 302 controls the content display of the display area in the second mode if it detects that the first electronic device has switched from the first mode to the second mode, where the first mode characterizes an object selection and/or object replacement mode and the second mode characterizes a mouse mode.
In a preferred solution, the display screen 301 is further configured to display the target object in a first manner so that the target object is displayed differently from the other objects in the display area.
Specifically, the control component 302 is further configured to enlarge the display size of the target object so that it differs from the display sizes of the other objects displayed in the display area; or cause the target object to jump (for example, bounce or flash) so that its display effect differs from that of the other objects displayed in the display area; or increase the display brightness of the target object so that it differs from the display brightness of the other objects displayed in the display area; or add a background color to the target object so that its background color differs from the background colors of the other objects displayed in the display area.
In a preferred solution, the electronic device further includes a detection component 303.
The detection component 303 is configured to detect a first operation directed at the first electronic device in a first writing area of the first electronic device, the first writing area being different from the display area; or to detect a first operation directed at the first electronic device in a second writing area of the first electronic device, the second writing area being the same as the display area; or to detect a first operation directed at the first electronic device in a third writing area of a second electronic device, the second electronic device being different from the first electronic device.
Accordingly, the detection component 303 is configured to detect a second operation directed at the first electronic device in the first writing area of the first electronic device, the first writing area being different from the display area; or to detect a second operation directed at the first electronic device in the second writing area of the first electronic device, the second writing area being the same as the display area; or to detect a second operation directed at the first electronic device in the third writing area of the second electronic device, the second electronic device being different from the first electronic device.
In a preferred solution, the detection component 303 is specifically configured to detect the first operation and/or the second operation directed at the first electronic device in a first manner, the first manner including at least one of a radar, an infrared sensor, an image collector, a ranging sensor, and an ultrasonic sensor.
In a preferred solution, the control component 302 is specifically configured to move a cursor in the display area in units of characters/words, based on the movement direction or movement angle information corresponding to the first operation, so as to determine the target object among the display objects in the display area.
In a preferred solution, the electronic device further includes a processing component 304.
Specifically, the processing component 304 is configured to determine the number of times the second operation has been detected, and to determine, in sequence, a target candidate among the N candidates corresponding to the target object based on that count, where N is greater than or equal to 1.
The control component 302 is specifically configured to replace the target object with the target candidate.
It should be noted that the division into program modules illustrated for the electronic device of the above embodiment when locating and replacing the target object is only an example; in practical applications, the processing may be distributed among different program modules as needed, that is, the internal structure of the electronic device may be divided into different program modules to complete all or part of the processing described above. In addition, the electronic device and the control method provided by the above embodiments belong to the same concept; their specific implementation process is described in detail in the method embodiments and is not repeated here.
An embodiment of the present application further provides another electronic device, including a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor, when running the computer program, is configured to execute: if a first operation directed at the first electronic device is detected, selecting a display object in the display area based on the first operation to determine a target object;
if a second operation directed at the target object is detected, replacing the target object based on the second operation;
wherein the space or area in which the first operation and the second operation take place is different from the display area, and the first operation and the second operation are different from each other.
The processor, when running the computer program, is further configured to execute: if a third operation directed at the first electronic device is detected, displaying input information in the display area based on the third operation;
wherein the space or area in which the third operation takes place is different from the display area, and the third operation is different from the first operation and the second operation.
The processor, when running the computer program, is further configured to execute: if a fourth operation directed at the first electronic device is detected, controlling the content display of the display area based on the fourth operation;
wherein the space or area in which the fourth operation takes place is different from the display area, and the fourth operation is different from the first operation and the second operation;
or, if it is detected that the first electronic device is switched from a first mode to a second mode, controlling the content display of the display area in the second mode;
wherein the first mode characterizes an object selection and/or object replacement mode, and the second mode characterizes a mouse mode.
The processor, when running the computer program, is further configured to execute: displaying the target object in a first manner such that the target object is displayed differently from the other objects in the display area;
wherein displaying the target object in the first manner includes:
enlarging the display size of the target object so that it differs from the display sizes of the other objects displayed in the display area;
or, causing the target object to jump (for example, bounce or flash) so that its display effect differs from that of the other objects displayed in the display area;
or, increasing the display brightness of the target object so that it differs from the display brightness of the other objects displayed in the display area;
or, adding a background color to the target object so that its background color differs from the background colors of the other objects displayed in the display area.
The processor, when running the computer program, is further configured to execute: detecting the first operation directed at the first electronic device in a first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting the first operation in a second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting the first operation in a third writing area of a second electronic device, the second electronic device being different from the first electronic device;
accordingly, detecting the second operation directed at the first electronic device in the first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting the second operation in the second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting the second operation in the third writing area of the second electronic device, the second electronic device being different from the first electronic device.
The processor, when running the computer program, is further configured to execute: detecting the first operation and/or the second operation directed at the first electronic device in a first manner, the first manner including at least one of a radar, an infrared sensor, an image collector, a ranging sensor, and an ultrasonic sensor.
The processor, when running the computer program, is further configured to execute: moving a cursor in the display area in units of characters/words, based on the movement direction or movement angle information corresponding to the first operation, so as to determine the target object among the display objects in the display area.
The processor, when running the computer program, is further configured to execute: determining the number of times the second operation has been detected;
determining, in sequence, a target candidate among the N candidates corresponding to the target object based on the number of detections of the second operation, where N is greater than or equal to 1;
and replacing the target object with the target candidate.
FIG. 4 is a schematic structural diagram of an electronic device 400 in the present application; the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a message transceiving device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, a television, or the like. The electronic device 400 shown in FIG. 4 includes: at least one processor 401, a memory 402, at least one network interface 404, and a user interface 403. The various components in the electronic device 400 are coupled together by a bus system 405. It is understood that the bus system 405 is used to enable connection and communication between these components. In addition to a data bus, the bus system 405 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 405 in FIG. 4.
The user interface 403 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 402 can be either volatile memory or non-volatile memory, and can include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk memory or tape memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 402 described in the embodiments herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 402 in the embodiments of the present application is used to store various types of data to support the operation of the electronic device 400. Examples of such data include: any computer program for operating on the electronic device 400, such as an operating system 4021 and application programs 4022; contact data; phonebook data; messages; pictures; video; and the like. The operating system 4021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 4022 may include various applications, such as a media player and a browser, for implementing various application services. A program implementing the method of the embodiments of the present application may be included in the application programs 4022.
The method disclosed in the embodiments of the present application may be applied to the processor 401 or implemented by the processor 401. The processor 401 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 401 may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the memory 402; the processor 401 reads the information in the memory 402 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the foregoing methods.
In an exemplary embodiment, the present application further provides a computer-readable storage medium, for example the memory 402 comprising a computer program, which is executable by the processor 401 of the electronic device 400 to perform the steps of the foregoing methods. The computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a Flash Memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any device including one of, or any combination of, the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs: if a first operation for the first electronic device is detected, selecting a display object of the display area based on the first operation to determine a target object;
if a second operation for the target object is detected, replacing the target object based on the second operation;
wherein the space or area in which the first operation and the second operation are performed is different from the display area, and the first operation is different from the second operation (a code sketch of this flow follows).
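As a minimal sketch of this select-then-replace flow, the following Python fragment dispatches detected operations to a selection step and a replacement step; the operation names, the Display container, and the payload fields are illustrative assumptions for this sketch, not part of the disclosed embodiment.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Display:
    objects: list                       # display objects, e.g. recognized words
    target_index: Optional[int] = None  # index of the current target object

def handle_operation(display: Display, operation: str, payload: dict) -> None:
    # Operations arrive from outside the display area (a writing pad or a
    # gesture space), so only their type and parameters are available here.
    if operation == "select":       # first operation: determine the target object
        display.target_index = payload["index"]
    elif operation == "replace":    # second operation: replace the target object
        if display.target_index is not None:
            display.objects[display.target_index] = payload["candidate"]

# Usage: select the second word, then replace it with a candidate.
d = Display(objects=["he", "word", "world"])
handle_operation(d, "select", {"index": 1})
handle_operation(d, "replace", {"candidate": "hello"})
print(d.objects)  # ['he', 'hello', 'world']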
The computer program, when executed by the processor, further performs: if a third operation for the first electronic device is detected, displaying input information in the display area based on the third operation;
wherein the space or area in which the third operation is performed is different from the display area, and the third operation is different from both the first operation and the second operation.
The computer program, when executed by the processor, further performs: if a fourth operation for the first electronic device is detected, controlling the content display of the display area based on the fourth operation;
wherein the space or area in which the fourth operation is performed is different from the display area, and the fourth operation is different from both the first operation and the second operation;
or, if it is detected that the first electronic device is switched from a first mode to a second mode, controlling the content display of the display area in the second mode;
wherein the first mode characterizes an object selection and/or object replacement mode, and the second mode characterizes a mouse mode (a sketch of this mode switch follows).
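A minimal sketch of the mode switch, under the assumption that the two modes can be modeled as an enum and that a hypothetical on_mode_switch hook is invoked on each switch (both assumptions for illustration only):

from enum import Enum, auto

class InputMode(Enum):
    OBJECT_EDIT = auto()  # first mode: object selection and/or replacement
    MOUSE = auto()        # second mode: the off-screen input acts as a mouse

def on_mode_switch(new_mode: InputMode) -> None:
    # In the second mode, the same off-screen input controls the content
    # display (pointer movement, scrolling) instead of editing objects.
    if new_mode is InputMode.MOUSE:
        print("controlling content display in mouse mode")
    else:
        print("selecting/replacing display objects")

on_mode_switch(InputMode.MOUSE)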
The computer program, when executed by the processor, further performs: displaying the target object in a first manner, such that the target object is displayed differently from the other objects in the display area (see the sketch following this list);
wherein displaying the target object in the first manner comprises:
enlarging the display size of the target object, so that the display size of the target object is different from the display sizes of the other objects displayed in the display area;
or, causing the target object to jump, so that the display effect of the target object is different from the display effect of the other objects displayed in the display area;
or, increasing the display brightness of the target object, so that the display brightness of the target object is different from the display brightness of the other objects displayed in the display area;
or, adding a background color to the target object, so that the background color of the target object is different from the background colors of the other objects displayed in the display area.
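The four manners can be sketched as style changes applied to the target object only, so that it renders differently from the other display objects; the Style fields and the concrete values below are illustrative assumptions:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Style:
    scale: float = 1.0                # relative display size
    jumping: bool = False             # jump/bounce animation flag
    brightness: float = 1.0           # relative display brightness
    background: Optional[str] = None  # background color, e.g. "#FFEE88"

def highlight(style: Style, manner: str) -> Style:
    # Apply exactly one of the four first-manner effects to the target object.
    if manner == "enlarge":
        style.scale = 1.5
    elif manner == "jump":
        style.jumping = True
    elif manner == "brighten":
        style.brightness = 1.4
    elif manner == "background":
        style.background = "#FFEE88"
    return style

# Usage: the target object is enlarged; other objects keep the default Style.
print(highlight(Style(), "enlarge"))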
The computer program, when executed by the processor, further performs: detecting a first operation for the first electronic device in a first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting a first operation for the first electronic device in a second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting a first operation for the first electronic device in a third writing area of a second electronic device, the second electronic device being different from the first electronic device;
and correspondingly, detecting a second operation for the first electronic device in the first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting a second operation for the first electronic device in the second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting a second operation for the first electronic device in the third writing area of the second electronic device, the second electronic device being different from the first electronic device.
The computer program, when executed by the processor, further performs: detecting the first operation and/or the second operation for the first electronic device in a first manner, the first manner including at least one of a radar, an infrared sensor, an image collector, a distance-measuring sensor, and an ultrasonic sensor.
The computer program, when executed by the processor, further performs: moving a cursor in the display area in units of characters/words based on movement direction or movement angle information corresponding to the first operation, so as to determine a target object among the display objects in the display area.
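One way to realize this step is sketched below in Python, assuming the first operation is reported as a movement angle in degrees and the display objects are a list of words; both assumptions, and the angle thresholds, are illustrative:

def move_cursor(cursor: int, angle_deg: float, word_count: int) -> int:
    # Map the angle of the detected first operation to a one-word cursor
    # step: strokes near 0 degrees move right, strokes near 180 move left.
    angle = angle_deg % 360.0
    if angle <= 45.0 or angle >= 315.0:
        cursor += 1                  # move right by one word
    elif 135.0 <= angle <= 225.0:
        cursor -= 1                  # move left by one word
    return max(0, min(cursor, word_count - 1))  # clamp to valid positions

# Usage: starting at word 0, a roughly rightward stroke selects word 1.
print(move_cursor(0, 10.0, 3))  # 1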
The computer program, when executed by the processor, further performs: determining the number of times the second operation has been detected;
determining, in sequence, a target candidate object among N candidate objects corresponding to the target object based on the number of detections, wherein N is greater than or equal to 1; and
replacing the target object with the target candidate object (a sketch of this candidate cycling follows).
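This candidate cycling can be sketched as follows: each detection of the second operation advances, in order, to the next of the N candidate objects, wrapping around; the candidate list itself is an illustrative assumption:

def pick_candidate(candidates: list, detection_count: int) -> str:
    # detection_count counts from 1; repeated detections of the second
    # operation walk through the N candidates in sequence and wrap around.
    n = len(candidates)  # N >= 1
    return candidates[(detection_count - 1) % n]

# Usage: three detections step through the candidates for the target object.
candidates = ["hello", "hallo", "hollow"]
for count in (1, 2, 3):
    print(pick_candidate(candidates, count))  # hello, hallo, hollow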
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A control method, applied to a first electronic device having a display area, the control method comprising:
if a first operation for the first electronic device is detected, selecting a display object of the display area based on the first operation to determine a target object;
if a second operation for the target object is detected, replacing the target object based on the second operation;
wherein the space or area in which the first operation and the second operation are performed is different from the display area, and the first operation is different from the second operation.
2. The method of claim 1, further comprising:
if a third operation for the first electronic device is detected, displaying input information in the display area based on the third operation;
wherein the space or area in which the third operation is performed is different from the display area, and the third operation is different from both the first operation and the second operation.
3. The method of claim 1, further comprising:
if a fourth operation for the first electronic device is detected, controlling the content display of the display area based on the fourth operation;
wherein the space or area in which the fourth operation is performed is different from the display area, and the fourth operation is different from both the first operation and the second operation;
or, if it is detected that the first electronic device is switched from a first mode to a second mode, controlling the content display of the display area in the second mode;
wherein the first mode characterizes an object selection and/or object replacement mode, and the second mode characterizes a mouse mode.
4. The method of claim 1, further comprising:
displaying the target object in a first manner, such that the target object is displayed differently from the other objects in the display area;
wherein displaying the target object in the first manner comprises:
enlarging the display size of the target object, so that the display size of the target object is different from the display sizes of the other objects displayed in the display area;
or, causing the target object to jump, so that the display effect of the target object is different from the display effect of the other objects displayed in the display area;
or, increasing the display brightness of the target object, so that the display brightness of the target object is different from the display brightness of the other objects displayed in the display area;
or, adding a background color to the target object, so that the background color of the target object is different from the background colors of the other objects displayed in the display area.
5. The method of claim 1, wherein detecting a first operation for the first electronic device comprises:
detecting a first operation for the first electronic device in a first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting a first operation for the first electronic device in a second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting a first operation for the first electronic device in a third writing area of a second electronic device, the second electronic device being different from the first electronic device;
and correspondingly, detecting a second operation for the first electronic device comprises:
detecting a second operation for the first electronic device in the first writing area of the first electronic device, the first writing area being different from the display area;
or, detecting a second operation for the first electronic device in the second writing area of the first electronic device, the second writing area being the same as the display area;
or, detecting a second operation for the first electronic device in the third writing area of the second electronic device, the second electronic device being different from the first electronic device.
6. The method of claim 1, further comprising:
detecting the first operation and/or the second operation for the first electronic device in a first manner, the first manner including at least one of a radar, an infrared sensor, an image collector, a distance-measuring sensor, and an ultrasonic sensor.
7. The method of claim 1, wherein selecting a display object of the display area based on the first operation to determine a target object comprises:
moving a cursor in the display area in units of characters/words based on movement direction or movement angle information corresponding to the first operation, so as to determine a target object among the display objects in the display area.
8. The method of claim 1, wherein replacing the target object based on the second operation comprises:
determining the number of times the second operation has been detected;
determining, in sequence, a target candidate object among N candidate objects corresponding to the target object based on the number of detections, wherein N is greater than or equal to 1; and
replacing the target object with the target candidate object.
9. An electronic device, comprising at least:
a display screen having a display area; and
a control component configured to: if a first operation for the electronic device is detected, select a display object of the display area based on the first operation to determine a target object; and if a second operation for the target object is detected, replace the target object based on the second operation;
wherein the space or area in which the first operation and the second operation are performed is different from the display area, and the first operation is different from the second operation.
10. The electronic device of claim 9, wherein the control component comprises at least:
a handwriting board having a writing area,
wherein the control component is specifically configured to: when a first operation for the writing area is detected, select a display object of the display area based on the first operation to determine a target object; and when a second operation for the writing area is detected, replace the target object based on the second operation;
and/or a gesture recognition sensor configured to collect, in a recognition collection area, gestures for the electronic device,
wherein the control component is specifically configured to: when it is detected that the gesture is a first operation for the electronic device, select a display object of the display area based on the first operation to determine a target object; and when it is detected that the gesture is a second operation for the target object, replace the target object based on the second operation.
CN202011578675.6A 2020-12-28 2020-12-28 Control method and device Active CN112698739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011578675.6A CN112698739B (en) 2020-12-28 2020-12-28 Control method and device

Publications (2)

Publication Number Publication Date
CN112698739A true CN112698739A (en) 2021-04-23
CN112698739B CN112698739B (en) 2023-03-21

Family

ID=75512723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011578675.6A Active CN112698739B (en) 2020-12-28 2020-12-28 Control method and device

Country Status (1)

Country Link
CN (1) CN112698739B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100287486A1 (en) * 2009-05-07 2010-11-11 Microsoft Corporation Correction of typographical errors on touch displays
CN104020943A (en) * 2013-02-28 2014-09-03 谷歌公司 Character string replacement
CN104423867A (en) * 2013-09-03 2015-03-18 深圳市世纪光速信息技术有限公司 Character input method and character input device
CN104769530A (en) * 2012-11-02 2015-07-08 谷歌公司 Keyboard gestures for character string replacement
US20160070441A1 (en) * 2014-09-05 2016-03-10 Microsoft Technology Licensing, Llc Display-efficient text entry and editing
CN105446572A (en) * 2014-08-13 2016-03-30 阿里巴巴集团控股有限公司 Text-editing method and device used for screen display device
CN106951165A (en) * 2017-03-30 2017-07-14 维沃移动通信有限公司 A kind of word editing method and mobile terminal
CN108089723A (en) * 2017-12-21 2018-05-29 北京小米移动软件有限公司 Character input method and device
CN109740142A (en) * 2018-04-20 2019-05-10 北京字节跳动网络技术有限公司 A kind of character string error correction method and device
CN110008884A (en) * 2019-03-28 2019-07-12 维沃移动通信有限公司 A kind of literal processing method and terminal
CN111061383A (en) * 2019-12-06 2020-04-24 维沃移动通信有限公司 Character detection method and electronic equipment
CN112015270A (en) * 2020-08-21 2020-12-01 上海擎感智能科技有限公司 Terminal control method, terminal and computer storage medium

Also Published As

Publication number Publication date
CN112698739B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
US8446389B2 (en) Techniques for creating a virtual touchscreen
JP4560062B2 (en) Handwriting determination apparatus, method, and program
Le et al. InfiniTouch: Finger-aware interaction on fully touch sensitive smartphones
US8525776B2 (en) Techniques for controlling operation of a device with a virtual touchscreen
US8466934B2 (en) Touchscreen interface
EP2717120B1 (en) Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US20130120282A1 (en) System and Method for Evaluating Gesture Usability
JP5604279B2 (en) Gesture recognition apparatus, method, program, and computer-readable medium storing the program
CN105339872A (en) Electronic device and method of recognizing input in electronic device
US10346032B2 (en) Controlling display object on display screen
JP4851547B2 (en) Mode setting system
JP6089793B2 (en) Associating handwritten information with documents based on document information
CN109753179B (en) User operation instruction processing method and handwriting reading equipment
CN108027648A (en) The gesture input method and wearable device of a kind of wearable device
WO2017148833A1 (en) Method and system for character insertion in a character string
US20230244379A1 (en) Key function execution method and apparatus, device, and storage medium
CN112596661A (en) Writing track processing method and device and interactive panel
US10146424B2 (en) Display of objects on a touch screen and their selection
US20090284480A1 (en) System and apparatus for a multi-point touch-sensitive sensor user interface using distinct digit identification
US9256360B2 (en) Single touch process to achieve dual touch user interface
CN113515228A (en) Virtual scale display method and related equipment
CN112698739B (en) Control method and device
US10860120B2 (en) Method and system to automatically map physical objects into input devices in real time
US20100245266A1 (en) Handwriting processing apparatus, computer program product, and method
JP2014082605A (en) Information processing apparatus, and method of controlling and program for the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant