CN113204283A - Text input method, text input device, storage medium and electronic equipment - Google Patents

Text input method, text input device, storage medium and electronic equipment

Info

Publication number
CN113204283A
Authority
CN
China
Prior art keywords
text
finger
text input
user
acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110484305.4A
Other languages
Chinese (zh)
Inventor
赵庆浩 (Zhao Qinghao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110484305.4A
Publication of CN113204283A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The application discloses a text input method, a text input device, a storage medium and electronic equipment. The method includes: if a start instruction for text input is received, acquiring a finger joint point of a target finger of the user's hand in an acquisition area, where the acquisition area is the region of real space captured by an acquisition device; acquiring the movement track of the finger joint point in the acquisition area; and, if a stop instruction for text input is received, generating text information based on the movement track. With the method and device, the corresponding text information is obtained by collecting the movement track of the knuckle point in real space, so the user can input text simply by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved.

Description

Text input method, text input device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular to a text input method and apparatus, a storage medium, and an electronic device.
Background
Technologies such as augmented reality (AR) and virtual reality (VR) on electronic devices let users immerse themselves in a virtual environment and interact with the effects generated by the machine or electronic device. Electronic devices such as AR glasses and VR glasses are now increasingly common, and users can input text through them. However, in one existing text input method the user aims a remote controller at a virtual keyboard; the keys are hard to target and the operation steps are cumbersome. In another existing method the user simulates typing on a virtual keyboard with both hands, and the required two-handed coordination increases the complexity of the input operation.
Disclosure of Invention
The embodiments of the application provide a text input method, a text input device, a storage medium and an electronic device, so that a user can input text by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a text input method, where the method includes:
if a start instruction for text input is received, acquiring a finger joint point of a target finger of the user's hand in an acquisition area, wherein the acquisition area is the region of real space captured by an acquisition device;
acquiring a moving track of the finger joint point in the acquisition area;
and if a stop instruction of text input is received, generating text information based on the movement track.
In a second aspect, an embodiment of the present application provides a text input device, including:
the joint point acquisition module is used for acquiring the finger joint point of a target finger of the user's hand in an acquisition area if a start instruction for text input is received, wherein the acquisition area is the region of real space captured by the acquisition device;
the track acquisition module is used for acquiring the moving track of the knuckle points in the acquisition area;
and the text information generating module is used for generating text information based on the movement track if a stop instruction of text input is received.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical solutions provided by some embodiments of the application bring at least the following beneficial effects:
In one or more embodiments of the present application, if a start instruction for text input is received, the finger joint point of a target finger of the user's hand in an acquisition area is acquired, where the acquisition area is the region of real space captured by an acquisition device; the movement track of the finger joint point in the acquisition area is acquired; and if a stop instruction for text input is received, text information is generated based on the movement track. Because the corresponding text information is obtained by collecting the movement track of the knuckle point in real space, the user can input text simply by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a diagram illustrating an example of a text input device collecting finger joints according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a text input method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a text input method according to an embodiment of the present application;
FIG. 3a is a schematic diagram illustrating an example of a start command validation according to an embodiment of the present application;
FIG. 3b is a schematic diagram illustrating an example of similar character acquisition according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a text input device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a text input device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a trajectory acquisition module according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a text information generating module according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an operating system space and a user space provided in an embodiment of the present application;
FIG. 10 is an architectural diagram of the android operating system of FIG. 8;
FIG. 11 is an architectural diagram of the IOS operating system of FIG. 8.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it is noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may alternatively include other steps or elements not listed or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The text input method provided by the embodiments of the application can be realized by a computer program and can run on a text input device based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-type application. The text input device in the embodiments of the application may be an electronic device with AR or VR functions, such as a mobile phone, personal computer, tablet computer, handheld device, vehicle-mounted device or wearable device, for example AR glasses or VR glasses. The text input device can collect the user's actions in real space, so that the user interacts with the virtual scene and obtains an immersive experience. The acquisition area in the embodiments of the application is the region of real space that can be captured by the camera of the text input device; the text input device can acquire the finger joint points of the user's hand in the acquisition area, and the finger joint point referred to in the embodiments of the application is the joint point representing the fingertip of the target finger of the user's hand.
Referring to fig. 1, which provides an exemplary schematic diagram of a text input device collecting finger joint points according to an embodiment of the present application, a text input device such as AR or VR glasses may capture images in the acquisition area through its camera device. When the user's hand appears in the acquisition area, the text input device can identify the finger joint points of the user's hand and obtain the position of each finger joint point. From the collected finger joint point positions, the text input device can then confirm whether a start instruction for text input has been received, and can also determine the text the user writes from the movement of the finger.
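The patent does not name a concrete hand-tracking implementation. Purely as an illustration of the capture step shown in fig. 1, the sketch below obtains per-frame finger joint points from a camera feed, assuming an off-the-shelf detector (MediaPipe Hands) and an OpenCV capture; every function and variable name here is an assumption, not part of the disclosure.

```python
# Minimal sketch: per-frame hand joint points from a camera feed.
# MediaPipe Hands and cv2.VideoCapture are assumed stand-ins for the
# device's own acquisition equipment.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def capture_joint_points(camera_index=0):
    """Yield the normalized (x, y, z) landmarks of one detected hand per frame."""
    cap = cv2.VideoCapture(camera_index)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                hand = results.multi_hand_landmarks[0]
                yield [(lm.x, lm.y, lm.z) for lm in hand.landmark]
    cap.release()
```

In a headset the frames would come from the device's own camera rather than cv2.VideoCapture; the landmark list plays the role of the finger joint points identified in fig. 1.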
The text input method provided by the present application is described in detail below with reference to specific embodiments.
Referring to fig. 2, a schematic flow chart of a text input method according to an embodiment of the present application is provided. As shown in fig. 2, the method of the embodiment of the present application may include the following steps S101-S103.
S101, if a start instruction for text input is received, acquiring the finger joint point of the target finger of the user's hand in the acquisition area.
Specifically, the text input device may collect the gestures and actions of the user in the acquisition area, for example the finger joint points of the user's hand in the acquisition area, and derive the instruction the user wants to convey to the text input device from the change in position of the user's finger joint points; the acquisition area is the region of real space captured by the acquisition device. The user may convey the start instruction for text input to the text input device by means of a specific voice command, a specific gesture, or the like, which prompts the text input device to start text input and instructs it to begin acquiring the text information the user wants to input. If the text input device confirms that the start instruction for text input has been received, it acquires the finger joint point that represents the fingertip of the target finger of the user's hand in the acquisition area, i.e. the fingertip joint point of the target finger. The target finger may be an initial setting of the text input device or may be set by the user in the text input device; for example, the target finger may be set as the index finger, i.e. the second finger from left to right on the user's right hand or the second finger from right to left on the user's left hand. The target finger may also be a finger of the target hand that has a predetermined shape, for example the only upright finger of the target hand.
S102, obtaining the moving track of the finger joint point in the acquisition area.
Specifically, after the user conveys the start instruction to the text input device, the text to be input, such as Chinese characters, letters and other characters, can be written with the target finger in the acquisition area. The text input device captures images in the acquisition area at the acquisition frequency; for example, it may capture 30 frames per second. The text input device likewise collects the moving position of the finger joint point in the acquisition area at the acquisition frequency, and can thus obtain the movement track of the finger joint point in the acquisition area. In other words, the user can write the desired text information directly with a finger in the acquisition area and thereby achieve text input.
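As a rough illustration of step S102, the sketch below samples the fingertip joint point of the target finger at a fixed acquisition frequency and accumulates a timestamped track. The 30 frames per second matches the example above; TrackPoint, INDEX_TIP and the pacing strategy are assumptions made for this sketch.

```python
# Sketch: sample the target fingertip at the acquisition frequency and
# accumulate a timestamped movement track.
import time
from dataclasses import dataclass

INDEX_TIP = 8  # fingertip landmark index of the index finger in a 21-point hand model (assumption)

@dataclass
class TrackPoint:
    t: float  # capture timestamp in seconds
    x: float
    y: float

def sample_trajectory(joint_point_stream, fps=30, stop_predicate=lambda joints: False):
    """Collect fingertip positions until stop_predicate(joints) signals a stop instruction."""
    trajectory = []
    period = 1.0 / fps
    for joints in joint_point_stream:      # e.g. frames yielded by capture_joint_points()
        x, y, _z = joints[INDEX_TIP]
        trajectory.append(TrackPoint(time.time(), x, y))
        if stop_predicate(joints):
            break
        time.sleep(period)                 # crude pacing to the acquisition frequency
    return trajectory
```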
S103, if a stop instruction for text input is received, generating text information based on the movement track.
Specifically, when the user finishes writing, a stop instruction for text input can be conveyed to the text input device through a specific voice command or a specific gesture. When the text input device confirms that the stop instruction has been received, it stops collecting the moving position of the user's finger joint point, obtains the movement track collected between receiving the start instruction and receiving the stop instruction, recognizes the movement track by comparing it with existing characters such as Chinese characters, numbers and letters, and thus obtains the text information the user wants to input. If the user is currently accessing a text, the text information can be input into the text the user is currently accessing; if the user is not accessing a text, the text information may be temporarily stored in the local storage of the text input device, for example in a clipboard.
In the embodiment of the application, if a start instruction for text input is received, the finger joint point of the target finger of the user's hand in the acquisition area is acquired, where the acquisition area is the region of real space captured by the acquisition device; the movement track of the finger joint point in the acquisition area is acquired; and if a stop instruction for text input is received, text information is generated based on the movement track. Because the corresponding text information is obtained by collecting the movement track of the knuckle point in real space, the user can input text simply by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved.
Referring to fig. 3, a schematic flow chart of a text input method according to an embodiment of the present application is provided. As shown in fig. 3, the method of the embodiment of the present application may include the following steps S201-S206.
S201, establishing a coordinate system in the acquisition area, and acquiring finger joint points of each finger of the hand of the user in the acquisition area.
Specifically, the text input device can establish a coordinate system in the acquisition area. Establishing the coordinate system makes it easier to record and acquire each position in the acquisition area: every position has a corresponding coordinate, which makes it convenient for the text input device to record position changes and the distances between positions. For example, the text input device may establish a plane coordinate system with the position of the camera device as the origin, the horizontal direction as the x-axis and the vertical direction as the y-axis; since characters are planar, the text input device only needs to acquire the movement track generated when the user writes with a finger in that plane. Optionally, the text input device may instead establish a three-dimensional coordinate system with the position of the camera device as the origin, the horizontal direction as the x-axis, the vertical direction as the y-axis and the front-back direction as the z-axis; since characters are planar, the text input device may project the acquired movement track onto the plane z = 0 to obtain the text information, as sketched below.
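A small sketch of the optional three-dimensional variant just described: joint positions expressed in the camera-centred coordinate system are projected onto the plane z = 0 before recognition, since the written characters are planar. The type aliases are illustrative only.

```python
# Sketch: project 3-D joint positions onto the writing plane z = 0.
from typing import List, Tuple

Point3D = Tuple[float, float, float]
Point2D = Tuple[float, float]

def project_to_writing_plane(points: List[Point3D]) -> List[Point2D]:
    """Drop the depth component, i.e. project each point onto the plane z = 0."""
    return [(x, y) for x, y, _z in points]
```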
The text input device can acquire the finger joint points of each finger of the user's hand in the acquisition area, obtain the position of each finger joint point in the acquisition area, and obtain its coordinates in the coordinate system.
S202, if it is detected that the connecting line of the finger joint points of the target finger forms a straight line and the connecting lines of the finger joint points of the other fingers of the user's hand do not, determining that a start instruction for text input has been received and acquiring the finger joint point of the target finger of the user's hand in the acquisition area.
Specifically, the text input device may collect the gestures and actions of the user in the acquisition area, for example the finger joint points of the user's hand in the acquisition area, and derive the instruction the user wants to convey to the text input device from the change in position of the user's finger joint points; the acquisition area is the region of real space captured by the acquisition device. The user may convey the start instruction for text input to the text input device by means of a specific voice command, a specific gesture, or the like, which prompts the text input device to start text input and instructs it to begin acquiring the text information the user wants to input. If the text input device detects that the connecting line of the finger joint points of the target finger forms a straight line while the connecting lines of the finger joint points of the other fingers of the user's hand do not, that is, it detects that only the target finger of the user's hand is straight and the other fingers are bent and curled, the text input device determines that a start instruction for text input has been received and can acquire the finger joint point representing the fingertip of the target finger of the user's hand in the acquisition area, i.e. the fingertip joint point of the target finger. The target finger may be an initial setting of the text input device or may be set by the user in the text input device; for example, the target finger may be set as the index finger, i.e. the second finger from left to right on the user's right hand or the second finger from right to left on the user's left hand. The target finger may also be a finger of the target hand that has a predetermined shape, for example the only upright finger of the target hand. Recognizing the start instruction for text input by detecting that the connecting line of the finger joint points of the target finger forms a straight line makes issuing the start instruction simpler for the user, allows one-handed operation suited to more application scenarios, and matches the user's real writing habits; a collinearity test of this kind is sketched below.
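A minimal sketch of such a collinearity test, assuming each finger is given as an ordered list of 2-D joint points from root to tip: the target finger counts as straight when consecutive joint-to-joint segments point in nearly the same direction, and the start instruction is recognized only when no other finger passes the same test. The cosine tolerance and the finger-name keys are assumptions.

```python
# Sketch: recognize the start gesture (only the target finger is straight).
import math
from typing import Dict, Sequence, Tuple

Point = Tuple[float, float]

def is_straight(joints: Sequence[Point], min_cos: float = 0.97) -> bool:
    """True if the connecting line of the joint points is approximately a straight line."""
    segments = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(joints, joints[1:])]
    for u, v in zip(segments, segments[1:]):
        nu, nv = math.hypot(*u), math.hypot(*v)
        if nu == 0 or nv == 0:
            continue
        if (u[0] * v[0] + u[1] * v[1]) / (nu * nv) < min_cos:
            return False
    return True

def start_instruction_received(fingers: Dict[str, Sequence[Point]], target: str = "index") -> bool:
    """fingers maps a finger name to its joint points, root to tip; only the target may be straight."""
    if not is_straight(fingers[target]):
        return False
    return all(not is_straight(pts) for name, pts in fingers.items() if name != target)
```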
Please refer to fig. 3a, which provides an exemplary schematic diagram of start-instruction confirmation for the embodiment of the present application. As shown in (1) of fig. 3a, each finger joint point of the user's hand acquired by the text input device in the acquisition area corresponds to a knuckle of the user's hand; for example, four finger joint points can be acquired for each of the index finger, middle finger, ring finger and little finger of the right hand, and three for the thumb. (2) of fig. 3a shows a specific gesture by which the text input device can confirm that text input is to be started: only the target finger, for example the index finger, is straight while the other fingers are bent and curled, so only the connecting line of the finger joint points of the target finger forms a straight line; the finger joint point representing the fingertip of the target finger is the fingertip joint point of the target finger.
S203, collecting the moving positions of the knuckle point in the acquisition area according to the acquisition frequency, so as to obtain at least one piece of coordinate information corresponding to the moving positions in the coordinate system.
Specifically, the text input device captures images in the acquisition area at the acquisition frequency; for example, it may capture 30 frames per second. After the user conveys the start instruction to the text input device, the text to be input, such as Chinese characters, letters and other characters, can be written with the target finger in the acquisition area. The finger joint point moves continuously as the user writes, and the text input device collects the moving positions of the finger joint point in the acquisition area at the acquisition frequency, obtaining at least one piece of coordinate information corresponding to the moving positions in the coordinate system.
S204, connecting the at least one piece of coordinate information according to the sequence of the acquisition time to obtain the moving track of the knuckle point in the acquisition area.
Specifically, the at least one piece of coordinate information is connected in order from the earliest acquisition time to the latest, which yields the movement track of the knuckle point in the acquisition area, as in the small sketch below.
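Step S204 therefore amounts to sorting the sampled coordinates by capture time and joining them into a polyline; a short sketch, reusing the illustrative TrackPoint type from the earlier sketch:

```python
# Sketch: order the sampled coordinates by capture time to form the movement track.
def build_movement_track(points):
    """Return the (x, y) vertices of the track, ordered from the earliest sample to the latest."""
    return [(p.x, p.y) for p in sorted(points, key=lambda p: p.t)]
```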
S205, if it is detected that the distance between the fingertip joint point and the finger-root joint point of the target finger is greater than a distance threshold, determining that a stop instruction for text input has been received, generating a text image based on the movement track, and recognizing the text image to obtain similar characters.
Specifically, when the user finishes writing, a stop instruction for text input can be conveyed to the text input device through a specific voice command or a specific gesture, and when the text input device confirms that the stop instruction has been received, it can stop collecting the moving position of the user's finger joint point. If the text input device detects that the distance between the fingertip joint point and the joint point representing the root of the target finger, i.e. the finger-root joint point, is greater than the distance threshold, that is, the target finger is no longer straight, the text input device determines that a stop instruction for text input has been received, obtains the movement track collected between receiving the start instruction and receiving the stop instruction to generate a text image, and compares the text image with existing characters, such as Chinese characters, numbers and letters, to obtain the similar characters corresponding to the movement track.
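A sketch of the stop test of step S205, following the wording above: a stop instruction is assumed once the distance between the fingertip joint point and the finger-root joint point of the target finger exceeds the distance threshold. The threshold value is an illustrative assumption in the same units as the coordinate system.

```python
# Sketch: stop-instruction test based on the fingertip-to-finger-root distance,
# mirroring the comparison stated in step S205.
import math

def stop_instruction_received(tip, root, distance_threshold=0.12):
    """tip and root are (x, y) coordinates of the target finger in the acquisition-area coordinate system."""
    return math.dist(tip, root) > distance_threshold
```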
Referring to fig. 3b, which provides an exemplary schematic diagram of similar character acquisition for the embodiment of the present application: for example, if the user wants to input the character "two", the user writes in the acquisition area with a finger following the strokes of "two". The text input device collects the moving position of the knuckle point at the acquisition frequency, obtains the coordinate information of each moving position, and connects the points corresponding to the coordinate information in time order to form the movement track of the knuckle point. The text input device can then generate a text image from the movement track and compare the text image with existing characters, finding that the similar character of the text image is the character "two". A sketch of rasterising the track into such a text image follows.
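The patent only states that a text image is generated from the movement track and compared with existing characters; it does not specify how. The sketch below rasterises the track with Pillow as an assumed implementation detail and leaves the recognizer itself as a placeholder.

```python
# Sketch: rasterise the movement track into a text image for recognition.
from PIL import Image, ImageDraw

def track_to_image(track, size=128, margin=8):
    """Scale the (x, y) movement track into a square canvas and draw it as a single stroke."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    scale = (size - 2 * margin) / span
    pixels = [(margin + (x - min(xs)) * scale, margin + (y - min(ys)) * scale)
              for x, y in track]
    image = Image.new("L", (size, size), color=255)       # white greyscale canvas
    ImageDraw.Draw(image).line(pixels, fill=0, width=3)   # draw the track in black
    return image  # fed to a handwriting recognizer to obtain the similar characters
```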
Optionally, the text input device may also confirm that the stop instruction of the text input is received when detecting that the movement distance of the knuckle point within the preset time is smaller than the preset distance. The preset time and the preset distance may be initial settings of the text input device, or may be set by a user in the text input device.
S206, adding the similar characters into the text.
Specifically, if the user is currently accessing a text, the text input device may input the similar character into the text currently accessed by the user; if the user is not accessing a text, the similar character may be temporarily stored in the local storage of the text input device, for example in a clipboard.
It can be understood that, because the text input device obtains similar characters from the movement track of the user's finger joint point, the user cannot separate the strokes of a character as when writing on paper with a pen, and users' handwriting habits differ, so there may be several similar characters corresponding to the text image acquired by the text input device. For example, the text image in fig. 3b may correspond to the character "two" or to the letter "Z". Therefore, if the text input device recognizes more than one similar character, the user may select a specific one, for example:
if the number of the similar characters is equal to one, the text input device directly adds the similar characters to the text;
if the number of the similar characters is greater than one, the text input device displays the similar characters; the user can select the target character to be input by, for example, tapping it with a finger. When the text input device detects that the user has tapped the target character, it confirms that a character selection instruction for the target character has been received, obtains the target character from the similar characters based on the character selection instruction, and adds the target character to the text, as sketched below.
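A sketch of that selection rule: a single similar character is appended directly, while several candidates are first displayed so the user can pick the target character. The display and tap-detection hooks stand in for the device's UI layer and are assumptions.

```python
# Sketch: append the only candidate, or let the user pick among several.
def add_similar_characters(similar_characters, text, show_candidates, wait_for_selection):
    """Append either the only candidate or the user-selected target character to the text."""
    if len(similar_characters) == 1:
        return text + similar_characters[0]
    show_candidates(similar_characters)                # e.g. render the candidate list in the AR view
    target = wait_for_selection(similar_characters)    # e.g. detect a fingertip tap on one candidate
    return text + target
```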
In the embodiment of the application, a coordinate system is established in the acquisition area and the finger joint points of each finger of the user's hand are acquired; establishing the coordinate system makes it easier to record and acquire positions in the acquisition area. The start instruction for text input is recognized by detecting that the connecting line of the finger joint points of the target finger forms a straight line, so issuing the start instruction is simpler for the user, the device can be operated with one hand to suit more application scenarios, and the interaction matches the user's real writing habits. The finger joint point of the target finger of the user's hand in the acquisition area is then acquired, the moving positions of the finger joint point in the acquisition area and the corresponding coordinate information are collected according to the acquisition frequency, and the stop instruction for text input is recognized by detecting that the distance between the fingertip joint point and the finger-root joint point of the target finger is greater than the distance threshold. A text image is generated based on the movement track and recognized to obtain similar characters, which are added to the text; when the number of similar characters is not equal to one, the target character can be determined according to the user's character selection instruction, which avoids recognition errors caused by differences in writing habits, by the fact that the collected movement track cannot be broken into separate strokes, and the like, and thus improves the user experience. Because the corresponding text information is obtained by collecting the movement track of the knuckle point in real space, the user can input text simply by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved.
The text input device provided by the embodiments of the present application will be described in detail below with reference to fig. 4 to fig. 6. It should be noted that the text input device shown in fig. 4 to fig. 6 is used for executing the methods of the embodiments shown in fig. 2 and fig. 3 of the present application; for convenience of description, only the portions related to the embodiments of the present application are shown. For technical details that are not disclosed, please refer to the embodiments shown in fig. 2 and fig. 3 of the present application.
Referring to fig. 4, a schematic structural diagram of a text input device according to an exemplary embodiment of the present application is shown. The text input device may be implemented as all or part of a device in software, hardware, or a combination of both. The device 1 comprises a joint point acquisition module 11, a track acquisition module 12 and a text information generation module 13.
The joint point acquisition module 11 is configured to acquire the finger joint point of a target finger of the user's hand in an acquisition area if a start instruction for text input is received, where the acquisition area is the region of real space captured by the acquisition device;
the track acquisition module 12 is used for acquiring the moving track of the knuckle point in the acquisition area;
and the text information generating module 13 is configured to generate text information based on the movement track if a stop instruction of text input is received.
In this embodiment, if a start instruction for text input is received, the finger joint point of the target finger of the user's hand in the acquisition area is acquired, where the acquisition area is the region of real space captured by the acquisition device; the movement track of the finger joint point in the acquisition area is acquired; and if a stop instruction for text input is received, text information is generated based on the movement track. Because the corresponding text information is obtained by collecting the movement track of the knuckle point in real space, the user can input text simply by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved.
Referring to fig. 5, a schematic structural diagram of a text input device according to an exemplary embodiment of the present application is shown. The text input device may be implemented as all or part of a device in software, hardware, or a combination of both. The device 1 comprises a joint point acquisition module 11, a track acquisition module 12, a text information generation module 13 and a coordinate system establishment module 14.
The joint point acquisition module 11 is configured to acquire the finger joint point of a target finger of the user's hand in an acquisition area if a start instruction for text input is received, where the acquisition area is the region of real space captured by the acquisition device;
optionally, the joint point collecting module 11 is specifically configured to determine that a start instruction of text input is received and obtain the finger joint points of the target finger of the user hand in the collecting area if it is detected that the connection lines of the finger joint points of the target finger are straight lines and the connection lines of the finger joint points of other fingers in the user hand except the target finger are not straight lines.
The track acquisition module 12 is used for acquiring the moving track of the knuckle point in the acquisition area;
specifically, please refer to fig. 6, which provides a schematic structural diagram of a trajectory acquisition module according to an embodiment of the present application. As shown in fig. 6, the trajectory acquisition module 12 may include:
the coordinate acquisition unit 121 is configured to acquire moving positions of the knuckle points in an acquisition area according to an acquisition frequency, so as to obtain at least one piece of coordinate information corresponding to the moving positions in the coordinate system;
and the track acquisition unit 122 is configured to connect the at least one piece of coordinate information according to the sequence of acquisition time to obtain a moving track of the knuckle point in the acquisition area.
A text information generating module 13, configured to generate text information based on the movement trajectory if a stop instruction of text input is received;
optionally, the text information generating module 13 is specifically configured to determine that a stop instruction of text input is received if it is detected that the distance between the finger joint point and the finger root joint point of the target finger is greater than a distance threshold, and generate the text information based on the movement track.
Optionally, the text information generating module 13 is specifically configured to generate a text image based on the movement track, and identify the text image to obtain similar characters;
adding the similar characters to the text.
Specifically, please refer to fig. 7, which provides a schematic structural diagram of a text information generating module according to an embodiment of the present application. As shown in fig. 7, the text information generating module 13 may include:
a character adding unit 131, configured to add the similar characters to a text if the number of the similar characters is equal to one;
a character selecting unit 132, configured to display the similar characters if the number of the similar characters is greater than one, acquire the target character in the similar characters based on a character selection instruction for the target character, and add the target character to the text.
And the coordinate system establishing module 14 is configured to establish a coordinate system in the acquisition area and acquire finger joint points of each finger of the hand of the user in the acquisition area.
In this embodiment, a coordinate system is established in the acquisition area and the finger joint points of each finger of the user's hand are acquired; establishing the coordinate system makes it easier to record and acquire positions in the acquisition area. The start instruction for text input is recognized by detecting that the connecting line of the finger joint points of the target finger forms a straight line, so issuing the start instruction is simpler for the user, the device can be operated with one hand to suit more application scenarios, and the interaction matches the user's real writing habits. The finger joint point of the target finger of the user's hand in the acquisition area is then acquired, the moving positions of the finger joint point in the acquisition area and the corresponding coordinate information are collected according to the acquisition frequency, and the stop instruction for text input is recognized by detecting that the distance between the fingertip joint point and the finger-root joint point of the target finger is greater than the distance threshold. A text image is generated based on the movement track and recognized to obtain similar characters, which are added to the text; when the number of similar characters is not equal to one, the target character can be determined according to the user's character selection instruction, which avoids recognition errors caused by differences in writing habits, by the fact that the collected movement track cannot be broken into separate strokes, and the like, and thus improves the user experience. Because the corresponding text information is obtained by collecting the movement track of the knuckle point in real space, the user can input text simply by drawing with a finger; the text input steps are simpler, the operation is easier, and the user experience is improved.
It should be noted that, when the text input device provided in the foregoing embodiments executes the text input method, the division into the above functional modules is only used as an example for illustration. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the text input device provided by the above embodiments and the text input method embodiments belong to the same concept; for details of the implementation process, refer to the method embodiments, which are not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the text input method according to the embodiment shown in fig. 2 to 3b, and a specific execution process may refer to a specific description of the embodiment shown in fig. 2 to 3b, which is not described herein again.
The present application further provides a computer program product, where at least one instruction is stored, and the at least one instruction is loaded by the processor and executes the text input method according to the embodiment shown in fig. 2 to fig. 3b, where a specific execution process may refer to a specific description of the embodiment shown in fig. 2 to fig. 3b, and is not described herein again.
Referring to fig. 8, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the electronic device using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 120 and calling data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of a digital signal processor (DSP), a field-programmable gate array (FPGA) and a programmable logic array (PLA). The processor 110 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110 but may instead be implemented by a separate communication chip.
The memory 120 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function or an image playing function), instructions for implementing the various method embodiments described below, and the like. The operating system may be an Android system, including systems developed in depth on the basis of the Android system; an IOS system developed by Apple, including systems developed in depth on the basis of the IOS system; or another system. The data storage area may also store data created by the electronic device during use, such as phone books, audio and video data, and chat log data.
Referring to fig. 9, the memory 120 may be divided into an operating system space, in which an operating system runs, and a user space, in which native and third-party applications run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking the Android system as an example of the operating system, the programs and data stored in the memory 120 are as shown in fig. 10: a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360 and an application layer 380 may be stored in the memory 120, where the Linux kernel layer 320, the system runtime library layer 340 and the application framework layer 360 belong to the operating system space and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for the various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through a number of C/C++ libraries; for example, the SQLite library provides database support, the OpenGL/ES library provides support for 3D drawing, the Webkit library provides support for the browser kernel, and so on. The system runtime library layer 340 also provides the Android runtime library (Android runtime), which mainly provides core libraries that allow developers to write Android applications in the Java language. The application framework layer 360 provides the various APIs that may be used when building applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management and location management, and developers can build their own applications by using these APIs. At least one application runs in the application layer 380; the applications may be native applications that come with the operating system, such as a contacts program, a short message program, a clock program or a camera application, or third-party applications developed by third-party developers, such as a game application, an instant messaging program, a photo beautification program or a text input program.
Taking the IOS system as an example of the operating system, the programs and data stored in the memory 120 are shown in fig. 11. The IOS system includes: a core operating system layer 420 (Core OS Layer), a core services layer 440 (Core Services Layer), a media layer 460 (Media Layer) and a touchable layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers and underlying program frameworks, which provide functionality closer to the hardware for use by the program frameworks in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so on. The media layer 460 provides audio-visual interfaces for applications, such as graphics and image related interfaces, audio technology related interfaces, video technology related interfaces, and the wireless playback (AirPlay) interface of the audio and video transmission technology. The touchable layer 480 provides various common interface-related frameworks for application development and is responsible for the user's touch interaction operations on the electronic device, for example a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging user interface (UI) framework, a user interface UIKit framework, a map framework, and so on.
In the framework illustrated in FIG. 11, the framework associated with most applications includes, but is not limited to: a base framework in the core services layer 440 and a UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types, provides the most basic system services for all applications, and is UI independent. While the class provided by the UIKit framework is a basic library of UI classes for creating touch-based user interfaces, iOS applications can provide UIs based on the UIKit framework, so it provides an infrastructure for applications for building user interfaces, drawing, processing and user interaction events, responding to gestures, and the like.
For the manner and principle of implementing data communication between a third-party application program and the operating system in the IOS system, reference may be made to the Android system; details are not repeated here.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving touch operations of a user on or near the touch display screens by using any suitable object such as a finger, a touch pen, and the like, and displaying user interfaces of various applications. Touch displays are typically provided on the front panel of an electronic device. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above-described figures do not constitute limitations on the electronic devices, which may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components. For example, the electronic device further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In the embodiment of the present application, the main body of execution of each step may be the electronic device described above. Optionally, the execution subject of each step is an operating system of the electronic device. The operating system may be an android system, an IOS system, or another operating system, which is not limited in this embodiment of the present application.
The electronic device of the embodiment of the application can also be provided with a display device, which can be any of various devices capable of implementing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. A user may use the display device on the electronic device to view displayed text, images, video and other information. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (augmented reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace or electronic clothing.
In the electronic device shown in fig. 8, the processor 110 may be configured to invoke a text input application stored in the memory 120 and specifically perform the following operations:
if a start instruction for text input is received, acquiring the finger joint point of a target finger of the user's hand in an acquisition area, wherein the acquisition area is the region of real space captured by an acquisition device;
acquiring a moving track of the finger joint point in the acquisition area;
and if a stop instruction of text input is received, generating text information based on the movement track.
In one embodiment, before executing the step of acquiring the finger joint point of the target finger of the user's hand in the acquisition area if the start instruction of the text input is received, the processor 110 further executes the following operations:
establishing a coordinate system in the acquisition area, and acquiring finger joint points of each finger of the hand of the user in the acquisition area.
In one embodiment, when acquiring the finger joint point of the target finger of the user's hand in the acquisition area upon receiving a start instruction for text input, the processor 110 specifically performs the following operation:
and if it is detected that the connecting line of the finger joint points of the target finger forms a straight line and the connecting lines of the finger joint points of the other fingers of the user's hand do not, determining that a start instruction for text input has been received, and acquiring the finger joint point of the target finger of the user's hand in the acquisition area.
In one embodiment, the processor 110, when executing the acquiring of the movement trajectory of the knuckle point in the acquisition area, specifically performs the following operations:
respectively collecting the moving positions of the knuckle points in a collecting area according to collecting frequency so as to obtain at least one piece of coordinate information corresponding to the moving positions in the coordinate system;
and connecting the at least one piece of coordinate information according to the sequence of the acquisition time to obtain the moving track of the knuckle point in the acquisition area.
In one embodiment, when generating text information based on the movement track upon receiving a stop instruction for text input, the processor 110 specifically performs the following operations:
and if the distance between the finger joint point and the finger root joint point of the target finger is detected to be larger than a distance threshold, determining that a stop instruction of text input is received, and generating text information based on the movement track.
In one embodiment, when the processor 110 generates the text information based on the movement track, the following operations are specifically performed:
generating a text image based on the moving track, and identifying the text image to obtain similar characters;
adding the similar characters to the text.
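For illustration, the movement track could be rasterized into such a text image roughly as follows; the use of Pillow, the image size, and the recognize_characters placeholder (standing in for whatever handwriting-recognition model produces the similar characters) are assumptions of this sketch, not part of the embodiment.

    from PIL import Image, ImageDraw

    def trajectory_to_image(points, size=256, margin=16, stroke=4):
        """Project the 3-D movement track onto the x/y plane and rasterize it
        into a grayscale character image suitable for handwriting recognition."""
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
        scale = (size - 2 * margin) / span

        def to_pixel(p):
            return (margin + (p[0] - min(xs)) * scale,
                    margin + (p[1] - min(ys)) * scale)

        image = Image.new("L", (size, size), color=255)   # white background
        ImageDraw.Draw(image).line([to_pixel(p) for p in points], fill=0, width=stroke)
        return image

    # image = trajectory_to_image(trajectory)
    # candidates = recognize_characters(image)   # hypothetical recognizer returning similar characters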
In one embodiment, when adding the similar characters to the text, the processor 110 specifically performs the following operations:
if the number of similar characters is equal to one, adding the similar character to the text;
if the number of similar characters is greater than one, displaying the similar characters, obtaining the target character from among the similar characters based on a character selection instruction directed at the target character, and adding the target character to the text.
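A sketch of this selection logic, with prompt_user standing in for however the similar characters are displayed and the user's character selection instruction is obtained:

    def add_to_text(text, similar_characters, prompt_user):
        """Append the recognized character to the text, asking the user to choose
        when recognition returned more than one similar character."""
        if not similar_characters:
            return text                            # nothing recognized; leave the text unchanged
        if len(similar_characters) == 1:
            return text + similar_characters[0]
        target = prompt_user(similar_characters)   # display candidates, return the selected one
        return text + target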
In this embodiment, a coordinate system is established in the acquisition area and the finger joint points of each finger of the user's hand are acquired; the coordinate system makes it easier to record and acquire positions within the acquisition area. The start instruction for text input is received by detecting that the line connecting the finger joint points of the target finger is straight, so issuing the start instruction is simpler for the user, can be done with one hand to suit more application scenarios, and matches the user's real-world writing habit. The finger joint points of the target finger of the user's hand in the acquisition area are then acquired, and their positions in the acquisition area, together with the corresponding coordinate information, are sampled at the acquisition frequency. The stop instruction for text input is received by detecting that the distance between the finger joint point and the finger-root joint point of the target finger is greater than the distance threshold. A text image is generated based on the movement track and recognized to obtain similar characters, which are added to the text; when the number of similar characters is greater than one, the target character can be determined according to the user's character selection instruction, avoiding character recognition errors caused by factors such as the user's writing habits and the fact that acquisition of the movement track cannot be broken between strokes, thereby improving the usage effect. By collecting the movement track of the finger joint points in real space, the corresponding text information is obtained, so that the user can input text by drawing with a finger; the text input steps are simpler, the operation is easier, and the usage experience is improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit its scope; the present application is not limited thereto, and equivalent variations and modifications of these embodiments remain within the scope of the present application.

Claims (10)

1. A method of text input, the method comprising:
if a start instruction for text input is received, acquiring finger joint points of a target finger of a user's hand in an acquisition area, wherein the acquisition area is a region of real space captured by an acquisition device;
acquiring a movement track of the finger joint points within the acquisition area;
and if a stop instruction for text input is received, generating text information based on the movement track.
2. The method of claim 1, wherein before acquiring the finger joint points of the target finger of the user's hand in the acquisition area when the start instruction for text input is received, the method further comprises:
establishing a coordinate system within the acquisition area, and acquiring finger joint points of each finger of the user's hand in the acquisition area.
3. The method of claim 2, wherein the acquiring finger joint points of the target finger of the user's hand in the acquisition area if the start instruction for text input is received comprises:
if it is detected that the line connecting the finger joint points of the target finger is straight while the lines connecting the finger joint points of the other fingers of the user's hand are not straight, determining that the start instruction for text input has been received, and acquiring the finger joint points of the target finger of the user's hand in the acquisition area.
4. The method of claim 2, wherein the acquiring a movement track of the finger joint points within the acquisition area comprises:
sampling positions of the finger joint points in the acquisition area at an acquisition frequency to obtain at least one piece of coordinate information corresponding to those positions in the coordinate system;
and connecting the at least one piece of coordinate information in order of acquisition time to obtain the movement track of the finger joint points within the acquisition area.
5. The method according to claim 2, wherein the generating text information based on the movement track if a stop instruction for text input is received comprises:
if it is detected that the distance between the finger joint point and the finger-root joint point of the target finger is greater than a distance threshold, determining that the stop instruction for text input has been received, and generating text information based on the movement track.
6. The method according to claim 1 or 5, wherein the generating text information based on the movement track comprises:
generating a text image based on the movement track, and recognizing the text image to obtain similar characters;
adding the similar characters to the text.
7. The method of claim 6, wherein the adding the similar characters to the text comprises:
if the number of similar characters is equal to one, adding the similar character to the text;
if the number of similar characters is greater than one, displaying the similar characters, obtaining a target character from among the similar characters based on a character selection instruction directed at the target character, and adding the target character to the text.
8. A text input device, the device comprising:
a joint point acquisition module, configured to acquire finger joint points of a target finger of a user's hand in an acquisition area if a start instruction for text input is received, wherein the acquisition area is a region of real space captured by an acquisition device;
a track acquisition module, configured to acquire a movement track of the finger joint points within the acquisition area;
and a text information generation module, configured to generate text information based on the movement track if a stop instruction for text input is received.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN202110484305.4A 2021-04-30 2021-04-30 Text input method, text input device, storage medium and electronic equipment Pending CN113204283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110484305.4A CN113204283A (en) 2021-04-30 2021-04-30 Text input method, text input device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110484305.4A CN113204283A (en) 2021-04-30 2021-04-30 Text input method, text input device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113204283A true CN113204283A (en) 2021-08-03

Family

ID=77028190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110484305.4A Pending CN113204283A (en) 2021-04-30 2021-04-30 Text input method, text input device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113204283A (en)

Similar Documents

Publication Publication Date Title
WO2020147666A1 (en) User interface display method and apparatus, terminal and storage medium
WO2021203821A1 (en) Page manipulation method and device, storage medium, and terminal
TWI453603B (en) Platform independent information handling system, communication method, and computer program product thereof
WO2021190184A1 (en) Remote assistance method and apparatus, and storage medium and terminal
CN111225138A (en) Camera control method and device, storage medium and terminal
EP3454199B1 (en) Method for responding to touch operation and electronic device
US20160350136A1 (en) Assist layer with automated extraction
CN111124668A (en) Memory release method and device, storage medium and terminal
CN113268212A (en) Screen projection method and device, storage medium and electronic equipment
CN113091769A (en) Attitude calibration method and device, storage medium and electronic equipment
CN110702346B (en) Vibration testing method and device, storage medium and terminal
CN113052078A (en) Aerial writing track recognition method and device, storage medium and electronic equipment
CN112788583A (en) Equipment searching method and device, storage medium and electronic equipment
CN112218130A (en) Control method and device for interactive video, storage medium and terminal
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
CN111913614B (en) Multi-picture display control method and device, storage medium and display
CN112995562A (en) Camera calling method and device, storage medium and terminal
CN107562324A (en) The method and terminal of data display control
CN107765858B (en) Method, device, terminal and storage medium for determining face angle
CN113419650A (en) Data moving method and device, storage medium and electronic equipment
CN111859999A (en) Message translation method, device, storage medium and electronic equipment
CN113204283A (en) Text input method, text input device, storage medium and electronic equipment
CN111538997A (en) Image processing method, image processing device, storage medium and terminal
CN111949150A (en) Method and device for controlling peripheral switching, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination