CN114627478A - Method and device for inputting characters and electronic equipment - Google Patents


Info

Publication number
CN114627478A
Authority
CN
China
Prior art keywords
sequence
input
inertial sensor
relative
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210224506.5A
Other languages
Chinese (zh)
Inventor
史元春
喻纯
梁宸
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202210224506.5A priority Critical patent/CN114627478A/en
Publication of CN114627478A publication Critical patent/CN114627478A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method, a device, and electronic equipment for inputting characters. The method comprises the following steps: acquiring a current trajectory input by a user, selecting N input points from the current trajectory, and generating an input sequence; judging whether the input sequence matches a standard sequence according to the distance between the input sequence and the standard sequences in a preset sequence library, where each standard sequence comprises N standard points and corresponds to a character; and taking the standard sequences in the sequence library that match the input sequence as effective sequences, and pushing at least one character corresponding to the effective sequences. By comparing sequences that each contain N points, the method, device, and electronic equipment provided by the embodiments of the invention can more accurately determine which standard sequences the user's current trajectory matches, accurately identify the trajectory, and push characters to the user more accurately.

Description

Method and device for inputting characters and electronic equipment
Technical Field
The invention relates to the field of human-computer interaction, and in particular to a method, a device, an electronic device, and a computer-readable storage medium for inputting characters.
Background
With the development of input technology, wearable devices such as smart rings are gradually entering public view. Input and control based on wearable devices have great application potential in scenarios such as virtual reality, augmented reality, and smart home control.
Current wearable devices can only detect simple command actions; when an input method is simulated on a wearable device in the hope of inputting characters, the user's intention is difficult to detect accurately, and both accuracy and input efficiency are low.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention provide a method, an apparatus, an electronic device, and a computer-readable storage medium for inputting text.
In a first aspect, an embodiment of the present invention provides a method for inputting text, including:
acquiring a current track input by a user, selecting N input points from the current track, and generating an input sequence;
judging whether the input sequence matches a standard sequence according to the distance between the input sequence and the standard sequences in a preset sequence library, where each standard sequence comprises N standard points and corresponds to a character;
and taking the standard sequence matched with the input sequence in the sequence library as an effective sequence, and pushing at least one character corresponding to the effective sequence.
In a second aspect, an embodiment of the present invention further provides a device for inputting a text, including:
the acquisition module is used for acquiring a current track input by a user, selecting N input points from the current track and generating an input sequence;
the judging module is used for judging whether the input sequence matches a standard sequence according to the distance between the input sequence and the standard sequences in a preset sequence library, where each standard sequence comprises N standard points and corresponds to a character;
and the processing module is used for taking the standard sequence matched with the input sequence in the sequence library as an effective sequence and pushing at least one character corresponding to the effective sequence.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a bus, a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor, where the transceiver, the memory, and the processor are connected via the bus, and when the computer program is executed by the processor, the steps in any one of the above methods for inputting words are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the method for inputting words described in any one of the above.
In the method, the apparatus, the electronic device, and the computer-readable storage medium for inputting characters provided by the embodiments of the present invention, N input points are extracted from the current trajectory input by the user to generate an input sequence, and the input sequence is compared with preset standard sequences to determine the characters corresponding to it. By comparing sequences that each contain N points, the method can more accurately determine which standard sequences the user's current trajectory matches, accurately identify the trajectory, and push characters to the user more accurately.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention or in the background art more clearly, the drawings used in the embodiments or the background art are described below.
FIG. 1 is a flow chart illustrating a method for inputting text according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a scenario for inputting text by a user according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a current trajectory of user input provided by embodiments of the present invention;
FIG. 4 is a diagram illustrating pushing text to a user according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for inputting text according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device for executing a method for inputting text according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described below with reference to the drawings.
Fig. 1 shows a flowchart of a method for inputting text according to an embodiment of the present invention. Based on this method, the user can input text by drawing a trajectory. As shown in fig. 1, the method comprises the following steps:
step 101: and acquiring a current track input by a user, selecting N input points from the current track, and generating an input sequence.
In the embodiment of the invention, when the user needs to input characters, a corresponding trajectory can be input through a wearable device; this trajectory is called the current trajectory. For example, FIG. 2 shows a schematic view of a scenario in which the user wears a first inertial sensor and a second inertial sensor, illustrated as two ring-shaped inertial sensors worn on the thumb and the index finger, respectively. When the user needs to input characters, the user can slide along the trajectory corresponding to the characters on a visible or an imagined keyboard: for example, using the thumb as a pointer, the user slides a trajectory on the index finger (e.g., on the first joint of the index finger), thereby completing the operation of inputting the current trajectory.
After the current trajectory is acquired, the embodiment of the present invention decodes it: N input points are selected from the current trajectory, and an input sequence containing the N input points is generated. For example, the current trajectory may be sampled equidistantly to obtain N points on it, i.e., the N input points. Denoting the i-th input point as g_i, the input sequence may be represented as G = {g_1, g_2, …, g_N}.
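As an illustration of the equidistant sampling described above, the following sketch (not the patent's own code) resamples a recorded trajectory into N input points; linear interpolation between recorded points is an assumption:

```python
import math

def resample(track, N):
    """Resample a trajectory (list of (x, y) points) into N >= 2 points
    spaced equidistantly along its arc length, via linear interpolation."""
    # Cumulative arc length at each recorded point.
    dists = [0.0]
    for a, b in zip(track, track[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1]
    out = []
    seg = 0
    for k in range(N):
        target = total * k / (N - 1)  # arc-length position of the k-th sample
        # Advance to the recorded segment containing this arc-length position.
        while seg < len(track) - 2 and dists[seg + 1] < target:
            seg += 1
        span = dists[seg + 1] - dists[seg]
        t = 0.0 if span == 0 else (target - dists[seg]) / span
        (x0, y0), (x1, y1) = track[seg], track[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out
```

For instance, resampling a straight two-unit stroke into three points yields its start, midpoint, and end.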
Optionally, since different users input different trajectories for the same character, in order to identify each user's trajectory more accurately, the embodiment of the present invention adjusts the trajectory input by the user so as to improve the accuracy of subsequent decoding. Specifically, in step 101, "selecting N input points from the current trajectory and generating the input sequence" includes:
step A1: performing telescopic adjustment and/or linear adjustment on N original points (x ', y') selected from the current track based on preset adjustment parameters to generate N input points (x, y) and generate an input sequence; the adjustment parameters are obtained according to the historical behavior data statistics of the user.
Wherein, this flexible adjustment includes: (x, y) ═ x'/σx,y'/σy) (ii) a Wherein σx、σyRespectively obtaining a horizontal input standard error and a vertical input standard error according to historical behavior data statistics of the user.
The linear adjustment includes: determining a covariance matrix cov for the ith letter point cloudiBy fitting the covariance matrix coviCarrying out SVD matrix decomposition to determine a transformation matrix M of the ith letter point cloudiAnd, and:
Figure BDA0003535209570000041
wherein the representation SVD () represents an SVD matrix decomposition; the ith letter point cloud is distributed according to a track containing the ith letter input by a user before, wherein i is 1,2, … and 26; converting the origin point (x ', y') to an input point (x, y), and: (x, y)T=M-1(x',y')T(ii) a Wherein,
Figure BDA0003535209570000042
in the embodiment of the present invention, when a trajectory input by a certain user needs to be identified, behavior data of the user when inputting a character before, that is, historical behavior data may be counted, where the historical behavior data includes multiple trajectories when the user inputs a corresponding letter (for example, letters such as a, b, and c). By counting the historical behavior data, the behavior of the user (such as the accuracy of a certain key stroke, the accuracy or error of the horizontal direction and the vertical direction) can be modeled, and the standard error sigma of the horizontal input of the user can be determinedxAnd the standard error sigma of the longitudinal inputyIt can also be determined that the user input contains the trajectory distribution of the ith letter, and then the ith characterAnd (4) mother point cloud.
The horizontal and vertical input standard errors can be used as adjustment parameters to scale the original points; the adjusted points then serve as the input points: (x, y) = (x'/σ_x, y'/σ_y). Alternatively, the i-th letter point cloud may be used as an adjustment parameter: the transformation matrix M_i of each letter point cloud is determined by SVD (Singular Value Decomposition), the transformation matrices of all letters are averaged to obtain the final global transformation matrix M, and the original points are linearly adjusted based on M; the adjusted input points satisfy (x, y)^T = M^(-1)(x', y')^T.
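The two adjustments can be sketched as follows. This is an illustrative implementation: the patent shows the construction of M_i from the SVD of cov_i only as an image, so the whitening-style choice M_i = U·sqrt(S) below, and the helper names, are assumptions:

```python
import numpy as np

def scaling_adjust(points, sigma_x, sigma_y):
    """Scaling adjustment: (x, y) = (x'/sigma_x, y'/sigma_y)."""
    pts = np.asarray(points, dtype=float)
    return pts / np.array([sigma_x, sigma_y])

def global_transform(letter_clouds):
    """Per-letter transform M_i from the SVD of the letter cloud's
    covariance (assumed here as M_i = U * sqrt(S)), averaged over all
    supplied letters into the global matrix M."""
    Ms = []
    for cloud in letter_clouds:  # cloud: sequence of (x, y) points
        cov = np.cov(np.asarray(cloud, dtype=float).T)
        U, S, _ = np.linalg.svd(cov)
        Ms.append(U @ np.diag(np.sqrt(S)))
    return sum(Ms) / len(Ms)

def linear_adjust(points, M):
    """Linear adjustment: (x, y)^T = M^{-1} (x', y')^T for each point."""
    return (np.linalg.inv(M) @ np.asarray(points, dtype=float).T).T
```

With the identity as the global matrix, the linear adjustment leaves points unchanged, which makes the two steps easy to sanity-check independently.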
Step 102: judge whether the input sequence matches a standard sequence according to the distance between the input sequence and the standard sequences in the preset sequence library, where each standard sequence comprises N standard points and corresponds to a character.
In the embodiment of the invention, trajectories of multiple characters are predetermined; this embodiment calls them standard trajectories. N points, i.e., N standard points, are extracted from each standard trajectory to form a sequence containing the N standard points, i.e., a standard sequence, so that a sequence library containing multiple standard sequences can be generated; the standard sequence or standard trajectory of each character is different. After the input sequence input by the user is acquired, the distance between the input sequence and each standard sequence is calculated in order to compare whether the input sequence matches the standard sequence.
This embodiment adopts a relative-trajectory strategy; that is, both the current trajectory and the preset standard trajectories start from a fixed position. For example, for a QWERTY full keyboard, the trajectories may all be drawn starting from the position of the G key.
Step 103: take the standard sequences in the sequence library that match the input sequence as effective sequences, and push at least one character corresponding to the effective sequences.
If the input sequence matches some standard sequences, those standard sequences can be taken as effective sequences; the characters that the user currently needs to input correspond to the effective sequences, and those characters are pushed to the user for input or selection. The number of effective sequences may be one or more; when pushing, the characters corresponding to all effective sequences may be pushed to the user, or the characters corresponding to only some of the effective sequences may be selected and pushed.
For example, as shown in fig. 3, it is determined by comparison that the sequence of the current trajectory input by the user matches the trajectories of characters such as "think", "number", "jump", "ink", and "imo". These characters can then be pushed and displayed to the user for selection; fig. 4 illustrates a manner of selecting words via a circular disk. The corresponding characters are then input based on the user's selection; for example, if the user inputs a "right" instruction, the user may be considered to need to input "think", and "think" can be added to the text box.
The method for inputting characters provided by the embodiment of the invention extracts N input points from the current trajectory input by the user to generate an input sequence, and compares the input sequence with preset standard sequences to determine the characters corresponding to it. By comparing sequences that each contain N points, the method can more accurately determine which standard sequences the user's current trajectory matches, accurately identify the trajectory, and push characters to the user more accurately.
On the basis of the above embodiment, the step 102 "determining whether the input sequence matches the standard sequence according to the distance between the input sequence and the standard sequence in the preset sequence library" includes:
step B1: determining the distance D between the first i input points in the input sequence and the first j standard points in the standard sequence1(i, j), and a distance D1(i, j) satisfies:
D1(i,j)=d(gi,tj)+min{D1(i,j-1),D1(i-1,j),D1(i-1,j-1)};
wherein, giIn representation of input sequenceI-th input point of, tjDenotes the jth criterion point in the criterion sequence, d (g)i,tj) Represents an input point giAnd the standard point tjI, j equals 1,2, …, N.
Step B2: take the distance D_1(N, N) as the first distance between the input sequence and the standard sequence, and determine that the input sequence matches the standard sequence when the first distance is smaller than a first preset threshold.
In the embodiment of the invention, the input sequence and the standard sequence are both N-dimensional sequences, and when determining the distance between them one could compute the Euclidean distance, the Manhattan distance, and so on. However, since the length of the input trajectory may differ from that of the standard trajectory while N points are taken from each, the input points in the input sequence may not correspond one-to-one to the standard points in the standard sequence. The embodiment of the present invention therefore increases i and j sequentially over 1, 2, …, N, step by step determining the distance D_1(i, j) between the first i input points and the first j standard points until i = j = N, and takes the last determined distance D_1(N, N) as the first distance between the input sequence and the standard sequence.
Here D_1(i, j) = d(g_i, t_j) + min{D_1(i, j-1), D_1(i-1, j), D_1(i-1, j-1)}, and the boundary values D_1(i, 0), D_1(0, j), and D_1(0, 0) are all small values, for example all 0. d(g_i, t_j) denotes the distance between the input point g_i and the standard point t_j; it may be the Euclidean distance, the Manhattan distance, or another two-point distance such as a norm-based distance d(g_i, t_j) = ||g_i - t_j||, which this embodiment does not limit. The first distance between the input sequence and the standard sequence determined in this way better represents the degree of similarity between the two sequences: the smaller the first distance, the closer the two sequences are, the higher their similarity, and the better they match.
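The step-B1 recurrence is the classic dynamic-programming alignment used in dynamic time warping. A minimal sketch, assuming the Euclidean point distance and the zero boundary values mentioned above (the function name is illustrative):

```python
import math

def first_distance(G, T):
    """First distance D1(N, N) between input sequence G and standard
    sequence T (both lists of (x, y) points of length N), using
    D1(i, j) = d(g_i, t_j) + min(D1(i, j-1), D1(i-1, j), D1(i-1, j-1)).
    Boundary values D1(i, 0), D1(0, j) are set to 0 as in the text."""
    N = len(G)
    # (N+1) x (N+1) table; row/column 0 hold the boundary values (all 0).
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            d = math.dist(G[i - 1], T[j - 1])  # Euclidean point distance
            D[i][j] = d + min(D[i][j - 1], D[i - 1][j], D[i - 1][j - 1])
    return D[N][N]
```

Identical sequences yield a first distance of 0, and the distance grows as the two point sequences drift apart, matching the "smaller is more similar" reading above.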
Optionally, although the first distance calculated as above can accurately represent whether two sequences match, the calculation is complex and inefficient. The embodiment of the present invention therefore first calculates a simple second distance between the input sequence and the standard sequence, and calculates the first distance only when the second distance satisfies a condition. Specifically, before step B1 of determining the distance D_1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence, the method further includes:
step B3: determining a second distance D between the input sequence G and the standard sequence T2(G, T), and a second distance D2(G, T) satisfies:
Figure BDA0003535209570000071
step B4: at a second distance D2(G, T) is less than a second preset threshold, determining the distance D between the first i input points in the input sequence and the first j standard points in the standard sequence is performed1(i, j).
In the embodiment of the invention, the second distance D_2(G, T) between the input sequence G and each standard sequence T can be calculated quickly according to the formula in step B3. If the second distance D_2(G, T) is smaller than the second preset threshold, it can be preliminarily judged that the input sequence and the standard sequence are similar and may match, and an accurate judgment is then made based on the first distance between the input sequence and the standard sequence.
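A sketch of this two-stage matching follows. The patent's exact D_2 formula is shown only as an image, so the average pointwise Euclidean distance used here as the cheap O(N) pre-filter is an assumption, as are the function names:

```python
import math

def second_distance(G, T):
    """Assumed cheap second distance: the average Euclidean distance
    between corresponding points of the two N-point sequences."""
    return sum(math.dist(g, t) for g, t in zip(G, T)) / len(G)

def candidate_sequences(G, library, threshold):
    """Keep only the standard sequences whose second distance passes the
    second preset threshold; the expensive first distance (step B1) is
    then computed on these candidates only."""
    return [T for T in library if second_distance(G, T) < threshold]
```

The pre-filter discards clearly dissimilar standard sequences in linear time, so the quadratic-time first-distance computation runs on a much smaller candidate set.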
The first preset threshold and the second preset threshold may be preset fixed values, or may be dynamically determined thresholds according to the currently determined first distance or second distance, for example, a certain percentile, so that only a few standard sequences with the smallest distances may be selected.
Optionally, the embodiment of the present invention may further determine which words to push in combination with a Bayesian language model. Step 103, "pushing at least one character corresponding to the effective sequence", includes:
step C1: and determining the probability P (w | context) of the characters w corresponding to the selected effective sequence according to the context input by the user and a preset Bayesian language model.
Step C2: and correcting the probability P (w | context) according to the distance between the input sequence and the effective sequence, and pushing the characters of which the corrected probability is greater than the preset probability value.
In the embodiment of the invention, a Bayesian language model can be trained and set in advance; the Bayesian language model may be a unigram model, a bigram model, and so on. After the effective sequences are determined, the characters corresponding to the effective sequences are taken as candidate characters w, and the probability P(w | context) of currently using the candidate character w is determined based on the context input by the user (generally only the preceding text). Based on the distance between the input sequence and the effective sequence, this probability can be further corrected so as to obtain a more accurate probability value; that is, the corrected probability more accurately predicts how likely the character w is to be selected at present, and one or more characters with high probability can then be pushed to the user.
Specifically, let the input sequence be G; the corrected probability P(w | G; context) can be expressed as: P(w | G; context) = D(G, T_w) × P(w | context), where T_w denotes the effective sequence corresponding to the character w, and α denotes the strength parameter of the Bayesian language model. D(G, T_w) denotes the distance between the input sequence G and the effective sequence T_w, and may be the first distance described above.
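A sketch of the correction-and-push step. The exact way the distance D(G, T_w) and the strength parameter α combine is not fully recoverable from this text, so the sketch assumes the form exp(-D) · P(w | context)^α, with α weighting the language model and the exponential turning the distance into a decreasing score; the function names and thresholds are likewise illustrative:

```python
import math

def push_words(candidates, lm_prob, alpha=0.5, min_prob=1e-4):
    """candidates: list of (word, D(G, T_w)) pairs for the effective
    sequences; lm_prob maps a word to P(w | context).
    Assumed correction: exp(-D) * P(w | context)**alpha, so a smaller
    distance and a higher language-model probability both raise the
    corrected score. Words above min_prob are pushed, best first."""
    pushed = []
    for word, dist in candidates:
        corrected = math.exp(-dist) * lm_prob(word) ** alpha
        if corrected > min_prob:
            pushed.append((word, corrected))
    return sorted(pushed, key=lambda p: p[1], reverse=True)
```

With this shape, a well-matched trajectory can still lose to a slightly worse match whose word is far more probable in context, which is the intended effect of the language-model correction.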
On the basis of the above embodiments, the user may input the current trajectory based on multiple inertial sensors. Moreover, in order to avoid false input, this embodiment may collect the current trajectory input by the user only after the user triggers a pinch action. The embodiment of the invention uses the relative postures among multiple inertial sensors to determine whether the user has triggered a pinch action and to acquire the trajectory input by the user.
Specifically, taking any two inertial sensors (i.e., the first inertial sensor and the second inertial sensor) as an example, Euler angles are used to represent the spatial attitudes of the first and second inertial sensors (i.e., the attitude of each inertial sensor in space, including orientation, rotation, etc.). For any inertial sensor, the Euler angles can be expressed as (φ, θ, ψ), and the corresponding rotation matrix DCM(φ, θ, ψ) for these Euler angles can be expressed as:

[equation image: the 3×3 rotation (direction cosine) matrix DCM(φ, θ, ψ) composed of sines and cosines of the Euler angles]
let MrA rotation matrix representing the first inertial sensor; similarly, the spatial attitude collected by the second inertial sensor can also be expressed by euler's angle, and the rotation matrix of the second inertial sensor is defined as MdThus, the relative attitude matrix M of the first inertial sensor with respect to the second inertial sensorRCan be expressed as:
MR=Mr -1Md
because the first inertial sensor and the second inertial sensor are respectively positioned in different coordinate systems, the arbitrary vector v under the second inertial sensor coordinate system0=(x0,y0,z0)TIt can also be described in terms of the first inertial sensor coordinate system, i.e. the vector v0The coordinates in the first inertial sensor coordinate system may be expressed as: mRv0=Mr -1Mdv0
Thus, for any orthogonal vector pair (e_i, e_j) in the first inertial sensor's coordinate system, the projection (x, y) of any three-dimensional vector v_0 in the second inertial sensor's coordinate system onto the plane determined by (e_i, e_j) satisfies: (x, y)^T = (e_i, e_j)^T M_R v_0. That is, the projection (x, y) can be represented as a linear combination of the parameters of the relative attitude matrix M_R, e.g., x = p · m_R, y = q · m_R. Similarly, the relative position d between the two inertial sensors can also be expressed as a linear combination of the parameters of M_R, e.g., d = r · m_R. Here m_R is the 9-dimensional vector obtained by flattening the relative attitude matrix M_R, p and q are also 9-dimensional vectors, and "·" denotes the dot product. p and q are unknown coefficients; once they are obtained, the projection (x, y) corresponding to any relative attitude matrix M_R can be determined. Similarly, r is an unknown coefficient, hereinafter called the pinch coefficient, and is also a 9-dimensional vector. For example, the 9-dimensional vector m_R obtained from the 3×3 relative attitude matrix M_R can be [a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32, a_33], where a_ij denotes the element in row i, column j of M_R.
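The relative-attitude computations above can be sketched directly. The Z-Y-X Euler convention chosen for DCM below is an assumption, since the patent shows the matrix only as an image; the function names are illustrative:

```python
import numpy as np

def euler_to_dcm(phi, theta, psi):
    """One common Z-Y-X composition for DCM(phi, theta, psi); the
    patent's exact angle convention is not recoverable here."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rz = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cf, -sf], [0.0, sf, cf]])
    return Rz @ Ry @ Rx

def relative_attitude(Mr, Md):
    """M_R = M_r^{-1} M_d."""
    return np.linalg.inv(Mr) @ Md

def flatten(MR):
    """m_R: the 9-dimensional row-major flattening [a11, ..., a33]."""
    return MR.reshape(-1)

def project(MR, v0, ei, ej):
    """(x, y)^T = (e_i, e_j)^T M_R v_0."""
    return np.stack([ei, ej]) @ MR @ v0
```

With the identity relative attitude, the projection onto the first two axes simply reads off the first two coordinates of v_0, which makes the formulas easy to verify.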
As mentioned above, if the relative attitude matrix of the first inertial sensor with respect to the second inertial sensor is M_R, it can be flattened into a 9-dimensional vector m_R, and the position between the two sensors along a certain direction can be represented by a certain coefficient r. For example, if the pinch coefficient along the pinch direction is r, the position between the first and second inertial sensors along the pinch direction can be represented as r · m_R. A loss function can therefore be constructed and minimized to determine the unknown pinch coefficient r. Similarly, the coordinates (x, y) in the projection plane corresponding to a relative attitude can be expressed using certain coefficients p and q.
Specifically, before step 101 of acquiring the current trajectory input by the user, the method further includes a process of determining whether there is a pinch action, comprising steps D1-D2:
Step D1: acquire multiple first current relative postures between the first inertial sensor and the second inertial sensor, determine the movement parameter of the multiple first current relative postures along the pinch direction according to a preset pinch coefficient, and determine whether a pinch action is currently triggered according to the magnitude of the movement parameter; the pinch coefficient indicates the relative position, along the pinch direction, of the relative posture between the first inertial sensor and the second inertial sensor.
Step D2: when the pinch action is triggered, perform the step of acquiring the current trajectory input by the user.
The pinch coefficient is preset as follows:
Acquire multiple valid relative posture groups related to the pinch direction by changing the relative position of the first inertial sensor and the second inertial sensor along the pinch direction multiple times; each valid relative posture group includes a first starting relative posture and a first ending relative posture between the first inertial sensor and the second inertial sensor.
Acquire multiple invalid relative posture groups related to a preset plane by changing the relative position of the first inertial sensor and the second inertial sensor within the preset plane multiple times; each invalid relative posture group includes a second starting relative posture and a second ending relative posture between the first inertial sensor and the second inertial sensor. The pinch direction is perpendicular to the preset plane.
Determine the pinch coefficient in a first loss function by minimizing the first loss function, which is set based on the valid relative posture groups and the invalid relative posture groups; the first loss function is used to indicate that the first starting relative posture and the first ending relative posture involve movement along the pinch direction, while the second starting relative posture and the second ending relative posture involve no movement along the pinch direction.
In the embodiment of the present invention, the first inertial sensor and the second inertial sensor are the two inertial sensors to be identified; each inertial sensor may be a nine-axis attitude sensor comprising a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. The first and second inertial sensors are respectively worn on different fingers of the user. By changing the relative position between the two sensors multiple times, the user's starting relative posture and ending relative posture can be extracted each time and combined into a relative posture group containing both.
The first inertial sensor and the second inertial sensor are respectively worn on different fingers of the user, and the direction along which the two fingers move when pinched together or released is called the pinch direction; accordingly, a plane perpendicular to the pinch direction is selected as the preset plane. For example, the first and second inertial sensors may be worn as rings at the base of the thumb and the base of the index finger of the user's right hand, with the sensor at the thumb base taken as the first inertial sensor and the sensor at the index finger base as the second inertial sensor. The direction between the tip of the thumb and the tip of the index finger can then be taken as the pinch direction, and a preset plane can be selected accordingly; for example, the preset plane may be the plane corresponding to the pad of the index finger.
In the embodiment of the invention, by changing the relative position between the first and second inertial sensors along the pinch direction, one set of relative posture groups, namely the valid relative posture groups, can be collected; by changing the relative position between the two sensors within the preset plane, another set, namely the invalid relative posture groups, can be collected. As those skilled in the art will understand, "invalid relative posture group" means that the group shows little or no change along the pinch direction and is invalid with respect to changes in the pinch direction; it does not mean that the group is completely useless or ineffective.
When a relative attitude group is acquired, the relative attitude between the first inertial sensor and the second inertial sensor is first taken as the starting relative attitude. The relative position of the first inertial sensor and the second inertial sensor is then changed in the kneading direction or within the preset plane; after the process of changing the relative position is finished, the relative attitude between the two sensors is acquired again, and this relative attitude, which corresponds to the end of the position-change process, is called the ending relative attitude in this embodiment. Each time the relative position is changed, a starting relative attitude and an ending relative attitude are thus acquired in succession and taken together as one relative attitude group; by changing the relative position of the first inertial sensor and the second inertial sensor multiple times and repeating the above acquisition process, a plurality of relative attitude groups can be obtained.
Specifically, in acquiring the effective relative posture group, the relative posture between the first inertial sensor and the second inertial sensor before the relative position is changed may be taken as a first start relative posture, and the relative posture between the first inertial sensor and the second inertial sensor after the relative position is changed in the kneading direction may be taken as a first end relative posture. Accordingly, in acquiring the invalid relative posture group, the relative posture between the first inertial sensor and the second inertial sensor before the relative position is changed may be taken as a second starting relative posture, and the relative posture between the first inertial sensor and the second inertial sensor after the relative position is changed along the preset plane may be taken as a second ending relative posture.
For example, taking the case where the first inertial sensor and the second inertial sensor are respectively disposed on the thumb and the index finger, the first starting relative posture is acquired when the thumb and the index finger are in the released (open) state and the first ending relative posture is acquired when they are in the pinched state; alternatively, the first starting relative posture is acquired in the pinched state and the first ending relative posture in the released (open) state. Similarly, by moving the tip of the thumb from the tip of the index finger to the base of the index finger (or from the base to the tip) as one relative position change, a second starting relative posture and a second ending relative posture can be acquired, so that an invalid relative posture group can be determined. The embodiment of the present invention may collect at least 5 valid relative posture groups by repeating the above process multiple times, and may also collect more (e.g., 10, 15, etc.); the same applies to the number of invalid relative posture groups, as long as the kneading coefficient can still be determined. To better determine the kneading coefficient, the first inertial sensor or the second inertial sensor may be moved in different directions within the preset plane while the invalid relative posture groups are acquired.
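The acquisition procedure described above can be sketched as follows — a minimal illustration assuming each inertial sensor reports its orientation as a 3×3 rotation matrix, with the relative attitude taken as R1ᵀR2 and flattened into the 9-dimensional vector used later in this embodiment (function names and the sensor-reading interface are illustrative assumptions):

```python
import numpy as np

def relative_attitude(r1, r2):
    """Relative attitude between two sensors as a 9-dimensional vector.
    r1, r2: 3x3 rotation matrices reported by the first/second sensor."""
    return (r1.T @ r2).reshape(9)

def collect_pose_group(read_first, read_second, wait_for_change):
    """Collect one (start, end) relative attitude group around a position change."""
    start = relative_attitude(read_first(), read_second())
    wait_for_change()  # the user changes the relative position, then stops
    end = relative_attitude(read_first(), read_second())
    return start, end
```

Repeating `collect_pose_group` while the user moves in the kneading direction yields valid groups; repeating it while the user moves within the preset plane yields invalid groups.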
In the embodiment of the present invention, the kneading coefficient can represent the correspondence between the relative posture between the two inertial sensors (first inertial sensor, second inertial sensor) and the positions of the two inertial sensors in the kneading direction, that is, the kneading coefficient can be used to convert the relative posture between the two inertial sensors into the positions of the two inertial sensors in the kneading direction. Based on the above-described process of acquiring the valid relative posture group and the invalid relative posture group, it is known that the process corresponding to the valid relative posture group moves in the kneading direction, so that the first start relative posture and the first end relative posture move in the kneading direction, for example, a difference between a position corresponding to the first start relative posture determined based on the kneading coefficient and a position corresponding to the first end relative posture determined based on the kneading coefficient should not be zero; while the course corresponding to the invalid relative posture group has no movement in the kneading direction, the second start relative posture and the second end relative posture have no movement in the kneading direction, and for example, the difference between the position corresponding to the second start relative posture determined based on the kneading coefficient and the position corresponding to the second end relative posture determined based on the kneading coefficient should be zero.
The first loss function may be solved using a plurality of valid and invalid sets of relative poses to determine a kneading coefficient therein. For example, the kneading coefficient may be solved by a least squares method with a regularization term. Wherein the pinch coefficient r is related to the specific form of the first loss function and the sampling manner in which the set of valid relative poses is sampled. For example, a change in position may be performed from the released state to the kneaded state, thereby enabling a valid set of relative poses to be collected; alternatively, a change in position may be performed from the pinch state to the release state, so that another effective relative posture group can be acquired, the two effective relative posture groups generally having different pinch coefficients r.
After the kneading coefficient is determined, once the relative posture between the first inertial sensor and the second inertial sensor has been determined from the data collected by the two sensors, it can be converted into the position of the two sensors in the kneading direction, and the movement between the two inertial sensors can be determined based on the change across multiple such positions. Specifically, during use, a plurality of newly determined relative postures between the first inertial sensor and the second inertial sensor, that is, current relative postures, may be acquired, and a movement parameter of the two inertial sensors in the kneading direction, which represents their movement in the kneading direction, may then be determined based on the plurality of current relative postures. For example, if the position between the two inertial sensors in the kneading direction becomes smaller, that is, the user triggers the kneading action, the movement parameter is positive; if the position becomes larger, that is, the user triggers the release action, the movement parameter is negative; moreover, the larger the movement of the two inertial sensors in the kneading direction, the larger the absolute value of the movement parameter.
Conversely, whether to trigger a pinch action or a release action may be determined based on the magnitude of the movement parameter. For example, if the movement parameter is positive and sufficiently large, a pinch action may be considered to be triggered; if the movement parameter is negative and its absolute value is large enough, the release action may be considered to be triggered.
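The decision rule above can be sketched as follows; the sign convention follows the preceding paragraph (position shrinking in the kneading direction gives a positive movement parameter), and the thresholds and the dot-product position model are assumptions for illustration:

```python
import numpy as np

def movement_parameter(r, poses):
    """Movement in the kneading direction over a window of current relative
    poses (each a 9-dim vector). Position is taken as the dot product of the
    kneading coefficient r with the pose; positive return value means the
    sensors moved toward each other (pinch), negative means apart (release)."""
    positions = [float(np.dot(r, p)) for p in poses]
    return positions[0] - positions[-1]

def classify(move, pinch_threshold=0.5, release_threshold=-0.5):
    """Map the movement parameter to a triggered action, if any
    (threshold values are illustrative assumptions)."""
    if move >= pinch_threshold:
        return "pinch"
    if move <= release_threshold:
        return "release"
    return None
```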
Optionally, the first loss function may include:
L_r = Σ_{(u_i, v_i) ∈ S_t} (r(u_i − v_i) − k_0)² + Σ_{(x_i, y_i) ∈ S} (r(x_i − y_i))² + λ‖r‖²
wherein S_t represents the set of valid relative attitude groups (u_i, v_i), S represents the set of invalid relative attitude groups (x_i, y_i), u_i represents the first starting relative attitude, v_i represents the first ending relative attitude, x_i represents the second starting relative attitude, y_i represents the second ending relative attitude, r represents the kneading coefficient in the first loss function, λ represents a preset coefficient, and k_0 is a non-zero constant; u_i, v_i, x_i, and y_i are 9-dimensional vectors converted from the corresponding relative attitude matrices.
In the embodiment of the present invention, L_r is the preset first loss function; r is the kneading coefficient in the first loss function, which can express the relationship between the relative attitude between the first inertial sensor and the second inertial sensor and the position in the kneading direction. The first starting relative attitude in a valid relative attitude group, acquired while the first inertial sensor moves relative to the second inertial sensor in the kneading direction, is denoted u_i, and the first ending relative attitude is denoted v_i; the second starting relative attitude in an invalid relative attitude group, acquired while the first inertial sensor moves relative to the second inertial sensor within the preset plane, is denoted x_i, and the second ending relative attitude is denoted y_i. That is, (u_i, v_i) is the i-th valid relative attitude group acquired, and (x_i, y_i) is the i-th invalid relative attitude group acquired; S_t denotes the set of valid relative attitude groups, and S denotes the set of invalid relative attitude groups. u_i, v_i, x_i, and y_i are all 9-dimensional vectors converted from the corresponding relative attitude matrices; k_0 may be 1 or another non-zero constant; λ represents a preset coefficient, for example, λ may be less than 1.
Here k_0 represents the expected change in r(u_i − v_i) as the user goes from the released state to the kneaded state (or from the kneaded state to the released state, depending on the sampling mode in which the valid relative attitude groups are sampled). For example, suppose the valid relative attitude groups are collected as the user goes from the released state to the kneaded state: if k_0 = 1, the position between the two inertial sensors is marked as 0 when the user is in the released state and as 1 when the user is in the kneaded state, and the closer the user's posture is to the kneaded state, the closer the dot product between the kneading coefficient r and the relative attitude is to 1.
In the first loss function L_r of the embodiment of the present invention, the kneading coefficient r and u_i − v_i are combined by a dot product to obtain the amount of position change in the kneading direction. By minimizing the first loss function L_r, the kneading coefficient r is obtained such that the amount of position change r(u_i − v_i) in the kneading direction approaches the preset non-zero constant k_0. Since there is no relative displacement in the kneading direction between the second starting relative attitude x_i and the second ending relative attitude y_i, the term r(x_i − y_i) in L_r, which represents the displacement in the kneading direction corresponding to an invalid relative attitude group, should be close to 0. In addition, to avoid excessively large values in the kneading coefficient r (the kneading coefficient is a 9-dimensional vector containing 9 values), a loss term λ‖r‖² is added to the first loss function L_r. Thus, by minimizing the first loss function, the amount of position change in the kneading direction between the first starting relative attitude u_i and the first ending relative attitude v_i, determined based on the kneading coefficient r, approaches the non-zero constant k_0, while the amount of position change in the kneading direction between the second starting relative attitude x_i and the second ending relative attitude y_i approaches 0, so that the kneading coefficient r that minimizes the first loss function can finally be determined.
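As noted earlier, the kneading coefficient may be solved by least squares with a regularization term; because the first loss function is quadratic in r, the minimizer has a closed (ridge-regression) form. A minimal NumPy sketch, assuming the relative attitude groups are given as (start, end) pairs of 9-dimensional vectors (function name and data layout are illustrative):

```python
import numpy as np

def solve_kneading_coefficient(valid, invalid, k0=1.0, lam=0.1):
    """Closed-form minimizer of
        L_r = sum (r.(u_i - v_i) - k0)^2 + sum (r.(x_i - y_i))^2 + lam*||r||^2.
    valid / invalid: lists of (start, end) pairs of 9-dim pose vectors."""
    rows = [u - v for u, v in valid] + [x - y for x, y in invalid]
    a = np.stack(rows)                                  # design matrix
    b = np.array([k0] * len(valid) + [0.0] * len(invalid))  # targets
    return np.linalg.solve(a.T @ a + lam * np.eye(a.shape[1]), a.T @ b)
```

The returned r then maps any relative pose to a scalar position in the kneading direction via a dot product.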
Optionally, the step 101 "acquiring the current trajectory input by the user" may include:
step E1: acquiring a second current relative attitude between the first inertial sensor and the second inertial sensor, determining a current projection in a preset plane corresponding to the second current relative attitude according to a preset projection coefficient, and generating a current track according to the plurality of current projections; the projection coefficient is used for representing the relation between the relative attitude between the first inertial sensor and the second inertial sensor and the projection in the preset plane;
wherein the projection coefficients are preset by:
Acquiring a plurality of horizontal relative attitude groups related to the horizontal direction by changing the relative positions of the first inertial sensor and the second inertial sensor in the horizontal direction multiple times, wherein the horizontal relative attitude groups comprise a horizontal starting relative attitude and a horizontal ending relative attitude between the first inertial sensor and the second inertial sensor.
Acquiring a plurality of vertical relative attitude groups related to the vertical direction by changing the relative positions of the first inertial sensor and the second inertial sensor in the vertical direction for a plurality of times, wherein the vertical relative attitude groups comprise a vertical starting relative attitude and a vertical ending relative attitude between the first inertial sensor and the second inertial sensor; the horizontal direction is perpendicular to the vertical direction, and the horizontal direction and the vertical direction are both located in a preset plane.
Determining a projection coefficient in a second loss function based on a second loss function which is minimized and preset by the horizontal relative attitude group and the vertical relative attitude group; the second loss function is used for representing the difference between the projection variation of the horizontal starting relative posture and the horizontal ending relative posture on the preset plane and the preset variation in the horizontal direction, and the difference between the projection variation of the vertical starting relative posture and the vertical ending relative posture on the preset plane and the preset variation in the vertical direction.
In the embodiment of the present invention, similarly to the above-described collection of the effective relative posture groups in the kneading direction, the horizontal relative posture group and the vertical relative posture group may be collected in the horizontal direction and the vertical direction, respectively. The horizontal direction and the vertical direction are perpendicular to each other and are both located in the preset plane, that is, the horizontal relative posture group and the vertical relative posture group can be directly used as the invalid relative posture group. The horizontal direction and the vertical direction are two directions perpendicular to each other in a preset plane, where "horizontal" and "vertical" are relative to the preset plane, and the "horizontal direction" and the "vertical direction" are not limited to a direction parallel to the horizontal plane and a direction perpendicular to the horizontal plane. For example, the predetermined plane may be parallel to the horizontal plane, and two perpendicular directions may still be selected from the predetermined plane as the "horizontal direction" and the "vertical direction" in the present embodiment.
For example, in the case where the first inertial sensor and the second inertial sensor are provided on the thumb and the index finger, respectively, the direction between the tip and the base of the index finger may be set as the horizontal direction, and the direction perpendicular to the direction between the tip and the base of the index finger in the predetermined plane may be set as the vertical direction. In the process of collecting the invalid relative posture group, the fingertip of the thumb moves from the index finger fingertip to the index finger base (or from the index finger base to the index finger fingertip), and a horizontal starting relative posture and a horizontal ending relative posture can be collected, so that a horizontal relative posture group is obtained; and the thumb is moved from the uppermost position in the middle of the index finger to the lowermost position in the middle of the index finger along the direction vertical to the index finger, so that the vertical starting relative posture and the vertical ending relative posture can be acquired, and a vertical relative posture group is obtained.
In the embodiment of the present invention, the projection coefficients (denoted p and q below) can represent the relationship between the relative attitude and the projection in a certain plane. Since the sampling above is performed along the horizontal direction and the vertical direction, this embodiment takes the plane determined by the horizontal direction and the vertical direction as the preset plane, and the projection coefficients represent the relationship between the relative attitude and the projection in the preset plane; in this case, because the horizontal direction and the vertical direction are perpendicular, any plane containing them may serve as the preset plane. Based on a horizontal relative attitude group, the projection variation on the preset plane can be expressed with the projection coefficients: for example, let the horizontal direction be the x-axis and the vertical direction be the y-axis, and let the horizontal starting relative attitude be m and the horizontal ending relative attitude be n; then the projection variation of the horizontal relative attitude group in the horizontal direction can be expressed as p(m − n), and its projection variation in the vertical direction as q(m − n). Similarly, the projection variation corresponding to a vertical relative attitude group can also be expressed based on the projection coefficients.
In the embodiment of the present invention, a second loss function is preset, in which projection changes of the horizontal relative posture group and the vertical relative posture group on a preset plane are represented based on the projection coefficient, and a difference between the projection changes and a preset variation is taken as a "loss", that is, the second loss function may represent a difference between a projection variation of the horizontal relative posture group on the preset plane and a preset variation in the horizontal direction, and a difference between a projection variation of the vertical relative posture group on the preset plane and a preset variation in the vertical direction. By minimizing the second loss function, the above-mentioned "loss" can be minimized, and the projection coefficient in the second loss function when the difference is minimized can be determined. The projection coefficients in the second loss function may be determined, for example, by a least squares method.
Optionally, the second loss function in the method comprises:
L_p = Σ_{(m_i, n_i) ∈ S_h} (p(m_i − n_i) − k_1)² + Σ_{(a_i, b_i) ∈ S_v} (p(a_i − b_i))² + λ‖p‖²

L_q = Σ_{(m_i, n_i) ∈ S_h} (q(m_i − n_i))² + Σ_{(a_i, b_i) ∈ S_v} (q(a_i − b_i) − k_2)² + λ‖q‖²
wherein S_h represents the set of horizontal relative attitude groups, S_v represents the set of vertical relative attitude groups, m_i represents the horizontal starting relative attitude, n_i represents the horizontal ending relative attitude, a_i represents the vertical starting relative attitude, b_i represents the vertical ending relative attitude, p and q represent the projection coefficients in the second loss function, λ represents a preset coefficient, and k_1 and k_2 are non-zero constants; m_i, n_i, a_i, and b_i are 9-dimensional vectors converted from the corresponding relative attitude matrices.
In the embodiment of the present invention, L_p and L_q are the preset second loss functions; p and q are the projection coefficients in the second loss function, which can represent the relationship between the relative attitude between the first inertial sensor and the second inertial sensor and the projection in the preset plane, i.e., weight coefficients for the true acquired values. The horizontal starting relative attitude in a horizontal relative attitude group, acquired while the first inertial sensor moves relative to the second inertial sensor in the horizontal direction, is denoted m_i, and the horizontal ending relative attitude is denoted n_i; the vertical starting relative attitude in a vertical relative attitude group, acquired while the first inertial sensor moves relative to the second inertial sensor in the vertical direction, is denoted a_i, and the vertical ending relative attitude is denoted b_i. That is, (m_i, n_i) is the i-th horizontal relative attitude group acquired, and (a_i, b_i) is the i-th vertical relative attitude group acquired; S_h denotes the set of horizontal relative attitude groups, and S_v denotes the set of vertical relative attitude groups. m_i, n_i, a_i, and b_i are 9-dimensional vectors converted from the corresponding relative attitude matrices. k_1 and k_2 are non-zero constants, which may be 1 or other non-zero values, where k_1 corresponds to the preset variation in the horizontal direction and k_2 corresponds to the preset variation in the vertical direction; λ represents a preset coefficient, for example, λ may be less than 1.
In the second loss function L_p of the embodiment of the present invention, the projection coefficient p and m_i − n_i are combined by a dot product to obtain the projection variation in the horizontal direction. By minimizing L_p, the solved projection coefficient p makes the projection variation p(m_i − n_i) in the horizontal direction approach the preset variation k_1 in the horizontal direction. Since there is no relative displacement in the horizontal direction between the vertical starting relative attitude a_i and the vertical ending relative attitude b_i, the dot product p(a_i − b_i) in L_p, which represents the displacement in the horizontal direction corresponding to a vertical relative attitude group, should be close to 0. In addition, to avoid excessively large values in the projection coefficient p (the projection coefficient is a 9-dimensional vector containing 9 values), a loss term λ‖p‖² is added to L_p. Therefore, by minimizing the second loss function, the projection variation on the preset plane between the horizontal starting relative attitude m_i and the horizontal ending relative attitude n_i, determined based on the projection coefficient p, approaches or coincides with the preset variation k_1 in the horizontal direction, while the projection variation on the preset plane between the vertical starting relative attitude a_i and the vertical ending relative attitude b_i approaches 0, so that the projection coefficient p that minimizes the second loss function can finally be determined. Similarly, the projection coefficient q can be determined from the second loss function L_q by the same calculation method, which is not described here again.
In the embodiment of the invention, based on the preset second loss functions L_p and L_q and the plurality of acquired relative attitudes m_i, n_i, a_i, and b_i, the projection coefficients p and q can be determined conveniently and quickly.
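As with the kneading coefficient, both second loss functions are quadratic, so p and q can be solved by regularized least squares in closed form. A minimal sketch, assuming the horizontal and vertical relative attitude groups are (start, end) pairs of 9-dimensional vectors (the helper names and the use of the pair (p·pose, q·pose) as the current projection are illustrative assumptions):

```python
import numpy as np

def ridge(rows, targets, lam):
    """Regularized least squares: argmin ||A w - b||^2 + lam*||w||^2."""
    a = np.stack(rows)
    b = np.asarray(targets, dtype=float)
    return np.linalg.solve(a.T @ a + lam * np.eye(a.shape[1]), a.T @ b)

def solve_projection_coefficients(horizontal, vertical, k1=1.0, k2=1.0, lam=0.1):
    """Minimize L_p and L_q in closed form.
    horizontal / vertical: lists of (start, end) pairs of 9-dim pose vectors."""
    h = [m - n for m, n in horizontal]
    v = [a - b for a, b in vertical]
    p = ridge(h + v, [k1] * len(h) + [0.0] * len(v), lam)
    q = ridge(h + v, [0.0] * len(h) + [k2] * len(v), lam)
    return p, q

def project(p, q, pose):
    """Current projection of one relative pose onto the preset plane."""
    return float(p @ pose), float(q @ pose)
```

Feeding successive current relative poses through `project` yields the sequence of current projections from which the current trajectory is generated.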
According to the embodiment of the invention, by changing the relative positions of the two inertial sensors multiple times, a plurality of valid relative attitude groups, invalid relative attitude groups, horizontal relative attitude groups, and vertical relative attitude groups can be acquired, and by minimizing the preset loss functions over these relative attitude groups, the kneading coefficient, which represents the relationship between the relative attitude and the position in the kneading direction, and the projection coefficients, which represent the relationship between the relative attitude and the projection in the preset plane, can be obtained. After the current relative attitude is acquired in real time, whether a pinch action or a release action exists can be quickly judged, the acquired current relative attitude can be converted in real time into a current projection in the preset plane, and the relative position of the two inertial sensors in space can be represented based on the current projection. The method does not need to directly determine the absolute or relative spatial positions of the inertial sensors and does not need to unify coordinate systems, and it can respond quickly to changes in the relative attitude among the inertial sensors, so that the controlled device can respond quickly to instructions issued by the user through operating the inertial sensors; in addition, the sampling process and the process of determining the kneading coefficient and the projection coefficient are simple, so calibration of the spatial position can be achieved simply and quickly.
The method for inputting characters provided by the embodiment of the invention is described above in detail, and the method can also be implemented by a corresponding device.
Fig. 5 is a schematic structural diagram illustrating an apparatus for inputting text according to an embodiment of the present invention. As shown in fig. 5, the apparatus for inputting characters includes:
the obtaining module 51 is configured to obtain a current trajectory input by a user, select N input points from the current trajectory, and generate an input sequence.
And a judging module 52, configured to judge whether the input sequence matches with the standard sequence according to a distance between the input sequence and the standard sequence in a preset sequence library, where the standard sequence includes N standard points, and each standard sequence corresponds to a corresponding character.
And the processing module 53 is configured to take the standard sequence matched with the input sequence in the sequence library as an effective sequence, and push at least one character corresponding to the effective sequence.
On the basis of the above embodiment, the apparatus further includes: a triggering module;
the triggering module is used for acquiring a plurality of first current relative gestures between the first inertial sensor and the second inertial sensor before acquiring the current track input by the user, determining a movement parameter of the plurality of first current relative gestures in the kneading direction according to a preset kneading coefficient, and determining whether kneading action is triggered currently according to the size of the movement parameter; the kneading coefficient can represent a relative position of a relative attitude between the first inertial sensor and the second inertial sensor in a kneading direction; and in the case of triggering the pinch-in action, executing the step of acquiring the current trajectory input by the user.
Wherein the kneading coefficient is set in advance by:
acquiring a plurality of effective relative attitude groups related to the kneading direction by changing relative positions of the first inertial sensor and the second inertial sensor in the kneading direction a plurality of times, the effective relative attitude groups including a first start relative attitude and a first end relative attitude between the first inertial sensor and the second inertial sensor;
acquiring a plurality of invalid relative attitude groups related to a preset plane by changing relative positions of the first inertial sensor and the second inertial sensor in the preset plane for a plurality of times, wherein the invalid relative attitude groups comprise a second starting relative attitude and a second ending relative attitude between the first inertial sensor and the second inertial sensor; the kneading direction is vertical to the preset plane;
determining a kneading coefficient in a first loss function based on the effective relative posture group and the ineffective relative posture group, wherein the first loss function is minimized; the first loss function is used to represent that there is movement in the kneading direction of a first starting relative posture and the first ending relative posture, and that there is no movement in the kneading direction of a second starting relative posture and the second ending relative posture.
On the basis of the above embodiment, the acquiring module 51 acquires the current trajectory input by the user, including:
acquiring a second current relative attitude between a first inertial sensor and a second inertial sensor, determining a current projection in a preset plane corresponding to the second current relative attitude according to a preset projection coefficient, and generating a current track according to a plurality of current projections; the projection coefficient is used for representing the relation between the relative attitude between the first inertial sensor and the second inertial sensor and the projection in the preset plane;
wherein the projection coefficients are preset by:
acquiring a plurality of horizontal relative attitude groups related to the horizontal direction by changing relative positions of a first inertial sensor and a second inertial sensor in the horizontal direction for a plurality of times, wherein the horizontal relative attitude groups comprise a horizontal starting relative attitude and a horizontal ending relative attitude between the first inertial sensor and the second inertial sensor;
acquiring a plurality of vertical relative attitude groups related to the vertical direction by changing relative positions of the first inertial sensor and the second inertial sensor in the vertical direction a plurality of times, the vertical relative attitude groups including a vertical starting relative attitude and a vertical ending relative attitude between the first inertial sensor and the second inertial sensor; the horizontal direction is perpendicular to the vertical direction, and the horizontal direction and the vertical direction are both located in the preset plane;
determining a projection coefficient in a second loss function based on the horizontal relative attitude group and the vertical relative attitude group, wherein the second loss function is minimized; the second loss function is used for representing the difference between the projection variation of the horizontal starting relative posture and the horizontal ending relative posture on a preset plane and the preset variation in the horizontal direction, and the difference between the projection variation of the vertical starting relative posture and the vertical ending relative posture on the preset plane and the preset variation in the vertical direction.
On the basis of the above embodiment, the obtaining module 51 selects N input points from the current trajectory to generate an input sequence, including:
performing telescopic adjustment and/or linear adjustment on N original points (x', y') selected from the current trajectory based on preset adjustment parameters, so as to generate N input points (x, y) and generate the input sequence; the adjustment parameters are obtained statistically from historical behavior data of the user;
the telescopic adjustment comprises:
(x, y) = (x'/σ_x, y'/σ_y); wherein σ_x and σ_y are respectively the horizontal input standard error and the vertical input standard error obtained statistically from the historical behavior data of the user;
the linear adjustment includes:
determining a covariance matrix covi of the i-th letter point cloud, and performing SVD matrix decomposition on the covariance matrix covi to determine a transformation matrix Mi of the i-th letter point cloud, where:

[equation image: Figure BDA0003535209570000211 — the transformation matrix Mi obtained from the SVD of covi]

where SVD() denotes SVD matrix decomposition; the i-th letter point cloud is the distribution of points in trajectories, previously input by the user, that contain the i-th letter, i = 1, 2, …, 26;
converting the original point (x', y') into the input point (x, y), where: (x, y)^T = M^(-1)(x', y')^T;
where:

[equation image: Figure BDA0003535209570000212 — definition of the matrix M]
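For illustration only, the scaling and linear adjustments above can be sketched in Python. Because the exact transformation formula appears only as an image in the original publication, the form M = U·sqrt(S) from the SVD of the covariance matrix (a standard whitening transform) is an assumption made here, not a statement of the patented formula:

```python
import numpy as np

def linear_adjust(points, cloud):
    """Whiten `points` using the covariance of a letter point cloud.

    cloud: (n, 2) array of points from a user's previous trajectories
    for one letter. Assumes the transformation matrix M = U @ sqrt(S)
    with U, S, Vt = SVD(cov), so that M^-1 maps raw points into a
    space where the cloud has identity covariance.
    """
    cov = np.cov(cloud, rowvar=False)        # 2x2 covariance matrix
    U, S, Vt = np.linalg.svd(cov)            # cov = U @ diag(S) @ Vt
    M = U @ np.diag(np.sqrt(S))              # assumed form of M_i
    M_inv = np.linalg.inv(M)
    # (x, y)^T = M^-1 (x', y')^T, applied row-wise
    return points @ M_inv.T

def scale_adjust(points, sigma_x, sigma_y):
    """Telescopic (scaling) adjustment: (x, y) = (x'/sigma_x, y'/sigma_y)."""
    return points / np.array([sigma_x, sigma_y])
```

The design rationale for such a normalization is that it makes per-letter point clouds comparable under a single distance metric, compensating for each user's input scale and slant.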
on the basis of the above embodiment, the determining module 52 determines whether the input sequence matches with the standard sequence according to the distance between the input sequence and the standard sequence in the preset sequence library, including:
determining the distance D1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence, where the distance D1(i, j) satisfies:
D1(i, j) = d(gi, tj) + min{D1(i, j-1), D1(i-1, j), D1(i-1, j-1)};
where gi denotes the i-th input point in the input sequence, tj denotes the j-th standard point in the standard sequence, d(gi, tj) denotes the distance between the input point gi and the standard point tj, and i, j = 1, 2, …, N;
taking the distance D1(N, N) as a first distance between the input sequence and the standard sequence, and determining that the input sequence matches the standard sequence if the first distance is smaller than a first preset threshold.
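The recurrence for D1(i, j) is the classic dynamic time warping (DTW) recurrence. A minimal sketch, using Euclidean point distance as an assumed choice for d(gi, tj):

```python
from math import hypot, inf

def dtw_distance(G, T):
    """Dynamic time warping distance between point sequences G and T,
    using D(i, j) = d(g_i, t_j) + min(D(i, j-1), D(i-1, j), D(i-1, j-1));
    returns D(len(G), len(T))."""
    n, m = len(G), len(T)
    # D[i][j] holds the cost of aligning the first i points of G
    # with the first j points of T; inf marks unreachable cells.
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = hypot(G[i-1][0] - T[j-1][0], G[i-1][1] - T[j-1][1])
            D[i][j] = d + min(D[i][j-1], D[i-1][j], D[i-1][j-1])
    return D[n][m]
```

Two identical sequences yield a distance of 0, so a small first preset threshold accepts trajectories close to the standard sequence while tolerating local timing differences.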
On the basis of the foregoing embodiment, the determining module 52 is further configured to:
before determining the distance D1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence, determining a second distance D2(G, T) between the input sequence G and the standard sequence T, where the second distance D2(G, T) satisfies:

[equation image: Figure BDA0003535209570000221 — definition of the second distance D2(G, T)]

and performing the step of determining the distance D1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence only if the second distance D2(G, T) is smaller than a second preset threshold.
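For illustration, a cheap pre-filter of this kind can be sketched as follows. Because the formula for D2 appears only as an image in the original publication, the pointwise-sum form used here is an assumption; the idea is simply to run the more expensive D1 computation only on candidates that pass the cheap check:

```python
from math import hypot

def point_distance(g, t):
    """Euclidean distance between two 2-D points."""
    return hypot(g[0] - t[0], g[1] - t[1])

def passes_prefilter(G, T, threshold):
    """Cheap second-distance check before running the D1 recurrence.

    Assumes D2(G, T) = sum of pointwise distances d(g_k, t_k) over the
    N aligned points; the exact formula in the patent is an image.
    """
    d2 = sum(point_distance(g, t) for g, t in zip(G, T))
    return d2 < threshold
```

Filtering the sequence library this way before the quadratic-time D1 computation keeps matching fast even with one standard sequence per character.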
On the basis of the foregoing embodiment, the processing module 53 pushes at least one text corresponding to the valid sequence, including:
determining a probability P(w | context) of selecting the character w corresponding to the effective sequence, according to the context input by the user and a preset Bayesian language model;
and correcting the probability P(w | context) according to the distance between the input sequence and the effective sequence, and pushing the characters whose corrected probability is greater than a preset probability value.
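One way to realize such a correction (a sketch, not necessarily the patented formula) is to weight the language-model probability by a distance likelihood exp(-d/tau) and renormalize; tau is a hypothetical scale parameter introduced here for illustration:

```python
from math import exp

def corrected_scores(lm_probs, distances, tau=1.0):
    """Combine P(w | context) with a trajectory distance d(w).

    score(w) = P(w | context) * exp(-d(w) / tau), then renormalize so
    the scores sum to 1. Smaller distance to the effective sequence
    raises a character's corrected probability; tau is a hypothetical
    scale parameter, not taken from the patent.
    """
    scores = {w: p * exp(-distances[w] / tau) for w, p in lm_probs.items()}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}
```

Characters whose corrected score exceeds the preset probability value would then be pushed to the user as candidates.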
In addition, an embodiment of the present invention further provides an electronic device, which includes a bus, a transceiver, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the transceiver, the memory, and the processor are connected via the bus. When executed by the processor, the computer program implements each process of the above-described method for inputting text and can achieve the same technical effect; to avoid repetition, details are not repeated here.
Specifically, referring to fig. 6, an embodiment of the present invention further provides an electronic device, which includes a bus 1110, a processor 1120, a transceiver 1130, a bus interface 1140, a memory 1150, and a user interface 1160.
In an embodiment of the present invention, the electronic device further includes: a computer program stored on the memory 1150 and executable on the processor 1120, the computer program, when executed by the processor 1120, implementing the processes of the above-described method embodiments of inputting text.
A transceiver 1130 for receiving and transmitting data under the control of the processor 1120.
In embodiments of the invention in which a bus architecture (represented by bus 1110) is used, bus 1110 may include any number of interconnected buses and bridges, with bus 1110 connecting various circuits including one or more processors, represented by processor 1120, and memory, represented by memory 1150.
Bus 1110 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an Accelerated Graphics Port (AGP), a processor bus, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include: an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Processor 1120 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits in hardware or by software instructions in the processor. The processor includes: general-purpose processors, Central Processing Units (CPUs), Network Processors (NPs), Digital Signal Processors (DSPs), Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), Programmable Logic Arrays (PLAs), Micro Control Units (MCUs), other programmable logic devices, discrete gates, transistor logic devices, and discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. For example, the processor may be a single-core or multi-core processor, integrated on a single chip or located on multiple different chips.
Processor 1120 may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly performed by a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), a register, and other readable storage media known in the art. The readable storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The bus 1110 may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, and the bus interface 1140 provides an interface between the bus 1110 and the transceiver 1130. These are well known in the art and are therefore not further described in the embodiments of the present invention.
The transceiver 1130 may be one element or may be multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. For example: the transceiver 1130 receives external data from other devices, and the transceiver 1130 transmits data processed by the processor 1120 to other devices. Depending on the nature of the computer system, a user interface 1160 may also be provided, such as: touch screen, physical keyboard, display, mouse, speaker, microphone, trackball, joystick, stylus.
It is to be appreciated that in embodiments of the invention, the memory 1150 may further include memory located remotely with respect to the processor 1120, which may be coupled to a server via a network. One or more portions of the aforementioned network may be an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), the Internet, a Public Switched Telephone Network (PSTN), a Plain Old Telephone Service (POTS) network, a cellular telephone network, a Wireless Fidelity (Wi-Fi) network, or a combination of two or more of the aforementioned networks. For example, the cellular telephone network and the wireless network may be a Global System for Mobile Communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Worldwide Interoperability for Microwave Access (WiMAX) system, a General Packet Radio Service (GPRS) system, a Wideband Code Division Multiple Access (WCDMA) system, a Long Term Evolution (LTE) system, an LTE Frequency Division Duplex (FDD) system, an LTE Time Division Duplex (TDD) system, a Long Term Evolution-Advanced (LTE-A) system, a Universal Mobile Telecommunications System (UMTS) system, an enhanced Mobile Broadband (eMBB) system, a massive Machine Type Communication (mMTC) system, an Ultra-Reliable Low-Latency Communication (URLLC) system, or the like.
It will be appreciated that the memory 1150 in embodiments of the present invention can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. Wherein the nonvolatile memory includes: Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or Flash Memory.
The volatile memory includes: Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as: Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous DRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1150 of the electronic device described in the embodiments of the invention includes, but is not limited to, the above and any other suitable types of memory.
In an embodiment of the present invention, memory 1150 stores the following elements of operating system 1151 and application programs 1152: an executable module, a data structure, or a subset thereof, or an expanded set thereof.
Specifically, the operating system 1151 includes various system programs such as: a framework layer, a core library layer, a driver layer, etc. for implementing various basic services and processing hardware-based tasks. Applications 1152 include various applications such as: media Player (Media Player), Browser (Browser), for implementing various application services. A program implementing a method of an embodiment of the invention may be included in application program 1152. The application programs 1152 include: applets, objects, components, logic, data structures, and other computer system executable instructions that perform particular tasks or implement particular abstract data types.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements each process of the above method for inputting a text, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The computer-readable storage medium includes: permanent and non-permanent, removable and non-removable media may be tangible devices that retain and store instructions for use by an instruction execution apparatus. The computer-readable storage medium includes: electronic memory devices, magnetic memory devices, optical memory devices, electromagnetic memory devices, semiconductor memory devices, and any suitable combination of the foregoing. The computer-readable storage medium includes: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), non-volatile random access memory (NVRAM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape cartridge storage, magnetic tape disk storage or other magnetic storage devices, memory sticks, mechanically encoded devices (e.g., punched cards or raised structures in a groove having instructions recorded thereon), or any other non-transmission medium useful for storing information that may be accessed by a computing device. As defined in embodiments of the present invention, the computer-readable storage medium does not include transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses traveling through a fiber optic cable), or electrical signals transmitted through a wire.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus, electronic device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to solve the problem to be solved by the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be substantially or partially contributed by the prior art, or all or part of the technical solutions may be embodied in a software product stored in a storage medium and including instructions for causing a computer device (including a personal computer, a server, a data center, or other network devices) to execute all or part of the steps of the methods of the embodiments of the present invention. And the storage medium includes various media that can store the program code as listed in the foregoing.
In the description of the embodiments of the present invention, it should be apparent to those skilled in the art that the embodiments of the present invention may be embodied as methods, apparatuses, electronic devices, and computer-readable storage media. Thus, embodiments of the invention may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), a combination of hardware and software. Furthermore, in some embodiments, embodiments of the invention may also be embodied in the form of a computer program product in one or more computer-readable storage media having computer program code embodied in the medium.
The computer-readable storage media described above may take any combination of one or more computer-readable storage media. The computer-readable storage medium includes: an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer readable storage medium include: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only Memory (ROM), an erasable programmable read-only Memory (EPROM), a Flash Memory, an optical fiber, a compact disc read-only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any combination thereof. In embodiments of the invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, device, or apparatus.
The computer program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including: wireless, wire, fiber optic cable, Radio Frequency (RF), or any suitable combination thereof.
Computer program code for carrying out operations for embodiments of the present invention may be written in assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, integrated circuit configuration data, or in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as C or similar languages. The computer program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer.
The method, the apparatus, and the electronic device of the embodiments of the present invention are described above with reference to flowcharts and/or block diagrams.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions. These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner. Thus, the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The above description is only a specific implementation of the embodiments of the present invention, but the scope of the embodiments of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present invention, and all such changes or substitutions should be covered by the scope of the embodiments of the present invention. Therefore, the protection scope of the embodiments of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for inputting text, comprising:
acquiring a current track input by a user, selecting N input points from the current track, and generating an input sequence;
judging whether the input sequence is matched with the standard sequence according to the distance between the input sequence and the standard sequence in a preset sequence library, wherein the standard sequence comprises N standard points, and each standard sequence corresponds to a corresponding character;
and taking the standard sequence matched with the input sequence in the sequence library as an effective sequence, and pushing at least one character corresponding to the effective sequence.
2. The method of claim 1, prior to said obtaining a current trajectory of user input, further comprising:
acquiring a plurality of first current relative attitudes between a first inertial sensor and a second inertial sensor, determining a movement parameter of the first current relative attitudes in a kneading direction according to a preset kneading coefficient, and determining whether a kneading action is currently triggered according to the magnitude of the movement parameter; the kneading coefficient represents the relation between the relative attitude between the first inertial sensor and the second inertial sensor and the relative position in the kneading direction; and
under the condition of triggering the kneading action, executing the step of acquiring the current track input by the user;
wherein the kneading coefficient is set in advance by:
acquiring a plurality of effective relative attitude groups related to the kneading direction by changing relative positions of the first inertial sensor and the second inertial sensor in the kneading direction a plurality of times, the effective relative attitude groups including a first start relative attitude and a first end relative attitude between the first inertial sensor and the second inertial sensor;
acquiring a plurality of invalid relative attitude groups related to a preset plane by changing relative positions of the first inertial sensor and the second inertial sensor in the preset plane for a plurality of times, wherein the invalid relative attitude groups comprise a second starting relative attitude and a second ending relative attitude between the first inertial sensor and the second inertial sensor; the kneading direction is perpendicular to the preset plane;
determining a kneading coefficient in a first loss function based on the effective relative attitude groups and the invalid relative attitude groups, wherein the first loss function is minimized; the first loss function represents that there is movement in the kneading direction between the first starting relative attitude and the first ending relative attitude, and that there is no movement in the kneading direction between the second starting relative attitude and the second ending relative attitude.
3. The method of claim 1, wherein obtaining the current trajectory of the user input comprises:
acquiring a second current relative attitude between a first inertial sensor and a second inertial sensor, determining a current projection in a preset plane corresponding to the second current relative attitude according to a preset projection coefficient, and generating a current track according to a plurality of current projections; the projection coefficient is used for representing the relation between the relative attitude between the first inertial sensor and the second inertial sensor and the projection in the preset plane;
wherein the projection coefficients are preset by:
acquiring a plurality of horizontal relative attitude groups related to the horizontal direction by changing relative positions of a first inertial sensor and a second inertial sensor in the horizontal direction for a plurality of times, wherein the horizontal relative attitude groups comprise a horizontal starting relative attitude and a horizontal ending relative attitude between the first inertial sensor and the second inertial sensor;
acquiring a plurality of vertical relative attitude groups related to the vertical direction by changing relative positions of the first inertial sensor and the second inertial sensor in the vertical direction a plurality of times, the vertical relative attitude groups including a vertical starting relative attitude and a vertical ending relative attitude between the first inertial sensor and the second inertial sensor; the horizontal direction is perpendicular to the vertical direction, and the horizontal direction and the vertical direction are both positioned in the preset plane;
determining a projection coefficient in a second loss function based on the horizontal relative attitude group and the vertical relative attitude group, wherein the second loss function is minimized; the second loss function is used for representing the difference between the projection variation of the horizontal starting relative posture and the horizontal ending relative posture on a preset plane and the preset variation in the horizontal direction, and the difference between the projection variation of the vertical starting relative posture and the vertical ending relative posture on the preset plane and the preset variation in the vertical direction.
4. The method of claim 1, wherein the selecting N input points from the current trajectory to generate the input sequence comprises:
performing telescopic (scaling) adjustment and/or linear adjustment on the N original points (x', y') selected from the current track based on preset adjustment parameters, to generate N input points (x, y) and generate the input sequence; the adjustment parameters are obtained statistically from the user's historical behavior data;
the telescopic adjustment comprises:
(x, y) = (x'/σx, y'/σy); wherein σx and σy are the horizontal input standard error and the vertical input standard error, respectively, obtained statistically from the user's historical behavior data;
the linear adjustment includes:
determining a covariance matrix covi of the i-th letter point cloud, and performing SVD matrix decomposition on the covariance matrix covi to determine a transformation matrix Mi of the i-th letter point cloud, wherein:

[equation image: Figure FDA0003535209560000031 — the transformation matrix Mi obtained from the SVD of covi]

wherein SVD() denotes SVD matrix decomposition; the i-th letter point cloud is the distribution of points in trajectories, previously input by the user, that contain the i-th letter, i = 1, 2, …, 26;
converting the original point (x', y') into the input point (x, y), wherein: (x, y)^T = M^(-1)(x', y')^T;
wherein:

[equation image: Figure FDA0003535209560000032 — definition of the matrix M]
5. the method according to any one of claims 1 to 4, wherein the determining whether the input sequence matches with a standard sequence in a preset sequence library according to a distance between the input sequence and the standard sequence comprises:
determining a distance D1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence, wherein the distance D1(i, j) satisfies:
D1(i, j) = d(gi, tj) + min{D1(i, j-1), D1(i-1, j), D1(i-1, j-1)};
wherein gi denotes the i-th input point in the input sequence, tj denotes the j-th standard point in the standard sequence, d(gi, tj) denotes the distance between the input point gi and the standard point tj, and i, j = 1, 2, …, N;
taking the distance D1(N, N) as a first distance between the input sequence and the standard sequence, and determining that the input sequence matches the standard sequence if the first distance is smaller than a first preset threshold.
6. The method of claim 5, wherein before the determining of the distance D1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence, the method further comprises:
determining a second distance D2(G, T) between the input sequence G and the standard sequence T, wherein the second distance D2(G, T) satisfies:

[equation image: Figure FDA0003535209560000041 — definition of the second distance D2(G, T)]

wherein the determining of the distance D1(i, j) between the first i input points in the input sequence and the first j standard points in the standard sequence is performed only if the second distance D2(G, T) is less than a second preset threshold.
7. The method of claim 1, wherein pushing at least one text corresponding to the valid sequence comprises:
determining a probability P(w | context) of selecting the character w corresponding to the effective sequence, according to the context input by the user and a preset Bayesian language model;
and correcting the probability P(w | context) according to the distance between the input sequence and the effective sequence, and pushing the characters whose corrected probability is greater than a preset probability value.
8. An apparatus for inputting characters, comprising:
the acquisition module is used for acquiring a current track input by a user, selecting N input points from the current track and generating an input sequence;
the judging module is used for judging whether the input sequence is matched with the standard sequence according to the distance between the input sequence and the standard sequence in a preset sequence library, the standard sequence comprises N standard points, and each standard sequence corresponds to a corresponding character;
and the processing module is used for taking the standard sequence matched with the input sequence in the sequence library as an effective sequence and pushing at least one character corresponding to the effective sequence.
9. An electronic device comprising a bus, a transceiver, a memory, a processor and a computer program stored on the memory and executable on the processor, the transceiver, the memory and the processor being connected via the bus, characterized in that the computer program realizes the steps in the method of inputting words according to any one of claims 1 to 7 when executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of inputting words of any one of claims 1 to 7.
CN202210224506.5A 2022-03-07 2022-03-07 Method and device for inputting characters and electronic equipment Pending CN114627478A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210224506.5A CN114627478A (en) 2022-03-07 2022-03-07 Method and device for inputting characters and electronic equipment


Publications (1)

Publication Number Publication Date
CN114627478A true CN114627478A (en) 2022-06-14

Family

ID=81899658



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955562A (en) * 2011-08-22 2013-03-06 幻音科技(深圳)有限公司 Input method and input system
US20150089435A1 (en) * 2013-09-25 2015-03-26 Microth, Inc. System and method for prediction and recognition of input sequences
CN106843737A (en) * 2017-02-13 2017-06-13 北京新美互通科技有限公司 Text entry method, device and terminal device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN LIANG et al.: "DualRing: Enabling Subtle and Expressive Hand Interaction with Dual IMU Rings", PROC. ACM INTERACT. MOB. WEARABLE UBIQUITOUS TECHNOL., vol. 5, no. 3, 30 September 2021 (2021-09-30), pages 2 - 5 *
CHUN YU et al.: "Tap, Dwell or Gesture?: Exploring Head-Based Text Entry Techniques for HMDs", 2017 ACM, 31 December 2017 (2017-12-31), pages 3 - 4 *
DONALD J. BERNDT et al.: "Using Dynamic Time Warping to Find Patterns in Time Series", AAAI TECHNICAL REPORT WS-94-03, 26 April 1994 (1994-04-26), pages 361 - 363 *

Similar Documents

Publication Publication Date Title
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN108986801B (en) Man-machine interaction method and device and man-machine interaction terminal
Qi et al. Computer vision-based hand gesture recognition for human-robot interaction: a review
CN107428004B (en) Automatic collection and tagging of object data
CN112148128B (en) Real-time gesture recognition method and device and man-machine interaction system
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
CN104731307B (en) A kind of body-sensing action identification method and human-computer interaction device
CN110287775B (en) Palm image clipping method, palm image clipping device, computer equipment and storage medium
CN107993651B (en) Voice recognition method and device, electronic equipment and storage medium
US11886167B2 (en) Method, system, and non-transitory computer-readable recording medium for supporting object control
CN108693958B (en) Gesture recognition method, device and system
CN110633004A (en) Interaction method, device and system based on human body posture estimation
KR20220059194A (en) Method and apparatus of object tracking adaptive to target object
Saraswat et al. An incremental learning based gesture recognition system for consumer devices using edge-fog computing
CN107346207B (en) Dynamic gesture segmentation recognition method based on hidden Markov model
Prasad et al. A wireless dynamic gesture user interface for HCI using hand data glove
CN116306612A (en) Word and sentence generation method and related equipment
Pan et al. Magicinput: Training-free multi-lingual finger input system using data augmentation based on mnists
CN114627478A (en) Method and device for inputting characters and electronic equipment
WO2021056450A1 (en) Method for updating image template, device, and storage medium
KR101869304B1 (en) System, method and program for recognizing sign language
CN113534997B (en) Parameter adjustment method, system and equipment of Kalman filtering model based on residual error
KR20190132885A (en) Apparatus, method and computer program for detecting hand from video
CN114639158A (en) Computer interaction method, apparatus and program product
CN114625249B (en) Method and device for detecting kneading release action and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination