CN114327232A - Full-screen handwriting realization method and electronic equipment

Info

Publication number
CN114327232A
Authority
CN
China
Prior art keywords
input
track
layer
application
target
Legal status
Pending
Application number
CN202111663664.2A
Other languages
Chinese (zh)
Inventor
程涛
史晓岩
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
2021-12-31
Filing date
2021-12-31
Publication date
2022-04-12
Application filed by Lenovo Beijing Ltd
Priority to CN202111663664.2A
Publication of CN114327232A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method for realizing full-screen handwriting and an electronic device, comprising the following steps: determining that a target application is in a text input mode, and, in response to a track input operation of an input device, controlling a virtual screen to acquire an input track, wherein the virtual screen is located on the data transmission path between a driver layer and a framework layer of the electronic device; recognizing the input track to obtain track identification information; and displaying the track identification information to realize the full-screen handwriting function. The application also discloses a corresponding electronic device.

Description

Full-screen handwriting realization method and electronic equipment
Technical Field
The present disclosure relates to the field of full screen handwriting technologies, and in particular, to a method for implementing full screen handwriting and an electronic device.
Background
Full-screen handwriting means that the recognition area of a handwriting input method covers the entire screen, and some existing electronic devices such as mobile phones and tablet computers already support handwriting input. The usual implementation is as follows: a transparent window is created above the application layer; this transparent window receives the movement track of an input device such as a stylus, and handwriting recognition is performed on that basis, thereby realizing the full-screen handwriting function.
However, some application programs always occupy the top-layer window, in which case a full-screen transparent window cannot be created at the top layer and the method cannot be applied, so its universality is poor. Moreover, after an application program on the electronic device is upgraded, the transparent window may no longer be usable, resulting in poor compatibility.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method for implementing full-screen handwriting and an electronic device, which enable the electronic device to implement a full-screen handwriting function, can be applied to various electronic devices and to various application programs on those devices, and have better universality and compatibility.
In a first aspect, an embodiment of the present application provides a method for implementing full-screen handwriting, which is applied to an electronic device, and includes:
determining that a target application is in a text input mode, and, in response to a track input operation of an input device, controlling a virtual screen to acquire an input track, wherein the virtual screen is located on the data transmission path between a driver layer and a framework layer of the electronic device;
identifying the input track to obtain track identification information;
and displaying the track identification information to realize the full-screen handwriting function.
In one possible embodiment, the virtual screen being located on the data transmission path between the driver layer and the framework layer of the electronic device includes:
setting the virtual screen at the driver layer, or setting the virtual screen at the framework layer.
In one possible implementation, the determining that the target application is in a text input mode, and controlling the virtual screen to acquire the input track in response to a track input operation of the input device, includes:
when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device includes a first input device and a second input device supported by the driver layer;
and when the virtual screen acquires the input track, determining that the target input device is the second input device.
In a possible implementation, the first input device is one of a hard keyboard, a soft keyboard and a mouse, and the second input device includes a stylus or a touch pad.
In a possible implementation, the presenting the track identification information includes:
transmitting the track identification information to an application layer of the electronic equipment;
and controlling a target application in the application layer to display the track identification information.
In one possible embodiment, the method further comprises:
determining that the target application is in a gesture mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device;
reporting the input track to the framework layer through the virtual screen, so that the framework layer determines a target instruction corresponding to the input track based on a corresponding relation between the track and the instruction;
controlling the framework layer to transmit the target instruction to the application layer;
and controlling the target application in the application layer to respond to the target instruction.
In one possible embodiment, the method further comprises:
determining that the target application is in a drawing mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device;
reporting the input track to the framework layer through the virtual screen so that the framework layer transmits the input track to the application layer;
and controlling a target application in the application layer to display the input track.
In a second aspect, an embodiment of the present application further provides an electronic device, including:
the virtual screen is positioned on a data transmission path of a driving layer and a framework layer of the electronic equipment;
the control module is configured to determine that the target application is in a text input mode, and control the virtual screen to acquire the input track in response to track input operation of the input device; the virtual screen is positioned on a data transmission path of a driving layer and a framework layer of the electronic equipment;
the identification module is configured to identify the input track to obtain track identification information;
the first display module is configured to display the track identification information to realize a full-screen handwriting function.
In one possible embodiment, the virtual screen is arranged on the driver layer or on the framework layer.
In one possible implementation, the control module is further configured to:
when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device comprises a first input device and a second input device supported by the driver layer,
and when the virtual screen acquires the input track, determining that the target input equipment is the second input equipment.
According to the implementation method of the embodiments of the present application, the input track is acquired through a virtual screen arranged on the data transmission path between the driver layer and the framework layer of the electronic device, the input track is then recognized to obtain track identification information, and the track identification information is displayed to realize the full-screen handwriting function. Since no transparent window needs to be created above the application layer, the method can be applied to various electronic devices and application programs, with better universality and compatibility.
Drawings
In order to more clearly illustrate the technical solutions in the present application or the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart illustrating a method for implementing full screen handwriting provided by the present application;
FIG. 2 illustrates a flow chart of another implementation method provided herein;
FIG. 3 illustrates a flow chart of another implementation method provided herein;
fig. 4 is a schematic structural diagram of an electronic device provided in the present application;
FIG. 5 illustrates a schematic structural diagram of another electronic device provided herein;
fig. 6 shows a schematic structural diagram of another electronic device provided in the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely exemplary of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.
The method for realizing full-screen handwriting is applied to the electronic equipment, can realize the full-screen handwriting function on the electronic equipment, is suitable for various electronic equipment and various application programs on the electronic equipment, and has better universality and compatibility. In order to facilitate understanding of the present application, a full-screen handwriting implementation method provided by the present application is first described in detail.
As shown in fig. 1, a flowchart of a method for implementing full-screen handwriting according to an embodiment of the present application is provided, where the specific steps include S101 to S103.
S101, determining that a target application is in a text input mode, and, in response to a track input operation of an input device, controlling a virtual screen to acquire an input track; the virtual screen is located on the data transmission path between the driver layer and the framework layer of the electronic device.
In a specific implementation, the system of the electronic device includes a driver layer, a framework layer and an application layer, wherein the driver layer is the bottom layer, the application layer is the top layer, and the framework layer lies between them. When a user operates an application program on the electronic device, instructions or responses are transmitted from the driver layer through the framework layer to the application layer, so that the application program runs.
In the embodiments of the present application, a virtual screen is arranged in advance at the driver layer, or at the framework layer; that is, the virtual screen is located on the data transmission path between the driver layer and the framework layer of the electronic device, so the method can be adapted to different types of electronic devices. Because the virtual screen sits below the application layer, the problem that a transparent window cannot be created at the top layer when some applications declare themselves to be top-layer windows is avoided; the virtual screen is therefore suitable for applications with different settings, and even if the underlying controls and/or the applications of the electronic device are upgraded, the use of the virtual screen, the controls and the applications is not affected, so universality and compatibility are good.
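For illustration only, the following Kotlin sketch shows one way such a virtual screen could sit on the driver-to-framework data path. The names (InputEvent, VirtualScreen, onDriverEvent) are hypothetical assumptions and are not taken from the patent; the point is only that every event reported by the driver layer passes through the virtual screen before reaching the framework layer.

```kotlin
// Hypothetical sketch: a virtual screen interposed between the driver layer
// and the framework layer. All names are illustrative assumptions.
data class InputEvent(val x: Float, val y: Float, val pressure: Float = 1.0f)

fun interface FrameworkLayer {
    fun onInputEvent(event: InputEvent)
}

class VirtualScreen(private val framework: FrameworkLayer) {
    private val capturedTrack = mutableListOf<InputEvent>()

    // Called by the driver layer for every raw event on its way up the stack.
    fun onDriverEvent(event: InputEvent) {
        capturedTrack += event            // record the input track
        framework.onInputEvent(event)     // keep forwarding along the normal path
    }

    // Hands the collected track to whatever performs recognition, then resets.
    fun takeTrack(): List<InputEvent> {
        val track = capturedTrack.toList()
        capturedTrack.clear()
        return track
    }
}
```

Because the interception happens below the application layer, this sketch works the same way regardless of which application currently owns the top-layer window.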
In a specific implementation, the virtual screen is used to obtain all input operations on the electronic device, such as the coordinate information of the focus point, where the focus point may be formed by a stylus or by another operating body. Each application program may set different operation modes, and the mode it is currently in may be determined based on the coordinate information of the focus point. As an example, after obtaining the coordinate point information of the focus point, the virtual screen determines whether the coordinate point identified by that information falls within an input coordinate range, where the input coordinate range is the one corresponding to the target application, and different target applications correspond to different input coordinate ranges. When the coordinate point identified by the coordinate point information falls within the input coordinate range, it is determined that the target application is in a text input mode; when it does not fall within the input coordinate range, it is determined that the target application is in a gesture mode, a drawing mode, or the like.
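As a rough illustration of this mode decision, the sketch below checks whether the focus coordinate falls inside the target application's input coordinate range. The InputRange type and the looksLikeGesture flag are assumptions; the patent only states the containment test and that the gesture/drawing distinction may rely on track length or another preset operation.

```kotlin
// Illustrative only: mode selection from the focus coordinate.
enum class Mode { TEXT_INPUT, GESTURE, DRAWING }

// Hypothetical per-application input coordinate range.
data class InputRange(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float) = x in left..right && y in top..bottom
}

fun determineMode(
    focusX: Float,
    focusY: Float,
    appInputRange: InputRange,
    looksLikeGesture: Boolean
): Mode = when {
    appInputRange.contains(focusX, focusY) -> Mode.TEXT_INPUT
    // Outside the input range the patent distinguishes gesture from drawing,
    // e.g. by track length or another preset operation; modelled here as a flag.
    looksLikeGesture -> Mode.GESTURE
    else -> Mode.DRAWING
}
```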
And under the condition that the target application is determined to be in the text input mode, controlling the virtual screen to acquire an input track in response to the track input operation of the input device.
Before the virtual screen is controlled to acquire the input track, a target input device needs to be determined. Specifically, the target application generates a request instruction and transmits it through the framework layer to the driver layer; it may also transmit the request instruction to the system bottom layer of the electronic device, i.e., a processing layer or a control layer, so that the driver layer or the system bottom layer responds to the request instruction. The request instruction is used for determining the target input device.
In order to ensure that a target input device can simultaneously satisfy a target application and a system driver layer of an electronic device, the target input device in the embodiment of the application includes a first input device and a second input device supported by the driver layer; the first input device is one of a hard keyboard, a soft keyboard and a mouse, and the second input device comprises a stylus pen or a touch pad. For example, when the electronic device is in a non-touch state, any one of the first input devices may be used as the target input device, and of course, multiple types of the first input devices may be simultaneously used as the target input devices, such as a hard keyboard and a mouse combined to form the target input device, or a soft keyboard and a mouse combined to form the target input device. When the electronic device is in a touch state, any one of the second input devices may be used as the target input device, for example, when a touch screen is disposed on the electronic device, a stylus pen corresponding to the electronic device may be determined as the target input device.
In the embodiments of the present application, once the virtual screen acquires the input track, the electronic device is in a touch state; at this time, the target input device is determined to be the second input device, and the virtual screen forwards and otherwise transmits the acquired input track on behalf of the target input device.
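A minimal sketch of this selection rule follows, assuming the driver layer exposes its supported devices as a set and the "track acquired by the virtual screen" condition as a Boolean; these names are illustrative and not from the patent.

```kotlin
// Illustrative selection of the target input device(s).
enum class InputDevice { HARD_KEYBOARD, SOFT_KEYBOARD, MOUSE, STYLUS, TOUCH_PAD }

private val FIRST_DEVICES = setOf(InputDevice.HARD_KEYBOARD, InputDevice.SOFT_KEYBOARD, InputDevice.MOUSE)
private val SECOND_DEVICES = setOf(InputDevice.STYLUS, InputDevice.TOUCH_PAD)

fun selectTargetDevices(
    driverSupported: Set<InputDevice>,
    trackAcquiredByVirtualScreen: Boolean
): Set<InputDevice> =
    if (trackAcquiredByVirtualScreen) {
        // Touch state: the second input device (stylus or touch pad) is chosen.
        driverSupported intersect SECOND_DEVICES
    } else {
        // Non-touch state: one or several first input devices may be combined,
        // e.g. a hard keyboard together with a mouse.
        driverSupported intersect FIRST_DEVICES
    }
```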
S102, recognizing the input track to obtain track identification information.
In a specific implementation, after the input track is acquired through the virtual screen, the input track is processed by a preset track recognition algorithm to obtain track identification information, namely the character information corresponding to the input track.
S103, displaying the track identification information to realize the full-screen handwriting function.
In specific implementation, after the track identification information is obtained, the track identification information is displayed through a display device of the electronic equipment, so that a full-screen handwriting function is realized. Specifically, the track identification information is transmitted to an application layer of the electronic device, and a target application in the application layer is controlled to display the track identification information.
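Putting S101 to S103 together, a simplified end-to-end sketch might look as follows, reusing the VirtualScreen and InputEvent types from the earlier sketch. The recognizer is injected as a plain function because the patent only refers to a "preset track recognition algorithm" without naming one, and the display step is reduced to a callback into the target application; both are assumptions made for illustration.

```kotlin
// Illustrative pipeline for the text input mode (S101-S103).
fun handleTextInput(
    virtualScreen: VirtualScreen,                   // acquires the input track (S101)
    recognize: (List<InputEvent>) -> String,        // preset track recognition algorithm (S102)
    showInTargetApp: (String) -> Unit               // application-layer display of the result (S103)
) {
    val track = virtualScreen.takeTrack()
    if (track.isEmpty()) return
    val trackIdentificationInfo = recognize(track)  // e.g. the recognized characters
    showInTargetApp(trackIdentificationInfo)        // result shown by the target application
}
```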
According to the implementation method of the embodiments of the present application, the input track is acquired through a virtual screen arranged on the data transmission path between the driver layer and the framework layer of the electronic device, the input track is then recognized to obtain track identification information, and the track identification information is displayed to realize the full-screen handwriting function.
As shown in fig. 2, a flowchart of another implementation method provided in the embodiment of the present application is shown, where the specific steps include S201 to S204.
S201, determining that the target application is in a gesture mode, and responding to the track input operation of the input device to control the virtual screen to acquire an input track.
S202, reporting the input track to a framework layer through a virtual screen, so that the framework layer determines a target instruction corresponding to the input track based on the corresponding relation between the track and the instruction.
S203, the control framework layer transmits the target instruction to the application layer.
And S204, controlling the target application in the application layer to respond to the target instruction.
In a specific implementation, the mode determination described above is performed first; for example, it may be determined that the coordinate point identified by the coordinate point information does not fall within the input coordinate range. In that case it is further determined whether the target application is in the gesture mode or the drawing mode; optionally, this is decided based on the length of the input track or on whether the user performs another preset operation.
If the target application is determined to be in the gesture mode, the virtual screen does not forward the input track for recognition, that is, no recognition processing is performed on it. After the virtual screen is controlled to acquire the input track in response to the track input operation of the input device, the input track is reported directly to the framework layer through the virtual screen, so that the framework layer determines the target instruction corresponding to the input track based on the correspondence between tracks and instructions. This correspondence is preset and stored; for example, a bottom-to-top track corresponds to a "switch to the next page" instruction, and a left-to-right track corresponds to a "speed up playing" instruction.
After the target instruction corresponding to the input track is determined, the framework layer is controlled to transmit the target instruction to the application layer. Once it is determined that the application layer has received the target instruction, the target application in the application layer is controlled to respond to it, so that the user controls the target application through gestures.
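As a sketch of the gesture path only: the track-to-instruction correspondence is modelled as a lookup from a coarse direction to a command string, matching the bottom-to-top and left-to-right examples above. The direction classifier, the command names and the dispatch callback are illustrative assumptions, and the InputEvent type is reused from the earlier sketch.

```kotlin
// Illustrative gesture handling: no recognition, just track -> instruction lookup.
enum class Direction { UP, DOWN, LEFT, RIGHT, UNKNOWN }

// Preset correspondence between tracks and instructions (examples from the description).
val gestureCommands = mapOf(
    Direction.UP to "switch to the next page",
    Direction.RIGHT to "speed up playing"
)

fun classifyDirection(track: List<InputEvent>): Direction {
    if (track.size < 2) return Direction.UNKNOWN
    val dx = track.last().x - track.first().x
    val dy = track.last().y - track.first().y
    return when {
        kotlin.math.abs(dy) >= kotlin.math.abs(dx) && dy < 0 -> Direction.UP    // bottom-to-top (screen y grows downward)
        kotlin.math.abs(dy) >= kotlin.math.abs(dx)           -> Direction.DOWN
        dx > 0                                               -> Direction.RIGHT // left-to-right
        else                                                 -> Direction.LEFT
    }
}

fun handleGestureTrack(track: List<InputEvent>, dispatchToTargetApp: (String) -> Unit) {
    // The framework layer maps the reported track to a target instruction...
    val command = gestureCommands[classifyDirection(track)] ?: return
    // ...and the target application in the application layer responds to it.
    dispatchToTargetApp(command)
}
```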
As shown in fig. 3, which is a flowchart of another implementation method provided in the embodiment of the present application, the specific steps include S301 to S303.
And S301, determining that the target application is in a drawing mode, and controlling the virtual screen to acquire an input track in response to the track input operation of the input device.
S302, reporting the input track to a framework layer through a virtual screen, so that the framework layer transmits the input track to an application layer.
And S303, controlling the target application in the application layer to display the input track.
Similarly, after the mode determination described above is performed, if it is determined that the target application is in the drawing mode, the virtual screen does not forward the input track for recognition, that is, no recognition processing is performed on it. After the virtual screen is controlled to acquire the input track in response to the track input operation of the input device, the input track is reported directly to the framework layer through the virtual screen, and the framework layer transmits the input track to the application layer without any processing, that is, it simply passes the input track on.
After the application layer is determined to receive the input track, the target application in the application layer is controlled to display the input track, so that the purpose that a user realizes drawing through touch operation is achieved.
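For completeness, the drawing path can be sketched as a pure pass-through: neither the virtual screen nor the framework layer transforms the track, which is simply handed to the target application to render. The function below is a hypothetical illustration, reusing the InputEvent type from the earlier sketch.

```kotlin
// Illustrative drawing-mode handling: the track is forwarded untouched.
fun handleDrawingTrack(track: List<InputEvent>, renderInTargetApp: (List<InputEvent>) -> Unit) {
    // No recognition and no track-to-instruction mapping: the framework layer
    // only relays the raw track so the application can draw it as-is.
    renderInTargetApp(track)
}
```

Keeping the drawing path free of recognition avoids recognizer latency when the user only wants raw ink, which is consistent with the pass-through behaviour described above.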
Based on the same concept, the second aspect of the present application further provides an electronic device corresponding to the full-screen handwriting implementation method, and as the principle of solving the problem of the electronic device in the present application is similar to the full-screen handwriting implementation method in the present application, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not repeated.
Fig. 4 shows a schematic diagram of an electronic device provided in an embodiment of the present application, which specifically includes:
a virtual screen 401 located on a data transfer path of a driver layer and a frame layer of the electronic device;
a control module 402 configured to determine that the target application is in a text input mode, and control the virtual screen to acquire the input track in response to a track input operation of the input device; the virtual screen is positioned on a data transmission path of a driving layer and a framework layer of the electronic equipment;
an identification module 403 configured to identify the input track, resulting in track identification information;
a first presentation module 404 configured to present the trajectory identification information to implement a full-screen handwriting function.
In yet another embodiment, the virtual screen 401 is disposed on the driver layer or on the framework layer.
In yet another embodiment, the control module 402 is further configured to:
when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device comprises a first input device and a second input device supported by the driver layer,
and when the virtual screen acquires the input track, determining that the target input equipment is the second input equipment.
In yet another embodiment, the first input device is one of a hard keyboard, a soft keyboard, and a mouse, and the second input device includes a stylus or a touch pad.
In yet another embodiment, the first presentation module 404 is further configured to:
transmitting the track identification information to an application layer of the electronic equipment;
and controlling a target application in the application layer to display the track identification information.
In yet another embodiment, the electronic device further comprises a response module 405 configured for:
determining that the target application is in a gesture mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device;
reporting the input track to the framework layer through the virtual screen, so that the framework layer determines a target instruction corresponding to the input track based on a corresponding relation between the track and the instruction;
controlling the framework layer to transmit the target instruction to the application layer;
and controlling the target application in the application layer to respond to the target instruction.
In yet another embodiment, the electronic device further comprises a second presentation module 406 configured for:
determining that the target application is in a drawing mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device;
reporting the input track to the framework layer through the virtual screen so that the framework layer transmits the input track to the application layer;
and controlling a target application in the application layer to display the input track.
Fig. 5 shows a schematic structural diagram of an electronic device as an example. As shown in fig. 5, the electronic device is connected with a stylus; the connection may be wired or wireless, as long as the electronic device can authenticate, identify and respond to the stylus. Further, the electronic device comprises a driver layer, a virtual screen, a framework layer and an application layer, wherein the virtual screen is located between the driver layer and the framework layer, and the application layer is located above the driver layer, the virtual screen and the framework layer. As shown in the figure, the framework layer comprises a first determination module, a second determination module and a recognition module, and the application layer comprises the target application and other applications.
The process of implementing the above implementation method based on the structure shown in fig. 5 is as follows:
the framework layer acquires focus coordinates in the screen of the electronic device in advance, and determines the mode of the target application based on the focus coordinates through the first determination module. The specific manner of determination is as described above.
After the target application is determined to be in the text input mode, the target input device is further determined through the second determination module: the target application generates a request instruction, transmits the request instruction to the second determination module, and at the same time reports the input device corresponding to the target application to the second determination module.
After receiving the request instruction, the second determination module obtains the input devices supported by the driver layer and determines the target input device based on the input devices supported by the driver layer and the input device corresponding to the target application. After the target application is determined to be in the text input mode, once the input track is acquired by the virtual screen, the stylus is determined to be the target input device, and the virtual screen is then controlled to acquire the input track.
After the virtual screen acquires the input track, the input track is forwarded to the identification module of the framework layer, so that the identification module calculates the input track through a preset track identification algorithm to obtain track identification information, namely character information corresponding to the input track. And then, the identification module transmits the track identification information to a target application of the application layer.
After receiving the track identification information, the target application displays the track identification information, and further realizes the full-screen handwriting function.
In the case that the target application is in the gesture mode and the drawing mode, the detailed description is omitted here, and reference may be made to the above description.
According to the implementation method of the embodiments of the present application, the input track is acquired through a virtual screen arranged on the data transmission path between the driver layer and the framework layer of the electronic device, the input track is then recognized to obtain track identification information, and the track identification information is displayed to realize the full-screen handwriting function.
A storage medium is further provided, which is a computer-readable medium storing a computer program; when the computer program is executed by a processor, the method provided in any embodiment of the present application is implemented, including the following steps S11 to S13:
S11, determining that the target application is in a text input mode, and, in response to the track input operation of the input device, controlling the virtual screen to acquire the input track; the virtual screen is located on the data transmission path between the driver layer and the framework layer of the electronic device;
S12, recognizing the input track to obtain track identification information;
and S13, displaying the track identification information to realize the function of full-screen handwriting.
When the computer program is executed by the processor to implement the method, the processor specifically executes the following steps: and setting the virtual screen on the driving layer or setting the virtual screen on the framework layer.
When the computer program is executed by the processor to determine that the target application is in a text input mode and to control the virtual screen to acquire the input track in response to the track input operation of the input device, the processor further executes the following steps: when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device comprises a first input device and a second input device supported by the driver layer; and when the virtual screen acquires the input track, determining that the target input device is the second input device.
When the computer program is executed by the processor to display the track identification information, the processor also executes the following steps: transmitting the track identification information to an application layer of the electronic equipment; and controlling a target application in the application layer to display the track identification information.
When the computer program is executed by the processor to realize the method, the processor also executes the following steps: determining that the target application is in a gesture mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device; reporting the input track to the framework layer through the virtual screen, so that the framework layer determines a target instruction corresponding to the input track based on a corresponding relation between the track and the instruction; controlling the framework layer to transmit the target instruction to the application layer; and controlling the target application in the application layer to respond to the target instruction.
When the computer program is executed by the processor to realize the method, the processor also executes the following steps: determining that the target application is in a drawing mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device; reporting the input track to the framework layer through the virtual screen so that the framework layer transmits the input track to the application layer; and controlling a target application in the application layer to display the input track.
According to the implementation method of the embodiments of the present application, the input track is acquired through a virtual screen arranged on the data transmission path between the driver layer and the framework layer of the electronic device, the input track is then recognized to obtain track identification information, and the track identification information is displayed to realize the full-screen handwriting function.
An electronic device 6 is provided in an embodiment of the present application, and a schematic structural diagram of the electronic device may be as shown in fig. 6, where the electronic device at least includes a memory 601 and a processor 602, where the memory 601 stores a computer program, and the processor 602 implements the method provided in any embodiment of the present application when executing the computer program on the memory 601. Illustratively, the computer program steps of the electronic device 6 are as follows S21 to S23:
S21, determining that the target application is in a text input mode, and, in response to the track input operation of the input device, controlling the virtual screen to acquire the input track; the virtual screen is located on the data transmission path between the driver layer and the framework layer of the electronic device;
S22, recognizing the input track to obtain track identification information;
and S23, displaying the track identification information to realize the function of full-screen handwriting.
When executing the implementation method stored on the memory, the processor also executes the following computer program: and setting the virtual screen on the driving layer or setting the virtual screen on the framework layer.
When executing the computer program stored in the memory to determine that the target application is in a text input mode and to control the virtual screen to acquire the input track in response to the track input operation of the input device, the processor further executes the following: when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device comprises a first input device and a second input device supported by the driver layer; and when the virtual screen acquires the input track, determining that the target input device is the second input device.
When the processor executes the display trajectory identification information stored in the memory, the following computer program is specifically executed: transmitting the track identification information to an application layer of the electronic equipment; and controlling a target application in the application layer to display the track identification information.
When executing the implementation method stored on the memory, the processor also executes the following computer program: determining that the target application is in a gesture mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device; reporting the input track to the framework layer through the virtual screen, so that the framework layer determines a target instruction corresponding to the input track based on a corresponding relation between the track and the instruction; controlling the framework layer to transmit the target instruction to the application layer; and controlling the target application in the application layer to respond to the target instruction.
When executing the implementation method stored on the memory, the processor also executes the following computer program: determining that the target application is in a drawing mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device; reporting the input track to the framework layer through the virtual screen so that the framework layer transmits the input track to the application layer; and controlling a target application in the application layer to display the input track.
According to the implementation method of the embodiments of the present application, the input track is acquired through a virtual screen arranged on the data transmission path between the driver layer and the framework layer of the electronic device, the input track is then recognized to obtain track identification information, and the track identification information is displayed to realize the full-screen handwriting function.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other media capable of storing program code.
Optionally, in this embodiment, the processor executes the method steps described in the above embodiments according to the program code stored in the storage medium. Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementation manners, which are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described here; or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
Moreover, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments based on the present application with equivalent elements, modifications, omissions, combinations (e.g., of various embodiments across), adaptations or alterations. The elements of the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the application. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, subject matter of the present application can lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the application should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The embodiments of the present application have been described in detail, but the present application is not limited to these specific embodiments, and those skilled in the art can make various modifications and modified embodiments based on the concept of the present application, and these modifications and modified embodiments should fall within the scope of the present application.

Claims (10)

1. A method for realizing full screen handwriting is applied to electronic equipment and comprises the following steps:
determining that a target application is in a text input mode, and, in response to a track input operation of an input device, controlling a virtual screen to acquire an input track; the virtual screen is located on the data transmission path between a driver layer and a framework layer of the electronic device;
identifying the input track to obtain track identification information;
and displaying the track identification information to realize the full-screen handwriting function.
2. The implementation method of claim 1, wherein the virtual screen being located on the data transmission path between the driver layer and the framework layer of the electronic device comprises:
setting the virtual screen at the driver layer, or setting the virtual screen at the framework layer.
3. The implementation method of claim 1, wherein the determining that the target application is in a text input mode, and controlling the virtual screen to acquire the input track in response to a track input operation of an input device comprises:
when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device comprises a first input device and a second input device supported by the driver layer;
and when the virtual screen acquires the input track, determining that the target input equipment is the second input equipment.
4. The implementation method of claim 3, wherein the first input device is one of a hard keyboard, a soft keyboard and a mouse, and the second input device comprises a stylus or a touch pad.
5. The implementation method of claim 1, the presenting the trajectory identification information, comprising:
transmitting the track identification information to an application layer of the electronic equipment;
and controlling a target application in the application layer to display the track identification information.
6. The implementation method of claim 2, further comprising:
determining that the target application is in a gesture mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device;
reporting the input track to the framework layer through the virtual screen, so that the framework layer determines a target instruction corresponding to the input track based on a corresponding relation between the track and the instruction;
controlling the framework layer to transmit the target instruction to the application layer;
and controlling the target application in the application layer to respond to the target instruction.
7. The implementation method of claim 2, further comprising:
determining that the target application is in a drawing mode, and controlling a virtual screen to acquire the input track in response to track input operation of the input device;
reporting the input track to the framework layer through the virtual screen so that the framework layer transmits the input track to the application layer;
and controlling a target application in the application layer to display the input track.
8. An electronic device, comprising:
the virtual screen is positioned on a data transmission path of a driving layer and a framework layer of the electronic equipment;
the control module is configured to determine that the target application is in a text input mode, and control the virtual screen to acquire the input track in response to track input operation of the input device;
the identification module is configured to identify the input track to obtain track identification information;
the first display module is configured to display the track identification information to realize a full-screen handwriting function.
9. The electronic device of claim 8, the virtual screen disposed on the driver layer or on the framework layer.
10. The electronic device of claim 9, the control module further configured to:
when it is determined that the target application is in the text input mode, determining, by the target application, a target input device; the target input device comprises a first input device and a second input device supported by the driver layer,
and when the virtual screen acquires the input track, determining that the target input equipment is the second input equipment.
CN202111663664.2A (priority date 2021-12-31, filing date 2021-12-31) - Full-screen handwriting realization method and electronic equipment - Pending - CN114327232A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111663664.2A | 2021-12-31 | 2021-12-31 | Full-screen handwriting realization method and electronic equipment


Publications (1)

Publication Number | Publication Date
CN114327232A | 2022-04-12

Family

ID=81020248

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111663664.2A | Full-screen handwriting realization method and electronic equipment | 2021-12-31 | 2021-12-31

Country Status (1)

Country | Link
CN | CN114327232A

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20180300542A1 * | 2017-04-18 | 2018-10-18 | Nuance Communications, Inc. | Drawing emojis for insertion into electronic text-based messages
CN110347305A * | 2019-05-30 | 2019-10-18 | Huawei Technologies Co., Ltd. | A kind of VR multi-display method and electronic equipment
CN113407099A * | 2020-03-17 | 2021-09-17 | Beijing Sogou Technology Development Co., Ltd. | Input method, device and machine readable medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination