WO2023246604A1 - Handwriting input method and terminal - Google Patents

Handwriting input method and terminal

Info

Publication number
WO2023246604A1
Authority
WO
WIPO (PCT)
Prior art keywords
canvas
event
handwriting
application
input
Prior art date
Application number
PCT/CN2023/100299
Other languages
English (en)
French (fr)
Inventor
张雄
胡斌
王璞
官象山
冯鹏
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023246604A1 publication Critical patent/WO2023246604A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Definitions

  • the present application relates to the field of terminal technology, and in particular to a handwriting input method and terminal.
  • The handwriting input method and terminal provided by this application allow text input to start as soon as the pen touches the terminal's screen, improving the handwriting experience.
  • In a first aspect, a handwriting input method is provided, including: detecting an input control in an application window; setting a first canvas at the corresponding position of the input control in the application window, where the transparency of the first canvas is greater than or equal to a first threshold (that is, the first canvas is set to be nearly transparent or semi-transparent); in response to a first event acting on the first canvas (i.e., the event of the user starting to write in the input control of the application window, such as a contact event of the user's stylus/finger/mouse at the position of the first canvas), setting a second canvas on the upper layer of the application window, where the second canvas is used to receive the user's writing operation and present the writing handwriting, the size of the second canvas is larger than the size of the first canvas, and the transparency of the second canvas is greater than or equal to the first threshold (that is, the second canvas is also set to be nearly transparent or semi-transparent); and displaying, in the input control of the application window, the recognition result of the handwriting on the second canvas. A code sketch of this flow follows.
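  • The overall flow can be illustrated with a minimal C++ sketch. The helper names (CreateFirstCanvas, CreateSecondCanvas, RecognizeInk, SetControlText) are hypothetical placeholders for steps the detailed description elaborates later; this is not code from the disclosure.

```cpp
#include <windows.h>
#include <uiautomation.h>
#include <string>

// Hypothetical helpers; possible implementations are sketched in later sections.
HWND CreateFirstCanvas(RECT inputControlRect);  // nearly transparent overlay
HWND CreateSecondCanvas();                      // larger (e.g. full-screen) canvas
std::wstring RecognizeInk(HWND canvas);         // handwriting recognition
void SetControlText(IUIAutomationElement* control, const std::wstring& text);

// First event: a down event landed on the first canvas covering the input control.
HWND OnFirstEvent(HWND firstCanvas) {
    DestroyWindow(firstCanvas);       // close (or hide) the first canvas
    return CreateSecondCanvas();      // the second canvas now receives the writing
}

// Second event: the pen/finger/mouse was lifted from the second canvas.
void OnSecondEvent(HWND secondCanvas, IUIAutomationElement* inputControl) {
    SetControlText(inputControl, RecognizeInk(secondCanvas));
    DestroyWindow(secondCanvas);
}
```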
  • The first canvas is set to be nearly transparent or semi-transparent; in other words, the first canvas is not fully transparent.
  • Setting the first canvas to be not fully transparent allows the first canvas to intercept the user's input operation at the input control position.
  • At the same time, because the first canvas is nearly transparent or translucent, it does not block the content of the application window's input control, making it difficult for the user to notice the first canvas and leaving the user experience of the foreground window unaffected.
  • the transparency of the second canvas can be set to be nearly transparent or semi-transparent.
  • the second canvas can intercept the user's writing operation.
  • the second canvas does not block the content of the foreground window input control, making it difficult for users to notice the second canvas, and does not affect the user experience of the application window.
  • the second canvas and the first canvas are different windows.
  • For example, the first canvas is set first, and the second canvas is set only when it needs to be used.
  • That is, the terminal detects the input control of the application window and sets a first canvas at the position of the input control for detecting the first event.
  • When the first event is detected, the terminal sets a second canvas with a larger size (for example, the second canvas is a full-screen window), and the second canvas continues to receive the user's writing operation and presents the user's handwriting.
  • Alternatively, the terminal can set the second canvas in advance. For example, after the handwriting pad application is started and the second canvas is set, the transparency attribute of the second canvas is set to fully transparent until the second canvas is needed.
  • The embodiments of the present application do not specifically limit the timing at which the handwriting pad application sets the first canvas and the second canvas.
  • In other embodiments, the second canvas and the first canvas may also be the same window; that is, the second canvas is obtained by adjusting the size of the first canvas. In that case, when the first event is detected at the first canvas, the terminal directly enlarges the first canvas to obtain the second canvas, and the enlarged second canvas continues to receive the user's writing operation and presents the user's handwriting, as in the sketch below.
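  • A minimal sketch of this same-window variant, assuming the first canvas is a top-level layered Win32 window created earlier:

```cpp
#include <windows.h>

// Enlarge the first canvas into the second canvas when the first event arrives.
void EnlargeFirstCanvasToFullScreen(HWND firstCanvas) {
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);
    SetWindowPos(firstCanvas, HWND_TOPMOST, 0, 0, w, h,
                 SWP_SHOWWINDOW);  // the enlarged canvas keeps receiving the ink
}
```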
  • It can be seen that this application detects the input control in the foreground window and sets a nearly transparent or semi-transparent first canvas at the position of the input control to intercept the first event that the user intends to execute on the input control (that is, the event of the user starting to write in the input control of the foreground window).
  • When the first event is intercepted, the terminal sets a second canvas with a larger size (for example, the second canvas is a full-screen window), and the second canvas continues to receive the user's writing operation and presents the user's handwriting. This enables users to start writing as soon as they put pen to paper, improving the handwriting experience on the terminal.
  • the second canvas is a full-screen window, or the size of the second canvas is the same as the size of the application window.
  • In a possible implementation, detecting the input controls in the application window includes: calling the user interface automation (UIA) interface to detect the input controls in the application window. This provides a method for detecting input controls that does not require modifying the terminal's native operating system, so the detection method is more universal.
  • In a possible implementation, calling the UIA interface to detect the input controls of the application window includes: calling the UIA interface to obtain information about each control in the application window; and when the type of a first control is a preset type and the properties of the first control are the preset properties corresponding to that preset type, determining that the first control is an input control of the application window. The preset type includes one or more of an edit control type, a drop-down list type, and a document control type; the preset properties include one or more of being keyboard focusable, being non-read-only, and having keyboard focus.
  • In a possible implementation, the method also includes: during the process of detecting the input controls in the application window, also obtaining the type and/or style description of each control in the application window, and determining the input controls in the application window based on the type and/or style description of each control. This provides another way to detect input controls in application windows, which can be used independently or in combination with other methods to improve detection accuracy.
  • the method further includes: closing the first canvas when a first event acting on the first canvas is detected.
  • the user's writing operation can be received by the second canvas on the application window.
  • the first canvas can also be set to be transparent, so that the first canvas does not affect the second canvas receiving the user's writing operation.
  • In a possible implementation, after the first canvas is set at the corresponding position of the input control in the application window, the method also includes: after detecting a change event of the application window, re-detecting the input control of the changed application window and updating the first canvas set on the upper layer of the application window. The change events of the application window include one or more of: an event of the user switching application windows, an interface jump event within the application window, and an event in which the position and/or size of a control in the application window changes. This keeps the first canvas consistent with the application window after it changes.
  • the first event is any one of a stylus pen down event, a finger touch down event, and a mouse operation down event.
  • In a possible implementation, before the recognition result of the handwriting on the second canvas is displayed in the input control of the application window, the method further includes: after detecting a second event acting on the second canvas, recognizing the handwriting on the second canvas. That is to say, after the user writes on the second canvas for a period of time and then lifts the pen (for example, after a preset duration following the lift), the terminal can start to recognize the handwriting on the second canvas and then fill the input control with the result, as in the sketch below.
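  • One possible way to place the recognition result into the input control on Windows is the UIA ValuePattern, when the control supports it. This is an illustrative sketch, not a method mandated by the disclosure; error handling is trimmed.

```cpp
#include <windows.h>
#include <uiautomation.h>

HRESULT WriteResultToControl(IUIAutomationElement* inputControl,
                             const wchar_t* recognizedText) {
    IUIAutomationValuePattern* value = nullptr;
    HRESULT hr = inputControl->GetCurrentPatternAs(UIA_ValuePatternId,
                                                   IID_PPV_ARGS(&value));
    if (FAILED(hr) || !value) return E_FAIL;
    BSTR text = SysAllocString(recognizedText);
    hr = value->SetValue(text);  // the text appears in the edit/combo/document control
    SysFreeString(text);
    value->Release();
    return hr;
}
```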
  • the terminal may also start to recognize the handwriting on the second canvas at other times, such as periodically identifying the existing handwriting on the second canvas.
  • the embodiments of the present application do not limit the timing and recognition method for the terminal to recognize the handwriting on the second canvas.
  • Regarding the second event: when the first event is a stylus pen down event, the second event is a stylus pen up event; when the first event is a finger touch down event, the second event is a finger touch up event; and when the first event is a mouse operation down event, the second event is a mouse operation up event.
  • In a possible implementation, the method further includes: when a third event acting on the first canvas is detected, modifying the transparency of the first canvas to be less than the first threshold, where the third event is different from the first event; and displaying a handwriting box, which is used to receive the user's writing operation and present the writing handwriting.
  • That is, embodiments of the present application can also distinguish different input methods (that is, distinguish the first event from the third event) and provide different handwriting experiences for different input methods. For example, writing starts when the stylus pen touches the input control (that is, when the first event is detected). When the finger or mouse clicks on the input control (that is, when the third event is detected), the function of starting writing as soon as the pen is put down is not provided; instead, after the finger or mouse click operation is detected, the handwriting box pops up, and the finger or mouse still needs to write in the handwriting box.
  • Alternatively, writing starts when the stylus pen or finger touches the input control (that is, the first event is detected).
  • When the mouse clicks on the input control (that is, the third event is detected), the handwriting box pops up, and the mouse still needs to write in the handwriting box.
  • displaying the handwriting box includes: drawing a handwriting box by an application to which the application window belongs.
  • In a possible implementation, the first event is a stylus pen down event, and the third event is a finger touch down event or a mouse operation down event (see the sketch below).
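  • On Windows, the pointer API can distinguish pen, touch, and mouse in the canvas's window procedure. A minimal sketch under that assumption (Windows 8+); HandleFirstEvent and HandleThirdEvent are hypothetical helpers:

```cpp
#include <windows.h>

void HandleFirstEvent();  // hypothetical: start writing immediately
void HandleThirdEvent();  // hypothetical: lower the canvas opacity, show the box

LRESULT CALLBACK FirstCanvasProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if (msg == WM_POINTERDOWN) {
        UINT32 pointerId = GET_POINTERID_WPARAM(wParam);
        POINTER_INPUT_TYPE type = PT_POINTER;
        if (GetPointerType(pointerId, &type)) {
            if (type == PT_PEN)
                HandleFirstEvent();   // pen down: write as soon as the pen lands
            else                      // PT_TOUCH or PT_MOUSE:
                HandleThirdEvent();   // pop up the handwriting box instead
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```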
  • the application window is a foreground window.
  • A second aspect provides a terminal, including: a processor, a memory, and a touch screen.
  • The memory and the touch screen are coupled to the processor.
  • The memory is used to store computer program code, and the computer program code includes computer instructions; when the processor reads the computer instructions from the memory, the terminal is caused to execute the method described in the above aspect and any of its possible implementation manners.
  • A third aspect provides a device, which is included in a terminal and has the function of implementing the terminal behavior in any of the above aspects and possible implementation manners.
  • This function can be implemented by hardware, or it can be implemented by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above functions. For example, a receiving module or unit, a display module or unit, a processing module or unit, etc.
  • A fourth aspect provides a computer-readable storage medium, which includes computer instructions; when the computer instructions are run on a terminal, the terminal is caused to perform the method described in the above aspect and any possible implementation manner.
  • A fifth aspect provides a computer program product; when the computer program product is run on a computer, the computer is caused to execute the method described in the above aspects and any of the possible implementations.
  • A sixth aspect provides a chip system, including a processor; when the processor executes instructions, the processor performs the method described in the above aspects and any of the possible implementations.
  • Figure 1 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • Figure 2 is a schematic structural diagram of another terminal provided by an embodiment of the present application.
  • Figure 3 is a schematic flow chart of a handwriting method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a user interface of a terminal provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of the organizational relationship of controls in a terminal interface provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of a user interface of another terminal provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of user interfaces of some further terminals provided by embodiments of the present application.
  • Figure 8 is a schematic diagram of user interfaces of some terminals provided by embodiments of the present application.
  • Figure 9A is a schematic flow chart of another handwriting method provided by an embodiment of the present application.
  • Figure 9B is a schematic diagram of user interfaces of some further terminals provided by embodiments of the present application.
  • Figure 10 is a schematic structural diagram of another terminal provided by an embodiment of the present application.
  • Figure 11 is a schematic flow chart of another handwriting method provided by an embodiment of the present application.
  • Figure 12 is a schematic flowchart of another handwriting method provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. Therefore, features defined as "first" and "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments of this application, unless otherwise specified, "plurality" means two or more.
  • Embodiments of the present application provide a handwriting input method, which can be applied to terminals with touch screens; such a terminal can receive operations of a stylus and/or the user's fingers on the touch screen to control the terminal (including sliding input of text on the touch screen, etc.).
  • the terminal can also receive mouse input to operate the terminal.
  • The terminal in the embodiments of the present application may be a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a netbook, a wearable terminal (such as a smart watch, an augmented reality (AR) device, a virtual reality (VR) device, etc.), a vehicle-mounted device, a smart screen, a smart car, a smart speaker, etc.
  • This application does not place special restrictions on the specific form of the terminal.
  • Figure 1 shows a schematic structural diagram of the terminal 100.
  • The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the terminal 100.
  • the terminal 100 may include more or fewer components than shown in the figures, or some components may be combined, or some components may be separated, or may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • The memory in processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated access and reduces the waiting time of the processor 110, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationships between the modules illustrated in the embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation on the terminal 100 .
  • The terminal 100 may also adopt an interface connection method different from that in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • In some embodiments of wireless charging, the charging management module 140 may receive wireless charging input through the wireless charging coil of the terminal 100. While charging the battery 142, the charging management module 140 can also supply power to the terminal through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the terminal 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in terminal 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • For example, antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to the terminal 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the terminal 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • The wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the terminal 100 implements the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • The display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the terminal 100 can implement the shooting function through the ISP, camera 193, video codec, GPU, display screen 194, application processor, etc.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the terminal 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Terminal 100 may support one or more video codecs.
  • the terminal 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • the NPU can realize intelligent cognitive applications of the terminal 100, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function. Such as saving music, videos, etc. files in external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the terminal 100 (such as audio data, phone book, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the terminal 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the terminal 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the terminal 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be heard by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the terminal 100 can also be equipped with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the headphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the buttons 190 include a power button, a volume button, etc.
  • The keys 190 may be mechanical keys or touch keys.
  • the terminal 100 may receive key input and generate key signal input related to user settings and function control of the terminal 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • the motor 191 can also respond to different vibration feedback effects for touch operations in different areas of the display screen 194 .
  • Different application scenarios such as time reminders, receiving information, alarm clocks, games, etc.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the terminal 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the terminal 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the terminal 100 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the terminal 100 adopts eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
  • the software system of the terminal 100 can adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the software system of the terminal 100 can be an Android system, a Hongmeng system, a Windows system, etc.
  • the following uses the Windows system as an example to introduce the software structure diagram of the terminal 100.
  • The Windows system has a layered architecture, including a user mode (also called user state) and a kernel mode (also called kernel state; the kernel mode can also be regarded as the operating system).
  • the user-mode components include system support processes, service processes, user applications, environmental subsystems, and subsystem dynamic link libraries (Dynamic Link Library or Dynamic-link Library, DLL), etc.
  • The above system support processes include a logon process (such as Winlogon.exe), a session manager (such as Smss.exe), an idle process (a special process that is the carrier of idle threads), system processes (including most kernel-mode system processes), a local security and authentication process (such as Lsass.exe), and applications developed for the system (such as a settings application), etc.
  • For example, a handwriting function switch can be set in the settings application, and the handwriting function switch is used for the user to choose to allow or prohibit starting the handwriting pad application.
  • The above service process is a Windows service that runs independently of user login and has nothing to do with the user, for example, the task scheduling (Task Scheduler) service and the print (Print Spooler) service.
  • The user applications may include a handwriting pad application, which enables writing to start as soon as the stylus (or finger or mouse) touches an input control in the interface (i.e., as soon as the pen is put down on, or a click lands on, the input control).
  • Without this function, the terminal displays a specific handwriting box, and the user needs to use the stylus (or finger) again to write inside that specific box. It can be seen that the handwriting function that starts writing as soon as the pen is put down simplifies the user's operations and improves the handwriting experience on Windows systems.
  • the handwriting pad application can also enable the stylus (or finger or mouse) to write across the entire screen, further improving the user's handwriting experience.
  • In some embodiments, the handwriting pad application can also distinguish between stylus input, finger touch input, and mouse input, and provide different handwriting experiences for different inputs. For example, writing begins when the stylus pen is placed on an input control; when the finger or mouse clicks on the input control, writing does not start immediately; instead, after the finger or mouse click operation is detected, the handwriting box pops up, and the finger or mouse still needs to write in the handwriting box.
  • The above-mentioned environment subsystem service process implements the part of the operating system that provides environment support.
  • a variety of environmental subsystems are provided to support running various types of application software.
  • The subsystem DLLs mainly convert documented functions into the appropriate internal native system service calls.
  • the components of the kernel state include: execution body, kernel, device driver (Device Driver), hardware abstraction layer (Hardware Abstraction Layer, HAL), window and graphics system.
  • the above-mentioned execution bodies include basic operating system services, such as memory management, process and thread management, security, I/O management, network and cross-process communication, etc.
  • The above kernel includes some low-level operating system functions, such as thread scheduling, interrupt and exception dispatching, and multi-processor synchronization.
  • The above device drivers include hardware device drivers (such as stylus drivers, mouse drivers, touch drivers, etc.), file system drivers, file system filter drivers, network redirectors, protocol drivers, kernel streaming filter drivers, etc.
  • the hardware device driver converts the IO function calls of the user process into specific hardware device IO requests.
  • The above-mentioned hardware abstraction layer is used to enable the Windows system to be ported to various hardware platforms. It is a loadable kernel module that provides a unified service interface for different hardware platforms.
  • The handwriting pad application can call the window and graphics system to draw the first canvas.
  • The first canvas corresponds to the input control in the application window; the size and position of the first canvas are consistent with the size and position of the corresponding input control, and the first canvas is located on the upper layer of the application window. Then, the first canvas can intercept events executed by the user at the position of the input control.
  • The handwriting pad application can continue to call the window and graphics system to draw the second canvas, which is located on the upper layer of the application window. Then, the second canvas can be used to receive the user's writing input, thereby realizing the function of starting writing as soon as the pen is put down.
  • the size of the second canvas can be set to full screen, thereby enabling the stylus to write within the full screen.
  • the terminal may display multiple application windows at the same time.
  • the handwriting pad application can detect the input controls of all application windows, and set the first canvas for the input controls of all application windows.
  • The handwriting pad application can also detect the foreground window (that is, the window that the user is operating), identify only the input controls in the foreground window, and set the first canvas for the input controls of the foreground window.
  • In this way, the processing load of the terminal can be reduced.
  • As shown in FIG. 3, it is a schematic flow chart of a handwriting input method provided by an embodiment of the present application.
  • the method includes:
  • the terminal obtains the control information contained in the foreground window.
  • The handwriting pad application starts to obtain the controls contained in the foreground window, that is, starts to perform step S301 and subsequent steps.
  • the functions implemented by the handwriting pad application can also be implemented by system services of the Windows system, that is, by system services in the kernel mode of the Windows system, which will not be described again here.
  • the terminal identifies the input control in the foreground window based on the control information contained in the foreground window.
  • the foreground window refers to the window that the user is operating, that is, the window with the title bar in an activated state.
  • For example, when the terminal displays the desktop (essentially an interface of the desktop application), the desktop can be the foreground window.
  • the application window currently operated by the user is the foreground window.
  • multiple application windows are displayed in the terminal, namely the window of application 1 and the window of application 2.
  • the window of application 1 is the window that the user is operating, that is, the foreground window.
  • The window of the Windows system (such as the desktop or the window of application 1) includes multiple controls (which can also be called UI elements, Windows UI controls, etc.). These controls include but are not limited to buttons (Button), edit boxes (Edit), drop-down lists (ComboBox), list boxes (ListBox), scroll bars (Scrollbar), and sub-windows.
  • The controls in an application interface usually fall into two categories. The first category is controls developed using the controls that come with the Windows system; these are also called Windows standard controls. In the application ecosystem of Windows systems, Windows standard controls account for a relatively small proportion, currently about 15%. The other category is controls developed without using the controls that come with the Windows system; for example, the interface of the graphics processing unit (GPU) can be used to develop the controls of the application interface. This type of control is also called a custom control. In the application ecosystem of Windows systems, custom controls account for a large proportion, currently about 85%.
  • Windows standard controls are usually created using an application programming interface (API) of the Windows system (such as CreateWindowEx) and have a window handle. The terminal can therefore call the Windows system API with the window handle to query the control information (such as type, style description, etc.) and determine the input control from that information. For example, when the control type is a preset type (such as the Edit control type or the RichEdit control type) and the control contains a specific style description (such as a style description starting with ES_), the control can be determined to be an input control, as in the sketch below.
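  • A minimal sketch of this standard-control path, assuming the control's window handle is known; the class names checked here are common examples, not an exhaustive list from the disclosure:

```cpp
#include <windows.h>
#include <wchar.h>

// Decide whether a Windows standard control is an input control by querying
// its class name and ES_* styles through the Win32 API.
bool IsStandardInputControl(HWND hwnd) {
    wchar_t cls[64] = {};
    GetClassNameW(hwnd, cls, 64);
    bool editLike = (_wcsicmp(cls, L"Edit") == 0) ||
                    (wcsstr(cls, L"RichEdit") != nullptr);  // e.g. RichEdit20W
    if (!editLike) return false;
    LONG_PTR style = GetWindowLongPtrW(hwnd, GWL_STYLE);
    return (style & ES_READONLY) == 0;  // writable edit controls accept input
}
```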
  • custom controls are drawn and managed by the application developer.
  • the outermost window of an application (which can be the root control of the application) can use Windows standard controls, and other internal controls can use custom controls.
  • To the Windows system, custom controls are not visible. That is, the Windows system API cannot be used to query information about custom controls.
  • embodiments of the present application provide a method for detecting input controls, which can detect information about each control contained in the foreground window by means of User Interface Automation (UIA).
  • the control information queried through the UIA interface is called the UIA information of the control.
  • information such as the position of the input control in the foreground window (such as the coordinates of the four vertices of the rectangular control) is determined based on the UIA information of each control.
  • UIA is an accessibility framework for Windows systems, designed to meet the needs of automated testing tools and assistive technology products (such as screen readers).
  • UIA provides a set of self-defined APIs (called UIA APIs, which are different from the APIs of the Windows system), self-defined interfaces, and self-defined interface patterns, allowing application developers to follow the UIA framework when implementing applications (including the foreground window here), so that application users (such as software testers, people with disabilities, etc.) can better use the application software.
  • calling the API of the Window system can query a set of control information of the control, and calling the UIA interface can query another set of control information of the control (that is, UIA information).
  • The desktop window includes one or more sub-controls 1 (such as the desktop taskbar, application icons, etc.), a sub-control 2 (the window of application 1), and a sub-control 3 (the window of application 2).
  • sub-control 2 includes multiple sub-controls (that is, controls in the interface of application 1, such as title bar, button, text box, edit box, etc.), which are sub-control 21, sub-control 22, sub-control 23, etc. respectively.
  • Sub-control 3 includes multiple sub-controls (i.e., controls in the interface of application 2, such as title bar, button, text box, edit box, etc.), which are sub-control 31, sub-control 32, sub-control 33, etc. respectively.
  • The terminal can first obtain the UIA information of all controls by calling the UIA API, including the UIA information of the desktop window (the root control) and of all sub-controls under the desktop window, and then determine the input controls included in the foreground window (i.e., the window of application 1).
  • the terminal can also first obtain the handle of the foreground window, then obtain the information of the root control of the foreground window based on the handle of the foreground window, and then traverse the UIA information of the child controls under the root control.
  • the UIA information of the root control and sub-controls at all levels includes but is not limited to the type of control and the properties of the control.
  • The types of controls in UIA information include but are not limited to the application bar (AppBar) control type, button (Button) control type, calendar control type, check box (CheckBox) control type, drop-down list (ComboBox) control type, data table (DataGrid) control type, data item (DataItem) control type, document control type, edit (Edit) control type, group control type, header control type, hyperlink control type, image control type, list control type, list item (ListItem) control type, menu control type, menu bar (MenuBar) control type, menu item (MenuItem) control type, pane control type, etc.
  • the properties of the control in the UIA information are used to expose specific aspects of the control function, or to represent common control behaviors.
  • the properties of a control include but are not limited to being keyboard focusable, not read-only, and having keyboard focus.
  • When it is determined that the type of a control in the foreground window is a preset type, and the properties of the control are the preset properties corresponding to the preset type, the control is determined to be an input control.
  • the preset types include but are not limited to edit control (EditControl) type, drop-down list (ComboBox) type, and document control (DocumentControl) type.
  • The preset properties include but are not limited to being keyboard focusable, being non-read-only, and having keyboard focus.
  • For example, the preset properties corresponding to the edit control (EditControl) type include being keyboard focusable and being non-read-only.
  • If the type of a control is the edit control type, and the property of the control is further determined to be keyboard focusable or non-read-only, the control is determined to be an input control.
  • The preset properties corresponding to the drop-down list (ComboBox) type include being non-read-only and having keyboard focus.
  • If the type of a control is the drop-down list type, and the property of the control is further determined to be non-read-only or having keyboard focus, the control is determined to be an input control.
  • The preset property corresponding to the document control type is being non-read-only.
  • If the type of a control is the document control type, and the property of the control is further determined to be non-read-only, the control is determined to be an input control. These rules are illustrated in the sketch below.
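  • A minimal C++ sketch of these rules against the UIA client API; reading "non-read-only" from the ValuePattern is an assumption about where that property lives, and error handling is trimmed:

```cpp
#include <windows.h>
#include <uiautomation.h>

// Apply the preset-type / preset-property rules to one UIA element.
bool IsInputControl(IUIAutomationElement* e) {
    CONTROLTYPEID type = 0;
    if (FAILED(e->get_CurrentControlType(&type))) return false;

    BOOL focusable = FALSE, hasFocus = FALSE;
    e->get_CurrentIsKeyboardFocusable(&focusable);
    e->get_CurrentHasKeyboardFocus(&hasFocus);

    BOOL readOnly = TRUE;  // "non-read-only" via the ValuePattern, if present
    IUIAutomationValuePattern* vp = nullptr;
    if (SUCCEEDED(e->GetCurrentPatternAs(UIA_ValuePatternId, IID_PPV_ARGS(&vp))) && vp) {
        vp->get_CurrentIsReadOnly(&readOnly);
        vp->Release();
    }

    switch (type) {
    case UIA_EditControlTypeId:     return focusable || !readOnly;
    case UIA_ComboBoxControlTypeId: return !readOnly || hasFocus;
    case UIA_DocumentControlTypeId: return !readOnly;
    default:                        return false;
    }
}
```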
  • the input control of the foreground window is determined, and information such as the position of the input control (for example, the coordinates of the four vertices of the rectangular control) is obtained.
  • the foreground window is the window of Application 1.
  • The window of application 1 includes an application bar (AppBar), multiple button (Button) controls (such as a close control, a maximize control, a minimize control, a more control, a return control, a "New Mail" button, etc.), and multiple text boxes (such as text box 401 to text box 405).
  • Text boxes 401 to 405 all belong to the edit control (EditControl) type; the properties of text boxes 401 to 403 are non-read-only, while the properties of text boxes 404 and 405 are read-only. Therefore, it can be determined that text boxes 401 to 403 are input controls. The position information of each of the text boxes 401 to 403 is then further obtained.
  • the terminal can also subscribe to the change event of the foreground window through the API of UIA. That is to say, when the controls in the foreground window change, the terminal re-obtains the UIA information of the controls in the foreground window and re-determines the information of the input controls in the foreground window.
  • Changes to the controls of the foreground window include: the user switching the window currently being operated (for example, the foreground window switching from the window of application 1 to the window of application 2), the interface of the foreground window being switched (for example, interface 1 of application 1 switching to interface 2), and the position and/or size of the controls in the foreground window changing (for example, the foreground window switching from small-window mode to full-screen mode), etc. A subscription sketch follows.
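  • One way to subscribe to such changes with the UIA client API is a structure-changed event handler. This is a sketch under that assumption; the COM object uses a simplified static lifetime, and RedetectInputControlsAndUpdateCanvas is a hypothetical helper:

```cpp
#include <windows.h>
#include <uiautomation.h>

void RedetectInputControlsAndUpdateCanvas();  // hypothetical re-scan helper

struct RescanHandler : IUIAutomationStructureChangedEventHandler {
    // Minimal IUnknown for a static-lifetime object.
    ULONG STDMETHODCALLTYPE AddRef() override { return 1; }
    ULONG STDMETHODCALLTYPE Release() override { return 1; }
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void** ppv) override {
        if (riid == __uuidof(IUnknown) ||
            riid == __uuidof(IUIAutomationStructureChangedEventHandler)) {
            *ppv = this;
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    HRESULT STDMETHODCALLTYPE HandleStructureChangedEvent(
            IUIAutomationElement* /*sender*/, StructureChangeType /*changeType*/,
            SAFEARRAY* /*runtimeId*/) override {
        RedetectInputControlsAndUpdateCanvas();  // controls changed: re-detect
        return S_OK;
    }
};

void SubscribeToForegroundChanges(IUIAutomation* uia, IUIAutomationElement* fgWindow) {
    static RescanHandler handler;
    uia->AddStructureChangedEventHandler(fgWindow, TreeScope_Subtree,
                                         nullptr /*no cache request*/, &handler);
}
```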
  • the terminal sets the first canvas on the upper layer of the foreground window according to the position of the input control in the foreground window.
  • the size and position of the first canvas are consistent with the size and position of the input control in the foreground window.
  • the first canvas may be set to be nearly transparent or semi-transparent, such that its transparency is greater than or equal to threshold 1 (for example, 0.3) and less than threshold 2 (for example, 1).
  • threshold 2 is the attribute value at which a canvas is set to fully opaque. In other words, the first canvas is non-transparent at this time.
  • the first canvas is set to be non-transparent so that it can intercept the user's input operations at the input control position.
  • the first canvas is nearly transparent or semi-transparent, so it does not block the content of the input control in the foreground window, making it difficult for the user to notice the first canvas and leaving the user experience of the foreground window unaffected.
  • based on the position information of text boxes 401 to 403 (such as the coordinates of their four vertices), window 501 is set on the upper layer of text box 401, window 502 is set on the upper layer of text box 402, and window 503 is set on the upper layer of text box 403.
  • the size and position of window 501 coincide with the size and position of text box 401 .
  • the size and position of window 502 coincide with the size and position of text box 402.
  • the size and position of window 503 coincide with the size and position of text box 403.
  • Text boxes 401 to 403 are not shown in FIG. 6 .
  • windows 501 to 503 can be set to be nearly transparent or semi-transparent, so that they reveal the contents of text boxes 401 to 403. In other words, the added windows 501 to 503 do not prevent the user from normally viewing the content of the foreground window or operating the controls in the foreground window.
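  • a minimal Win32 sketch of such a near-transparent overlay follows. It assumes a window class named CanvasClass has already been registered with RegisterClassEx; the 0-to-1 transparency values used in the text map onto the 0-to-255 alpha byte taken by SetLayeredWindowAttributes (so 0.3 corresponds to roughly 77).

    #include <windows.h>

    // Create one "first canvas" window covering the given screen rectangle.
    HWND CreateCanvas(RECT rc, BYTE alpha /* ~77 = "nearly transparent" */) {
        HWND hwnd = CreateWindowExW(
            WS_EX_LAYERED | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
            L"CanvasClass", L"", WS_POPUP,
            rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top,
            nullptr, nullptr, GetModuleHandleW(nullptr), nullptr);
        SetLayeredWindowAttributes(hwnd, 0, alpha, LWA_ALPHA);
        ShowWindow(hwnd, SW_SHOWNOACTIVATE);  // do not steal focus from the app
        return hwnd;
    }

  One such window would be created for each of text boxes 401 to 403, using the vertex coordinates obtained in the detection step.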
  • when detecting the first event acting on the first canvas, the terminal sets a second canvas on the upper layer of the foreground window, and the size of the second canvas is larger than the size of the first canvas.
  • the second canvas is used to display the user's handwriting.
  • the terminal closes the first canvas.
  • the terminal may not close the first canvas, but hide the first canvas, or set the first canvas to be transparent.
  • the first event is an event when the user touches the input control position (ie, the first canvas position) using the stylus pen/finger/mouse.
  • the first event is that the terminal detects a down event at the input control position (i.e., the first canvas position), which may include any one or more of a stylus pen down event, a finger touch down event, and a mouse operation down event.
  • the terminal detects the down event of the stylus at position 601 (i.e., the first event), and the terminal closes (or hides) the window 503 or sets the window 503 to be transparent.
  • the terminal can also close (or hide) window 501 and window 502 together.
  • the terminal sets a second canvas above the foreground window. The size of the second canvas is larger than that of the first canvas. The second canvas may occupy most of the foreground window, such as window 602 as shown in (2) in Figure 7.
  • the second canvas may occupy the entire area of the foreground window, such as window 602 as shown in (3) in Figure 7 .
  • the second canvas may occupy the entire screen of the terminal, that is, the second canvas is a full-screen window, such as window 602 shown in (4) in Figure 7 .
  • the user can start writing as soon as the stylus pen is put down. Since the second canvas is located at the top layer of the foreground window, the second canvas can receive the user's writing operation and present the user's writing handwriting according to the user's writing operation.
  • the transparency of the second canvas can be set to be close to transparent or semi-transparent, for example, the transparency of the second canvas is greater than or equal to the threshold 1 (for example, 0.3) and less than the threshold 2 (for example, 1).
  • the second canvas can intercept the user's writing operation.
  • the second canvas is nearly transparent or translucent, so it does not block the content of the foreground window input control, making it difficult for the user to notice the second canvas, and does not affect the user experience of the foreground window.
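  • how the second canvas might collect and display handwriting can be sketched with the Win32 pointer messages; the stroke container and window procedure below are illustrative, and the actual drawing in WM_PAINT is omitted.

    #include <windows.h>
    #include <windowsx.h>
    #include <vector>

    static std::vector<std::vector<POINT>> g_strokes;  // one vector per stroke

    LRESULT CALLBACK CanvasProc(HWND h, UINT msg, WPARAM w, LPARAM l) {
        switch (msg) {
        case WM_POINTERDOWN:
            g_strokes.emplace_back();          // a new stroke begins
            [[fallthrough]];
        case WM_POINTERUPDATE: {
            if (msg == WM_POINTERUPDATE && !IS_POINTER_INCONTACT_WPARAM(w))
                return 0;                      // hovering pen: nothing to record
            POINT p = { GET_X_LPARAM(l), GET_Y_LPARAM(l) };  // screen coords
            ScreenToClient(h, &p);
            if (!g_strokes.empty())
                g_strokes.back().push_back(p);
            InvalidateRect(h, nullptr, FALSE); // repaint the visible handwriting
            return 0;
        }
        }
        return DefWindowProcW(h, msg, w, l);
    }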
  • the second canvas and the first canvas are different windows.
  • the first canvas is set first
  • the second canvas is set when it needs to be used.
  • the terminal detects the input control of the foreground window
  • the terminal sets a first canvas at the position of the input control of the foreground window for detecting the first event (that is, the event that the user starts writing in the input control of the foreground window).
  • the terminal sets a second canvas with a larger size (for example, the second canvas is a full-screen window), and the second canvas continues to receive the user's writing operation and presents the user's handwriting.
  • the terminal can also set the second canvas in advance.
  • for example, the second canvas is set when the handwriting pad application is started, and at that time the transparency attribute of the second canvas is set to transparent.
  • after the first event on the first canvas is detected, the second canvas is then set to be non-transparent, for example nearly transparent or semi-transparent. Then, the second canvas can receive the user's writing operation and present the user's handwriting.
  • the embodiments of the present application do not specifically limit the timing at which the handwriting pad application sets the first canvas and the second canvas.
  • the second canvas and the first canvas may also be the same window. That is, the second canvas is obtained by adjusting the size of the first canvas. Then, when it is detected that the first event occurs at the first canvas, the terminal directly increases the size of the first canvas to obtain the second canvas. The enlarged second canvas continues to receive the user's writing operation and presents the user's writing handwriting.
  • setting a canvas (such as the first canvas or the second canvas) to be transparent does not require the transparency of the canvas to be strictly equal to 0; it may include the transparency of the canvas being near 0, that is, less than threshold 1. When the transparency of the canvas is greater than or equal to threshold 1, the canvas is considered non-transparent.
  • after detecting the second event acting on the second canvas, and when the first event acting on the second canvas is not detected within preset time period 1 (for example, 1 s), the terminal recognizes the handwriting on the second canvas.
  • the second event is an event when the user lifts the stylus/finger/mouse.
  • the first event is the down event of the stylus
  • the second event is the up event of the stylus.
  • the first event is a down event of a finger touch
  • the second event is an up event of a finger touch.
  • the first event is the down event of the mouse operation
  • the second event is the up event of the mouse operation.
  • the writing on the second canvas is the trajectory of the stylus/finger/mouse moving on the second canvas.
  • the terminal displays the second canvas.
  • the case where the second canvas occupies the entire screen of the terminal is used as an example for explanation.
  • the user can use the stylus to write on the full screen at this time.
  • the user directly uses the stylus to write text. It can be noted that the text written by the user is not limited by the size of the input control 503 at this time.
  • the terminal detects the second event acting on the second canvas.
  • the first event (the down event of the stylus pen) acting on the second canvas is not detected within the preset time period 1 (for example, 1s) after the second event acting on the second canvas is detected.
  • based on the trajectory of the stylus moving on the second canvas, the terminal detects the text and/or symbols corresponding to the trajectory and, as shown in (3) in FIG. 8, displays the corresponding text and/or symbols in the input control 503.
  • the terminal may also start to recognize the handwriting on the second canvas at other times, such as periodically identifying the existing handwriting on the second canvas.
  • the embodiments of the present application do not limit the timing and recognition method for the terminal to recognize the handwriting on the second canvas.
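  • as one hedged example of the lift-then-wait behavior described in these steps (continuing the stroke-capture sketch above), a timer can be armed on the up event and cancelled by the next down event; the timer ID and RecognizeStrokes() are illustrative names, not taken from the patent.

    constexpr UINT_PTR kRecognizeTimer = 1;
    void RecognizeStrokes(const std::vector<std::vector<POINT>>& strokes);

    // Called from the second canvas's window procedure.
    void OnCanvasEvent(HWND h, UINT msg, WPARAM w) {
        switch (msg) {
        case WM_POINTERUP:     // second event: start the ~1 s countdown
            SetTimer(h, kRecognizeTimer, 1000 /* preset time period 1 */, nullptr);
            break;
        case WM_POINTERDOWN:   // first event again: the user kept writing
            KillTimer(h, kRecognizeTimer);
            break;
        case WM_TIMER:
            if (w == kRecognizeTimer) {
                KillTimer(h, kRecognizeTimer);
                RecognizeStrokes(g_strokes);  // hand strokes to the recognizer
            }
            break;
        }
    }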
  • the terminal displays the recognition result in the input control of the foreground window, and the terminal resets the first canvas on the upper layer of the input control of the foreground window. Optionally, the terminal closes (or hides) the second canvas.
  • the first canvas is set to be close to transparent or semi-transparent, then the user can see the recognition results displayed in the input control in the foreground window. And, further, the first canvas can continue to intercept the user's operations at the input control position. In other words, when the user continues to write directly at the input control position, the terminal continues to execute the above steps S304 to S306.
  • if the terminal did not close the first canvas in step S304, but hid the first canvas or set the first canvas to be transparent, then the terminal does not need to reset the first canvas and directly sets the first canvas to nearly transparent or semi-transparent.
  • the embodiments of the present application provide a handwriting input method, which detects the input controls in the foreground window and sets a nearly transparent or semi-transparent first canvas at the position of the input controls to intercept the first event that the user intends to perform on the input control, that is, the event of the user starting to write in the input control of the foreground window.
  • the terminal sets a second canvas with a larger size (for example, the second canvas is a full-screen window), and the second canvas continues to receive the user's writing operation and presents the user's handwriting. This enables users to start writing as soon as they put pen to paper, improving the handwriting experience on the terminal.
  • the terminal does not need to distinguish between different input methods, but implements the function of starting writing as soon as the pen is put down for each input method.
  • considering that the user may use different input methods (such as stylus input, finger touch input, and mouse input), the intention of the operation may be different.
  • embodiments of the present application also provide a handwriting input method, which can distinguish different input methods and achieve different handwriting experiences for different input methods.
  • writing begins when the stylus pen is placed on an input control.
  • when the finger or mouse clicks on the input control, the function of starting writing as soon as the pen is put down is not provided. Instead, after the finger or mouse click operation is detected, a handwriting box pops up, and the finger or mouse still needs to write in the handwriting box.
  • as another example, writing starts when the stylus or finger is placed on the input control; when the mouse clicks on the input control, a handwriting box pops up and the mouse still needs to write in the handwriting box.
  • Figure 9A takes as an example the case where writing starts as soon as the stylus is placed on the input control, and a handwriting box pops up when the finger or mouse clicks on the input control.
  • as shown in Figure 9A, it is a schematic flowchart of another handwriting input method provided by an embodiment of the present application. The method includes:
  • the terminal obtains the control information contained in the foreground window.
  • the terminal identifies the input control in the foreground window based on the control information contained in the foreground window.
  • the terminal sets the first canvas on the upper layer of the foreground window according to the position of the input control in the foreground window.
  • for steps S901 to S903, refer to the relevant content of steps S301 to S303 above, which will not be repeated here.
  • when detecting the third event acting on the first canvas, the terminal sets a second canvas on the upper layer of the foreground window, and the size of the second canvas is larger than the size of the first canvas.
  • the second canvas is used to display the user's handwriting.
  • the terminal closes the first canvas, or sets the first canvas to transparent.
  • the third event is an event that the handwriting pad application pays attention to, for example, a down event of the stylus pen.
  • the terminal can obtain the input event at the first canvas by calling an API of the Windows system (such as SetWindowsHookEx of the Win32 API), and further parse specific flag bits in the input event to distinguish whether the input event is a down event of the stylus, a down event of a finger touch, or a down event of a mouse operation. When the input event is a down event of the stylus, it is determined that the third event occurs at the first canvas.
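  • one way to realize this test, under the assumption that mouse messages synthesized from pen/touch input carry the documented MI_WP_SIGNATURE marker in their extra info, is a low-level mouse hook; the classification helper is ours.

    #include <windows.h>

    #define MI_WP_SIGNATURE 0xFF515700   // documented pen/touch signature
    #define SIGNATURE_MASK  0xFFFFFF80

    enum class Source { Mouse, Pen, Touch };

    static Source Classify(ULONG_PTR extraInfo) {
        if ((extraInfo & SIGNATURE_MASK) != MI_WP_SIGNATURE)
            return Source::Mouse;                  // genuine mouse input
        return (extraInfo & 0x80) ? Source::Touch  // 0x80 flags touch...
                                  : Source::Pen;   // ...otherwise a stylus
    }

    static LRESULT CALLBACK MouseHook(int code, WPARAM w, LPARAM l) {
        if (code == HC_ACTION && w == WM_LBUTTONDOWN) {
            auto* info = reinterpret_cast<MSLLHOOKSTRUCT*>(l);
            Source src = Classify(info->dwExtraInfo);
            // Source::Pen          -> third event: show the second canvas
            // Source::Touch/Mouse  -> fifth event: let the click fall through
            (void)src;
        }
        return CallNextHookEx(nullptr, code, w, l);
    }

    // Installed with:
    // SetWindowsHookEx(WH_MOUSE_LL, MouseHook, GetModuleHandleW(nullptr), 0);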
  • after detecting the fourth event acting on the second canvas, and when the third event acting on the second canvas is not detected again within preset time period 1 (for example, 1 s), the terminal recognizes the handwriting on the second canvas.
  • This application does not place specific restrictions on the timing of recognizing handwriting.
  • the fourth event is, for example, an up event of the stylus.
  • the terminal displays the recognition result in the input control of the foreground window, and resets the first canvas on the upper layer of the input control of the foreground window or restores the transparency of the first canvas to nearly transparent or translucent.
  • the terminal closes (or hides) the second canvas.
  • step S907 may be executed after step S906 or after step S903.
  • when detecting the fifth event acting on the first canvas, the terminal sets the first canvas to be transparent, where the fifth event is different from the third event. Optionally, the terminal starts a timer, and the timing duration is preset duration 2.
  • the fifth event is an event that the handwriting pad application does not pay attention to, such as a down event of a finger touch or a down event of a mouse operation.
  • setting the first canvas to be transparent means, for example, setting the transparency of the first canvas to be equal to 0, or less than threshold 1 (for example, 0.3).
  • the terminal can obtain the input event at the first canvas by calling an API of the Windows system (such as SetWindowsHookEx of the Win32 API), and further parse specific flag bits in the input event to distinguish whether it is a down event of the stylus, a down event of a finger touch, or a down event of a mouse operation. When the input event is a down event of a finger touch or a down event of a mouse operation, it is determined that the fifth event occurs at the first canvas.
  • the fifth event is an event that the handwriting pad application does not pay attention to, that is, an input event that does not require the function of starting writing as soon as the pen is put down.
  • the terminal sets the first canvas to be transparent so that the fifth event is passed through to the input control underneath the first canvas.
  • the timer is started so that the first canvas is restored to nearly transparent or semi-transparent when the timer expires, so that the first canvas can again intercept a subsequent third event acting on the input control, that is, the input events that the handwriting pad application pays attention to.
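  • a hedged sketch of this fifth-event handling: drop the canvas to alpha 0 (fully transparent layered-window pixels are not hit-tested, so input falls through to the control underneath) and restore near-transparency when the timer fires. The constants are illustrative; the patent does not fix a value for preset duration 2.

    constexpr UINT_PTR kRestoreTimer    = 2;
    constexpr UINT     kPresetDuration2 = 500;  // "preset duration 2" in ms, assumed
    constexpr BYTE     kNearTransparent = 77;   // ~0.3 of 255

    void OnFifthEvent(HWND canvas) {
        SetLayeredWindowAttributes(canvas, 0, 0, LWA_ALPHA);  // let input through
        SetTimer(canvas, kRestoreTimer, kPresetDuration2, nullptr);
    }

    void OnRestoreTimer(HWND canvas, WPARAM id) {
        if (id == kRestoreTimer) {
            KillTimer(canvas, kRestoreTimer);
            SetLayeredWindowAttributes(canvas, 0, kNearTransparent, LWA_ALPHA);
        }
    }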
  • the terminal displays a handwriting box in the foreground window, and the handwriting box is used to receive the user's writing operation.
  • when the first canvas set at the input control 503 detects the fifth event (for example, a finger click event), the terminal sets the first canvas to be transparent. After the transparent attribute of the first canvas takes effect, the terminal displays a handwriting box 910 as shown in (2) in Figure 9B for receiving the user's writing operation.
  • the handwriting box is different from the second canvas.
  • the handwriting box is a control that pops up after operating the input control in the foreground window.
  • displaying the handwriting box is part of the business execution logic of the application where the foreground window is located.
  • the handwriting box belongs to the application where the foreground window is located, so the size of the handwriting box is generally not larger than the size of the foreground window.
  • Handwriting boxes are generally opaque (for example, transparency is 1 or close to 1). It is understandable that the size and position of the handwriting box set by different applications are generally different.
  • the second canvas is a control that pops up after operating the first canvas.
  • displaying the second canvas is part of the business execution logic of the handwriting pad application.
  • the size of the second canvas can be arbitrary and is not limited by the size of the foreground window.
  • the second canvas can be full screen.
  • the second canvas is nearly transparent or translucent.
  • the terminal restores the first canvas to be nearly transparent or translucent.
  • before the timer expires, if the user operates the foreground window, the terminal runs according to the business execution logic of the application where the foreground window is located. When the timer expires, the terminal restores the first canvas to nearly transparent or semi-transparent; that is, the first canvas continues to intercept user input on the input control. When the user's input in the input control is an event that the handwriting pad application pays attention to (for example, a stylus down event), the second canvas can be displayed.
  • when the user's input in the input control is an event that the handwriting pad application does not pay attention to (for example, a finger touch up event or a mouse up event), the first canvas continues to be set to the transparent attribute, and the input at the input control is passed directly through to the application where the foreground window is located.
  • in this way, the input method of the input event (such as stylus input, finger touch input, and mouse input) is further distinguished, and different handwriting experiences are realized for different input methods, enriching the user's handwriting experience and meeting different handwriting needs.
  • in the above embodiment, when setting the first canvas, the terminal does not distinguish the user's input method (stylus input, finger input, or mouse input): it uniformly sets the first canvas at the input controls of the foreground window, then determines which input method is used based on the input event received by the first canvas, and adopts different processing methods for different input methods.
  • the terminal may also distinguish which input method the input event is.
  • the terminal can directly set a second canvas for receiving writing operations from the handwriting pad.
  • the above embodiment describes the technical solution of the present application from the perspective of the terminal.
  • the following provides an internal structure diagram of a terminal, and explains the technical solution of the present application in conjunction with the internal structure of the terminal.
  • the terminal structural schematic diagram includes the structure of a handwriting pad application.
  • the user mode of the terminal shows the settings application, the handwriting pad application, and the application where the foreground window is located.
  • the modules included in each application will be explained in detail when introducing the technical solutions below.
  • the kernel mode of the terminal shows the kernel and executive, the window and graphics system, and the device drivers.
  • the kernel and executive include background process services, UIA APIs, and Windows APIs.
  • as shown in FIG. 11, it is a schematic flowchart of the handwriting method when the terminal detects stylus input.
  • the process includes:
  • a function switch can be set in the settings application of the terminal for the user to choose to allow or prohibit starting the handwriting pad application.
  • the handwriting pad application's input control recognition engine is used to identify the input controls contained in the foreground window.
  • the foreground window is the window that the user is operating, usually the window of the foreground application.
  • the control recognition module in the input control recognition engine can obtain the UIA information of the controls contained in the foreground window by calling the UIA API in kernel mode, identify the input controls among them based on the UIA information of the controls, and obtain information such as the number of input controls and the position of each input control.
  • the specific identification method please refer to the relevant content in the previous step S302, which will not be described again here.
  • the window handle recognition module in the input control recognition engine can identify the information of Windows standard controls in the foreground window by calling the Windows API, identify the input controls among them, and obtain information such as the number of input controls and the position of each input control.
  • the handwriting pad application also includes an application policy configuration module, configured to receive the application policy set by the system or set by the user.
  • the application policy configuration module can also obtain the application policy from the network.
  • the application identification rule module in the input control identification engine can use a combination of the control identification method and the window handle identification method based on the application policy configured in the application policy configuration module.
  • the application policy includes application list 1, for which the control identification method is used preferentially (that is, the control identification module is called), and/or application list 2, for which the window handle identification method is used preferentially (that is, the window handle identification module is called).
  • application list 1 includes third-party applications, or applications that contain a large number of custom controls as determined based on statistical data.
  • Application list 2 includes system applications, or applications that contain a large number of Windows standard controls determined based on statistical data. It can be understood that when the foreground window belongs to an application in the application list 1, the input control recognition engine gives priority to calling the control recognition module to identify the input controls in the foreground window. When the foreground window belongs to an application in application list 2, the input control recognition engine first calls the window handle recognition module to identify the input controls in the foreground window.
  • the application policy may also include application list 3 that uses both the control identification method (that is, calling the control identification module) and the window handle identification method (that is, calling the window handle identification module).
  • the embodiment of this application does not specifically limit the setting of the application policy.
  • the handwriting pad application sets the first canvas according to the position of the input control in the foreground window.
  • the handwriting pad application can call the window and graphics system in kernel mode to draw the corresponding first canvas above the input control in the foreground window.
  • the position of the first canvas is consistent with the position of the input control
  • the number of first canvases is consistent with the number of input controls in the foreground window.
  • the first canvas can be set to be nearly transparent or semi-transparent. It is understandable that at this time the foreground window and the first canvas drawn by the handwriting pad application are both actually displayed on the terminal, and the first canvas is nearly transparent or semi-transparent, so the user can still see the input controls in the foreground window and their contents, and can still continue to operate the foreground window.
  • the handwriting pad application can draw the corresponding first canvas based on the number and position of the input controls after identifying the input controls in the foreground window.
  • at this time, the first canvas can be set to nearly transparent or semi-transparent.
  • the handwriting pad application can also draw one or more first canvases when it is started, and at this time the first canvas is set to an inactive state.
  • after the input controls in the foreground window are identified, the handwriting pad application activates a corresponding number of first canvases, that is, sets them to nearly transparent or semi-transparent, and updates the position of each first canvas according to the position of the corresponding input control.
  • the handwriting pad application monitors the change event of the foreground window.
  • the foreground window listening module in the handwriting pad application can subscribe to the change event of the foreground window from the UIA in the terminal's kernel mode.
  • the change event of the foreground window includes the user switching the window currently being operated (for example, the foreground window switches from the window of application 1 to the window of application 2), the foreground window switching interfaces (for example, interface 1 of application 1 switches to interface 2), and the position and/or size of controls in the foreground window changing (for example, the foreground window switches from small-window mode to full-screen mode), etc.
  • the handwriting pad application updates the information of the input control in the foreground window according to the change event of the foreground window.
  • the foreground window listening module in the handwriting pad application sends the change event of the foreground window to the input control recognition engine, and the input control recognition engine re-calls the API of UIA and/or the API of Windows to update the information of the input control in the foreground window.
  • the handwriting pad application updates the first canvas according to the updated information of the input control in the foreground window.
  • the method for updating the first canvas by the handwriting tablet application is the same as the above step S1102, which will not be described again here.
  • the handwriting pad application detects the stylus down event acting on the first canvas.
  • this step is shown in the figure as step S1104a and step S1104b.
  • after detecting an input event acting on the first canvas, the device driver of the Windows operating system sends the input event to the handwriting pad application through the window and graphics system.
  • the handwriting pad application can obtain the specific information of the input event from the Win32 API in kernel mode (such as SetWindowsHookEx) through global hooks.
  • the input event can be distinguished according to the specific flag bits as a down event of the stylus, a down event of a finger touch, or a down event of a mouse operation.
  • the handwriting pad application sets a second canvas on the upper layer of the foreground window.
  • the handwriting pad application can also close the first canvas, hide the first canvas, or set the first canvas to be transparent.
  • the size of the second canvas is larger than the size of the first canvas.
  • the second canvas is a full screen window.
  • the handwriting pad application can call the window and graphics system in kernel mode to draw the second canvas.
  • the handwriting pad application receives the writing operation on the second canvas and displays the handwriting.
  • this step is shown in the figure as step S1106a and step S1106b.
  • the user can write directly without lifting the pen. Since the second canvas is located on the top layer, the user's writing operation is performed on the second canvas. After detecting the writing operation on the second canvas, the device driver of the Windows operating system sends the writing operation to the handwriting pad application through the window and graphics system.
  • the handwriting pad application presents the movement trajectory of the stylus pen on the second canvas, that is, handwriting. This enables the function of starting writing as soon as the stylus pen is put down.
  • when the second canvas is a full-screen window, the full-screen handwriting function is also implemented, which greatly improves the handwriting experience.
  • the handwriting pad application begins to recognize the handwriting on the second canvas. Optionally, the handwriting pad application closes the second canvas.
  • this step is shown in the figure as step S1107a, step S1107b, and step S1107c.
  • after detecting the up event of the stylus acting on the second canvas, the device driver of the Windows operating system sends the up event of the stylus to the handwriting pad application through the window and graphics system.
  • when the handwriting pad application receives the up event of the stylus and does not detect a down event of the stylus within preset time period 1, the second canvas sends the handwriting to the handwriting recognition engine in the handwriting pad application, and the handwriting recognition engine recognizes it.
  • the handwriting pad application sends the handwriting recognition result to the foreground window, and the handwriting recognition result is displayed in the corresponding input control in the foreground window.
  • after the handwriting pad application recognizes the handwriting, it can notify the foreground window through the UIA focus module to modify the properties of the corresponding input control to have keyboard focus. Then, the foreground window calls the window and graphics system in kernel mode to display the handwriting recognition result in the corresponding input control. Thus, the function of inputting text in the foreground window is realized.
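  • one plausible realization of this commit step uses the UIA Value pattern: focus the target element, then write the recognized text. This is a sketch under the assumption that target is the IUIAutomationElement found during input-control detection; controls that do not support the Value pattern would need a different path.

    // Write the recognition result into the foreground window's input control.
    void CommitResult(IUIAutomationElement* target, const wchar_t* text) {
        target->SetFocus();  // give the control keyboard focus first

        IUIAutomationValuePattern* value = nullptr;
        if (SUCCEEDED(target->GetCurrentPatternAs(UIA_ValuePatternId,
                IID_PPV_ARGS(&value))) && value) {
            BSTR bstr = SysAllocString(text);
            value->SetValue(bstr);  // the foreground app repaints the control
            SysFreeString(bstr);
            value->Release();
        }
    }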
  • FIG. 12 shows the flow of an input method in which the terminal detects finger input or mouse input. After the above step S1103c, or after the above step S1108, the following steps S1201 to S1207 may also be included.
  • the handwriting pad application sets the first canvas to be transparent and starts a timer.
  • this step is shown in the figure as step S1201a, step S1201b, and step S1201c.
  • after detecting a finger touch down event or a mouse down event acting on the first canvas, the device driver of the Windows operating system sends the event to the handwriting pad application through the window and graphics system.
  • when the handwriting pad application receives the finger touch down event or mouse down event, it sets the first canvas to be transparent.
  • the handwriting pad application can also start a timer, and the timing duration is preset duration 2.
  • after the transparent attribute of the first canvas takes effect, the operating system (for example, specifically the window and graphics system) will no longer forward input events acting on the first canvas to the first canvas; that is, input events acting at the first canvas position are passed to the application on the layer below the first canvas.
  • the transparent attribute of the first canvas does not take effect immediately, and the time at which it takes effect is uncertain. Therefore, before the transparent attribute of the first canvas takes effect, when the first canvas receives a finger touch down event or a mouse down event, the handwriting pad application needs to send a simulated finger touch up event or mouse up event to the operating system (for example, specifically the window and graphics system), that is, execute step S1202.
  • the operating system will still send the received finger touch up event or mouse up event to the first canvas of the handwriting pad application.
  • in this case, the handwriting pad application needs to send a simulated finger touch down event or mouse down event to the operating system (for example, specifically the window and graphics system), that is, execute step S1203b.
  • the handwriting pad application sends a finger touch up event or a mouse up event to the operating system.
  • before the transparent attribute of the first canvas takes effect, after receiving a finger touch up event or mouse up event, the operating system sends the finger touch up event or mouse up event to the first canvas.
  • after receiving the finger touch up event or mouse up event sent by the operating system, the first canvas sends a simulated finger touch down event or mouse down event to the operating system.
  • after the transparent attribute of the first canvas takes effect, upon receiving the finger touch down event or mouse down event sent by the first canvas, the operating system sends the finger touch down event or mouse down event to the application where the foreground window is located.
  • after detecting a finger touch up event or a mouse up event, the operating system sends the finger touch up event or mouse up event to the application where the foreground window is located.
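  • the simulated events in steps S1202 to S1203 could be injected with SendInput, as sketched below; the patent only states that simulated up/down events are sent to the operating system, so the exact injection mechanism is an assumption.

    #include <windows.h>

    // Replay a left-button transition at a screen position. MOUSEEVENTF_ABSOLUTE
    // expects coordinates normalized to the 0..65535 range.
    void InjectMouseEvent(POINT screenPt, bool down) {
        INPUT in = {};
        in.type = INPUT_MOUSE;
        in.mi.dx = screenPt.x * 65535 / (GetSystemMetrics(SM_CXSCREEN) - 1);
        in.mi.dy = screenPt.y * 65535 / (GetSystemMetrics(SM_CYSCREEN) - 1);
        in.mi.dwFlags = MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE |
                        (down ? MOUSEEVENTF_LEFTDOWN : MOUSEEVENTF_LEFTUP);
        SendInput(1, &in, sizeof(INPUT));
    }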
  • this step is shown in the figure as step S1205a and step S1205b.
  • a handwriting box pops up in the foreground window, and the handwriting box is used to receive writing operations.
  • the application where the foreground window is located pops up a handwriting box, and the handwriting box is used to receive the user's subsequent writing operations. Subsequently, when the operation on the first canvas is detected, the foreground window runs according to its own business execution logic.
  • when the timer expires, the terminal restores the first canvas to a non-transparent property, that is, a property of nearly transparent or semi-transparent, and the first canvas continues to intercept user input on the input control.
  • when the user's input in the input control is an event that the handwriting pad application pays attention to (for example, a stylus down event), the second canvas can be displayed.
  • when the user's input in the input control is an event that the handwriting pad application does not pay attention to (for example, a finger touch up event or a mouse up event), the first canvas is again set to the transparent attribute, and the input is passed directly through to the application where the foreground window is located.
  • An embodiment of the present application also provides a device, which is included in a terminal and has the function of realizing the terminal behavior in any of the methods in the above embodiments.
  • This function can be implemented by hardware, or it can be implemented by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above functions. For example, receiving module or unit, display module or unit, processing module or unit, etc.
  • Embodiments of the present application also provide a computer storage medium that includes computer instructions.
  • when the computer instructions are run on a terminal, the terminal is caused to perform any of the methods in the above embodiments.
  • Embodiments of the present application also provide a computer program product.
  • when the computer program product is run on a computer, the computer is caused to perform any of the methods in the above embodiments.
  • the above-mentioned terminals include hardware structures and/or software modules corresponding to each function.
  • Persons skilled in the art should easily realize that, in conjunction with the units and algorithm steps of each example described in the embodiments disclosed herein, the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Professionals and technicians may use different methods to implement the described functions for each specific application, but such implementations should not be considered to be beyond the scope of the embodiments of the present invention.
  • Embodiments of the present application can divide the above terminals into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. It should be noted that the division of modules in the embodiment of the present invention is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • Each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium.
  • the storage medium includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A handwriting input method and a terminal, relating to the field of terminal technologies, capable of starting text input as soon as the pen lands on the terminal, improving the handwriting experience. The method specifically includes: detecting an input control of an application window, and setting a first canvas at the position corresponding to the input control on the application window, the first canvas being nearly transparent; in response to detecting a first event acting on the first canvas, setting a second canvas on the application window, where the size of the second canvas is larger than that of the first canvas, the second canvas is nearly transparent, and the second canvas is used to receive the user's writing operation and present the written handwriting; and displaying, in the input control of the application window, a recognition result of the handwriting according to the handwriting on the second canvas.

Description

Handwriting input method and terminal
This application claims priority to Chinese Patent Application No. 202210731487.5, filed with the China National Intellectual Property Administration on June 24, 2022 and entitled "Handwriting input method and terminal", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal technologies, and in particular to a handwriting input method and a terminal.
Background
At present, when a user wants to write text on a terminal (for example, a tablet computer running the Windows system), the user first needs to tap an input control in an application interface with a stylus and then lift the stylus; the application pops up a handwriting box, and only then can the user write in the handwriting box with the stylus. It can be seen that the handwriting experience on existing terminals is poor.
Summary
The handwriting input method and terminal provided by this application make it possible to start inputting text on the terminal as soon as the pen lands, improving the handwriting experience.
To achieve the above objective, the embodiments of this application provide the following technical solutions:
According to a first aspect, a handwriting input method is provided, including: detecting an input control in an application window; setting a first canvas at a position corresponding to the input control in the application window, where the transparency of the first canvas is greater than or equal to a first threshold (that is, the first canvas is set to nearly transparent or semi-transparent); in response to a first event acting on the first canvas (that is, an event of the user starting to write in the input control of the application window, for example an event of the user touching the first canvas position with a stylus/finger/mouse), setting a second canvas on the upper layer of the application window, where the second canvas is used to receive the user's writing operation and present the written handwriting, the size of the second canvas is larger than the size of the first canvas, and the transparency of the second canvas is greater than or equal to the first threshold (that is, the second canvas is set to nearly transparent or semi-transparent); and displaying, in the input control of the application window, a recognition result of the handwriting on the second canvas according to the handwriting on the second canvas.
The first canvas being set to nearly transparent or semi-transparent means that the first canvas is non-transparent. On the one hand, the first canvas is set to be non-transparent so that it can intercept the user's input operations at the input control position. On the other hand, the first canvas is nearly transparent or semi-transparent, so it does not block the content of the input control in the application window, making it difficult for the user to notice the first canvas and leaving the user experience of the foreground window unaffected. Similarly, the transparency of the second canvas may be set to nearly transparent or semi-transparent. On the one hand, the second canvas can intercept the user's writing operations; on the other hand, the second canvas does not block the content of the input controls in the foreground window, making it difficult for the user to notice the second canvas and leaving the user experience of the application window unaffected.
In some embodiments, the second canvas and the first canvas are different windows. In one example, the first canvas is set first, and the second canvas is set when it needs to be used. For example, after the terminal detects the input control of the application window, the terminal sets the first canvas at the position of the input control of the application window for detecting the first event. After the first event is detected, the terminal sets a second canvas of a larger size (for example, a full-screen window), and the second canvas continues to receive the user's writing operation and presents the user's handwriting. In another example, the terminal may also set the second canvas in advance, for example right after the handwriting pad application is started; at that time, the transparency attribute of the second canvas is set to transparent. After the first event on the first canvas is detected, the second canvas is set to non-transparent, for example nearly transparent or semi-transparent; the second canvas can then receive the user's writing operation and present the user's handwriting. In short, the embodiments of this application do not specifically limit the timing at which the handwriting pad application sets the first canvas and the second canvas.
In some other embodiments, the second canvas and the first canvas may also be the same window; that is, the second canvas is obtained by adjusting the size of the first canvas. Then, when the first event is detected at the first canvas, the terminal directly enlarges the first canvas to obtain the second canvas, and the enlarged second canvas continues to receive the user's writing operation and presents the user's handwriting.
As can be seen from the above, this application detects the input controls in the foreground window and sets a nearly transparent or semi-transparent first canvas at the position of the input controls to intercept the first event that the user intends to perform on the input control (that is, the event of the user starting to write in the input control of the foreground window). After the first event is detected at the first canvas, the terminal sets a second canvas of a larger size (for example, a full-screen window), and the second canvas continues to receive the user's writing operation and presents the user's handwriting. In this way, the user can start writing as soon as the pen lands, improving the handwriting experience of the terminal.
In a possible implementation, the second canvas is a full-screen window, or the size of the second canvas is the same as the size of the application window. In this way, this application enlarges the operation area for the user's writing and improves the user's handwriting comfort.
In a possible implementation, detecting the input control in the application window includes: calling a User Interface Automation (UIA) interface to detect the input control of the application window. This provides a method for detecting input controls in an application window that does not require modifying the terminal's native operating system, so the detection method is more universally applicable.
In a possible implementation, calling the UIA interface to detect the input control of the application window includes: calling the UIA interface to obtain information of each control in the application window; and when it is determined, based on the information of each control, that the type of a first control in the application window is a preset type and the properties of the first control are the preset properties corresponding to the preset type, determining that the first control is an input control of the application window. The preset types include one or more of an edit control type, a drop-down list type, and a document control type; the preset properties include one or more of being keyboard focusable, not read-only, and having keyboard focus.
In a possible implementation, the method further includes: in the process of detecting the input control in the application window, further obtaining the type and/or style description of each control in the application window, and determining the input control in the application window according to the type and/or style description of each control. This provides another method for detecting input controls in an application window, which can be used independently or in combination with other methods to improve detection accuracy.
In a possible implementation, the method further includes: closing the first canvas when the first event acting on the first canvas is detected. In this way, the second canvas on the application window can receive the user's writing operation. Optionally, the first canvas may instead be set to transparent so that it does not interfere with the second canvas receiving the user's writing operation.
In a possible implementation, after setting the first canvas at the position corresponding to the input control in the application window, the method further includes: after a change event of the application window is monitored, re-detecting the input control of the changed application window and updating the first canvas set on the upper layer of the application window. The change event of the application window includes one or more of: an event of the user switching the application window, an interface jump event in the application window, and an event in which the position and/or size of a control in the application window changes.
In a possible implementation, the first event is any one of a stylus down event, a finger touch down event, and a mouse operation down event.
In a possible implementation, before displaying the recognition result of the handwriting on the second canvas in the input control of the application window, the method further includes: after a second event acting on the second canvas is detected, recognizing the handwriting on the second canvas. That is, when the user lifts the pen after writing on the second canvas for a while (for example, a preset duration after lifting), the terminal may start to recognize the handwriting on the second canvas.
In some other embodiments, the terminal may also start to recognize the handwriting on the second canvas at other times, for example periodically recognizing the handwriting already present on the second canvas. The embodiments of this application do not limit the timing or the method of the terminal's recognition of the handwriting on the second canvas.
In a possible implementation, when the first event is a stylus down event, the second event is a stylus up event; when the first event is a finger touch down event, the second event is a finger touch up event; when the first event is a mouse operation down event, the second event is a mouse operation up event.
In a possible implementation, the method further includes: when a third event acting on the first canvas is detected, modifying the transparency of the first canvas to be less than the first threshold, where the third event is different from the first event; and displaying a handwriting box, where the handwriting box is used to receive the user's writing operation and present the written handwriting.
Considering that the user's intention may differ across input methods (such as stylus input, finger touch input, and mouse input), the embodiments of this application can also distinguish different input methods (that is, distinguish the first event from the third event) and realize different handwriting experiences for different input methods. For example, writing starts as soon as the stylus lands on the input control (that is, the first event is detected). When the finger or mouse clicks the input control (that is, the third event is detected), the start-writing-on-pen-down function is not provided; instead, after the finger or mouse click operation is detected, a handwriting box pops up, and the finger or mouse still needs to write in the handwriting box. As another example, writing starts as soon as the stylus or finger lands on the input control (that is, the first event is detected); when the mouse clicks the input control (that is, the third event is detected), a handwriting box pops up, and the mouse still needs to write in the handwriting box.
In a possible implementation, displaying the handwriting box includes: drawing the handwriting box by the application to which the application window belongs.
In a possible implementation, the first event is a stylus down event, and the third event is a finger touch down event or a mouse operation down event.
In a possible implementation, the application window is a foreground window.
According to a second aspect, a terminal is provided, including a processor, a memory, and a touchscreen, where the memory and the touchscreen are coupled to the processor, the memory is configured to store computer program code, the computer program code includes computer instructions, and when the processor reads the computer instructions from the memory, the terminal is caused to perform the method described in the above aspect and any of its possible implementations.
According to a third aspect, an apparatus is provided. The apparatus is included in a terminal and has functions of implementing the terminal behavior in any of the methods of the above aspect and possible implementations. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above functions, for example a receiving module or unit, a display module or unit, and a processing module or unit.
According to a fourth aspect, a computer-readable storage medium is provided, including computer instructions. When the computer instructions are run on a terminal, the terminal is caused to perform the method described in the above aspect and any of its possible implementations.
According to a fifth aspect, a computer program product is provided. When the computer program product is run on a computer, the computer is caused to perform the method described in the above aspect and any of its possible implementations.
According to a sixth aspect, a chip system is provided, including a processor. When the processor executes instructions, the processor performs the method described in the above aspect and any of its possible implementations.
For the technical effects achievable by the terminal provided in the second aspect, the apparatus provided in the third aspect, the computer-readable storage medium provided in the fourth aspect, the computer program product provided in the fifth aspect, and the chip system provided in the sixth aspect, refer to the description of the technical effects of the first aspect and any of its possible implementations, which will not be repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 2 is a schematic structural diagram of another terminal provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a handwriting method provided by an embodiment of this application;
FIG. 4 is a schematic diagram of a user interface of a terminal provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the organizational relationship of controls of a terminal interface provided by an embodiment of this application;
FIG. 6 is a schematic diagram of a user interface of yet another terminal provided by an embodiment of this application;
FIG. 7 is a schematic diagram of user interfaces of yet other terminals provided by an embodiment of this application;
FIG. 8 is a schematic diagram of user interfaces of yet other terminals provided by an embodiment of this application;
FIG. 9A is a schematic flowchart of another handwriting method provided by an embodiment of this application;
FIG. 9B is a schematic diagram of user interfaces of yet other terminals provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of yet another terminal provided by an embodiment of this application;
FIG. 11 is a schematic flowchart of yet another handwriting method provided by an embodiment of this application;
FIG. 12 is a schematic flowchart of yet another handwriting method provided by an embodiment of this application.
Detailed Description
In the description of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent the three cases of A alone, both A and B, and B alone.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of this application, unless otherwise specified, "multiple" means two or more.
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of this application should not be construed as preferable to or more advantageous than other embodiments or designs. Rather, such words are intended to present a related concept in a concrete manner.
The embodiments of this application provide a handwriting input method applicable to a terminal with a touchscreen. The terminal can receive operations of a stylus and/or the user's finger on the touchscreen to control the terminal (including sliding on the touchscreen to input text, etc.). Optionally, the terminal may also receive mouse input for operating the terminal.
Exemplarily, the terminal in the embodiments of this application may be a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a netbook, a wearable terminal (such as a smart watch, an augmented reality (AR) device, or a virtual reality (VR) device), an in-vehicle device, a smart screen, a smart car, or a smart speaker. This application places no particular limitation on the specific form of the terminal.
FIG. 1 shows a schematic structural diagram of the terminal 100.
The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the terminal 100. In other embodiments of this application, the terminal 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of the present invention are merely schematic and do not constitute a structural limitation on the terminal 100. In other embodiments of this application, the terminal 100 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger, which may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the terminal 100. While charging the battery 142, the charging management module 140 may also supply power to the terminal through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
The wireless communication function of the terminal 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the terminal 100 may be configured to cover a single communication frequency band or multiple communication frequency bands. Different antennas may also be reused to improve antenna utilization; for example, the antenna 1 may be reused as a diversity antenna of a wireless local area network. In some other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal, and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A and the receiver 170B) or displays images or videos through the display 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied on the terminal 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be transmitted from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The terminal 100 implements display functions through the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display 194 and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 194 is configured to display images, videos, and the like. The display 194 includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flex light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), and the like. In some embodiments, the terminal 100 may include 1 or N displays 194, where N is a positive integer greater than 1.
The terminal 100 can implement shooting functions through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is configured to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is configured to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then transmits the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the terminal 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is configured to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is configured to perform a Fourier transform or the like on the frequency point energy.
The video codec is configured to compress or decompress digital video. The terminal 100 may support one or more video codecs, so that the terminal 100 can play or record videos in multiple encoding formats, for example moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transmission mode between neurons in the human brain, it processes input information quickly and can also learn continuously by itself. Applications such as intelligent cognition of the terminal 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be configured to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage functions, for example saving files such as music and videos in the external memory card.
The internal memory 121 may be configured to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system, applications required by at least one function (such as a sound playback function and an image playback function), and the like. The data storage area may store data created during the use of the terminal 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 executes various functional applications and data processing of the terminal 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The terminal 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is configured to convert digital audio information into analog audio signal output, and also to convert analog audio input into digital audio signals. The audio module 170 may also be configured to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
The speaker 170A, also called a "loudspeaker", is configured to convert audio electrical signals into sound signals. The terminal 100 can play music or hands-free calls through the speaker 170A.
The receiver 170B, also called an "earpiece", is configured to convert audio electrical signals into sound signals. When the terminal 100 answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "mike", is configured to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal into the microphone 170C. The terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the terminal 100 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and the like.
The headset jack 170D is configured to connect wired headsets. The headset jack 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The buttons 190 include a power button, volume buttons, and the like. The buttons 190 may be mechanical buttons or touch buttons. The terminal 100 can receive button input and generate key signal input related to user settings and function control of the terminal 100.
The motor 191 can generate vibration prompts. The motor 191 may be used for incoming call vibration prompts or for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects; touch operations acting on different areas of the display 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effects may also be customized.
The indicator 192 may be an indicator light, which may be used to indicate charging status and battery changes, and may also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is configured to connect SIM cards. A SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact with and separation from the terminal 100. The terminal 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time, and the types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards and with external memory cards. The terminal 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal 100 uses an eSIM, that is, an embedded SIM card, which can be embedded in the terminal 100 and cannot be separated from the terminal 100.
It should be noted that the software system of the terminal 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. For example, the software system of the terminal 100 may be the Android system, the HarmonyOS system, the Windows system, or the like. The following takes the Windows system as an example to introduce a schematic diagram of the software structure of the terminal 100. As shown in FIG. 2, the Windows system is a layered architecture, including user mode and kernel mode (the latter may also be regarded as the operating system).
The user-mode components include system support processes, service processes, user applications, environment subsystems, subsystem dynamic link libraries (DLLs), and the like.
The above system support processes include a logon process (such as Winlogon.exe), a session manager (such as Smss.exe), an idle process (the Idle process, a special process that is the carrier of idle threads), system processes (including most kernel-mode system processes), a local security and authentication process (such as Lsass.exe), and applications developed with the system (such as the settings application).
In some examples of this application, a function switch may be set in the settings application; this handwriting function switch is used by the user to choose to allow or prohibit starting the handwriting pad application.
The above service processes are Windows services that run independently of user logon and are unrelated to the user, for example the Task Scheduler service and the Print Spooler service.
The above user applications include third-party applications installed by the user. For example, in some examples of this application, the user applications may include a handwriting pad application. With this handwriting pad application, the writing operation can start as soon as the stylus (or finger or mouse) touches an input control in the interface (that is, the pen lands on or clicks the input control), rather than, as described in the background, the terminal displaying a dedicated handwriting box only after the stylus (or finger or mouse) touches the input control and is lifted, with the user then having to write in that dedicated handwriting box with the stylus (or finger). It can be seen that the start-writing-on-pen-down handwriting function simplifies the user's operations and improves the handwriting experience based on the Windows system.
Optionally, the handwriting pad application can also enable the stylus (or finger or mouse) to write over the full screen, further improving the user's handwriting experience.
In some other examples of this application, the handwriting pad application can also distinguish stylus input, finger touch input, and mouse input, and implement different handwriting experiences for different inputs. For example, writing starts as soon as the stylus lands on the input control. When the finger or mouse clicks the input control, the start-writing-on-pen-down function is not provided; instead, after the finger or mouse click operation is detected, a handwriting box pops up, and the finger or mouse still needs to write in the handwriting box.
It should be noted that "handwriting pad application" here is merely a name provided in this application for convenience of description, and in some examples it cannot be taken as a limitation on the functions of the solution.
The above environment subsystem service processes implement the part of the operating system that supports the operating environment, for example providing multiple environment subsystems to support running multiple types of application software.
It should be noted that the above service processes and user applications do not directly call native Windows system services, but initiate calls through one or more subsystem DLLs. The subsystem DLLs mainly convert a documented function into appropriate internal native system service calls.
The kernel-mode components include the executive, the kernel, device drivers, the hardware abstraction layer (HAL), and the window and graphics system.
The executive includes basic operating system services, such as memory management, process and thread management, security, I/O management, networking, and cross-process communication. The kernel includes some low-level operating system functions, such as thread scheduling, interrupt and exception dispatching, and multiprocessor synchronization. The device drivers include hardware device drivers (such as the stylus driver, mouse driver, and touch driver), file system drivers, file system filter drivers, network redirectors, protocol drivers, kernel streaming filter drivers, and the like. The hardware device drivers convert the I/O function calls of user processes into I/O requests for specific hardware devices. The hardware abstraction layer enables the Windows system to be ported to various hardware platforms; it is a loadable kernel module that provides a unified service interface for different hardware platforms.
The above window and graphics system implements graphical user interface functions. In some examples of this application, after detecting the input controls in the application window, the handwriting pad application can call the window and graphics system to draw first canvases. The first canvases correspond one-to-one with the input controls in the application window; the size and position of each first canvas are consistent with those of the corresponding input control, and the first canvas is located on the upper layer of the application window. The first canvas can then intercept events that the user performs at the input control position. After detecting an event performed by the user at the first canvas position (that is, the input control position), for example the stylus landing at the input control position, the handwriting pad application can continue to call the window and graphics system to draw a second canvas, which is located on the upper layer of the application window. The second canvas can then receive the user's writing input, thereby realizing the start-writing-on-pen-down function. Optionally, the size of the second canvas can be set to full screen, thereby enabling the stylus to write over the full screen. The specific implementation will be described below.
It can be understood that in some scenarios the terminal may display multiple application windows at the same time. The handwriting pad application may then detect the input controls of all application windows and set first canvases for the input controls of all of them. Alternatively, the handwriting pad application may detect only the foreground window (the window the user is operating), identify the input controls only in the foreground window, and set first canvases for the input controls of the foreground window, thereby reducing the operating load of the terminal. Herein, identifying the foreground window in the terminal is taken as an example to describe in detail how the start-writing-on-pen-down function is implemented.
The technical solutions involved in the following embodiments can all be implemented in the terminal 100 having the architecture shown in FIG. 1 and FIG. 2. The technical solutions provided by the embodiments of this application are described in detail below with reference to the accompanying drawings.
As shown in FIG. 3, it is a schematic flowchart of a handwriting input method provided by an embodiment of this application. The method includes:
S301. The terminal obtains the control information contained in the foreground window.
In some examples, after the user starts the handwriting pad application, the handwriting pad application begins to obtain the controls contained in the foreground window, that is, begins to execute step S301 and the subsequent steps. In other examples, the functions implemented by the handwriting pad application may also be implemented by a system service of the Windows system, that is, by a system service in kernel mode of the Windows system, which will not be described one by one here.
S302. The terminal identifies the input controls in the foreground window based on the control information contained in the foreground window.
In steps S301 to S302, the foreground window refers to the window the user is operating, that is, the window whose title bar is in the active state.
In some examples, when the terminal displays the desktop (essentially an interface of the desktop application), the desktop may be the foreground window. After the user starts other applications through the application icons on the desktop, the application window the user is currently operating is the foreground window. For example, as shown in FIG. 4, multiple application windows are displayed in the terminal, namely the window of application 1 and the window of application 2, where the window of application 1 is the window the user is operating, that is, the foreground window. It can be understood that a window of the Windows system (for example the desktop or the window of application 1) includes multiple controls (which may also be called UI elements, Windows UI controls, etc.), including but not limited to buttons (Button), edit boxes (Edit), drop-down lists (ComboBox), list boxes (ListBox), scroll bars (Scrollbar), and child windows.
It should be noted that the controls in an application interface generally fall into two categories. One category consists of controls developed using the controls that come with the Windows system; such controls are also called Windows standard controls. In the application ecosystem of the Windows system, the proportion of Windows standard controls is relatively small, currently about 15%. The other category consists of controls not developed using the controls that come with the Windows system; for example, the controls of an application's interface may be developed using interfaces of the graphics processing unit (GPU). Such controls are also called custom controls. In the application ecosystem of the Windows system, the proportion of custom controls is relatively large, currently about 85%.
For Windows standard controls, they are generally created using an application programming interface (API) of the Windows system (for example CreateWindowEx) and have a window handle. The terminal can then call the Windows system API with the window handle to query the control's information (for example its type and style description) and determine the input controls from this information. For example, when a control's type is a preset type (such as the Edit control type or the RichEdit control type) and it carries a specific style description (such as a style description starting with ES_), the control can be determined to be an input control. A sketch of this route is given after the following caveat.
However, it should be noted that when an input control in an interface is created using the Windows system API (for example CreateWindowEx), adding a style description starting with ES_ is not mandatory. Therefore, even if an input control is a Windows standard control created with the Windows system API, it may not carry a style description starting with ES_; in that case, after querying the control's information through the Windows system API, it cannot be inferred whether the control is an input control. In addition, in the application ecosystem of the Windows system, the proportion of Windows standard controls in application interfaces is itself small. In short, the scenarios in which input controls can be identified by calling the Windows system API with the control's window handle are limited.
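A hedged sketch of this window-handle route: read the class name and style bits through Win32 APIs and treat writable standard edit controls as input controls (subject to the caveat above that many standard controls carry no such styles). The class-name list is illustrative.

    #include <windows.h>
    #include <wchar.h>

    bool IsStandardInputControl(HWND hwnd) {
        wchar_t cls[64] = {};
        GetClassNameW(hwnd, cls, 64);
        if (_wcsicmp(cls, L"Edit") != 0 && _wcsicmp(cls, L"RichEdit20W") != 0)
            return false;                       // not a standard edit control
        LONG_PTR style = GetWindowLongPtrW(hwnd, GWL_STYLE);
        return (style & WS_VISIBLE) && !(style & ES_READONLY);
    }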
For custom controls, they are drawn and managed by the application's developers themselves. For example, the outermost window of an application (which may be the application's root control) may use a Windows standard control, while all other internal controls may be custom controls. However, custom controls are invisible to the Windows system; that is, the Windows system API cannot be used to query the information of custom controls.
To this end, an embodiment of this application provides a method for detecting input controls, which can detect the information of each control contained in the foreground window by means of User Interface Automation (UIA). To distinguish it from the control information queried through the Windows system API, the control information queried through the UIA interface is referred to herein as the control's UIA information. Then, based on the UIA information of each control, information such as the position of the input controls in the foreground window (for example, the coordinates of the four vertices of a rectangular control) is determined. UIA is an accessibility framework of the Windows system designed to meet the needs of automated testing tools and assistive technology products (such as screen readers). UIA provides its own set of APIs (called UIA APIs, as distinguished from the Windows system APIs), its own interfaces, and its own interface patterns, allowing application developers to implement the relevant interfaces of an application (including the foreground window here) following the UIA framework, so that users of the application (such as software testers and people with disabilities) can better use the application software.
In short, calling the Windows system API can query one set of control information, and calling the UIA interface can query another set of control information (that is, UIA information). As mentioned above, in scenarios where the control information queried through the Windows system API cannot determine whether a control is an input control, the control's UIA information queried through the UIA interface can be used to determine whether the control is an input control.
As shown in FIG. 5, it is a schematic structural diagram of some of the controls of the interface shown in FIG. 4. As can be seen from FIG. 5, the root control of the interface is the desktop window, under which there are one or more child controls 1 (for example the desktop taskbar and application icons), child control 2 (the window of application 1), and child control 3 (the window of application 2). Child control 2 contains multiple child controls (the controls in the interface of application 1, such as the title bar, buttons, text boxes, and edit boxes), namely child control 21, child control 22, child control 23, and so on. Child control 3 contains multiple child controls (the controls in the interface of application 2, such as the title bar, buttons, text boxes, and edit boxes), namely child control 31, child control 32, child control 33, and so on.
In a specific implementation, the terminal can first obtain the UIA information of all controls by calling the UIA API, including the UIA information of the desktop window (the root control) and all child controls under the desktop window, and then determine the input controls contained in the foreground window (that is, application 1). Alternatively, the terminal can first obtain the handle of the foreground window, obtain the information of the root control of the foreground window based on the handle, and then traverse the UIA information of the child controls under that root control. The UIA information of the root control and the child controls at each level includes but is not limited to the control's type and the control's properties.
The control types in UIA information include but are not limited to the application bar (AppBar) control type, the button (Button) control type, the calendar control type, the check box (CheckBox) control type, the drop-down list (ComboBox) control type, the data grid (DataGrid) control type, the data item (DataItem) control type, the document control type, the edit (Edit) control type, the group control type, the header control type, the hyperlink control type, the image control type, the list control type, the list item (ListItem) control type, the menu control type, the menu bar (MenuBar) control type, the menu item (MenuItem) control type, the pane control type, and so on.
The control properties in UIA information are used to expose specific aspects of a control's functionality or to represent common control behaviors. For example, the control properties include but are not limited to being keyboard focusable, not read-only, and having keyboard focus.
In some embodiments, when it is determined that the type of a control in the foreground window is a preset type and the properties of the control are the preset properties corresponding to that preset type, the control is determined to be an input control. The preset types include but are not limited to the edit control (EditControl) type, the drop-down list (ComboBox) type, and the document control (DocumentControl) type. The preset properties include but are not limited to being keyboard focusable, not read-only, and having keyboard focus.
In one example, the preset properties corresponding to the edit control (EditControl) type include being keyboard focusable and not read-only. For example, when the type of a control is the edit control type and it is further determined that the property of the control is keyboard focusable or not read-only, the control is determined to be an input control.
The preset properties corresponding to the drop-down list (ComboBox) type include not read-only and having keyboard focus. For another example, when the type of a control is the drop-down list type and it is further determined that the property of the control is not read-only or having keyboard focus, the control is determined to be an input control.
The preset property corresponding to the document control type is not read-only. For another example, when the type of a control is the document control type and it is further determined that the property of the control is not read-only, the control is determined to be an input control.
In this way, the input controls of the foreground window are determined, and information such as the position of each input control (for example, the coordinates of the four vertices of a rectangular control) is obtained.
As shown in FIG. 4, the foreground window is the window of application 1, which includes an application bar (AppBar), multiple button (Button) controls (such as a close control, a maximize control, a minimize control, a more control, a back control, and a "New Mail" button), and multiple text boxes (for example text boxes 401 to 405). Text boxes 401 to 405 all belong to the edit control (EditControl) type; the property of text boxes 401 to 403 is not read-only, and the property of text boxes 404 and 405 is read-only, so it can be determined that text boxes 401 to 403 are input controls. The position information of each of text boxes 401 to 403 is then obtained.
It should also be noted that the terminal can also subscribe to change events of the foreground window through the UIA API. That is, when the controls in the foreground window change, the terminal re-obtains the UIA information of the controls of the foreground window and re-determines the information of the input controls of the foreground window. Changes to the controls of the foreground window include the user switching the window currently being operated (for example, the foreground window switches from the window of application 1 to the window of application 2), the foreground window switching interfaces (for example, interface 1 of application 1 switches to interface 2), and the position and/or size of controls in the foreground window changing (for example, the foreground window switches from small-window mode to full-screen mode).
S303: The terminal sets a first canvas above the foreground window according to the positions of the input controls in the foreground window.
In one example, the size and position of the first canvas are identical to the size and position of the input control in the foreground window. Alternatively, the first canvas may be slightly larger, or slightly smaller, than the input control. Optionally, the first canvas may be set to nearly transparent or semi-transparent, for example with a transparency value greater than or equal to threshold 1 (e.g., 0.3) and less than threshold 2 (e.g., 1), where threshold 2 is the attribute value at which the canvas is set to opaque. In other words, the first canvas is then non-transparent. On the one hand, making the first canvas non-transparent allows it to intercept the user's input operations at the input control's position. On the other hand, because the first canvas is nearly transparent or semi-transparent, it does not obscure the content of the foreground window's input control, so the user barely notices the first canvas and the user experience of the foreground window is not affected.
As shown in FIG. 6, after the input controls in the window of application 1 are identified as text boxes 401 to 403, window 501 may be set above text box 401, window 502 above text box 402, and window 503 above text box 403 according to their position information (such as the coordinates of the four vertices). In some examples, the size and position of window 501 match those of text box 401, the size and position of window 502 match those of text box 402, and the size and position of window 503 match those of text box 403. Text boxes 401 to 403 are not shown in FIG. 6. Optionally, windows 501 to 503 may be set to nearly transparent or semi-transparent, so that the content of text boxes 401 to 403 shows through them. In other words, the added windows 501 to 503 do not prevent the user from viewing the content of the foreground window normally or from operating its controls.
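As a purely illustrative sketch, such a near-transparent overlay can be realized as a Win32 layered pop-up window; mapping the patent's transparency value onto the 0-255 LWA_ALPHA scale, with 30% alpha mirroring the example threshold 1 = 0.3, is an assumption of this example.

    #include <windows.h>

    // Sketch: create one "first canvas" overlay above an input control.
    // rc is the input control's bounding rectangle in screen coordinates.
    HWND CreateFirstCanvas(HINSTANCE inst, RECT rc) {
        WNDCLASSW wc = {};
        wc.lpfnWndProc   = DefWindowProcW;   // real code would intercept input here
        wc.hInstance     = inst;
        wc.lpszClassName = L"FirstCanvas";   // hypothetical class name
        RegisterClassW(&wc);

        HWND hwnd = CreateWindowExW(
            WS_EX_LAYERED | WS_EX_TOPMOST | WS_EX_NOACTIVATE | WS_EX_TOOLWINDOW,
            L"FirstCanvas", L"", WS_POPUP,
            rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top,
            nullptr, nullptr, inst, nullptr);

        // ~30% alpha: visually barely noticeable, yet still hit-testable,
        // so the overlay can intercept down events aimed at the control.
        SetLayeredWindowAttributes(hwnd, 0, (BYTE)(255 * 0.3), LWA_ALPHA);
        ShowWindow(hwnd, SW_SHOWNA);         // show without stealing focus
        return hwnd;
    }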
S304: When a first event acting on the first canvas is detected, the terminal sets a second canvas above the foreground window, the size of the second canvas being larger than that of the first canvas. The second canvas is used to display the user's handwriting strokes. The terminal closes the first canvas. Optionally, instead of closing the first canvas, the terminal may hide it or set it to transparent.
The first event is an event in which the user touches the position of the input control (i.e., the position of the first canvas) with a stylus, a finger, or a mouse. For example, the first event is a down event detected by the terminal at the position of the input control (i.e., the first canvas), and may include any one or more of a stylus down event, a finger-touch down event, and a mouse down event.
For example, as shown in (1) of FIG. 7, when the user wants to enter text into input control 403 (i.e., window 503), the user can start writing on window 503 with the stylus. When the stylus lands on window 503, the terminal detects a stylus down event at position 601 (i.e., the first event), and the terminal closes (or hides) window 503 or sets it to transparent. Optionally, the terminal may also close (or hide) windows 501 and 502 at the same time. The terminal also sets a second canvas above the foreground window, larger than the first canvas. The second canvas may occupy most of the foreground window, as shown by window 602 in (2) of FIG. 7; or all of the foreground window, as shown by window 602 in (3) of FIG. 7; or the terminal's entire screen, i.e., a full-screen window, as shown by window 602 in (4) of FIG. 7. In one embodiment, the user can start writing as soon as the stylus lands. Since the second canvas is at the top layer of the foreground window, it can receive the user's writing operations and render the user's handwriting strokes accordingly.
Optionally, the transparency of the second canvas may be set to nearly transparent or semi-transparent, for example greater than or equal to threshold 1 (e.g., 0.3) and less than threshold 2 (e.g., 1). In this way, on the one hand, the second canvas can intercept the user's writing operations; on the other hand, being nearly transparent or semi-transparent, it does not obscure the content of the foreground window's input controls, so the user barely notices it and the user experience of the foreground window is not affected.
It should be noted that, in some embodiments, the second canvas and the first canvas are different windows. In one example, the first canvas is set first, and the second canvas is set when it is needed. For example, after the terminal detects the input controls of the foreground window, it sets the first canvas at the input controls' positions, used to detect the first event (i.e., the event of the user starting to write in an input control of the foreground window). After the first event is detected, the terminal sets the larger second canvas (for example, a full-screen window), which then continues to receive the user's writing operations and render the handwriting strokes. In another example, the terminal may set up the second canvas in advance, for example right after the handwriting pad application starts, with the second canvas's transparency attribute set to transparent. After the first event on the first canvas is detected, the second canvas is set to non-transparent, for example nearly transparent or semi-transparent, so that it can receive the user's writing operations and render the handwriting strokes. In short, the embodiments of this application do not specifically limit the timing at which the handwriting pad application sets the first canvas and the second canvas.
In some other embodiments, the second canvas and the first canvas may be the same window; that is, the second canvas is obtained by adjusting the size of the first canvas. In that case, when the first event is detected on the first canvas, the terminal directly enlarges the first canvas to obtain the second canvas, and the enlarged second canvas continues to receive the user's writing operations and render the handwriting strokes.
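If the same window is reused, the enlargement can be a single SetWindowPos call; the full-screen geometry below is just one illustrative choice.

    #include <windows.h>

    // Sketch: grow the first canvas into a full-screen second canvas.
    void GrowToSecondCanvas(HWND canvas) {
        SetWindowPos(canvas, HWND_TOPMOST, 0, 0,
                     GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
                     SWP_NOACTIVATE | SWP_SHOWWINDOW);
    }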
It should be noted that, in this application, setting a canvas (for example, the first canvas or the second canvas) to transparent does not require its transparency value to be strictly equal to 0; it may include transparency values near 0, i.e., less than threshold 1. When the canvas's transparency is greater than or equal to threshold 1, the canvas is considered non-transparent.
S305: After a second event acting on the second canvas is detected, and no first event acting on the second canvas is detected within preset duration 1 (for example, 1 s), the terminal recognizes the handwriting strokes on the second canvas.
The second event is an event in which the user lifts the stylus, finger, or mouse. For example, when the first event is a stylus down event, the second event is a stylus up event; when the first event is a finger-touch down event, the second event is a finger-touch up event; when the first event is a mouse down event, the second event is a mouse up event.
The handwriting strokes on the second canvas are the trajectory of the stylus, finger, or mouse moving on the second canvas.
For example, continuing the example of FIG. 7, after the stylus down event on the first canvas (i.e., the first event) is detected, the terminal displays the second canvas. Here, the case where the second canvas occupies the terminal's entire screen is used for illustration; that is, the user can write across the whole screen with the stylus. As shown in (1) of FIG. 8, the user writes text directly with the stylus; note that the writing is not constrained by the size of input control 503. As shown in (2) of FIG. 8, when the user finishes the last stroke of this input and lifts the stylus at position 602, the terminal detects the second event acting on the second canvas. Then, if no first event (a stylus down event) acting on the second canvas is detected within preset duration 1 (for example, 1 s) after the second event, the terminal detects, from the trajectory of the stylus on the second canvas, the text and/or symbols corresponding to the trajectory, and displays the corresponding text and/or symbols in input control 503, as shown in (3) of FIG. 8.
In some other embodiments, the terminal may also start recognizing the handwriting on the second canvas at other times, for example periodically recognizing the strokes already on the second canvas. The embodiments of this application limit neither the timing nor the method of recognizing the handwriting strokes on the second canvas.
S306: The terminal displays the recognition result in the input control of the foreground window, and re-sets the first canvas above the input control of the foreground window. Optionally, the terminal closes (or hides) the second canvas.
It can be understood that, since the first canvas is set to nearly transparent or semi-transparent, the user can see the recognition result displayed in the input control of the foreground window. Furthermore, the first canvas can continue to intercept the user's operations at the input control's position. In other words, if the user continues to write directly at the input control's position, the terminal continues to perform the above steps S304 to S306.
It should be noted that if, in step S304, the terminal did not close the first canvas but hid it or set it to transparent, the terminal does not need to re-create the first canvas; it directly sets the first canvas back to nearly transparent or semi-transparent.
In summary, an embodiment of this application provides a handwriting input method. The method detects the input controls in the foreground window and sets a nearly transparent or semi-transparent first canvas at the input controls' positions to intercept the first event the user intends to perform on an input control (i.e., the event of the user starting to write in an input control of the foreground window). After the first event is detected on the first canvas, the terminal sets a larger second canvas (for example, a full-screen window), which continues to receive the user's writing operations and render the handwriting strokes. This realizes the ability to start writing the moment the pen lands, improving the terminal's handwriting experience.
In the above embodiment, the terminal does not need to distinguish among input methods; the pen-down-to-write capability is implemented for all of them. In some other embodiments of this application, however, it is considered that the user's intent may differ across input methods (such as stylus input, finger-touch input, and mouse input). Therefore, an embodiment of this application further provides a handwriting input method that distinguishes among input methods and implements a different handwriting experience for each.
For example, writing starts as soon as the stylus lands on an input control, whereas when a finger or the mouse clicks an input control, the pen-down-to-write function is not triggered; instead, upon detecting the finger or mouse click, a handwriting box pops up, and the finger or mouse must still write inside that box. For another example, writing starts as soon as the stylus or a finger lands on the input control, while a mouse click on the input control pops up a handwriting box in which the mouse must still write.
FIG. 9A illustrates the case where writing starts as soon as the stylus lands on an input control, while a finger or mouse click on the input control pops up a handwriting box. FIG. 9A is a schematic flowchart of yet another handwriting input method provided by an embodiment of this application; the method includes:
S901: The terminal obtains the control information contained in the foreground window.
S902: The terminal identifies the input controls in the foreground window according to the control information contained in the foreground window.
S903: The terminal sets a first canvas above the foreground window according to the positions of the input controls in the foreground window.
For steps S901 to S903, refer to the related content of steps S301 to S303 above; details are not repeated here.
S904: When a third event acting on the first canvas is detected, the terminal sets a second canvas above the foreground window, the size of the second canvas being larger than that of the first canvas. The second canvas is used to display the user's handwriting strokes. The terminal closes the first canvas, or sets it to transparent.
The third event is an event that the handwriting pad application cares about, for example a stylus down event.
In some examples, the terminal may call a Windows API (for example, SetWindowsHookEx of the Win32 API) to obtain the input event at the first canvas, and further analyze specific flag bits in the input event to distinguish whether it is a stylus down event, a finger-touch down event, or a mouse down event. When the input event is a stylus down event, it is determined that the third event has occurred at the first canvas.
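As a hedged illustration of such flag-bit analysis, the sketch below combines a low-level mouse hook with the pen/touch signature that Windows documents for mouse messages synthesized from pen or touch input; reading that signature from MSLLHOOKSTRUCT::dwExtraInfo is an assumption of this example.

    #include <windows.h>

    // Documented signature marking mouse messages synthesized from pen/touch.
    const ULONG_PTR MI_WP_SIGNATURE = 0xFF515700;
    const ULONG_PTR SIGNATURE_MASK  = 0xFFFFFF00;
    const ULONG_PTR TOUCH_FLAG      = 0x80;   // set for touch, clear for pen

    LRESULT CALLBACK LowLevelMouseProc(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION && wParam == WM_LBUTTONDOWN) {
            const MSLLHOOKSTRUCT* m = reinterpret_cast<MSLLHOOKSTRUCT*>(lParam);
            bool fromPenOrTouch = (m->dwExtraInfo & SIGNATURE_MASK) == MI_WP_SIGNATURE;
            bool isPen   = fromPenOrTouch && !(m->dwExtraInfo & TOUCH_FLAG);
            bool isTouch = fromPenOrTouch &&  (m->dwExtraInfo & TOUCH_FLAG);
            // isPen -> the "third event" the handwriting pad cares about;
            // isTouch or a plain mouse click -> the "fifth event" (see S907).
            (void)isPen; (void)isTouch;
        }
        return CallNextHookEx(nullptr, code, wParam, lParam);
    }

    void InstallHook() {
        SetWindowsHookExW(WH_MOUSE_LL, LowLevelMouseProc,
                          GetModuleHandleW(nullptr), 0);
    }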
S905: After a fourth event acting on the second canvas is detected, and no third event acting on the second canvas is detected again within preset duration 1 (for example, 1 s), the terminal recognizes the handwriting strokes on the second canvas. This application places no specific restriction on the timing of recognizing the handwriting strokes.
The fourth event is, for example, a stylus up event.
S906: The terminal displays the recognition result in the input control of the foreground window, and re-sets the first canvas above the input control of the foreground window or restores the first canvas's transparency to nearly transparent or semi-transparent. Optionally, the terminal closes (or hides) the second canvas.
For the rest of steps S904 to S906, refer to the related content of steps S304 to S306 above; details are not repeated here.
It should be noted that the following step S907 may be performed after step S906, or after step S903.
S907: When a fifth event acting on the first canvas is detected, the terminal sets the first canvas to transparent, where the fifth event is different from the third event. Optionally, the terminal starts a timer, whose duration is preset duration 2.
The fifth event is an event that the handwriting pad application does not care about, for example a finger-touch down event or a mouse down event. Setting the first canvas to transparent means, for example, setting its transparency equal to 0, or to less than threshold 1 (for example, 0.3).
In some examples, the terminal may call a Windows API (for example, SetWindowsHookEx of the Win32 API) to obtain the input event at the first canvas, and further analyze specific flag bits in the input event to distinguish whether it is a stylus down event, a finger-touch down event, or a mouse down event. When the input event is a finger-touch down event or a mouse down event, it is determined that the fifth event has occurred at the first canvas. Since the fifth event is one that the handwriting pad application does not care about, that is, an input event that does not need the pen-down-to-write function, the terminal sets the first canvas to transparent so that the detected fifth event is passed through to the input control beneath the first canvas. In addition, a timer is started so that, when it expires, the first canvas is restored to nearly transparent or semi-transparent, allowing it to intercept subsequent third events acting on the input control, i.e., the input events the handwriting pad application cares about.
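A minimal sketch of this make-transparent-then-restore step follows; using a zero LWA_ALPHA value so that the layered window stops receiving mouse input, and the Win32 timer standing in for preset duration 2, are assumptions of this illustration.

    #include <windows.h>

    const UINT_PTR RESTORE_TIMER_ID     = 1;     // hypothetical timer id
    const UINT     PRESET_DURATION_2_MS = 2000;  // hypothetical preset duration 2

    // Sketch: let the fifth event fall through to the input control below.
    void MakeCanvasClickThrough(HWND canvas) {
        // A layered window with zero alpha no longer receives mouse input,
        // so subsequent events reach the window underneath it.
        SetLayeredWindowAttributes(canvas, 0, 0, LWA_ALPHA);
        SetTimer(canvas, RESTORE_TIMER_ID, PRESET_DURATION_2_MS, nullptr);
    }

    // In the canvas window procedure: restore interception on timer expiry.
    // case WM_TIMER:
    //     if (wParam == RESTORE_TIMER_ID) {
    //         SetLayeredWindowAttributes(hwnd, 0, (BYTE)(255 * 0.3), LWA_ALPHA);
    //         KillTimer(hwnd, RESTORE_TIMER_ID);
    //     }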
S908: After the first canvas's transparent attribute takes effect, the terminal displays a handwriting box in the foreground window, which is used to receive the user's writing operations.
For example, as shown in (1) of FIG. 9B, when the first canvas set at input control 503 detects the fifth event (for example, a finger tap), the terminal sets the first canvas to transparent. After the transparent attribute takes effect, the terminal displays handwriting box 910, shown in (2) of FIG. 9B, to receive the user's writing operations.
Note that the handwriting box differs from the second canvas. The handwriting box is a control popped up after the input control in the foreground window is operated; in other words, displaying the handwriting box is part of the business logic of the application to which the foreground window belongs. The handwriting box belongs to that application, so its size is generally no larger than the foreground window, and it is generally opaque (for example, with a transparency of 1 or close to 1). Understandably, the size and position of the handwriting box generally differ from application to application. The second canvas, by contrast, is a control popped up after the first canvas is operated; that is, displaying the second canvas is part of the handwriting pad application's business logic. The second canvas can be of any size, unconstrained by the foreground window (for example, full screen), and it is nearly transparent or semi-transparent.
S909: After the timer expires, the terminal restores the first canvas to nearly transparent or semi-transparent.
Before the timer expires, if the user operates the foreground window, the terminal runs according to the business logic of the application to which the foreground window belongs. When the timer expires, the terminal restores the first canvas to nearly transparent or semi-transparent; that is, the first canvas resumes intercepting the user's input on the input control. When the user's input on the input control is an event the handwriting pad application cares about (for example, a stylus down event), the second canvas can be displayed. When the user's input on the input control is an event the handwriting pad application does not care about (for example, a finger-touch down event or a mouse down event), the first canvas is again set to the transparent attribute, after which input on the input control is passed through directly to the application to which the foreground window belongs.
It can thus be seen that, after an input event on the first canvas (i.e., the event of the user starting to write in an input control of the foreground window) is detected, the input method of that event (such as stylus input, finger-touch input, or mouse input) is further distinguished, and a different handwriting experience is implemented for each input method, enriching the user's handwriting experience and meeting different handwriting needs.
It should also be noted that the technical solution of steps S901 to S909 above first sets the first canvas uniformly on the foreground window's input controls without distinguishing the user's input method (stylus, finger, or mouse), and then determines the input method from the input event received by the first canvas, handling each input method differently. In some other embodiments, the terminal may instead distinguish the input method after an input event is detected on the input control. For an input method the handwriting pad application cares about, such as stylus input, the terminal may directly set the second canvas to receive the writing operations. For an input method the handwriting pad application does not care about, there is no need to set the first canvas, and the foreground window's native input logic is executed directly.
The above embodiments describe the technical solution of this application from the terminal's perspective. The following provides an internal structure diagram of a terminal and describes the technical solution of this application with reference to that internal structure.
FIG. 10 is a schematic structural diagram of another terminal provided by this application, which includes the structure of the handwriting pad application. Specifically, the terminal's user mode shows a settings application, the handwriting pad application, and the application to which the foreground window belongs; the modules contained in each application will be detailed when the technical solution is introduced below. The terminal's kernel mode shows the kernel and executive, the window and graphics system, and the device drivers, where the kernel and executive include background process services, the UIA APIs, and the Windows APIs.
With reference to the structure shown in FIG. 10, the handwriting method provided by the embodiments of this application is described below, continuing with the example in which writing starts as soon as the stylus lands on an input control, while a finger or mouse click on the input control pops up a handwriting box.
FIG. 11 is a schematic flowchart of the handwriting method when the terminal detects stylus input. The flow includes:
S1101: After the handwriting pad application starts, it detects the input controls in the foreground window.
In one example, a function switch may be provided in the terminal's settings application for the user to allow or forbid starting the handwriting pad application.
In a specific implementation, referring to FIG. 10, the user starts the handwriting pad application through the function switch in the settings application. After the handwriting pad application starts, its input-control recognition engine identifies the input controls contained in the foreground window, where the foreground window is the window the user is currently operating, generally the window of a foreground application. In some examples, the control recognition module in the input-control recognition engine may call the UIA APIs in kernel mode to obtain the UIA information of the controls contained in the foreground window, identify the input controls from that UIA information, and obtain information such as the number of input controls and the position of each. For the specific recognition method, refer to the related content of step S302 above; details are not repeated here.
In other examples, the window-handle recognition module in the input-control recognition engine may call the Windows APIs to identify the information of the Windows standard controls in the foreground window, identify the input controls among them, and obtain information such as the number of input controls and the position of each. For the specific recognition method, refer to the related content of step S302 above; details are not repeated here.
In still other examples, the handwriting pad application further includes an application policy configuration module for receiving application policies set by the system or by the user; alternatively, the module may obtain application policies from the network. The application recognition rule module in the input-control recognition engine can then combine the control recognition method and the window-handle recognition method based on the policies configured in the application policy configuration module. For example, an application policy may include application list 1, for which the control recognition method (i.e., calling the control recognition module) is preferred, and/or application list 2, for which the window-handle recognition method (i.e., calling the window-handle recognition module) is preferred. For example, application list 1 contains third-party applications, or applications determined from statistics to contain many custom controls; application list 2 contains system applications, or applications determined from statistics to contain many Windows standard controls. Understandably, when the foreground window belongs to an application in list 1, the input-control recognition engine preferentially calls the control recognition module to identify the foreground window's input controls; when it belongs to an application in list 2, the engine preferentially calls the window-handle recognition module.
Optionally, the application policy may further include application list 3, for which the control recognition method (calling the control recognition module) and the window-handle recognition method (calling the window-handle recognition module) are used simultaneously. The embodiments of this application place no specific limitation on how application policies are set.
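Purely as an illustration of how such an application policy might be represented in code (the type names, executable names, and default choice here are hypothetical, not from the patent):

    #include <string>
    #include <unordered_map>

    // Hypothetical recognition strategies per application.
    enum class RecognitionStrategy { PreferUia, PreferWindowHandle, UseBoth };

    // Hypothetical policy table: executable name -> preferred strategy.
    const std::unordered_map<std::wstring, RecognitionStrategy> kAppPolicy = {
        { L"thirdparty_editor.exe", RecognitionStrategy::PreferUia },          // list 1
        { L"notepad.exe",           RecognitionStrategy::PreferWindowHandle }, // list 2
        { L"mixed_ui_app.exe",      RecognitionStrategy::UseBoth },            // list 3
    };

    RecognitionStrategy StrategyFor(const std::wstring& exeName) {
        auto it = kAppPolicy.find(exeName);
        // Default choice (an assumption): UIA first, since custom controls dominate.
        return it == kAppPolicy.end() ? RecognitionStrategy::PreferUia : it->second;
    }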
S1102: The handwriting pad application sets the first canvas according to the positions of the input controls in the foreground window.
In some examples, the handwriting pad application may call the window and graphics system in kernel mode to draw a corresponding first canvas above each input control in the foreground window, where the first canvas's position matches the input control's position, the number of first canvases matches the number of input controls in the foreground window, and the first canvas may be set to nearly transparent or semi-transparent. It can be understood that the terminal is then actually displaying the foreground window together with the first canvases drawn by the handwriting pad application; since the first canvases are nearly transparent or semi-transparent, the user can still see the input controls in the foreground window and the content inside them, and can still operate the foreground window.
It should be noted that, in a specific implementation, the handwriting pad application may draw the corresponding first canvases after identifying the input controls in the foreground window, according to information such as their number and positions, with the first canvases set to nearly transparent or semi-transparent at that point. Alternatively, the handwriting pad application may draw one or more first canvases at startup, setting them to an inactive state; then, after the number and positions of the input controls in the foreground window are identified, it activates the corresponding number of first canvases, i.e., sets them to nearly transparent or semi-transparent and updates their positions according to the input controls' positions.
S1103a: The handwriting pad application listens for change events of the foreground window.
For example, the foreground window monitoring module in the handwriting pad application may subscribe to change events of the foreground window from UIA in the terminal's kernel mode. Change events of the foreground window include: the user switching the window currently being operated (for example, the foreground window switching from the window of application 1 to the window of application 2), the interface of the foreground window switching (for example, interface 1 of application 1 switching to interface 2), and the position and/or size of controls in the foreground window changing (for example, the foreground window switching from a small-window mode to full-screen mode).
S1103b: The handwriting pad application updates the information of the input controls in the foreground window according to the foreground window's change events.
For example, the foreground window monitoring module in the handwriting pad application sends the foreground window's change events to the input-control recognition engine, which re-calls the UIA APIs and/or the Windows APIs to update the information of the input controls in the foreground window.
S1103c: The handwriting pad application updates the first canvas according to the updated information of the input controls in the foreground window.
The method by which the handwriting pad application updates the first canvas is the same as in step S1102 above and is not repeated here.
Understandably, steps S1103b and S1103c above are optional.
S1104: The handwriting pad application detects a stylus down event acting on the first canvas.
This step is shown in the figure as steps S1104a and S1104b.
After detecting an input event acting on the first canvas, the device driver of the Windows operating system sends the input event to the handwriting pad application through the window and graphics system. The handwriting pad application can obtain the specific information of the input event from the Win32 APIs in kernel mode (for example, SetWindowsHookEx) through a global hook, and can distinguish, from specific flag bits in the event, whether it is a stylus down event, a finger-touch down event, or a mouse down event.
S1105: In response to detecting the stylus down event, the handwriting pad application sets the second canvas above the foreground window. Optionally, the handwriting pad application may also close the first canvas, hide it, or set it to transparent.
The size of the second canvas is larger than that of the first canvas. In one example, the second canvas is a full-screen window.
In some examples, after the stylus down event is detected, the handwriting pad application may call the window and graphics system in kernel mode to draw the second canvas.
S1106: The handwriting pad application receives the writing operations acting on the second canvas and renders the strokes.
This step is shown in the figure as steps S1106a and S1106b.
It can be understood that after the stylus down event is detected in step S1105, the user can write directly without lifting the pen. Since the second canvas is at the top layer, the user's writing operations act on the second canvas. After detecting a writing operation on the second canvas, the device driver of the Windows operating system sends the writing operation to the handwriting pad application through the window and graphics system, and the handwriting pad application renders the trajectory of the stylus's movement, i.e., the strokes, on the second canvas. This realizes the ability to start writing the moment the stylus lands; when the second canvas is a full-screen window, full-screen handwriting is also realized, greatly improving the handwriting experience.
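For illustration, a bare-bones way to capture and render such a trajectory in the second canvas's window procedure is sketched below; relying on promoted WM_MOUSEMOVE messages and on GDI Polyline rendering are assumptions of this example, and real ink rendering would be smoother.

    #include <windows.h>
    #include <windowsx.h>
    #include <vector>

    static std::vector<POINT> g_stroke;   // points of the current stroke

    // Sketch: fragment of the second canvas's window procedure.
    LRESULT CALLBACK SecondCanvasProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_MOUSEMOVE:
            if (wp & MK_LBUTTON) {   // pen/finger contact arrives as button-down moves
                g_stroke.push_back({ GET_X_LPARAM(lp), GET_Y_LPARAM(lp) });
                InvalidateRect(hwnd, nullptr, FALSE);
            }
            return 0;
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC dc = BeginPaint(hwnd, &ps);
            if (g_stroke.size() > 1)
                Polyline(dc, g_stroke.data(), (int)g_stroke.size());
            EndPaint(hwnd, &ps);
            return 0;
        }
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }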
S1107: After the handwriting pad application detects a stylus up event acting on the second canvas, and no stylus down event acting on the second canvas is detected within preset duration 1, the handwriting pad application starts recognizing the strokes on the second canvas. Optionally, the handwriting pad application closes the second canvas.
This step is shown in the figure as steps S1107a, S1107b, and S1107c.
In some examples, after detecting the stylus up event acting on the second canvas, the device driver of the Windows operating system sends the stylus up event to the handwriting pad application through the window and graphics system. When the handwriting pad application has received the stylus up event and no stylus down event is detected within preset duration 1, the second canvas sends the handwriting strokes to the handwriting recognition engine in the handwriting pad application for recognition.
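The condition "no down event within preset duration 1 after the up event" maps naturally onto a one-shot timer; in the following hedged sketch, the timer id, the duration, and the RecognizeStrokes() call are placeholders.

    #include <windows.h>

    const UINT_PTR IDLE_TIMER_ID        = 2;     // hypothetical timer id
    const UINT     PRESET_DURATION_1_MS = 1000;  // example preset duration 1

    // Sketch: arm the timer on pen-up, cancel it on pen-down.
    void OnPenUp(HWND canvas)   { SetTimer(canvas, IDLE_TIMER_ID, PRESET_DURATION_1_MS, nullptr); }
    void OnPenDown(HWND canvas) { KillTimer(canvas, IDLE_TIMER_ID); }

    // In the canvas window procedure: if the timer fires, the stroke is final.
    // case WM_TIMER:
    //     if (wParam == IDLE_TIMER_ID) {
    //         KillTimer(hwnd, IDLE_TIMER_ID);
    //         RecognizeStrokes();   // hypothetical call into the recognition engine
    //     }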
S1108: The handwriting pad application sends the stroke recognition result to the foreground window, and the corresponding input control in the foreground window displays the recognition result.
For example, after recognizing the strokes, the handwriting pad application can notify the foreground window, through the UIA focus module, to change the corresponding input control's property to "has keyboard focus". The foreground window then calls the window and graphics system in kernel mode to display the stroke recognition result in the corresponding input control, thereby realizing text input into the foreground window.
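One plausible concrete realization, which is an assumption of this example rather than something the patent spells out, is to focus the target element through UIA and then inject the recognized text as Unicode keystrokes:

    #include <windows.h>
    #include <UIAutomation.h>
    #include <string>
    #include <vector>

    // Sketch: focus the input control, then type the recognized text into it.
    void DeliverResult(IUIAutomationElement* inputControl, const std::wstring& text) {
        inputControl->SetFocus();            // UIA: give the control keyboard focus

        std::vector<INPUT> keys;
        for (wchar_t ch : text) {
            INPUT down = {}; down.type = INPUT_KEYBOARD;
            down.ki.wScan   = ch;
            down.ki.dwFlags = KEYEVENTF_UNICODE;   // inject by code point, not scan code
            INPUT up = down;
            up.ki.dwFlags = KEYEVENTF_UNICODE | KEYEVENTF_KEYUP;
            keys.push_back(down);
            keys.push_back(up);
        }
        SendInput((UINT)keys.size(), keys.data(), sizeof(INPUT));
    }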
FIG. 12 shows the flow of the input method when the terminal detects finger input or mouse input. After step S1103c above, or after step S1108 above, the following steps S1201 to S1207 may further be included.
S1201: When a finger-touch down event or a mouse down event acting on the first canvas is detected, the handwriting pad application sets the first canvas to transparent and starts a timer.
This step is shown in the figure as steps S1201a, S1201b, and S1201c.
In some examples, after detecting a finger-touch down event or mouse down event acting on the first canvas, the device driver of the Windows operating system sends the finger-touch down event or mouse down event to the handwriting pad application through the window and graphics system. Upon receiving the finger-touch down event or mouse down event, the handwriting pad application sets the first canvas to transparent. In some examples, the handwriting pad application also starts a timer, whose duration is preset duration 2.
It should be noted that once the first canvas is set to transparent, the operating system (specifically, the window and graphics system) no longer forwards input events acting on the first canvas to it; that is, such events are passed to the application in the layer below the first canvas. However, the transparent attribute set on the first canvas does not take effect immediately, and the time at which it takes effect is indeterminate. Therefore, before the transparent attribute takes effect, whenever the first canvas receives a finger-touch down event or mouse down event, the handwriting pad application must send a simulated finger-touch up event or mouse up event to the operating system (specifically, the window and graphics system), i.e., perform step S1202. Understandably, before the transparent attribute takes effect, the operating system will still send the resulting finger-touch up event or mouse up event to the handwriting pad application's first canvas. Likewise, whenever the first canvas receives a finger-touch up event or mouse up event, the handwriting pad application must send a simulated finger-touch down event or mouse down event to the operating system (specifically, the window and graphics system), i.e., perform step S1203b.
S1202: The handwriting pad application sends a finger-touch up event or mouse up event to the operating system.
S1203a: Before the first canvas's transparent attribute takes effect, after receiving the finger-touch up event or mouse up event, the operating system sends the finger-touch up event or mouse up event to the first canvas.
S1203b: After receiving the finger-touch up event or mouse up event sent by the operating system, the first canvas sends a simulated finger-touch down event or mouse down event to the operating system.
Subsequently, after the transparent attribute set on the first canvas takes effect, when an input event acting at the first canvas's position (i.e., the position of the input control in the foreground window) is detected, the operating system forwards the input event directly to the foreground window and no longer to the handwriting pad application; that is, the following step S1204 and subsequent steps are performed.
S1204: After the first canvas's transparent attribute takes effect, upon receiving the finger-touch down event or mouse down event sent by the first canvas, the operating system sends the finger-touch down event or mouse down event to the application to which the foreground window belongs.
It can thus be seen that once the first canvas's transparent attribute takes effect, the user's input events are passed to the application in the layer below the first canvas, i.e., the application to which the foreground window belongs.
S1205: After detecting a finger-touch up event or mouse up event, the operating system sends the finger-touch up event or mouse up event to the application to which the foreground window belongs.
This step is shown in the figure as steps S1205a and S1205b.
It can be understood that steps S1204 and S1205 above amount to detecting the operation of the user tapping the input control of the foreground window with a finger or the mouse and then lifting off.
S1206: The foreground window pops up a handwriting box, which is used to receive writing operations.
In response to detecting the operation of the user tapping the foreground window's input control with a finger or the mouse, the application to which the foreground window belongs pops up a handwriting box to receive the user's subsequent writing operations. Thereafter, when operations acting at the position of the first canvas are detected, the foreground window runs according to its own business logic.
S1207: When the timer expires, the handwriting pad application automatically restores the first canvas to nearly transparent or semi-transparent.
That is, when preset duration 2 of the timer is reached, the handwriting pad application automatically restores the first canvas's non-transparent attribute, i.e., the nearly transparent or semi-transparent attribute, so that it resumes intercepting the user's input on the input control. When the user's input on the input control is an event the handwriting pad application cares about (a stylus down event), the second canvas can be displayed. When the user's input on the input control is an event the handwriting pad application does not care about (a finger-touch down event or a mouse down event), the first canvas is again set to the transparent attribute, after which input on the input control is passed through directly to the application to which the foreground window belongs.
An embodiment of this application further provides an apparatus contained in a terminal, the apparatus having the function of implementing the terminal's behavior in any of the methods of the above embodiments. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above function, for example a receiving module or unit, a display module or unit, a processing module or unit, etc.
An embodiment of this application further provides a computer storage medium including computer instructions that, when run on a terminal, cause the terminal to perform any of the methods of the above embodiments.
An embodiment of this application further provides a computer program product that, when run on a computer, causes the computer to perform any of the methods of the above embodiments.
It can be understood that, to implement the above functions, the above terminal and the like include corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present invention.
In the embodiments of this application, the above terminal and the like may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present invention is illustrative and is merely a logical functional division; other divisions are possible in actual implementation.
From the description of the above implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as flash memory, a removable hard disk, read-only memory, random access memory, a magnetic disk, or an optical disc.
The above is merely a specific implementation of this application, but the protection scope of this application is not limited thereto; any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (18)

  1. A handwriting input method, wherein the method comprises:
    detecting an input control in an application window;
    setting a first canvas at a position corresponding to the input control in the application window, a transparency of the first canvas being greater than or equal to a first threshold;
    in response to a first event acting on the first canvas, setting a second canvas above the application window, wherein the second canvas is used to receive a writing operation of a user and render handwriting strokes, a size of the second canvas is larger than a size of the first canvas, and a transparency of the second canvas is greater than or equal to the first threshold; and
    displaying, in the input control of the application window, a recognition result of the handwriting strokes on the second canvas according to the handwriting strokes on the second canvas.
  2. The method according to claim 1, wherein the first canvas and the second canvas are different windows, or the first canvas and the second canvas are a same window.
  3. The method according to claim 1 or 2, wherein the second canvas is a full-screen window, or the size of the second canvas is the same as a size of the application window.
  4. The method according to any one of claims 1 to 3, wherein the detecting an input control in an application window comprises:
    calling a User Interface Automation (UIA) interface to detect the input control of the application window.
  5. The method according to claim 4, wherein the calling a User Interface Automation (UIA) interface to detect the input control of the application window comprises:
    calling the UIA interface to obtain information of each control in the application window; and
    when it is determined, according to the information of each control, that a type of a first control in the application window is a preset type and a property of the first control is a preset property corresponding to the preset type, determining that the first control is the input control of the application window;
    wherein the preset type comprises one or more of an edit control type, a drop-down list type, and a document control type, and the preset property comprises one or more of keyboard-focusable, not read-only, and having keyboard focus.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    in the process of detecting the input control in the application window, further obtaining a type and/or style description of each control in the application window, and determining the input control in the application window according to the type and/or style description of each control.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    closing the first canvas when the first event acting on the first canvas is detected.
  8. The method according to any one of claims 1 to 7, wherein after the setting a first canvas at a position corresponding to the input control in the application window, the method further comprises:
    after a change event of the application window is monitored, re-detecting the input control of the changed application window, and updating the first canvas set above the application window;
    wherein the change event of the application window comprises one or more of: an event of the user switching the application window, an interface jump event in the application window, and an event of a change in the position and/or size of a control in the application window.
  9. The method according to any one of claims 1 to 8, wherein
    the first event is any one of a stylus down event, a finger-touch down event, and a mouse down event.
  10. The method according to any one of claims 1 to 9, wherein before the displaying, in the input control of the application window, a recognition result of the handwriting strokes on the second canvas according to the handwriting strokes on the second canvas, the method further comprises:
    after a second event acting on the second canvas is detected, recognizing the handwriting strokes on the second canvas.
  11. The method according to claim 10, wherein
    when the first event is a stylus down event, the second event is a stylus up event;
    when the first event is a finger-touch down event, the second event is a finger-touch up event; and
    when the first event is a mouse down event, the second event is a mouse up event.
  12. The method according to any one of claims 1 to 8, wherein the method further comprises:
    when a third event acting on the first canvas is detected, modifying the transparency of the first canvas to be less than the first threshold, wherein the third event is different from the first event; and
    displaying a handwriting box, the handwriting box being used to receive the writing operation of the user and render handwriting strokes.
  13. The method according to claim 12, wherein the displaying a handwriting box comprises:
    drawing, by an application to which the application window belongs, the handwriting box.
  14. The method according to claim 12 or 13, wherein
    the first event is a stylus down event, and the third event is a finger-touch down event or a mouse down event.
  15. The method according to any one of claims 1 to 14, wherein
    the application window is a foreground window.
  16. A terminal, comprising a processor, a memory, and a touchscreen, wherein the memory and the touchscreen are coupled to the processor, the memory is configured to store computer program code, and the computer program code comprises computer instructions which, when read by the processor from the memory, cause the terminal to perform the handwriting input method according to any one of claims 1 to 15.
  17. A computer-readable storage medium, comprising computer instructions which, when run on a terminal, cause the terminal to perform the handwriting input method according to any one of claims 1 to 15.
  18. A chip system, comprising one or more processors, wherein when the one or more processors execute instructions, the one or more processors perform the handwriting input method according to any one of claims 1 to 15.
PCT/CN2023/100299 2022-06-24 2023-06-14 Handwriting input method and terminal WO2023246604A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210731487.5 2022-06-24
CN202210731487.5A CN117311586A (zh) 2022-06-24 2022-06-24 Handwriting input method and terminal

Publications (1)

Publication Number Publication Date
WO2023246604A1 true WO2023246604A1 (zh) 2023-12-28

Family

ID=89261008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/100299 WO2023246604A1 (zh) 2022-06-24 2023-06-14 手写输入方法及终端

Country Status (2)

Country Link
CN (1) CN117311586A (zh)
WO (1) WO2023246604A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1735733A1 (en) * 2004-04-02 2006-12-27 Nokia Corporation Apparatus and method for handwriting recognition
CN102778999A (zh) * 2012-04-16 2012-11-14 ZTE Corporation Mobile terminal and full-screen handwriting processing method thereof
CN103226445A (zh) * 2013-05-10 2013-07-31 Guangdong Guobi Technology Co., Ltd. Handwriting input method, system and terminal
CN103547983A (zh) * 2011-05-20 2014-01-29 Microsoft Corporation User interface for handwriting input
CN113625932A (zh) * 2021-08-04 2021-11-09 Beijing Jingling Information System Technology Co., Ltd. Full-screen handwriting input method and apparatus

Also Published As

Publication number Publication date
CN117311586A (zh) 2023-12-29

Similar Documents

Publication Publication Date Title
US11722449B2 (en) Notification message preview method and electronic device
WO2021052263A1 (zh) Voice assistant display method and apparatus
RU2766255C1 (ru) Voice control method and electronic device
WO2021129326A1 (zh) Screen display method and electronic device
WO2020052529A1 (zh) Method for quickly invoking a small window while a video is displayed full-screen, graphical user interface, and terminal
WO2021063343A1 (zh) Voice interaction method and apparatus
WO2021036571A1 (zh) Desktop editing method and electronic device
US20240179237A1 (en) Screenshot Generating Method, Control Method, and Electronic Device
WO2021036770A1 (zh) Split-screen processing method and terminal device
WO2021063237A1 (zh) Electronic device control method and electronic device
WO2020062294A1 (zh) Display control method for system navigation bar, graphical user interface, and electronic device
WO2021063098A1 (zh) Touchscreen response method and electronic device
WO2021110133A1 (zh) Control operation method and electronic device
JP2023506936A (ja) Multi-screen collaboration method and system, and electronic device
WO2022017393A1 (zh) Display interaction system, display method, and device
WO2022037726A1 (zh) Split-screen display method and electronic device
WO2024001810A1 (zh) Device interaction method, electronic device, and computer-readable storage medium
WO2024045801A1 (zh) Screenshot method, electronic device, medium, and program product
WO2021042878A1 (zh) Photographing method and electronic device
CN115543145A (zh) Folder management method and apparatus
WO2023222128A1 (zh) Display method and electronic device
WO2023071441A1 (zh) Method, apparatus, and terminal device for displaying contact-list letters
WO2023246604A1 (zh) Handwriting input method and terminal
CN116088715B (zh) Message reminder method and electronic device
WO2024037542A1 (zh) Touch input method, system, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23826253

Country of ref document: EP

Kind code of ref document: A1