CN113641283A - Electronic device, screen writing mode switching method and medium thereof - Google Patents


Info

Publication number: CN113641283A
Application number: CN202110757707.7A
Authority: CN (China)
Prior art keywords: mode, screen, writing, user, gesture
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 叶枫 (Ye Feng), 谭昆 (Tan Kun), 黄嘉伦 (Huang Jialun)
Current assignee: Huawei Technologies Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd; priority to CN202110757707.7A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Abstract

The application relates to the technical field of the intelligent office, and discloses an electronic device, a screen writing mode switching method thereof, and a medium. The method switches screen writing modes through a set gesture: while writing on the display screen of the electronic device, when the user needs to switch the screen writing mode, the user can apply the set gesture directly to the display screen. When the electronic device detects the user's set gesture, it switches the current screen writing mode to the screen writing mode corresponding to that gesture, which can effectively improve the efficiency of screen writing mode switching.

Description

Electronic device, screen writing mode switching method and medium thereof
Technical Field
The application relates to the technical field of intelligent office, in particular to an electronic device and a screen writing mode switching method and medium thereof.
Background
With the development of the intelligent office, more and more intelligent office devices appear in users' work and daily life. For example, the intelligent collaboration large screen can be used in meetings, open discussions, presentations, and other scenarios.
The intelligent collaboration large screen generally provides multiple functional modes for the user's written and drawn content, such as recognition output and key marking. In scenarios such as meetings and teaching, switching among multiple screen writing modes may be required for better communication and display. The screen writing modes may include a free-drawing mode, a structured-drawing mode, a circle selection mode, and the like.
In the prior art, methods for switching screen writing modes are generally complex, resulting in low operating efficiency.
Disclosure of Invention
To solve the technical problem that screen writing mode switching methods in the prior art are generally complex and therefore inefficient, embodiments of the present application provide an electronic device, a screen writing mode switching method thereof, and a medium. The method provided by the embodiments of the present application switches screen writing modes through gestures: when the user needs to switch modes during writing, the user can directly touch the display screen with the set gesture. When the electronic device detects the user's set gesture, it switches the current screen writing mode to the screen writing mode corresponding to that gesture, effectively improving the switching efficiency.
In a first aspect, an embodiment of the present application provides a method for switching a screen writing mode, where the method includes:
the screen writing mode of the electronic equipment is a first writing mode;
the electronic device detects a first mode switching gesture of a user;
the electronic equipment switches the first writing mode into a second writing mode corresponding to the first mode switching gesture.
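The three steps above can be sketched as a small dispatch table. The gesture names and the gesture-to-mode mapping below are illustrative assumptions for the sketch, not part of the patent:

```python
# Illustrative sketch of the first-aspect flow: detect a mode switching
# gesture and switch to the writing mode bound to it.
# Gesture names and the mapping are hypothetical examples.
GESTURE_TO_MODE = {
    "two_finger_touch": "structured_drawing",
    "circle_gesture": "circle_selection",
}

class ScreenWritingController:
    def __init__(self, initial_mode: str = "free_drawing"):
        self.mode = initial_mode  # the "first writing mode"

    def on_gesture(self, gesture: str) -> str:
        """If the gesture is bound to a writing mode, switch to it
        (the "second writing mode"); otherwise keep the current mode."""
        target = GESTURE_TO_MODE.get(gesture)
        if target is not None:
            self.mode = target
        return self.mode
```

With this mapping, a two-finger touch while in the free-drawing mode would switch the controller to the structured-drawing mode, while an unrecognized gesture leaves the mode unchanged.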
It is to be understood that, in the embodiment of the present application, the first writing mode may be a current screen writing mode of the electronic device mentioned in the embodiment of the present application. The second writing mode may be a switched screen writing mode of the electronic device. The first mode switching gesture may be a mode switching gesture corresponding to a switched screen writing mode of the electronic device.
In the embodiments of the present application, when a user intends to switch the current screen writing mode to another one, for example when the current mode is the free-drawing mode and the user wants to switch it to the structured-drawing mode, the user may apply to the display screen the mode switching gesture corresponding to the structured-drawing mode that is stored in the memory of the electronic device. The gesture applied to the display screen may be an action of touching the display screen, tapping it, or sliding on it. When the electronic device detects that the gesture the user applied to the display screen is the mode switching gesture corresponding to the structured-drawing mode, it switches the free-drawing mode to the structured-drawing mode.
In an embodiment of the present application, the electronic device may detect the user's mode switching gesture through a touch detection device: the touch detection device of the electronic device detects the positions of the user's touch points on the display screen, and the corresponding gesture is identified from those positions.
In some embodiments, the electronic device may also include an artificial intelligence chip connected to a camera device. In that case the user's gesture may be determined as follows: when the camera captures or records a gesture performed near the display screen, the artificial intelligence chip recognizes the captured gesture image to determine the user's gesture, and then sends the recognized gesture to the processor. This embodiment enables the user to switch the screen writing mode without touching the display screen or performing an action on it.
In a possible implementation of the first aspect, the screen writing mode includes a free-drawing mode, a structured-drawing mode, a circle-selection mode, and a mixed mode;
wherein the mixed mode comprises a circle selection mode and a free-drawing mode, or the mixed mode comprises a circle selection mode and a structured drawing mode.
It can be understood that, in the embodiment of the present application, the screen writing mode that the electronic device can support or the screen writing mode that the user can switch between may be a single screen writing mode, or may be a screen writing mode in which multiple screen writing modes coexist.
For example, after the user has switched the screen writing mode to the structured-drawing mode with a set gesture, the user may want to copy content while remaining in that mode. The user can then enter the circle selection mode with the gesture corresponding to it, at which point the structured-drawing mode and the circle selection mode exist simultaneously.
In a possible implementation of the first aspect, the first mode switching gesture includes any one of a touch, a tap, and a slide action of a user on a display screen of the electronic device.
It can be understood that the first mode switching gesture in the embodiments of the present application refers not only to a hand shape, such as a five-finger open or two-finger closed gesture, but may also include hand movements such as touch, tap, and slide actions.
In a possible implementation of the first aspect, the switching, by the electronic device, of the first writing mode into the second writing mode corresponding to the first mode switching gesture includes:
when the electronic device detects that the distance between the position where the first mode switching gesture acts on its display screen and the position of the user's written content is smaller than a set value, the electronic device switches the first writing mode to the second writing mode.
It can be understood that, in the embodiments of the present application, whether the electronic device performs the switch from the first writing mode to the second writing mode may be determined from the distance between the position where the first mode switching gesture acts on the display screen and the position of the user's written content. Adding this distance check before switching can effectively avoid unintended mode switches.
For example, when that distance is smaller than the set value, it can be preliminarily determined that the user is close to the written content, and therefore that the user needs to operate on it, for example with a circle selection operation. In that case the mode is switched.
When the distance is greater than the set value, it can be preliminarily determined that the user is far from the written content. For example, while explaining content on the display screen, the user may have accidentally touched it with a habitual gesture that happens to match a mode switching gesture of the electronic device; in that case the mode is not switched.
Specifically, in the embodiments of the present application, the distance between the position where the first mode switching gesture acts on the display screen and the position of the user's written content may be measured in the horizontal direction.
The position of the first mode switching gesture may be the center of the horizontal span between the two edge touch points among the gesture's touch points; it may also be the edge position of the touch point closest to the written content among all the touch points, or the edge position of the touch point farthest from it.
The position of the user's written content may be the starting point of the stylus, or the point in the written content nearest to (or farthest from) the mode switching gesture.
The set value may be the typical distance between a user's two hands when writing by hand, which may be, for example, 40-50 cm.
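As a sketch, the distance condition can be checked as follows; the choice of the gesture's horizontal center point and the 45 cm default follow the candidate definitions above, while the function and parameter names are assumptions:

```python
DEFAULT_THRESHOLD_CM = 45.0  # within the 40-50 cm two-hand span mentioned above

def should_switch_mode(gesture_touch_points, written_content_x,
                       threshold_cm=DEFAULT_THRESHOLD_CM):
    """Return True when the horizontal distance between the gesture and the
    written content is below the set value, i.e. the switch should proceed."""
    xs = [x for (x, _y) in gesture_touch_points]
    # Center of the horizontal span between the two edge touch points.
    gesture_x = (min(xs) + max(xs)) / 2.0
    return abs(gesture_x - written_content_x) < threshold_cm
```

A gesture far from the written content (for example, an accidental touch while pointing at the screen) then fails the check and does not trigger a switch.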
In a possible implementation of the first aspect, the position of the written content of the user is determined by a position of a writing pen held by the user, the writing pen being a writing instrument writing on a display screen of the electronic device.
In a possible implementation of the first aspect, when the electronic device detects that the first mode switching gesture of the user is disengaged from the display screen of the electronic device, the electronic device switches the second writing mode to the first writing mode.
It can be understood that, in some embodiments of the screen writing mode switching method provided herein, after the user switches modes by touching the display screen with a mode switching gesture, the switched mode may be maintained after the user's gesture leaves the display screen, until the next corresponding gesture is detected. For example, if the switched mode is the structured-drawing mode, that mode is maintained until the next corresponding gesture is detected.
In other embodiments, after the user switches modes by touching the display screen with a mode switching gesture, the mode returns to the original mode once the user's gesture leaves the display screen; to maintain the switched mode, the corresponding gesture must be held continuously. For example, if the mode before switching is the free-drawing mode and the switched mode is the structured-drawing mode, the display screen stays in the structured-drawing mode only while the user keeps applying the corresponding mode switching gesture to it; once the gesture leaves the display screen, the mode returns to the original free-drawing mode.
In a possible implementation of the first aspect, when the electronic device detects that a time for which a first mode switching gesture of a user is maintained exceeds a set time, or when the electronic device detects a mode fixing gesture of the user corresponding to a mode fixing function, the electronic device fixes the second writing mode.
It can be understood that, in the variant just described, after the user switches modes by touching the display screen with a mode switching gesture, the mode returns to the original mode once the gesture leaves the display screen, so keeping the switched mode requires holding the gesture continuously, which tires the hand and degrades the user experience.
Therefore, in the mode switching method provided by the embodiments of the present application, when the electronic device detects that the user's first mode switching gesture has been maintained longer than a set time, or when it detects the user's mode fixing gesture, it can directly fix the switched screen writing mode. The user then no longer needs to keep holding the gesture against the display screen, which effectively avoids hand fatigue.
It is understood that the electronic device may unpin the current mode in the event that other preset conditions are met. For example, the electronic device may unpin the current mode when detecting a gesture corresponding to a mode unpin function.
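A minimal state machine for this behavior, covering revert-on-release, fixing after a hold timeout or a dedicated fix gesture, and unfixing, might look as follows; the 3-second timeout and all method names are assumptions for illustration:

```python
class HoldToSwitch:
    FIX_AFTER_S = 3.0  # hypothetical "set time" after which the mode is fixed

    def __init__(self, base_mode="free_drawing"):
        self.base_mode = base_mode
        self.mode = base_mode
        self.fixed = False
        self._held_since = None

    def gesture_down(self, target_mode, now_s):
        """Mode switching gesture touches the screen: switch immediately."""
        self.mode = target_mode
        self._held_since = now_s

    def gesture_up(self, now_s):
        """Gesture leaves the screen: fix if held long enough, else revert."""
        if self._held_since is not None and now_s - self._held_since >= self.FIX_AFTER_S:
            self.fixed = True
        if not self.fixed:
            self.mode = self.base_mode
        self._held_since = None

    def fix_gesture(self):
        """A dedicated mode fixing gesture also fixes the current mode."""
        self.fixed = True

    def unfix_gesture(self):
        """A mode unfixing gesture releases the fix and reverts."""
        self.fixed = False
        self.mode = self.base_mode
```

A short hold therefore reverts on release, while a hold past the set time (or an explicit fix gesture) keeps the switched mode until it is unfixed.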
In a possible implementation of the first aspect, the electronic device displays only a screen writing mode tag corresponding to a current screen writing mode of the electronic device on a screen.
In the embodiments of the present application, displaying the screen writing mode tag corresponding to the current screen writing mode on the screen reminds the user when the user forgets the current mode or the correspondence between gestures and modes. That is, the user can determine the current screen writing mode from the tag displayed on the screen, and can also tell whether a mode switching gesture was triggered by mistake and the mode was switched to an unintended one.
It is to be appreciated that the screen writing mode tab may be a virtual mode tab.
In a possible implementation of the first aspect, the electronic device displays, on a screen, screen writing mode tags corresponding to all screen writing modes supported by the electronic device, and performs specific marking on the screen writing mode tag corresponding to a current screen writing mode of the electronic device.
It can be understood that the specific marking of the screen writing mode tag corresponding to the current mode may be a shadow, bold, or enlarged marking, so that the user can clearly see which mode the screen is currently in.
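The two display strategies (only the current tag, or all tags with the current one specially marked) can be illustrated with a toy text renderer, where bracketing stands in for the shadow/bold/enlarged marking; the function name and tag strings are assumptions:

```python
def render_mode_tags(supported_modes, current_mode, show_all=True):
    """Render mode tags as text; brackets mark the current mode
    (a stand-in for shadow/bold/enlarged marking on a real screen)."""
    if not show_all:
        # Strategy 1: display only the current screen writing mode tag.
        return f"[{current_mode}]"
    # Strategy 2: display all supported tags, specially marking the current one.
    return "  ".join(
        f"[{m}]" if m == current_mode else m for m in supported_modes
    )
```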
In a possible implementation of the first aspect, the electronic device is an intelligent collaboration large screen.
A second aspect of the embodiments of the present application provides an electronic device, including:
a display screen;
a memory for storing instructions to be executed by one or more processors of the electronic device; and
a processor, which is one of the processors of the electronic device and is configured to control the display screen to execute the screen writing mode switching method described above.
A third aspect of the embodiments of the present application provides a machine-readable medium, where the machine-readable medium has instructions stored thereon, and when the instructions are executed on a machine, the machine is caused to execute the screen writing mode switching method described above.
Drawings
FIG. 1 illustrates a scenario diagram for mode switching by key pressing, according to some embodiments of the present application.
FIG. 2 illustrates a scene diagram of a screen writing mode switch by gesture, according to some embodiments of the present application.
FIG. 3 illustrates a schematic diagram of an intelligent collaboration large screen, according to some embodiments of the present application.
FIG. 4 illustrates a diagram of an input system architecture in an intelligent collaborative large screen in relation to a mode switching method in an embodiment of the present application, according to some embodiments of the present application.
Fig. 5 illustrates a flow diagram of a mode switching method, according to some embodiments of the present application.
Fig. 6 illustrates a scene schematic of a mode switching method according to some embodiments of the present application.
FIG. 7 illustrates a setup diagram of a mode tag, according to some embodiments of the present application.
FIG. 8 illustrates a scene schematic of a circle selection mode, according to some embodiments of the present application.
FIG. 9 illustrates a scene schematic of a circle selection mode, according to some embodiments of the present application.
FIG. 10 illustrates a scene schematic of a circle selection mode, according to some embodiments of the present application.
FIG. 11 illustrates a scene diagram with a circle selection mode fixed, according to some embodiments of the present application.
FIG. 12 illustrates a scene diagram for determining distance based on initial stylus position and gesture position, according to some embodiments of the present application.
Detailed Description
The illustrative embodiments of the present application include, but are not limited to, a method, system, electronic device, and storage medium for switching functions. Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
For ease of understanding, the intelligent collaboration large screen is taken as an example, and the aforementioned writing modes are described first:
(I) Free writing and drawing mode
The free writing and drawing mode is a mode in which the intelligent collaboration large screen presents characters or patterns that follow the same track as the user's handwriting.
(II) structured writing and drawing mode
The structured writing and drawing mode is a mode in which the intelligent collaboration large screen recognizes the content handwritten by the user and converts it into a predefined format. For example, in writing, the structured drawing mode can apply Optical Character Recognition (OCR) to the user's handwritten characters and convert them into a standard typeface (such as the Song typeface) for display. For another example, in drawing, the structured writing and drawing mode can recognize a shape hand-drawn by the user 002 and output it as a standard shape (such as a line, an arc, or a circle).
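A toy stand-in for the shape half of structured recognition: if every point of a hand-drawn stroke lies near the chord between its endpoints, the stroke is output as a standard straight line. Real structured modes use OCR and full shape classifiers; the tolerance value and function name here are assumptions:

```python
import math

def standardize_stroke(points, tolerance=2.0):
    """Return ("line", [start, end]) when the stroke is nearly straight,
    else ("freehand", points). A toy shape standardizer."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0
    for x, y in points:
        # Perpendicular distance from (x, y) to the start-end chord.
        if abs(dy * (x - x0) - dx * (y - y0)) / length > tolerance:
            return ("freehand", points)
    return ("line", [points[0], points[-1]])
```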
(III) circle selection mode
The circle selection mode is a mode in which the intelligent collaboration large screen can circle content presented on it and copy or move the circled content to another position. For example, a user writes with a stylus in the free-drawing mode and, at some point, wants to copy or move part of the written content. The display screen recognizes the user's handwriting by stroke patterns: for a written word, in the default mode, the screen does not recognize it as a specific character but as several graphics, each corresponding to one stroke (a continuously written segment counts as one pattern). Therefore, unlike on a traditional display, the corresponding word cannot be selected simply by clicking on the written content; the user must enter the circle selection mode, select whole strokes or patterns, and then perform operations such as copying and moving.
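The stroke-level selection described above can be sketched as follows: a stroke is selected only as a whole, and only when all of its points fall inside the circled region. The circle is idealized here as a center and radius, and all names are illustrative:

```python
def circle_select(strokes, cx, cy, r):
    """Return the strokes lying entirely inside the circle (cx, cy, r).
    Strokes are selected whole, never split into sub-parts."""
    r2 = r * r
    return [
        stroke for stroke in strokes
        if all((x - cx) ** 2 + (y - cy) ** 2 <= r2 for x, y in stroke)
    ]

def move_selection(strokes, dx, dy):
    """Translate the selected strokes, as in the move operation."""
    return [[(x + dx, y + dy) for x, y in stroke] for stroke in strokes]
```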
It should be understood that the switching method is described in the present application by taking the above three writing modes as examples; however, the switching method provided herein can also be applied to switching among other writing modes and should not be limited to the above three. To switch among the writing modes, selection buttons can be arranged on the intelligent collaboration large screen. For example, fig. 1(a) and (b) show a scenario in which mode switching is performed by pressing keys.
As shown in fig. 1(a) and (b), in this scenario the user 002 draws on the intelligent collaboration large screen 001 with the stylus 006. The intelligent collaboration large screen 001 is provided with several mode keys; according to the corresponding modes, these can include a "painting brush" key 003 for the free-drawing mode, a "structured" key 004 for the structured-drawing mode, and a "circle selection" key 005 for the circle selection mode. The mode keys may be located at the lower middle of the display screen 102, and the three keys may be virtual keys implemented in software or hardware keys.
As shown in fig. 1(a), the intelligent collaboration large screen is currently in the free-drawing mode: if the user 002 writes the four characters "牛气冲天" on the display screen 102 in this mode, the characters displayed on the display screen 102 are the same as the handwriting of the user 002. If the user 002 needs to write more content, the handwritten strokes may be messy and take up much space, so the user may want to convert the current free-drawing mode into the structured-drawing mode. At this time, the user can select the structured-drawing mode by pressing the "structured" key 004 on the intelligent collaboration large screen. After conversion to the structured mode, the font handwritten by the user 002 is converted into a standard font. For example, as shown in fig. 1(b), if the user 002 presses the "structured" key 004, the four characters "牛气冲天" appear in the Song typeface on the display screen 102.
This mode switching method lets the user 002 switch functions by clicking the corresponding mode key. However, the mode keys are at a fixed position on the display screen 102, typically at its bottom, as shown for example in fig. 1. When the user 002 needs to switch modes during writing, the mode keys may be far from the writing position, so the user 002 must move a considerable distance to click them, which takes time. The distance also makes it difficult for the user 002 to read the labels on the mode keys, possibly requiring the user to move or tilt their head to see them. In addition, several keys are usually arranged together at the key position, and finding the right key takes time, which affects the smoothness of operation.
To solve the above problems, an embodiment of the present application provides a mode switching method that switches modes through gestures: when the user 002 needs to switch modes during writing, directly touching the display screen 102 with the set gesture completes the switch to the mode corresponding to that gesture, improving operating efficiency.
Fig. 2 shows a scenario of switching the screen writing mode by gesture. As shown in fig. 2, the current screen writing mode on the intelligent collaboration large screen is the free-drawing mode; that is, similar to the scenario in fig. 1(a), if the user 002 writes the four characters "牛气冲天" on the display screen 102 in this mode, the displayed characters are the same as the handwriting of the user 002. If the user 002 wants to change the current free-drawing mode into the structured-drawing mode, the user may touch the display screen with the set gesture corresponding to the structured-drawing mode, for example a two-finger touch, and the free-drawing mode is then switched to the structured-drawing mode, converting the handwritten font into a standard font. For example, as shown in fig. 2(b), after the intelligent collaboration large screen 001 detects the gesture of the user 002 touching the display screen 102 with two fingers, it converts the free-drawing mode into the structured-drawing mode and displays the four characters "牛气冲天" in the Song typeface on the display screen 102.
It can be understood that the technical solution of the present application can be applied to various electronic devices with touch screens, for example, electronic devices including but not limited to smart collaboration large screens, mobile phones, computers, and the like.
The electronic device provided by the present application does not need the three mode keys. Alternatively, the three mode keys may be retained: for example, as shown in fig. 2, although the mode keys are no longer used for switching, they can indicate which writing mode is currently active. Alternatively, instead of mode keys, three pieces of mode indication information may be provided; the "brush", "structured", and "circle" items in fig. 2 can then be understood as three icons whose color or shape indicates the current writing mode.
For convenience of description, the intelligent collaboration large screen 001 is explained as an example below.
Fig. 3 shows a schematic structural diagram of an intelligent collaboration large screen 001 suitable for the technical solution of the present application.
As shown in fig. 3, the intelligent collaboration large screen 001 may include:
the display screen 102: the device comprises a display interface and a writing and drawing interface. The display interface is used for displaying screen projection pictures of a computer or a mobile phone, and the writing and drawing interface is used for providing a writing and drawing interface for a user 002 to write and draw and displaying writing and drawing contents. The writing and drawing interface and the display interface can be freely switched. The display screen 102 may be an electronic whiteboard or a folding screen device, etc. It may be implemented that the display screen 102 may also be a capacitive touch screen.
The touch detection device 190: for recognizing the touch of the user 002.
If the display screen 102 is an electronic whiteboard, the touch recognition technology may adopt an infrared touch recognition technology. The touch detection device 190 may include an infrared matrix and a touch recognition chip disposed in front of the display screen 102; when the user 002 touches a certain position on the display screen 102, the infrared ray corresponding to the position will be shielded, and the touch recognition chip can recognize the shielded position, thereby determining the touch position.
If the display screen 102 is a capacitive screen, capacitive touch recognition technology can be adopted, in which case the touch detection device 190 comprises a touch recognition chip. When the user 002 touches the capacitive screen, the human body's electric field causes the finger of the user 002 and the working surface to form a coupling capacitor. Because the working surface carries a high-frequency signal, the finger draws a very small current, which flows out through the electrodes at the four corners of the capacitive screen. In theory, the current flowing through each of the four electrodes is proportional to the distance from the fingertip to the corresponding corner, and the touch recognition chip determines the touch position by precisely calculating the ratio of the four currents.
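The four-corner current model above can be illustrated with a small sketch. This is a simplified illustration under the proportionality relationship stated in the text; the function name and the exact formula are hypothetical, not taken from the patent:

```python
def locate_touch(i_tl, i_tr, i_bl, i_br, width, height):
    """Estimate the touch position from the four corner electrode currents.

    Simplified model: the share of current drawn toward the right-hand
    electrodes tracks the normalized x position, and the share drawn
    toward the top electrodes tracks the normalized y position.
    i_tl, i_tr, i_bl, i_br: currents at the top-left, top-right,
    bottom-left, and bottom-right electrodes.
    """
    total = i_tl + i_tr + i_bl + i_br
    if total == 0:
        return None  # no touch detected
    x = width * (i_tr + i_br) / total
    y = height * (i_tl + i_tr) / total
    return (x, y)
```

With equal currents at all four corners, the estimate falls at the center of the screen, matching the intuition that the finger is equidistant from the electrodes.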
The touch recognition chip can comprise a touch logic unit. The touch logic unit can send the detected touch position to the processor, or can determine the touch gesture from the detected touch position and send the gesture to the processor.
The processor 110: may include one or more processing units, for example processing modules or processing circuits such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), an artificial intelligence (AI) processor, or a field-programmable gate array (FPGA). The different processing units may be separate devices or may be integrated into one or more processors. The processor can determine the output mode corresponding to a position according to the detected touch position, or can determine the output mode corresponding to a gesture according to the touch gesture, to complete the mode switching.
A memory 180 or buffer may be provided in the processor 110 for storing instructions and data. The memory 180 or buffer may store a correspondence between touch positions and modes, or between gestures and modes. The correspondence may be stored in the memory 180 in advance; after receiving the corresponding touch position or gesture, the processor 110 looks up whether a corresponding mode exists in the memory 180, and if so, the processor 110 switches to that mode.
It is understood that the gestures mentioned in the embodiments of the present application may also include corresponding gesture motions, for example a motion of bringing two fingers together, a motion of a knuckle repeatedly tapping the display screen, and the like.
The correspondence between gestures or gesture motions and output modes stored in the memory 180 is listed in table 1. [Table 1 is reproduced in the publication only as an image (Figure BDA0003148459420000081). From the surrounding text, it includes at least: two fingers held together touching the screen (two-point touch) corresponding to the structured writing and drawing mode; five fingers spread touching the screen (five-point touch) corresponding to the circle selection mode; and a knuckle double-tap corresponding to fixing the current mode.]
it is understood that the corresponding relationship between the gesture or gesture motion and the output mode in table 1 is merely an exemplary description, and the output mode may also correspond to other gestures, for example, the structured writing and drawing mode may also adopt a three-point touch mode to implement switching, and the like.
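The lookup the processor performs against such a gesture-to-mode table can be sketched as follows. This is a minimal illustration only; the gesture names and mode strings are placeholders that mirror correspondences discussed in this application, not the patent's actual table contents:

```python
# Hypothetical gesture-to-mode table in the spirit of Table 1.
GESTURE_MODE_TABLE = {
    "two_point_touch": "structured_drawing",
    "five_point_touch": "circle_selection",
    "three_point_touch": "free_drawing",
}

def switch_mode(current_mode, gesture):
    """Return the mode after handling `gesture`.

    If the gesture has no entry in the table, the current mode is
    kept, i.e. the processor performs no switch.
    """
    return GESTURE_MODE_TABLE.get(gesture, current_mode)
```

An unrecognized gesture leaves the mode unchanged, matching the behavior described later in step S504.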
The power module 140 may include a power supply, power management components, and the like. The power source may be a battery. The power management component is used for managing the charging of the power supply and the power supply of the power supply to other modules. In some embodiments, the power management component includes a charge management module and a power management module. The charging management module is used for receiving charging input from the charger; the power management module is used for connecting a power supply, the charging management module and the processor 110. The power management module receives power and/or charge management module input and provides power to the processor 110, the display 102, the camera 170, and the wireless communication module 120.
The wireless communication module 120 may include an antenna, and implement transceiving of electromagnetic waves via the antenna. The wireless communication module 120 may provide a solution for wireless communication applied to the smart cooperative large screen 001, including Wireless Local Area Network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The intelligent collaboration large screen 001 may communicate with a network and other devices through wireless communication technology.
In some embodiments, the mobile communication module 130 and the wireless communication module 120 of the intelligent collaboration large screen 001 may also be located in the same module.
The audio module 150 is used to convert digital audio information into an analog audio signal output or convert an analog audio input into a digital audio signal. The audio module 150 may also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be disposed in the processor 110, or some functional modules of the audio module 150 may be disposed in the processor 110. In some embodiments, audio module 150 may include speakers, an earpiece, a microphone, and a headphone interface.
The camera 170 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element converts the optical signal into an electrical signal and transmits it to an image signal processor (ISP) to be converted into a digital image signal. The intelligent collaboration large screen 001 can realize a shooting function through the ISP, the camera 170, a video codec, the GPU, the display screen 102, the application processor, and the like.
The interface module 160 includes an external memory interface, a universal serial bus (USB) interface, and the like. The external memory interface can be used for connecting an external memory card, such as a Micro SD card, to expand the storage capability of the intelligent collaboration large screen 001. The external memory card communicates with the processor 110 through the external memory interface to implement a data storage function. The universal serial bus interface is used for the intelligent collaboration large screen 001 to communicate with other electronic equipment.
In some embodiments, the smart collaboration large screen 001 also includes a key 101. The keys 101 may include a volume key, an on/off key, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the intelligent collaboration large screen 001. In other embodiments of the present application, the intelligent collaboration large screen 001 may include more or fewer components than shown, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 4 shows an input system architecture diagram related to the mode switching method of the present application in the above-described intelligent collaboration large screen 001. Specifically, as shown in fig. 4, the relevant components of the input system are distributed in a hardware layer 410, a Kernel layer 420, a framework layer 430, and an application layer 440. The hardware layer 410 includes, but is not limited to, various input devices, such as the key (Key) 101, the display screen 102, and the like. The hardware layer 410 further includes an input module corresponding to each input device, so as to generate an input event (Input Event) in response to an operation performed by the user 002 through the input device and send the input event to the Kernel layer 420 for processing. For example, the input event may be text input by the user 002 on the display screen in the structured writing and drawing mode, and the display screen may send the detected input event to the Kernel layer 420 for processing.
The Kernel layer 420 receives the input event and stores it in an event hub (EventHub) for the framework layer 430 to read. The Kernel layer 420 includes but is not limited to an event processing layer 421, an input core layer 422 and a device driver layer 423. The event processing layer 421 is mainly responsible for interacting with the upper layers and contains input handlers (Input Handler), each of which represents one event handler. The input core layer 422 plays a connecting role, providing a common interface for the event processing layer 421 and the device driver layer 423. The device driver layer 423 is responsible for interfacing with the underlying input devices and obtaining event information from them, and includes drivers such as the touch screen driver (Touchscreen Driver). For example, the display screen 102 sends a detected input event to the Kernel layer 420, where it is acquired by the touch screen driver of the device driver layer 423; the input core layer 422 provides the interface through which the input event acquired by the device driver layer 423 is sent to the event processing layer 421 for processing.
The framework layer 430 includes, but is not limited to, an input manager service (InputManagerService) 431 and a window manager 432. The input manager service 431 manages two processing threads: an input reader thread and an input dispatcher thread. The input reader thread (InputReaderThread) reads input events from the EventHub and adds them to a queue, then notifies the input dispatcher thread (InputDispatcherThread) to process them; after receiving an input event, the InputDispatcherThread dispatches it according to the state of the current input channel.
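The reader/dispatcher split described above can be mimicked with a toy producer-consumer pipeline. This is an illustrative sketch only, not Android framework code; the class and method names are invented:

```python
import queue
import threading

class MiniInputPipeline:
    """Toy sketch of the reader/dispatcher split: a reader thread pulls
    events from an EventHub-like source into a queue, and a dispatcher
    thread delivers them to the current input channel."""

    def __init__(self, source, sink):
        self.source = source  # callable returning the next event, or None when drained
        self.sink = sink      # callable receiving each dispatched event
        self.q = queue.Queue()

    def _reader(self):
        while True:
            event = self.source()
            if event is None:       # hub drained; signal the dispatcher to stop
                self.q.put(None)
                return
            self.q.put(event)       # enqueue, like InputReaderThread

    def _dispatcher(self):
        while True:
            event = self.q.get()
            if event is None:
                return
            self.sink(event)        # deliver, like InputDispatcherThread

    def run(self):
        threads = [threading.Thread(target=self._reader),
                   threading.Thread(target=self._dispatcher)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
```

The queue between the two threads is what lets events accumulate while the input channel is busy, which is the behavior the window-manager example below relies on.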
The window manager 432 is used to manage window programs; it can obtain the size of the display screen, determine whether there is a status bar, lock the screen, and capture screenshots. For example, in the above example, the input manager service 431 of the framework layer 430 reads an input event from the EventHub, queues it, and dispatches it when an input channel is idle. For another example, when the mode of the writing and drawing interface is fixed, the window manager 432 can determine that the writing and drawing interface is in a fixed-mode state.
The application layer 440 may include, but is not limited to, an input method application (InputApplication) 441, a memo application 442, a browser application, and the like. The input event receiver (InputEventReceiver) of the input method application 441 receives the dispatched input events and processes the corresponding input events through callbacks.
It is to be understood that the structure shown in fig. 4 does not constitute a structural limitation on the Input subsystem in the intelligent collaboration large screen 001, and in other embodiments, the process of implementing the solution of the present invention may also involve other more or fewer structures or modules, which is not limited herein.
The mode switching method according to the embodiment of the present application will now be described in detail with reference to the above application scenario and the structure of the smart collaboration large screen 001.
Fig. 5 is a flowchart illustrating a mode switching method according to an embodiment of the present application. As shown in fig. 5, the method includes:
s501, the user 002 touches the display screen 102 with a gesture.
For example, as shown in fig. 6(a), on the writing and drawing interface of the display screen 102, the user 002 is currently in the free writing and drawing mode. If the four characters "牛气冲天" are written on the display screen 102 in the free writing and drawing mode, characters identical to the handwriting 006 of the user 002 are shown on the display screen 102 in fig. 6(a). The user 002 now wants to convert the free writing and drawing mode to the structured writing and drawing mode, and can touch the display screen 102 with a two-fingers-held-together gesture.
S502, the touch detection device 190 of the intelligent cooperative large screen 001 detects the position of the touch point of the user 002, and identifies a corresponding touch gesture according to the position of the touch point of the user 002.
In the embodiment of the present application, the position of the touch point of the user 002 can be detected by the touch recognition chip of the touch detection device 190. As shown in fig. 6(a), the user 002 touches the display screen 102 with the two-finger-closing gesture, and in some embodiments, the touch detection device 190 detects the touch points at two positions by the touch recognition chip, i.e. the two-point touch gesture can be determined.
However, this detection and recognition approach may lead to unexpected mode switching. For example, when the user 002 carelessly touches the display screen 102 with two fingers, the touch recognition chip of the touch detection device 190 will also recognize the two touch points even though the user 002 does not actually intend to switch the free writing and drawing mode to the structured writing and drawing mode, which may result in an unexpected mode switch.
In other embodiments, the touch detection device 190 detects and recognizes as follows: the touch recognition chip detects touch points at two positions and further determines that the distance between the two touch points is within a set range, and only then is the two-point touch gesture confirmed. The set range may be no larger than the distance between the center lines of two fingers held together, for example 0-2 cm; in that case it can be determined that the user 002 is touching the display screen 102 with two fingers held together, and the intention of the user 002 is accurately judged. This detection and recognition approach, which adds a distance judgment between the two touch points, can effectively prevent unexpected mode switching.
In addition, the touch detection chip can only detect touch points; it cannot detect whether multiple touch points belong to the same hand or the same person. Therefore, as long as the detected touch point positions meet the set condition, the corresponding gesture is judged accordingly. For example, if the user 002 brings the index fingers of both hands together and touches the display screen 102, the touch recognition chip of the touch detection device 190 detects touch points at two positions whose distance is within the set range, so the two-point touch gesture is still determined.
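The two-point detection with a distance check can be sketched as follows. This is an illustrative sketch: the threshold value follows the 0-2 cm range given above, and the function name is hypothetical:

```python
import math

MAX_CLOSED_FINGER_GAP_CM = 2.0  # upper end of the 0-2 cm range in the text

def is_two_finger_closed_gesture(points_cm):
    """Classify a set of touch points as a two-fingers-held-together gesture.

    points_cm: list of (x, y) touch coordinates in centimetres.
    Returns True only when exactly two points are detected and their
    separation is within the configured range, which filters out
    accidental two-point contacts that land far apart.
    """
    if len(points_cm) != 2:
        return False
    (x1, y1), (x2, y2) = points_cm
    return math.hypot(x2 - x1, y2 - y1) <= MAX_CLOSED_FINGER_GAP_CM
```

Note that, exactly as the text observes, two index fingers from different hands held close together satisfy the same condition, since only point positions are available to the chip.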
It can be understood that, in this embodiment of the application, the intelligent cooperative large screen 001 may include the touch detection device 190, and therefore, the manner of determining the gesture of the user 002 may be a manner of detecting the touch point position of the user 002 by using the touch detection device 190 of the intelligent cooperative large screen 001, and identifying a corresponding touch gesture according to the touch point position of the user 002.
In some embodiments, the intelligent collaboration large screen 001 may also include an artificial intelligence chip connected to the camera 170. In that case the gesture of the user 002 may also be determined as follows: when the camera 170 photographs or records a gesture made near the display screen 102, the artificial intelligence chip recognizes the photographed or recorded gesture image to determine the gesture of the user 002, and then sends the recognized gesture to the processor. This embodiment enables the user 002 to switch the screen writing mode with a specific gesture without touching the display screen 102 or performing a corresponding action on it.
The artificial intelligence chip can include a gesture collection module and a gesture recognition module. Specific gestures and the screen writing modes corresponding to them can be stored and set through the gesture collection module; for example, a two-fingers-held-together gesture image can be entered into the gesture collection module and its corresponding screen writing mode set to the structured writing and drawing mode. The photographed or recorded gesture image can then be recognized by the gesture recognition module to determine the gesture of the user 002.
And S503, the touch detection device 190 sends the recognized touch gesture to the processor 110.
S504, the processor 110 determines a corresponding mode according to the touch gesture recognized by the touch detection device 190, and switches the current mode to the mode corresponding to the touch gesture.
After receiving the gesture recognized by the touch detection device 190, the processor 110 searches the stored corresponding relationship between the gesture and the mode from the memory, where the corresponding relationship may refer to table 1, and if the gesture recognized by the touch detection device 190 has a corresponding mode, the processor 110 switches the current mode to the mode corresponding to the gesture recognized by the touch detection device 190. If the gesture recognized by the touch detection device 190 does not have the corresponding mode, the processor 110 does not switch the mode.
For example, if the gesture received by the processor 110 and recognized by the touch detection device 190 is a two-point touch gesture, the processor 110 searches the memory for the mode corresponding to the two-point touch gesture, determines that the mode corresponding to the two-fingers-held-together gesture is the structured writing and drawing mode, and switches the current free writing and drawing mode to the structured writing and drawing mode. After the switch to the structured mode, the characters handwritten by the user 002 are converted into a standard typeface, such as Song (SimSun), as shown in fig. 6(b). For example, the handwritten "牛气冲天" in fig. 6(a) is automatically converted into "牛气冲天" in the Song typeface; alternatively, if the user 002 writes the four characters "牛气冲天" on the display screen 102 in the structured writing and drawing mode, the four characters "牛气冲天" in the Song typeface are shown on the display screen 102 as in fig. 6(b).
It can be understood that in the structured writing and drawing mode mentioned in this embodiment, a structured output may be performed each time the user 002 finishes drawing one line or finishes writing one character. For example, when a square is drawn, OCR recognition may be performed immediately every time a straight line is completed, which is then converted into a standard straight line. It is also possible to perform OCR recognition after detecting that the stylus 006 has been separated from the screen for a preset time. For example, when the user 002 draws a complete square and the stylus 006 is detected to have left the screen for 3 seconds, OCR recognition is performed to recognize the figure as a square. Alternatively, when the user 002 finishes writing one or more characters and the stylus 006 is detected to have left the screen for 3 seconds, OCR recognition is performed on the character or characters.
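The "run OCR after the stylus has been off the screen for a preset time" behavior above can be sketched as a small debounce helper. This is illustrative only; the class structure and names are assumptions, and the 3-second delay follows the example in the text:

```python
import time

class StructuredOutputTrigger:
    """Fire OCR once the stylus has been off the screen for `delay_s` seconds.

    A clock callable is injected so the timing logic is testable
    without real waiting.
    """

    def __init__(self, delay_s=3.0, clock=time.monotonic):
        self.delay_s = delay_s
        self.clock = clock
        self.lift_time = None  # time the stylus last left the screen

    def on_stylus_down(self):
        self.lift_time = None  # writing resumed; cancel any pending OCR

    def on_stylus_up(self):
        self.lift_time = self.clock()

    def should_run_ocr(self):
        return (self.lift_time is not None
                and self.clock() - self.lift_time >= self.delay_s)
```

Putting the stylus back down before the delay elapses cancels the pending recognition, so a pause mid-character does not trigger a premature conversion.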
In some embodiments of the mode switching method provided in this application, after the user 002 performs a mode switch by gesture touch, the switched mode is kept even after the hand leaves the display screen 102, until the corresponding gesture is detected the next time and another mode switch is performed. For example, the display screen 102 currently remains in the structured writing and drawing mode; when the user 002 wants to return to the free writing and drawing mode, the two fingers can again be held together to touch the display screen 102, switching the structured writing and drawing mode back to the free writing and drawing mode. A gesture corresponding to the free writing and drawing mode, for example a three-point touch gesture, may also be set.
In this case, multiple modes may coexist. For example, after the user switches the screen writing mode to the structured writing and drawing mode by the set gesture, and wants to copy content while the structured writing and drawing mode is maintained, the user may enter the circle selection mode by the gesture corresponding to the circle selection mode; at this time the structured writing and drawing mode and the circle selection mode exist simultaneously.
In other embodiments of the mode switching method provided in this application, after the user 002 performs a mode switch by gesture touch, the mode returns to the original mode once the user's hand leaves the display screen 102; to maintain the switched mode, the corresponding gesture would have to be held continuously, which tires the hand and degrades the experience of the user 002. Therefore, the mode switching method provided by the embodiment of the present application further includes the following step:
if the preset condition is met, the processor fixes the current mode.
According to the embodiment of the present application, the current output mode can be fixed once a preset condition is met, so the gesture does not need to be maintained continuously. The preset condition may be that the set gesture is maintained for more than a predetermined time, for example 5 seconds. For example, if the current mode of the user 002 is the structured writing and drawing mode and the user 002 wants to fix the structured writing and drawing mode, the two fingers are held together touching the display screen 102 for 6 seconds; the processor determines that the time for which the user 002 has maintained the set gesture exceeds the predetermined time and meets the preset condition, and then fixes the structured writing and drawing mode.
The preset condition may also be touching the display screen 102 with a fixed gesture or gesture motion, for example the knuckle double-tap gesture motion shown in table 1. When the user 002 wants to fix the structured writing and drawing mode, the knuckle is used to double-tap the display screen 102 while the two fingers are held together touching the display screen 102, as shown in fig. 6(c). It is also possible to double-tap the display screen 102 with the knuckle within a preset time, for example within 3 seconds, after the two fingers have touched the display screen 102 and then left it. If the processor judges that the preset condition is met, it fixes the structured writing and drawing mode.
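The alternative preset conditions for fixing the mode can be combined in one decision helper. This is an illustrative sketch; the constants follow the 5-second hold and 3-second knuckle window described above, and the function signature is invented:

```python
HOLD_THRESHOLD_S = 5.0   # hold the set gesture this long to fix the mode
KNUCKLE_WINDOW_S = 3.0   # knuckle double-tap must follow release within this window

def should_fix_mode(hold_duration_s=0.0,
                    knuckle_double_tap=False,
                    seconds_since_release=None):
    """Decide whether the current mode should be fixed.

    Two alternative preset conditions from the text:
    - the set gesture was held for at least HOLD_THRESHOLD_S seconds, or
    - a knuckle double-tap arrived while the gesture was still held
      (seconds_since_release is None) or within KNUCKLE_WINDOW_S
      seconds after the fingers left the screen.
    """
    if hold_duration_s >= HOLD_THRESHOLD_S:
        return True
    if knuckle_double_tap:
        if seconds_since_release is None:
            return True  # double-tap while the gesture is still held
        return seconds_since_release <= KNUCKLE_WINDOW_S
    return False
```

The same predicate could be reused for unfixing the mode, since the text states the cancel conditions may mirror the fix conditions.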
It is understood that the processor may unpin the current mode in case a preset condition is met; the preset conditions may be the same as the preset conditions for fixing the current mode, and are not described herein again.
After the fixing of the current mode is cancelled, the mode may become the unfixed current mode, or may directly revert to the default state, i.e. the free writing and drawing mode. For example, if the current mode is the fixed state of the structured writing and drawing mode, after the fixing is cancelled it can be converted into the normal state of the structured writing and drawing mode, i.e. the unfixed state of the structured writing and drawing mode, or it can be directly converted into the free writing and drawing mode.
It can be understood that with the mode switching method provided in the embodiment of the present application, multiple modes may also coexist on the basis of a fixed mode. For example, as shown in fig. 6(d), while the structured writing and drawing mode is maintained, the circle selection mode may be entered through the five-finger touch gesture corresponding to the circle selection mode; at this time the structured writing and drawing mode and the circle selection mode exist simultaneously.
In this embodiment of the present application, to prevent the user 002 from forgetting the current mode or the correspondence between gestures and modes, the mode switching method may further include setting mode labels on the smart collaboration large screen 001 and marking the mode through these virtual labels. Here, a mode label is not a key as shown in fig. 1, but a virtual label capable of displaying the current mode. For example, the mode labels may include a "brush" label 007 corresponding to the free writing and drawing mode, a "structured" label 008 corresponding to the structured writing and drawing mode, and a "circle selection" label 009 corresponding to the circle selection mode. The labels may be arranged at the lower middle, left, or right of the display screen 102, or the like. The mode switching method provided by the embodiment of the present application includes several ways of marking the mode through the virtual labels:
In one embodiment, the display screen 102 displays only the mode label corresponding to the current mode. For example, as shown in fig. 7(a), if the current mode is the free writing and drawing mode, only the "brush" label 007 corresponding to the free writing and drawing mode is displayed below the display screen 102 to remind the user 002 that the current mode is the free writing and drawing mode.
In another implementation, the display screen 102 may display the mode labels corresponding to all modes, with the mode label corresponding to the current mode specially marked: for example, it can be shaded or enlarged to remind the user 002. As shown in fig. 7(b), in the free writing and drawing mode, the mode labels displayed below the display screen 102 include the "brush" label 007, the "structured" label 008, and the "circle selection" label 009, where the "brush" label 007 corresponding to the free writing and drawing mode is shaded to remind the user 002 that the current mode is the free writing and drawing mode. Alternatively, as shown in fig. 7(c), the "brush" label 007 corresponding to the free writing and drawing mode is enlarged to remind the user 002 that the current mode is the free writing and drawing mode.
In some embodiments, the labels may be displayed or hidden depending on the output mode of the display screen 102. For example, in the structured mode and the circle selection mode, the corresponding mode label may be displayed, while in the free writing and drawing mode, which is the normal default mode, the corresponding "brush" label 007 may be displayed or hidden. That is, if no mode label appears on the display screen 102, or no label is highlighted, the user 002 knows that the current mode is the default mode.
In other embodiments, a mode label may be displayed only when the display screen 102 detects a gesture input or the stylus 006. For example, in the structured writing and drawing mode, when the user 002 is not writing on the display screen 102 and is not touching it with any gesture, the "structured" label 008 corresponding to the structured writing and drawing mode is not displayed or highlighted.
In the embodiment of the present application, the mode switching method has been described taking the switch from the free writing and drawing mode to the structured writing and drawing mode as an example. It can be understood that switching between other modes can also be realized by the above mode switching method. For example, to switch the free writing and drawing mode to the circle selection mode, following the steps shown in fig. 8, the two-fingers-held-together touch corresponding to the structured writing and drawing mode is simply replaced by the five-fingers-spread touch corresponding to the circle selection mode.
For example, as shown in the schematic diagram of fig. 8(a), on the writing and drawing interface of the display screen 102, the user 002 is currently in the free writing and drawing mode. If the four characters "牛气冲天" are written on the display screen 102 in the free writing and drawing mode, characters identical to the handwriting 006 of the user 002 are displayed on the display screen 102 in fig. 8(a). The user 002 now wants to convert the free writing and drawing mode to the circle selection mode, and can touch the display screen 102 with a five-fingers-spread gesture.
The touch recognition chip of the touch detection device 190 detects touch points at five positions, so the five-point touch gesture can be determined. The touch detection device 190 sends the recognized five-point touch gesture to the processor. If the gesture received by the processor and recognized by the touch detection device 190 is a five-point touch gesture, the processor searches the memory for the mode corresponding to the five-point touch gesture, determines that it is the circle selection mode, and switches the current free writing and drawing mode to the circle selection mode. After the switch to the circle selection mode, as shown in fig. 8(b), the user 002 can circle content that the user 002 has written; for example, the user can copy or move the character "牛" by circling it.
In some embodiments, the mode switching method provided in this embodiment of the present application may further include: after the mode is switched to the circle selection mode, the processor performs circle selection output according to a circle selection rule.
In some embodiments, the circle selection rule may include: a selection box is drawn, and a pattern is selected as long as it lies within the drawn selection box. For example, in fig. 8(b), the character "牛" is inside the circled box and the character "气" has no strokes touched by the box, so only "牛" is selected. If the character "气" is also within the circled box, as in fig. 9, the character "气" will also be selected.
In one implementation, the selection box may be a closed pattern determined according to the handwriting of the stylus 006. The content inside the closed pattern, or strokes that the closed pattern passes through, may be selected. The selection box facilitates fine selection of the content to be selected; for example, when free drawing is messy, the selection box can accurately pick out the content to be selected. As mentioned above for the circle selection mode, the display screen 102 recognizes what the user 002 draws with the stylus 006 as patterns: for a character, in the default mode the display screen 102 does not recognize it as a specific character but as a plurality of patterns, each corresponding to one stroke (continuous writing is treated as one pattern).
The handwriting of the stylus 006 may form a closed pattern or a non-closed pattern. As shown by the left "day" character in fig. 10, when the handwriting of the stylus 006 is a closed pattern, the content inside the closed pattern, or the pattern through which the handwriting passes, is selected; here the "day" character is selected. When the handwriting of the stylus 006 is not closed, the system can complete it into a closed pattern. Taking the "big" character in fig. 10 as an example, when the "big" character is circled, the stylus 006 passes through the three strokes of the "big" character, but the handwriting is not yet closed when the stylus 006 leaves the screen; at this time, the processor may draw a straight line between the starting point and the ending point of the handwriting, as shown by the dotted line in the figure, to form a closed pattern, so that the "big" character inside the closed pattern is selected. For the "day" character on the right side of fig. 10, the handwriting of the stylus 006 does not pass through the last stroke of the character, i.e., the character is not completely circled; however, the processor connects the head and the tail of the circled box with a straight line, as shown by the dotted line in the figure, and the resulting pattern passes through the last stroke of the "day" character, so that stroke is also selected.
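The auto-closing rule above can be sketched as follows, assuming a stroke is a list of (x, y) points. The lasso is closed by joining its end back to its start, and a stroke is selected when any of its points falls inside the resulting polygon; a full implementation would also test for strokes the lasso merely crosses, which is omitted here for brevity:

```python
def close_path(points):
    """Return the lasso stroke as a closed polygon (append start if needed)."""
    if points[0] != points[-1]:
        return points + [points[0]]
    return points

def point_in_polygon(pt, polygon):
    """Standard ray-casting point-in-polygon test; polygon must be closed."""
    x, y = pt
    inside = False
    n = len(polygon) - 1  # last vertex equals the first
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[i + 1]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_strokes(lasso, strokes):
    """Select every stroke with at least one point inside the closed lasso."""
    polygon = close_path(lasso)
    return [s for s in strokes if any(point_in_polygon(p, polygon) for p in s)]

# Example: an open, roughly square lasso around one of two strokes.
lasso = [(0, 0), (10, 0), (10, 10), (0, 10)]  # not closed
inside_stroke = [(5, 5), (6, 5)]
outside_stroke = [(20, 20), (21, 20)]
print(select_strokes(lasso, [inside_stroke, outside_stroke]))
# → [[(5, 5), (6, 5)]]
```

The straight segment from the last point back to the first plays the role of the dotted line in fig. 10, so an incompletely circled character is still captured.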
In another implementation, the selection box may also have a predetermined shape, for example a square or a circle. The selection box may be formed as follows: with one point as a base point and another point as an end point, the trajectory of the stylus 006 forms a straight line from the base point to the end point, and the processor automatically forms a square selection box with the straight line as its diagonal, or a circular selection box with the straight line as its diameter; the content within the selection box is the selection target.
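The predetermined-shape boxes can be derived from the drag line with elementary geometry; a sketch (function names are illustrative, not from the patent):

```python
import math

def square_from_diagonal(base, end):
    """Square whose diagonal is the drag line; returns its four corners.

    The other two corners lie at the midpoint offset by the half-diagonal
    rotated 90 degrees.
    """
    (x1, y1), (x2, y2) = base, end
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    hx, hy = (x2 - x1) / 2, (y2 - y1) / 2
    return [(x1, y1), (cx - hy, cy + hx), (x2, y2), (cx + hy, cy - hx)]

def circle_from_diameter(base, end):
    """Circle whose diameter is the drag line: returns (center, radius)."""
    (x1, y1), (x2, y2) = base, end
    center = ((x1 + x2) / 2, (y1 + y2) / 2)
    radius = math.hypot(x2 - x1, y2 - y1) / 2
    return center, radius

print(square_from_diagonal((0, 0), (10, 0)))
# → [(0, 0), (5.0, 5.0), (10, 0), (5.0, -5.0)]
print(circle_from_diameter((0, 0), (6, 8)))
# → ((3.0, 4.0), 5.0)
```

Either shape can then be fed into the same hit-testing step used for the freehand selection box.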
It is to be understood that, in this embodiment, if a preset condition is satisfied, the processor may fix the circle selection mode. The preset condition is identical to that described in step S805. For example, as shown in fig. 11, the circle selection mode may be fixed by double-tapping the display screen 102 with the knuckles while the five open fingers remain in contact with the display screen 102.
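A sketch of this fix-or-revert behavior, combining the fix-gesture condition described here with the hold-time condition in step S805 and claim 7 (the hold threshold and event names are illustrative assumptions):

```python
HOLD_FIX_SECONDS = 2.0  # assumed "set time" after which the mode is fixed

class ModeSession:
    """Tracks a temporary mode entered by a switching gesture."""
    def __init__(self, previous_mode, temporary_mode):
        self.previous_mode = previous_mode
        self.mode = temporary_mode
        self.fixed = False

    def on_fix_gesture(self):
        # e.g. a knuckle double-tap while the five fingers stay on the screen
        self.fixed = True

    def on_gesture_lift(self, held_seconds):
        """When the gesture leaves the screen, fix or revert the mode."""
        if held_seconds >= HOLD_FIX_SECONDS:
            self.fixed = True
        if not self.fixed:
            self.mode = self.previous_mode  # revert to the prior mode
        return self.mode

s = ModeSession("free_draw", "circle_select")
print(s.on_gesture_lift(held_seconds=0.5))   # → free_draw (reverted)

s2 = ModeSession("free_draw", "circle_select")
s2.on_fix_gesture()
print(s2.on_gesture_lift(held_seconds=0.5))  # → circle_select (fixed)
```

Without either condition the temporary mode lasts only as long as the gesture touches the screen, which matches claim 6.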
In some embodiments, the user 002 may have some habitual actions while writing, and these habitual actions may coincide exactly with a mode switching gesture of the intelligent collaboration large screen 001, which would result in an unintended mode switch. Taking the display screen 102 as an electronic whiteboard as an example, some users 002 naturally open one hand and press the whiteboard screen with five fingers when writing or explaining; at this time, the mode switching method provided by the embodiment of the present application will switch to the circle selection mode. However, the user 002 has no need to switch the current mode to the circle selection mode at this moment, and thus an unintended mode switch occurs.
In order to avoid the foregoing situation, the screen writing mode switching method provided in the embodiment of the present application may further include the following step: whether the intelligent collaboration large screen 001 performs the operation of switching the current mode to another screen writing mode is determined according to the distance between the position at which the mode switching gesture acts on the display screen 102 of the intelligent collaboration large screen 001 and the position of the written content of the user 002.
Specifically, in the embodiment of the present application, the distance between the position at which the mode switching gesture acts on the display screen 102 and the position of the written content of the user 002 may be the horizontal distance between these two positions.
The position of the mode switching gesture may be the center of the horizontal span between the two edge touch points among the plurality of touch points of the mode switching gesture; it may also be the edge position of the touch point closest to the written content among all the touch points, or the edge position of the touch point farthest from the written content among all the touch points of the mode switching gesture.
The position of the written content of the user 002 may be the position of the starting point of the stylus 006, or the position in the written content that is nearest to or farthest from the mode switching gesture. The position of the starting point of the stylus 006 may be determined through communication between the stylus 006 and the display screen 102.
The set value L may be determined based on the distance between the two hands of the user 002, and may be, for example, 40 to 50 cm.
When the distance between the position of the written content of the user 002 and the position of the mode switching gesture is smaller than the set value L, it can be preliminarily determined that the user 002 is close to the written content, and it can be concluded that the user 002 intends to operate on the written content, for example, to select some content and move it; at this time, mode switching is performed. When the distance between the position of the written content of the user 002 and the position of the mode switching gesture is greater than the set value L, it can be preliminarily determined that the user 002 is far away from the written content and may have touched the screen unintentionally while explaining the content on the display screen 102; at this time, mode switching is not performed.
For example, the position of the mode switching gesture may be the center of the horizontal span between the two edge touch points among the plurality of touch points. When the user 002 touches the display screen 102 with five open fingers and the distance between the starting point B of the stylus 006 and the finger center A (the center of the five touch points) is L1, where L1 is smaller than the set value L, the five-finger touch switches the mode to the circle selection mode. As shown in fig. 12(b), when the distance between the starting point B of the stylus 006 and the finger center A is L2, where L2 is greater than the set value L, no mode switching is performed after the five-finger touch, and the intelligent collaboration large screen 001 remains in the current free-drawing mode.
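A minimal sketch of this distance guard, assuming the gesture position is the center of the horizontal span of the touch points and the written-content position is the stylus starting point; the 45 cm threshold (within the 40-50 cm range above) and the pixel-to-centimeter scale are illustrative assumptions:

```python
SET_VALUE_L_CM = 45  # assumed threshold within the suggested 40-50 cm range

def gesture_center(touch_points):
    """Center of the horizontal span between the two edge touch points."""
    xs = [x for x, _ in touch_points]
    y = sum(y for _, y in touch_points) / len(touch_points)
    return ((min(xs) + max(xs)) / 2, y)

def should_switch(touch_points, pen_start, px_per_cm=1.0):
    """Switch only if the gesture is within L of the written content."""
    a = gesture_center(touch_points)
    dist_cm = abs(a[0] - pen_start[0]) / px_per_cm  # horizontal distance
    return dist_cm < SET_VALUE_L_CM

five_fingers = [(100, 50), (105, 48), (110, 47), (115, 48), (120, 50)]
print(should_switch(five_fingers, pen_start=(130, 60)))  # → True  (L1 < L)
print(should_switch(five_fingers, pen_start=(600, 60)))  # → False (L2 > L)
```

Only the horizontal distance is compared, matching the horizontal-distance definition given above; a Euclidean distance would be a straightforward variant.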
The mode switching method provided by the embodiment of the present application thus adds a step of judging, by distance, whether to switch the mode, so that unintended mode switching can be avoided to a certain extent.
To sum up, the mode switching method provided in the embodiment of the present application can switch modes through gestures: when the user 002 needs to switch modes during writing, the user can simply touch the display screen 102 with the set gesture to complete the switch to the mode corresponding to that gesture. No long-distance movement is needed, and no selection among multiple mode buttons is needed, so operation efficiency is improved. In addition, the gestures defined in the embodiment of the present application are natural gestures that conform to human usage habits, which can further improve operation efficiency.
The embodiments disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in this application are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable storage used in the transmission of information over the Internet in electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the apparatus embodiments of the present application, each unit/module is a logical unit/module. Physically, one logical unit/module may be one physical unit/module, a part of one physical unit/module, or a combination of multiple physical units/modules; the physical implementation of the logical unit/module itself is not essential, and it is the combination of functions implemented by these logical units/modules that solves the technical problem addressed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above apparatus embodiments do not introduce units/modules that are less closely related to solving the technical problem presented in the present application; this does not mean that no other units/modules exist in the above apparatus embodiments.
It is noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the application.

Claims (12)

1. A screen writing mode switching method is characterized by comprising the following steps:
the screen writing mode of the electronic equipment is a first writing mode;
the electronic device detects a first mode switching gesture of a user;
the electronic equipment switches the first writing mode into a second writing mode corresponding to the first mode switching gesture.
2. The screen writing mode switching method according to claim 1, wherein the screen writing modes include a free-draw mode, a structured-draw mode, a circle-select mode, and a blend mode;
wherein the mixed mode comprises a circle selection mode and a free-drawing mode, or the mixed mode comprises a circle selection mode and a structured drawing mode.
3. The screen writing mode switching method according to claim 1, wherein the first mode switching gesture includes any one of a touch, a tap, and a slide action of a user on a display screen of the electronic device.
4. The screen writing mode switching method of claim 1, wherein the electronic device switching the first writing mode to a second writing mode corresponding to the first mode switching gesture comprises:
when the electronic equipment detects that the distance between the position of the first mode switching gesture acting on a display screen on the electronic equipment and the position of the written content of the user is smaller than a set value, the electronic equipment switches the first writing mode to the second writing mode.
5. The screen writing mode switching method according to claim 4, wherein the position of the user's writing is determined by a position of a stylus pen held by the user, the stylus pen being a writing tool that writes on a display screen of the electronic device.
6. The screen writing mode switching method of claim 1, further comprising: when the electronic equipment detects that the first mode switching gesture of the user is separated from the display screen of the electronic equipment, the electronic equipment switches the second writing mode back to the first writing mode.
7. The screen writing mode switching method of claim 6, further comprising:
the electronic device fixes the second writing mode when detecting that the first mode switching gesture of the user has been maintained for longer than a set time, or when detecting a mode fixing gesture of the user corresponding to a mode fixing function.
8. The screen writing mode switching method of claim 1, further comprising:
the electronic equipment only displays a screen writing mode label corresponding to the current screen writing mode of the electronic equipment on a display screen.
9. The screen writing mode switching method of claim 1, further comprising:
the electronic equipment displays screen writing mode labels corresponding to all screen writing modes supported by the electronic equipment on a display screen, and performs specific marking on the screen writing mode labels corresponding to the current screen writing mode of the electronic equipment.
10. The screen writing mode switching method of claim 1, wherein the electronic device is an intelligent collaborative large screen.
11. An electronic device, comprising:
a display screen;
a memory for storing instructions for execution by one or more processors of the electronic device; and
a processor, being one of the one or more processors of the electronic device, for controlling the display screen to perform the screen writing mode switching method of any one of claims 1 to 10.
12. A machine-readable medium having stored thereon instructions which, when executed on a machine, cause the machine to perform the screen writing mode switching method of any one of claims 1 to 10.
CN202110757707.7A 2021-07-05 2021-07-05 Electronic device, screen writing mode switching method and medium thereof Pending CN113641283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110757707.7A CN113641283A (en) 2021-07-05 2021-07-05 Electronic device, screen writing mode switching method and medium thereof


Publications (1)

Publication Number Publication Date
CN113641283A true CN113641283A (en) 2021-11-12

Family

ID=78416690


Country Status (1)

Country Link
CN (1) CN113641283A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113821113A (en) * 2021-11-22 2021-12-21 荣耀终端有限公司 Electronic equipment and touch pen interaction method and system and electronic equipment
CN114035721A (en) * 2022-01-07 2022-02-11 荣耀终端有限公司 Touch screen display method and device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103543937A (en) * 2012-07-10 2014-01-29 义隆电子股份有限公司 Touch handwriting input method and device
CN103581427A (en) * 2012-07-26 2014-02-12 Lg电子株式会社 Mobile terminal and controlling method thereof
US20180239482A1 (en) * 2017-02-20 2018-08-23 Microsoft Technology Licensing, Llc Thumb and pen interaction on a mobile device
CN111475045A (en) * 2020-04-03 2020-07-31 合肥讯飞读写科技有限公司 Handwriting drawing method, device, equipment and storage medium
US20200264770A1 (en) * 2017-05-01 2020-08-20 Pathway Innovations And Technologies, Inc. Gesture-based transitions between modes for mixed mode digital boards




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination