CN114816088A - Online teaching method, electronic equipment and communication system - Google Patents

Online teaching method, electronic equipment and communication system

Info

Publication number
CN114816088A
CN114816088A (application CN202210415510.XA)
Authority
CN
China
Prior art keywords
pen
user
electronic device
information
holding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210415510.XA
Other languages
Chinese (zh)
Inventor
肖冬 (Xiao Dong)
盛琦阳 (Sheng Qiyang)
廖源 (Liao Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210415510.XA priority Critical patent/CN114816088A/en
Publication of CN114816088A publication Critical patent/CN114816088A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545Pens or stylus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/018Input/output arrangements for oriental characters
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Abstract

Embodiments of this application provide an online teaching method, an electronic device, and a communication system. In the online teaching method, a first electronic device acquires video data that includes a first handwriting image drawn with a first stylus and a first pen-holding gesture of a first user. A second electronic device receives the video data from the first electronic device and displays, on its display screen, the first handwriting image and the first pen-holding gesture, where the first pen-holding gesture moves with the pen-tip position of the first handwriting in the first handwriting image. The first user's pen-holding gesture is determined from first information, which includes pen body inclination angle information of the first stylus, pen-holding position information of the first user, and hand support information of the first user. This technical scheme accurately reproduces the first user's pen technique, making it easy for a user of the second electronic device to learn it, and thereby improves the learning effect of learners in online teaching.

Description

Online teaching method, electronic equipment and communication system
Technical Field
The embodiment of the application relates to the technical field of electronic equipment, in particular to an online teaching method, electronic equipment and a communication system.
Background
A stylus is an auxiliary input device that imitates the human body (usually a finger) to carry out human-machine interaction. With the rapid development of terminal technology, more and more calligraphy and painting creators choose to use a stylus together with an electronic device (such as a mobile phone or a tablet computer) for online teaching. The creator can write calligraphy or draw on the creator's display screen with the stylus, and learners can watch the result on their own display screens and copy it.
From the handwriting displayed on the display screen, a learner can observe changes in stroke thickness, trajectory direction, and size, but cannot perceive the creator's creative intent, which limits learning efficiency and the quality of the copy.
Disclosure of Invention
Embodiments of this application provide an online teaching method, an electronic device, and a communication system that can accurately reproduce the pen technique of a creator teaching online and improve the learning effect for learners.
In a first aspect, an online teaching method is provided, including: a second electronic device receives video data from a first electronic device, where the video data includes a first handwriting image corresponding to a first stylus and a first pen-holding gesture corresponding to a first user, the first stylus is communicatively connected to the first electronic device, and the first user is a user of the first stylus; and the second electronic device displays the first handwriting image and the first pen-holding gesture on a display screen, where the first pen-holding gesture moves with the pen-tip position of the first stylus in the first handwriting image. The first pen-holding gesture is determined from first information, and the first information includes pen body inclination angle information of the first stylus, pen-holding position information of the first user, and hand support information of the first user.
In this embodiment of this application, a user of the second electronic device can study the first user's pen technique and creative approach through the first handwriting image and the first pen-holding gesture, which can improve the learner's learning effect. Because the first pen-holding gesture is determined by jointly considering the pen body inclination angle information of the first stylus, the pen-holding position information of the first user, and the hand support information of the first user, it is closer to the first user's actual pen-holding gesture, which can improve the online teaching effect.
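As a concrete sketch of the kind of data the first pen-holding gesture could be derived from, the "first information" might be bundled as follows. The field names and units are our own illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GripInfo:
    """Hypothetical container for the 'first information' described above."""
    tilt_deg: float               # pen body inclination angle of the stylus
    grip_height_mm: float         # distance from the pen tip to the grip point
    grip_fingers: int             # number of fingers on the barrel
    support_pos: Tuple[int, int]  # where the hand rests on the screen (pixels)
    support_area_mm2: float       # contact area of the resting hand

# Example measurements for a typical three-finger (tripod) grip
info = GripInfo(tilt_deg=48.0, grip_height_mm=22.0, grip_fingers=3,
                support_pos=(310, 540), support_area_mm2=1450.0)
```

Such a record would be sampled continuously while the first user writes, so the rendered gesture can track the pen tip frame by frame.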
With reference to the first aspect, in a possible implementation manner, the first handwriting image is determined according to second information, where the second information includes at least one of pen point pressure information of the first stylus pen, pen body inclination information of the first stylus pen, and pen point position information of the first stylus pen.
The first handwriting image obtained from the second information can show variations in line length, thickness, curvature, density, and shade, and thus the first user's pen movements. Combined with the first pen-holding gesture, it helps learners understand the first user's pen technique and improves the learning effect.
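One way the second information could drive the rendered line is to map pen-tip pressure and pen body tilt to a stroke width. The mapping below is a minimal sketch under our own assumptions (a more tilted pen lays down a broader mark, as with the side of a brush), not the mapping the patent specifies:

```python
import math

def stroke_width(pressure, tilt_deg, base_width=1.0, max_extra=4.0):
    """Map pen-tip pressure (0..1) and barrel tilt from vertical (degrees)
    to a stroke width: heavier pressure and a shallower pen both widen
    the line. Illustrative formula only."""
    pressure = max(0.0, min(1.0, pressure))          # clamp sensor noise
    tilt_factor = 1.0 + math.sin(math.radians(tilt_deg)) * 0.5
    return base_width + max_extra * pressure * tilt_factor
```

Evaluated along the pen-tip trajectory (the third component of the second information), this yields the varying line thickness the learner sees.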
With reference to the first aspect, in a possible implementation manner, the hand support information of the first user includes a support position, a support area, and a support contour of the hand of the first user on the display screen of the first electronic device.
The first user may hold the pen with the same number of fingers and at the same grip position under different pen-holding gestures, while the support position, support area, or support contour of the hand on the display screen differs. Taking the first user's hand support information into account when determining the first pen-holding gesture therefore helps make the acquired gesture closer to the first user's real one.
With reference to the first aspect, in a possible implementation manner, the method further includes: detecting and responding to a first operation of a second user, wherein the second electronic equipment acquires a second handwriting image corresponding to a second handwriting pen, and the second handwriting pen is in communication connection with the second electronic equipment; and the second electronic equipment displays the second handwriting image through the display screen, wherein the first handwriting image and the first pen-holding gesture are located in a first display area of the display screen, and the second handwriting image is located in a second display area of the display screen.
The second user can copy the work on the display screen of the second electronic device with the second stylus. That display screen can simultaneously show the first user's pen technique and the handwriting image the second user actually copies, so the second user can compare their own pen-holding gesture with the first pen-holding gesture and the second handwriting image with the first handwriting image, continually correcting their grip and pen movements.
With reference to the first aspect, in a possible implementation manner, the method further includes: detecting and responding to a second operation of the second user, and acquiring a second pen holding gesture corresponding to the second user by the second electronic equipment; the second electronic device identifying a deviation between the second pen-holding gesture and the first pen-holding gesture; and when the deviation exceeds a preset value, the second electronic equipment controls a motor in the second stylus pen to vibrate so as to remind the second user of correcting the pen holding posture.
In the copying process of the second user, the second electronic device can detect the pen holding posture of the second user, and when the deviation between the pen holding posture of the second user and the pen holding posture of the first user is large, the motor of the second handwriting pen is controlled to vibrate for reminding, so that the copying effect and the learning efficiency of the second user can be improved.
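The deviation check and vibration reminder described above might be sketched as follows. The grip-feature vectors, the mean-absolute-difference metric, and the threshold are illustrative assumptions; the patent does not fix a particular metric:

```python
def check_grip_deviation(learner_grip, teacher_grip, threshold=0.2):
    """Return True when the learner's grip-feature vector deviates from
    the teacher's by more than `threshold` (mean absolute difference)."""
    if len(learner_grip) != len(teacher_grip):
        raise ValueError("feature vectors must have equal length")
    deviation = sum(abs(a - b) for a, b in zip(learner_grip, teacher_grip))
    return deviation / len(learner_grip) > threshold

class StylusMotor:
    """Stand-in for the vibration motor inside the second stylus."""
    def __init__(self):
        self.vibrated = False
    def vibrate(self):
        self.vibrated = True

# Trigger a vibration reminder when the learner's grip drifts too far
motor = StylusMotor()
if check_grip_deviation([0.9, 0.1, 0.5], [0.2, 0.1, 0.5]):
    motor.vibrate()
```

In a real device the vibrate call would go over the stylus's wireless link rather than to a local object.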
With reference to the first aspect, in a possible implementation manner, the acquiring, by the second electronic device, a second pen holding gesture corresponding to the second user includes: the second electronic equipment acquires third information, wherein the third information comprises pen body inclination angle information of the second handwriting pen, pen holding position information of the second user and hand support information of the second user; the second electronic equipment adjusts the posture of a second virtual hand holding a second virtual pen according to the third information, wherein the second virtual hand and the second virtual pen are models stored in the second electronic equipment; rendering the posture of the second virtual pen held by the second virtual hand at a preset visual angle to obtain the second pen holding posture.
A single set of hand and pen models can be stored in the second electronic device and adjusted according to the third information for each pen-holding gesture. This saves storage space while reproducing the user's grip accurately, intuitively, and from multiple viewing angles through real-time rendering. Users can also personalize or beautify the hand and pen models as needed.
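The pose-adjustment step could be sketched as a mapping from the sensed third information onto pose parameters for the stored virtual hand and pen models. The field names and the mapping are hypothetical; the renderer consuming the pose is out of scope here:

```python
def adjust_virtual_hand(info):
    """Map sensed grip measurements onto pose parameters for the stored
    virtual hand/pen models (all names are illustrative assumptions)."""
    return {
        "pen_tilt_deg": info["tilt_deg"],             # tilt the virtual pen
        "grip_offset_mm": info["grip_height_mm"],     # slide the hand along it
        "fingers_on_barrel": info["grip_fingers"],    # choose a finger layout
        "palm_rests_on_screen": info["support_area_mm2"] > 0,
    }

pose = adjust_virtual_hand({"tilt_deg": 50.0, "grip_height_mm": 20.0,
                            "grip_fingers": 3, "support_area_mm2": 1200.0})
```

The resulting pose would then be rendered at the preset viewing angle to produce the second pen-holding gesture shown on screen.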
With reference to the first aspect, in a possible implementation manner, the second virtual pen is a model corresponding to the second handwriting pen, or a model corresponding to a brush used for drawing the second handwriting image.
The second virtual pen may be a model of the stylus such that the second pen-holding gesture is rendered more intuitive.
The second virtual pen may also be a model of a brush used by the second user when drawing the second handwriting image, such as a pencil model, a writing brush model, an oil painting brush model, and the like, so that the second pen holding posture presented is more realistic.
With reference to the first aspect, in a possible implementation manner, the acquiring, by the second electronic device, a second pen holding gesture corresponding to the second user includes: the second electronic equipment acquires third information, wherein the third information comprises pen body inclination angle information of the second handwriting pen, pen holding position information of the second user and hand support information of the second user; the second electronic equipment respectively matches the third information with a plurality of preset holding posture templates to obtain a first holding posture template with the highest matching degree with the third information; and determining the pen holding posture corresponding to the first pen holding posture template as the second pen holding posture.
The third information is matched with the preset holding posture template, so that the calculation amount can be reduced, and the calculation resources can be saved.
Illustratively, the second electronic device may perform the matching process through a neural network.
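As a minimal stand-in for that matching step, the sensed features could simply be compared against each preset template by distance, with a trained neural network replacing the distance score in practice. The template set and feature encoding below are invented for illustration:

```python
def match_grip_template(features, templates):
    """Return the preset grip template whose feature vector is closest
    to the sensed features (negative squared Euclidean distance as the
    matching score). A classifier could replace this simple matcher."""
    def score(t):
        return -sum((a - b) ** 2 for a, b in zip(features, t["features"]))
    return max(templates, key=score)

# Hypothetical templates: [tilt_deg, grip_height_mm, finger_count]
templates = [
    {"name": "tripod",    "features": [45.0, 20.0, 3.0]},
    {"name": "quadrupod", "features": [50.0, 25.0, 4.0]},
]
best = match_grip_template([46.0, 21.0, 3.0], templates)
```

The gesture associated with `best` would then be displayed as the second pen-holding gesture.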
With reference to the first aspect, in a possible implementation manner, the method further includes: detecting and responding to a third operation of the second user, wherein the second electronic equipment identifies the difference between the second handwriting image and the first handwriting image; and the second electronic equipment displays the difference through the display screen.
By displaying the difference between the second handwriting image and the first handwriting image to the second user, the second user may be prompted how to correct the pen action.
With reference to the first aspect, in a possible implementation manner, the displaying, by the second electronic device, the difference through the display screen includes: the second electronic device annotates at a difference of the second handwriting image and the first handwriting image to display the difference; or the second electronic equipment superimposes the contour line of the first handwriting image on the second handwriting image to display the difference.
Circle annotations and superimposed contour lines display the difference between the second handwriting image and the first handwriting image more intuitively.
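On tiny binary rasters standing in for the two handwriting images, the cells to annotate could be found with a simple element-wise comparison. This is a toy sketch of the idea, not the patent's image-comparison method:

```python
def diff_mask(learner, teacher):
    """Mark cells where the learner's copy disagrees with the teacher's
    stroke; 1 = annotate (circle or highlight), 0 = matches."""
    return [
        [1 if a != b else 0 for a, b in zip(row_l, row_t)]
        for row_l, row_t in zip(learner, teacher)
    ]

teacher = [[0, 1, 1],
           [0, 1, 0]]
learner = [[0, 1, 0],
           [1, 1, 0]]
mask = diff_mask(learner, teacher)
```

A real implementation would compare rendered strokes after alignment and scale normalization, then draw annotations around connected regions of the mask.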
With reference to the first aspect, in a possible implementation manner, the first information further includes pen holding pressure information of the first user and/or size information of the first stylus pen.
The richer the information referred to in determining the first pen-holding gesture, the closer the determined first pen-holding gesture is to the real pen-holding gesture of the first user.
With reference to the first aspect, in a possible implementation manner, the video data further includes raw data for acquiring the first handwriting image and raw data for acquiring the first pen-holding gesture.
The video data may include, in addition to the image data, raw data for capturing the image, so that the second user may also perform some other processing operation based on the raw data.
With reference to the first aspect, in a possible implementation manner, the original data for acquiring the first handwriting image includes pen tip position data of the first stylus pen, and pen tip pressure data of the first stylus pen; and/or the raw data for acquiring the first pen-holding posture comprises pen-holding position data of the first user, pen body inclination angle data of the first stylus pen and hand support data of the first user.
In a second aspect, an online teaching method is provided, including: a first electronic device acquires video data, where the video data includes a first handwriting image corresponding to a first stylus and a first pen-holding gesture corresponding to a first user, the first stylus is communicatively connected to the first electronic device, and the first user is a user of the first stylus; and upon detecting and in response to a first operation of the first user, the first electronic device displays the first handwriting image and the first pen-holding gesture on a display screen, where the first pen-holding gesture moves with the pen-tip position of the first handwriting in the first handwriting image. The first pen-holding gesture is determined from first information, and the first information includes pen body inclination angle information of the first stylus, pen-holding position information of the first user, and hand support information of the first user.
In the embodiment of the application, the first pen holding posture is determined by comprehensively considering the pen body inclination angle information of the first stylus pen, the pen holding position information of the first user and the hand support information of the first user, so that the first pen holding posture is closer to the actual pen holding posture of the first user. Other users can know and learn the pen skill and the creation thought of the first user through the first handwriting image and the first pen holding posture, and the online teaching effect can be improved.
With reference to the second aspect, in one possible implementation manner, the first handwriting image is determined according to second information, where the second information includes at least one of pen point pressure information of the first stylus pen, pen body inclination information of the first stylus pen, and pen point position information of the first stylus pen.
The first handwriting image obtained from the second information can show variations in line length, thickness, curvature, density, and shade, and thus the first user's pen movements. Combined with the first pen-holding gesture, it helps learners understand the first user's pen technique and improves the learning effect.
With reference to the second aspect, in one possible implementation manner, the hand support information of the first user includes a support position, a support area, and a support contour of the hand of the first user on the display screen.
The first user may hold the pen with the same number of fingers and at the same grip position under different pen-holding gestures, while the support position, support area, or support contour of the hand on the display screen differs. Taking the first user's hand support information into account when determining the first pen-holding gesture therefore helps make the acquired gesture closer to the first user's real one.
With reference to the second aspect, in a possible implementation manner, the acquiring, by the first electronic device, video data includes: the first electronic equipment acquires the first handwriting image and the first pen holding gesture; and the first electronic equipment superposes the first pen holding posture and the first handwriting image to obtain the video data.
After the first electronic device superimposes the first pen-holding gesture on the first handwriting image to obtain the video data, any other electronic device with which it later shares the video data can present the corresponding content on its display screen directly.
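The superposition step can be pictured as compositing the rendered gesture onto each handwriting frame at the current pen-tip position. The tiny 2-D rasters and the overwrite rule below are a toy model of that compositing, not the patent's video pipeline:

```python
def compose_frame(handwriting, gesture, anchor):
    """Overlay the rendered pen-holding gesture (a small raster) onto a
    handwriting frame at `anchor` = (row, col); 0 means an empty cell.
    Returns a new frame; the input frame is left untouched."""
    frame = [row[:] for row in handwriting]
    ay, ax = anchor
    for dy, row in enumerate(gesture):
        for dx, v in enumerate(row):
            y, x = ay + dy, ax + dx
            if v and 0 <= y < len(frame) and 0 <= x < len(frame[0]):
                frame[y][x] = v          # gesture pixels cover the stroke
    return frame

# Anchor a one-pixel "gesture" at the current pen-tip cell
frame = compose_frame([[1, 0, 0], [0, 0, 0]], [[2]], anchor=(1, 1))
```

Repeating this per frame as the pen tip moves yields video data in which the gesture tracks the handwriting, ready to display on any receiving device.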
With reference to the second aspect, in a possible implementation manner, the acquiring, by the first electronic device, the first pen-holding gesture includes: the first electronic equipment adjusts the posture of a first virtual hand holding a first virtual pen according to the first information, wherein the first virtual hand and the first virtual pen are models stored in the first electronic equipment; rendering the posture of the first virtual pen held by the first virtual hand at a preset visual angle to obtain the first pen holding posture.
Therefore, only a set of models of hands and pens need to be stored in the first electronic equipment, and under different pen holding postures, the first electronic equipment can be respectively adjusted according to the first information, so that the storage space can be saved, and the pen holding posture of the user can be accurately, intuitively and multi-view-angle reproduced in real time. And according to different requirements, the user can carry out personalized design or beautification design on the hand model and the pen model.
With reference to the second aspect, in a possible implementation manner, the first virtual pen is a model corresponding to the first stylus pen, or a model corresponding to a brush used for drawing the first handwriting image.
The first virtual pen may be a model of a stylus such that the first pen-holding gesture is rendered more intuitive.
The first virtual pen may also be a model of a brush used by the first user when drawing the first handwriting image, such as a pencil model, a writing brush model, an oil-painting brush model, and the like, so that the presented first pen-holding posture is more realistic.
With reference to the second aspect, in a possible implementation manner, the acquiring, by the first electronic device, the first pen-holding gesture includes: the first electronic equipment respectively matches the first information with a plurality of preset holding posture templates to obtain a second holding posture template with the highest matching degree with the first information; and determining the pen holding posture corresponding to the second pen holding posture template as the first pen holding posture.
The first information is matched with the preset holding posture template, so that the calculation amount can be reduced, and the calculation resources can be saved. Generally, the pen holding posture in the preset pen holding posture template is standard, so that even if the actual pen holding posture of the first user is not very standard, the presented first pen holding posture is standard, and the first user can correct the pen holding posture conveniently. Subsequently, when the first user shares the video data with other users, the other users can be prevented from learning nonstandard pen holding gestures.
With reference to the second aspect, in a possible implementation manner, the first information further includes pen holding pressure information of the first user and/or size information of the first stylus pen.
The richer the information referred to in determining the first pen-holding gesture, the closer the determined first pen-holding gesture is to the real pen-holding gesture of the first user.
With reference to the second aspect, in a possible implementation manner, the video data further includes raw data for acquiring the first handwriting image and raw data for acquiring the first pen-holding gesture.
The video data includes image data and original data used for acquiring images, so that when the first user shares the video data with other users, the other users can also perform some processing operations based on the original data.
With reference to the second aspect, in a possible implementation manner, the original data for acquiring the first handwriting image includes pen tip position data of the first stylus pen, and pen tip pressure data of the first stylus pen; and/or the raw data for acquiring the first pen-holding posture comprises pen-holding position data of the first user, pen body inclination angle data of the first stylus pen and hand support data of the first user.
With reference to the second aspect, in a possible implementation manner, the method further includes: the first electronic equipment directly sends the video data to second electronic equipment; or the first electronic equipment forwards the video data to the second electronic equipment through a server.
The first electronic device may transmit video data directly or indirectly to the second electronic device.
With reference to the second aspect, in a possible implementation manner, the method further includes: and the first electronic equipment stores the video data in a local or cloud terminal.
In a third aspect, an apparatus is provided, where the apparatus is included in an electronic device, and the apparatus has a function of implementing the behavior referred to in any one of the foregoing first aspect and possible implementations of the first aspect, where the function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above-described functions. For example, a receiving module or unit, a displaying module or unit, an acquiring module or unit, a detecting module or unit, a processing module or unit, etc.
In a fourth aspect, an apparatus is provided, which is included in an electronic device, and which has a function of implementing the actions referred to in any one of the possible implementations of the second aspect and the second aspect, where the function may be implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above-described functions. For example, an acquisition module or unit, a display module or unit, a detection module or unit, a processing module or unit, a transmission module or unit, etc.
In a fifth aspect, an electronic device is provided, comprising: one or more processors; one or more memories; the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the method of the first aspect described above and any possible implementation of the first aspect.
In a sixth aspect, an electronic device is provided, comprising: one or more processors; one or more memories; the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the method of the second aspect described above and any possible implementation of the second aspect.
The beneficial effects of the apparatuses and the electronic devices according to the third to sixth aspects may refer to the beneficial effects of the methods described in the first and second aspects, and are not described herein again.
In a seventh aspect, a computer-readable storage medium is provided, including computer instructions that, when run on an electronic device, cause the electronic device to perform the method in the first aspect and any possible implementation thereof, or the method in the second aspect and any possible implementation thereof.
In an eighth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of the possible implementations of the first aspect and the first aspect described above, or the method of any one of the possible implementations of the second aspect and the second aspect described above.
In a ninth aspect, a chip is provided, where the chip includes a processor and a data interface, and the processor reads instructions stored on a memory through the data interface, and performs the method in any one of the possible implementations of the first aspect and the first aspect, or performs the method in any one of the possible implementations of the second aspect and the second aspect.
Optionally, as an implementation manner, the chip may further include a memory, where instructions are stored in the memory, and the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to perform the method in any one of the possible implementation manners of the first aspect and the first aspect, or perform the method in any one of the possible implementation manners of the second aspect and the second aspect.
The chip may be a field-programmable gate array or an application-specific integrated circuit.
Drawings
Fig. 1 is a schematic diagram of a system provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a stylus provided in an embodiment of the present application.
Fig. 4 is a schematic flowchart of an online teaching method provided in an embodiment of the present application.
Fig. 5-6 are schematic diagrams illustrating a flow chart for determining a pen-holding gesture in an online teaching method according to an embodiment of the present application.
Figs. 7-8 are schematic diagrams of some pen-holding gestures provided by embodiments of the present application.
Fig. 9-13 are schematic diagrams of some user interfaces displayed on a first electronic device provided by embodiments of the present application.
Fig. 14-16 are schematic diagrams of some user interfaces displayed on a second electronic device provided by embodiments of the present application.
Fig. 17 is a schematic flowchart of an online teaching method provided in an embodiment of the present application.
Fig. 18 is a schematic flowchart of another online teaching method provided in an embodiment of the present application.
Fig. 19 is a schematic structural block diagram of an apparatus provided in an embodiment of the present application.
Fig. 20 is a schematic block diagram of another apparatus provided in the embodiment of the present application.
Fig. 21 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of a system provided in an embodiment of the present application. As shown in fig. 1, the system includes a first subsystem 100 and a second subsystem 200.
The first subsystem 100 comprises a first electronic device 11 and a first stylus 12.
The first electronic device 11 may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a smart watch, or another electronic device having a display function.
The first electronic device 11 may comprise a first display screen 101, the first display screen 101 being for displaying images, videos, etc. In the embodiment of the present application, the first display screen 101 may also be used for displaying the motion track of the first stylus pen 12.
In some embodiments, a touch sensor (also referred to as a touch panel) may be disposed on the first display screen 101, where the touch sensor is used for detecting a touch operation acting on or near it. For example, the touch sensor may detect a touch operation applied by the first stylus 12 or the user's finger, and provide a visual output related to the touch operation through the first display screen 101. Optionally, the touch sensor may distinguish between a touch operation of the first stylus 12 and a touch operation of the user's finger.
When the touch sensor is disposed on the first display screen 101, the touch sensor and the first display screen 101 form a touch screen, also called a "touchscreen". The touch screen both displays content and detects touch operations. Of course, in other embodiments, the touch sensor may be disposed separately from the first display screen 101, for example on a surface of the first electronic device 11 such as the housing.
The first stylus 12 is an input device used with the first electronic device 11. The first stylus 12 may be a capacitive type stylus, a resistive type stylus, an electromagnetic induction type stylus, a bluetooth type stylus, etc. according to different working principles, which is not limited in the embodiment of the present application.
In some embodiments, the first stylus 12 may act directly on the first electronic device 11, such as on the touch screen of the first electronic device 11 or the housing of the first electronic device 11, so that the first electronic device 11 may collect the trajectory information of the first stylus 12 through contact with the first stylus 12.
In other embodiments, the first stylus 12 may act on any surface, including but not limited to a surface of the first electronic device 11, a desktop, an arm, etc., so that the first stylus 12 itself collects trajectory information and inputs it to the first electronic device 11.
The first electronic device 11 and the first stylus 12 may be connected wirelessly or in a wired manner, where the wireless connection includes but is not limited to bluetooth, wireless fidelity (Wi-Fi), and the like.
In the embodiment of the present application, the first stylus 12 is used by a first user to create calligraphy or painting on the first electronic device 11. During the creation process, the display screen of the first electronic device 11 (i.e., the first display screen 101) may synchronously present the motion trajectory of the first stylus 12 to show the first user's creation result. The first electronic device 11 may further store the first user's creation result in the form of an image or a video. For example, the first electronic device 11 may save the first user's creation result as an image, or save the first user's creation process as a video.
It should be understood that the "creation result" referred to in the embodiments of the present application may be a writing result or a drawing result formed by a user at any stage of the creation process. In the embodiments of the present application, writing results include, but are not limited to, hard-pen writing results and brush writing results. Drawing results include, but are not limited to, Chinese painting, watercolor painting, gouache painting, and oil painting results.
The second subsystem 200 comprises a second electronic device 21 and a second stylus 22.
The second electronic device 21 may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a smart watch, or another electronic device having a display function.
The second electronic device 21 may be of the same type as the first electronic device 11, or may be of a different type, which is not limited in this embodiment of the application. In some embodiments, the second electronic device 21 and the first electronic device 11 may be installed with the same application for writing or drawing.
The second electronic device 21 may include a second display screen 201 for displaying images, videos, and the like. For example, as shown in the left half of the display area of the second display screen 201 in fig. 1, the second display screen 201 may be used to display the creation result of the calligraphy or painting created by the first user on the first electronic device 11. In this embodiment, the second electronic device 21 may further display the motion trajectory of the second stylus 22, as shown in the right half of the display area of the second display screen 201 in fig. 1.
In some embodiments, a touch sensor may be disposed on the second display screen 201. For example, the touch sensor may detect a touch operation applied by the second stylus 22 or the user's finger, and provide a visual output related to the touch operation through the second display screen 201. Optionally, the touch sensor may distinguish between a touch operation of the second stylus 22 and a touch operation of the user's finger.
When the touch sensor is disposed on the second display screen 201, the touch sensor and the second display screen 201 may form a touch screen. In other embodiments, the touch sensor may be disposed separately from the second display screen 201, for example on a surface of the second electronic device 21 such as the housing.
The second stylus pen 22 is an input device used in cooperation with the second electronic apparatus 21. The second stylus 22 may be a capacitive type, a resistive type, an electromagnetic induction type, a bluetooth type, etc., according to different working principles, which is not limited in the embodiment of the present application. The second stylus 22 may be of the same type as the first stylus 12 or may be of a different type.
In some embodiments, the second stylus 22 may act directly on the second electronic device 21, such as on the touch screen of the second electronic device 21 or the housing of the second electronic device 21, so that the second electronic device 21 may collect the trajectory information of the second stylus 22 through contact with the second stylus 22.
In other embodiments, the second stylus 22 may act on any surface, including but not limited to a surface of the second electronic device 21, a desktop, an arm, etc., so that the second stylus 22 itself collects trajectory information and inputs it to the second electronic device 21.
The second electronic device 21 and the second stylus pen 22 may be connected wirelessly or in a wired manner, where the wireless connection includes but is not limited to bluetooth, wireless fidelity (Wi-Fi), and the like.
In the embodiment of the present application, the second stylus 22 can be used by a second user to copy calligraphy or painting on the second electronic device 21. During the copying process, the display screen of the second electronic device 21 (i.e., the second display screen 201) may present the creation result of the first user's writing or drawing on the first electronic device 11 with the first stylus 12, to facilitate copying by the second user. The display screen of the second electronic device 21 may also synchronously present the motion trajectory of the second stylus 22, thereby showing the copying result of the second user. During the copying process, the second electronic device 21 may store the second user's copying result in the form of an image or a video. For example, the second electronic device 21 may save the copying result of the second user as an image, or save the copying process of the second user as a video.
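As a rough illustration (not part of the patent) of how a stroke trajectory might be carried from one device to the other for synchronous display, the sketch below serializes a stroke, modeled as a list of (x, y, pressure, timestamp_ms) samples, into a JSON message and decodes it on the receiving side. The message format and function names are hypothetical assumptions.

```python
import json

def encode_stroke(points):
    """Serialize one stroke -- a list of (x, y, pressure, timestamp_ms)
    samples -- into a JSON message the sending device could transmit."""
    return json.dumps({"type": "stroke", "points": points})

def decode_stroke(message):
    """Recover the point list on the receiving device so the same
    trajectory can be redrawn on its display screen."""
    payload = json.loads(message)
    assert payload["type"] == "stroke"
    # JSON turns tuples into lists; restore tuples for convenience.
    return [tuple(p) for p in payload["points"]]

stroke = [(10.0, 20.0, 0.4, 0), (12.5, 21.0, 0.6, 16), (15.0, 22.5, 0.8, 33)]
assert decode_stroke(encode_stroke(stroke)) == stroke
```

In a real system the message would travel over one of the links the patent mentions (Bluetooth, Wi-Fi, or a cellular connection), but the transport layer is omitted here.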
It should be understood that the "copying result" referred to in the embodiments of the present application may refer to a writing result or a drawing result formed by a user at any stage in the copying process.
In some embodiments, the second user may send his or her copying result to the first electronic device 11 to be presented to the first user through the first display screen 101.
For ease of understanding, the above description casts the first user as a creator and the second user as a learner, but it should be understood that these identities are relative. For example, relative to a third user, the first user may also act as a learner, and the second user may also act as a creator.
In some embodiments, the second subsystem 200 may not include the second stylus 22. In this case, the second electronic device 21 only displays the creation result of the first user's writing or drawing on the first electronic device 11 with the first stylus 12, and the second user may copy it on paper with a real pen (such as a writing brush, a paintbrush, or a fountain pen).
In summary, in order to implement online teaching of calligraphy or painting, the system provided in the embodiment of the present application may include a first electronic device 11, a first stylus 12, and a second electronic device 21. Optionally, the system may also include a second stylus 22.
In the embodiment of the present application, the first electronic device 11 and the second electronic device 21 may perform wireless communication. The two devices may communicate with each other through a short-range interconnection (e.g., bluetooth, Wi-Fi, Near Field Communication (NFC), zigbee (zigbee) technology, ultra-wideband (UWB) technology, etc.), or may communicate with each other through a long-range communication such as a cellular network, which is not limited in this embodiment of the present application.
In the embodiment of the present application, the first electronic device 11 may include other units or modules besides the first display 101. Similarly, the second electronic device 21 may include other units or modules besides the second display 201. The structure of an electronic device (such as the first electronic device 11 or the second electronic device 21 shown in fig. 1) provided in the embodiment of the present application is described below with reference to fig. 2.
For example, as shown in fig. 2, an electronic device provided in an embodiment of the present application may include: a processor 110, a memory 120, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a camera 191, a display screen 192, and the like.
It is to be understood that the illustrated structure of the embodiments of the present application does not constitute a specific limitation to electronic devices. In other embodiments of the present application, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can be a neural center and a command center of the electronic device. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory, avoiding repeated accesses, reducing the latency of the processor 110, and thus increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
For example, the processor 110 and the touch sensor 180E may communicate via an I2C bus interface to implement the touch function of the electronic device. The processor 110 and the camera 191 may communicate through a CSI interface to implement the shooting function of the electronic device. The processor 110 and the display screen 192 may communicate through a DSI interface to implement the display function of the electronic device.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the electronic device. In other embodiments of the present application, the electronic device may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The power management module 141 is used for connecting a battery 142. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in an electronic device may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to electronic devices, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The electronic device implements display functions via the GPU, the display screen 192, and the application processor, etc. The GPU is a microprocessor for image processing, coupled to a display screen 192 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 192 is used to display images, video, and the like. The display screen 192 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini light-emitting diode (Mini-LED), a micro light-emitting diode (Micro-LED), a micro OLED (Micro-OLED), a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 192, N being a positive integer greater than 1.
The electronic device may implement a shooting function through the ISP, the camera 191, the video codec, the GPU, the display screen 192, the application processor, and the like. The ISP is used to process data fed back by the camera 191. The camera 191 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. In some embodiments, the electronic device may include 1 or N cameras 191, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The memory 120 is used to store data and/or instructions.
The memory 120 may include an internal memory. The internal memory is used to store computer executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory.
The memory 120 may also include an external memory, such as a Micro SD card, to extend the storage capabilities of the electronic device. The external memory may communicate with the processor 110 through an external memory interface to implement data storage functions. For example, files such as music, video, etc. are saved in the external memory.
The electronic device may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as audio playback, recording, etc.
The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E, and other sensors.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 192. There are many types of pressure sensors, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material; when a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device determines the pressure intensity from the change in capacitance. When a touch operation acts on the display screen 192, the electronic device detects the intensity of the touch operation based on the pressure sensor 180A, and may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
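The pressure-threshold dispatch described in the example above can be sketched as follows; the threshold value and the returned instruction names are illustrative assumptions, not values from the patent.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure; value is illustrative

def dispatch_touch(icon, pressure):
    """Map a touch on the short-message icon to an operation instruction
    based on touch intensity, mirroring the pressure-sensor example."""
    if icon != "messages":
        return "ignore"
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_message"        # light press: view the short message
    return "create_new_message"      # firm press: create a new short message

assert dispatch_touch("messages", 0.2) == "view_message"
assert dispatch_touch("messages", 0.9) == "create_new_message"
```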
The gyro sensor 180B, also known as an angular velocity sensor, may be used to determine the motion posture of the electronic device. In some embodiments, the angular velocities of the electronic device about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during photographing. The gyro sensor 180B may also be used in navigation and somatosensory game scenarios; for example, the gyroscope can track the displacement of the player's hand to achieve various game effects, and can support operations such as switching between landscape and portrait modes or steering in a racing game.
The acceleration sensor 180C can detect the magnitude of acceleration of the electronic device in various directions (typically three axes). When the electronic device is at rest, the magnitude and direction of gravity can be detected. The acceleration sensor 180C can also be used for recognizing the posture of the electronic device, and is applied to horizontal and vertical screen switching, pedometers and other applications.
The distance sensor 180D is used to measure distance. The electronic device may measure distance by infrared or laser. In some embodiments, when photographing a scene, the electronic device may use the distance sensor 180D to measure distance for fast focusing.
The touch sensor 180E is also referred to as a "touch panel". The touch sensor 180E may be disposed on the display screen 192, and together they form a touch screen. The touch sensor 180E is used to detect a touch operation acting on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 192. In other embodiments, the touch sensor 180E can be disposed on a surface of the electronic device at a location different from that of the display screen 192.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device may receive a key input and generate a key signal input related to user settings and function control of the electronic device.
The electronic device involved in the description of fig. 2 may be the first electronic device 11 shown in fig. 1, or the second electronic device 21 shown in fig. 1. The first electronic device 11 and the second electronic device 21 may have the same structure or different structures, and this embodiment of the present application does not limit this.
For convenience of understanding, the structure of the stylus (i.e., the first stylus 12 or the second stylus 22 shown in fig. 1) provided in the embodiments of the present application is described below with reference to fig. 3 by way of example and not limitation.
As shown in fig. 3, a stylus pen provided in an embodiment of the present application may include: processor 210, sensor module 220, bluetooth module 230, battery 240, charging interface 250, button 260, indicator light 270, vibration motor 280, etc.
It is to be understood that the illustrated structure of the embodiments of the present application does not constitute a specific limitation of the stylus. In other embodiments of the present application, the stylus may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may generate operation control signals according to the instruction operation code and timing signals, and control instruction fetching and execution. The processor 210 is the computational core and control core of the stylus. A memory may be provided in the processor 210 for storing instructions and data.
The sensor module 220 may include a pressure sensor 221, a gyro sensor 222, an acceleration sensor 223, a first electrode 224, a second electrode 225, and the like.
The pressure sensor 221 is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. Pressure sensor 221 may be a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, or the like.
In some embodiments, the pressure sensor 221 may be disposed at the tip of the stylus pen for sensing the pressure level of the tip, and for recording the duration of the pressure signal applied to the tip.
In some embodiments, the pressure sensor 221 may also be disposed on the body of the stylus, for sensing the pressure on the pen body and for recording the duration of the pressure signal applied to the pen body. For example, the outer surface of the pen body may be wrapped with a layer of pressure sensors; when a touch operation acts on the pen body, the stylus can detect the intensity of the touch operation according to the pressure sensor 221, i.e., the amount of pressure applied to the pen body when the user holds the stylus. The stylus may also calculate the position where the user touches the pen body based on the detection signal of the pressure sensor 221.
The gyro sensor 222 is used to determine the motion attitude of the stylus. In some embodiments, the angular velocity of the stylus about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 222, thereby determining the angle of inclination of the body of the pen with respect to horizontal.
The acceleration sensor 223 may detect the magnitude of acceleration of the stylus in various directions (typically three axes), thereby determining the velocity of movement of the stylus.
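As a minimal sketch of how a movement velocity might be derived from acceleration samples, as the paragraph above describes, the per-axis readings can be integrated over the sampling interval. The sampling rate and units are assumptions for illustration; a real implementation would also need drift compensation.

```python
def integrate_velocity(samples, dt):
    """Estimate per-axis velocity by accumulating acceleration samples
    (a_x, a_y, a_z), in m/s^2, over a fixed sampling interval dt (seconds)."""
    vx = vy = vz = 0.0
    for ax, ay, az in samples:
        vx += ax * dt
        vy += ay * dt
        vz += az * dt
    return (vx, vy, vz)

# A constant 1 m/s^2 on the x axis for 0.5 s yields roughly 0.5 m/s.
v = integrate_velocity([(1.0, 0.0, 0.0)] * 50, dt=0.01)
```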
The first electrode 224 and the second electrode 225 are used to measure the inclination angle of the pen body, i.e., the angle of the pen body relative to the surface on which the stylus acts; for example, the inclination angle of the pen body relative to the display screen of the electronic device. The first electrode 224 and the second electrode 225 are disposed at different positions. When the stylus is perpendicular to the screen, the difference between the signal quantities of the first electrode 224 and the second electrode 225 is 0; the smaller the included angle between the pen body and the screen, the larger that difference. Therefore, the inclination angle of the pen body can be calculated from the difference between the signal quantities of the first electrode 224 and the second electrode 225. That is, in the embodiment of the present application, the inclination of the pen body can be measured through the signal quantities of the first electrode 224 and the second electrode 225.
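The electrode-difference relationship described above (zero difference when the pen is perpendicular to the screen, larger difference at shallower angles) could be sketched as a simple calibration mapping. The linear form and the gain/full_scale constants below are illustrative assumptions, since the patent does not give the actual formula.

```python
def tilt_angle_deg(signal_a, signal_b, gain=90.0, full_scale=1.0):
    """Illustrative mapping from the two electrode signal quantities to a
    pen-body tilt angle. Zero difference means the pen is perpendicular to
    the screen (90 degrees); a larger difference means a smaller included
    angle between the pen body and the screen. gain and full_scale are
    hypothetical calibration constants, not values from the patent."""
    diff = abs(signal_a - signal_b)
    return 90.0 - gain * min(diff / full_scale, 1.0)

assert tilt_angle_deg(0.7, 0.7) == 90.0            # perpendicular pen
assert tilt_angle_deg(0.9, 0.3) < tilt_angle_deg(0.8, 0.6)  # bigger diff, shallower angle
```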
The bluetooth module 230 is used to implement the wireless communication function of the stylus. The stylus is also provided with an antenna for receiving and transmitting electromagnetic waves. The bluetooth module 230 may perform frequency modulation and filtering on signals received via the antenna and send the processed signals to the processor 210. The bluetooth module 230 may also receive a signal to be transmitted from the processor 210, perform frequency modulation and amplification on it, and convert it into electromagnetic waves via the antenna for transmission.
The battery 240 is used to power the stylus. Battery 240 may receive charging input from a charger via charging interface 250. The charger can be a wireless charger or a wired charger. The battery 240 may also receive charging input directly from an electronic device connected to the stylus.
Keys 260 may include a power on key, a toggle key, and the like. The keys 260 may be mechanical keys or touch keys. The stylus pen may receive key inputs, which generate key signal inputs related to user settings and function controls of the stylus pen.
Indicator light 270 may be used to indicate a state of charge, a change in power, a bluetooth connection status, a fault status, etc.
The vibration motor 280 is used to output a vibration signal. In some embodiments, the vibration motor 280 may emit different vibratory sensations.
The stylus referred to in the description of fig. 3 may be the first stylus 12 shown in fig. 1 or the second stylus 22 shown in fig. 1. The first stylus 12 and the second stylus 22 may have the same structure, and may be different, and the embodiment of the present application is not limited thereto.
With the rapid development of terminal technology, new functions of terminal products keep emerging. For example, according to the pressure on the tip of the stylus, the screen of the electronic device can display line variations in length, thickness, curvature, density, lightness, and the like. As a result, more and more calligraphy and painting creators choose to use a stylus together with an electronic device (such as a mobile phone or tablet computer) for online teaching or sharing. In particular, a creator can use the stylus to create calligraphy or a painting on his or her own screen, and send the result to a learner's electronic device by live broadcast or screen recording for demonstration, making it convenient for the learner to copy.
However, the learner's display screen only shows the final creation: the learner can only see the thickness, direction, and size of the brush strokes on the screen, and cannot know the creator's approach. That is, the learner sees only the brush effect the creator produced, but not the pen technique used to obtain it; this lack of information affects the learner's learning efficiency and copying results.
To make up for this deficiency, some creators place a desktop camera to capture both their pen technique and the actual on-screen drawing. However, this requires a separately installed camera, and the creator's arm may block the view as his or her posture changes during creation, so the learner cannot see the pen technique and the on-screen drawing effect at the same time, which likewise affects learning efficiency and copying results.
Therefore, it is desirable to provide a technical solution to solve the above technical problems.
Fig. 4 shows a schematic diagram of an online teaching method provided by an embodiment of the present application.
In the technical solution provided by the embodiment of the present application, while a creator is creating, the creator's electronic device collects information related to the stylus to determine the creator's pen technique, and at the same time captures the actual brush effect. Accordingly, the learner's display screen can simultaneously show the virtual pen technique and the actual brush effect, so that the learner can learn and copy more effectively. This is described in more detail below with reference to the accompanying drawings.
For convenience of explanation, in the method shown in fig. 4, the electronic device and the stylus pen used by the author and the electronic device used by the learner may be exemplified by the first electronic device 11, the first stylus pen 12 and the second electronic device 21 shown in fig. 1, respectively, where the first electronic device 11 and the first stylus pen 12 are connected and can communicate with each other. The first electronic device 11 and the first stylus pen 12 may be connected in a wired manner, or may be connected in a wireless manner such as bluetooth or Wi-Fi, which is not limited in this embodiment of the application. The display screen of the first electronic device 11 (i.e. the first display screen 101) may display the motion trajectory of the first stylus 12. The author is the user using the first electronic device 11, and is referred to as the first user in the following embodiments. The learner is the user using the second electronic device 21, and is referred to as the second user in the following embodiments.
As shown in fig. 4, the first electronic device 11 may perform steps S401 to S403, which are described in detail below.
S401, determining a pen holding gesture of a first user according to the first information.
Here, the pen-holding posture of the first user refers to the posture that the first user's hand assumes when holding the pen. In the embodiment of the present application, for convenience of description, the hand with which the first user actually holds the pen may be referred to as the real hand 13, and the hand of the first user presented on the second electronic device 21 may be referred to as the virtual hand 14.
In the embodiment of the present application, the first information may include pen holding position information 401a, pen body tilt angle information 401b, and hand support information 401c.
The pen-holding position information 401a is used to indicate the position of the area where the real hand 13 is in contact with the first stylus 12 on the first stylus 12. In the embodiment of the present application, the portion of the real hand 13 contacting the first stylus 12 includes, but is not limited to, fingers, palm, wrist, and connecting portions between adjacent fingers.
The pen holding position information 401a may be acquired by the first stylus 12 and then transmitted to the first electronic device 11. By way of example and not limitation, the body of the first stylus 12 is provided with a first pressure sensor for detecting a touch area of the real hand 13 on the body. In some embodiments, the first pressure sensor may also detect the touch strength (or pen-holding pressure) of the real hand 13, for example, detect in which touch areas the real hand 13 exerts a large force, in which touch areas the real hand exerts a small force, and the like.
The touch area detected by the first pressure sensor may have a variety of representations.
In some embodiments, the touch area detected by the first pressure sensor is represented in the form of coordinates. For example, the first stylus 12 may include a coordinate system, such as a rectangular coordinate system or a cylindrical coordinate system, which is established by using a point (e.g., a pen tip) on the pen as an origin, and any point on the pen body has certain coordinates, so that when the real hand 13 touches the pen body of the first stylus 12, the first pressure sensor disposed on the pen body may detect the touch region and obtain the corresponding coordinates. Accordingly, the first stylus 12 may transmit the coordinates of the touch area to the first electronic device 11.
In order to maintain the stability of the first stylus pen 12, there are a plurality of areas where the real hand 13 is in contact with the first stylus pen 12 when the pen is held, i.e., there are a plurality of touch areas detected by the first pressure sensor.
For a certain touch area, the coordinates of the touch area may be represented by the coordinates of any point in the touch area, for example, the coordinates of the center point of the touch area is taken as the coordinates of the touch area.
Alternatively, the coordinates of the touch area may include boundary line coordinates of the touch area. For example, the boundary line of the touch area may include a plurality of first points. Accordingly, the boundary line coordinates of the touch area may include coordinates of each of the plurality of first points.
Alternatively, the coordinates of the touch area may include contact surface coordinates of the touch area. It is understood that the contact surface of the touch area is the entire surface of the touch area. For example, the contact surface of the touch area may include a plurality of first points and at least one second point, wherein the first points are points on the boundary line of the touch area, and the second points are points in an area surrounded by the boundary line of the touch area. Accordingly, the contact surface coordinates of the touch area may include coordinates of each of the plurality of first points and coordinates of each of the at least one second point.
In other embodiments, the touch area detected by the first pressure sensor is represented in terms of distance and orientation. For example, the tip of the first stylus 12 may be used as a reference point, and a point within the touch area (e.g. its center point) may represent the touch area. The position of that point on the pen body can be determined by the distance between the point and the pen tip, together with the orientation of the line connecting the point and the pen tip relative to the central axis of the first stylus 12. In fact, since the first stylus 12 has a fixed size, taking the pen tip as the reference point and the distance to the point as the radius, and tilting the connecting line at the known angle relative to the central axis, the intersection of that line with the pen surface gives the position of the touch area on the pen body.
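The distance-and-orientation representation above can be sketched as follows; the coordinate convention (position along the central axis plus an azimuth around it) and all numbers are illustrative assumptions:

```python
import math

def contact_point(d: float, theta_deg: float, phi_deg: float):
    """Locate a touch point on the pen body from tip-relative measurements.

    d          distance from the pen tip (reference point) to the point
    theta_deg  angle between the tip-to-point line and the central axis
    phi_deg    azimuth of that line around the central axis

    Returns (z, x, y): z is the position along the central axis, (x, y)
    the radial offset on the pen surface.  Illustrative geometry only; a
    real stylus would check the radial offset against its known body radius.
    """
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    z = d * math.cos(theta)  # how far up the barrel the touch sits
    r = d * math.sin(theta)  # radial offset from the central axis
    return z, r * math.cos(phi), r * math.sin(phi)

z, x, y = contact_point(d=20.0, theta_deg=30.0, phi_deg=0.0)
```

A touch 20 units from the tip with a 30-degree line-to-axis angle thus sits about 17.3 units up the barrel with a 10-unit radial offset, matching the fixed-size argument in the text.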
In still other embodiments, the mounting position of each first pressure sensor on the pen body is known. When the real hand 13 touches the pen body of the first stylus 12, the first pressure sensor at the corresponding position detects the touch, so the position of the contact area on the pen body is simply the mounting position of that sensor, which is preset and known when the first stylus 12 leaves the factory. It should be understood that the pen holding position information 401a may also be obtained in other manners; this is not limited in the embodiment of the present application.
The body inclination information 401b is used to indicate the angle at which the first stylus 12 is inclined with respect to the first plane. Specifically, the body inclination angle information 401b may include the inclination angle, or include information for calculating the inclination angle.
For example, the angle may be an angle between the first stylus 12 and a first plane, and specifically may be an angle between a central axis of the first stylus 12 and an axis of a projection of the first stylus 12 in the first plane, such as an angle α shown in fig. 4. In the embodiment of the present application, the first plane is a plane contacting with the tip of the first stylus 12, such as a plane where the first display 101 is located, a desktop, a tangent plane of a paper surface or a curved surface, and the like.
Alternatively, the angle may be an angle between the first stylus pen 12 and the second plane, specifically, an angle between a central axis of the first stylus pen 12 and an axis of a projection of the first stylus pen 12 in the second plane. Here, the second plane is perpendicular to the axis of the projection of the first stylus 12 in the first plane.
In this embodiment of the application, the pen body inclination angle information 401b may be obtained by the first stylus 12 and then sent to the first electronic device 11, or may be directly obtained by the first electronic device 11.
For example, the first stylus 12 may be provided with a first electrode 224 and a second electrode 225 as shown in fig. 3, and the first stylus 12 may calculate the body tilt angle from the difference between the signal quantity of the first electrode 224 and that of the second electrode 225 and transmit the angle to the first electronic device 11. In the embodiment of the present application, the first stylus 12 may also measure its tilt orientation from the signal quantities of the first electrode 224 and the second electrode 225. The tilt orientation of the first stylus 12 is understood here as the direction in which the first stylus 12 leans relative to a first vertical line, e.g. leaning in the three o'clock direction of the first vertical line, or in the five o'clock direction, and so on. The first vertical line is perpendicular to the first plane and passes through the tip of the first stylus 12.
For another example, a gyro sensor 222 as shown in fig. 3 may be provided in the first stylus pen 12, and the first stylus pen 12 may detect an angle of inclination of the first stylus pen 12 with respect to the first plane by using the gyro sensor 222. Here, the first plane is perpendicular to the direction of gravity and may also be referred to as a horizontal plane. In some embodiments, the gyroscope may also detect the tilt orientation of the first stylus 12.
As another example, the first stylus 12 may be provided with a distance sensor that detects the distance from the tail end of the first stylus 12 to the first plane (while writing, the tip rests on the plane, so the relevant distance is that of the tail end). The body tilt angle may then be obtained from the length of the first stylus 12 and that distance. The first stylus 12 may calculate the tilt angle itself and transmit it to the first electronic device 11, or transmit the pen length and the tail-to-plane distance to the first electronic device 11, which then calculates the tilt angle.
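Assuming the measured distance is from the pen's tail end to the plane while the tip rests on the plane, the tilt angle follows from simple trigonometry; this is a sketch under that assumption, not the device's actual firmware:

```python
import math

def tilt_from_tail_height(pen_length: float, tail_height: float) -> float:
    """Tilt angle (degrees) of the pen body relative to the writing plane.

    Assumes the tip rests on the plane and the distance sensor reports the
    perpendicular distance from the tail end to that plane, so that
    sin(angle) = tail_height / pen_length.
    """
    ratio = max(-1.0, min(1.0, tail_height / pen_length))
    return math.degrees(math.asin(ratio))

print(round(tilt_from_tail_height(16.0, 8.0), 1))   # 30.0 (tail at half length)
print(round(tilt_from_tail_height(16.0, 16.0), 1))  # 90.0 (pen held vertical)
```

Either side of the split described in the text can run this: the stylus before transmitting the angle, or the first electronic device 11 after receiving the length and distance.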
As another example, the first stylus pen 12 may be provided with the first electrode 224 and the second electrode 225 as shown in fig. 3, and the first electronic device 11 may directly obtain a difference between the signal quantity of the first electrode 224 and the signal quantity of the second electrode 225, and may calculate the body tilt angle according to the difference between the signal quantities.
It should be understood that the pen body inclination information 401b may also be obtained by other manners, which is not limited in this embodiment of the application.
The hand support information 401c is used to indicate a support position, a support area, a support site contour, and the like of the real hand 13 on the first display screen 101.
The hand support information 401c may be acquired by the first electronic device 11. Generally, the signal quantity of the display area under the real hand 13 on the first display screen 101 differs from that of other display areas, and the hand support information 401c can be acquired from this signal difference. By way of example and not limitation, the signal difference may be acquired by a sensor provided in the first electronic device 11, such as a touch sensor or a pressure sensor. In the embodiment of the application, the touch sensor can also distinguish between a touch by the user's hand and a touch by the stylus.
In step S401, the first electronic device 11 may determine the pen-holding gesture of the first user according to the pen holding position information 401a, the pen body inclination information 401b, and the hand support information 401c. Because the hand support information 401c is considered in addition to the pen holding position information 401a and the pen body inclination angle information 401b, the determined pen-holding posture can better match the first user's actual posture, making the subsequently restored pen technique more accurate.
In some embodiments, the first information may further include pen-grip pressure information 401d, and the pen-grip pressure information 401d is used for indicating the grip strength of the real hand 13.
The pen holding pressure information 401d may be detected by the first stylus 12 and then transmitted to the first electronic device 11. By way of example and not limitation, the pen-holding pressure information 401d may be detected by a first pressure sensor disposed on the pen body surface of the first stylus 12. In this embodiment, the pen holding pressure information 401d may include pressure values corresponding to various positions where the real hand 13 contacts the first stylus 12.
In some embodiments, the first information may also include pen size information 401e. The pen size information 401e may be used to construct a virtual pen model corresponding to the first stylus 12, or, in other embodiments, to calculate the pen holding position, the pen body tilt, and the like.
In the embodiment of the present application, there may be various ways to determine the pen holding posture according to the first information.
As an example, the first electronic device 11 may be preset with a three-dimensional model of the virtual hand 14 and a three-dimensional model of the virtual pen 15. The joints of the virtual hand 14 are flexibly movable so that various gestures can be presented as desired. Specifically, in step S401, the posture of the virtual hand 14 holding the virtual pen 15 may be adjusted according to the first information: for example, the pen-holding fingers and pen-holding position of the virtual hand 14 are adjusted according to the pen holding position information 401a, the support posture of the virtual hand 14 is further adjusted according to the hand support information 401c, the tilt angle of the virtual pen 15 is adjusted according to the pen body tilt angle information 401b, and so on. The three-dimensional model with the adjusted posture is then rendered to obtain the virtual pen-holding posture.
Optionally, in some embodiments, corresponding to each pen tip position, the first electronic device 11 adjusts the three-dimensional model according to the first information and renders the three-dimensional model to obtain a virtual pen-holding gesture corresponding to the pen tip position. In other embodiments, if the pen holding posture of the first user is kept unchanged and only the pen tip position is changed in the process of drawing the handwriting by the first user, the first electronic device 11 may perform adjustment and rendering of the three-dimensional model only once, and the obtained virtual pen holding posture corresponds to each pen tip position.
In the embodiment of the present application, the model of the virtual pen 15 may be fixed, e.g. having a fixed size and a fixed pen tip type. Of course, in other embodiments, the model of the virtual pen 15 may be adjusted according to the size information of the first stylus 12 to match the first stylus 12, so that the rendered virtual pen looks more realistic. Alternatively, the pen tip type of the virtual pen 15 may change with the brush selected by the user, making the virtual pen more intuitive. In some embodiments, the first electronic device 11 may provide an option for the user to choose whether the virtual pen model shows the real stylus or the currently selected brush.
As another example, a plurality of virtual grip templates may be preset in the first electronic device 11, and the virtual grip templates may be three-dimensional or two-dimensional (e.g., a two-dimensional map of the first viewing angle). Each virtual pen holding posture template corresponds to a pen holding posture. In step S401, the first electronic device 11 may match the first information with the virtual grip template, for example, may match the pen holding position information 401a with the pen holding position of the virtual hand 14 in the virtual grip template, match the pen body inclination angle information 401b with the inclination angle of the virtual pen 15 in the virtual grip template, match the hand supporting information 401c with the supporting position, area of the virtual hand 14 in the virtual grip template, and so on. The first electronic device 11 may use the virtual grip template with the highest matching degree with the first information as the grip gesture to be presented.
The first viewing angle refers to a viewing angle of the first user.
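The template-matching alternative can be sketched as a weighted similarity score over the three cues described above; the field names, weights, and template contents are illustrative assumptions, not the actual matching algorithm:

```python
def match_grip_template(first_info, templates):
    """Return the preset virtual grip template that best matches the first
    information, using a weighted similarity over the three cues described
    above.  Field names, weights, and template contents are illustrative
    assumptions.
    """
    def score(t):
        s = 0.0
        s += 0.4 * (t["grip_position"] == first_info["grip_position"])         # 401a
        s += 0.3 * (1.0 - abs(t["tilt_deg"] - first_info["tilt_deg"]) / 90.0)  # 401b
        s += 0.3 * (t["support_zone"] == first_info["support_zone"])           # 401c
        return s
    return max(templates, key=score)

templates = [
    {"name": "tripod", "grip_position": "low",  "tilt_deg": 55.0, "support_zone": "heel"},
    {"name": "brush",  "grip_position": "high", "tilt_deg": 85.0, "support_zone": "none"},
]
info = {"grip_position": "high", "tilt_deg": 80.0, "support_zone": "none"}
print(match_grip_template(info, templates)["name"])  # brush
```

Picking the highest-scoring template mirrors the text's "highest matching degree" criterion; a real implementation would match against rendered two-dimensional or three-dimensional templates rather than dictionaries.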
S402, acquiring a handwriting image.
Specifically, the handwriting image may include nib position information 402a, the nib position information 402a indicating a position of a nib of the first stylus 12 on the first display screen 101.
By way of example and not limitation, the electromagnetic film in the first electronic device 11 may detect the electromagnetic signal triggered by the first stylus 12 and convert it into report-point data transmitted to the system-on-chip. The system-on-chip passes the report-point data to the display controller, so that the handwriting image is drawn on the first display screen 101. The report-point data may comprise coordinate data of the writing of the first stylus 12, i.e. the coordinates of the pen tip at each report point.
In some embodiments, the handwriting image may further include nib pressure information 402b, the nib pressure information 402b indicating the force with which the real hand 13 presses the nib. According to the pressure of the pen tip, the handwriting image of the first handwriting pen 12 can show the effect of the line tip, thickness and the like.
The tip pressure information 402b may be detected by the first stylus 12 and transmitted to the first electronic device 11. By way of example and not limitation, the tip pressure information 402b may be detected by a second pressure sensor disposed within the body of the first stylus 12. Alternatively, the pen tip pressure information 402b may be detected by the first electronic device 11, for example by a pressure sensor disposed on the first electronic device 11; this is not limited in the embodiment of the application. Optionally, in a specific implementation, the first electronic device 11 may convert the directly or indirectly acquired pen tip pressure information 402b into report-point data and transmit it to the display screen controller through the system-on-chip. That is to say, the report-point data used for drawing the handwriting image may include, besides the coordinate data of each report point, pressure-sensitive data corresponding to each report point, so that the handwriting image displayed on the first display screen 101 can realize different stroke effects according to the pressure-sensitive data (i.e. pen tip pressure).
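The report-point data described above (coordinates plus per-point pressure) can be sketched as a small structure; the field names and the pressure-to-width mapping are illustrative assumptions, since the real report-point format is device-specific:

```python
from dataclasses import dataclass

@dataclass
class ReportPoint:
    """One report point driving the handwriting image: where the pen tip is
    plus how hard it presses.  Field names are illustrative; the real
    report-point format is device-specific."""
    x: float         # tip coordinate on the display
    y: float
    pressure: float  # normalized tip pressure, 0.0 to 1.0
    t_ms: int        # timestamp, usable later for speed and direction

def stroke_width(p: ReportPoint, base: float = 2.0, gain: float = 6.0) -> float:
    """Map tip pressure to line width so harder presses draw thicker lines."""
    return base + gain * p.pressure

print(stroke_width(ReportPoint(10.0, 20.0, 0.5, 0)))  # 5.0
```

The display controller consuming such points can then vary line thickness with pressure, producing the different stroke effects the text describes.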
In some embodiments, the handwriting image may also include pen heading information 402c and pen speed information 402d. The pen heading information 402c and the pen speed information 402d may be derived from the change of the position of the tip of the first stylus 12 on the first display screen 101 during the stroke: the direction in which the tip position changes is the pen-moving direction, and the rate at which it changes is the pen-moving speed.
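The position-difference derivation above can be sketched in a few lines; the units (pixels, milliseconds) and tuple layout are assumptions for illustration:

```python
import math

def pen_motion(p0, p1, dt_ms: float):
    """Pen-moving direction and speed from two successive tip positions
    (x, y) on the display, dt_ms milliseconds apart."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    direction_deg = math.degrees(math.atan2(dy, dx))  # heading of the stroke
    speed = math.hypot(dx, dy) / dt_ms                # pixels per millisecond
    return direction_deg, speed

d, s = pen_motion((0.0, 0.0), (3.0, 4.0), dt_ms=10.0)  # d ~ 53.13 deg, s = 0.5
```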
Alternatively, the first stylus 12 may be provided with an acceleration sensor 223 as shown in fig. 3; the acceleration sensor 223 may detect the acceleration of the first stylus 12 during movement, and the pen-moving speed can then be obtained through an integral operation. Compared with obtaining the pen-moving speed from report points, obtaining it through the accelerometer can reduce latency.
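A minimal sketch of the accelerometer-based alternative, assuming evenly spaced acceleration samples already projected onto the movement direction; real firmware would also correct for sensor drift and gravity:

```python
def speed_from_accel(samples, dt: float, v0: float = 0.0) -> float:
    """Pen-moving speed obtained by integrating accelerometer readings.

    samples: accelerations along the movement direction (m/s^2), sampled
    every dt seconds; v0 is the speed at the first sample.  Trapezoidal
    integration; real firmware would also correct for sensor drift.
    """
    v = v0
    for a_prev, a_next in zip(samples, samples[1:]):
        v += 0.5 * (a_prev + a_next) * dt
    return v

# constant 2 m/s^2 for 0.4 s starting from rest gives 0.8 m/s
v = speed_from_accel([2.0] * 5, dt=0.1)
```

Because each new sample updates the estimate immediately, this avoids waiting for the next report point, which is the latency advantage the text mentions.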
The handwriting image obtained in step S402 shows the brush effect, and specifically can show changes in length, thickness, curvature, density, lightness, rigidity, softness, and the like of lines.
S403, superimposing the handwriting image and the pen-holding posture to form a pen-skill teaching video.
In this step, the handwriting image and the pen-holding gesture are superimposed so that the pen technique of the first user can be restored. The pen technique refers to the process of moving the pen in writing and drawing scenes; for example, it may cover the processes of setting down, moving, and lifting the pen: which pen-holding posture is used, in which direction the pen moves, at what speed, and so on, so that a line can vary in length, thickness, straightness, density, lightness, rigidity, softness, and the like.
In the embodiment of the application, the pen skill can comprise pen holding gesture and pen using action. The pen holding posture refers to the pen holding posture of the hand, and includes the pen holding position, the pen holding pressure, the pen body inclination angle, the hand support position, the hand support area, the hand support contour and the like. The pen-using action refers to an action in the process of pen-moving, and includes the speed, direction, force and the like of the pen-moving. In the embodiment of the application, the pen action can be embodied by the handwriting image, for example, in a continuous time, the change of the pen point position in the handwriting image can indicate the speed and the direction of pen movement, and the change of the thickness of the line in the handwriting image can indicate the strength of the pen movement.
In this embodiment, the pen technique restored by the first electronic device 11 may be referred to as a virtual pen technique. Generally, the richer the information that the virtual pen technique includes, the closer the effect that is ultimately rendered to the actual pen technique.
In step S403, the formed pen-skill teaching video includes the handwriting image and a pen-holding gesture corresponding to each pen tip position in the handwriting image (i.e. an action picture of the virtual pen technique, which may also be referred to as a virtual pen-holding gesture in this embodiment of the present application). In other words, the pen-skill teaching video comprises a plurality of video frames, and each of these video frames contains the pen tip trajectory completed at the corresponding time and the pen-holding gesture corresponding to the last pen tip position in that trajectory. In this way, the finally formed pen-skill teaching video can present both the handwriting image drawn on the first display screen 101 by the first stylus 12 and the pen technique used to draw it.
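The frame structure described above (each video frame carrying the trajectory completed so far plus the grip pose for the newest report point) can be sketched as follows, with dictionaries standing in for rendered images:

```python
def build_frames(report_points, poses):
    """Assemble pen-skill teaching video frames: frame i carries the pen tip
    trajectory completed so far plus the grip pose for the newest tip
    position.  report_points and poses are parallel lists (one pose per tip
    position); both are simplified stand-ins for rendered images.
    """
    frames = []
    for i in range(len(report_points)):
        frames.append({
            "trajectory": report_points[: i + 1],  # handwriting drawn so far
            "grip_pose": poses[i],                 # pose at the latest point
        })
    return frames

frames = build_frames([(0, 0), (1, 1), (2, 1)], ["pose0", "pose1", "pose2"])
print(len(frames), frames[-1]["grip_pose"])  # 3 pose2
```

Playing such frames in order shows the line growing while the superimposed pose tracks the newest tip position, which is exactly the dual display the step aims at.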
In this step, since a corresponding virtual pen-holding posture is superimposed at each pen tip position in the handwriting image, the handwriting image can still display the evolution of the lines. Thus, when the formed pen-skill teaching video is presented on the display screen of the second electronic device 21 (i.e. the second display screen 201), both the brush effect of the first stylus 12 (e.g. the handwriting image 16 shown in fig. 4) and the pen technique of the first user can be displayed. Accordingly, the user of the second electronic device 21 (i.e. the second user) can see how the first user achieved the corresponding brush effect through the pen technique, which improves the learning effect.
In some embodiments, the teaching of the video by pen techniques may further include operations of the first user's finger on the first display screen 101, such as a click operation, a slide operation, a long press operation, and the like.
In other embodiments, the teaching of the pen technique video may further include an operation of the first stylus 12 on the first display screen 101, such as a click operation, a slide operation, a long press operation, and the like.
In some embodiments, the pen-skill teaching video is sent to the second electronic device 21 after being produced by the first electronic device 11. For example, the first user completes a painting on the first display screen 101 using the first stylus 12; accordingly, the first electronic device 11 may perform the above-described steps S401-S403 to form a pen-skill teaching video showing how the painting was completed. After the video is completed, the first electronic device 11 may transmit all of its video frames to the second electronic device 21 at one time. The second electronic device 21, having received the video, may play it on the second display screen 201, as shown for the second display screen 201 in fig. 4. In this case, the user of the second electronic device 21 can control the playback progress of the pen-skill teaching video, for example fast forward, rewind, double speed, slow play, pause, and so on. The first user's authoring process on the first electronic device 11 is decoupled from the second user's viewing process on the second electronic device 21.
In other embodiments, the pen-skill teaching video may be transmitted to the second electronic device 21 in live form. That is, while the first user is creating on the first electronic device 11, the second user may watch on the second electronic device 21 in real time; the production and broadcasting of the video are synchronous. In this case, the user of the second electronic device 21 can be considered to be watching a live video and cannot adjust the playback progress. Without considering time delay, the first user's authoring process on the first electronic device 11 is synchronized with the second user's viewing process on the second electronic device 21. By way of example and not limitation, the first user may go live by sharing the desktop.
In still other embodiments, the pen-skill teaching video is stored in the first electronic device 11 after being produced by it, and the first electronic device 11 may later broadcast the video live. In this case, the first user has already completed the creation; the resulting pen-skill teaching video may be stored in a video file on the first electronic device and then presented in real time to users watching the teaching, either via a remote network live broadcast or via a short-range interconnection. The first user's authoring process on the first electronic device 11 is decoupled from the live broadcast through the first electronic device 11, while, without considering time delay, the playback of the video by the first electronic device 11 is synchronized with the second user's viewing process on the second electronic device 21.
In the embodiment of the present application, when the first electronic device 11 transmits the pen skill teaching video to the second electronic device 21, the pen skill teaching video may be directly transmitted or may be indirectly transmitted. For example, if the first electronic device 11 and the second electronic device 21 can be connected in a short-range manner and the first electronic device 11 and the second electronic device 21 can directly communicate, the first electronic device 11 can directly send the pen skills instructional video to the second electronic device 21. For another example, if the first electronic device 11 and the second electronic device 21 can both use the service provided by the same server, the first electronic device 11 may first send the pen skill teaching video to the server, and then the server sends the video to the second electronic device 21.
In this embodiment, after finishing producing the pen-skill teaching video, the first electronic device 11 may store it locally or in the cloud.
With continued reference to fig. 4, in the above embodiment, step S403 is executed by the first electronic device 11, and the second electronic device 21 may directly acquire the pen skill teaching video and display the video on the second display screen 201 to the second user. In other embodiments, step S403 may also be performed by the second electronic device 21.
For example, the first electronic device 11 may send the handwriting image obtained in step S402 and the virtual pen-holding gesture (such as a three-dimensional model of the virtual pen-holding gesture, or a rendered pen-holding gesture, or a virtual pen-holding template) obtained in step S401 to the second electronic device 21, and the handwriting image and the virtual pen-holding gesture are superimposed by the second electronic device 21 to obtain the pen skill teaching video.
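The superimposing step lends itself to a simple alpha-blend of the rendered pen-holding gesture over each handwriting frame. The following is a minimal per-pixel sketch; the blending rule and the per-pixel opacity mask are assumptions, since the embodiment does not specify a compositing method:

```python
def blend_pixel(bg, fg, alpha):
    """Alpha-blend one RGB pixel of the rendered pen-holding gesture (fg)
    over the handwriting image (bg). alpha is the gesture layer's opacity
    in [0, 1]; channels are floats in [0, 1]."""
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))

def superimpose_frame(handwriting, gesture, alpha_mask):
    """Composite one video frame: `handwriting` and `gesture` are
    same-sized nested lists of RGB tuples, and `alpha_mask` gives the
    gesture layer's per-pixel opacity."""
    return [[blend_pixel(b, f, a)
             for b, f, a in zip(bg_row, fg_row, a_row)]
            for bg_row, fg_row, a_row in zip(handwriting, gesture, alpha_mask)]

# A 1x2 frame: the gesture fully covers the left pixel, is absent on the right.
bg = [[(1.0, 1.0, 1.0), (1.0, 1.0, 1.0)]]
fg = [[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]]
mask = [[1.0, 0.0]]
frame = superimpose_frame(bg, fg, mask)
```

Whichever device performs step S403 (first or second electronic device), the same compositing would apply frame by frame.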
Alternatively, the first electronic device 11 may send the first information and the handwriting image obtained in step S402 to the second electronic device 21, and the second electronic device 21 executes steps S401 and S403.
In some embodiments, whether step S403 is performed by the first electronic device 11 may be selected by the user, i.e., the author (first user) may select how to share the authoring effort. For example, when the first user shares the video, the first user may select to superimpose the handwriting image and the virtual pen-holding gesture to form a complete pen skill teaching video for sharing, or may select to separately share the handwriting image and the virtual pen-holding gesture to superimpose the handwriting image and the virtual pen-holding gesture by the electronic device of the learner to form the complete pen skill teaching video. Thus the author can have more sharing options.
In some embodiments, whether the first electronic device 11 performs step S403 may also be selected by the user. For example, the first electronic device 11 may provide options such as "show pen grip gesture" or "hide pen grip gesture" for selection by the user. If the user selects "display pen-holding gesture", the first electronic device 11 may perform steps S401 to S403. If the user selects "hide pen-holding gesture", the first electronic device 11 may only perform step S402.
The method provided by the embodiment of the present application can be applied to various creation scenarios that use a stylus, for example, using the first stylus 12 as a sign pen, a marker pen, a calligraphy pen, a painting brush, or the like to create calligraphy or paintings. When the real hand 13 holds the first stylus 12 in the pen-holding posture corresponding to a given pen type, the first electronic device 11 may perform the foregoing step S401 to determine the corresponding pen-holding posture, which is described below with reference to specific examples. The pen-holding posture determined by the first electronic device 11 in the embodiment of the present application may be referred to as a virtual pen-holding posture.
FIG. 5 is a flow chart illustrating a pen-holding gesture determination method according to an embodiment of the present disclosure.
As shown in fig. 5 (a), the first stylus 12 may be described by a rectangular coordinate system with the pen tip as the origin; illustratively, the central axis of the pen body of the first stylus 12 is the Z axis, and the X axis and the Y axis are perpendicular to each other and both perpendicular to the Z axis. Here, the directions of the X axis and the Y axis may be fixed, or may differ between pen-holding postures. For example, a straight line extending along the tilt direction of the tip of the first stylus 12 during use may be taken as the X axis, that is, the extending direction of the projection of the pen body of the first stylus 12 on the plane of the first display 101 during use may be taken as the X axis. The direction orthogonal to the X and Z axes is the Y axis.
The pen-holding position information 401a acquired by the first electronic device 11 may include coordinates of four positions where the real hand 13 contacts with the pen body of the first stylus 12 when the first user holds the pen, specifically, P1(X1, Y1, Z1), P2(X2, Y2, Z2), P3(X3, Y3, Z3), and P4(X4, Y4, Z4). The original point is the position of the pen point, and the coordinates are (0, 0, 0). The relative position relationship of the four positions can be obtained according to the coordinates of the four positions, so that the part of the hand contacting with each position is determined. In a specific implementation, the first electronic device 11 may store the number of contact points and the relative position relationship between the contact points corresponding to each pen holding posture, so that after the first electronic device 11 acquires the pen holding position information 401a, the holding portion of the corresponding hand may be determined according to the number of the contact points and the relative position relationship between the contact points, that is, it is determined which fingers and which portions are used by the first user to hold the first stylus 12.
For example, if (Z4 > Z1) & (Z4 > Z2) & (Z4 > Z3), the P4 position is determined to be the index finger root. If (Z1 ≈ Z2 ≈ Z3) & (X1 < X2 < X3), the position P1 is determined as the index finger pulp, the position P2 is the thumb pulp, and the position P3 is the midpoint of the first joint of the middle finger. Here, "&" means "and".
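The two example rules above can be sketched in code. This is an illustrative implementation only: the tolerance used for "Z1 ≈ Z2 ≈ Z3" is a hypothetical parameter, not a value given in the embodiment:

```python
def classify_contacts(points, tol=2.0):
    """Infer which hand parts touch the pen body from four contact
    coordinates. `points` maps 'P1'..'P4' to (x, y, z) tuples in the
    pen's own coordinate system (origin at the pen tip, Z along the pen
    axis). `tol` is a hypothetical tolerance for treating Z values as
    approximately equal."""
    (x1, _, z1), (x2, _, z2), (x3, _, z3), (_, _, z4) = (
        points['P1'], points['P2'], points['P3'], points['P4'])
    parts = {}
    # Rule 1: the contact farthest from the tip along Z is the index finger root.
    if z4 > z1 and z4 > z2 and z4 > z3:
        parts['P4'] = 'index finger root'
    # Rule 2: three contacts at roughly the same height, ordered along X.
    if abs(z1 - z2) < tol and abs(z2 - z3) < tol and x1 < x2 < x3:
        parts['P1'] = 'index finger pulp'
        parts['P2'] = 'thumb pulp'
        parts['P3'] = 'middle finger first joint'
    return parts
```

A full implementation would hold one such rule set per stored pen-holding posture and fall through until one matches.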
After the correspondence between the holding portion of the hand and the contact point is determined, the three-dimensional virtual hand 14 and the three-dimensional virtual pen 15 are assembled according to the correspondence. As shown in fig. 5 (b), the index finger pulp, thumb pulp, middle finger first joint midpoint and index finger root of the virtual hand 14 are placed at positions P1, P2, P3 and P4 on the virtual pen 15, respectively. In this case, joints of the virtual hand 14 may be flexibly movable, and thus the posture of the virtual hand 14 may be adjusted by adjusting joint points of the virtual hand 14.
After the virtual hand 14 and the virtual pen 15 are assembled, as shown in fig. 5 (c), the hand angle may be adjusted according to the pen body inclination angle information 401b, for example, the assembled virtual hand 14 and virtual pen 15 may be directly inclined at the same time according to the pen body inclination angle α, and the supporting portion, the supporting area, and the like of the virtual hand 14 may be determined according to the hand supporting information 401 c. In some embodiments, this step may accommodate a subsequent rendering process.
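Tilting the assembled virtual hand 14 and virtual pen 15 together by the pen body inclination angle α amounts to one rigid rotation of the model's vertices. A minimal sketch, under the assumption that α tilts the pen axis (Z) toward the X direction, i.e., a rotation about the Y axis:

```python
import math

def tilt_model(vertices, alpha_deg):
    """Rotate assembled hand-and-pen vertices about the Y axis by the
    pen body inclination angle, tilting the pen axis (Z) toward X.
    `vertices` is a list of (x, y, z) tuples; returns the rotated list."""
    a = math.radians(alpha_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x * c + z * s, y, -x * s + z * c) for (x, y, z) in vertices]

# A point one unit up the pen axis, tilted 90 degrees, lands on the X axis.
tipped = tilt_model([(0.0, 0.0, 1.0)], 90.0)
```

In practice the rotation axis would follow the X-axis convention chosen above (the projection of the pen body on the display plane), and the hand support point from 401c would anchor the model after the rotation.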
In other embodiments, the first electronic device 11 may store the hand posture three-dimensional models corresponding to each pen holding posture (for example, store the part (b) in fig. 5 with the virtual pen 15 removed), the first electronic device 11 matches the acquired pen holding position information 401a with the hand posture three-dimensional models respectively, and the hand posture three-dimensional model with the highest matching degree may be directly assembled with the virtual pen 15.
In some other embodiments, the first electronic device 11 may also store a virtual holding gesture template corresponding to each pen holding gesture, and the first electronic device 11 matches the acquired first information with the virtual holding gesture template, and uses the virtual holding gesture template with the highest matching degree as the pen holding gesture to be presented. Here, the virtual grip template includes an assembled virtual hand and a virtual pen.
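Matching against stored templates could be done by a nearest-neighbor comparison of contact points. The scoring below (sum of squared distances, lower is better) is an assumed stand-in for the unspecified "matching degree"; template names and point counts are hypothetical:

```python
def best_template(observed, templates):
    """Pick the stored virtual grip template whose contact points are
    closest to the observed ones. `observed` is a list of (x, y, z)
    contact coordinates; `templates` maps a template name to a list of
    (x, y, z) points in the same order and coordinate system."""
    def cost(name):
        return sum((ox - tx) ** 2 + (oy - ty) ** 2 + (oz - tz) ** 2
                   for (ox, oy, oz), (tx, ty, tz) in zip(observed, templates[name]))
    return min(templates, key=cost)

templates = {'hard pen grip': [(0.0, 0.0, 10.0)],
             'brush grip': [(0.0, 0.0, 30.0)]}
match = best_template([(0.0, 0.0, 11.0)], templates)
```

The same scoring would extend naturally to the other first-information fields (inclination angle, grip pressure) by adding weighted terms to the cost.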
It is understood that the first user may select a brush on the first electronic device 11 when using the first stylus 12 for creating, for example, the first electronic device 11 may provide a pencil brush, a writing brush, a pen brush, a mark brush, and so on. The first user selects which brush to use, and may consider the first stylus 12 to be used as a corresponding type of pen.
In the embodiment of the present application, the virtual pen 15 may be a model corresponding to the first stylus 12, so that after the model of the first stylus 12 is established for the first time, it can be reused in subsequent processes without being rebuilt. In other embodiments, to improve the visual experience and learning effect, after the first electronic device 11 determines what type of pen the first stylus 12 is used as, the virtual pen 15 may be a model of that type of pen. The dimensions of the model may refer to the dimensional information of the first stylus 12. For example, if the first stylus 12 is used as a pen, the virtual pen 15 may be a model of a pen; if the first stylus 12 is used as a writing brush, the virtual pen 15 may be a model of a writing brush.
FIG. 6 is a flow chart illustrating another method for determining a pen-holding gesture according to an embodiment of the present disclosure.
As shown in fig. 6 (a), the first stylus 12 may be described by a rectangular coordinate system with the pen tip as the origin. The pen-holding position information 401a acquired by the first electronic device 11 includes coordinates of four positions where the real hand 13 contacts the pen body of the first stylus 12 when the first user holds the pen, specifically, P1(X1, Y1, Z1), P2(X2, Y2, Z2), P3(X3, Y3, Z3), and P4(X4, Y4, Z4). The origin is the position of the pen tip, with coordinates (0, 0, 0). The relative position relationship of the four positions can be obtained from their coordinates, so that the part of the hand contacting each position is determined.
For example, if (| Z1-Z2| is less than the first preset distance) & (| Z3-Z4| is less than the second preset distance) & (| Z3-Z2| is greater than the third preset distance) & (Z2 > Z1) & (Z4 > Z3), it may be determined that the P1 position is the nail root of the ring finger, the P2 position is the middle finger abdomen, the P3 position is the thumb abdomen, and the P4 position is the first joint or the second joint of the index finger.
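The brush-grip rule above can likewise be sketched as a predicate over the four Z coordinates. The three preset distances are hypothetical parameters; the embodiment leaves their values unspecified:

```python
def classify_brush_contacts(points, d1=1.0, d2=1.0, d3=5.0):
    """Apply the writing-brush grip rule: P1/P2 close together in Z,
    P3/P4 close together in Z, a clear gap between P2 and P3, and each
    pair ordered from pen tip outward. d1, d2, d3 stand in for the
    first, second, and third preset distances. Returns the hand-part
    mapping, or None if the rule does not match."""
    z1, z2, z3, z4 = (points[k][2] for k in ('P1', 'P2', 'P3', 'P4'))
    if (abs(z1 - z2) < d1 and abs(z3 - z4) < d2
            and abs(z3 - z2) > d3 and z2 > z1 and z4 > z3):
        return {'P1': 'ring finger nail root',
                'P2': 'middle finger pulp',
                'P3': 'thumb pulp',
                'P4': 'index finger first or second joint'}
    return None
```

Together with the hard-pen rule from fig. 5, a device could try each stored rule in turn and keep the first (or best) match.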
After the correspondence between the holding portion of the hand and the contact point is determined, the virtual hand 14 and the virtual pen 15 are assembled according to the correspondence. As shown in fig. 6 (b), at positions P1, P2, P3, and P4 on the virtual pen 15, the ring finger, middle finger, thumb, and index finger of the virtual hand 14 are placed, respectively. In this case, joints of the virtual hand 14 may be flexibly movable, and thus the posture of the virtual hand 14 may be adjusted by adjusting joint points of the virtual hand 14.
Of course, in some other embodiments, the first electronic device 11 may store a three-dimensional model of a hand posture corresponding to a writing brush holding posture (for example, store the portion of fig. 6 (b) with the virtual pen 15 removed), match the three-dimensional hand posture model according to the pen-holding position information 401a obtained by the first electronic device 11, and then assemble the three-dimensional hand posture model with the virtual pen. In this example, the virtual pen 15 may be a model of a writing brush, or may be a model of the first stylus 12, which is not limited in this embodiment of the present application.
After the virtual hand 14 and the virtual pen 15 are assembled, the hand angle may be adjusted according to the pen body inclination angle information 401b, and the support position, the support area, and the like of the virtual hand 14 may be determined according to the hand support information 401c. In this example, since the first stylus 12 is perpendicular to the plane of the first display 101, the virtual pen 15 is approximately perpendicular to the XY plane.
In other embodiments, the first electronic device 11 may also store a virtual grip template corresponding to the writing brush grip, and the first electronic device 11 may match the virtual grip template corresponding to the writing brush grip according to the first information, so that the virtual grip template corresponding to the writing brush grip may be used as the grip to be presented. Here, the virtual grip template includes an assembled virtual hand and a virtual pen.
The pen-holding gestures in fig. 5 and 6 are examples only, and generally, each type of pen has a basic pen-holding gesture.
For example, when the first stylus 12 is used for hard-tipped pen calligraphy, i.e., when a pencil- or pen-like brush effect is written with the first stylus 12, the pen-holding posture shown in fig. 5 is generally adopted. Specifically, the index finger and the thumb are naturally bent and pinch the pen, either touching each other or slightly apart; the pen body rests against the first joint of the middle finger; the pen holder leans obliquely against the root of the index finger; the ring finger and the little finger rest closely against the middle finger; the palm center is kept hollow; and the side of the little finger and the side of the palm are supported on the drawing surface.
For another example, when the first stylus 12 is used for writing brush calligraphy or traditional Chinese painting, i.e., when the first stylus 12 is used to write with a soft brush effect such as a writing brush, the pen-holding postures shown in fig. 6 and 7 are usually adopted. A writing brush holding method generally comprises a finger method, a wrist method, and a pen position. The finger method refers to the posture and position of the fingers holding the pen, and can be classified into two-finger pen holding (as shown in fig. 7 (a)), three-finger pen holding, four-finger pen holding (as shown in fig. 6 (b) and fig. 7 (b)), five-finger pen holding (as shown in fig. 7 (c)), and the like. The wrist method refers to how the hand, wrist, and elbow are positioned when holding the pen, and can be divided into resting-wrist, suspended-wrist, resting-arm with suspended wrist, and suspended-arm with suspended wrist methods. The pen position refers to the height at which the fingers contact the pen holder, and can be divided into four grades: low, middle, high, and ultra-high, where the height of the pen position is determined by the middle finger holding the pen. Combinations of different finger methods, wrist methods, and pen positions form various pen-holding methods, such as the single-hook method, double-hook method, tube-holding method, tube-pinching method, tube-twisting method, stirrup method, returned-wrist method, and the like, so as to adapt to writing and drawing in different postures, writing in different fonts, and so on.
The pen-holding posture corresponding to writing brush calligraphy can be matched with corresponding pen-using actions to form the pen techniques of writing brush calligraphy, such as speed control, pressing, and turning. The pen-holding posture corresponding to traditional Chinese painting can be matched with corresponding pen-using actions to form the pen techniques of traditional Chinese painting, such as outlining, texturing, rubbing, washing, and dotting.
As another example, when the first stylus 12 is used for sketching, the pen-holding gesture may include: 1) a pen-holding posture for large-area hatching, as shown in fig. 8 (a), in which the fingers hold the pen holder and the wrist joint moves left and right with a large range of motion; 2) a pen-holding posture for small-area hatching, as shown in fig. 8 (b) and (c), in which the thumb, index finger, and middle finger hold the pen while the little finger rests on the drawing paper to control the pen force; the range of left-right movement is limited by the supporting point, so short hatching strokes can be drawn; 3) a pen-holding posture for local depiction, as shown in fig. 8 (d), which is similar to that in fig. 5 except that the little finger rests on the drawing paper, and the pen is driven by the movement of the finger joints, flexibly and accurately; 4) a pen-holding posture for detail drawing, as shown in fig. 5 (c), which is substantially the same as the ordinary hard-tipped pen writing grip and is suitable for drawing the details and edge lines of an object.
As can be seen from the figure, the holding positions of the fingers, the inclination angle of the pen body and the hand support are different under different pen holding postures. In the embodiment of the present application, the above factors are fully considered, so that a more accurate and appropriate pen holding posture can be determined in step S401, and the pen skill of the user can be more accurately reproduced.
To facilitate understanding of the on-line teaching method provided in the embodiments of the present application, drawing with the first stylus 12 is taken as an example, and is further described with reference to some exemplary but non-limiting User Interface (UI) diagrams.
Fig. 9 to 12 are schematic diagrams illustrating some user interfaces displayed on the first electronic device according to the embodiment of the present application. As shown in fig. 9-12, a main interface of a certain drawing software may be displayed on the first display screen 101.
Referring to fig. 9, the first user may first select a brush type before drawing on the first display screen 101 using the first stylus 12. Illustratively, the first user may select a brush type in a "brush library" option box 102 of the drawing software, where the brush type includes, but is not limited to, a pencil, a paintbrush, a felt-tip pen, a water brush, a calligraphy brush, an ink pen, and the like. It will be appreciated that the manner in which the "brush library" option box 102 shown in FIG. 9 displays brushes and the type of brush displayed is merely exemplary and should not be considered a limitation of the present application. The first user may also adjust the transparency and size of the selected brush by sliding buttons 103 and 104, respectively, as shown in FIG. 9.
Referring to fig. 10, the first user may select whether to record a video before drawing on the first display screen 101 using the first stylus 12. Illustratively, the first user may turn the "record video" switch on or off in the "video" option box 105 of the drawing software. For example, if the first user turns on the "record video" switch, the first electronic device 11 may start recording the handwriting image drawn by the first user after a preset time or after receiving an instruction from the first user. In a specific implementation, the "record video" function may be implemented by a screen recording technology.
In some embodiments, after recording the video, the first user may select a "video playback" function in the "video" option box 105, and the first electronic device 11 may present the recorded video to the first user. That is, the first user may preview the recorded video on his or her own electronic device. Here, the video previewed by the first user may be a video with the pen-holding gesture superimposed (i.e., the aforementioned pen skill teaching video), or may be a video showing only the handwriting image. For example, if the first user turns on the "superimpose pen-holding gesture in video" switch, the previewed video is the handwriting image with the pen-holding gesture superimposed; if the first user turns off the switch, the previewed video shows only the handwriting image. Alternatively, the previewed video may default to the handwriting image with the pen-holding gesture superimposed, in which case a switch operable by the first user may be provided on the preview interface to display or hide the pen-holding gesture.
In some embodiments, after recording the video, the first user may select the "export video" function in the "video" option box 105, and the first electronic device 11 may store the recorded video in the local or cloud end, or send the recorded video to other users through an application such as email, instant messaging, and the like.
In some embodiments, the first user may select the "import video" function in the "videos" option box 105, and the first user may select a video from the local or cloud for presentation in the drawing software. For example, the first user may choose to present videos that document other user authoring processes, which may facilitate the first user viewing the videos for learning or evaluation, or facilitate the first user viewing the videos while copying learning.
Referring to fig. 11, the first user may make a preference setting before drawing on the first display screen 101 using the first stylus 12. Illustratively, the first user may turn on or off the "superimpose pen-grasp gesture in video" switch in the "preferences" tab 106 of the drawing software. For example, if the first user turns on the "superimpose pen-holding posture in video" switch, the first electronic device 11 may superimpose pen-holding posture while recording the handwriting image, and if the first user turns off the "superimpose pen-holding posture in video" switch, the first electronic device 11 may record the handwriting image without superimposing pen-holding posture. In this way, the first user may choose to record only the handwriting image to present the creation result, or record the handwriting image and superimpose the pen-holding gesture to present the creation result and the corresponding creation idea.
In some embodiments, the first electronic device 11 may also provide the first user with an option to select the virtual pen model after the first user turns on the "superimpose pen-holding gesture in video" switch. For example, if the first user selects the "stylus" option, the pen used in the superimposed pen-holding gesture is a model of the first stylus 12 when recording the video; if the first user selects the "brush library" option, the pen used in the superimposed pen-holding gesture is the model of the brush selected by the first user when recording the video.
In some embodiments, the first electronic device 11 may also provide the first user with an option to select the virtual pen view angle after the first user turns on the "superimpose pen-holding gesture in video" switch. For example, if the first user selects the "first view" option, the superimposed pen-holding gesture is a gesture at the first view (i.e., the view of the first user) when recording a video; if the first user selects the option of "best view", the superimposed pen-holding posture is the posture of the best view when recording the video. Here, the optimum viewing angle may be a viewing angle that enables substantially full view of the hand posture.
In some embodiments, the first user may turn on or off a "share original parameters when sharing video" switch in the "preferences" option box 106 of the drawing software. For example, when the first user turns on the switch, sharing a video also shares the original parameters related to the video (for example, at least one of the pen tip position information 402a, the pen tip pressure information 402b, the pen moving direction information 402c, the pen moving speed information 402d, the pen holding position information 401a, the pen body inclination angle information 401b, the hand support information 401c, the pen holding pressure information 401d, and the pen size information 401e). If the first user turns off the switch, only the image data of the video is shared when the video is shared.
Referring to FIG. 12, after recording the video, the first user may select how to share the creative efforts in a "share" option box 105. For example, the first electronic device 11 may provide a "share video" option, a "share image" option, a "share original parameter" option, or may also provide a "share layer" option, which is not limited in this embodiment of the application.
In some embodiments, if the first user selects to share the video, the first electronic device 11 may further provide the first user with a sub-option of "display pen-holding gesture in video" and a sub-option of "hide pen-holding gesture in video". That is, the first user may select to display or hide the pen-grip gesture in the shared video. In fact, when the pen-holding gesture is hidden in the video, the shared video can be considered to include the handwriting image without superimposing the pen-holding gesture.
In some embodiments, if the first user selects to share the image, the first electronic device 11 may further provide the first user with sub-options for selecting the image type, such as a "PDF" sub-option, a "PSD" sub-option, a "JPEG" sub-option, a "PNG" sub-option, and the like.
In some embodiments, if the first user selects to share the original parameter, the first electronic device 11 may further provide the first user with a sub-option for selecting the original parameter type, such as a "handwriting image" sub-option, a "handwriting image and pen-holding gesture" sub-option, and the like. That is, the first user may choose to share the original parameters (e.g., the pen tip position information 402a, the pen tip pressure information 402b, etc.) used to form the handwriting image, or choose to share the original parameters (e.g., the pen tip position information 402a, the pen tip pressure information 402b, the first information, etc.) used to form the handwriting image and the pen-holding gesture.
Referring to fig. 13, the first user may live-broadcast his or her authoring process through screen sharing. Illustratively, the first user may select how to live-broadcast the authoring process in a "share screen" option box 105. For example, the first electronic device 11 may provide a "show pen-holding gesture" option and a "hide pen-holding gesture" option.
In some embodiments, if the first user selects the option "show pen-holding gesture", other users watching the live broadcast can see the superposed pen-holding gesture and handwriting image on the screen of the electronic device during the live broadcast of the first user.
In some embodiments, if the first user selects the "hide pen-holding gesture" option, then during the first user's live broadcast, other users watching it see only the first user's handwriting image on the screens of their electronic devices, and do not see the first user's pen-holding gesture.
It should be noted that the interface diagrams shown in fig. 9 to fig. 13 are only exemplary, and the layout of the interface and the names of the options and switches displayed in the interface are also exemplary, and are mainly used to more intuitively describe the functions that the first electronic device 11 can provide when the first electronic device 11 executes the method shown in fig. 4. The description relating to fig. 9-13 does not constitute a limitation of the present application.
The steps performed by the first electronic device 11 and possible visualization interfaces are mainly described above with reference to fig. 4-13, and the steps performed by the second electronic device 21 are described below with reference to fig. 14-16.
As shown in fig. 14, the second electronic device 21 may perform steps S510 to S520, which are described in detail below.
S510, a pen skill teaching video is obtained and displayed on the second display screen 201.
The pen skills teaching video includes a handwriting image 16 of a first stylus and a pen-holding gesture of a first user. Specifically, the pen-holding gesture is a restored virtual pen-holding gesture, and the virtual pen-holding gesture includes a virtual hand 14 and a virtual pen 15.
In some embodiments, the pen skill teaching video is received by the second electronic device 21 directly or indirectly from the first electronic device 11. For example, after the first electronic device 11 executes step S403 shown in fig. 4, the pen skill teaching video is directly or indirectly transmitted to the second electronic device 21. Illustratively, after completing the authoring, the first user may select the "display pen-holding gesture in video" sub-option in the interface shown in fig. 12, so that the pen skill teaching video is directly or indirectly transmitted to the second electronic device 21.
In other embodiments, the pen skills teaching video may be superimposed by the second electronic device 21 according to the handwriting image of the first stylus 12 and the pen-holding gesture of the first user.
For example, before step S510, the second electronic device 21 may further perform:
s501, acquiring a handwriting image of the first handwriting pen.
In the embodiment of the present application, the handwriting image of the first stylus is received by the second electronic device 21 directly or indirectly from the first electronic device 11.
S502, acquiring a pen holding gesture of the first user.
In one example, the pen-holding gesture of the first user is received by the second electronic device 21 directly or indirectly from the first electronic device 11.
In another example, the pen-holding gesture of the first user is obtained by the second electronic device 21 according to the aforementioned first information. That is, the second electronic device 21 may directly or indirectly obtain the first information from the first electronic device 11, and then obtain the pen-holding gesture of the first user according to the first information. In this case, the step executed by the second electronic device 21 is the same as step S401 shown in fig. 4; reference may be made to the above description, which is not repeated herein.
For example, after completing the creation, the first user may select the sharing of the original parameters, specifically select the sub-option "handwriting image and pen-holding gesture" in the interface shown in fig. 12, so as to directly or indirectly transmit the original parameters for forming the handwriting image and the pen-holding gesture to the second electronic device 21.
S503, superimposing the handwriting image of the first stylus and the pen-holding posture of the first user to obtain the pen skill teaching video.
Step S503 is the same as step S403 shown in fig. 4, and reference may be specifically made to the related description above, which is not repeated herein.
In the embodiment of the application, the pen skill teaching video can be displayed in the form of video playback or in the form of a live broadcast, which is not specifically limited herein. Illustratively, in fig. 14 the pen skill teaching video is displayed on the second display screen 201 in live form. Accordingly, a user using the second electronic device 21 (i.e., the second user) may copy on paper while watching the video or live broadcast, or may copy on the second display screen 201 using the second stylus 22.
Taking the case where the second user copies on the second display screen 201 using the second stylus 22 as an example, as shown in fig. 14, the second electronic device 21 may perform the following steps:
s520, acquiring the handwriting image 26 of the second handwriting pen 22 and displaying the handwriting image on the second display screen 201.
The manner in which the second electronic device 21 acquires the handwriting image 26 of the second stylus 22 may be the same as the manner in which the first electronic device 11 acquires the handwriting image 16 of the first stylus 12; for brevity, reference may be made to the above description of step S402, and details are not repeated herein. In the embodiment of the present application, the handwriting image 26 may show variations in length, thickness, curvature, density, weight, rigidity, flexibility, and the like of lines. In other words, the handwriting image 26 may be the brush effect presented by the second electronic device 21.
In some embodiments, the pen skill teaching video and the handwriting image 26 of the second stylus 22 may be displayed simultaneously on the second display screen 201, so that the second user can copy from the pen skill teaching video, and the copying result of the second user can be compared with the authoring result of the first user to find the gap in pen skill, which helps the second user better learn the authoring process of the first user. Illustratively, as shown in fig. 14, the second display screen includes a first display area 201a and a second display area 201b, wherein the pen skill teaching video is displayed in the first display area 201a, and the handwriting image 26 of the second stylus 22 is displayed in the second display area 201b.
In some embodiments, the second electronic device 21 may compare the copying result of the second user with the pen skill teaching video and give suggestions for improving authoring skill in the dimension of pen skill, thereby helping the second user better learn the authoring process of the first user.
For example, the second electronic device 21 may execute steps S401 to S403 shown in fig. 4 to form a pen skill copying video corresponding to the second user. Thus, as shown in (a) and (b) of fig. 15, the pen skill teaching video corresponding to the first user may be displayed in the first display area 201a of the second display screen 201, and the pen skill copying video corresponding to the second user may be displayed in the second display area 201b of the second display screen 201. By comparing the pen skill teaching video with the pen skill copying video, the second electronic device 21 may find the difference between the handwriting image 16 of the first stylus and the handwriting image 26 of the second stylus, and/or the difference between the pen-holding posture of the first user and the pen-holding posture of the second user, and mark the difference to prompt the second user.
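As an illustrative sketch only (not part of the claimed method; the image representation and function names are assumptions), finding the positions where the two handwriting images differ, so that they can be marked for the second user, could look like:

```python
# Hypothetical sketch: handwriting images as 2-D lists of pixel values.
# Returns the (x, y) coordinates where the teaching image and the
# copying image differ, so the differences can be marked on screen.
def mark_differences(img_a, img_b):
    return [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(img_a, img_b))
        for x, (a, b) in enumerate(zip(row_a, row_b))
        if a != b
    ]

# Example: the copy misses one stroke pixel at (0, 1).
diffs = mark_differences([[0, 1], [1, 0]], [[0, 1], [0, 0]])
```

In practice the comparison would likely operate on rendered frames and tolerate small offsets, but the principle of locating and annotating divergent regions is the same.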
In some embodiments, the second electronic device 21 may display the differences on the second display screen 201, and may add a text prompt at each difference, such as "no tip exposed at the start of the stroke", "unnatural color blending", "abrupt color transition", "frayed edges", "excessive ink bleed", and so on.
Of course, the second electronic device 21 may also provide improvement suggestions for the second user, for example, display suggestions such as "suggest more practice on starting strokes" and "suggest more practice on finishing strokes", so that the second user knows, from the annotations and suggestions, where improvement is needed, thereby improving the learning effect.
As another example, as shown in (a) and (b) of fig. 16, the second electronic device 21 may display the outline 17 of the handwriting image 16 of the first stylus in the pen skill copying video of the second user in an overlapping manner, so as to indirectly display the handwriting image 16 of the first stylus. In this way, the second user can see the outline 17 of the handwriting image of the first stylus and the handwriting image 26 of the second stylus at the same time in the second display area 201b, so as to know where improvement is needed. In some embodiments, the second display screen 201 may also display the handwriting image 26 of the second stylus and the outline 17 of the handwriting image of the first stylus in an overlapping manner across the entire display area, without displaying the pen skill teaching video of the first user.
In other embodiments, the second electronic device 21 may obtain the authoring parameters of the first user and the copying parameters of the second user, and compare the two sets of parameters to give suggestions to the second user.
In an embodiment of the present application, the authoring parameters of the first user include at least one of: handwriting image parameters of the first stylus and pen-holding posture parameters of the first user. The handwriting image parameters of the first stylus include, but are not limited to, pen tip position parameters and pen tip pressure parameters of the first stylus; the pen-holding posture parameters of the first user include, but are not limited to, pen-holding position parameters, pen body inclination angle parameters, hand support parameters, and pen-holding pressure parameters of the first user.
In an embodiment of the present application, the copying parameters of the second user include at least one of: handwriting image parameters of the second stylus and pen-holding posture parameters of the second user. The handwriting image parameters of the second stylus include, but are not limited to, pen tip position parameters and pen tip pressure parameters of the second stylus; the pen-holding posture parameters of the second user include, but are not limited to, pen-holding position parameters, pen body inclination angle parameters, hand support parameters, and pen-holding pressure parameters of the second user.
By comparing the authoring parameters of the first user with the copying parameters of the second user, the second electronic device 21 may output a comparison video or a text report, and provide skill improvement suggestions in the dimensions of pen skill, such as differences and suggestions concerning line trajectory, pen-moving speed, pen-down weight, pen-holding pressure, and the like.
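A minimal sketch of such a parameter comparison (all parameter names, units, and the tolerance are assumptions, not specified by the embodiment) might be:

```python
# Hypothetical sketch: compare the first user's authoring parameters with
# the second user's copying parameters and produce a textual suggestion
# for each dimension that deviates beyond a relative tolerance.
def compare_parameters(authoring, copying, tolerance=0.1):
    suggestions = []
    for name, ref_value in authoring.items():
        copy_value = copying.get(name)
        if copy_value is None:
            continue
        deviation = abs(copy_value - ref_value) / max(abs(ref_value), 1e-9)
        if deviation > tolerance:
            suggestions.append(f"{name}: deviation {deviation:.0%}, suggest more practice")
    return suggestions

authoring = {"pen_speed_mm_s": 40.0, "tip_pressure_n": 1.2, "body_tilt_deg": 55.0}
copying = {"pen_speed_mm_s": 60.0, "tip_pressure_n": 1.25, "body_tilt_deg": 30.0}
report = compare_parameters(authoring, copying)
```

Here the pen-moving speed and the body tilt deviate noticeably while the tip pressure is close, so the report would flag only the first two dimensions.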
In the examples given above, the second electronic device 21 gives the second user suggestions after the second user finishes copying. In still other embodiments, the second electronic device 21 may alert the second user while the second user is copying, through vibration feedback from the second stylus 22. For example, the second electronic device 21 may compare the authoring parameters of the first user with the copying parameters of the second user, and when a large deviation is detected, may send an instruction for controlling the motor to vibrate to the second stylus 22, so as to remind the second user to correct the pen-holding posture, pen-down force, or the like.
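A sketch of the real-time variant (the threshold, sampling model, and command format are all assumptions for illustration) could be:

```python
# Hypothetical sketch: walk paired reference/live samples of one copying
# parameter and emit a vibrate command whenever the relative deviation
# exceeds a threshold, reminding the second user to correct in real time.
def monitor_copying(ref_samples, live_samples, threshold=0.25):
    commands = []
    for ref, live in zip(ref_samples, live_samples):
        deviation = abs(live - ref) / max(abs(ref), 1e-9)
        if deviation > threshold:
            # In a real system this command would be sent to the stylus.
            commands.append({"cmd": "vibrate", "duration_ms": 100})
    return commands

commands = monitor_copying([1.0, 1.0, 1.0], [1.0, 1.5, 1.1])
```

Only the middle sample deviates by more than 25%, so a single vibrate command is issued.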
The comparison and suggestion steps introduced above may be performed by the second electronic device 21, and in some embodiments may also be performed by a third-party device, such as a server. For example, the first electronic device 11 sends the pen skill teaching video to the server, the second electronic device 21 sends the pen skill copying video to the server, and the server compares the differences between the two videos. Alternatively, the first electronic device 11 sends the authoring parameters of the first user to the server, the second electronic device 21 sends the copying parameters of the second user to the server, and the server compares the differences between the two sets of parameters. On the basis of the identified differences, the server may also give improvement suggestions. Specifically, the server sends both the differences and the suggestions to the second electronic device 21 for presentation to the second user. In some embodiments, the server may also send both the differences and the suggestions to the first electronic device 11 for presentation to the first user.
According to the scheme provided by the embodiment of the application, the authoring process and the pen skill of the first user can be restored in a realistic, immersive manner, wherein the pen skill includes information such as the holding posture of the hand on the pen body, as well as information such as the position and area of the palm resting on the screen. The pen skill teaching video can be stored in the first electronic device 11, and can be presented in real time to users watching the teaching through remote network live broadcast, or through short-distance interconnection.
In practical applications, the first user and the second user may cause the first electronic device 11 and the second electronic device 21 to perform corresponding steps by turning on a button on a stylus pen or an interface switch on the electronic device.
In addition, in a specific implementation, the above-mentioned pen skill teaching video may further include an action screen in which the first user clicks a menu, an option, or the like on the software interface with a finger or a pen tip.
With reference to the foregoing embodiments and the related drawings, the embodiments of the present application provide an online teaching method, which can be implemented in an electronic device as shown in fig. 1 and fig. 2. Fig. 17 is a schematic flowchart of an online teaching method provided in an embodiment of the present application, and the method 600 shown in fig. 17 describes an interaction process between a first electronic device and a second electronic device. Illustratively, the first electronic device may be the first electronic device 11 above, and the second electronic device may be the second electronic device 21 above. As shown in fig. 17, the method 600 may include steps S610 through S640.
S610, the first electronic device obtains video data.
The video data includes a first image of handwriting corresponding to a first stylus and a first pen-holding gesture corresponding to a first user. Here, the first stylus is in communication with the first electronic device, and the first user is a user using the first stylus.
In the embodiment of the application, the first pen-holding posture is determined according to the first information.
In some embodiments, the first information includes body tilt information of the first stylus, pen-holding position information of the first user, and hand support information of the first user.
The body inclination angle information of the first stylus pen is used to indicate an angle at which the body is inclined when the first user uses the first stylus pen. For example, the body tilt angle information of the first stylus may include an angle at which the first stylus is tilted with respect to a plane in which the display screen is located, or information for calculating a tilt angle of the body such as a length of the body and a distance between a tip of the stylus and the plane in which the display screen is located. The pen body inclination angle information of the first stylus pen may be acquired by the first stylus pen and then transmitted to the first electronic device, or may be directly acquired by the first electronic device, or a part of the information may be acquired by the first stylus pen and the first electronic device, respectively, which is not limited in this embodiment of the application. For a specific obtaining manner and specific content of the body tilt angle information of the first stylus pen, reference may be made to the related description of the body tilt angle information 401b, which is not described herein again for brevity.
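One plausible geometry for deriving the tilt angle from such measurements (assuming, purely for illustration, that the pen tip touches the screen and that the perpendicular height of the tail end above the screen is known) is:

```python
import math

# Hypothetical sketch: angle between the pen body and the display plane,
# computed from the body length and the tail-end height above the screen.
def body_tilt_deg(body_length_mm, tail_height_mm):
    ratio = max(0.0, min(1.0, tail_height_mm / body_length_mm))
    return math.degrees(math.asin(ratio))

tilt = body_tilt_deg(100.0, 50.0)  # tail end halfway up -> 30 degrees
```

The clamping guards against sensor noise pushing the ratio slightly outside the valid range of `asin`.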
The pen holding position information of the first user is used for indicating the number of fingers and the pen holding position of the first user. For example, the pen-holding position information of the first user may include coordinates of the pen-holding position of the first user. The pen-holding position information of the first user may be obtained by the first stylus pen and then transmitted to the first electronic device, and the specific obtaining manner and the specific content may refer to the related description of the pen-holding position information 401a, which is not described herein again for brevity.
The hand support information of the first user is used for indicating the hand support posture of the first user during pen holding, wherein the hand comprises the support posture of a pen holding hand and can also comprise the support posture of a non-pen holding hand. For example, the first user's hand support information may include a support position, a support area, and a support contour of the first user's hand on the display screen. The hand support information of the first user may be obtained by the first electronic device, and the specific obtaining manner and the specific content may refer to the related description of the hand support information 401c, which is not described herein again for brevity.
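The support area could, for example, be derived from the support contour using the shoelace formula (an illustrative assumption; the embodiment does not specify the computation):

```python
# Hypothetical sketch: area enclosed by the palm-contact contour, given as
# an ordered list of (x, y) points in millimetres on the display screen.
def support_area_mm2(contour):
    total = 0.0
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]  # wrap around to close the polygon
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

area = support_area_mm2([(0, 0), (10, 0), (10, 10), (0, 10)])  # 10 mm square
```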
In the embodiment of the application, when the first pen holding posture is determined, the pen body inclination angle information of the first handwriting pen, the pen holding position information of the first user and the hand support information of the first user are comprehensively considered, so that the acquired first pen holding posture is closer to the real pen holding posture of the first user. In particular, the number of pen holding fingers and the pen holding position of the first user may be the same under different pen holding postures, but the support position, the support area or the support contour of the hand of the first user on the display screen may be different. Therefore, when the first pen holding posture is determined, the hand support information of the first user is considered to be helpful for distinguishing the pen holding postures with similar postures, so that the acquired first pen holding posture is closer to the real pen holding posture of the first user.
In some embodiments, the first information may further include pen holding pressure information of the first user and/or size information of the first stylus.
The pen holding pressure information of the first user is used for indicating the pressure of the first user when holding the pen. For example, the pen-holding pressure information of the first user may include pressure values corresponding to positions where the hand of the first user contacts the first stylus pen. The pen holding pressure information of the first user can be acquired by the first stylus pen and then transmitted to the first electronic device. For the specific obtaining manner and the specific content of the pen-holding pressure information of the first user, reference may be made to the related description of the pen-holding pressure information 401d, and for brevity, no further description is given here.
The size information of the first stylus is used for assisting in building a model of the virtual pen, or is used for calculating a pen holding position, a pen body inclination angle and the like. The size information of the first stylus pen may be pre-stored in the first stylus pen when leaving a factory, and when the first stylus pen establishes a communication connection with the first electronic device, the first stylus pen transmits the size information of the first stylus pen to the first electronic device, or the first stylus pen transmits the size information of the first stylus pen to the first electronic device in response to a request of the first electronic device. For a detailed description of the dimension information of the first stylus, reference may be made to the description of the dimension information 401e of the first stylus, and for brevity, the detailed description is omitted here.
In the embodiment of the application, the richer the information referred to in determining the first pen holding posture, the closer the determined first pen holding posture is to the real pen holding posture of the first user.
In some embodiments, step S610 may specifically include: the first electronic equipment acquires a first handwriting image and a first pen holding gesture; the first electronic equipment superposes the first pen-holding posture and the first handwriting image to obtain the video data.
After the first electronic device executes the operation of superposing the first handwriting image and the first pen holding posture to obtain the video data, when the subsequent first electronic device shares the video data with other electronic devices, the other electronic devices can directly present the content corresponding to the video data through the display screen, and the operation is simple.
In the embodiment of the application, the first electronic device may acquire the first pen-holding posture in various ways.
In one example, the first electronic device may adjust the posture of a first virtual hand holding a first virtual pen according to the first information, wherein the first virtual hand and the first virtual pen are models stored in the first electronic device, and render the posture of the first virtual hand holding the first virtual pen at a preset viewing angle to obtain the first pen-holding posture.
That is to say, a model of a hand (i.e., the first virtual hand) and a model of a pen (i.e., the first virtual pen) are stored in the first electronic device, and the first electronic device may adjust the posture of the first virtual hand holding the first virtual pen according to the first information and render it at a preset viewing angle, so as to obtain the first pen-holding posture corresponding to the first user. Here, the first pen-holding posture is actually a virtual pen-holding posture, i.e., a pen-holding posture restored according to the first information. Therefore, only one set of hand and pen models needs to be stored in the first electronic device, and under different pen-holding postures the first electronic device can adjust it according to the first information, which saves storage space and allows the user's pen-holding posture to be reproduced in real time accurately, intuitively, and from multiple viewing angles. Moreover, according to different requirements, the user can apply a personalized or beautified design to the hand model and the pen model.
For example, in a specific implementation, the first electronic device may adjust the number and the position of fingers of the first virtual hand holding the first virtual pen according to the pen holding position information of the first user, may adjust the support posture of the first virtual hand according to the hand support information of the first user, and may adjust the inclination degree of the first virtual pen according to the pen body inclination angle information of the first stylus pen.
Here, the preset viewing angle may be a first viewing angle (i.e., a viewing angle of the first user), may be an optimal viewing angle (i.e., a viewing angle capable of completely presenting the pen-holding posture), and may also be a viewing angle customized by the first user, which is not limited in this embodiment of the application.
In the embodiment of the application, each pen tip position corresponds to a pen-holding posture, so the posture of the first virtual hand holding the first virtual pen is rendered in real time as the pen tip position moves in the first handwriting image.
In this embodiment, the first virtual pen may be a model corresponding to the first stylus pen, or may be a model corresponding to a brush used for drawing the first handwriting image. In some embodiments, the tip type of the first virtual pen may be switched.
That is, the first virtual pen may be a model of a stylus, so that the rendered first pen-holding posture is more intuitive. For example, the tip of the first virtual pen may be in the shape of a physical pen tip.
The first virtual pen may also be a model of the brush used by the first user when drawing the first handwriting image, such as a pencil model, a writing-brush model, an oil-painting-brush model, and the like, so that the presented first pen-holding posture is more realistic. For example, the tip of the first virtual pen may be in the shape of a brush.
In another example, the first electronic device may match the first information with a plurality of preset pen-holding posture templates respectively, to obtain a second pen-holding posture template having the highest degree of matching with the first information, and determine the pen-holding posture corresponding to the second pen-holding posture template as the first pen-holding posture.
That is, a plurality of preset pen-holding posture templates are stored in the first electronic device, wherein each preset template may include a pen-holding position, a pen body inclination angle, hand support information, and the like. The first electronic device may match the first information with each preset template; assume the second pen-holding posture template is the preset template with the highest matching degree with the first information, so that the first electronic device may determine the pen-holding posture corresponding to the second template as the first pen-holding posture. Matching the first information against preset templates can reduce the amount of calculation and save computing resources. Generally, the pen-holding postures in the preset templates are standard, so even if the actual pen-holding posture of the first user is not very standard, the presented first pen-holding posture is standard, which makes it convenient for the first user to correct the pen-holding posture. Subsequently, when the first user shares the video data with other users, the other users can be prevented from learning nonstandard pen-holding postures.
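A sketch of that template matching (the feature names and the distance-based score are assumptions, not specified by the embodiment) might look like:

```python
# Hypothetical sketch: pick the preset pen-holding posture template whose
# features are closest to the measured first information.
def match_grip_template(first_info, templates):
    def score(template):
        s = 0.0
        for key, value in first_info.items():
            ref = template["features"].get(key)
            if ref is not None:
                s -= abs(value - ref)  # smaller distance -> higher score
        return s
    return max(templates, key=score)

templates = [
    {"name": "tripod", "features": {"finger_count": 3, "tilt_deg": 55}},
    {"name": "quadrupod", "features": {"finger_count": 4, "tilt_deg": 45}},
]
best = match_grip_template({"finger_count": 3, "tilt_deg": 50}, templates)
```

A measured three-finger grip at 50 degrees matches the tripod template most closely here.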
In yet another example, the first electronic device may store therein both a model of a hand and a model of a pen, and further store therein a plurality of preset grip templates, that is, the first electronic device acquires the first pen-holding gesture in combination with the first two implementations. For example, when the network is good, the first pen-holding posture is obtained through the rendering model, and when the network is not good, the first pen-holding posture is obtained through matching a preset pen-holding posture template.
In the embodiment of the application, the first handwriting image is determined according to second information. The second information includes pen tip position information of the first stylus, and at least one of pen tip pressure information of the first stylus and pen body inclination angle information of the first stylus.
The nib position information of the first stylus pen is used to indicate a movement trace of the nib of the first stylus pen. The motion trail may also embody the pen-moving direction and pen-moving speed of the first user, etc. For example, the tip position information of the first stylus may include position coordinates of the tip of the first stylus on the display screen of the first electronic device. The pen point position information of the first stylus pen may be directly obtained by the first electronic device, and the specific obtaining manner may refer to the description of the pen point position information 402a, which is not repeated herein for brevity.
The pen tip pressure information of the first stylus and the pen body inclination angle information of the first stylus are used to assist in determining the changes of the lines of the first handwriting image in width, thickness, rigidity, flexibility, and the like. For example, the line is wider as the pen tip pressure of the first stylus is larger or the degree of inclination of the first stylus is larger, and the line is thinner as the pen tip pressure of the first stylus is smaller or the degree of inclination of the first stylus is smaller, and so on. The pen tip pressure information of the first stylus and the pen body inclination angle information of the first stylus may be acquired by the first stylus and then transmitted to the first electronic device, or may be directly acquired by the first electronic device, which is not limited in the embodiment of the present application. For a specific obtaining manner and related content of the pen tip pressure information of the first stylus, reference may be made to the related description of the pen tip pressure information 402b, and for a specific obtaining manner and related content of the pen body inclination angle information of the first stylus, reference may be made to the related description of the pen body inclination angle information 401b, which are not repeated herein for brevity.
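A toy model of that width relation (the base width and gain coefficients are purely illustrative assumptions) is:

```python
# Hypothetical sketch: stroke width grows with pen tip pressure and with
# the inclination of the pen body, matching the qualitative rule above.
def line_width_mm(tip_pressure_n, tilt_deg, base_width_mm=0.5,
                  pressure_gain=0.8, tilt_gain=0.01):
    return base_width_mm * (1 + pressure_gain * tip_pressure_n + tilt_gain * tilt_deg)

light = line_width_mm(0.5, 10.0)
heavy = line_width_mm(2.0, 10.0)  # more pressure -> wider line
flat = line_width_mm(0.5, 60.0)   # more inclined -> wider line
```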
In the embodiment of the application, the first handwriting image obtained according to the second information can show the change of the lines in the aspects of length, thickness, curvature, density, lightness and the like, so that the pen using action of the first user is shown. The first handwriting image is combined with the first pen holding gesture, so that a learner can conveniently know the pen skills of the first user, and the learning effect is improved.
In the embodiment of the application, the first handwriting image and the first pen-holding gesture are image data, and in the process of acquiring the first handwriting image and the first pen-holding gesture, the original data acquired by the first electronic device or the first stylus pen, such as the first information and the second information, is used. In some embodiments, the video data may include only image data for rendering a picture. In other embodiments, the video data may also include raw data for acquiring the image data. That is, the video data further includes raw data for acquiring the first handwriting image and raw data for acquiring the first pen-holding gesture.
In some embodiments, the raw data for acquiring the first handwriting image may include tip position data of the first stylus, tip pressure data of the first stylus. The tip position data of the first stylus may be specific data included in the tip position information of the first stylus. The tip pressure data of the first stylus may be specific data included in the tip pressure information of the first stylus.
In some embodiments, the raw data for obtaining the first pen-holding gesture may include pen-holding position data of the first user, pen body tilt data of the first stylus, and hand support data of the first user. The pen-holding position data of the first user may be specific data included in the pen-holding position information of the first user. The body tilt angle data of the first stylus may be specific data included in the body tilt angle information of the first stylus. The hand support data of the first user may be specific data included in the hand support information of the first user.
In the embodiment of the application, the video data includes image data and also includes original data used for acquiring the image, so that when the first user shares the video data with other users, the other users can also perform some processing operations based on the original data.
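One way to organise such video data (the field names and types are assumptions for illustration) is a container that carries both the rendered frames and the raw measurements used to obtain them:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of the video data layout described above: image data
# for direct playback plus the raw data from which it was derived.
@dataclass
class HandwritingRawData:
    tip_positions: List[Tuple[float, float]] = field(default_factory=list)
    tip_pressures: List[float] = field(default_factory=list)

@dataclass
class GripRawData:
    grip_positions: List[Tuple[float, float]] = field(default_factory=list)
    body_tilts_deg: List[float] = field(default_factory=list)
    hand_support: List[dict] = field(default_factory=list)

@dataclass
class VideoData:
    frames: List[bytes] = field(default_factory=list)  # rendered image data
    handwriting_raw: HandwritingRawData = field(default_factory=HandwritingRawData)
    grip_raw: GripRawData = field(default_factory=GripRawData)

video = VideoData()
video.handwriting_raw.tip_positions.append((12.0, 34.0))
```

A receiving device could play `frames` directly, or re-render from the raw fields at a different viewing angle.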
S620, detecting and responding to a first operation of the first user, the first electronic device displays the first handwriting image and the first pen-holding posture through the display screen, wherein the first pen-holding posture moves along with the movement of the pen tip position in the first handwriting image.
The display screen involved in step S620 is a display screen of a first electronic device, for example, the first display screen 101 shown in fig. 1.
The first operation of the first user may be regarded as an operation of playing a video. Specifically, the first operation of the first user is an operation of playing a content corresponding to the video data. For example, the first operation of the first user may be an operation of clicking a play button, or an operation of clicking a control for playing a video, and the like. The first operation of the first user is designed according to specific requirements, which is not limited in the embodiment of the present application.
When the first electronic device plays the content corresponding to the video data through the display screen, the handwriting image of the first stylus and the first pen-holding posture moving along with the pen tip position can be presented in the video. Since the first user may switch the pen-holding posture while drawing the first handwriting image, the first pen-holding posture may change accordingly, as the pen tip position moves in the first handwriting image, at the positions where the pen-holding posture was switched.
This step S620 is an optional step, and in some other embodiments, after the first electronic device acquires the video data, the video data may not be displayed on the display screen of the first electronic device.
In some embodiments, after the first electronic device obtains the video data, the video data may be stored locally or in the cloud.
S630, the second electronic device receives the video data from the first electronic device.
In some embodiments, the first electronic device may transmit the video data directly to the second electronic device, and accordingly, the second electronic device receives the video data from the first electronic device.
In other embodiments, the first electronic device may forward the video data to the second electronic device through the server. That is, the first electronic device sends the video data to the server, and then the server sends the video data to the second electronic device.
In this step, in the video data received by the second electronic device, the first handwriting image and the first pen-holding posture may or may not be superimposed, which is not limited in this embodiment of the application. If the video data includes the superimposed first handwriting image and first pen-holding posture, the second electronic device can directly play the content corresponding to the video data after receiving it. If the video data includes the first handwriting image and the first pen-holding posture as independent items, the second electronic device may superimpose the first handwriting image and the first pen-holding posture after receiving the video data, and then play the content corresponding to the video data.
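The superimposing step could be sketched as a per-pixel overlay (the frame representation is an assumption; real frames would be bitmaps from the rendering pipeline):

```python
# Hypothetical sketch: draw the pen-holding posture layer on top of the
# handwriting layer; None marks transparent pixels in the posture layer.
def superimpose(handwriting_frame, grip_frame):
    return [
        [g if g is not None else h for h, g in zip(h_row, g_row)]
        for h_row, g_row in zip(handwriting_frame, grip_frame)
    ]

combined = superimpose([[1, 2], [3, 4]], [[None, 9], [None, None]])
```

Only the one opaque posture pixel replaces the handwriting pixel beneath it; everywhere else the handwriting layer shows through.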
And S640, displaying the first handwriting image and the first pen holding gesture by the second electronic device through the display screen.
The display screen involved in step S640 is a display screen of a second electronic device, for example, the second display screen 201 shown in fig. 1.
In some embodiments, this step S640 may be an action performed in response to some operation by the second user.
When the second electronic device plays the content corresponding to the video data through the display screen, the handwriting image of the first stylus and the first pen-holding posture moving along with the pen tip position can be presented in the video. Since the first user may switch the pen-holding posture while drawing the first handwriting image, the first pen-holding posture may change accordingly, as the pen tip position moves in the first handwriting image, at the positions where the pen-holding posture was switched.
In the embodiment of the application, a user using the second electronic device can know and learn about the pen skills and the creation thinking of the first user through the first handwriting image and the first pen holding gesture, and the learning effect of a learner can be improved. The first pen holding posture is determined by comprehensively considering pen body inclination angle information of the first handwriting pen, pen holding position information of the first user and hand support information of the first user, so that the first pen holding posture is closer to the actual pen holding posture of the first user, and the online teaching effect can be improved.
In some embodiments, a user using the second electronic device may copy and learn from the video data offline, for example using a real pen to copy on paper.
In other embodiments, a user using the second electronic device may copy and learn from the video data online, for example, using a second stylus to copy on a display screen of the second electronic device.
As shown in fig. 18, optionally, the method 600 further includes:
S650, detecting and responding to a first operation of the second user, the second electronic device acquires a second handwriting image corresponding to a second stylus, wherein the second stylus is in communication connection with the second electronic device.
The first operation of the second user instructs the second electronic device to display the second handwriting image. For example, the first operation of the second user may be a screen-splitting operation, an operation of clicking a control that triggers display of the second handwriting image, or the like; the first operation may be designed according to specific requirements, which is not limited in this embodiment of the application.
In this step, the second electronic device acquires the second handwriting image in a manner similar to the manner in which the first electronic device acquires the first handwriting image. Illustratively, the second handwriting image is determined according to fourth information. The fourth information includes pen tip position information of the second stylus and at least one of pen tip pressure information and pen body tilt information of the second stylus. For a description of the fourth information, reference may be made to the related description of the second information; for brevity, the description is not repeated here.
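By way of illustration only (this sketch is not part of the disclosed embodiments, and all names in it are hypothetical), determining a handwriting image from pen tip position, pressure, and tilt information could be modeled by mapping each pen sample to a stroke point whose width depends on pressure and tilt:

```python
from dataclasses import dataclass

@dataclass
class PenSample:
    x: float          # pen tip position on the screen (pixels)
    y: float
    pressure: float   # normalized pen tip pressure, 0.0-1.0
    tilt_deg: float   # pen body inclination relative to the screen plane

def stroke_width(sample: PenSample, base_width: float = 2.0) -> float:
    """Map pressure and tilt to a stroke width: harder pressure and a
    flatter pen both widen the rendered stroke (illustrative mapping)."""
    tilt_factor = 1.0 + (90.0 - min(sample.tilt_deg, 90.0)) / 90.0
    return base_width * (0.5 + sample.pressure) * tilt_factor

def build_stroke(samples: list[PenSample]) -> list[tuple[float, float, float]]:
    """Turn a sequence of pen samples into (x, y, width) points that a
    renderer could join into the handwriting image."""
    return [(s.x, s.y, stroke_width(s)) for s in samples]
```

A renderer would then connect the `(x, y, width)` points into a variable-width line; the actual width mapping used by a real stylus pipeline is device-specific.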
S660: the second electronic device displays the second handwriting image through the display screen.
Here, the display screen of the second electronic device simultaneously displays the first handwriting image, the first pen-holding gesture, and the second handwriting image, where the first handwriting image and the first pen-holding gesture may be located in a first display area of the display screen, and the second handwriting image may be located in a second display area of the display screen. Illustratively, the first display area may be a first display area 201a as shown in fig. 14, and the second display area may be a second display area 201b as shown in fig. 14.
In the embodiment of the application, the second user can copy on the display screen of the second electronic device using the second stylus, and the display screen can simultaneously show the first user's pen technique and the handwriting image actually copied by the second user. The second user can therefore compare his or her own pen-holding posture with the first pen-holding posture, and compare the second handwriting image with the first handwriting image, continuously correcting the pen-holding posture and pen-use actions.
In some embodiments, method 600 may further include:
detecting and responding to a second operation of a second user, and acquiring a second pen holding gesture corresponding to the second user by the second electronic equipment;
the second electronic device identifies a deviation between the second pen-holding posture and the first pen-holding posture;
and when the deviation exceeds a preset value, the second electronic device reminds the second user to correct the pen-holding posture by controlling a motor in the second stylus to vibrate.
The second operation of the second user instructs the second electronic device to acquire the second pen-holding gesture. For example, the second operation may be an operation of clicking a control that triggers rendering of the second pen-holding gesture, or the like; the second operation may be designed according to specific requirements, which is not limited in this embodiment of the application.
In this embodiment of the application, the second electronic device may recognize the deviation between the second pen-holding posture and the first pen-holding posture through image data, or through raw data. Here, the raw data refers to the raw data used to acquire the second pen-holding posture and the first pen-holding posture.
When the deviation between the second pen-holding posture and the first pen-holding posture exceeds the preset value, the second electronic device can send a control instruction to the second stylus to make a motor in the second stylus vibrate, reminding the second user to correct the pen-holding posture.
During the second user's copying process, the second electronic device can detect the pen-holding posture of the second user, and when the deviation between the second user's pen-holding posture and the first user's pen-holding posture is large, control the motor of the second stylus to vibrate as a reminder, thereby improving the copying effect and learning efficiency of the second user.
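The deviation check and vibration reminder described above could be sketched as follows (an illustrative model only, not the disclosed implementation; the feature keys and the vibrate callback are hypothetical placeholders for the stylus control instruction):

```python
import math

def posture_deviation(posture_a: dict, posture_b: dict) -> float:
    """Euclidean distance between two pen-holding postures described by
    the same numeric features (e.g. tilt angle, grip height)."""
    return math.sqrt(sum((posture_a[k] - posture_b[k]) ** 2 for k in posture_a))

def check_and_remind(second_posture: dict, first_posture: dict,
                     threshold: float, vibrate) -> bool:
    """If the learner's posture deviates from the teacher's by more than
    the preset value, trigger the stylus motor via the vibrate callback."""
    deviation = posture_deviation(second_posture, first_posture)
    if deviation > threshold:
        vibrate()  # stands in for sending a control instruction to the stylus
        return True
    return False
```

In practice the threshold and distance metric would be tuned so that small natural variations do not trigger constant vibration.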
In the embodiment of the present application, the manner in which the second electronic device obtains the second pen-holding posture is similar to the manner in which the first electronic device obtains the first pen-holding posture, and only a brief description is provided below, and reference may be made to the related description of the first pen-holding posture for detailed description.
In one example, the second electronic device may acquire third information, where the third information includes pen body tilt information of the second stylus, pen-holding position information of the second user, and hand support information of the second user. The second electronic device adjusts the posture of a second virtual hand holding a second virtual pen according to the third information, where the second virtual hand and the second virtual pen are models stored in the second electronic device; the posture of the second virtual hand holding the second virtual pen is then rendered at a preset viewing angle to obtain the second pen-holding posture.
That is to say, a model of a hand (i.e., a second virtual hand) and a model of a pen (i.e., a second virtual pen) are stored in the second electronic device, and the second electronic device may adjust a posture of the second virtual hand holding the second virtual pen according to the third information and perform rendering at a preset viewing angle, so as to obtain a second pen holding posture corresponding to the second user.
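Purely as an illustrative sketch (the field names and functions below are hypothetical, not part of the disclosed embodiments), the mapping from the third information to pose parameters of the stored hand and pen models, followed by rendering at a preset viewing angle, might look like:

```python
def hand_pose_from_info(info: dict) -> dict:
    """Derive pose parameters for the stored virtual-hand/virtual-pen model
    from the three inputs named in the text: pen body tilt, pen-holding
    position, and hand support information."""
    return {
        # rotate the virtual pen to match the reported body tilt
        "pen_tilt_deg": info["body_tilt_deg"],
        # slide the virtual fingers along the barrel to the grip height
        "grip_height_mm": info["grip_position_mm"],
        # place the virtual wrist where the real hand rests on the screen
        "support_anchor": info["support_position"],
    }

def render_pen_holding_posture(pose: dict, view_angle_deg: float) -> dict:
    """Placeholder for rendering the posed model at the preset viewing
    angle; a real implementation would rasterize a 3-D model here."""
    return {"pose": pose, "view_angle_deg": view_angle_deg}
```

The rendered frame can then be overlaid on the handwriting image so the posture moves with the pen tip.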
The second virtual pen is a model corresponding to a second handwriting pen, or a model corresponding to a brush used for drawing a second handwriting image, which is not limited in the embodiment of the present application.
In another example, the second electronic device acquires third information, where the third information includes pen body tilt information of the second stylus, pen-holding position information of the second user, and hand support information of the second user. The second electronic device matches the third information against a plurality of preset pen-holding posture templates to obtain a first pen-holding posture template with the highest matching degree with the third information, and determines the pen-holding posture corresponding to the first pen-holding posture template as the second pen-holding posture.
That is, the second electronic device stores a plurality of preset pen-holding posture templates, where each template may include a pen-holding position, a pen body tilt, hand support information, and the like. The second electronic device may match the third information against each preset template; for example, if the first pen-holding posture template is the template with the highest matching degree with the third information, the second electronic device determines the pen-holding posture corresponding to that template as the second pen-holding posture.
By way of example and not limitation, the second electronic device may perform the matching process described above through a neural network.
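As a minimal sketch of the template-matching idea (illustrative only; a neural network, as the text notes, could replace this hand-written similarity), the third information could be encoded as a feature vector and compared against each preset template by cosine similarity, keeping the template with the highest matching degree:

```python
import math

def match_score(info: list[float], template: list[float]) -> float:
    """Cosine similarity between the feature vector built from the third
    information and a preset posture template's feature vector."""
    dot = sum(a * b for a, b in zip(info, template))
    norm = (math.sqrt(sum(a * a for a in info))
            * math.sqrt(sum(b * b for b in template)))
    return dot / norm if norm else 0.0

def best_template(info: list[float], templates: dict[str, list[float]]) -> str:
    """Return the name of the preset template with the highest matching
    degree; its stored posture would become the second pen-holding posture."""
    return max(templates, key=lambda name: match_score(info, templates[name]))
```

A learned matcher would replace `match_score` with a network trained on labeled grip examples, but the selection step stays the same.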
In some embodiments, the second electronic device may also provide correction suggestions to the second user based on the video data and the second handwriting image drawn by the second user. For example, method 600 may further include: detecting and responding to a third operation of the second user, the second electronic device identifies the difference between the second handwriting image and the first handwriting image, and displays the difference through the display screen.
The third operation of the second user instructs the second electronic device to compare the second handwriting image with the first handwriting image. For example, the third operation may be an operation of clicking a control that triggers the handwriting comparison, or the like; the third operation may be designed according to specific requirements, which is not limited in this embodiment of the application.
The second electronic device may display the difference between the second handwriting image and the first handwriting image in a variety of ways.
In one example, the second electronic device draws a circle around each location where the second handwriting image and the first handwriting image differ, thereby displaying the difference. By way of example and not limitation, the second electronic device may display the differences in the manner shown in fig. 15.
In another example, the second electronic device overlays a contour line of the first handwriting image on the second handwriting image to display the difference. By way of example and not limitation, the second electronic device may display the differences in a manner as shown in fig. 16.
In yet another example, the second electronic device may display the discrepancy via a prompt box, text report, or other means.
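To illustrate the first display mode above (circling the differing region), a toy sketch — not the disclosed implementation — could compare the two handwriting images pixel by pixel and return one bounding box to circle on screen:

```python
def diff_region(img_a: list[list[int]], img_b: list[list[int]]):
    """Collect the coordinates where the two same-sized handwriting images
    differ, and return one bounding box (min_row, min_col, max_row, max_col)
    to circle on screen, or None if the images are identical."""
    points = [(r, c)
              for r, row in enumerate(img_a)
              for c, (pa, pb) in enumerate(zip(row, img_b[r]))
              if pa != pb]
    if not points:
        return None
    rows = [p[0] for p in points]
    cols = [p[1] for p in points]
    return (min(rows), min(cols), max(rows), max(cols))
```

A practical system would likely cluster the differing pixels into several regions and circle each one, or overlay the first image's contour line as in the second display mode.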
The online teaching method provided by the embodiments of the present application is described in detail above with reference to fig. 1 to 18; the apparatus embodiments of the present application are described in detail below with reference to fig. 19 to 21. It should be understood that the apparatus in the embodiments of the present application can perform the various methods in the embodiments of the present application; for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.
Fig. 19 is a schematic block diagram of an apparatus 700 of an embodiment of the present application. It is to be understood that the apparatus 700 is capable of performing the steps performed by the second electronic device in the methods of fig. 17-18, and may also perform the steps performed by the second electronic device 21 in the embodiments described in fig. 4-16. The apparatus 700 comprises: a receiving unit 710 and a processing unit 720.
The receiving unit 710 is configured to receive video data from a first electronic device, where the video data includes a first handwriting image corresponding to a first stylus and a first pen-holding gesture corresponding to a first user, and the first stylus is communicatively connected to the first electronic device, and the first user is a user using the first stylus.
The processing unit 720 is configured to display the first handwriting image and the first pen-holding gesture through the display screen, where the first pen-holding gesture moves along with the movement of the pen tip position of the first stylus in the first handwriting image.
The first pen holding posture is determined according to first information, and the first information comprises pen body inclination angle information of the first handwriting pen, pen holding position information of the first user and hand support information of the first user.
Optionally, the first handwriting image is determined according to second information, and the second information includes pen tip position information of the first stylus and at least one of pen tip pressure information and pen body tilt information of the first stylus.
Optionally, the first user's hand support information includes a support position, a support area, and a support contour of the first user's hand on the display screen of the first electronic device.
Optionally, the processing unit 720 is further configured to: detecting and responding to a first operation of a second user, and acquiring a second handwriting image corresponding to a second handwriting pen, wherein the second handwriting pen is in communication connection with the second electronic device; and displaying a second handwriting image through the display screen, wherein the first handwriting image and the first pen-holding posture are located in a first display area of the display screen, and the second handwriting image is located in a second display area of the display screen.
Optionally, the processing unit 720 is further configured to: detecting and responding to a second operation of a second user, and acquiring a second pen holding gesture corresponding to the second user; the second electronic device identifies a deviation between the second pen-holding posture and the first pen-holding posture; and when the deviation exceeds a preset value, reminding the second user of correcting the pen holding posture by controlling the motor in the second handwriting pen to vibrate.
Optionally, the processing unit 720 is specifically configured to: acquiring third information, wherein the third information comprises pen body inclination angle information of a second handwriting pen, pen holding position information of a second user and hand support information of the second user; adjusting the posture of a second virtual hand holding a second virtual pen according to the third information, wherein the second virtual hand and the second virtual pen are models stored in the second electronic equipment; rendering the posture of the second virtual pen held by the second virtual hand at a preset visual angle to obtain a second pen holding posture.
Optionally, the second virtual pen is a model corresponding to a second handwriting pen, or a model corresponding to a brush used to draw the second handwriting image.
Optionally, the processing unit 720 is specifically configured to: acquiring third information, wherein the third information comprises pen body inclination angle information of a second handwriting pen, pen holding position information of a second user and hand support information of the second user; respectively matching the third information with a plurality of preset holding posture templates to obtain a first holding posture template with the highest matching degree with the third information; and determining the pen holding posture corresponding to the first pen holding posture template as a second pen holding posture.
Optionally, the processing unit 720 is further configured to: detecting and responding to a third operation of a second user, and identifying the difference between the second handwriting image and the first handwriting image; and displaying the difference through the display screen.
Optionally, the processing unit 720 is specifically configured to: circling at the difference position of the second handwriting image and the first handwriting image to display the difference; or, the contour line of the first handwriting image is superposed on the second handwriting image to display the difference.
Optionally, the first information further comprises pen holding pressure information of the first user and/or size information of the first stylus.
Optionally, the video data further comprises raw data for acquiring the first handwriting image and raw data for acquiring the first pen-holding gesture.
Optionally, the raw data for acquiring the first handwriting image includes tip position data of the first stylus and tip pressure data of the first stylus; and/or the raw data for acquiring the first pen-holding gesture comprises pen-holding position data of the first user, pen body inclination angle data of the first stylus pen and hand support data of the first user.
Fig. 20 is a schematic block diagram of an apparatus 800 of an embodiment of the present application. It should be understood that the apparatus 800 is capable of performing the steps performed by the first electronic device in the methods of fig. 17-18, and may also perform the steps performed by the first electronic device 11 in the embodiments described in fig. 4-16. The apparatus 800 comprises: a processing unit 810.
The processing unit 810 is configured to obtain video data, where the video data includes a first handwriting image corresponding to a first stylus and a first pen-holding gesture corresponding to a first user, and the first stylus is communicatively connected to a first electronic device, and the first user is a user using the first stylus.
The processing unit 810 is further configured to detect and respond to a first operation of the first user by displaying the first handwriting image and the first pen-holding gesture through the display screen, where the first pen-holding gesture moves along with the movement of the pen tip position of the first stylus in the first handwriting image.
The first pen holding posture is determined according to first information, and the first information comprises pen body inclination angle information of the first handwriting pen, pen holding position information of a first user and hand support information of the first user.
Optionally, the first handwriting image is determined according to second information, and the second information includes pen tip position information of the first stylus and at least one of pen tip pressure information and pen body tilt information of the first stylus.
Optionally, the first user's hand support information includes a support position, a support area, and a support contour of the first user's hand on the display screen.
Optionally, the processing unit 810 is specifically configured to obtain a first handwriting image and a first pen-holding gesture; and superposing the first pen holding posture and the first handwriting image to obtain video data.
Optionally, the processing unit 810 is specifically configured to adjust, according to the first information, a posture of the first virtual hand holding the first virtual pen, where the first virtual hand and the first virtual pen are models stored in the first electronic device; rendering the posture of the first virtual pen held by the first virtual hand at a preset visual angle to obtain a first pen holding posture.
Optionally, the first virtual pen is a model corresponding to the first stylus, or a model corresponding to a brush used to draw the first handwriting image.
Optionally, the processing unit 810 is specifically configured to match the first information with a plurality of preset grip templates, respectively, to obtain a second grip template with a highest matching degree with the first information; and determining the pen holding posture corresponding to the second pen holding posture template as the first pen holding posture.
Optionally, the first information further comprises pen holding pressure information of the first user and/or size information of the first stylus.
Optionally, the video data further comprises raw data for acquiring the first handwriting image and raw data for acquiring the first pen-holding gesture.
Optionally, the original data for acquiring the first handwriting image includes pen point position data of the first stylus pen, and pen point pressure data of the first stylus pen; and/or the raw data for acquiring the first pen-holding gesture comprises pen-holding position data of the first user, pen body inclination angle data of the first stylus pen and hand support data of the first user.
Optionally, the apparatus 800 further comprises a sending unit 820, configured to send the video data directly to the second electronic device; or forwarding the video data to the second electronic equipment through the server.
Optionally, the processing unit 810 is further configured to store the video data in a local or cloud end.
Fig. 21 is a schematic structural diagram of an apparatus provided in an embodiment of the present application. The apparatus 900 shown in fig. 21 may correspond to the apparatus 700 or the apparatus 800 described above, and specifically the apparatus 900 may be a specific example of the first electronic device 11 or the second electronic device 12 in fig. 1.
The apparatus 900 comprises: a processor 910. In an embodiment of the present application, the processor 910 is configured to implement corresponding control and management operations. For example, if the apparatus 900 is a device in the first electronic device 11, the processor 910 is configured to support the apparatus in performing the operations or functions performed by the first electronic device in the methods shown in fig. 17 or fig. 18 of the foregoing embodiments, and the operations or functions performed by the first electronic device 11 in the embodiments shown in fig. 4 to fig. 16. If the apparatus 900 is a device in the second electronic device, the processor 910 is configured to enable the apparatus to perform the operations or functions performed by the second electronic device in the methods shown in fig. 17 or fig. 18 of the foregoing embodiments, and the operations or functions performed by the second electronic device 21 in the embodiments shown in fig. 4 to fig. 16.
Optionally, the apparatus 900 may further include: a memory 920 and a communication interface 930. The processor 910, the communication interface 930, and the memory 920 may be connected to each other through a bus 940. The communication interface 930 is used to support communication by the apparatus. The memory 920 is used to store program code and data of the apparatus; the processor 910 calls the code or data stored in the memory 920 to implement the corresponding operations. The memory 920 may or may not be coupled to the processor. The coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other form and is used for information interaction between the devices, units or modules.
The processor 910 may be, for example, a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a digital signal processor and a microprocessor, or the like.
The communication interface 930 may be a transceiver, circuit, bus, module, or other type of communication interface.
The bus 940 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 21, but this is not intended to represent only one bus or type of bus.
An embodiment of the present application further provides an electronic device, including: one or more processors; one or more memories; the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the steps of the aforementioned method performed by a second electronic device.
An embodiment of the present application further provides an electronic device, including: one or more processors; one or more memories; the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the steps of the aforementioned method performed by the first electronic device.
An embodiment of the present application further provides a communication system, including the above-described first electronic device, second electronic device, and first stylus pen, where the first stylus pen is connected to the first electronic device in a communication manner.
Optionally, the communication system further comprises a second stylus, the second stylus being in communication with the second electronic device.
Embodiments of the present application also provide a computer-readable storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the online teaching method described above.
The embodiment of the present application further provides a chip, where the chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute the online teaching method described above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, "first", "second", and various numerical references are only used for convenience of description and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated, and thus, the features defined as "first", "second" may explicitly or implicitly include one or more of the features.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

1. An online teaching method, comprising:
the method comprises the steps that a second electronic device receives video data from a first electronic device, wherein the video data comprise a first handwriting image corresponding to a first stylus and a first pen holding gesture corresponding to a first user, the first stylus is in communication connection with the first electronic device, and the first user is a user using the first stylus;
the second electronic device displaying the first handwriting image and the first pen-holding gesture through a display screen, wherein the first pen-holding gesture moves with the movement of the pen tip position of the first stylus in the first handwriting image;
wherein the first pen-holding posture is determined according to first information, and the first information comprises pen body inclination angle information of the first stylus pen, pen-holding position information of the first user and hand support information of the first user.
2. The method of claim 1, wherein the first image of handwriting is determined based on second information, the second information comprising nib position information of the first stylus and at least one of nib pressure information and body tilt information of the first stylus.
3. The method of claim 1 or 2, wherein the first user's hand support information comprises a support location, a support area, and a support contour of the first user's hand on a display screen of the first electronic device.
4. The method according to any one of claims 1 to 3, further comprising:
detecting and responding to a first operation of a second user, wherein the second electronic equipment acquires a second handwriting image corresponding to a second handwriting pen, and the second handwriting pen is in communication connection with the second electronic equipment;
and the second electronic equipment displays the second handwriting image through the display screen, wherein the first handwriting image and the first pen-holding gesture are located in a first display area of the display screen, and the second handwriting image is located in a second display area of the display screen.
5. The method of claim 4, further comprising:
detecting and responding to a second operation of the second user, the second electronic device acquiring a second pen-holding posture corresponding to the second user;
the second electronic device identifying a deviation between the second pen-holding posture and the first pen-holding posture;
and when the deviation exceeds a preset value, the second electronic device controlling a motor in the second stylus to vibrate so as to remind the second user to correct the pen-holding posture.
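The deviation check in claim 5 can be illustrated with a short sketch. This is a minimal illustration, not the patented implementation: it assumes (hypothetically) that a pen-holding posture is reduced to a numeric feature vector (e.g. pen body tilt, grip height, hand-support offset), measures deviation as Euclidean distance, and fires a `vibrate` callback standing in for the command that drives the motor inside the stylus.

```python
import math

def posture_deviation(posture_a, posture_b):
    """Euclidean distance between two posture feature vectors
    (hypothetical features: pen body tilt, grip height, support offset)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(posture_a, posture_b)))

def check_and_remind(student_posture, teacher_posture, preset_value, vibrate):
    """If the student's posture deviates from the teacher's by more than the
    preset value, call `vibrate` to remind the student to correct the grip."""
    deviation = posture_deviation(student_posture, teacher_posture)
    if deviation > preset_value:
        vibrate()
    return deviation
```

In a real system the feature values would come from the stylus's tilt sensor and the touch panel's grip and hand-support readings, and `vibrate` would send a vibration command to the stylus over its communication link.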
6. The method of claim 5, wherein the second electronic device acquiring a second pen-holding posture corresponding to the second user comprises:
the second electronic device acquiring third information, wherein the third information comprises pen body inclination angle information of the second stylus, pen-holding position information of the second user, and hand support information of the second user;
the second electronic device adjusting the posture of a second virtual hand holding a second virtual pen according to the third information, wherein the second virtual hand and the second virtual pen are models stored in the second electronic device;
and rendering the posture of the second virtual hand holding the second virtual pen at a preset viewing angle to obtain the second pen-holding posture.
7. The method of claim 6, wherein the second virtual pen is a model corresponding to the second stylus or a model corresponding to a brush used for drawing the second handwriting image.
8. The method of claim 5, wherein the second electronic device acquiring a second pen-holding posture corresponding to the second user comprises:
the second electronic device acquiring third information, wherein the third information comprises pen body inclination angle information of the second stylus, pen-holding position information of the second user, and hand support information of the second user;
the second electronic device matching the third information against a plurality of preset pen-holding posture templates to obtain a first pen-holding posture template with the highest matching degree with the third information;
and determining the pen-holding posture corresponding to the first pen-holding posture template as the second pen-holding posture.
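Claims 8 and 20 both describe selecting the preset grip template with the highest matching degree. A minimal sketch of that selection step, under assumed representations that are not specified by the claims: the sensed information and each template's features are dictionaries of numeric values, and the matching degree is taken as the negative sum of absolute feature differences.

```python
def matching_degree(info, template):
    # Higher is better: negative sum of absolute differences
    # over the sensed features (tilt, grip position, support, ...).
    return -sum(abs(info[k] - template["features"][k]) for k in info)

def match_grip_posture(info, templates):
    """Return the pen-holding posture of the preset template that best
    matches the sensed information."""
    best = max(templates, key=lambda t: matching_degree(info, t))
    return best["posture"]
```

The template names and feature keys below are purely illustrative; the patent does not enumerate the preset templates.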
9. The method according to any one of claims 4 to 8, further comprising:
detecting and responding to a third operation of the second user, the second electronic device identifying a difference between the second handwriting image and the first handwriting image;
and the second electronic device displaying the difference on the display screen.
10. The method of claim 9, wherein the second electronic device displaying the difference on the display screen comprises:
the second electronic device annotating the second handwriting image at the locations where it differs from the first handwriting image so as to display the difference; or,
the second electronic device superimposing the contour line of the first handwriting image on the second handwriting image so as to display the difference.
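The second display option in claim 10 — superimposing the contour of the teacher's handwriting on the student's image — can be sketched as a simple raster overlay. This assumes, purely for illustration, that both images are equal-sized 2D grids where 0 is background and 1 is ink, and that the contour is drawn with a distinct marker value so the student can see where the strokes diverge.

```python
def overlay_contour(student, teacher_contour, marker=2):
    """Copy the student's handwriting image and draw the teacher's contour
    on top of it with `marker`, leaving the student's own ink visible."""
    result = [row[:] for row in student]
    for y, row in enumerate(teacher_contour):
        for x, cell in enumerate(row):
            if cell and not student[y][x]:
                result[y][x] = marker
    return result
```

A production implementation would operate on real bitmaps and extract the contour with an edge detector, but the compositing logic is the same.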
11. The method of any one of claims 1 to 10, wherein the first information further comprises pen-holding pressure information of the first user and/or size information of the first stylus.
12. The method of any one of claims 1 to 11, wherein the video data further comprises raw data for obtaining the first handwriting image and raw data for obtaining the first pen-holding posture.
13. The method of claim 12, wherein:
the raw data for obtaining the first handwriting image comprises pen tip position data of the first stylus and pen tip pressure data of the first stylus; and/or
the raw data for obtaining the first pen-holding posture comprises pen-holding position data of the first user, pen body inclination angle data of the first stylus, and hand support data of the first user.
14. An online teaching method, comprising:
the first electronic device acquiring video data, wherein the video data comprises a first handwriting image corresponding to a first stylus and a first pen-holding posture corresponding to a first user, the first stylus is communicatively connected to the first electronic device, and the first user is a user using the first stylus;
detecting and responding to a first operation of the first user, the first electronic device displaying the first handwriting image and the first pen-holding posture on a display screen, wherein the first pen-holding posture moves following the pen tip position of the first stylus in the first handwriting image;
wherein the first pen-holding posture is determined according to first information, and the first information comprises pen body inclination angle information of the first stylus pen, pen-holding position information of the first user and hand support information of the first user.
15. The method of claim 14, wherein the first handwriting image is determined according to second information, and the second information comprises pen tip position information of the first stylus and at least one of pen tip pressure information and pen body inclination angle information of the first stylus.
16. The method of claim 14 or 15, wherein the first user's hand support information comprises a support position, a support area, and a support contour of the first user's hand on the display screen.
17. The method of any one of claims 14 to 16, wherein the first electronic device acquiring video data comprises:
the first electronic device acquiring the first handwriting image and the first pen-holding posture;
and the first electronic device superimposing the first pen-holding posture on the first handwriting image to obtain the video data.
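The superimposition step in claim 17 amounts to compositing the rendered pen-holding posture onto each handwriting frame at the current pen-tip position. A minimal sketch under assumed representations (frames as 2D grids, the rendered hand as a small sprite whose `None` cells are transparent):

```python
def compose_frame(handwriting, hand_sprite, tip_x, tip_y):
    """Overlay the rendered posture sprite onto a copy of the handwriting
    frame, anchored at the pen-tip position; None sprite cells are
    transparent and out-of-bounds cells are clipped."""
    frame = [row[:] for row in handwriting]
    height, width = len(frame), len(frame[0])
    for dy, row in enumerate(hand_sprite):
        for dx, cell in enumerate(row):
            y, x = tip_y + dy, tip_x + dx
            if cell is not None and 0 <= y < height and 0 <= x < width:
                frame[y][x] = cell
    return frame
```

Repeating this per frame, with `tip_x`/`tip_y` tracking the reported pen tip position, yields video data in which the posture moves with the handwriting, as claim 14 requires.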
18. The method of claim 17, wherein the first electronic device acquiring the first pen-holding posture comprises:
the first electronic device adjusting the posture of a first virtual hand holding a first virtual pen according to the first information, wherein the first virtual hand and the first virtual pen are models stored in the first electronic device;
and rendering the posture of the first virtual hand holding the first virtual pen at a preset viewing angle to obtain the first pen-holding posture.
19. The method of claim 18, wherein the first virtual pen is a model corresponding to the first stylus or a model corresponding to a brush used for drawing the first handwriting image.
20. The method of claim 17, wherein the first electronic device acquiring the first pen-holding posture comprises:
the first electronic device matching the first information against a plurality of preset pen-holding posture templates to obtain a second pen-holding posture template with the highest matching degree with the first information;
and determining the pen-holding posture corresponding to the second pen-holding posture template as the first pen-holding posture.
21. The method of any one of claims 14 to 20, wherein the first information further comprises pen-holding pressure information of the first user and/or size information of the first stylus.
22. The method of any one of claims 14 to 21, wherein the video data further comprises raw data for obtaining the first handwriting image and raw data for obtaining the first pen-holding posture.
23. The method of claim 22, wherein:
the raw data for obtaining the first handwriting image comprises pen tip position data of the first stylus and pen tip pressure data of the first stylus; and/or
the raw data for obtaining the first pen-holding posture comprises pen-holding position data of the first user, pen body inclination angle data of the first stylus, and hand support data of the first user.
24. The method according to any one of claims 14 to 23, further comprising:
the first electronic device sending the video data directly to the second electronic device; or,
the first electronic device forwarding the video data to the second electronic device through a server.
25. The method of any one of claims 14 to 24, further comprising:
and the first electronic device storing the video data locally or in the cloud.
26. An electronic device, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-13.
27. An electronic device, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the method of any of claims 14-25.
28. A communication system comprising an electronic device as claimed in claim 26, an electronic device as claimed in claim 27 and a first stylus, wherein the first stylus is communicatively connected to the electronic device as claimed in claim 26.
29. A computer readable storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-25.
30. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface to perform the method of any one of claims 1 to 25.
CN202210415510.XA 2022-04-20 2022-04-20 Online teaching method, electronic equipment and communication system Pending CN114816088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210415510.XA CN114816088A (en) 2022-04-20 2022-04-20 Online teaching method, electronic equipment and communication system


Publications (1)

Publication Number Publication Date
CN114816088A true CN114816088A (en) 2022-07-29

Family

ID=82504753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210415510.XA Pending CN114816088A (en) 2022-04-20 2022-04-20 Online teaching method, electronic equipment and communication system

Country Status (1)

Country Link
CN (1) CN114816088A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348328A (en) * 2022-08-11 2022-11-15 环胜电子(深圳)有限公司 Handwritten data processing method and handwritten data processing system


Similar Documents

Publication Publication Date Title
CN103513894B (en) Display device, remote control equipment and its control method
US9454834B2 (en) Storage medium storing image processing program for implementing controlled image display according to input coordinate, and information processing device
CN105335001B (en) Electronic device having curved display and method for controlling the same
CN110427110B (en) Live broadcast method and device and live broadcast server
EP2919104B1 (en) Information processing device, information processing method, and computer-readable recording medium
CN109308205B (en) Display adaptation method, device, equipment and storage medium of application program
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
KR20130088104A (en) Mobile apparatus and method for providing touch-free interface
CN111045511B (en) Gesture-based control method and terminal equipment
US20190050132A1 (en) Visual cue system
KR20210023680A (en) Content creation in augmented reality environment
CN110489027B (en) Handheld input device and display position control method and device of indication icon of handheld input device
WO2018223605A1 (en) Input method, apparatus and system
WO2022052620A1 (en) Image generation method and electronic device
US8643679B2 (en) Storage medium storing image conversion program and image conversion apparatus
WO2021147465A1 (en) Image rendering method, electronic device, and system
CN109829982B (en) Model matching method, device, terminal equipment and storage medium
KR20220154763A (en) Image processing methods and electronic equipment
CN111432123A (en) Image processing method and device
CN113129411A (en) Bionic animation generation method and electronic equipment
CN204945943U (en) For providing the remote control equipment of remote control signal for external display device
CN114816088A (en) Online teaching method, electronic equipment and communication system
CN110717993B (en) Interaction method, system and medium of split type AR glasses system
CN113360009A (en) Image display method and electronic device
TW201439813A (en) Display device, system and method for controlling the display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination