CN111865871A - Conference record sharing method, electronic conference terminal and storage device - Google Patents


Info

Publication number
CN111865871A
CN111865871A
Authority
CN
China
Prior art keywords
data
user
terminal
conference
display
Prior art date
Legal status
Pending
Application number
CN201910335946.6A
Other languages
Chinese (zh)
Inventor
杨剑峰 (Yang Jianfeng)
张兴明 (Zhang Xingming)
Current Assignee
Nantong Xingke Information Technology Co., Ltd.
Original Assignee
Nantong Xingke Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nantong Xingke Information Technology Co., Ltd.
Priority application: CN201910335946.6A
Publication: CN111865871A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/10009 Improvement or modification of read or write signals
    • G11B20/10018 Improvement or modification of read or write signals analog processing for digital recording or reproduction
    • G11B20/10027 Improvement or modification of read or write signals analog processing for digital recording or reproduction adjusting the signal strength during recording or reproduction, e.g. variable gain amplifiers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1083 In-session procedures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling

Abstract

The application discloses a conference record sharing method, an electronic conference terminal, and a storage device. The method comprises the following steps: a first terminal captures the content written on paper by a first user, generates first text data, and displays the first text data; sends the first text data to at least one second terminal, so that the at least one second terminal displays the first text data; receives conference record data sent by the at least one second terminal, wherein the conference record data is generated from the first text data; and presents the conference record data to the user. In this way a remote conference can be held, and both communication efficiency and working efficiency are improved.

Description

Conference record sharing method, electronic conference terminal and storage device
Technical Field
The present application relates to the field of conference recording technologies, and in particular to a conference record sharing method, an electronic conference terminal, and a storage device.
Background
In order to stay commercially competitive, people use meetings to reach a shared understanding of the work and improve the efficiency of collaboration. The pace of modern work keeps tightening: meetings are more frequent, and the contents of successive meetings are more closely linked. Even when a smart whiteboard or projector replaces the blackboard for writing and displaying meeting information, sharing the meeting record still requires oral explanation or transmission by e-mail or messaging tools. Oral explanation interrupts the colleague who is presenting, while transmission by e-mail or messaging tools is cumbersome and time-consuming, all of which reduces meeting efficiency.
Disclosure of Invention
The technical problem mainly addressed by the present application is to provide a conference record sharing method, an electronic conference terminal, and a storage device, so that a remote conference can be held and communication efficiency and working efficiency can be improved.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a conference record sharing method, comprising: a first terminal captures the content written on paper by a first user, generates first text data, and displays the first text data; sends the first text data to at least one second terminal, so that the at least one second terminal displays the first text data; receives conference record data sent by the at least one second terminal, wherein the conference record data is generated from the first text data; and presents the conference record data to the user.
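The claimed data flow can be illustrated with a minimal sketch. All class and function names below are assumptions for illustration only, not part of the claim:

```python
from dataclasses import dataclass, field

@dataclass
class Terminal:
    """Hypothetical conference terminal; models only the data flow of the claim."""
    name: str
    screen: list = field(default_factory=list)  # what this terminal displays

    def display(self, text: str) -> None:
        self.screen.append(text)

def share_record(first: Terminal, seconds: list, written: str) -> list:
    # Step 1: the first terminal captures the writing as first text data and displays it.
    first.display(written)
    records = []
    for second in seconds:
        # Step 2: send the first text data to each second terminal, which displays it.
        second.display(written)
        # Step 3: the second terminal generates conference record data from it.
        records.append(f"{second.name}: note on '{written}'")
    # Step 4: the first terminal receives the records and prompts them to its user.
    for record in records:
        first.display(record)
    return records
```

With one second terminal, the first terminal's screen ends up showing both its own writing and the record sent back, which is the mutual sharing described in the claim.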
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an electronic conference terminal comprising a processor, a memory, a communication circuit, a sensing circuit, and a display, the processor being coupled to the memory, the communication circuit, the sensing circuit, and the display respectively, and being operative to control itself to implement the steps of the method described above.
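As a structural sketch only, the coupling of processor, memory, communication circuit, sensing circuit, and display can be modeled as below; every field and method name is an assumption, and the processor role is played by the methods themselves:

```python
from dataclasses import dataclass, field

@dataclass
class ElectronicConferenceTerminal:
    """Sketch of the claimed terminal structure (illustrative names)."""
    memory: dict = field(default_factory=dict)   # memory: stores text and record data
    outbox: list = field(default_factory=list)   # communication circuit, send side
    inbox: list = field(default_factory=list)    # communication circuit, receive side
    screen: list = field(default_factory=list)   # display

    def sense(self, written: str) -> None:
        # Sensing circuit: capture writing as first text data, store, display, and queue it.
        self.memory["first_text"] = written
        self.screen.append(written)
        self.outbox.append(written)

    def receive_record(self, record: str) -> None:
        # Communication circuit delivers conference record data; prompt it to the user.
        self.inbox.append(record)
        self.screen.append(record)
```

The point of the sketch is the coupling: every path (sense, send, receive, prompt) passes through the processor, which touches the memory, the communication queues, and the display.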
In order to solve the above technical problem, yet another technical solution adopted by the present application is to provide a device with a storage function, which stores program data executable to implement the steps of the method described above.
The beneficial effect of the present application is as follows. Unlike the prior art, the first terminal generates first text data from the content written on paper by the first user, displays the first text data, and sends it to the second terminal; it then receives conference record data that the second terminal generated from the first text data, and displays that data. During the conference, the first user writes the meeting record on paper; the record is displayed and sent to the second terminal, whose user reads the first record and sends back conference record data of his or her own, so that the user at the first terminal obtains it in turn. A remote conference can thus be held in which users share meeting records with one another, which improves the efficiency of communication within the conference and, by enabling effective remote communication, improves working efficiency.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; a person skilled in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic structural diagram of a first embodiment of a notebook provided by the present application;
FIG. 2 is a schematic structural diagram of a first embodiment of the connection between the first board body and the second board body of a notebook provided by the present application;
FIG. 3 is a schematic structural diagram of a second embodiment of the connection between the first board body and the second board body of a notebook provided by the present application;
FIG. 4 is a schematic structural diagram of a second embodiment of a notebook provided by the present application;
FIG. 5 is a schematic structural diagram of a third embodiment of a notebook provided by the present application;
FIG. 6a is a schematic structural diagram of the display screen of an embodiment of the second board body of a notebook provided by the present application in the folded-down state;
FIG. 6b is a schematic structural diagram of the display screen of an embodiment of the second board body of a notebook provided by the present application in the propped-up state;
FIG. 7a is a schematic structural diagram of the display screen of a fourth embodiment of a notebook provided by the present application in the folded-down state;
FIG. 7b is a schematic structural diagram of the display screen of a fourth embodiment of a notebook provided by the present application in the propped-up state;
FIG. 8 is a schematic flowchart of a first embodiment of the education method provided by the present application;
FIG. 9 is a schematic flowchart of a second embodiment of the education method provided by the present application;
FIG. 10 is a schematic flowchart of a third embodiment of the education method provided by the present application;
FIG. 11 is a schematic flowchart of a first embodiment of the electronic education method provided by the present application;
FIG. 12 is a schematic flowchart of a second embodiment of the electronic education method provided by the present application;
FIG. 13 is a schematic flowchart of a third embodiment of the electronic education method provided by the present application;
FIG. 14 is a schematic flowchart of a fourth embodiment of the electronic education method provided by the present application;
FIG. 15 is a schematic flowchart of a first embodiment of the conference record sharing method provided by the present application;
FIG. 16 is a schematic flowchart of a second embodiment of the conference record sharing method provided by the present application;
FIG. 17 is a schematic flowchart of a third embodiment of the conference record sharing method provided by the present application;
FIG. 18 is a schematic flowchart of a first embodiment of the conference content recording method provided by the present application;
FIG. 19 is a schematic flowchart of a second embodiment of the conference content recording method provided by the present application;
FIG. 20 is a schematic flowchart of a third embodiment of the conference content recording method provided by the present application;
FIG. 21 is a schematic flowchart of a fourth embodiment of the conference content recording method provided by the present application;
FIG. 22 is a schematic structural diagram of an embodiment of the electronic education terminal provided by the present application;
FIG. 23 is a schematic structural diagram of an embodiment of the electronic conference terminal provided by the present application;
FIG. 24 is a schematic structural diagram of an embodiment of the device with a storage function provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a notebook according to a first embodiment of the present application. The notebook 10 includes a first board body 11 and a second board body 12, wherein the side edge 111 of the first board body 11 is connected to the side edge 121 of the second board body 12, and the two board bodies can be folded relative to the joint. In this embodiment, the side edge 111 of the first board body 11 and the side edge 121 of the second board body 12 are directly and integrally connected. In other implementation scenarios, the connection can also be realized through connecting pieces: as shown in fig. 2, connecting members 21 and 22 each have one end connected to the first board body 11 and the other end connected to the second board body 12, and extend through both board bodies so that the two can be folded relative to each other. In still other embodiments, as shown in fig. 3, the two board bodies can be connected by a single connecting member 31: the side edge 111 of the first board body 11 and the side edge 121 of the second board body 12 are both connected to the connecting member 31, which is a flexible member, so that the first board body 11 and the second board body 12 can be folded relative to each other.
The surface of the first board body 11 is provided with a writing area 112 for placing paper on which a user writes. A sensing component 113 is further arranged in the first board body 11, below its surface and corresponding to the writing area 112, for sensing the content written by the user.
The surface of the second board body 12 is provided with a display screen 13, which is connected to the sensing component 113 to receive and display its sensing information. In this implementation scenario, the sensing component 113 senses the content the user writes on the paper placed in the writing area 112 and transmits it to the display screen 13, which displays the written content.
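The sense-and-forward path just described resembles a simple observer pattern. A hedged sketch, with all names illustrative rather than taken from the application:

```python
class SensingComponent:
    """Sketch of the sensing component under the writing area: whatever it
    senses is forwarded to every display screen connected to it."""
    def __init__(self):
        self._displays = []

    def connect(self, display: "DisplayScreen") -> None:
        self._displays.append(display)

    def sense(self, stroke: str) -> None:
        # Forward the sensed handwriting to each connected display screen.
        for display in self._displays:
            display.show(stroke)

class DisplayScreen:
    def __init__(self):
        self.shown = []

    def show(self, stroke: str) -> None:
        self.shown.append(stroke)
```

Connecting more than one display to the same sensing component would give the same synchronized display on several screens, which is the behavior the later embodiments build on.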
As can be seen from the above description, in this embodiment the surface of the first board body of the notebook is provided with a writing area for placing paper on which the user writes, a sensing component is arranged in the first board body, and the surface of the second board body is provided with a display screen connected to the sensing component to show its sensing information. In this way, when the user writes on the paper in the writing area, the written content is sensed by the sensing component and shown on the display screen.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a notebook according to a second embodiment of the present application. The notebook 40 includes a first board 41, a second board 42, and a display screen 43. The display screen 43 is movably connected with the second plate 42 through a first connecting member 431. The connection relationship and the position relationship between the first board 41 and the second board 42 are the same as those between the first board 11 and the second board 12 in the first embodiment of the notebook provided in this application, and will not be described herein again. The writing area 412 and the sensing component 413 are arranged on the first board body 41, and the positions, functions and connection relations of the writing area 412 and the sensing component 413 are consistent with those of the writing area 112 and the sensing component 113 in the first embodiment of the notebook provided by the present application, and will not be described herein again.
In this implementation scenario, the first connecting member 431 can be folded along the dashed line 4311, so that the display screen 43 is movably connected to the second board body 42. In other embodiments, the display screen 43 and the second board body 42 can be movably connected in other ways; for example, the first connecting member 431 may be a sliding member or an adhesive member. Because the display screen 43 is movably connected to the second board body 42, it can be propped up at different angles when displaying, to meet the user's needs when viewing the display screen.
As can be seen from the above description, in this embodiment the display screen is movably connected to the second board body through the first connecting member, so that the display screen can be propped up at different angles to meet the user's viewing needs.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a notebook according to a third embodiment of the present application. The notebook 50 includes a first board body 51, a second board body 52, and a display 53. The display 53 is movably connected to the second board 52 by a first connecting member 531. The connection relationship and the position relationship between the first board 51 and the second board 52 are the same as those between the first board 11 and the second board 12 in the first embodiment of the notebook provided in this application, and will not be described herein again. The first board body 51 is provided with a writing area 512 and a sensing component 513, and the position, function and connection relationship between the writing area 512 and the sensing component 513 are the same as those between the writing area 112 and the sensing component 113 in the first embodiment of the notebook provided in this application, and will not be described herein again.
The first connecting member 531 includes first and second support plates 5311 and 5312 connected to each other, and a flexible adhesive portion 5313. The side edge 53111 of the first support plate 5311 is connected to the side edge 53121 of the second support plate 5312, and the two plates can be folded relative to the joint. The first support plate 5311 is attached to the back of the display screen 53, while the second support plate 5312 is separate from the back of the display screen 53. The other side edge 53122 of the second support plate 5312 is connected to one side edge 53131 of the flexible adhesive portion 5313, and the other side edge 53132 of the flexible adhesive portion 5313 is connected to the second board body 52.
In this embodiment, when the display screen 53 is lifted, the first support plate 5311 and the second support plate 5312 are folded at the joint, and the second support plate 5312 is folded at an angle relative to the second board body 52 by means of the flexible adhesive portion 5313. The propped-up angle of the display screen 53 can be controlled by adjusting the angle between the first support plate 5311 and the second support plate 5312 and/or the angle between the second support plate 5312 and the second board body 52.
As can be seen from the above description, this embodiment provides a first support plate and a second support plate connected to each other, together with a flexible adhesive portion: the side edge of the first support plate is connected to the side edge of the second support plate and can be folded relative to the joint, the other side edge of the second support plate is connected to one side of the flexible adhesive portion, and the other side of the flexible adhesive portion is connected to the second board body. The first support plate is attached to the back of the display screen while the second support plate is separate from it; the propped-up angle of the display screen can therefore be controlled by adjusting the angle between the first and second support plates and/or the angle between the second support plate and the second board body, allowing the display screen to be propped up at different angles to meet the user's viewing needs.
Referring to fig. 6a and 6b, fig. 6a is a schematic structural diagram of the display screen of an embodiment of the second board body of a notebook provided by the present application in the folded-down state, and fig. 6b is a schematic structural diagram of the same display screen in the propped-up state.
A display screen 63 is arranged on the second board body 62 and is movably connected to it through a first connecting member 631. The first connecting member 631 includes first and second support plates 6311 and 6312 connected to each other, a flexible adhesive portion 6313, and a pull-up member 6314. The pull-up member 6314 is used to assist the user in lifting the display screen 63.
A screen protection area 621 is arranged on the second board body 62, and its surface carries a flexible material with hardness lower than a set value. In this implementation scenario, the screen protection area 621 is arranged opposite the display screen 63: when the display screen 63 lies flat, its screen faces and rests against the screen protection area 621. Because the surface of the screen protection area 621 carries the flexible material, the screen of the display screen 63 is protected from scratches when the notebook is jolted.
In this embodiment, the surface friction coefficient of the screen protection area 621 is greater than that of the other areas of the second board body 62, so that when the display screen is propped up, the large friction force between the side 632 of the display screen and the screen protection area 621 keeps the display screen stable.
As can be seen from the above description, in this embodiment the pull-up member helps the user lift the display screen, the flexible material on the surface of the screen protection area protects the display screen, and the larger friction coefficient of the screen protection area surface keeps the propped-up screen stable, making it convenient for the user to consult the displayed content.
Referring to fig. 7a and 7b, fig. 7a is a schematic structural diagram of the display screen of a fourth embodiment of a notebook provided by the present application in the folded-down state, and fig. 7b is a schematic structural diagram of the same display screen in the propped-up state.
The notebook 70 includes a first board body 71, a second board body 72, a display screen 73, and a second connecting member 74. One side of the second connecting member 74 is adhered to the first board body 71 and the other side to the second board body 72, so that the side edge 711 of the first board body 71 and the side edge 721 of the second board body 72 are connected through the second connecting member 74. In this embodiment, the two sides of the second connecting member 74 are bonded to the entire back surfaces of the first board body 71 and the second board body 72 respectively; in other embodiments, they may be bonded only to the portions of the back surfaces near the side edges 711 and 721, as shown in fig. 3.
In this implementation scenario, a writing-pen fixing member 741 is arranged on the second connecting member 74 to hold a writing pen. In this embodiment, the fixing member 741 is an opening into which the pen clip can be inserted, so that the pen is held by the second connecting member 74. In other embodiments, the fixing member 741 may be a pen cap into which the pen is inserted, or a storage groove in which the pen is placed, to achieve the same fixing purpose.
The surface of the first board body 71 is provided with a writing area 712 for placing paper on which a user writes. A sensing component (not shown) is arranged in the first board body 71, below its surface and corresponding to the writing area 712, for sensing the content written by the user. Since the writing area 712 holds paper, a paper fixing member 714 is further provided on the surface of the first board body to fix the paper placed in the writing area 712 and make writing easier. In this embodiment, the paper fixing member 714 is an opening into which the paper can be inserted and held against the first board body 71 by friction; in other embodiments, the paper fixing member 714 may be a receiving groove, a baffle, or another fixing device.
Because writing paper comes in different sizes and can be placed in portrait or landscape orientation, and to prevent the paper from being placed beyond the sensing range of the sensing component, a plurality of paper placement marks 715 are arranged in the writing area 712. These marks indicate the placement regions for papers of different sizes, so that the paper is not placed out of range, which would prevent the sensing component from sensing the content the user writes.
The second plate 72 is provided with a display screen 73 and a business card pocket 75. The display screen 73 is connected with the sensing assembly and is used for displaying the content sensed by the sensing assembly. For example, in the present implementation scenario, a user writes on a piece of paper placed in the writing area 712, the sensing component may sense the content written by the user and transmit the content to the display 73, so that the display 73 may display the content written by the user.
The display screen 73 is movably connected to the second plate 72 through a first connecting member 731. The first connecting member 731 includes a first supporting plate 7311 and a second supporting plate 7312 connected to each other, and a flexible adhesive part 7313. The side edge 73111 of the first support plate 7311 and the side edge 73121 of the second support plate 7312 are connected and can be folded over with respect to the connection. The first support plate 7311 is attached to the back of the display 73, and the second support plate 7312 is detached from the back of the display 73. The other side 73122 of the second support plate 7312 is connected to one side 73131 of the flexible adhesive part 7313, and the other side 73132 of the flexible adhesive part 7313 is connected to the second plate 72.
In this embodiment, when the display screen 73 is lifted, the first support plate 7311 and the second support plate 7312 are folded at the joint, and the second support plate 7312 is folded at an angle with respect to the second plate 72 by the flexible adhesive portion 7313. The supported angle of the display screen 73 can be controlled by adjusting the angle between the first supporting plate 7311 and the second supporting plate 7312 and/or the angle between the second supporting plate 7312 and the second plate body 72.
The other side 73112 of the first support plate 7311 is provided with a pull-up member 7314 for assisting the user in lifting the display screen 73.
The second plate 72 is provided with a screen protection area 722, a surface of the screen protection area 722 is provided with a flexible material with hardness lower than a set value, in this implementation scenario, the screen protection area 722 is arranged opposite to the display screen 73, and when the display screen 73 is horizontally placed, the screen of the display screen faces and is attached to the screen protection area 722. Since the surface of the screen protection area 722 is provided with the flexible material, the flexible material can protect the screen of the display screen 73 from being scratched when the notebook is in a bumpy state.
In this embodiment, the surface friction coefficient of the screen protection area 722 is greater than that of the rest of the second board body 72, so that when the display screen is propped up, the larger friction between the screen protection area 722 and the side 732 of the display screen keeps it stable.
The business card pocket 75 is located on the surface of the second plate 72 in an area outside the screen protection area 722 and is used for holding a business card or a card of similar size.
As can be seen from the above description, in this embodiment the sensing assembly senses the content written by the user on the paper arranged in the writing area, and the display screen shows the content sensed by the sensing assembly, so that the content written by the user can be displayed electronically in synchronization with the writing.
Referring to fig. 8, fig. 8 is a schematic flow chart of a first embodiment of the education method provided by the present application. The educational method provided by the present application comprises:
S801: A first terminal acquires content written on paper by a user, generates first text data, and displays the first text data.
In a specific implementation scenario, the first terminal is the notebook shown in any one of fig. 1 to 7 of this application. The sensing component of the first terminal senses the content written by a user on paper arranged in the writing area. In this implementation scenario, the sensed content is obtained through pressure sensing; in other implementation scenarios, it may be obtained by sensing the track of a writing pen. The sensed content is the content written by the user; first text data is generated according to this content, the sensing assembly transmits the first text data to the display screen, and the display screen acquires and displays the first text data. In this embodiment, the content written by the user is displayed directly in the form of the user's handwritten characters and/or drawings; in other embodiments, the written content may be converted into characters displayed in a standard font, for example, a print font.
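As a rough illustration only (not part of the patent), grouping pressure-sensed samples into handwriting strokes that make up the "first text data" could be sketched as follows. The `FirstTextData` container, the `None` pen-up marker, and `sense_writing` are all hypothetical names:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Stroke = List[Tuple[float, float]]  # sampled (x, y) points of one pen stroke

@dataclass
class FirstTextData:
    """Hypothetical container for the sensed handwriting (the 'first text data')."""
    strokes: List[Stroke] = field(default_factory=list)

def sense_writing(pressure_events) -> FirstTextData:
    """Group pressure-sensed samples into strokes; a None event marks pen-up."""
    data = FirstTextData()
    current: Stroke = []
    for event in pressure_events:
        if event is None:              # pen lifted: close the current stroke
            if current:
                data.strokes.append(current)
                current = []
        else:
            current.append(event)
    if current:                        # flush the last stroke, if any
        data.strokes.append(current)
    return data
```

The same container could later be rendered as raw handwriting or fed to a recognizer for standard-font display, as the paragraph above describes.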
In another specific implementation scenario, in the process of a remote class, a student completes a class test or a class assignment arranged by a teacher by using the first terminal, writes an answer on paper set in a writing area, and the sensing component acquires the content written by the user and displays the answer written by the student on a display screen.
In another specific implementation scenario, in the process of remote teaching, a teacher uses the first terminal to arrange classroom work, writes the questions of classroom tests on paper set in a writing area, and the sensing component acquires the contents written by the user and displays the questions of classroom tests on a display screen.
S802: and sending the first text data to at least one second terminal, so that the at least one second terminal displays the first text data.
In a specific implementation scenario, the second terminal is also a notebook as shown in any one of fig. 1 to 7 of this application. The first terminal sends the first text data to the second terminal, and after receiving it, the second terminal displays the first text data on its display screen. The first terminal and the second terminal may be connected by wire or wirelessly, or may be connected through the internet.
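Delivering the same first text data to every connected second terminal is a plain broadcast. A minimal sketch, with a hypothetical `SecondTerminal` stand-in for the receiving notebook:

```python
class SecondTerminal:
    """Hypothetical stand-in for a receiving second terminal."""
    def __init__(self, name: str):
        self.name = name
        self.screen = []               # lines currently shown on the display

    def display(self, text_data: str) -> None:
        self.screen.append(text_data)

def send_first_text_data(text_data: str, terminals: list) -> int:
    """Deliver the same first text data to every connected second terminal
    and return how many terminals received it."""
    for terminal in terminals:
        terminal.display(text_data)
    return len(terminals)
```

In practice the delivery would go over a wired, wireless, or internet connection as the paragraph above notes; the loop only illustrates the fan-out.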
In another specific implementation scenario, during a remote class, after a student writes an answer to a classroom test using a first terminal, the first terminal sends the acquired answer written by the student to the teacher, and the display screen of the teacher's second terminal displays the student's answer. Because a plurality of students submit answers, the teacher can choose to display the first text data sent by a specific first terminal.
In another specific implementation scenario, the teacher sends the arranged classroom homework to the student through the first terminal during the course of the remote lecture, and the student can see the questions arranged by the teacher on the display screen.
S803: and receiving the education data transmitted by the at least one second terminal, wherein the education data is generated according to the first text data.
In a specific implementation scenario, the first terminal receives education data transmitted by the second terminal, and the education data is generated according to the first text data transmitted by the first terminal.
In another specific implementation scenario, the first text data is an answer written by the student according to a question issued by the teacher, a correct answer of the question may be pre-stored in the second terminal, if the first text data sent by the student through the first terminal is the correct answer, the second terminal sends information of correct answer to the first terminal, otherwise, the second terminal sends information of incorrect answer to the first terminal. In this implementation scenario, the educational data includes only correct or incorrect judgment information, and in other implementation scenarios, the educational data further includes second text data and/or first voice data.
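The correct/incorrect judgment described above could be sketched like this, with a hypothetical pre-stored answer key on the second terminal and hypothetical field names for the resulting education data:

```python
def grade_answer(first_text_data: str, answer_key: dict, question_id: str) -> dict:
    """Compare the student's submitted answer against the pre-stored correct
    answer and build the judgment part of the education data."""
    correct_answer = answer_key.get(question_id, "")
    is_correct = first_text_data.strip() == correct_answer.strip()
    return {"question": question_id,
            "judgment": "correct" if is_correct else "incorrect"}
```

The returned dictionary corresponds to the minimal education data of this scenario; richer scenarios would add second text data and/or first voice data alongside the judgment.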
In another specific implementation scenario, during a remote class, after viewing the answers written by the student on the display screen of the second terminal, the teacher writes correction opinions or correct answers on paper set in the writing area according to the student's answers; the second terminal acquires the content written by the teacher in the writing area, generates education data according to it, and sends the education data to the student's first terminal.
In this implementation scenario, the class is a remote class, and the teacher can give explanations to the student by voice. In another specific implementation scenario, while the teacher writes the correct answer or correction opinions for the student's wrong answer, the teacher also explains the solution approach of the question or the point where the student went wrong. The second terminal acquires the content written by the user on the paper to generate the second text data, and while the user writes on the paper, it also collects the environmental sound to acquire the teacher's explanation and generate the first voice data.
In another specific implementation scenario, during a remote class, after a student views, on the display screen of a second terminal, a question issued by the teacher through a first terminal, the student writes an answer on paper according to the question, and the second terminal acquires the content written by the user to generate the second text data.
Alternatively, the teacher assigns an exercise of reading and transcribing words, and the student reads each word aloud while transcribing it. The second terminal acquires the content written by the student on the paper to generate the second text data, and while the user writes, it also collects the environmental sound to acquire the student's reading voice and generate the first voice data.
S804: and prompting the user of the education data.
In a specific implementation scenario, the first terminal prompts the user for the received educational data. In this implementation scenario, the educational data includes correct or incorrect judgment information, and the judgment information is displayed on the display screen. In other implementations, the educational data includes second textual data, such as written correction content of a teacher or written answers of a student, and the second textual data is displayed on the display screen. In other implementations, the educational data further includes first voice data, and the first terminal plays the first voice data. Or the first terminal plays the first voice data while displaying the second text data.
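Since the education data may carry any combination of judgment information, second text data, and first voice data, the prompting step can dispatch on whichever parts are present. A minimal sketch with hypothetical field names:

```python
def prompt_education_data(edu: dict) -> list:
    """Return the prompt actions for the received education data.
    Recognised (hypothetical) fields: 'judgment', 'second_text', 'first_voice'."""
    actions = []
    if "judgment" in edu:
        actions.append(("display", edu["judgment"]))
    if "second_text" in edu:
        actions.append(("display", edu["second_text"]))
    if "first_voice" in edu:
        actions.append(("play", edu["first_voice"]))
    return actions
```

A real terminal would hand the "display" actions to the screen and the "play" actions to a speaker; returning them as a list keeps the sketch testable.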
In this implementation scenario, the first text data is displayed in a first format, and the second text data is displayed in a second format different from the first format, so that a user can easily distinguish whether the first text data or the second text data is currently displayed.
As can be seen from the above description, in this embodiment the first terminal acquires the content written by the user, generates first text data, and sends the first text data to the second terminal; the second terminal generates education data according to the first text data and sends it back; the first terminal receives the education data and prompts the user with it. The user of the first terminal can thus obtain the teacher's standard answers and explanations, so that during distance education the communication between teacher and student becomes more convenient, education efficiency can be improved, and the distance education can help the user learn better, improving the educational effect.
Referring to fig. 9, fig. 9 is a schematic flow chart of a second embodiment of the education method provided by the present application. The educational method provided by the present application comprises:
S901: A first terminal acquires content written on paper by a user, generates first text data, displays the first text data, and acquires second voice data of the user of the first terminal.
In a specific implementation scenario, a first terminal acquires content written by a user on paper and collects the environmental sound generated while the user writes, generating first text data according to the acquired written content and second voice data according to the collected environmental sound.
In another specific implementation scenario, during a remote class the teacher may assign homework of transcribing and reading words, and may read the words aloud as a demonstration for the students while assigning the homework. The first terminal acquires the content written by the teacher (the assigned homework) to generate first text data, and acquires the teacher's reading voice to generate second voice data.
In yet another specific implementation scenario, during a remote class a student completes a transcription and reading assignment set by the teacher and reads each word aloud while transcribing it. The first terminal acquires the content written by the student (the completed homework) to generate first text data, and acquires the student's reading voice to generate second voice data.
S902: and sending the second voice data and the first text data to the at least one second terminal together, so that the at least one second terminal displays the first text data and/or plays the second voice data.
In a specific implementation scenario, the first terminal sends the generated first text data and the second voice data to the second terminal, and the second terminal displays the first text data and/or plays the second voice data after receiving the first text data and the second voice data.
In another specific implementation scenario, during a remote class the student can view the homework assigned by the teacher and listen to the teacher's reading through the second terminal. In yet another specific implementation scenario, the teacher can check how the student completed the assignment through the second terminal, for example by listening to the student's reading.
S903: and receiving the education data transmitted by the at least one second terminal, wherein the education data comprises second text data and/or first voice data.
In this implementation scenario, second text data and first voice data sent by a second terminal are received, and the second text data and the first voice data are generated according to at least one of the first text data and the second voice data.
In this implementation scenario, during a remote class a student submits word-transcription homework and word-reading voice through a first terminal, and the second terminal used by the teacher displays the words transcribed by the student. If the teacher finds that a word is misspelled, the teacher writes the correct spelling on paper, and the second terminal acquires the content written by the teacher to generate second text data. Alternatively, if the teacher finds that the student's reading is also wrong, the teacher corrects the student's pronunciation by voice while correcting the transcription homework, and the second terminal acquires the teacher's corrected pronunciation while the teacher writes to generate first voice data.
In another specific implementation scenario, when a student completes the transcription and reading activities arranged by a teacher during a remote lesson, the student transcribes a word, the second terminal acquires the content written by the student to generate second text data, the student reads the word while transcribing the word, and the second terminal acquires the voice read by the student to generate first voice data.
And the second terminal sends the acquired education data comprising the second text data and/or the first voice data to the first terminal.
S904: and displaying the second text data and/or playing the first voice data, wherein the first text data and the second text data are displayed by adopting different display screens or displayed by adopting the same display screen in a split-screen manner.
In one particular implementation, upon receiving the educational data, the second textual data is displayed in a format other than the format in which the first textual data is displayed, e.g., the first textual data is displayed in a first format and the second textual data is displayed in a second format different from the first format.
In this implementation scenario, the first text data is still displayed while the second text data is displayed, and the first text data and the second text data are displayed on different display screens or on the same display screen in a split-screen manner.
In other implementation scenarios, when the first text data and the second text data are displayed simultaneously, the first text data and the second text data may be compared, and a difference between the first text data and the second text data is displayed in a third format different from the first format and the second format.
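Splitting the second text data into matching and differing spans, with the differences shown in a third format, can be sketched with Python's standard `difflib`; the span labels "second" and "third" stand for the two display formats and are illustrative only:

```python
import difflib

def render_with_diff(first_text: str, second_text: str) -> list:
    """Split second_text into (format, text) spans: parts that also occur in
    first_text keep the normal 'second' format, differing parts get 'third'."""
    matcher = difflib.SequenceMatcher(None, first_text, second_text)
    spans = []
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if j1 == j2:
            continue  # pure deletion from first_text: nothing to render
        fmt = "second" if tag == "equal" else "third"
        spans.append((fmt, second_text[j1:j2]))
    return spans
```

A display layer would then paint each span in its assigned format, so the student's eye lands directly on the differing passages.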
In another specific implementation scenario, the first text data is the answers written by the students, and the second text data is the solution steps given by the teacher. By displaying the difference between the first text data and the second text data, the students can quickly find where their solutions went wrong.
As can be seen from the above description, in this embodiment the first text data and the second voice data are generated by acquiring the content written by the user and the user's voice, respectively, and are sent to the second terminal; the education data that the second terminal generates according to the first text data and the second voice data is then received and prompted, so that the teacher and the student can also communicate by voice during distance education. Communication between teachers and students thus becomes more convenient, education efficiency can be improved, and the distance education can help the user learn better, improving the educational effect.
Referring to fig. 10, fig. 10 is a schematic flow chart of a third embodiment of the education method provided by the present application. The educational method provided by the present application comprises:
S1001: A first terminal acquires content written on paper by a user, generates first text data, displays the first text data, and acquires second voice data of the user of the first terminal.
S1002: and sending the second voice data and the first text data to the at least one second terminal together, so that the at least one second terminal displays the first text data and/or plays the second voice data.
S1003: and receiving the education data transmitted by the at least one second terminal, wherein the education data comprises second text data and/or first voice data.
S1004: and displaying the second text data and/or playing the first voice data, wherein the first text data and the second text data are displayed by adopting different display screens or displayed by adopting the same display screen in a split-screen manner.
In this implementation scenario, steps S1001 to S1004 are substantially the same as steps S901 to S904 in the second embodiment of the education method provided by the present application, and are not described herein again.
S1005: recording at least one of the first text data, the second text data, the first voice data and the second voice data.
In a specific implementation scenario, the first terminal stores at least one of the first text data and the second voice data when generating the first text data and the second voice data, or when receiving the second text data and the first voice data.
In this implementation scenario, the first text data, the second text data, the first voice data, and the second voice data are all saved for subsequent review by the user.
In another specific implementation scenario, the first text data and the second voice data are homework submitted by a student, and the second text data and the first voice data are correction opinions returned by the teacher. The student may need to review the teacher's correction opinions several times or listen to the teacher's reading repeatedly.
In other implementations, the user may also select content that needs to be saved.
S1006: and playing back at least one of the first text data, the second text data, the first voice data and the second voice data according to an instruction input by a user.
In a specific implementation scenario, at least one of the first text data, the second text data, the first voice data and the second voice data is played back according to a playback instruction input by the user. In another specific implementation scenario, if the user needs to listen repeatedly to the teacher's reading, the user may input a playback instruction to play back the first voice data. In yet another specific implementation scenario, the user may specify that the first voice data and the second voice data be played back at the same time, so as to find the gap between his or her own pronunciation and the teacher's.
The playback instruction may also include a number of playbacks, such as an unlimited number of repetitions or playing back only once, three times, five times, and so on.
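A playback instruction carrying targets and a repetition count could look like the following sketch; the field names and the `library` lookup are hypothetical, and unlimited repetition is left to the caller's loop rather than modelled here:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlaybackInstruction:
    targets: List[str]            # e.g. ["first_voice_data", "second_voice_data"]
    times: Optional[int] = 1      # None would stand for unlimited repetition

def execute_playback(instr: PlaybackInstruction, library: dict) -> list:
    """Replay the requested items the requested number of times and
    return the resulting playback order."""
    if instr.times is None:
        raise NotImplementedError("unlimited repetition is driven by the caller")
    played = []
    for _ in range(instr.times):
        for target in instr.targets:
            played.append(library[target])
    return played
```

Playing both voice items per repetition matches the scenario where the student compares his or her own pronunciation with the teacher's.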
As can be seen from the above description, in this embodiment at least one of the first text data, the second text data, the first voice data and the second voice data is saved and played back according to a playback instruction input by the user, so that when receiving distance education the user can repeatedly study course content within a self-designated range. This can reinforce the user's learning, help the user learn better, and improve the educational effect.
Referring to fig. 11, fig. 11 is a flowchart illustrating an electronic education method according to a first embodiment of the present application. The electronic education method provided by the application comprises the following steps:
S1101: First display data transmitted by the educational terminal is received.
In a specific implementation scenario, the electronic education terminal used by the student is a notebook as shown in any one of fig. 1-7. The electronic education terminal receives first display data transmitted from the education terminal. The electronic education terminal and the education terminal may be connected by wire or wireless, or may be connected through the internet.
For example, the education terminal is a notebook computer of a teacher, and the first display data is a class lecture of the teacher in class or a blackboard-writing written by the teacher in class.
S1102: and acquiring the content written on paper by the user, and generating second display data according to the content.
In a specific implementation scenario, while the user is listening to a lecture in class, the teacher temporarily supplements content that is in neither the textbook nor the classroom lecture (i.e., the first display data), so the user needs to record this temporarily supplemented content. When the user writes on the paper, the electronic education terminal senses the content written by the user and generates second display data according to that content.
In this embodiment, the content written by the user is used directly as the second display data; in other embodiments, the written content is recognized and converted into content displayed in a standard font (e.g., a print font).
S1103: and generating third display data according to the first display data and the second display data, and displaying and storing the third display data.
In a specific implementation scenario, the received first display data and the second display data are combined to generate third display data, and the third display data is displayed and stored.
For example, if the first display data is a classroom lecture transmitted by the teacher's educational terminal and the second display data is classroom notes recorded by the user while listening, the two are displayed in combination; the second display data may, for example, be displayed in a blank area of the first display data. That is, the classroom notes are shown in the blank areas of the classroom lecture, so normal reading of the lecture is not affected while the notes and the lecture are combined. The third display data is therefore the classroom lecture annotated with classroom notes. This increases the association between the notes and the lecture, making it easier for the user to recall the classroom content during review.
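Treating the lecture as lines of text and placing each note into the next blank line is one naive way to sketch the combination into third display data; the `[note]` prefix and the empty-string blank marker are purely illustrative:

```python
def merge_display(lecture_lines: list, notes: list, blank_marker: str = "") -> list:
    """Combine first display data (lecture_lines) and second display data
    (notes) into third display data: each note fills the next blank line
    of the lecture, and leftover notes are appended at the end."""
    merged = list(lecture_lines)
    note_index = 0
    for i, line in enumerate(merged):
        if line == blank_marker and note_index < len(notes):
            merged[i] = f"[note] {notes[note_index]}"
            note_index += 1
    merged.extend(f"[note] {n}" for n in notes[note_index:])
    return merged
```

A real terminal would work with layout regions rather than text lines, but the idea of reusing the lecture's blank space is the same.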
Because the third display data is stored, the user does not need to transcribe the lecture content and only needs to record classroom notes. And because the first display data, i.e., the classroom lecture, is sent by the teacher's educational terminal, the lecture content is synchronized with the teacher's teaching and needs no manual adjustment by the user, which is very convenient.
As can be seen from the above description, in this embodiment the terminal receives first display data, such as a classroom lecture, sent by the educational terminal, acquires content written by the user on paper, such as classroom notes, to generate second display data, and generates and displays third display data, such as a classroom lecture annotated with notes, according to the first display data and the second display data. This helps the user study and review better, improving the user's learning effect.
Referring to fig. 12, fig. 12 is a schematic flowchart illustrating an electronic education method according to a second embodiment of the present application. The electronic education method provided by the application comprises the following steps:
S1201: First display data transmitted by the educational terminal is received.
S1202: and acquiring the content written on paper by the user, and generating second display data according to the content.
S1203: and generating third display data according to the first display data and the second display data, and displaying and storing the third display data.
In a specific implementation scenario, steps S1201 to S1203 are substantially the same as steps S1101 to S1103 in the first embodiment of the electronic education method provided by the present application, and details thereof are not repeated here.
S1204: and receiving a playback instruction input by a user.
In a specific implementation scenario, the third display data is saved. When the user needs to consult the third display data again, a playback instruction may be input. In this embodiment, the time at which the third display data was generated is recorded when it is stored in step S1203, so the playback instruction may include a first time range to be displayed. For example, if the user needs to review the content of a mathematics class held at 9:00 a.m. on January 15, the first time range entered is 9:00 a.m. on January 15.
In another implementation scenario, the input first time range may further include an end time. For example, when reviewing the content of the mathematics class held at 9:00 a.m. on January 15, and the class lasted 45 minutes, the first time range entered is 9:00–9:45 a.m. on January 15.
S1205: And reading and displaying the third display data in response to the playback instruction.
In a specific implementation scenario, when the electronic education terminal receives a playback instruction input by the user, the electronic education terminal reads and displays the third display data according to the playback instruction.
For example, if the playback instruction includes a first time range to be displayed, the electronic education terminal uses the generation time recorded when the third display data was saved to find the third display data whose generation time falls within the first time range, and displays it.
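Looking up stored records by the first time range could be sketched as follows, assuming each record was saved together with its generation timestamp:

```python
from datetime import datetime

def find_in_time_range(records: list, start: datetime, end: datetime = None) -> list:
    """records: list of (generated_at, data) pairs; return every data item
    whose generation time falls within [start, end] (open-ended if end is None)."""
    hits = []
    for generated_at, data in records:
        if generated_at >= start and (end is None or generated_at <= end):
            hits.append(data)
    return hits
```

The same helper works for the audio-data lookup of the later embodiment, since audio data is also stored together with its generation time.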
In yet another specific implementation scenario, the playback instruction may include another identification of the third display data, which may be added when or after the third display data is generated. The user may name it after a subject, e.g., mathematics, language, English, and so on. The electronic education terminal receives the playback instruction, reads the identification it indicates, finds the third display data carrying the corresponding identification, and displays it.
Referring to fig. 13, fig. 13 is a schematic flowchart illustrating an electronic education method according to a third embodiment of the present application. The electronic education method provided by the application comprises the following steps:
S1301: First display data transmitted by the educational terminal is received.
S1302: and acquiring the content written on paper by the user, and generating second display data according to the content.
In a specific implementation scenario, steps S1301 to S1302 are substantially the same as steps S1101 to S1102 in the first embodiment of the electronic education method provided by the present application, and are not described herein again.
S1303: collecting environmental sounds when a user writes on paper to obtain audio data, and storing the audio data.
In a specific implementation scenario, the user takes classroom notes precisely when the teacher is speaking temporarily supplemented content. Therefore, by collecting the environmental sound while the user writes on paper, the teacher's explanation of the supplemented notes can be recorded; audio data is obtained from the collected sound and stored, i.e., the teacher's impromptu explanation in class is saved, so that the user can later choose to listen to this content again to reinforce understanding.
S1304: and generating third display data according to the first display data and the second display data, and displaying and storing the third display data.
In a specific implementation scenario, step S1304 is substantially the same as step S1103 in the first embodiment of the electronic education method provided in this application, and details thereof are not repeated here.
S1305: and receiving a playback instruction input by a user.
In a specific implementation scenario, the playback instruction includes a playback instruction for the third display data and a playback instruction for the audio data. The implementation scenario of the playback instruction including the playback instruction for the third display data is substantially the same as that described in step S1204 in the second embodiment of the electronic education method provided in the present application, and details thereof are not repeated here.
In this implementation scenario, since the electronic education terminal also acquires and stores audio data while the user writes, the playback instruction may also target the audio data. When the audio data is saved, the time at which it was generated may be saved together with it, so the playback instruction may also include a second time range of the audio data to be played. For example, if the user needs to review what the mathematics teacher said at 9:00 a.m. on January 15, the second time range entered is 9:00 a.m. on January 15. In other implementation scenarios, the input second time range may further include an end time; for example, when the teacher's explanation in the 9:00 a.m. mathematics class on January 15 lasted 45 minutes, the second time range entered is 9:00–9:45 a.m. on January 15.
In other implementation scenarios, the playback instruction may further specify whether to play back the third display data, the audio data, or both. The first time range and the second time range may differ; to distinguish them, they may each carry an identification so that the electronic education terminal can tell which is the first time range and which is the second.
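Tagging the two time ranges so the terminal can tell them apart could be as simple as distinct dictionary keys; the key names below are hypothetical:

```python
def parse_playback_instruction(instr: dict) -> list:
    """Map a playback instruction onto the items to replay.
    Hypothetical keys: 'display_range' (first time range, for the third
    display data) and 'audio_range' (second time range, for the audio data)."""
    items = []
    if "display_range" in instr:
        items.append(("third_display_data", instr["display_range"]))
    if "audio_range" in instr:
        items.append(("audio_data", instr["audio_range"]))
    return items
```

An instruction carrying both keys asks for display and audio together; carrying only one key selects a single kind of playback.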
S1306: and responding to the playback instruction, reading and displaying the third display data and/or playing the audio data.
In a specific implementation scenario, in response to the playback instruction, the implementation scenario of reading and displaying the third display data is substantially the same as that described in step S1205 in the second embodiment of the electronic education method provided in the present application, and details are not repeated here.
When the playback instruction includes a second time range to be played, the electronic education terminal uses the generation time recorded when the audio data was saved to find the audio data whose generation time falls within the second time range, and plays it.
In this implementation scenario, after receiving the playback instruction, it is first determined whether the item to be played back is the third display data, the audio data, or both. The items to be played back and the respective times may be obtained by reading the identifications of the first time range and the second time range.
As can be seen from the above description, in the embodiment, the audio data is generated by acquiring the environmental sound when the user writes, and is stored and played back when the playback instruction input by the user is received, so that the user can be helped to better record the classroom situation, and the follow-up review is easier.
Referring to fig. 14, fig. 14 is a schematic flowchart illustrating an electronic education method according to a fourth embodiment of the present application. The electronic education method provided by the application comprises the following steps:
S1401: First display data transmitted by the educational terminal is received.
S1402: and acquiring the content written on paper by the user, and generating second display data according to the content.
In a specific implementation scenario, steps S1401 to S1402 are substantially the same as steps S1101 to S1102 in the first embodiment of the electronic education method provided by the present application, and are not described herein again.
S1403: and acquiring the target relative display position of the second display data and the first display data.
In a specific implementation scenario, the target relative display position of the second display data and the first display data is obtained. For example, when a user records a classroom note on paper, the note can be synchronously added to and displayed on the classroom lecture material. Since the lecture material may include many contents while a classroom note usually concerns only one of them, adding the note beside the corresponding content is more helpful for the user's review after class.
Thus, in the present implementation scenario, the target relative display position of the second display data (e.g., classroom notes) and the first display data (e.g., classroom lecture material) is obtained. The target relative display position can be determined according to a position instruction input by the user: for example, before recording the classroom note, the user may specify where it is to be displayed, either by long-pressing the corresponding position on the display screen, or by dividing the display screen into several rows and columns and manually entering the specific row and column.
In other implementation scenarios, the paper on which the user writes is a printed copy of the lecture material. The writing area of the electronic education terminal is provided with a paper placement mark; when the printed lecture material is placed according to this mark, the writing position of the content written on the paper can be obtained by sensing, and that position is the target relative display position. For example, when it is sensed that the user writes a classroom note in the center of the paper, the acquired second display data is displayed in the center of the first display data.
S1404: and when the third display data is displayed, displaying the second display data and the first display data according to the target relative display position.
In a specific implementation scenario, when the third display data is displayed, the second display data and the first display data are displayed according to the target relative display position. For example, if the acquired target relative display position is the center of the first display data, the second display data is displayed in the center of the first display data.
In other implementation scenarios, the second display data and the first display data may overlap when displayed at the target relative display position, which can obstruct the user's view; in that case, the first display data and/or the second display data can be automatically scaled down appropriately so that the display space is fully used.
As can be seen from the above description, in this embodiment, the target relative display position of the second display data and the first display data is obtained, and the first display data and the second display data are displayed according to it, so that the classroom notes recorded by the user better match the user's actual needs and the user's learning effect is improved.
Referring to fig. 15, fig. 15 is a schematic flowchart illustrating a method for sharing a meeting record according to a first embodiment of the present disclosure. The conference record sharing method provided by the application comprises the following steps:
s1501: the method comprises the steps that a first terminal obtains content written on paper by a first user, first character data are generated, and the first character data are displayed.
In a specific implementation scenario, the first terminal is a notebook shown in any one of fig. 1 to 7 in this application. The sensing component of the first terminal senses the content a user writes on paper arranged in the writing area; in this implementation scenario, the content is sensed through pressure sensing, while in other implementation scenarios it may be sensed by tracking the trajectory of a writing pen. The sensed content is the content written by the user: first text data is generated from it, transmitted by the sensing assembly to the display screen, and displayed there. In this embodiment, the written content is displayed directly as the user's handwritten characters and/or drawings; in other embodiments, it may be converted into characters in a standard font, for example a print typeface.
In this implementation scenario, when a user participates in a teleconference, the user needs to record conference contents (e.g., key times, amounts, etc.) in time to avoid forgetting related matters, write down his or her understanding of a plan proposed at the conference, or write down ideas when participating in a brainstorming session. The first terminal senses the content the user writes on the paper, sends the sensed written content to the display screen, and displays it.
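The sensing step described above can be illustrated with a minimal Python sketch (all names are hypothetical and not from the patent): pressure-sensed points are filtered into a stroke, which is then wrapped as first text data with its creation time.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TextData:
    """Hypothetical container for sensed handwriting ("first text data")."""
    strokes: list                      # each stroke: list of (x, y, pressure)
    created_at: float = field(default_factory=time.time)

def sense_writing(raw_points, pressure_threshold=0.1):
    # Keep only points whose pressure indicates real pen contact with the paper.
    return [p for p in raw_points if p[2] >= pressure_threshold]

# One stroke sensed from the writing area; low-pressure points are hover noise.
raw = [(0, 0, 0.02), (1, 1, 0.50), (2, 1, 0.60), (3, 2, 0.05)]
stroke = sense_writing(raw)
first_text_data = TextData(strokes=[stroke])
```

The pressure threshold is an illustrative stand-in for whatever contact criterion the sensing component actually applies.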
S1502: and sending the first text data to at least one second terminal, so that the at least one second terminal displays the first text data.
In a specific implementation scenario, the second terminal is also a notebook as shown in any of fig. 1-7 in this application. The first terminal sends the first character data to the second terminal, and the second terminal displays the received first character data on the display screen after receiving the first character data. The first terminal and the second terminal may be connected by wire or wirelessly, or may be connected through the internet.
In this embodiment, during the teleconference, the user writes questions or opinions about a plan proposed at the conference on paper, and these are then sent to the second terminal used by the presenter of the plan, or to the second terminals used by all or some of the members participating in the conference, so that the users of those second terminals can see the questions or opinions. In some cases, an opinion or question may be relatively abstract when described in language, and the user may draw a simple sketch on the paper to aid understanding. The first terminal sends the sensed content written by the user, including the sketch, to the second terminal, which avoids the misunderstanding that a purely verbal description might cause.
In another specific implementation scenario, during the teleconference, the user writes down his or her ideas when participating in a brainstorming session, possibly with a simple hand-drawn sketch attached for ease of understanding, and the first terminal sends the sensed idea and sketch to the second terminal used by the convener of the brainstorming session, or to the second terminals used by all or some of the participating members. Sending the sensed written content, including the sketch, to the second terminal avoids the misunderstanding that a purely verbal description might cause.
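The sharing step (S1502) amounts to broadcasting the same first text data to a chosen set of second terminals. A minimal sketch, with hypothetical class and function names:

```python
class Terminal:
    """Hypothetical stand-in for a second terminal's receive-and-display path."""
    def __init__(self, name):
        self.name = name
        self.displayed = []          # what this terminal's screen has shown

    def receive(self, text_data):
        self.displayed.append(text_data)

def share_first_text_data(text_data, second_terminals):
    # The first terminal sends the same data to every selected second terminal.
    for terminal in second_terminals:
        terminal.receive(text_data)

presenter = Terminal("plan-presenter")
members = [Terminal("member-1"), Terminal("member-2")]
share_first_text_data("question about the plan + sketch", [presenter] + members)
```

Whether the recipient set is the presenter alone or all participants is a per-message choice, as the scenarios above describe.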
S1503: and receiving conference recording data sent by at least one second terminal, wherein the conference recording data is generated according to the first text data.
In a specific implementation scenario, the first terminal receives conference recording data sent by the second terminal, where the conference recording data is generated according to first text data sent by the first terminal. In this implementation scenario, after the first terminal sends the user's question or opinion about the plan proposed in the meeting to the second terminal used by the plan presenter or the second terminal used by at least some of the people participating in the meeting, the plan presenter may respond to the question or opinion.
In this implementation scenario, the presenter can write the response content on paper, and similarly, for ease of understanding, can draw a sketch to visualize the abstract concept.
In other implementation scenarios, the conference record data further includes first voice data. The user may need a long time to write on the paper, and sloppy handwriting may prevent others from correctly understanding the written first text data. The user can therefore explain aloud while drawing a sketch on the paper, or explain keywords aloud while writing them. While the user writes on the paper, the first terminal collects the ambient sound, obtains the sound of the user's explanation, and generates first voice data from it.
Alternatively, the first text data sent by the first terminal includes an idea contributed by the user during a brainstorming session in the teleconference, and the participants of the brainstorming session evaluate or refine the idea after seeing it through the second terminal. To save time, the evaluating user may write only a few keywords of the evaluation on the paper while explaining the keywords aloud; as the user writes, the ambient sound is collected to capture the spoken explanation, and first voice data is generated from it.
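Conference record data as described here can carry second text data, first voice data, or both. A minimal sketch of such a bundle (function and field names hypothetical):

```python
def make_conference_record(second_text_data=None, first_voice_data=None):
    # Either component is optional, but an empty record is meaningless.
    record = {}
    if second_text_data is not None:
        record["text"] = second_text_data
    if first_voice_data is not None:
        record["voice"] = first_voice_data
    if not record:
        raise ValueError("conference record data needs text or voice")
    return record

# A reply from the plan presenter: written answer plus a spoken explanation.
reply = make_conference_record(second_text_data="answer to the question",
                               first_voice_data=b"\x00\x01 pcm-audio")
```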
S1504: and prompting the conference recording data to a user.
In a specific implementation scenario, the first terminal prompts the user with the received conference record data. In this implementation scenario, the conference record data includes second text data; for example, the presenter of the plan writes answers to others' questions and opinions on paper, the second terminal acquires the content written on the paper and generates second text data, and the conference record data includes this second text data. After receiving the conference record data, the first terminal displays the second text data on the display screen for the user to view.
In this implementation scenario, the first text data is displayed in a first format, and the second text data is displayed in a second format different from the first format, so that a user can easily distinguish whether the first text data or the second text data is currently displayed.
In other implementations, the conference recording data further includes the first voice data. The first terminal plays the first voice data in addition to displaying the second text data after receiving the conference recording data.
In other implementations, the first textual data is displayed simultaneously with the second textual data so that the user can comprehend the first and second textual data coherently. The first character data and the second character data are displayed by different display screens or displayed in a split screen mode by the same display screen.
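The two-format, split-screen display described above can be sketched as follows (the format values and function names are illustrative, not from the patent):

```python
# Illustrative first and second display formats.
FORMATS = {
    "first":  {"color": "black", "style": "normal"},
    "second": {"color": "blue",  "style": "italic"},
}

def render(text, fmt):
    f = FORMATS[fmt]
    return f"[{f['color']}/{f['style']}] {text}"

def layout(first_text, second_text, screen_count):
    # Two screens: one per data stream; one screen: split it top/bottom.
    if screen_count >= 2:
        return {"screen-1": render(first_text, "first"),
                "screen-2": render(second_text, "second")}
    return {"top":    render(first_text, "first"),
            "bottom": render(second_text, "second")}
```

Keeping the two formats distinct is what lets the user tell at a glance whether a displayed passage is first or second text data.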
As can be seen from the above description, in this embodiment, first text data is generated from content written on paper by a first user, displayed, and sent to a second terminal; conference record data generated from the first text data is received from the second terminal and prompted to the user. During a teleconference, content a user writes on paper can thus be shared quickly with the other participants, so that ideas and opinions are exchanged efficiently, communication within the conference improves, the conference proceeds smoothly, and working efficiency is raised.
Referring to fig. 16, fig. 16 is a schematic flowchart illustrating a method for sharing a meeting record according to a second embodiment of the present disclosure. The method for sharing the conference record comprises the following steps:
s1601: the method comprises the steps that a first terminal obtains content written on paper by a user, first character data are generated, the first character data are displayed, and second voice data of the user of the first terminal are obtained.
In a specific implementation scenario, the first terminal acquires the content written by the user on paper and collects the ambient sound generated while the user writes, generating first text data from the acquired written content and second voice data from the collected ambient sound.
In this implementation scenario, to improve the efficiency of teleconference communication, the user can write only the key words to be recorded on paper and explain them by voice to express his or her viewpoint. The first terminal acquires the content written by the user (e.g., the key words) and generates second voice data from the collected ambient sound, which includes the user's spoken explanation of the key words.
S1602: and sending the second voice data and the first text data to the at least one second terminal together, so that the at least one second terminal displays the first text data and/or plays the second voice data.
In a specific implementation scenario, the first terminal sends the generated first text data and the second voice data to the second terminal, and the second terminal displays the first text data and/or plays the second voice data after receiving the first text data and the second voice data.
In this implementation scenario, after receiving the first text data (containing the keywords the user wrote down during the conference) and the second voice data (containing the user's spoken explanation) sent by the first terminal, the second terminal displays the first text data (e.g., the keywords) while playing the second voice data, so that the user of the second terminal can see the keywords, hear the explanation given by the user of the first terminal, and better understand that user's idea.
S1603: and receiving conference recording data sent by the at least one second terminal, wherein the conference recording data comprises second text data and/or first voice data.
In this implementation scenario, second text data and first voice data sent by a second terminal are received, and the second text data and the first voice data are generated according to at least one of the first text data and the second voice data.
In another specific implementation scenario, the user of the second terminal understands the idea of the user of the first terminal from the first text data (the keywords) and/or the second voice data (their spoken explanation) during the teleconference and composes a reply. The replying user writes the corresponding content on paper, the second terminal acquires it and generates second text data, and the second terminal may then send conference record data including the second text data and/or the first voice data to the first terminal.
S1604: and displaying the second text data and/or playing the first voice data, wherein the first text data and the second text data are displayed by adopting different display screens or displayed by adopting the same display screen in a split-screen manner.
In a specific implementation scenario, after the conference record data is received, the second text data is displayed in a format different from that of the first text data: for example, if the first text data is displayed in the first format, the second text data is displayed in a second format different from the first format.
In the implementation scenario, the first text data is still displayed while the second text data is displayed, and the first text data and the second text data are displayed by adopting different display screens or displayed by adopting the same display screen in a split-screen manner.
In other implementation scenarios, when the first text data and the second text data are displayed simultaneously, the first text data and the second text data may be compared, and a difference between the first text data and the second text data is displayed in a third format different from the first format and the second format.
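Displaying the differences between the first and second text data in a third format can be done with a standard sequence comparison; here is a sketch using Python's difflib, with a `<diff>` tag standing in for the third display format:

```python
import difflib

def mark_differences(first_text, second_text):
    """Return second_text with words absent from first_text tagged for the
    third display format (a simplification; the patent names no algorithm)."""
    a, b = first_text.split(), second_text.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op == "equal":
            out.extend(b[j1:j2])             # unchanged words keep their format
        else:
            out.extend(f"<diff>{w}</diff>" for w in b[j1:j2])
    return " ".join(out)
```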
In other implementation scenarios, the first text data and the second text data may be displayed in a projection manner, that is, in addition to displaying the first text data and the second text data at the first terminal, the first text data and the second text data may be displayed in a projection manner, so that others participating in the conference may see the first text data and the second text data.
For example, when a brainstorming session takes place in the teleconference, a promising idea (first text data) and supplements to that idea (second text data) can be projected for all participants to see, and a more complete answer is reached through everyone's discussion.
As can be seen from the above description, by collecting the ambient sound while the user writes on paper, this embodiment captures the user's spoken explanation of the written content, generates second voice data from the collected sound, and sends it to the second terminal. This lets the user express ideas more clearly with the help of voice during the teleconference, further improves the communication efficiency of the conference, helps users communicate effectively through the teleconference, and improves working efficiency.
Referring to fig. 17, fig. 17 is a schematic flowchart illustrating a method for sharing a meeting record according to a third embodiment of the present application. The method for sharing the conference record comprises the following steps:
s1701: the method comprises the steps that a first terminal obtains content written on paper by a user, first character data are generated, the first character data are displayed, and second voice data of the user of the first terminal are obtained.
S1702: and sending the second voice data and the first text data to the at least one second terminal together, so that the at least one second terminal displays the first text data and/or plays the second voice data.
S1703: and receiving conference recording data sent by the at least one second terminal, wherein the conference recording data comprises second text data and/or first voice data.
S1704: and displaying the second text data and/or playing the first voice data, wherein the first text data and the second text data are displayed by adopting different display screens or displayed by adopting the same display screen in a split-screen manner.
In this implementation scenario, steps S1701 to S1704 are substantially the same as steps S1601 to S1604 in the second embodiment of the method for sharing a conference record provided by the present application, and details are not repeated here.
S1705: recording at least one of the first text data, the second text data, the first voice data and the second voice data.
In a specific implementation scenario, the first terminal stores the first text data and the second voice data when generating them, and stores the second text data and the first voice data when receiving them; at least one of the four may be recorded.
In this implementation scenario, the first text data, the second text data, the first voice data, and the second voice data are all saved for subsequent review by the user.
Because all four record questions, replies, and discussions raised during the conference, documents can later be organized or compiled from them when the conference discussion results need to be checked or conference materials are collated, and they can provide a basis for later discussion.
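The recording step (S1705) reduces to appending each item together with its kind and creation time so it can be retrieved later; a minimal sketch (names hypothetical):

```python
import time

def save_record(store, kind, payload, created_at=None):
    # Each saved item keeps its kind and creation time for later lookup.
    store.append({
        "kind": kind,
        "payload": payload,
        "created_at": time.time() if created_at is None else created_at,
    })

store = []
save_record(store, "first_text", "keyword: Q3 budget", created_at=100.0)
save_record(store, "first_voice", b"pcm-bytes", created_at=101.5)
```

In practice the store would be persistent (e.g., a file or database on the terminal), but the shape of each saved item is the same.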
S1706: acquiring a reference instruction input by a user, and displaying corresponding first text data and/or second text data according to the reference instruction.
In a specific implementation scenario, a lookup instruction input by the user is acquired, and the first text data and/or the second text data are displayed accordingly; in other implementation scenarios, to make the first and second text data easier to understand, the first voice data and the second voice data are also played while the text data is displayed.
As can be seen from the above description, in this embodiment, at least one of the first text data, the second text data, the first voice data, and the second voice data is saved, and the first text data and/or second text data are displayed according to a lookup instruction input by the user. This makes it convenient for the user to collate conference materials and summarize the discussion results of the conference, effectively improving the user's working efficiency.
Referring to fig. 18, fig. 18 is a schematic flowchart illustrating a method for recording meeting contents according to a first embodiment of the present application. The method for recording the conference content comprises the following steps:
s1801: and receiving first display data sent by the conference initiating terminal.
In a specific implementation scenario, the electronic conference terminal used by the user is a notebook as shown in any one of fig. 1 to 7. The electronic conference terminal receives first display data sent by a conference initiating terminal. The electronic conference terminal and the conference initiating terminal may be connected in a wired or wireless manner, or may be connected through the internet.
For example, the conference initiating terminal is a notebook computer of the conference initiator or a notebook computer shown in any one of fig. 1 to 7, and the first display data is conference material such as a slide or a picture prepared by the conference initiator.
S1802: and acquiring the content written on paper by the user, and generating second display data according to the content.
In a specific implementation scenario, when a user participates in a conference, the user often encounters problems during the conference, or new ideas arise that do not appear in the pre-prepared conference material (i.e., the first display data). The user therefore needs to write on paper to record content not included in the conference material, such as these new ideas.
In this embodiment, the content written by the user is used directly as the second display data; in other embodiments, the written content is recognized and converted into content displayed in a standard font (e.g., a print typeface).
S1803: and generating third display data according to the first display data and the second display data, and displaying and storing the third display data.
In a specific implementation scenario, the received first display data and the second display data are combined to generate third display data, which is then displayed and stored. In this implementation scenario, the first display data is a slide prepared by the conference initiator, the second display data is a conference record written by the user while participating in the conference, and the two are displayed in combination: for example, the second display data may be displayed in a blank area of the first display data, so that the conference slides and the conference record are combined in a way that does not hinder reading and makes effective use of the display space.
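Placing the second display data in a blank area of the first display data can be sketched as a scan over an occupancy grid of the slide. This is a simplification of the combination step; the patent does not specify a placement algorithm.

```python
def find_blank_region(occupancy, note_w, note_h):
    """Return the top-left cell of the first free note_h x note_w block of
    the slide grid, or None if no blank area is large enough."""
    rows, cols = len(occupancy), len(occupancy[0])
    for r in range(rows - note_h + 1):
        for c in range(cols - note_w + 1):
            if all(not occupancy[r + dr][c + dc]
                   for dr in range(note_h) for dc in range(note_w)):
                return r, c
    return None

# 1 = cell already covered by slide content, 0 = blank.
slide = [[1, 1, 1],
         [0, 0, 1],
         [0, 0, 0]]
```

A found region gives the cell where the conference record is drawn; when nothing fits, the scaling fallback discussed later in the document applies.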
Storing the third display data allows the user to summarize accurately from it when conference materials need to be collated or a conference summary is written later. Because the first display data is sent by the conference initiating terminal as the conference proceeds, the third display data matches the content currently under discussion without manual adjustment by the user, which is very convenient.
Since the third display data includes the first display data (e.g., pre-prepared meeting materials, slides) and the second display data (e.g., a meeting record written by the user), a relatively comprehensive conclusion can be drawn in conjunction with the first display data and the second display data when the user subsequently reviews the third display data.
In other implementation scenarios, to improve the security of the conference material, at least one of the first display data, the second display data, and the third display data may be sent to a preset storage terminal, such as a preset storage relay device, so that colleagues who could not attend the conference can retrieve the record of the conference from it. Sending the conference data to the storage terminal also prevents the data from being lost if the electronic conference terminal fails.
In other implementation scenarios, the first display data, the second display data and the third display data are displayed on the electronic conference terminal, and are also sent to the projection device for projection display, so that all people participating in the conference can acquire the conference record, and the communication efficiency of the conference can be effectively improved.
As can be seen from the above description, in this embodiment, first display data sent by a conference initiating terminal (for example, pre-prepared conference materials or slides) is received, content written by the user on paper (for example, a conference record) is acquired, and third display data (for example, a slide supplemented with the conference discussion results) is generated from the first and second display data and displayed. This provides a more detailed and clear reference when conference materials are subsequently organized and conference content is conveyed, thereby improving conference efficiency.
Referring to fig. 19, fig. 19 is a flowchart illustrating a method for recording meeting contents according to a second embodiment of the present application. The method for recording the conference content comprises the following steps:
s1901: and receiving first display data sent by the conference initiating terminal.
S1902: and acquiring the content written on paper by the user, and generating second display data according to the content.
In a specific implementation scenario, steps S1901 to S1902 are substantially the same as steps S1801 to S1802 in the first embodiment of the method for recording conference content provided by the present application, and are not described here again.
S1903: and acquiring the target relative display position of the second display data and the first display data.
In a specific implementation scenario, the target relative display position of the second display data and the first display data is obtained. For example, when a user writes a conference record on paper, the record can be synchronously added to and displayed on the pre-prepared conference slides. Since the slides may include many contents while the conference record concerns the discussion result of only one of them, adding the record beside the corresponding content is more helpful for the user when collating the material after the meeting.
Thus, in the present implementation scenario, the target relative display position of the second display data (e.g., the conference record) and the first display data (e.g., material such as conference slides) is obtained. The target relative display position can be determined according to a position instruction input by the user: for example, before recording the conference record, the user may specify where it is to be displayed, either by long-pressing the corresponding position on the display screen, or by dividing the display screen into several rows and columns and manually entering the specific row and column.
In other implementation scenarios, the paper on which the user writes is a printed copy of the conference material, such as a slide. The writing area of the electronic conference terminal is provided with a paper placement mark; when the printed slides are placed according to this mark, the writing position of the content written on the paper can be obtained by sensing, and that position is the target relative display position. For example, when it is sensed that the user writes a conference record in the center of the paper, the acquired second display data is displayed in the center of the first display data.
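Mapping the sensed writing position on the paper to the target relative display position is essentially a normalization of coordinates. A minimal sketch, under the assumption that the paper placement mark fixes the paper's origin (function names hypothetical):

```python
def target_relative_position(pen_x, pen_y, paper_w, paper_h):
    # Normalize the sensed paper coordinate to a 0..1 relative position.
    return pen_x / paper_w, pen_y / paper_h

def place_on_display(rel_pos, display_w, display_h):
    # Convert the relative position back to pixels on the display.
    rx, ry = rel_pos
    return round(rx * display_w), round(ry * display_h)

# Writing in the center of an A4-like sheet (210 x 297 mm) lands the
# second display data in the center of the first display data.
rel = target_relative_position(105, 148.5, 210, 297)
```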
S1904: and when the third display data is displayed, displaying the second display data and the first display data according to the target relative display position.
In a specific implementation scenario, when the third display data is displayed, the second display data and the first display data are displayed according to the target relative display position. For example, if the acquired target relative display position is the center of the first display data, the second display data is displayed in the center of the first display data.
In other implementation scenarios, the second display data and the first display data may overlap when displayed at the target relative display position, which can obstruct the user's view; in that case, the first display data and/or the second display data can be automatically scaled down appropriately so that the display space is fully used.
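The automatic scaling mentioned above can be the largest uniform scale that makes the overlapping data fit the free display area; a one-function sketch (the patent does not specify the scaling rule):

```python
def fit_scale(content_w, content_h, free_w, free_h):
    # Largest uniform scale (capped at 1.0: never enlarge) that fits the
    # content into the free region of the display.
    return min(free_w / content_w, free_h / content_h, 1.0)
```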
As can be seen from the above description, in this embodiment, by acquiring the target relative display position of the second display data and the first display data and displaying them according to it, the conference record written by the user better matches the user's actual needs, improving the efficiency with which the user collates materials after the meeting.
Referring to fig. 20, fig. 20 is a schematic flowchart illustrating a method for recording meeting contents according to a third embodiment of the present application. The method for recording the conference content comprises the following steps:
S2001: and receiving first display data sent by the conference initiating terminal.
S2002: and acquiring the content written on paper by the user, and generating second display data according to the content.
S2003: and generating third display data according to the first display data and the second display data, and displaying and storing the third display data.
In a specific implementation scenario, steps S2001-S2003 are substantially the same as steps S1801-S1803 in the first embodiment of the method for recording conference content provided in this application, and details thereof are not repeated here.
S2004: and receiving a playback instruction input by a user.
In a specific implementation scenario, the third display data is saved. When the user needs to consult the third display data again, a playback instruction may be input. In this embodiment, the generation time of the third display data is recorded when it is stored in step S2003, so the playback instruction may include a first time range to be displayed. For example, if the user needs to organize the content of a department meeting held at 9:00 am on January 15, the first time range entered is 9:00 am on January 15.
In another implementation scenario, the input first time range may further include a termination time. For example, when collating the content of the department meeting held at 9:00 am on January 15, if the meeting lasted 30 minutes, the first time range entered is 9:00-9:30 am on January 15.
S2005: and reading and displaying the third display data in response to the playback instruction.
In a specific implementation scenario, when the electronic conference terminal receives a playback instruction input by the user, it reads and displays the third display data according to the playback instruction.
For example, if the playback instruction includes a first time range to be displayed, the electronic conference terminal finds the third display data whose generation time falls within the first time range, based on the generation time recorded when the third display data was saved, and displays it.
In yet another specific implementation scenario, the playback instruction may include another identification of the third display data, which may be an identification added to the third display data when or after it is generated. The identification may be named after the meeting content, such as a newcomer training meeting or a department monthly summary meeting. After the electronic conference terminal receives the playback instruction, it reads the identification indicated by the instruction, finds the third display data carrying the corresponding identification, and displays it.
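The two retrieval paths described above, by generation time and by identification, can be sketched as follows. This is an illustrative data structure only, assuming in-memory storage; the class and method names (`DisplayDataStore`, `find_by_time`, `find_by_label`) are hypothetical and not from the patent.

```python
from datetime import datetime
from typing import List, Optional

class DisplayDataStore:
    """Illustrative store for third display data, keyed by generation time and label."""

    def __init__(self) -> None:
        self._records = []  # list of (generated_at, label, payload) tuples

    def save(self, generated_at: datetime, payload: bytes,
             label: Optional[str] = None) -> None:
        """Record the payload together with its generation time and optional label."""
        self._records.append((generated_at, label, payload))

    def find_by_time(self, start: datetime,
                     end: Optional[datetime] = None) -> List[bytes]:
        """First time range lookup: a start time alone, or a start/end pair."""
        return [p for t, _, p in self._records
                if t >= start and (end is None or t <= end)]

    def find_by_label(self, label: str) -> List[bytes]:
        """Identification-based lookup, e.g. 'department monthly summary meeting'."""
        return [p for _, l, p in self._records if l == label]
```

For example, notes saved at 9:00 am on January 15 under a meeting name can be retrieved either by the 9:00-9:30 time range or by that name.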
As can be seen from the above description, in this embodiment, by recording the generation time of the third display data, the corresponding third display data can be found and displayed according to the first time range in the playback instruction input by the user, which effectively improves the efficiency of organizing conference materials after the meeting.
Referring to fig. 21, fig. 21 is a schematic flowchart of a fourth embodiment of a method for recording meeting contents provided by the present application. The method for recording the conference content comprises the following steps:
S2101: and receiving first display data sent by the conference initiating terminal.
S2102: and acquiring the content written on paper by the user, and generating second display data according to the content.
In a specific implementation scenario, steps S2101 to S2102 are substantially the same as steps S1801 to S1802 in the first embodiment of the method for recording conference content provided by the present application, and details are not repeated here.
S2103: collecting environmental sounds when a user writes on paper to obtain audio data, and storing the audio data.
In a specific implementation scenario, when a user takes meeting notes, time constraints may prevent recording all content in detail, so the user may write down only a few keywords. The environmental sound while the user writes the meeting notes is collected in order to record the ongoing discussion or the participants' speeches. Audio data is generated from the collected sound and saved; that is, the speech content of the discussion corresponding to the user's meeting notes is stored. When the user later collates the meeting materials, listening to the audio data again recovers the specific details, so more detailed and reliable meeting materials can be produced.
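Saving captured environmental audio together with its generation time, so that later playback instructions can match against it, can be sketched as follows. This is a minimal sketch assuming mono 16-bit PCM frames from a microphone; the function name `save_audio_segment` and the timestamp-in-filename layout are illustrative choices, not taken from the patent.

```python
import wave
from datetime import datetime

def save_audio_segment(frames: bytes, started_at: datetime,
                       sample_rate: int = 16000, out_dir: str = ".") -> str:
    """Write captured microphone frames to a WAV file named by its start time.

    The timestamp embedded in the filename acts as the 'generation time' that
    later playback instructions are matched against.
    """
    path = f"{out_dir}/meeting_{started_at:%Y%m%d_%H%M%S}.wav"
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)           # mono environmental audio
        wf.setsampwidth(2)           # 16-bit PCM samples
        wf.setframerate(sample_rate)
        wf.writeframes(frames)
    return path
```

A playback handler could then scan the output directory and select files whose embedded timestamp falls within the requested second time range.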
S2104: and generating third display data according to the first display data and the second display data, and displaying and storing the third display data.
In a specific implementation scenario, step S2104 is substantially the same as step S1803 in the first embodiment of the method for recording conference content provided in this application, and details are not repeated here.
S2105: and receiving a playback instruction input by a user.
In a specific implementation scenario, the playback instruction includes a playback instruction for the third display data and a playback instruction for the audio data. The implementation scenario of the playback instruction including the playback instruction for the third display data is substantially the same as that described in step S2004 in the third embodiment of the conference content recording method provided in this application, and details are not repeated here.
In this implementation scenario, since the electronic conference terminal also acquires and stores the audio data while the user writes, the playback instruction may also target the audio data. When the audio data is saved, its generation time may be saved together with it. The playback instruction may therefore include a second time range of the audio data to be played. For example, if the user needs to collate a speech given in the department meeting at 9:00 am on January 15, the second time range entered is 9:00 am on January 15. In another implementation scenario, the input second time range may further include a termination time; for example, if that department meeting lasted 30 minutes, the second time range entered is 9:00-9:30 am on January 15.
In other implementations, the playback instruction may further specify whether to play back the third display data, the audio data, or both. Since the first time range and the second time range may not coincide, each may carry its own identification so that the electronic conference terminal can distinguish which is the first time range and which is the second time range.
S2106: and responding to the playback instruction, reading and displaying the third display data and/or playing the audio data.
In a specific implementation scenario, in response to the playback instruction, an implementation scenario in which the third display data is read and displayed is substantially the same as that described in step S2005 in the third embodiment of the method for recording conference content provided in this application, and details are not repeated here.
When the playback instruction includes a second time range to be played, the electronic conference terminal finds the audio data whose generation time falls within the second time range, based on the generation time recorded when the audio data was saved, and plays it.
In this implementation scenario, after the playback instruction is received, it is first determined whether the item to be played back is the third display data, the audio data, or both. The items to be played back and their respective time ranges may be obtained by reading the identifications of the first time range and the second time range.
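The dispatch logic above, distinguishing a tagged first time range (display data) from a tagged second time range (audio data) and handling either or both, can be sketched as follows. The names `PlaybackInstruction` and `dispatch` are hypothetical, and the store/player objects are assumed to expose a `find_by_time(start, end)` lookup; none of this is specified by the patent.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

# A time range is a start time and an optional termination time.
TimeRange = Tuple[datetime, Optional[datetime]]

@dataclass
class PlaybackInstruction:
    """Illustrative parsed form of a user's playback instruction.

    display_range carries the first time range (third display data) and
    audio_range carries the second (audio data), so the terminal can tell
    the two apart even when they do not coincide.
    """
    display_range: Optional[TimeRange] = None
    audio_range: Optional[TimeRange] = None

def dispatch(instr: PlaybackInstruction, store, player) -> List[tuple]:
    """Play back display data, audio data, or both, as the instruction specifies."""
    actions = []
    if instr.display_range is not None:
        start, end = instr.display_range
        actions.append(("show", store.find_by_time(start, end)))
    if instr.audio_range is not None:
        start, end = instr.audio_range
        actions.append(("play", player.find_by_time(start, end)))
    return actions
```

An instruction carrying only a display range yields only a "show" action; one carrying both ranges yields a "show" followed by a "play".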
As can be seen from the above description, in this embodiment, audio data is generated by collecting the environmental sound while the user writes, stored, and played back when a playback instruction input by the user is received. This helps the user record the meeting more completely, making it easier to collate the meeting materials afterwards.
Referring to fig. 22, fig. 22 is a schematic structural diagram of an embodiment of an electronic education terminal provided in the present application, including: a processor 221, a memory 222, a communication circuit 223, a display 224 and a sensing circuit 225. The processor 221 is coupled to the memory 222, the communication circuit 223, the display 224 and the sensing circuit 225, and when operating controls itself and the memory 222, the communication circuit 223, the display 224 and the sensing circuit 225 to implement the steps in the education method of any of the embodiments of fig. 8-10 and their associated text, or the steps in the electronic education method of any of the embodiments of fig. 11-14 and their associated text.
According to the above description, the electronic education terminal generates the first character data by acquiring the content written by the user and sends it to the second terminal, so that the second terminal generates and returns education data according to the first character data; the education data is then received and prompted to the user. This makes communication between teacher and students more convenient, allows the teacher to send different correction content to different students, saves the time of collecting and handing back homework, and can effectively improve the education effect. The terminal also receives the first display data sent by the education terminal, such as a classroom lecture, acquires the content written on paper by the user, such as classroom notes, and generates and displays the third display data, such as the lecture annotated with the classroom notes, according to the first display data and the second display data, helping the user learn and review better and thereby improving the user's learning effect.
Referring to fig. 23, fig. 23 is a schematic structural diagram of an embodiment of an electronic conference terminal provided in the present application. The electronic conference terminal includes: a processor 231, a memory 232, a communication circuit 233, a display 234 and a sensing circuit 235. The processor 231 is coupled to the memory 232, the communication circuit 233, the display 234 and the sensing circuit 235, and when operating controls itself and the memory 232, the communication circuit 233, the display 234 and the sensing circuit 235 to implement the steps in the conference record sharing method of any of the embodiments of fig. 15-17 and their associated text, or the steps in the conference content recording method of any of the embodiments of fig. 18-21 and their associated text.
As can be seen from the above description, the electronic conference terminal in this embodiment generates the first text data according to the content written on paper by the first user, displays the first text data and sends it to the second terminal, receives the conference recording data generated by the second terminal according to the first text data, and prompts the conference recording data to the user. During a conference, content written on paper by a user can thus be quickly shared with the other participants, so that ideas and opinions raised in the conference are shared efficiently, which improves communication during the conference and helps it proceed smoothly. The terminal also receives the first display data sent by the conference initiating terminal, such as pre-prepared conference materials and slides, acquires the content written on paper by the user, such as the conference record, and generates and displays the third display data according to the first display data and the second display data, such as slides supplemented with the conference discussion results. This makes it easier to organize the conference materials afterwards and to provide more detailed and clear references when conveying the conference content, improving conference efficiency.
Referring to fig. 24, fig. 24 is a schematic structural diagram of an embodiment of a device with storage function according to the present application, where the device with storage function 240 stores program instructions 241, and the program instructions 241 can be executed to implement the education method according to any one of the embodiments described in fig. 8 to 10 and the associated text, or to implement the electronic education method according to any one of the embodiments described in fig. 11 to 14 and the associated text, or to implement the meeting record sharing method according to any one of the embodiments described in fig. 15 to 17 and the associated text, or to implement the meeting content recording method according to any one of the embodiments described in fig. 18 to 21 and the associated text.
The device 240 with a storage function may be a medium that can store the program instructions 241, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or it may be a server that stores the program instructions 241; the server may send the stored program instructions 241 to other devices for execution, or may execute the stored program instructions 241 itself.
As can be seen from the above description, the device with a storage function in this embodiment can be used to facilitate communication between a teacher and a student, and can effectively improve an educational effect. And the method is also used for helping the user to learn and review better, thereby improving the learning effect of the user. And the conference sharing method is also used for realizing efficient mutual sharing of originality or opinion in the conference, improving the communication efficiency in the conference and being beneficial to smooth proceeding of the conference. The conference system is also used for providing more detailed and clear reference in the process of subsequently arranging conference materials and transmitting conference contents, so that the conference efficiency is improved.
Different from the prior art, the notebook in this application can make communication between the teacher and the student more convenient and can effectively improve the education effect. It can also help the user to learn and review better, thereby improving the user's learning effect. It is further used to share ideas and opinions efficiently during a conference, improving communication efficiency and helping the conference proceed smoothly, and to provide more detailed and clear references when subsequently organizing conference materials and conveying conference content, thereby improving conference efficiency.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method of conference record sharing, comprising:
the method comprises the steps that a first terminal obtains content written on paper by a first user, first character data are generated, and the first character data are displayed;
sending the first text data to at least one second terminal, so that the at least one second terminal displays the first text data;
Receiving conference recording data sent by at least one second terminal, wherein the conference recording data is generated according to the first character data;
and prompting the conference recording data to a user.
2. The method of claim 1, wherein the meeting record data comprises second textual data and first voice data;
the prompting of the meeting record data to the user comprises:
And displaying the second text data and/or playing the first voice data, wherein the first text data and the second text data are displayed by adopting different display screens or displayed by adopting the same display screen in a split-screen manner.
3. The method of claim 2,
the second character data is generated by the second terminal acquiring the content written on paper by the user;
the first voice data is acquired by the second terminal from the environmental sound when the user writes the content on the paper.
4. The method of claim 2, wherein said displaying said first textual data comprises:
displaying the first text data in a first format;
The displaying the second text data includes:
displaying the second text data in a second format different from the first format.
5. The method of claim 1, wherein the obtaining content written by the first user on the paper comprises:
recording the environmental sound when a user writes on paper, generating second voice data according to the environmental sound, and storing the second voice data.
6. The method according to claim 5, wherein the displaying the corresponding first text data and/or the second text data according to the reference instruction comprises:
playing the second voice data obtained at the same time of obtaining the first character data; and/or playing the first voice data obtained while the second text data is obtained.
7. The method of claim 5, further comprising:
recording at least one of the first text data, the second text data, the first voice data and the second voice data;
acquiring a reference instruction input by a user, and displaying corresponding first text data and/or second text data according to the reference instruction.
8. The method of claim 1, further comprising:
and projecting and displaying the first text data and/or the second text data.
9. An electronic conference terminal comprising a processor, a memory, a communication circuit, a sensing circuit and a display, said processor being coupled to said memory, said communication circuit, said sensing circuit and said display respectively, said processor being operative to control itself to implement the steps of the method according to any of claims 1-8.
10. An apparatus having a storage function, characterized in that program data is stored thereon, which program data can be executed to implement the steps in the method according to any of claims 1-8.
CN201910335946.6A 2019-04-24 2019-04-24 Conference record sharing method, electronic conference terminal and storage device Pending CN111865871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910335946.6A CN111865871A (en) 2019-04-24 2019-04-24 Conference record sharing method, electronic conference terminal and storage device


Publications (1)

Publication Number Publication Date
CN111865871A true CN111865871A (en) 2020-10-30

Family

ID=72952314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910335946.6A Pending CN111865871A (en) 2019-04-24 2019-04-24 Conference record sharing method, electronic conference terminal and storage device

Country Status (1)

Country Link
CN (1) CN111865871A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020081790A (en) * 2001-04-19 2002-10-30 미래통신 주식회사 A remote education system and method using learning terminal
CN201237781Y (en) * 2008-08-07 2009-05-13 唐桥科技(杭州)有限公司 Write-displaying type computer writing pad
CN103957190A (en) * 2014-04-02 2014-07-30 北京百度网讯科技有限公司 Online education interaction method, client-sides, server and system
CN104054046A (en) * 2013-01-08 2014-09-17 冯林 Writing tablet and teaching system based on trackpad
CN104882033A (en) * 2015-06-19 2015-09-02 山西大学 Interactive electronic plate device and method for effectively using electronic teaching resource
CN105224575A (en) * 2014-06-30 2016-01-06 珠海金山办公软件有限公司 A kind of document display method and device
CN106161654A (en) * 2016-08-30 2016-11-23 孟玲 A kind of cloud educational system
CN106327929A (en) * 2016-08-23 2017-01-11 北京汉博信息技术有限公司 Visualized data control method and system for informatization
CN107093340A (en) * 2017-06-22 2017-08-25 宁波宁大教育设备有限公司 The intelligent rendering method of course of solving questions
CN109032999A (en) * 2018-08-15 2018-12-18 掌阅科技股份有限公司 Take down notes display methods, electronic equipment and computer storage medium
CN109085965A (en) * 2018-07-19 2018-12-25 掌阅科技股份有限公司 Take down notes generation method, electronic equipment and computer storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201030