CN113126865A - Note generation method and device in video learning process, electronic equipment and medium - Google Patents
- Publication number: CN113126865A (application number CN202110444071.0A)
- Authority
- CN
- China
- Prior art keywords
- note
- type
- preset
- acquiring
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
Abstract
The disclosure provides a note generation method and apparatus for use in a video learning process, an electronic device, a computer-readable storage medium, and a computer program product, relating to the field of computers and in particular to video processing technology. The implementation scheme is as follows: acquiring a note generation request from a user during video learning; determining a preset note type corresponding to the note generation request; acquiring corresponding note content based on the note generation request; and storing the note content in a note file in association with the preset note type.
Description
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method and an apparatus for generating a note in a video learning process, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the rapid spread of the mobile internet and the continued impact of the epidemic, online learning has become one of the main ways for students to acquire knowledge. Unlike classroom teaching, however, online lectures tend to proceed without interruption, and teachers seldom pause to give students time to take notes. As a result, students often miss what the teacher is explaining while writing notes, which reduces the effectiveness of the lesson.
Disclosure of Invention
The present disclosure provides a note generation method, apparatus, electronic device, computer-readable storage medium, and computer program product in a video learning process.
According to an aspect of the present disclosure, a note generation method in a video learning process is provided, including: acquiring a note generation request from a user during video learning; determining a preset note type corresponding to the note generation request; acquiring corresponding note content based on the note generation request; and storing the note content in a note file in association with the preset note type.
According to another aspect of the present disclosure, there is provided a note generation apparatus for use in a video learning process, including: a first acquisition unit configured to acquire a note generation request from a user during video learning; a determination unit configured to determine a preset note type corresponding to the note generation request; a second acquisition unit configured to acquire corresponding note content based on the note generation request; and a storage unit configured to store the note content in a note file in association with the preset note type.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a note generation method according to the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a note generation method according to the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements a note generation method according to the present disclosure.
According to one or more embodiments of the present disclosure, note content can be recorded quickly and conveniently during video learning, and each piece of note content can be stored in association with its corresponding note type, making it easy for the user to review the notes later.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a note generation method in a video learning process, according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram for obtaining corresponding note content in accordance with an embodiment of the present disclosure;
FIG. 4 shows a flow diagram for capturing a video clip through a screen recording in accordance with an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a note generation apparatus in a video learning process, according to an embodiment of the present disclosure; and
FIG. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the note generation method to be performed.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general-purpose computers, special-purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that can be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of difficult management and poor service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information; for example, one or more of the databases 130 may be used to store data such as note content. The databases 130 may reside in various locations. For example, a database used by the server 120 may be local to the server 120, or may be remote from the server 120 and communicate with it via a network-based or dedicated connection. The databases 130 may be of different types. In certain embodiments, a database used by the server 120 may be a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
If students need to take notes during video learning, they usually record the note content in a paper notebook. Such notes are hard to classify and inconvenient to search later, and students may miss the content the teacher is explaining while writing them down, which reduces the effectiveness of the lesson.
Therefore, according to an embodiment of the present disclosure, there is provided a note generating method 200 in a video learning process, as shown in fig. 2, including: acquiring a note generation request of a user in a video learning process (step 210); determining a preset note type corresponding to the note generation request (step 220); acquiring corresponding note content based on the note generation request (step 230); and storing the note content in a note file in association with the preset note type (step 240).
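As a rough illustration, the four steps of method 200 can be sketched as follows. All names here (`NoteRequest`, `NoteStore`, `handle`) are hypothetical and not taken from the patent; this is a minimal sketch of the flow, not the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class NoteRequest:
    """A user's note-generation request raised during video playback (step 210)."""
    note_type: str    # preset type, e.g. "emphasis" or "question" (step 220)
    timestamp: float  # playback position in seconds

@dataclass
class NoteStore:
    """Stores each piece of note content in association with its preset type."""
    entries: list = field(default_factory=list)

    def handle(self, request: NoteRequest, content: str) -> None:
        # Steps 230-240: the content has been acquired for this request;
        # store it together with the type so it can be filtered later.
        self.entries.append({
            "type": request.note_type,
            "time": request.timestamp,
            "content": content,
        })

store = NoteStore()
store.handle(NoteRequest("emphasis", 125.0), "clip_00125.mp4")
store.handle(NoteRequest("question", 300.5), "frame_00300.png")
```

Because type and content are saved together, a later viewer can filter entries by type without opening each note.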
According to the embodiments of the present disclosure, note content can be recorded quickly and conveniently during video learning, and each piece of note content can be stored in association with its corresponding note type, making it convenient for the user to review.
In step 210, a note generation request of a user in a video learning process is obtained.
In some examples, the video learning may be live learning or recorded learning, such as a web lesson, and the like, without limitation. The user can learn through video learning equipment such as a tablet, a mobile phone, a desktop computer, a portable computer and the like.
In some examples, the note generation request is a request issued by a user during video learning in order to record a note. It may be initiated by the user clicking a designated key or screen area, by a voice instruction issued during video learning, or in other ways, without limitation.
According to some embodiments, step 210 may comprise: receiving an operation of a user in a predetermined area of a user interface in a video learning process, wherein the predetermined area of the user interface comprises a note option corresponding to a preset note type; and determining a note option corresponding to the operation so as to generate and acquire a note generation request based on the note option.
The predetermined area of the user interface may be an area in the video learning page, or may be another user interface area other than the video learning page, for example, a function bar area, and the like, which is not limited herein.
For example, a note entry may be provided in a sidebar of the video learning page. After the user clicks the note entry, note options corresponding to the preset note types are displayed. The user may then select among the displayed note options to choose the desired note type.
According to some embodiments, step 210 may further comprise: monitoring the editing operation of a user on a video playing interface of a user interface in the video learning process to obtain a coordinate point set corresponding to the editing operation; and generating and acquiring a note generation request based on the coordinate point set.
In some examples, a trajectory of an editing operation of the capacitive pen on the user interface may be obtained through a coordinate point positioning technique of the capacitive screen to generate a corresponding note generation request based on the editing operation. For example, the user may circle the content to be recorded on the video playback interface.
In step 220, a preset note type corresponding to the note generation request is determined.
By acquiring a note generation request obtained based on a user operation, a preset note type selected by the user operation can be determined.
According to some embodiments, the preset note types may include a first type, a second type, and so on. For example, the first type may represent emphasis or difficulty, and the second type may represent a title or a question point. The note types may be configured according to the user's needs and are not limited herein.
According to some embodiments, step 220 may comprise: determining the shape of the area corresponding to the editing operation based on the coordinate point set; and determining a preset note type corresponding to the area shape. The coordinate point set is obtained by the editing operation of the user on the video playing interface in the above embodiment. Different region shapes may represent different note types, e.g., circles may represent emphasis or difficulty, triangles may represent title, etc. Of course, other region shapes and note type correspondences are possible, and are not limited herein.
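One plausible way to map a traced region shape to a note type is a circularity heuristic: for a closed trace, 4πA/P² is 1.0 for a perfect circle but only about 0.6 for an equilateral triangle. The function name, the 0.8 threshold, and the type labels below are illustrative assumptions; the patent does not specify a particular shape-recognition algorithm.

```python
import math

def classify_shape(points):
    """Classify a closed polygonal trace by circularity (hypothetical mapping)."""
    n = len(points)
    # Shoelace formula for the enclosed area of the trace.
    area = abs(sum(
        points[i][0] * points[(i + 1) % n][1]
        - points[(i + 1) % n][0] * points[i][1]
        for i in range(n)
    )) / 2.0
    perimeter = sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))
    circularity = 4.0 * math.pi * area / (perimeter ** 2)
    # ~1.0 for a circle-like trace, ~0.60 for a triangle-like trace.
    return "emphasis" if circularity > 0.8 else "question"
```

A denser or noisier trace would need smoothing first; in practice a recognizer would also reject open traces.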
At step 230, corresponding note content is obtained based on the note generation request.
In some embodiments, as shown in fig. 3, step 230 may include: responding to the preset note type as a first type, and acquiring a video clip as note content through a screen recording (step 310); and responding to the preset note type being the second type, acquiring a video screenshot through the screenshot to serve as note content (step 320).
For example, if the user selects emphasis or difficulty through the note entry in the sidebar of the video learning page, the screen recording function of the device may be invoked automatically, so that the video picture and sound are recorded and saved. If the user instead selects to mark a title through the note entry, the screenshot function of the device may be invoked automatically, so that the current video frame is captured and saved.
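The type-based dispatch of steps 310 and 320 can be sketched as below. The type labels and the `recorder`/`capturer` callables are hypothetical stand-ins for the device's actual screen-recording and screenshot functions, which are platform-specific.

```python
def acquire_content(note_type, recorder, capturer):
    """Step 230: pick the acquisition path based on the preset note type."""
    if note_type == "emphasis":   # first type: emphasis / difficulty -> record a clip
        return recorder()
    if note_type == "question":   # second type: title / question point -> capture a frame
        return capturer()
    raise ValueError(f"unknown note type: {note_type!r}")
```

On a real device, `recorder` and `capturer` would wrap the OS screen-recording and screenshot APIs rather than return file names directly.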
On many devices, a screen capture is commonly triggered through a key combination, such as power key + volume-up key or power key + volume-down key. This is awkward to perform during video learning and may cause the user to miss the corresponding video content. Moreover, such device screenshots are stored in the general picture library alongside images obtained in other ways, such as photographs, which does not support classified storage of note content and cannot attach corresponding note tags, making the notes inconvenient for the user to review.
According to some embodiments, as shown in fig. 4, step 310 may comprise: acquiring a start time of the screen recording based on the user interface, and starting the screen recording at the start time (step 410); acquiring the end time of the screen recording based on the user interface, and ending the screen recording at the end time (step 420); and acquiring a video clip between the start time and the end time as the note content (step 430).
After the note content is acquired, the note content and its corresponding note type can be stored in the note file in association with each other.
In some examples, upon determining that the preset note type is the first type, the user interface may be invoked to display a start-recording option, and the current time is taken as the recording start time when the user clicks the option. After recording begins, a stop-recording option is displayed on the user interface, so that recording stops when the user clicks it, and the recorded video stream segment is saved as the note content.
In some examples, the screen recording start time may also be adjusted. For example, when the user clicks the start-recording option, the start time may be shifted earlier than the current time by a predetermined period, so that the resulting video stream segment contains more context and the user does not miss important knowledge points.
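The backward shift of the recording start time might look like this. The 10-second lookback is an assumed value; the patent says only "a predetermined time period".

```python
LOOKBACK_SECONDS = 10.0  # assumption; the patent does not fix a value

def recording_window(click_time, end_time, lookback=LOOKBACK_SECONDS):
    """Shift the start earlier than the click so the clip keeps preceding context."""
    start = max(0.0, click_time - lookback)  # clamp so we never start before 0
    return start, end_time
```

Clamping at zero handles a click near the beginning of the video.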
According to some embodiments, an image of an area formed based on a set of coordinate points obtained by an editing operation by a user may be acquired as note content. For example, after an editing operation of a user on a video playing interface of a user interface is monitored, a coordinate point set corresponding to the operation is acquired through a coordinate point identification technology. And then determining the region shape corresponding to the coordinate point set, so as to obtain the corresponding note type according to the region shape. And finally, acquiring an image of an area formed by the coordinate point set to serve as note content, and storing the note content and a corresponding note type in an associated manner.
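Extracting the image of the traced region typically starts from the bounding box of the coordinate point set; a minimal sketch follows (the actual pixel cropping is device-specific and omitted).

```python
def bounding_box(points):
    """Axis-aligned bounding box (left, top, right, bottom) of a traced region."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

# A device implementation would then crop the current video frame to this
# box, e.g. frame.crop(bounding_box(trace)) with a PIL-style API — that call
# is a hypothetical illustration, not part of the patent.
```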
Storing the note content in association with its corresponding note type makes it easy for the user to review the notes later and saves time, since the user no longer needs to open each note one by one to determine whether it is the content they want to view.
According to some embodiments, the note content can be stored in the note file in a time sequence, so that the user can conveniently check the note content according to the learning progress.
According to some embodiments, the method 200 may further include: viewing the note content in the note file filtered by preset note type. For example, the user may filter by note type to view notes by category, such as viewing only emphasis notes, difficulty notes, or title notes. The user may also view any one or more of the plurality of preset note types, without limitation.
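Filtering stored notes by preset type, with notes kept in playback-time order as described above, could be sketched as follows. The entry fields are the same hypothetical dictionary keys used for storage, not names from the patent.

```python
def filter_notes(entries, wanted_types):
    """Return only notes whose type was selected, sorted by playback time."""
    return [e for e in sorted(entries, key=lambda e: e["time"])
            if e["type"] in wanted_types]

notes = [
    {"type": "question", "time": 300.5, "content": "frame_00300.png"},
    {"type": "emphasis", "time": 125.0, "content": "clip_00125.mp4"},
]
emphasis_only = filter_notes(notes, {"emphasis"})
```

Passing several types in `wanted_types` views multiple categories at once, matching the "one or more of the plurality of preset note types" behavior.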
There is also provided a note generation apparatus 500 in a video learning process according to an embodiment of the present disclosure, as shown in fig. 5, including: a first obtaining unit 510 configured to obtain a note generation request of a user in a video learning process; a determining unit 520 configured to determine a preset note type corresponding to the note generation request; a second obtaining unit 530 configured to obtain corresponding note content based on the note generation request; and a storage unit 540 configured to store the note content in association with the preset note type in a note file.
According to some embodiments, the first obtaining unit 510 may include: the unit is used for receiving the operation of a user in a preset area of a user interface in the video learning process, wherein the preset area of the user interface comprises a note option corresponding to the preset note type; and a unit for determining a note option corresponding to the operation to generate and acquire a note generation request based on the note option.
According to some embodiments, the preset note type includes at least one of the group consisting of: a first type and a second type. The second acquisition unit 530 may include: a unit configured to, in response to the preset note type being the first type, acquire a video clip through screen recording as the note content; and a unit configured to, in response to the preset note type being the second type, acquire a video screenshot as the note content.
According to some embodiments, the unit for acquiring a video clip through screen recording as the note content includes: a unit configured to acquire the start time of the screen recording based on the user interface and start the screen recording at the start time; a unit configured to acquire the end time of the screen recording based on the user interface and end the screen recording at the end time; and a unit configured to acquire the video clip between the start time and the end time as the note content.
Here, the operations of the above units 510 to 540 of the note generating apparatus 500 in the video learning process are similar to the operations of the steps 210 to 240 described above, and are not described herein again.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the note generation method in the video learning process described above.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the note generating method in the video learning process described above.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, a computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the note generating method in the video learning process described above.
Referring to FIG. 6, a block diagram of an electronic device 600 will now be described. The electronic device 600, which may be a server or a client of the present disclosure, is an example of a hardware device that may be applied to aspects of the present disclosure. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the device 600; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information, and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 608 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by the computing unit 601, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method 200 in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package that executes partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as granted and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure, and various elements in the embodiments or examples may be combined in various ways. It is to be appreciated that, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.
Claims (16)
1. A note generation method in a video learning process comprises the following steps:
acquiring a note generation request of a user in a video learning process;
determining a preset note type corresponding to the note generation request;
acquiring corresponding note content based on the note generation request; and
storing the note content and the preset note type in a note file in association with each other.
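The four steps of claim 1 can be sketched as a small dispatch routine (a hypothetical illustration only; the `handlers` mapping and the dictionary-based note file are stand-ins for the claimed units, not the patent's implementation):

```python
def generate_note(request, handlers, note_file):
    """Sketch of claim 1: determine the preset note type from the
    note generation request, acquire the corresponding content, and
    store the content and type in the note file in association."""
    note_type = request["note_type"]             # determine preset note type
    content = handlers[note_type](request)       # acquire corresponding note content
    entry = {"type": note_type, "content": content}
    note_file.append(entry)                      # store content and type together
    return entry

# Hypothetical usage: a screenshot-type handler.
notes = []
generate_note({"note_type": "screenshot"},
              {"screenshot": lambda req: "captured-image"},
              notes)
```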
2. The method of claim 1, wherein obtaining a note generation request of a user during a video learning process comprises:
receiving an operation of a user in a predetermined area of a user interface in a video learning process, wherein the predetermined area of the user interface comprises a note option corresponding to the preset note type; and
determining a note option corresponding to the operation, so as to generate and acquire the note generation request based on the note option.
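Mapping the user's operation in the predetermined area to a note option amounts to hit-testing the operation's position against the option regions. A minimal sketch, assuming each option occupies an axis-aligned rectangle (the option names and coordinates are illustrative, not from the patent):

```python
def option_for_click(x, y, options):
    """options: list of (name, (x0, y0, x1, y1)) rectangles making up
    the predetermined area of the user interface.  Return the note
    option whose rectangle contains the click, or None."""
    for name, (x0, y0, x1, y1) in options:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical layout: a "record" button and a "screenshot" button.
opts = [("record", (0, 0, 50, 20)), ("screenshot", (60, 0, 110, 20))]
```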
3. The method of claim 1 or 2, wherein the preset note type comprises at least one of the group consisting of: a first type and a second type, and
wherein acquiring the corresponding note content based on the note generation request comprises:
in response to the preset note type being the first type, acquiring a video clip as the note content via screen recording; and
in response to the preset note type being the second type, acquiring a video screenshot as the note content via a screen capture.
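The type-based branching in claim 3 reduces to a simple dispatch on the preset note type (a sketch; `recorder` and `capturer` are hypothetical callables standing in for the screen-recording and screen-capture mechanisms):

```python
def acquire_content(note_type, recorder, capturer):
    """First type -> a video clip from screen recording;
    second type -> a single video screenshot."""
    if note_type == "first":
        return recorder()
    if note_type == "second":
        return capturer()
    raise ValueError(f"unknown preset note type: {note_type}")
```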
4. The method of claim 3, wherein acquiring the video clip as the note content through the screen recording comprises:
acquiring a start time of the screen recording based on a user interface, and starting the screen recording at the start time;
acquiring an end time of the screen recording based on the user interface, and ending the screen recording at the end time; and
acquiring a video clip between the start time and the end time as the note content.
5. The method of any one of claims 1-4, wherein obtaining a note generation request of a user during a video learning process comprises:
monitoring an editing operation of the user on a video playing interface of a user interface in the video learning process to obtain a coordinate point set corresponding to the editing operation; and
generating and acquiring the note generation request based on the coordinate point set.
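Collecting the coordinate point set from a monitored editing operation can be sketched as accumulating pointer events between pen-down and pen-up (the event-tuple format is an assumption for illustration; real UI toolkits deliver events through callbacks):

```python
def collect_points(events):
    """events: ("down" | "move" | "up", x, y) tuples observed on the
    video playing interface.  Return the coordinate point set traced
    by one editing operation, from pen-down to pen-up."""
    points, drawing = [], False
    for kind, x, y in events:
        if kind == "down":
            drawing, points = True, [(x, y)]
        elif kind == "move" and drawing:
            points.append((x, y))
        elif kind == "up" and drawing:
            drawing = False
    return points
```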
6. The method of claim 5, wherein determining the preset note type corresponding to the note generation request comprises:
determining an area shape corresponding to the editing operation based on the coordinate point set; and
determining the preset note type corresponding to the area shape.
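Claim 6's shape-to-type mapping could be approximated by a crude geometric heuristic (entirely hypothetical: the patent does not specify how shapes are classified, and the closure threshold and type names below are illustrative):

```python
import math

def shape_to_note_type(points, close_tol=10.0):
    """If the trace ends near where it started, treat it as a closed
    region (e.g. a circled area); otherwise as an open stroke (e.g. an
    underline).  Each shape maps to a preset note type."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    if math.hypot(xn - x0, yn - y0) <= close_tol:
        return "region"
    return "underline"
```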
7. The method of claim 5 or 6, wherein obtaining respective note content based on the note generation request comprises:
acquiring an image of an area formed by the coordinate point set as the note content.
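Extracting the image of the area formed by the coordinate point set can be sketched as cropping the points' axis-aligned bounding box out of the current frame (a toy 2-D list stands in for a real frame buffer; a production system would crop a bitmap instead):

```python
def crop_region(frame, points):
    """frame: 2-D list indexed as frame[y][x]; points: coordinate
    point set from the editing operation.  Return the bounding-box
    crop of the points as the note image."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    return [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]
```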
8. The method of any of claims 1-7, wherein the note content is stored in the note file in chronological order.
10. The method of claim 1, further comprising: viewing the note content in the note file as classified by the preset note type.
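Claims 8 and 9 together describe a note file kept in chronological order and viewable by type. A minimal sketch (hypothetical dictionary records with assumed `time` and `type` keys):

```python
def store_note(note_file, note):
    """Append a note and keep the file in chronological order (claim 8)."""
    note_file.append(note)
    note_file.sort(key=lambda n: n["time"])

def view_by_type(note_file, note_type):
    """Return only the notes of one preset note type (claim 9)."""
    return [n for n in note_file if n["type"] == note_type]
```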
10. A note generation apparatus in a video learning process, comprising:
the video learning system comprises a first acquisition unit, a second acquisition unit and a display unit, wherein the first acquisition unit is configured to acquire a note generation request of a user in a video learning process;
the determining unit is configured to determine a preset note type corresponding to the note generating request;
a second obtaining unit configured to obtain corresponding note content based on the note generation request; and
the storage unit is configured to store the note content and the preset note type in a note file in an associated mode.
11. The apparatus of claim 10, wherein the first obtaining unit comprises:
a unit for receiving an operation of a user in a predetermined area of a user interface in a video learning process, wherein the predetermined area of the user interface comprises a note option corresponding to the preset note type; and
a unit for determining a note option corresponding to the operation, so as to generate and acquire the note generation request based on the note option.
12. The apparatus of claim 10 or 11, wherein the preset note type comprises at least one of the group consisting of: a first type and a second type, and
wherein the second obtaining unit comprises:
a unit for acquiring a video clip as the note content via screen recording in response to the preset note type being the first type; and
a unit for acquiring a video screenshot as the note content via a screen capture in response to the preset note type being the second type.
13. The apparatus of claim 12, wherein the unit for acquiring the video clip as the note content via screen recording comprises:
a unit for acquiring a start time of the screen recording based on a user interface and starting the screen recording at the start time;
a unit for acquiring an end time of the screen recording based on the user interface and ending the screen recording at the end time; and
a unit for acquiring the video clip between the start time and the end time as the note content.
14. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
15. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
16. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110444071.0A CN113126865B (en) | 2021-04-23 | 2021-04-23 | Note generation method and device in video learning process, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113126865A true CN113126865A (en) | 2021-07-16 |
CN113126865B CN113126865B (en) | 2024-05-17 |
Family
ID=76779688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110444071.0A Active CN113126865B (en) | 2021-04-23 | 2021-04-23 | Note generation method and device in video learning process, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113126865B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103607457A (en) * | 2013-11-20 | 2014-02-26 | 广州华多网络科技有限公司 | Note processing method, apparatus, terminal, server and system |
KR101390968B1 (en) * | 2013-10-22 | 2014-05-02 | (주)나라소프트 | Smart education system |
CN104346963A (en) * | 2014-10-23 | 2015-02-11 | 江苏黄金屋教育咨询有限公司 | Multimedia audio and video-based digital note-taking system for student |
CN107066619A (en) * | 2017-05-10 | 2017-08-18 | 广州视源电子科技股份有限公司 | User's notes generation method, device and terminal based on multimedia resource |
CN109166373A (en) * | 2018-09-12 | 2019-01-08 | 深圳点猫科技有限公司 | It is a kind of for educating the content of courses store method and system of operating system |
CN110446097A (en) * | 2019-08-26 | 2019-11-12 | 维沃移动通信有限公司 | Record screen method and mobile terminal |
CN111290688A (en) * | 2018-12-06 | 2020-06-16 | 中兴通讯股份有限公司 | Multimedia note taking method, terminal and computer readable storage medium |
CN111539188A (en) * | 2020-04-23 | 2020-08-14 | 掌阅科技股份有限公司 | Note generation method, computing device and computer storage medium |
CN111556371A (en) * | 2020-05-20 | 2020-08-18 | 维沃移动通信有限公司 | Note recording method and electronic equipment |
CN112087656A (en) * | 2020-09-08 | 2020-12-15 | 远光软件股份有限公司 | Online note generation method and device and electronic equipment |
Non-Patent Citations (2)
Title |
---|
"IPv6 Makes Online Learning Fully Interactive" ("IPv6让在线学习充分互动"), 《中国教育网络》 (China Education Network), no. 09, p. 44 *
"Three Clever Uses of OneNote 2003" ("OneNote 2003妙用三则"), 《计算机与网络》 (Computer & Network), no. 23, p. 16 *
Also Published As
Publication number | Publication date |
---|---|
CN113126865B (en) | 2024-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112749758A (en) | Image processing method, neural network training method, device, equipment and medium | |
CN112836072A (en) | Information display method and device, electronic equipment and medium | |
WO2024036899A1 (en) | Information interaction method and apparatus, device and medium | |
CN113256583A (en) | Image quality detection method and apparatus, computer device, and medium | |
CN113824899B (en) | Video processing method, video processing device, electronic equipment and medium | |
CN113723305A (en) | Image and video detection method, device, electronic equipment and medium | |
CN114860995B (en) | Video script generation method and device, electronic equipment and medium | |
CN116152607A (en) | Target detection method, method and device for training target detection model | |
CN114510308B (en) | Method, device, equipment and medium for storing application page by mobile terminal | |
CN116361547A (en) | Information display method, device, equipment and medium | |
CN113126865B (en) | Note generation method and device in video learning process, electronic equipment and medium | |
CN114842476A (en) | Watermark detection method and device and model training method and device | |
CN115359309A (en) | Training method, device, equipment and medium of target detection model | |
CN115050396A (en) | Test method and device, electronic device and medium | |
CN114999449A (en) | Data processing method and device | |
CN114429678A (en) | Model training method and device, electronic device and medium | |
CN113312511A (en) | Method, apparatus, device and computer-readable storage medium for recommending content | |
CN114494797A (en) | Method and apparatus for training image detection model | |
CN113641929A (en) | Page rendering method and device, electronic equipment and computer-readable storage medium | |
CN112579587A (en) | Data cleaning method and device, equipment and storage medium | |
CN113722534B (en) | Video recommendation method and device | |
CN113609370B (en) | Data processing method, device, electronic equipment and storage medium | |
CN115016760B (en) | Data processing method, device, equipment and medium | |
CN115578451B (en) | Image processing method, training method and device of image processing model | |
CN116070711B (en) | Data processing method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||