CN115729424A - Query method, electronic device, and medium therefor - Google Patents

Publication number: CN115729424A
Application number: CN202110985097.6A
Applicant: Huawei Technologies Co Ltd
Inventor: 刘绍存
Other languages: Chinese (zh)
Legal status: Pending
Prior art keywords: knowledge, key, application, retrieval, courseware
Classification: User Interface Of Digital Computer (AREA)

Abstract

The present application is applicable to the technical field of live-streamed teaching and provides a query method, an electronic device, and a medium. The electronic device also generates a knowledge graph according to the correlation among items of key knowledge, and each item of key knowledge in the knowledge graph corresponds to a button, so that when a student downloads and reviews courseware, the student can not only review the key knowledge of each subject through the knowledge graph but can also click the button at an item of key knowledge to query it with one tap, which improves the student's learning efficiency.

Description

Query method, electronic device, and medium therefor
Technical Field
The present application belongs to the technical field of live-streamed teaching, and in particular relates to a query method, an electronic device, and a medium.
Background
With the development of live-streaming technology, more and more traditional in-person classroom courses have moved to online live classes, so teaching is no longer limited by time and place, which brings great convenience to both teachers and students.
However, an online classroom still differs from offline teaching. In offline teaching, students can ask questions at any time when they encounter something they do not understand, whereas the online classroom increases the spatial distance between teachers and students, and students' enthusiasm for asking questions is lower than in an offline classroom. Because the teacher is interrupted by fewer student questions, more content is taught online than offline, and more electronic notes are generated. When there are few electronic notes, students can find the key knowledge taught by the teacher by browsing manually, but when there are many electronic notes, how to quickly query and retrieve key knowledge content becomes a problem to be solved.
Disclosure of Invention
To solve the above technical problem, embodiments of the present application provide a knowledge query method, an electronic device, and a medium. In this way, students can not only quickly query key knowledge through the knowledge graph, but can also retrieve the key knowledge content by clicking the button of the relevant key knowledge, which deepens their understanding of it.
In a first aspect, an embodiment of the present application provides a query method applicable to an electronic device. In the method, a first application window of a first application is displayed on the electronic device, where the first application window includes a first interface and the first interface includes at least one retrieval identifier; a selection operation performed by a user on the retrieval identifier is detected; a second application is invoked, where the second application can be used to search for the retrieval keyword corresponding to the retrieval identifier; and a retrieval result of the second application is displayed.
In some embodiments, the first application includes at least one of an online live-streaming application, a remote conference application, and a document editing application, and the user can select the retrieval identifier through the first application. The first application window of the first application refers to the window in which the first application is located when it is opened, and one or more first interfaces can be displayed in the first application window. Multiple first interfaces can be displayed in the first application window in a manner similar to the arrangement of pages in a web browser; the display manner of the first interfaces is not limited in this application.
A plurality of retrieval identifiers are displayed in the first interface, and each retrieval identifier corresponds to a retrieval keyword. By performing a selection operation on a retrieval identifier, the user causes the electronic device to invoke the second application, retrieve the retrieval keyword corresponding to that retrieval identifier, and display the retrieval result of the second application.
Optionally, the selection operation includes a touch operation or a click operation. A touch operation is a selection performed on the retrieval identifier with a finger or a gesture through the touch screen of the electronic device; for example, the user taps the retrieval identifier with a finger to select it, and the electronic device, in response to the touch operation, invokes the second application to retrieve the retrieval keyword corresponding to the selected retrieval identifier. A click operation is a selection performed on the retrieval identifier through an external device of the electronic device, such as a stylus or a mouse; for example, the user clicks the retrieval identifier with the mouse to select it, and the electronic device, in response to the click operation, invokes the second application to retrieve the retrieval keyword corresponding to the selected retrieval identifier.
In some embodiments, the second application includes a retrieval application, for example a search engine application provided with the electronic device's system, or another application capable of retrieval, which the present application does not limit.
With reference to the first aspect, in a possible implementation manner of the first aspect, the manner in which the electronic device displays the search result of the second application includes displaying the search result in a second application window of the second application, where the first application window and the second application window are displayed independently.
With reference to the first aspect and possible implementation manners of the first aspect, in another possible implementation manner of the first aspect, the electronic device may also display a second interface in the first application window, where the second interface displays an initial search page of the second application, and a search keyword corresponding to the search identifier has been input in a search bar of the initial search page.
Specifically, the location and manner in which the second application is displayed on the electronic device depend on the association or link relationship between the retrieval identifier and the second application. In some embodiments, when the retrieval identifier is linked to the second application, then, after the electronic device detects a selection operation of the user on the retrieval identifier, the electronic device invokes and opens the second application; here the retrieval identifier can be understood as a "switch" with which the user opens and runs the second application. It can be understood that the second application is displayed independently of the first application, that is, the second application is displayed in the second application window and the first application is displayed in the first application window, and the second application displays, in the second application window, the retrieval result of the retrieval keyword corresponding to the retrieval identifier. The advantage of this approach is that, after selecting the retrieval identifier, the user can directly see the retrieval result of the corresponding retrieval keyword, and can drag the second application window to a position alongside the first application window to compare the retrieval result in the second application window with the retrieval keyword in the first application window.
In other embodiments, when the retrieval identifier is linked to an interface web page in the second application, the electronic device may still invoke the second application after detecting the user's selection operation on the retrieval identifier, but the second application is not opened in a separate window, that is, it is not displayed on the electronic device as an independent window. Since in this case the retrieval identifier is linked to a specific web page in the second application, the retrieval result of the second application is displayed only in the first application window of the first application. For example, if the retrieval identifier is linked to the search home page (or initial search interface) of the second application, then, after the electronic device detects the user's selection operation on the retrieval identifier, the search home page of the second application is displayed in the first window of the first application, and, to save the user from entering the retrieval keyword again in the search bar of the home page, the search bar of the displayed home page already contains the retrieval keyword corresponding to the retrieval identifier.
The specific implementation of the link between the retrieval identifier and the second application, and the specific process of passing the retrieval keyword as a search parameter to the second application or to its search home page, are described in the specific embodiments below and are not repeated here.
It can be understood that, based on the idea that the location and manner in which the second application is displayed on the electronic device depend on the association or link relationship between the retrieval identifier and the second application, in some embodiments the retrieval result may also be displayed directly within the first application window of the first application, or only the search home page of the second application may be displayed in the second application window of the second application, which is not limited in this application.
With reference to the first aspect and possible implementations of the first aspect, in a possible implementation of the first aspect, the search identifier includes at least one display form of a button, a link, and a tag, which is not limited in this application.
With reference to the first aspect and possible implementations of the first aspect, in one possible implementation of the first aspect, a first document is displayed on the first interface, and the retrieval identifier is generated based on at least part of content in the first document.
The first document may be a teaching courseware, a meeting document, or the like, and the retrieval identifier is generated according to at least part of the content in the first document. For example, taking the first document as a teaching courseware, retrieval identifiers may be generated for content such as subject titles in the courseware and key knowledge within a subject.
In some embodiments, the electronic device generates the retrieval identifier as follows: when a retrieval operation performed by the user on at least part of the content in the first document is detected, a retrieval identifier for that part of the content is generated, where the retrieval identifier is associated with the initial search page of the second application or with the second application itself. In some embodiments, the user's retrieval operation on at least part of the content in the first document includes marking that part of the content. For example, continuing with the teaching courseware as the first document, the retrieval operation is a mark made by the user on key knowledge in the courseware, for example the user circling the key knowledge.
In the foregoing embodiments, the retrieval identifiers are distributed throughout the first document, so a user who needs a specific retrieval identifier has to browse the first document to find it before querying. Because a retrieval identifier corresponds to a retrieval keyword and is generated from part of the content in the first document, and that part of the content is determined by the user's retrieval operation on the first document, the retrieval identifier reflects content that the user wishes to retrieve, and the retrieval keyword can be understood as a keyword that matches or reflects that content. Therefore, through a knowledge graph of the first document, the user can see which retrieval identifiers the document contains and which parts of the content they correspond to, and can then query the corresponding content directly through the retrieval identifier in the knowledge graph as needed, which improves query efficiency.
As described above, the knowledge graph includes the retrieval identifiers and the association relationships between them, so constructing the knowledge graph requires determining those association relationships and then generating the knowledge graph from the retrieval identifiers and the association relationships between them.
Specifically, in some embodiments, the electronic device generates the knowledge graph as follows: extracting a feature of the part of the content in the first document corresponding to each of the plurality of retrieval identifiers; calculating the similarity between the plurality of features corresponding to the plurality of retrieval identifiers; and generating the knowledge graph according to the similarities among the plurality of features. The features may include image features and text features. Specifically, the electronic device may extract the features of the content corresponding to each retrieval identifier, calculate the similarity between the features, and then establish association relationships between the corresponding retrieval identifiers according to how the similarity compares with a preset similarity threshold. For example, if the similarity between two features A and B is greater than or equal to the preset similarity threshold, the electronic device establishes an association between the retrieval identifier A' corresponding to feature A and the retrieval identifier B' corresponding to feature B, and then generates the knowledge graph from the retrieval identifiers and the association relationships between them.
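As a non-limiting illustration of this step, the sketch below builds the association relationships from pairwise feature similarity, assuming the features have already been extracted as fixed-length vectors; cosine similarity, the 0.7 threshold, and the names KnowledgeNode and buildKnowledgeGraph are assumptions of the sketch rather than anything fixed by the present application.

```kotlin
import kotlin.math.sqrt

// Illustrative sketch only: each retrieval identifier is assumed to come with a
// pre-extracted fixed-length feature vector (e.g. an image or text feature).
data class KnowledgeNode(val identifier: String, val feature: FloatArray)

fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var normA = 0f; var normB = 0f
    for (i in a.indices) { dot += a[i] * b[i]; normA += a[i] * a[i]; normB += b[i] * b[i] }
    return dot / (sqrt(normA) * sqrt(normB) + 1e-8f)
}

// Establishes an association (an edge) between two retrieval identifiers whenever
// the similarity of their features reaches the preset similarity threshold.
fun buildKnowledgeGraph(
    nodes: List<KnowledgeNode>,
    threshold: Float = 0.7f  // assumed value
): Map<String, Set<String>> {
    val edges = nodes.associate { it.identifier to mutableSetOf<String>() }
    for (i in nodes.indices) {
        for (j in i + 1 until nodes.size) {
            if (cosineSimilarity(nodes[i].feature, nodes[j].feature) >= threshold) {
                edges.getValue(nodes[i].identifier).add(nodes[j].identifier)
                edges.getValue(nodes[j].identifier).add(nodes[i].identifier)
            }
        }
    }
    return edges
}
```

An adjacency map of this kind is one possible internal representation from which the knowledge graph described above could be rendered.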
In a second aspect, an embodiment of the present application further provides an electronic device, which includes a memory storing computer program instructions; a processor, the processor coupled to the memory, the computer program instructions stored by the memory when executed by the processor causing the electronic device to implement the query method of any of the first aspects described above.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, where the computer program is configured to, when executed by a processor, implement the query method in any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to execute the query method of any one of the above first aspects.
It is understood that the beneficial effects of the second to fourth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a diagram of an example live teaching scene including a teacher-side teaching device 100 and a student-side learning device 200, according to some embodiments;
FIG. 2 is a diagram of an example live interface displayed on a tutorial device, according to some embodiments;
FIG. 3 is an illustration of an example interface for performing a knowledge search at a learning device, in accordance with certain embodiments;
FIG. 4 is an illustration of an example interface for performing a knowledge search at a learning device, in accordance with certain embodiments;
FIG. 5 is a diagram of an example knowledge graph displayed at a learning device according to some embodiments;
FIG. 6 is a diagram of an example courseware displayed on a teaching device according to some embodiments;
FIG. 7 is a schematic view of a courseware displayed on the learning device corresponding to FIG. 6;
FIG. 8 is an illustration of an example interface for performing a knowledge search at a learning device, in accordance with certain embodiments;
FIG. 9 is an example interface diagram for performing a knowledge search at a learning device according to some embodiments;
FIG. 10 is a schematic diagram of an example of the form of highlight markers provided by some embodiments;
FIG. 11 is a schematic diagram illustrating an example of smoothing a highlight mark according to some embodiments;
FIG. 12 is a schematic diagram of an example generate button provided by some embodiments;
FIG. 13 is a schematic diagram of yet another example generate button provided by some embodiments;
FIG. 14 is a schematic diagram of a process provided by some embodiments for building a knowledge-graph that includes buttons;
FIG. 15 is a flow diagram illustrating an example of a key knowledge query using buttons, according to some embodiments;
FIG. 16 is a flow diagram illustrating yet another example of a key knowledge query using buttons, according to some embodiments;
FIG. 17 is a schematic diagram illustrating interaction between a teaching device, a learning device, and a camera according to some embodiments;
FIG. 18 is a diagram illustrating an exemplary hardware configuration of an electronic device, according to some embodiments;
fig. 19 is a schematic diagram of an example software structure of an electronic device according to some embodiments.
Detailed Description
Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art. It is to be appreciated that the illustrative embodiments of the present application include, but are not limited to, a method and electronic device for fast querying key knowledge, a storage medium, and the like.
As mentioned above, when a student wants to query a certain item of key knowledge in a courseware, the student has to type the key knowledge into the browser search bar and then click the search button, which is cumbersome, and especially inconvenient when the key knowledge is lengthy or is a picture.
To improve the efficiency with which students query and retrieve key knowledge, the electronic device can provide an intelligent screen-recognition function that directly recognizes and processes the content of the courseware, queries according to the recognized content, and displays the query result, thereby simplifying the query and retrieval steps and allowing students to retrieve courseware content efficiently.
Specifically, fig. 1 shows a live teaching scene. As shown in fig. 1, the electronic device 100 is the teaching device on the teacher side and the electronic device 200 is the learning device on the student side. The online learning APP 10 is installed on both the teaching device 100 and the learning device 200, both the teacher and the students enter the live teaching interface through the online learning APP 10, and during live teaching the interface content displayed by the teaching device 100 and the learning device 200 is identical.
Illustratively, taking the content displayed by the teaching device 100 shown in fig. 2 as an example, the displayed content includes a portrait display interface 101, an exit button 102, a live ID number 103, a mute mode button 104, a talk-inhibit mode button 105, a save button 106, a note button 107, a more-options button 108, a live content interface 109, and a scroll bar 110.
The portrait display interface 101 displays the teaching environment on the student side and the teacher side and the current status of the students or the teacher. In some embodiments, the teaching environment may include the network environment on the student side or the teacher side, and the teacher can learn about the students' learning environment from the student-side display to decide whether to continue the live teaching. For example, if the teacher finds that a student is in a noisy environment or that the student's network environment is poor, the teacher may pause the live teaching and urge the student to improve the learning environment as soon as possible. In other embodiments, the teacher may decide whether to start or continue the live teaching based on the current status of the students displayed on the portrait display interface 101. For example, the teacher may stop the live teaching if the portrait of a student displayed on the portrait display interface 101 shows that the student's current state is not suitable for live teaching.
The exit button 102 is a button provided for teachers and students to exit live teaching. In some embodiments, the teacher or student may end the current live by clicking the exit button 102.
The mute mode button 104 is used to turn voice entry off or on. In some embodiments, the teacher may stop voice entry by clicking the mute mode button 104, to give students time to ask questions.
The talk-inhibit mode button 105 on the teaching device 100 is used to prevent students from speaking during the live lesson. In some embodiments, the teacher may prevent students from speaking during the live lesson by clicking the talk-inhibit mode button 105. In other embodiments, the talk-inhibit mode button 105 may not be provided on the learning device 200.
The save button 106 is used to save the interface contents currently displayed on the screen in response to an instruction from the teacher's side or the student's side during the live broadcast. For example, in some embodiments, after the teacher has made a highlight mark for a certain chapter, the teacher may click on the save button 106 to save the current highlight mark. In other embodiments, the student may choose to click the save button 106 immediately to save the focus marks of the courseware in the current interface according to his or her learning ability and receptivity. In other embodiments, when the teacher uses the paper courseware, the save button 106 is further configured to trigger a camera for acquiring an image of the paper courseware, so that the camera can generate a snapshot of the paper courseware including the key marks according to an instruction from the teacher to click on the save button 106, which will be described in detail below and will not be described herein.
The note button 107 is used for the student or teacher to record notes during the course of the teaching live. In some embodiments, the teacher or student may click on the note button 107 to enter the note mode, and then mark the important knowledge in the courseware displayed on the live content interface 109, and click on the save button 106 to save the courseware and note content before the teaching live is finished.
When the live content interface 109 is unable to display all of the courseware, the teacher or student can drag the right scroll bar 110 to display all of the courseware. In some embodiments, the teacher may further obtain more options related to the live teaching function by clicking the more options button 108, for example, after clicking the more options button 108, the teacher may select a full screen button (not shown) to display the current live content interface 109 in full screen in the corresponding function options, or after clicking the more options button 108, the teacher may select a network diagnostic button (not shown) to perform diagnostic repair on the network environment currently being taught live in the corresponding function options.
After the teacher finishes the live lesson, the teaching device 100 can store the courseware in the form of pictures or a PDF in its memory and at the same time send the pictures or PDF corresponding to the courseware to the learning device 200 on the student side, so that the student can study the content explained by the teacher in combination with the courseware on the learning device 200.
For example, in some embodiments, after the learning device 200 receives the courseware, if the student needs to search for key content marked by the teacher while browsing the courseware, the student can long-press the screen to call up the intelligent screen-recognition function of the learning device 200, and then adjust the range enclosed by the left and right boundaries of the boundary selector 211 to determine the recognition range of the learning device 200. The learning device 200 then recognizes the content within the range determined by the boundary selector 211. If the content within the range is an image of a flower, the learning device 200 may display a search result list 212 as shown in fig. 3 (A) according to the recognition result, where the search result list 212 includes flower-related results such as peony and narcissus; if the content within the range is an image of earphones, the learning device 200 may display a search result list 212 as shown in fig. 3 (B), where the search result list 212 includes results such as AirPods, FreeBuds, and Bluetooth earphones, and the student can tap to view more related results. From the search result list, the student can directly select a suitable result according to the brief description shown for each result, or click each result in turn to determine the suitable one more precisely.
In some embodiments, if the content within the recognition range is text, the learning device 200 displays, after recognizing it, the recognition result interface 212 shown in fig. 3 (C), that is, the keywords of the text content. The student can select one or more keywords in the recognition result interface 212 shown in fig. 3 (C) and then click the search button 213 below it to perform the search. In some embodiments, if the keywords displayed in the current recognition result interface 212 do not meet the student's query needs, for example they do not accurately represent the general content of the current courseware, the student can replace them with a new batch of keywords by clicking the replace button 214 on the recognition result interface 212 until the displayed keywords reflect the content of the text.
As described above, searching for and querying the key knowledge of a courseware in this way is cumbersome. Because the maximum and minimum ranges of the boundary selector 211 are usually fixed, a student easily makes mistakes when adjusting it if the key knowledge occupies only a small area; for example, in the scene shown in fig. 3 (C), it is difficult for the student to adjust the boundary selector 211 to a size that encloses exactly one line of text. Moreover, when there are many courseware files, the student has no way to search and review the key knowledge in them in a targeted manner, which affects the student's learning efficiency.
To solve this technical problem, the electronic device can generate, according to the key marks made by the teacher on the courseware during live teaching, retrieval identifiers corresponding to those key marks. When the student later browses the courseware through an application, the electronic device can invoke a third-party search application to retrieve the key knowledge and display the related query result, so that the student can query key knowledge with one tap, simplifying the operation of retrieving key knowledge in the courseware.
Optionally, the electronic device may display the query result related to the key knowledge in either of the following ways: the electronic device invokes and opens the third-party search application and displays the query result related to the key knowledge in that application, so that the student directly obtains the retrieval result; or the electronic device displays the home page of the third-party search application inside the application the student is using to browse the courseware, with the key knowledge already filled in as the retrieval keyword, so that the student can edit the retrieval keyword in the search bar of the home page and then search as needed. It should be understood that the display of the retrieval result corresponding to the retrieval identifier is not limited to these two ways; other ways are possible and are not limited here.
For example, in some other embodiments of the present application, when the electronic device generates a retrieval identifier corresponding to a key mark made by the teacher on the courseware, it links the retrieval identifier to the application package of a third-party search application and at the same time passes the key knowledge content corresponding to the retrieval identifier, in the form of a parameter, as the retrieval keyword of the third-party search application. After the student browses the courseware in the online class APP and clicks the retrieval identifier, the electronic device opens and runs the third-party search application, which searches according to the retrieval keyword passed in by the electronic device and finally displays the query result related to the key knowledge within the third-party search application.
For example, in some embodiments of the present application, when the electronic device generates a retrieval identifier corresponding to a key mark made by the teacher on the courseware, it links the retrieval identifier to the home page of a third-party search application and at the same time passes the key knowledge content corresponding to the retrieval identifier, in the form of a parameter, into the search bar of the third-party search application. After the student browses the courseware through the online class APP and clicks the retrieval identifier, the electronic device displays the home page of the third-party application within the online class APP. Taking Baidu as the third-party search application as an example, the address of Baidu's home page is "https://www.baidu.com/", so the electronic device links the retrieval identifier to "https://www.baidu.com/". As shown in fig. 4 (A), when the student browses the courseware through the online class APP and clicks the retrieval identifier of the key knowledge "Hongmeng", the electronic device displays Baidu's home page within the online class APP, as shown in fig. 4 (B). As can be seen from fig. 4 (B), the search bar of the home page already contains the key knowledge "Hongmeng" as the search keyword. After adding to, deleting from, or modifying the search keyword on the page shown in fig. 4 (B), the student can click the search button on the home page to start the search and obtain the search result web page for "Hongmeng" shown in fig. 4 (C).
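The embodiments above do not fix how the retrieval keyword is attached to the home-page link; one minimal sketch, assuming the target search engine accepts the keyword as a URL query parameter (the "/s?wd=" form used below is an assumption about Baidu-style URLs), is:

```kotlin
import java.net.URLEncoder

// Sketch only: builds the link target for a retrieval identifier so that the
// retrieval keyword is carried along as a URL parameter.
fun buildSearchLink(homePage: String, keyword: String): String {
    val encoded = URLEncoder.encode(keyword, "UTF-8")
    return homePage.trimEnd('/') + "/s?wd=" + encoded
}

// e.g. buildSearchLink("https://www.baidu.com/", "Hongmeng")
//      -> "https://www.baidu.com/s?wd=Hongmeng"
```

Whether the target page treats such a parameter as a pre-filled search bar or as an immediate search depends on the search engine.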
As another example, again taking Baidu as the third-party search application, the electronic device links the retrieval identifier to the Baidu application package "com.baidu.browser.apps". As shown in fig. 4 (A), after the student browses the courseware through the online class APP and clicks the retrieval identifier, the electronic device opens and runs the Baidu application, which searches according to the keyword "Hongmeng" passed in by the electronic device and, as shown in fig. 4 (D), displays the search results in the standalone Baidu application 11.
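The exact call used to open the standalone application and hand over the keyword is likewise not specified above; a minimal Android-style sketch, assuming the target application responds to the standard web-search intent, might look as follows (the package name is the one quoted above; the function name is illustrative):

```kotlin
import android.app.SearchManager
import android.content.Context
import android.content.Intent

// Sketch only: opens the third-party search application identified by packageName
// and passes the key knowledge as the query, so that the application can search
// for it directly. Whether the target application honours ACTION_WEB_SEARCH is an
// assumption of this sketch.
fun launchThirdPartySearch(context: Context, packageName: String, keyword: String) {
    val intent = Intent(Intent.ACTION_WEB_SEARCH).apply {
        setPackage(packageName)                 // e.g. "com.baidu.browser.apps"
        putExtra(SearchManager.QUERY, keyword)  // e.g. "Hongmeng"
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)
}
```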
In addition, in some embodiments, to further improve students' learning efficiency, the electronic device may also generate a knowledge graph of the key knowledge according to the relevance between the items of key knowledge corresponding to the teacher's key marks on the courseware. In this way, before browsing the courseware, the student can get a preliminary overview of the key knowledge it contains through the knowledge graph and then study in a targeted manner. Each item of key knowledge in the knowledge graph corresponds to a retrieval identifier, and while browsing the knowledge graph the student can click the retrieval identifier corresponding to an item of key knowledge to query it with one tap.
For example, in some embodiments of the present application, the electronic device may first associate key knowledge within the same subject according to the subject it belongs to, for example associating the key knowledge of the Chinese subject with the Chinese subject; then associate related key knowledge across different subjects, for example associating key knowledge in a history course with key knowledge in a Chinese course; and finally generate a knowledge graph covering the key knowledge of all subjects and generate a retrieval identifier for each item of key knowledge in the graph. When the student browses the knowledge graph in the online class APP, as shown in fig. 5, the student can query any item of key knowledge with one tap by clicking its corresponding retrieval identifier. In this way, the student can get a preliminary view of the key knowledge before reviewing the courseware, without having to search for key knowledge across the courseware of many subjects, and can query it directly through the retrieval identifiers in the knowledge graph, which improves learning efficiency.
Optionally, in some embodiments, the specific form of the search identifier may be a button, for example, a button with a text label or a button with an image label, or may be a link, for example, a hyperlink, and the like, which is not limited in this application.
For convenience of description, a specific implementation process of the method of the present application is described below by taking the search identifier as a button and the electronic device displaying the search result webpage in the third-party application as an example. Specifically, continuing with the example of the live teaching scene shown in fig. 1, an interface shown in fig. 6 is displayed on the teaching device 100 on the teacher's side, where the live content interface 109 displays the courseware content of the chemical subject being taught by the teacher, and the teacher makes key marks Mark1 and Mark2 in the live content interface 109 at the positions of "concentrated hydrochloric acid" and "concentrated sulfuric acid" in the courseware, that is, circles are used to circle out the "concentrated hydrochloric acid" and the "concentrated sulfuric acid".
Then, when the teacher clicks the save button 106 in the interface shown in fig. 6 and exits the live lesson via the exit button 102, the teaching device 100 generates, at the positions of the teacher's key marks, buttons with text labels related to the key knowledge content, links these text-labeled buttons to the application package "com.baidu.browser.apps" of a third-party search application (for example, Baidu), and at the same time uses the key knowledge corresponding to each button, that is, the label content on the button, as the search keyword of the third-party search application. The teaching device 100 then saves, together with the courseware, the teacher's key marks, the text-labeled buttons, and the links between the buttons and the application package "com.baidu.browser.apps" of the third-party search application (for example, Baidu).
When a student opens the courseware of the chemistry subject through the online class APP on the learning device 200 and encounters key knowledge such as "concentrated hydrochloric acid" and "concentrated sulfuric acid", as shown in fig. 7, the student can see the buttons labeled "concentrated hydrochloric acid" and "concentrated sulfuric acid". By clicking the button labeled "concentrated hydrochloric acid" or "concentrated sulfuric acid", the student causes the learning device 200 to invoke and open the Baidu search application 11, and the Baidu search application 11 displays the search results for "concentrated hydrochloric acid" as shown in fig. 8.
In some embodiments, the teaching device 100 may further associate all key knowledge of subjects such as Chinese, mathematics, and chemistry by subject, then associate some of the key knowledge across subjects according to the similarity between the key knowledge features, to form a knowledge graph of all key knowledge as shown in the live content interface 209 in fig. 5, and then send the knowledge graph and the electronic courseware of each subject to the learning device 200. When the student opens the electronic courseware on the learning device 200 through the online class APP, the student can see the knowledge graph shown in fig. 5, which contains the key knowledge of all subjects, namely the Chinese subject includes the key knowledge "quality" and "Hongmeng", the mathematics subject includes the key knowledge "trigonometric function" and "graph and hyperbola", and the chemistry subject includes the key knowledge "chemical equation of the reaction of sulfuric acid and sodium hydrogen sulfate", "molecular structure of dimethylbenzoic acid", and "quality", together with the buttons corresponding to each item of key knowledge. The student can also see key knowledge that is associated across different subjects, for example the association between "quality" in the Chinese subject and "quality" in the chemistry subject. While previewing the key knowledge of each subject through the knowledge graph, the student can perform a one-tap query by clicking the corresponding button.
Illustratively, assume that the student clicks the "Hongmeng" button in the knowledge graph shown in fig. 5. As previously described, the buttons corresponding to the key knowledge are all linked to the Baidu application package, and the key knowledge, that is, the label content on each button, has also been used as the search keyword in the Baidu application's search bar. Therefore, when the student clicks the "Hongmeng" button on the knowledge graph, the learning device 200 invokes and opens the Baidu application in response to this action, and the Baidu application displays the search results related to the key knowledge "Hongmeng" as shown in fig. 9, for example entries such as "Huawei Mall" and "Hongmeng (Chinese word)", and the student can click the corresponding entry to view it as required. If the currently displayed retrieval results cannot meet the student's needs, the student can display other related results by pulling down the scroll bar on the left side of the interface.
It can be understood that, in the search result interface 209 shown in fig. 9, the student may also re-edit the search term in the search bar 210. For example, the student may add a new search term to improve the accuracy of the results: if the student adds the term "Huawei" to the current term "Hongmeng", the results displayed in the search result interface 209 will relate to both "Huawei" and "Hongmeng". The student may also delete search terms to display more results, or type a new search term in the current search result interface 209 to perform the next search.
With the above method, the student's process of searching for key knowledge is simplified: the student no longer has to manually adjust the recognition range of the electronic device's intelligent screen-recognition function in order to recognize and retrieve the key knowledge, which improves the efficiency of querying key knowledge. At the same time, the knowledge graph is provided to the students, so that before browsing the courseware of many subjects a student can first understand the key knowledge of each subject and the relevance of the key knowledge across subjects, and then study on that basis, which improves the student's learning efficiency.
It is understood that the teaching device 100 and the learning device 200 can be various electronic devices having display function and supporting live teaching, for example, electronic devices including but not limited to laptop computers, desktop computers, tablet computers, mobile phones, servers, wearable devices, head-mounted displays, mobile email devices, portable game machines, portable music players, reader devices, or other electronic devices capable of accessing a network, and the application is not limited thereto.
Moreover, a student can open the courseware directly in the online class APP, or download it through the online class APP and open it in another application capable of opening the courseware; the application used by the student to open the courseware is not limited in this application.
To further explain the implementation of the present application, the following description continues to use a button as the retrieval identifier and describes the method, used in the above embodiments, of generating the button corresponding to the key knowledge and the knowledge graph containing the key knowledge and its buttons. It should be understood that this method may be implemented on the teaching device 100: the buttons with text labels are generated from the key marks made by the teacher on the courseware, the knowledge graph of the key knowledge is also generated on the teaching device 100, and the teaching device 100 then sends the electronic courseware generated by the method of the present application to the learning device 200 for the students to use.
In some embodiments, the method of generating the buttons corresponding to the key knowledge and the knowledge graph containing them may also be implemented on the learning device 200. That is, after the teacher finishes the live lesson on the teaching device 100, the electronic courseware containing the key knowledge and the corresponding key marks may be stored and uploaded to the online class APP shown in fig. 1; after the student downloads the electronic courseware through the online class APP, the learning device 200, in response to the download operation, recognizes the key marks in the electronic courseware, generates the buttons with text labels, and generates the knowledge graph of the key knowledge. In some embodiments, while recognizing the key marks and generating the buttons and the knowledge graph, the learning device 200 may display a prompt such as "generating the one-tap query function for the electronic courseware" on the online APP interface shown in fig. 2 to inform the student that the buttons corresponding to the key marks in the electronic courseware are being generated. This is not limited by the present application.
It should also be understood that courseware can generally be divided into paper courseware and electronic courseware. Paper courseware refers to courseware in paper form used on the teacher side: when the teacher uses paper courseware for live teaching, a camera needs to be configured on the teacher side to capture images of the paper courseware, the camera sends the images to the teaching device 100, and the teaching device 100 then sends them to the learning device 200, so that the courseware content displayed on the teacher side and the student side is consistent. The camera 300 may be an ecosystem camera independent of the teaching device 100 that supports distributed Device Virtualization (DVKit), or a camera integrated in the teaching device 100, which the present application does not limit.
When the teacher uses electronic courseware for live teaching, the teaching device 100 does not need a camera to acquire the content of the courseware; the teaching device 100 and the learning device 200 can communicate directly over a network protocol, and the electronic courseware content displayed on the two devices is consistent.
For convenience of description, the method of generating the buttons corresponding to the key knowledge and the knowledge graph containing them on the teaching device 100 is described below using the case where the teacher uses electronic courseware. The method mainly includes the following steps: first, the teaching device 100 determines one or more items of key knowledge in the courseware according to the key marks; then the teaching device 100 generates, at the position corresponding to each key mark or item of key knowledge, a button with a label related to that key knowledge, and links the button to the third-party application or to the search home page of the third-party application; alternatively, the button may be linked to the web page corresponding to a search result of the third-party search application, or even to the storage path of a text file recording knowledge related to the key knowledge, and so on. This is explained below:
(1) Determining one or more items of key knowledge in the courseware according to the key marks.
It can be understood that the teaching device 100 needs to determine the key knowledge of the courseware according to the key marks on it, so that in the subsequent steps it can generate, for each item of key knowledge, the button with the corresponding text label.
Specifically, in some embodiments, the teaching device 100 first needs to recognize the key marks that the teacher makes on the key knowledge in the courseware, for example, as shown in fig. 10, a circle R1 the teacher draws around key knowledge, or a mark the teacher adds in the middle or at the end of a paragraph or sentence, such as a star mark R2, a numeric mark, or a letter mark; the teaching device 100 must recognize these marks before it can determine the key knowledge. In other embodiments, the teaching device 100 may judge whether a mark is valid according to its size, position, and so on. For example, a key mark is meant to draw students' attention to key knowledge, so if the mark is too small to serve that purpose, the teaching device judges it invalid; likewise, if the teacher's mark falls outside the text area of the paper courseware, the teaching device also judges it invalid.
After recognizing the key marks, the teaching device 100 determines the key knowledge marked by the teacher on the basis of those marks. Specifically, for example, suppose the teacher circles a sentence of the text in the courseware shown in fig. 10 with the circle R1. The teaching device 100 recognizes the circle R1, determines the closed region corresponding to it, recognizes the text within that closed region, and takes that text as the key knowledge marked by the teacher.
In other embodiments, if the teacher's mark is placed in the middle or at the end of a paragraph or sentence, such as the star mark R2 in fig. 10, the teaching device 100 first recognizes the star mark R2 made by the teacher, then determines the complete sentence closest to the mark according to the mark's position, and takes that sentence as the key knowledge. For example, suppose the teacher adds the star mark R2 at the end of a sentence in the courseware shown in fig. 10; the teaching device 100 first identifies the position (xR2, yR2) of the star mark R2 and then takes the complete sentence closest to (xR2, yR2) as the key knowledge.
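A minimal sketch of this nearest-sentence step is given below; it assumes each recognized sentence carries the coordinates of its last character and that plain Euclidean distance to the mark position is used, both of which are assumptions of the sketch rather than requirements of the embodiments above.

```kotlin
import kotlin.math.hypot

// Sketch only: each sentence recognized on the courseware page is assumed to carry
// the (x, y) position of its last character.
data class Sentence(val text: String, val endX: Float, val endY: Float)

// Picks the complete sentence closest to the position of the key mark, e.g. the
// star mark R2 at (xR2, yR2), and treats its text as the key knowledge.
fun keyKnowledgeNearMark(markX: Float, markY: Float, sentences: List<Sentence>): String? =
    sentences.minByOrNull {
        hypot((it.endX - markX).toDouble(), (it.endY - markY).toDouble())
    }?.text
```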
In some embodiments, the teaching device 100 may identify the type of key mark using an image recognition neural network model, determine the region corresponding to a given type of key mark, recognize the text in that region using Optical Character Recognition (OCR), and take the text as the key knowledge marked by the teacher. Specifically, the process may include: (1) the teaching device 100 trains the image recognition network model; (2) the teaching device 100 recognizes the key marks using the image recognition network model; (3) the teaching device 100 determines, according to the shape of a key mark, the region containing the key knowledge corresponding to that mark; (4) the teaching device 100 recognizes the text content in the region using OCR. The details are as follows:
(1) The teaching device 100 trains the image recognition network model
The teaching device 100 trains the image recognition network model on images with pixel-level labels from an existing training data set, so that the trained model can recognize the key marks in the courseware. An image with pixel-level labels means that each pixel in the image has a corresponding type label; for example, some pixels correspond to characters and others to the shapes of objects.
Specifically, the teaching apparatus 100 takes an image in the training data set with a preset pixel level class label as target data; then, the teaching device 100 inputs the target data into the image recognition network model to be trained to obtain an image recognition result of the target data, and calculates a loss function of the image recognition network model according to the image recognition result of the target data.
In some embodiments, the loss function may be calculated according to formula (1):

L_Seg = -y(t) · log F(x_t)    (1)

where F(x_t) is the result recognized by the image recognition network model for the target data x_t, y(t) is the preset pixel-level class label of the target data, and L_Seg is the loss function of the image recognition network model. The teaching device 100 then adjusts the parameters of the image recognition network model, such as the weights of each layer of the neural network it uses, according to the value of the loss function so as to reduce it, until the output of the image recognition network model is the same as or close to the expected result (i.e., the preset pixel-level class label); when that is the case, training of the image recognition network model is considered complete.
In some embodiments of the present application, whether the training of the image recognition network model is complete may be judged by comparing the value of the loss function with a preset threshold: when the value of the loss function is less than or equal to the preset threshold, the output of the image recognition network model is considered the same as or close to the expected result, that is, the training is complete. The setting of the preset threshold is related to the neural network model and loss function adopted; the better the performance of the neural network model, the lower the preset threshold can be set, and the present application does not limit how the preset threshold is set.
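To make formula (1) and the threshold test concrete, the sketch below computes the per-pixel loss from the model's predicted class probabilities and the labelled class, and reports training as complete once the loss averaged over the labelled pixels is no greater than the preset threshold; treating y(t) as one-hot and averaging over pixels are assumptions of the sketch.

```kotlin
import kotlin.math.ln

// Sketch only: formula (1) for a single pixel, with y(t) taken as one-hot so that
// only the term of the labelled class survives: L_Seg = -log F(x_t)[label].
fun pixelLoss(predictedProbs: FloatArray, labelIndex: Int): Double =
    -ln(predictedProbs[labelIndex].toDouble().coerceAtLeast(1e-12))

// The loss averaged over all labelled pixels; training is considered complete when
// it is less than or equal to the preset threshold discussed above.
fun trainingComplete(pixelProbs: List<FloatArray>, labels: List<Int>, threshold: Double): Boolean {
    val meanLoss = pixelProbs.indices.map { pixelLoss(pixelProbs[it], labels[it]) }.average()
    return meanLoss <= threshold
}
```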
Alternatively, the training data set used for training the image recognition network model may be a 2D data set such as VOC (visual object classes) or MS COCO (Microsoft common objects in context); a 2.5D data set such as NYU-D V2, SUN-3D, or SUN RGB-D; or a 3D data set such as Stanford 2D-3D or ShapeNet Core. The present application does not limit the training data set adopted for training the image recognition network model.
Optionally, the semantic segmentation model used for training may be based on a neural network architecture such as fully convolutional networks (FCNs), SegNet, U-Net, or DeepLab V1-V3. The present application does not limit the type of neural network architecture used for training the image recognition network.
(2) The teaching apparatus 100 recognizes the key mark using the image recognition network model
The teaching equipment 100 identifies the key marks on the courseware by using the trained image recognition network model and determines the shapes of the key marks, such as a circle drawn by the teacher around key knowledge, or a "star" mark added by the teacher at the end of a sentence.
In order to facilitate recognition of the key marks by the teaching device 100, in some embodiments, when the teacher marks the electronic courseware or the image of the paper courseware, the teaching device 100 may smooth the teacher's mark according to the stroke track, or automatically connect its head and tail. For example, as shown in fig. 11 (A), when the teacher circles a sentence of the Chinese-subject text on the electronic courseware, the closed curve formed is an irregular curve L1; the teaching device 100 automatically smooths the irregular curve L1 after the teacher completes the key mark, or while the teacher is drawing it, to obtain the smooth curve L2 shown in fig. 11 (B). Alternatively, as shown in fig. 11 (C), after the teacher finishes drawing on the electronic courseware, a closed curve is not formed, but since the head (starting point) and the tail (end point) of the curve L3 are very close, the teaching apparatus 100 automatically connects the head and the tail of the curve L3 to form a closed curve.
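A minimal sketch of the automatic head-to-tail connection described above (the distance tolerance in pixels is an illustrative assumption):

import java.awt.geom.Point2D;
import java.util.List;

public final class StrokeClosing {
    // If the start and end of the teacher's stroke are close enough,
    // close the curve by appending the starting point to the stroke.
    static void closeIfNearlyClosed(List<Point2D> stroke, double tolerancePx) {
        if (stroke.size() < 3) {
            return; // not enough points to form a closed curve
        }
        Point2D head = stroke.get(0);
        Point2D tail = stroke.get(stroke.size() - 1);
        if (head.distance(tail) <= tolerancePx) {
            stroke.add(head); // connect the tail back to the head
        }
    }
}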
In order to improve the efficiency with which the teaching device 100 recognizes the key marks, in other embodiments, after the teacher opens the live teaching interface shown in fig. 2, a prompt message such as "please mark key contents with a smooth closed curve" may be displayed on the live teaching interface, so as to reduce missed recognitions caused by key marks that do not meet the recognition requirements of the teaching device 100. The prompt message may appear in the form of a floating window, which may pop up when the teacher clicks the note button 207 shown in fig. 2, or the prompt message may be fixed at a certain position of the live interface shown in fig. 2 to remind the teacher in real time. The recognition requirements of the teaching device 100 may be the clarity, size, position, and the like of the key marks preset by the developer; for example, a key mark may fail to be recognized if its lines are too thin, if it is too small, or if it falls outside the text range of the courseware. The present application does not limit this either.
(3) The teaching device 100 determines the area including the key knowledge corresponding to the key mark according to the shape of the key mark
Since the key marks differ in shape and each shape of key mark corresponds to a region including key knowledge, the teaching apparatus 100 determines the region of key knowledge corresponding to a key mark of a certain shape according to that shape. The shape of the key mark may include, but is not limited to, a closed curve, a star, a square, and the like; the key mark may be drawn by the teacher with a stylus or a finger, or formed with a writing tool provided by the teaching apparatus 100, such as a square, a circle, or a shaded region. It should be understood that the present application is not limited thereto. It should also be understood that when the key mark is drawn by the teacher with a stylus or a finger, or formed with a writing tool of the teaching apparatus 100, the teaching apparatus 100 performs the smoothing and automatic head-to-tail connection described in the above embodiment of step (2) on the key mark.
Specifically, taking the key mark shown in fig. 10 as an example, the teaching apparatus 100 identifies a closed curve R1 on the courseware by using the image recognition network model, and then determines the range covered by the closed curve R1, for example, the outermost coordinates P1, P2, P3, and P4 of the outline of the closed curve R1; this range is the area including the key knowledge. As another example, continuing with the key mark "four-pointed star" R2 shown in fig. 10, the teaching apparatus 100 recognizes the key mark R2 in the courseware by using the image recognition network model, determines the position (xr2, yr2) of R2, and then takes the complete sentence preceding (xr2, yr2) as the region of key knowledge, with (xr2, yr2) as its end point.
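A minimal sketch of determining the rectangular range covered by a recognized closed curve (the list of curve points is assumed to come from the image recognition step; the corner points correspond to P1 to P4 above):

import java.awt.Point;
import java.awt.Rectangle;
import java.util.List;

public final class KeyRegion {
    // outermost coordinates of the closed curve -> rectangular key-knowledge region
    static Rectangle boundingRegion(List<Point> curvePoints) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
        for (Point p : curvePoints) {
            minX = Math.min(minX, p.x);
            minY = Math.min(minY, p.y);
            maxX = Math.max(maxX, p.x);
            maxY = Math.max(maxY, p.y);
        }
        return new Rectangle(minX, minY, maxX - minX, maxY - minY);
    }
}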
(4) The teaching device 100 uses the OCR recognition technique to recognize the text content in the area
It is understood that the teaching apparatus 100 can train the OCR recognition network model in advance, and then recognize the content in the area including the key knowledge by using the trained OCR recognition network model, and use the content as the key knowledge.
In some embodiments, the teaching device 100 may train the OCR recognition model using sample images with label data; the training process is consistent with the way the teaching device 100 trains the image recognition model described above, which may be referred to and is not described here again.
After the OCR recognition network model is trained, the teaching apparatus 100 recognizes the text content in the area where the key knowledge is located by using the OCR recognition network model, and uses the text content as the key knowledge.
In other embodiments, the content of the area where the important knowledge is located may also be an image, so the teaching apparatus 100 may further identify the area where the important knowledge is located by using the image identification network model to identify the image content in the area where the important knowledge is located, and use the image content as the important knowledge.
It should also be appreciated that the image recognition network model and the OCR recognition network model may be trained by other electronic devices and then pre-configured on the teaching device 100 when the system of the teaching device 100 is developed. Alternatively, after completing model training, the other electronic devices may package the trained models into software development kit (SDK) files, send the SDK files to the teaching device 100, and install them on the teaching device 100; when the teaching device 100 needs to use a network model, it does so by calling the functions related to that model. The other electronic devices may be devices with model training capability, such as a notebook computer, a desktop computer, a cloud computer, or a tablet computer, and the present application does not limit the form of these devices.
(5) Generating a button corresponding to each item of key knowledge
In order to realize one-key query of key knowledge, the teaching device 100 generates a corresponding button for each item of key knowledge, links the button to a third-party search application package or to the home interface web page of a third-party search application, and passes the label content on each key knowledge button to the search bar of the third-party search application as the retrieval keyword. Thus, when a student browses the electronic courseware through the online class APP on the learning device 200 and clicks the button corresponding to a certain item of key knowledge, the learning device 200 can, in response to the student's operation, call the third-party search application to retrieve that key knowledge and directly display the query result in the third-party application, or display the home interface of the third-party search application within the online class APP.
Optionally, the key knowledge button may be a button with a text label or a button with an image label, which is not limited in the present application. For example, as shown in fig. 12, if the teaching apparatus 100 recognizes that the key knowledge in the chemistry subject is "concentrated hydrochloric acid" or "concentrated sulfuric acid", the teaching apparatus 100 generates, at the position of the key knowledge, a button with the text label "concentrated hydrochloric acid" covering the original text "concentrated hydrochloric acid" and a button with the text label "concentrated sulfuric acid" covering the original text "concentrated sulfuric acid". As another example, if the text of the key knowledge is long, the teaching apparatus 100 may generate a button whose label is the keywords of the key knowledge text. As yet another example, if the key knowledge is an image, the teaching apparatus 100 may generate a button labeled with the image ID number; for example, as shown in fig. 13, if the teacher circles with a key mark the image with image ID number "3-1" in the first section of the third chapter of the biology subject, the teaching apparatus 100 generates a button with the text label "3-1".
For example, taking the creation of the button shown in fig. 12 as an example, the teaching apparatus 100 first imports the Abstract Window Toolkit (AWT) and the lightweight graphical interface components (Swing), where AWT is the basic toolkit for creating and configuring a Java graphical user interface, and Swing is a newer set of components built on top of the AWT platform.
Then the teaching device 100 creates a "concentrated hydrochloric acid" button with JButton jb = new JButton("concentrated hydrochloric acid"), and then sets the other properties of the button, such as its size, position, and color. For example, assuming that the coordinates of the four boundary points of the region where "concentrated hydrochloric acid" is located are {(x1, y1), (x2, y1), (x1, y2), (x2, y2)}, the teaching device 100 sets the position and size of the button, for example with jb.setBounds(x1, y1, x2 - x1, y2 - y1).
Thereafter, the teaching device 100 registers an event response for the button, for example with jb.addActionListener(...), so that clicking the button triggers the query operation described below.
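A minimal Swing sketch consolidating the steps above (the frame, the coordinate values, and the listener body are illustrative assumptions, not part of this application):

import javax.swing.JButton;
import javax.swing.JFrame;
import java.awt.event.ActionEvent;

public final class KeyKnowledgeButtonDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("courseware");
        frame.setLayout(null);                      // absolute positioning over the courseware
        frame.setSize(800, 600);

        JButton jb = new JButton("concentrated hydrochloric acid");
        // region covered by the key knowledge: {(x1,y1),(x2,y1),(x1,y2),(x2,y2)}
        int x1 = 120, y1 = 200, x2 = 320, y2 = 240; // illustrative coordinates
        jb.setBounds(x1, y1, x2 - x1, y2 - y1);

        // event response registered for the button; in the description above this is where
        // the teaching device would open the third-party search application
        jb.addActionListener((ActionEvent e) -> System.out.println("query: " + jb.getText()));

        frame.add(jb);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}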
In some embodiments, the teaching device 100 may display the home interface of the third-party search application within the current application in a WebView manner, that is, the teaching device 100 may link the button to the home interface web page of a third-party search application such as Baidu. Specifically, the teaching device 100 adds the following steps to the onClick() function: first, a WebView object pointing to the current application (the application in which the courseware is opened) is created, namely WebView webView = new WebView(this); then the WebView is made to load the home interface web page address of the third-party search application, "https://www.baidu.com/", namely webView.loadUrl("https://www.baidu.com/"); finally, the home interface web page of the third-party search application is displayed within the current application, namely setContentView(webView).
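A minimal Android-style sketch of the WebView approach described above (the Activity, the button wiring, and the button label are illustrative assumptions; WebView settings such as the INTERNET permission are omitted):

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.webkit.WebView;
import android.widget.Button;

public class CoursewareActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Button keyKnowledgeButton = new Button(this);
        keyKnowledgeButton.setText("hongmeng");      // illustrative key knowledge label
        keyKnowledgeButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // display the home interface of the third-party search application
                // inside the current application, as described above
                WebView webView = new WebView(CoursewareActivity.this);
                webView.loadUrl("https://www.baidu.com/");
                setContentView(webView);
            }
        });
        setContentView(keyKnowledgeButton);
    }
}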
Through this method, after the key knowledge button is clicked, the home interface of the third-party search application can be displayed in the current application (see fig. 4 (B) above); the student can then edit the retrieval keyword in the search bar of that home interface and click the search button (for example, the "Baidu" search button in the home interface of the Baidu application) to retrieve keywords related to the key knowledge and display the search result web page within the current application.
In other embodiments, the teaching apparatus 100 may also directly open the third-party search application so that it automatically retrieves the retrieval keyword as soon as it is opened, and directly display the retrieval result in the third-party application; that is, the button is linked to the third-party search application package. Taking the same Baidu application as an example, the teaching device 100 needs to add the following method to the onClick() function: first, set the third-party search application package to be called, namely private static String MOSPACKAGE = "com.…"; then pass the key knowledge into the third-party search application as the retrieval keyword value, namely public void jumpBrowser(String value); then have the third-party search application retrieve the keyword value, namely Intent search = new Intent(Intent.ACTION_WEB_SEARCH); search.putExtra(SearchManager.QUERY, value). In this way, when the key knowledge button is clicked, the third-party search application is opened to retrieve the key knowledge, and the retrieval result is directly displayed in the third-party application (see fig. 4 (D)).
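A minimal sketch of this second approach, in which the retrieval keyword is handed to the system web-search intent (the method name jumpBrowser follows the description above; the Activity parameter and the startActivity call are illustrative assumptions):

import android.app.Activity;
import android.app.SearchManager;
import android.content.Intent;

public final class OneKeyQueryHelper {
    // pass the key knowledge as the retrieval keyword to a third-party search application
    public static void jumpBrowser(Activity activity, String value) {
        Intent search = new Intent(Intent.ACTION_WEB_SEARCH);
        search.putExtra(SearchManager.QUERY, value);
        // optionally restrict the query to a specific search application package,
        // e.g. search.setPackage(MOSPACKAGE), using the package name configured above
        activity.startActivity(search);
    }
}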
By the above method, the teaching equipment 100 can generate buttons corresponding to key knowledge according to the teacher's key marks on the courseware, so that when a student browses the courseware through a specific application on the learning device 200, the student can query the key knowledge with one key via the buttons in the courseware. This simplifies the operation of adjusting the boundary selection box (see fig. 3 above) required when the student searches with the smart screen-recognition function, and improves the efficiency with which the student retrieves key knowledge.
However, it can be understood that since key knowledge and its corresponding buttons are generally located in the courseware of a particular subject, a student can only see them after opening that courseware; when there are many courseware files, the student would have to browse the electronic courseware of every subject to learn all the key knowledge and the corresponding one-key query buttons. Therefore, in order to save students the time of browsing the electronic courseware of each subject to find key knowledge, in some embodiments the teaching device 100 may generate a knowledge map including the key knowledge of all subjects, and then generate a corresponding button for each item of key knowledge in the map using the button generation method described above, so that students can learn about the key knowledge of every subject through the knowledge map and perform a one-key query through the buttons in the map.
The knowledge map is composed of the items of key knowledge and the association relationships between them. In some embodiments, the association between items of key knowledge may be determined from the relationship between their features, for example from whether the features are similar: if the feature of "hongmeng" in the Chinese subject is similar to the feature of "hongmeng" in the history subject, the two are associated. The association may also be determined from whether there is a superordinate-subordinate concept relationship between the features; for example, since such a relationship exists between "mechanics" and "gravity" or "friction" in the physics subject, "mechanics" is associated with "gravity" and with "friction". The present application does not limit the manner of determining the association relationship.
In the following, taking the case where the association between items of key knowledge is determined from the feature similarity of the key knowledge of each subject as an example, the method for generating the knowledge map of key knowledge in some embodiments is described. It mainly comprises:
(1) Extracting the features of the key knowledge.
As described above, the knowledge graph is constructed by the key knowledge and the association relationship between the key knowledge, and the association relationship between the key knowledge is established according to the relationship between the features of the key knowledge, such as the similarity between the features of the key knowledge, so that the teaching apparatus 100 needs to extract the features of the key knowledge first, and then determine the association relationship between the key knowledge based on the feature similarity of the key knowledge to construct the knowledge graph of the key knowledge.
In some embodiments, the features of key knowledge may include content features that reflect the key knowledge itself, such as the keywords of text content or the content of an image, and may also include source features that reflect the subject to which the key knowledge belongs. For example, key knowledge in the courseware of the same subject has the same source: the source feature of "Yueyangtai" and "Tengwangge" in the Chinese courseware is "Chinese subject", and the source feature of "conic curve" and "trigonometric function" is "mathematics subject".
Specifically, in some embodiments, if a certain item of key knowledge in the physics courseware recognized by the teaching apparatus 100 is text, the teaching apparatus 100 extracts keywords from the text by using the OCR recognition technology and uses the keywords as the content features of the text. For example, the teaching apparatus 100 recognizes the sentence "Newton's third law of motion is only applicable to the interaction between physical objects in an inertial system; for example, an electron moving in an electromagnetic field is influenced by the electromagnetic force, but one does not speak of a reaction force of the electron on the electromagnetic field, and the inertial force in a non-inertial system has no reaction force", and then extracts the keywords of this sentence as "physics subject; Newton's third law of motion; inertial system; electron; electromagnetic force".
In other embodiments, if the teaching apparatus 100 recognizes that a certain item of key knowledge in the history courseware is an image, the teaching apparatus 100 recognizes the image content using the image recognition network model and uses the image content as the features of that key knowledge. For example, if the teaching apparatus 100 recognizes that the content of image A includes mountains, rivers, birds and beasts, the teaching apparatus 100 takes "history subject; mountains and rivers; birds and beasts" as the content features of image A.
(2) Establishing an association relationship for items of key knowledge whose feature similarity is greater than or equal to a preset similarity threshold, so as to construct a knowledge graph.
After extracting the features of the key knowledge, the teaching apparatus 100 needs to further calculate the similarity between the features of the key knowledge to determine whether there is an association relationship between the key knowledge according to the similarity between the features of the key knowledge, and form a knowledge map according to the association relationship between the key knowledge of each subject.
Alternatively, the formula for calculating the similarity between the features of key knowledge may be a Euclidean distance formula, a Manhattan distance formula, a Minkowski distance formula, or a Pearson correlation coefficient formula.
For example, in some embodiments, the teaching device 100 calculates the feature similarity between two items of key knowledge using the following Euclidean distance formula (2):
d_n(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}    (2)
wherein d_n(p, q) represents the feature similarity between key knowledge p and key knowledge q, n represents the number of features of key knowledge p (or key knowledge q), p_i represents the value of a certain feature of key knowledge p, and q_i represents the value of the corresponding feature of key knowledge q; for example, the features of an item of key knowledge may include a text feature value, an image content feature value, and a source feature value.
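A minimal sketch of formula (2) applied to two feature vectors (the numeric encoding of the text, image, and source features into a vector is assumed to have been done in the feature-extraction step):

public final class FeatureSimilarity {
    // d_n(p, q) of formula (2): Euclidean distance between the feature vectors of two
    // items of key knowledge, used in the description above as the similarity measure
    static double euclideanDistance(double[] p, double[] q) {
        if (p.length != q.length) {
            throw new IllegalArgumentException("feature vectors must have the same length");
        }
        double sum = 0.0;
        for (int i = 0; i < p.length; i++) {
            double diff = p[i] - q[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
}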
In some embodiments, the teaching apparatus 100 may compare the feature similarity between two items of key knowledge with a preset similarity threshold, and determine whether the two items of key knowledge are associated according to the relationship between that similarity and the preset similarity threshold.
Specifically, when the feature similarity between two key knowledge is greater than or equal to a preset similarity threshold, the teaching apparatus 100 determines that there is an association between the two key knowledge, and when the feature similarity between the two key knowledge is less than the preset similarity threshold, the teaching apparatus 100 determines that there is no association between the two key knowledge.
For example, taking fig. 14 (A) as an example, if the feature similarity between the key knowledge "hongmeng" of the Chinese subject and the key knowledge "hongmeng" of the history subject is greater than the preset similarity threshold, the two items of key knowledge are associated (represented by a line in the figure); if the feature similarity between the key knowledge "hongmeng" of the Chinese subject and the key knowledge "trigonometric function" of the mathematics subject is smaller than the preset similarity threshold, these two items of key knowledge are not associated. The association relationships between the key knowledge of the subjects are determined in this way, according to the similarity between each pair of key knowledge and the preset similarity threshold, and the knowledge map of key knowledge shown in fig. 14 (B) is obtained.
In some embodiments, the teaching apparatus 100 may first group the items of key knowledge belonging to the same subject under that subject according to their source features; a minimal sketch of this grouping step is shown after this paragraph. For example, as shown in fig. 14 (B), the teaching apparatus 100 groups the key knowledge "hongmeng" and "quality" of the Chinese subject under the Chinese subject, groups the key knowledge "trigonometric function" and "hyperbola" of the mathematics subject under the mathematics subject, groups the key knowledge of the chemistry subject (such as the chemical equation of the reaction of sulfuric acid with sodium bisulfate and the molecular structural formula of dimethylbenzoic acid) under the chemistry subject, and groups the key knowledge "hongmeng" and "Wei, Jin, Southern and Northern Dynasties" of the history subject under the history subject. Then, between different subjects, an association relationship is established for items of key knowledge whose feature similarity is greater than or equal to the preset similarity threshold, according to the other features of the key knowledge, such as the keywords of text content or the content features of images. For example, continuing to refer to fig. 14 (B), the feature similarity between "hongmeng" of the Chinese subject and "hongmeng" of the history subject is 1, which is equal to a preset similarity threshold of, for example, 1, so an association is established between "hongmeng" of the Chinese subject and "hongmeng" of the history subject.
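A minimal sketch of the grouping step referenced above (the KeyKnowledge record and its fields are illustrative assumptions):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class SubjectGrouping {
    // illustrative representation: an item of key knowledge with its source feature (subject)
    record KeyKnowledge(String label, String subject) { }

    // group the items of key knowledge under the subject given by their source feature
    static Map<String, List<KeyKnowledge>> groupBySubject(List<KeyKnowledge> items) {
        Map<String, List<KeyKnowledge>> groups = new HashMap<>();
        for (KeyKnowledge k : items) {
            groups.computeIfAbsent(k.subject(), s -> new ArrayList<>()).add(k);
        }
        return groups;
    }
}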
In some embodiments, the preset similarity threshold may be determined according to the correlation between the subjects to which the courseware belongs. If the correlation between the subjects of two courseware files is low, the key knowledge in the two courseware files is largely unrelated, and the preset similarity threshold may be set higher, for example 0.8 or 0.9, so as to avoid associating key knowledge between weakly correlated subjects, which could confuse students' understanding of the key knowledge. If the correlation between the subjects of two courseware files is high, the key knowledge in the two courseware files is correlated to a large extent, and the preset similarity threshold may be set lower, for example 0.4 or 0.5, so as to associate key knowledge between correlated subjects as much as possible and strengthen students' associative memory. It should be understood that the present application does not limit the manner of setting the preset similarity threshold.
In some embodiments, the teaching apparatus 100 may determine the level of the correlation between the subjects to which the courseware belongs according to the relationship between the sum of the correlations between the subjects and the preset correlation threshold, for example, assuming courseware of three subjects, i.e. a math subject M, a chinese subject C, and an english subject E, the correlations between them are shown in table 1 below:
TABLE 1

                      Mathematics class M   Chinese lesson C   English class E
Mathematics class M            1                  0.2                0.1
Chinese lesson C              0.2                  1                 0.7
English class E               0.1                 0.7                 1
That is, the correlation between Chinese lesson C and mathematics lesson M is 0.2, the correlation between Chinese lesson C and English lesson E is 0.7, and the correlation between mathematics lesson M and English lesson E is 0.1, so the sum of the correlations among the three subjects is: 0.2 + 0.7 + 0.1 = 1.
Assuming that the preset correlation threshold is 1, the sum of the correlations among the three subjects is equal to the preset correlation threshold, so the correlation among the three subjects is high and, correspondingly, the preset similarity threshold may be set lower; assuming instead that the preset correlation threshold is 1.5, the sum of the correlations among the three subjects is smaller than the preset correlation threshold, so the correlation among the three subjects is low and, correspondingly, the preset similarity threshold may be set higher.
Then, the teaching apparatus 100 establishes an association relationship for the key knowledge based on the relationship between the feature similarity between the key knowledge and a preset similarity threshold to construct a knowledge map as shown in fig. 14 (B).
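A minimal sketch combining the two steps above: the preset similarity threshold is chosen from the sum of subject correlations (values as in Table 1), and an association is created whenever the feature similarity of two items of key knowledge reaches the threshold (the "high" and "low" threshold values 0.5 and 0.9 are illustrative assumptions):

import java.util.ArrayList;
import java.util.List;

public final class KnowledgeGraphSketch {

    // sum of pairwise correlations between subjects, e.g. 0.2 + 0.7 + 0.1 = 1.0 for Table 1
    static double correlationSum(double[][] corr) {
        double sum = 0.0;
        for (int i = 0; i < corr.length; i++) {
            for (int j = i + 1; j < corr.length; j++) {
                sum += corr[i][j];
            }
        }
        return sum;
    }

    // choose a lower similarity threshold when the subjects are highly correlated
    static double chooseSimilarityThreshold(double correlationSum, double correlationThreshold) {
        return correlationSum >= correlationThreshold ? 0.5 : 0.9; // illustrative values
    }

    // build association edges: index pairs (i, j) whose feature similarity reaches the threshold
    static List<int[]> buildAssociations(double[][] similarity, double threshold) {
        List<int[]> edges = new ArrayList<>();
        for (int i = 0; i < similarity.length; i++) {
            for (int j = i + 1; j < similarity.length; j++) {
                if (similarity[i][j] >= threshold) {
                    edges.add(new int[]{i, j});
                }
            }
        }
        return edges;
    }
}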
(3) Generating a button for each item of key knowledge in the knowledge graph.
After the knowledge map of key knowledge is generated, a student can learn about the key knowledge of each subject from the map. However, since the items of key knowledge in the knowledge map do not yet have buttons, when the student needs to query a certain item of key knowledge, the student must still find the corresponding button in the electronic courseware of the subject to which that knowledge belongs and click it to perform a one-key query; in other words, the student still has to open the electronic courseware of a specific subject to use the one-key query function. In order to further improve the efficiency with which students query and retrieve key knowledge, the teaching apparatus 100 also generates a corresponding button for each item of key knowledge in the knowledge map using the button generation method described above, finally forming a knowledge map with buttons as shown in fig. 14 (C). When browsing this knowledge map, students can click the button corresponding to any item of key knowledge to perform a one-key query based on the map.
It can be understood that, since the label content on a key knowledge button is consistent with the key knowledge itself or with its keywords, in some embodiments of the present application the teaching apparatus 100 may also generate the knowledge map with buttons directly from the label content of the buttons corresponding to the key knowledge. Specifically, the teaching device 100 may extract features from the label content of the key knowledge buttons and then determine the association relationships between the corresponding buttons according to the relationship between the similarity of the label-content features and the preset similarity threshold. The way the teaching device 100 extracts the features of the label content, calculates the similarity between them, and determines the associations from that similarity may refer to the related description above and is not repeated here.
The above describes the method for generating buttons corresponding to key knowledge and constructing a knowledge map that includes the key knowledge of each subject together with the corresponding buttons. As described above, a student can perform a one-key query of key knowledge in the electronic courseware of each subject via the corresponding button, or can first learn about the key knowledge of all subjects from the knowledge map and perform a one-key query via the buttons in the map, thereby improving learning efficiency.
It can be understood that, in the process in which a student performs a one-key query through the button corresponding to an item of key knowledge, the learning device 200 needs to detect that the student has clicked a certain key knowledge button and then, in response to that click, display the query result corresponding to the button, as described below.
It should be understood that, in general, students query the key knowledge by clicking buttons corresponding to the key knowledge, but it is not excluded that a group other than the students, for example, teachers, parents, and other users query the key knowledge by using buttons corresponding to the key knowledge through other electronic devices (such as the teaching device 100, a smart phone, and the like). The following description will be given taking an example in which a user inquires about key knowledge on an electronic device through a key button corresponding to the key knowledge. Specifically, as shown in fig. 15, the method 1500 includes:
step 1502, detecting whether the user clicks a button corresponding to a certain key knowledge in the courseware.
It can be understood that, since each key knowledge has a corresponding button, the electronic device needs to detect whether the student clicks the button of the key knowledge and which key knowledge the student clicks the button corresponding to.
Specifically, in some embodiments, the electronic device may detect, through the touch sensor, whether the user has performed a click operation, and determine the key knowledge button corresponding to the position where the touch operation occurred. For example, the electronic device detects the position (x, y) of the student's touch operation through the touch sensor and then determines whether there is a key knowledge button at (x, y). If the position of the button of the key knowledge "hongmeng" of the Chinese subject in fig. 4 (A) is also (x, y), the button corresponding to the user's touch operation is the "hongmeng" button; if no key knowledge button coincides with the position (x, y), it means that there is no key knowledge button at that position.
It can be understood that, as described above, the actual position of a key knowledge button is in fact a region, so the electronic device may also determine whether the detected touch operation occurred within the region of a key knowledge button: if it did, the electronic device determines that the user clicked that key knowledge button; if it occurred in no button's region, the electronic device determines that the user did not click a key knowledge button. For example, assume that the area covered by the "hongmeng" button of the Chinese subject in fig. 4 (A) is (x1, x2, y1, y2); since the student's touch operation occurred at (x, y), if this position falls within the range (x1, x2, y1, y2), the user clicked the "hongmeng" button, and if it falls outside that range, the user did not click the "hongmeng" button.
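A minimal sketch of the hit test described above (the rectangular button regions and the touch coordinates are assumed to come from the button-generation step and the touch sensor, respectively):

import java.awt.Rectangle;
import java.util.Map;

public final class ButtonHitTest {
    // return the key knowledge whose button region contains the touch point,
    // or null when no key knowledge button was clicked
    static String hitTest(Map<String, Rectangle> buttonRegions, int x, int y) {
        for (Map.Entry<String, Rectangle> entry : buttonRegions.entrySet()) {
            if (entry.getValue().contains(x, y)) {
                return entry.getKey(); // e.g. "hongmeng"
            }
        }
        return null;
    }
}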
Step 1504, responding to the operation of clicking the button by the user, and opening a third-party application program to retrieve the key knowledge.
As described above, when the button is linked with the third-party application package, the user may cause the electronic device to open the third-party application for retrieving the important knowledge by clicking the button, and directly display the retrieval result in the third-party application.
Therefore, in some embodiments of the present application, when the user clicks the button, the electronic device opens the third-party application linked to the button, so that the third-party application searches the key knowledge.
Step 1506, displaying the search results associated with the key knowledge in the third party application.
Since the electronic device searches the important knowledge by opening the third-party application program, the search result related to the important knowledge is directly displayed in the third-party application program (see fig. 8 to 9 above).
It can be understood that, in some embodiments, the electronic device may instead display only the home interface of the third-party application within the third-party application; that is, after the user clicks the button, the electronic device invokes and opens the third-party application, but the third-party application does not perform the retrieval until the user clicks the corresponding search button in its home interface, after which the third-party application performs the retrieval and displays the result within the third-party application. In this case the retrieval process is similar to that used when the key knowledge button is linked to the home interface of the third-party application, the difference being where the retrieval result is displayed: in this embodiment, the result is displayed in the third-party application, whereas when the key knowledge button is linked to the home interface of the third-party application, the result is displayed in the current application that the user uses to browse the courseware.
Referring now to fig. 16, a process of querying for important knowledge by a user via a button corresponding to the important knowledge when the button is linked to the third party application home interface will be described, wherein the same steps as those in the method 1500 can be referred to the relevant description in the method 1500. Specifically, as shown in fig. 16, method 1600 includes:
step 1602: and detecting whether the user clicks a button corresponding to certain key knowledge in the courseware.
Step 1604: and responding to the operation of clicking the button by the user, and displaying a first interface of the third-party application program in the current application program.
It can be understood that the courseware is stored in a certain format, such as PDF, Word, or image, so the user needs a specific application to open it; for example, the user may open the courseware directly through the online class APP mentioned above, or through a PDF document editor. Therefore, the current application includes the online class APP, a PDF document editor, or another application that the user uses to open and view the courseware.
Since the button link is the home interface web page of the third-party application program, when the user clicks the button corresponding to the key knowledge in the current application program, the electronic device only displays the home interface of the third-party application program in the current application program in response to the operation of clicking the button by the user (see fig. 4 (B)).
Step 1606: and displaying the retrieval result in the current application program according to the instruction of starting the retrieval by the user.
After the electronic device displays the home interface of the third-party application program in the current application program, whether to start the retrieval needs to be determined according to a further operation instruction of the user. In some embodiments, after editing the retrieval keyword displayed in the search bar of the home interface of the third-party application, the user may click the search button in that home interface (for example, the "Baidu" search button in the home interface of the Baidu application) so that the third-party application retrieves the retrieval keyword. In other embodiments, the user may also directly click the search button in the home interface of the third-party application program, so that the third-party application program retrieves the retrieval keyword.
After the third-party application program searches the search keyword, the electronic device displays the webpage corresponding to the search result in the current application program (see fig. 4 (C)).
In order to more fully understand the teaching live broadcast method of the present application, the following takes the courseware on the teacher side as a paper courseware as an example, and introduces an interactive process for realizing the teaching live broadcast method in cooperation among the teaching equipment 100, the learning equipment 200 and the camera 300 in the teaching live broadcast scene shown in fig. 1.
It should be appreciated that the interaction process described below can be simplified to the interaction between the teaching apparatus 100 and the learning apparatus 200 when the courseware is an electronic courseware.
Specifically, as shown in fig. 17, method 1700 includes:
step 1702, responding to an operation instruction of the teacher, starting the camera, and acquiring an image of the paper courseware.
As mentioned above, courseware generally includes two forms, one is paper courseware and one is electronic courseware, and it can be understood that, when the courseware belongs to electronic courseware, the content of the electronic courseware does not need to be acquired by the camera 300, and the teaching device 100 and the learning device 200 can realize the synchronization of the electronic courseware in the teaching device and the learning device in a screen sharing manner through a network protocol; when the courseware belongs to the paper courseware, the camera 300 is needed to obtain the image of the paper courseware when the teaching live broadcast starts. In some embodiments, the camera 300 may be manually turned on by a teacher, or may be automatically turned on after the teacher enters the live teaching interface shown in fig. 2, for example, before the teacher enters the live teaching interface shown in fig. 2, the camera 300 may be manually turned on first, so as to debug the camera 300 in advance, and avoid that the camera 300 has a fault after the live teaching is started, which is not limited in this application.
In step 1704, the camera sends a preview frame of the image of the paper courseware to the teaching device.
It can be understood that before the live teaching broadcast starts, the camera 300 needs to send a preview frame of a document image to the teaching device, so that a teacher can determine whether to start live teaching broadcast according to the quality of the preview frame of the image of the paper courseware, for example, whether the image of the paper courseware is clear, whether the interface is stuck, and the like.
For example, in some embodiments, before starting live teaching, a teacher may determine whether to adjust network settings or change the camera 300 according to whether an image of a paper courseware acquired by the camera 300 is clear, whether a live broadcast interface is blocked when the camera 300 transmits a paper document image, and the like, so as to avoid that the network reasons or the hardware quality of the camera 300 affect the live teaching effect.
Step 1706, the teaching device sends the preview frame of the image of the paper courseware to the learning device.
It is understood that, limited by the quality of network transmission, there may be data loss, such as packet loss, during the network transmission process, which may cause problems such as the learning apparatus 200 and the teacher apparatus 100 displaying pictures being out of synchronization or failing to normally display pictures. Therefore, the student needs to determine whether the live teaching can be started according to the quality of the preview frame of the displayed image of the paper courseware, and thus, after the teaching device 100 acquires the preview frame of the image of the paper courseware sent by the camera 300, the teaching device 100 needs to send the preview frame of the image of the paper courseware to the learning device 200, so that the student determines whether the live teaching can be started according to the quality of the preview frame of the image of the paper courseware displayed on the learning device 200. For example, if the preview frame of the image of the paper document displayed on the learning apparatus 200 is very blurred, or the learning apparatus 200 cannot normally display any screen at all, the student can inform the teacher of the above situation through the voice call mode button on the learning apparatus 200 as shown in fig. 2, so as to adjust the network settings and the like under the guidance of the teacher, so that the live teaching can be smoothly carried out.
In step 1708, the camera 300 generates a snapshot including the key knowledge in response to an operation instruction for the teacher to save the key knowledge.
It can be understood that after the live teaching is started, the teacher may directly perform the focus marking on the paper courseware, and then the camera 300 acquires the paper courseware with the focus marking, or the teacher may perform the focus marking on the image of the paper courseware acquired by the camera 300 on the teaching device 100. The method for adding the key mark can refer to the description of the related content, and the description is not repeated here.
After the teacher finishes making the key mark, the teacher clicks the save button 106 in the live teaching interface in fig. 2 on the teaching device 100, so that the camera 300 can respond to the operation instruction, intercept the screen shot of the paper courseware including key knowledge in the current interface, and save the screen shot in the memory of the camera 300. It is to be appreciated that if the camera 300 is a camera integrated with the teaching apparatus 100, the camera 300 can save a snapshot of a paper courseware including key knowledge in the local memory of the teaching apparatus 100 after obtaining the screen snapshot.
Step 1710, the camera sends a snapshot including the key knowledge to the teaching device.
It can be understood that the camera 300 only needs to mark key points and key knowledge marked by teachers on the snapshot of the paper courseware including key knowledge, and does not have buttons corresponding to key knowledge, if the buttons shown in fig. 5 need to be generated, the camera 300 also needs to send the snapshot of the paper courseware including key knowledge to the teaching equipment 100 after the snapshot of the paper courseware including key knowledge is obtained, so that the teaching equipment 100 can generate the buttons of key knowledge according to the snapshot of the paper courseware including key knowledge. The manner of generating the button corresponding to the key knowledge by the teaching device 100 can refer to the description of the related content, and is not described herein again.
Step 1712, the teaching device identifies key marks in the snapshot including key knowledge and determines areas corresponding to the key marks.
The method by which the teaching device 100 identifies the key marks in the snapshot of the paper courseware and determines the areas corresponding to the key marks may refer to the description of the related content above and is not repeated here.
Step 1714, the teaching device identifies the content in the area and takes the content as key knowledge.
After the teaching device 100 determines the area corresponding to the key mark, it recognizes the content in the area by using the OCR character recognition technology and uses the content as key knowledge.
Step 1716, generating a button with a text label at the position of the key knowledge content.
The way for the teaching apparatus 100 to generate the button with the text label at the position of the key knowledge content may refer to the description of the related content, and is not described herein again.
Step 1718, extracting the characteristics of the key knowledge content, and then establishing an association relation for the key knowledge content based on the similarity between the characteristics of the key knowledge content.
The way of extracting the features of the key knowledge content and the way of establishing the association relationship for the key knowledge content by the teaching apparatus 100 can refer to the above description, and will not be described herein again.
Step 1720, responding to the operation instruction that the teacher finishes explaining, generating a snapshot set of the paper courseware.
The teacher clicks the exit button 102 shown in fig. 2 to end the live teaching, and the teaching apparatus 100 generates a snapshot set of the paper courseware including key knowledge. In some embodiments, the snapshot set may be composed of snapshots of all of the teacher's paper courseware, or only of the images of the paper courseware that include key knowledge content, which is not limited in this application.
Step 1722, the teaching device sends the snapshot set of the paper courseware to the learning device.
The teaching apparatus 100 sends the snapshot set of paper courseware to the learning apparatus 200 for the student to download and review courseware through a view courseware button (not shown) of the more buttons 208 in the teaching live interface shown in fig. 5. In some embodiments, a student may open a snapshot set of paper courseware for review at a live teaching interface as shown in fig. 5; in other embodiments, the student may open the snapshot set of paper courseware for review in other applications of the learning device 200, which is not limited in this application.
In the above embodiments, the teacher adds key marks to the courseware (electronic courseware or paper courseware) in the live teaching interface of the teaching device 100, and students gain a preliminary understanding of the key knowledge by following the teacher's train of thought during the live lesson. In order to allow students to preview the key knowledge of a subject in advance and thus improve the efficiency of the live teaching classroom, in other embodiments the teacher may also add key marks to the electronic or paper courseware in advance, outside the live teaching interface, and upload the courseware to the online class APP 10 shown in fig. 1. The background of the online class APP 10 can identify the key marks in the uploaded courseware and generate key knowledge buttons, so that students can download and view the courseware through the online class APP 10 on the learning device 200 before the live teaching begins and preview its contents.
Fig. 18 is a schematic diagram illustrating a hardware configuration of an exemplary teaching apparatus 100 according to some embodiments.
The teaching apparatus 100 may include a processor 110, an external storage interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the teaching apparatus 100. In other embodiments of the present application, instructional device 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In an embodiment of the present application, the processor 110 may perform the query method provided by the embodiments of the present application.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area.
The external storage interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the teaching device 100. The external memory card communicates with the processor 110 through the external storage interface 120 to implement a data storage function, for example, saving files such as music and video in the external memory card.
The charging management module 140 is configured to receive charging input, and the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display screen 194, and the like.
The wireless communication module 160 may provide solutions for wireless communication applied to the teaching apparatus 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
The mobile communication module 150 may provide solutions for wireless communication, including 2G/3G/4G/5G, applied to the teaching apparatus 100.
The teaching apparatus 100 implements a display function by the GPU, the display screen 194, and the application processor, etc.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio inputs into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The teaching apparatus 100 can implement a shooting function by the camera 193, the application processor, and the like. For example, in some embodiments, when a teacher uses paper courseware, the camera 193 of the teaching device 100 can obtain the content of the teacher in the paper courseware, as well as the accent marks that the teacher made on the paper courseware.
Fig. 19 is a block diagram illustrating a software structure of an exemplary teaching apparatus 100 according to some embodiments.
As shown in fig. 19, the tutorial device 100 can be divided into an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer.
Wherein the application layer may include a series of application packages.
As shown in fig. 19, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications. In embodiments of the present application, the application package may include a gallery application or the like.
The application framework layer may include a view system, a gesture recognition system, and the like.
In an embodiment of the present application, the gesture recognition system is used to recognize a user operation performed by a user on the gallery application on the screen of the tutorial device 100.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build a display interface for an application. The display interface may be composed of one or more display elements, where a display element refers to an element in the display interface of an application in the screen of the electronic device.
The Android runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part provides the functions that need to be called by the Java language, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
An embodiment of the present application further provides an electronic device, wherein the electronic device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, and the processor implements the steps of any of the above method embodiments when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps in the above method embodiments are implemented.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, all or part of the flow in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunication signal.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (14)

1. A query method applied to an electronic device, characterized by comprising the following steps:
displaying a first application window of a first application on the electronic device, wherein the first application window comprises a first interface, and the first interface comprises at least one retrieval identifier;
detecting a selection operation performed by a user on the retrieval identifier;
calling a second application, wherein the second application can be used for searching for a retrieval keyword corresponding to the retrieval identifier;
and displaying a retrieval result of the second application.
2. The method of claim 1, wherein the first application comprises at least one of an online live application, a teleconferencing application, and a document editing application.
3. The method of claim 1, wherein the retrieval identifier comprises at least one of a button, a link, and a label.
4. The method of claim 1, wherein the selection operation comprises: a touch operation or a click operation.
5. The method of claim 1, wherein said displaying the retrieval result of the second application comprises:
and displaying the retrieval result in a second application window of the second application, wherein the first application window and the second application window are independently displayed.
6. The method of claim 1, wherein said displaying the retrieval result of the second application comprises:
and displaying a second interface in the first application window, wherein the second interface displays an initial retrieval page of the second application, and a retrieval keyword corresponding to the retrieval identifier is input into a search bar of the initial retrieval page.
7. The method of claim 1, wherein the first interface displays a first document, and the retrieval identifier is based on at least a portion of the content in the first document.
8. The method of claim 7, wherein the retrieval identifier is generated by:
in the case of detecting a retrieval operation performed by a user on at least part of the content in the first document, generating a retrieval identifier for the at least part of the content, wherein the retrieval identifier is associated with an initial retrieval page of the second application, or the retrieval identifier is associated with the second application.
9. The method of claim 8, wherein the retrieval operation performed by the user on at least part of the content in the first document comprises: the user marking at least part of the content in the first document.
10. The method of claim 1 or 8, wherein the first interface displays a knowledge graph of the first document, the knowledge graph including associations among a plurality of retrieval identifiers.
11. The method of claim 10, wherein the knowledge graph is generated by:
extracting features of the partial content in the first document corresponding to each retrieval identifier among a plurality of retrieval identifiers;
calculating similarities among the plurality of features corresponding to the plurality of retrieval identifiers;
and generating the knowledge graph according to the similarities among the plurality of features.
12. The method of claim 11, wherein the features include image features and text features.
13. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the query method of any one of claims 1 to 12 when executing the computer program.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the query method according to any one of claims 1 to 12.
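As an illustration of the knowledge-graph generation recited in claims 11 and 12, the following is a minimal sketch in Java. It assumes that a feature vector (for example, concatenated image and text features) has already been extracted for the partial content corresponding to each retrieval identifier; the class name, the cosine-similarity measure, and the threshold are assumptions chosen for illustration and are not limitations of the claims.

// Minimal sketch (illustrative only) of generating a knowledge graph from
// per-identifier feature vectors: compute pairwise cosine similarity and
// connect retrieval identifiers whose similarity exceeds a threshold.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KnowledgeGraphBuilder {  // hypothetical class name

    // Cosine similarity between two feature vectors of equal length.
    static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB) + 1e-12);  // guard against zero vectors
    }

    // Builds an adjacency list: each retrieval identifier is associated with the
    // identifiers whose features are sufficiently similar to its own.
    static Map<String, List<String>> buildGraph(Map<String, double[]> features, double threshold) {
        Map<String, List<String>> graph = new HashMap<>();
        List<String> ids = new ArrayList<>(features.keySet());
        for (String id : ids) {
            graph.put(id, new ArrayList<>());
        }
        for (int i = 0; i < ids.size(); i++) {
            for (int j = i + 1; j < ids.size(); j++) {
                double sim = cosineSimilarity(features.get(ids.get(i)), features.get(ids.get(j)));
                if (sim >= threshold) {  // similar knowledge points become neighbors in the graph
                    graph.get(ids.get(i)).add(ids.get(j));
                    graph.get(ids.get(j)).add(ids.get(i));
                }
            }
        }
        return graph;
    }
}

The similarity measure and the rule for adding associations could of course be chosen differently; the sketch only shows one straightforward way to turn pairwise similarities among features into the associations between retrieval identifiers that make up the knowledge graph.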
CN202110985097.6A 2021-08-25 2021-08-25 Query method, electronic device, and medium therefor Pending CN115729424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110985097.6A CN115729424A (en) 2021-08-25 2021-08-25 Query method, electronic device, and medium therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110985097.6A CN115729424A (en) 2021-08-25 2021-08-25 Query method, electronic device, and medium therefor

Publications (1)

Publication Number Publication Date
CN115729424A (en) 2023-03-03

Family

ID=85289846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110985097.6A Pending CN115729424A (en) 2021-08-25 2021-08-25 Query method, electronic device, and medium therefor

Country Status (1)

Country Link
CN (1) CN115729424A (en)

Similar Documents

Publication Publication Date Title
CN107210033B (en) Updating language understanding classifier models for digital personal assistants based on crowd sourcing
WO2022142014A1 (en) Multi-modal information fusion-based text classification method, and related device thereof
CN107608652B (en) Method and device for controlling graphical interface through voice
KR102270394B1 (en) Method, terminal, and storage medium for recognizing an image
US20210004405A1 (en) Enhancing tangible content on physical activity surface
CN107609092B (en) Intelligent response method and device
US20100100371A1 (en) Method, System, and Apparatus for Message Generation
CN107294837A (en) Engaged in the dialogue interactive method and system using virtual robot
CN105204886B (en) A kind of method, user terminal and server activating application program
US20200412864A1 (en) Modular camera interface
CN112752121B (en) Video cover generation method and device
CN106527945A (en) Text information extracting method and device
CN107527619A (en) The localization method and device of Voice command business
CN101231567A (en) Human-computer interaction method and system base on hand-written identification as well as equipment for running said system
KR20200115625A (en) How to learn personalized intent
KR20220155601A (en) Voice-based selection of augmented reality content for detected objects
WO2021139486A1 (en) Text incrementation method and apparatus, and terminal device
CN111460231A (en) Electronic device, search method for electronic device, and medium
CN110430356A (en) One kind repairing drawing method and electronic equipment
CN112732379A (en) Operation method of application program on intelligent terminal, terminal and storage medium
CN115729424A (en) Query method, electronic device, and medium therefor
WO2022247466A1 (en) Resource display method, terminal and server
US11978252B2 (en) Communication system, display apparatus, and display control method
US12010257B2 (en) Image classification method and electronic device
WO2020235538A1 (en) System and stroke data processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination