CN111327943B - Information management method, device, system, computer equipment and storage medium


Info

Publication number
CN111327943B
CN111327943B (application CN201910646245.4A)
Authority
CN
China
Prior art keywords
information
target
multimedia
target user
state information
Prior art date
Legal status
Active
Application number
CN201910646245.4A
Other languages
Chinese (zh)
Other versions
CN111327943A (en)
Inventor
王洁
金鑫
周理孟
沈鹏辉
Current Assignee
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd
Priority to CN201910646245.4A
Publication of CN111327943A
Application granted
Publication of CN111327943B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439: Processing of audio elementary streams
    • H04N21/4394: Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213: Monitoring of end-user related data
    • H04N21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466: Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668: Learning process for intelligent management for recommending content, e.g. movies
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/854: Content authoring
    • H04N21/8547: Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an information management method, device, system, computer equipment and storage medium, and belongs to the technical field of electronics. The method comprises the following steps: sending interaction information to a target terminal corresponding to a target user based on state information of the target user when receiving first multimedia information within a specific time period, so as to wait for the target user to input feedback information according to the interaction information, wherein the interaction information is associated with a first key information point contained in the first multimedia information, and the state information comprises behavior state information and/or face state information of the target user; receiving the feedback information of the target user on the interaction information, sent by the target terminal; and when the matching rate of the feedback information and specified feedback information is smaller than a matching rate threshold, sending second multimedia information corresponding to the first key information point to the target terminal. The invention effectively improves the sending efficiency and management flexibility of the multimedia information sent to the target user.

Description

Information management method, device, system, computer equipment and storage medium
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an information management method, apparatus, system, computer device, and storage medium.
Background
During the playing of multimedia information, users differ in how well they absorb it. Therefore, the multimedia playing information, or multimedia information associated with the multimedia playing information, is usually sent to the user after the multimedia information has been played, so that the user can review the key information points related to the multimedia playing information.
However, the multimedia playing information sent to the user after playing contains a large amount of content, which results in low sending efficiency of the multimedia information and poor management flexibility.
Disclosure of Invention
The invention provides an information management method, an information management device, an information management system, computer equipment and a storage medium, which can solve the problems of low multimedia information sending efficiency and poor management flexibility in the related technology. The technical scheme is as follows:
in a first aspect, an information management method is provided, where the method includes:
sending interaction information to a target terminal corresponding to a target user based on state information of the target user when receiving first multimedia information within a specific time period so as to wait for the target user to input feedback information according to the interaction information, wherein the interaction information is associated with a first key information point contained in the first multimedia information, and the state information comprises behavior state information and/or face state information of the target user;
receiving feedback information of the target user on the interactive information, which is sent by the target terminal;
and when the matching rate of the feedback information and the specified feedback information is smaller than a matching rate threshold value, sending second multimedia information corresponding to the first key information point to the target terminal.
Optionally, before the sending the interaction information to the target terminal corresponding to the target user based on the state information of the target user when receiving the first multimedia information within the specific time period, the method further includes:
identifying a plurality of user images acquired in the specific time period to obtain the behavior state information, wherein the behavior state information comprises first behavior state information, second behavior state information, third behavior state information and/or fourth behavior state information;
wherein the first behavior state information is used for indicating whether the target user generates the target behavior within the specific time period;
the second behavior state information is the proportion of the duration for which the target user generates the target behavior to the total duration for which the users corresponding to the plurality of user images generate the target behavior;
the third behavior state information is a ratio of the number of times the target user generates the target behavior to the total number of times the users corresponding to the plurality of user images generate the target behavior;
the fourth behavior state information is used for indicating whether the target user generates the target behavior at a specific time point within the specific time period.
Optionally, before the sending the interaction information to the target terminal corresponding to the target user based on the state information of the target user when receiving the first multimedia information within the specific time period, the method further includes:
identifying a plurality of user images acquired in the specific time period to obtain the face state information, wherein the face state information comprises first face state information, second face state information, third face state information, fourth face state information and/or fifth face state information;
wherein the first face state information is used to indicate whether the target user generates the target face state within the specific time period;
the second face state information is the proportion of the duration for which the target user is in the target face state to the total duration for which the users corresponding to the plurality of user images are in the target face state;
the third face state information is a ratio of a total number of first images including an image of the target user whose face state is the target face state to a total number of second images including an image of the target user;
the fourth face state information indicates a number of times the target user generated the target face state;
the fifth face state information is used to indicate whether the target user generates the target face state at a specific time point within the specific time period.
Optionally, the state information further includes attendance information, which reflects whether the target user is in attendance within the specific time period. Before the sending of the interaction information to the target terminal corresponding to the target user based on the state information of the target user when receiving the first multimedia information within the specific time period, the method further includes:
determining whether the target user is on duty in the specific time period based on a plurality of user images acquired in the specific time period;
and determining the attendance checking information based on the judgment result.
Optionally, before the sending of the second multimedia information corresponding to the first key information point to the target terminal, the method further includes:
when the first multimedia information comprises a plurality of pieces of multimedia sub information adjacent in time sequence, dividing the first multimedia information into a plurality of information sets according to the playing state of the first multimedia information, wherein the multimedia sub information is a multimedia image or an audio clip;
correspondingly, the sending of the second multimedia information corresponding to the first key information point to the target terminal includes:
determining, in the first multimedia information, at least one information set in which the information related to the first key information point is located;
and sending second multimedia information consisting of the at least one information set to the target terminal.
Optionally, the first multimedia information includes a plurality of sub-information groups adjacent in time sequence, each sub-information group corresponds to a key information point,
the dividing the first multimedia information into a plurality of information sets according to the playing state of the first multimedia information includes:
if the content of two adjacent pieces of multimedia sub information in time sequence changes, creating a time stamp at the initial playing time of the multimedia sub information with the time sequence later in the two pieces of multimedia sub information, and forming one information set by the multimedia sub information between any two adjacent time stamps;
correspondingly, the sending of the second multimedia information composed of the at least one information set to the target terminal includes:
determining first target multimedia sub-information with the earliest time sequence and second target multimedia sub-information with the latest time sequence in a sub-information group corresponding to the first key information point;
and sending second multimedia information consisting of the multimedia sub-information between a first timestamp and a second timestamp to the target terminal, wherein the first timestamp is the timestamp that precedes the first target multimedia sub-information in time sequence and is closest to the first target multimedia sub-information, and the second timestamp is the timestamp that follows the second target multimedia sub-information in time sequence and is closest to the second target multimedia sub-information.
Optionally, before the sending, to the target terminal, the second multimedia information corresponding to the first key information point, the method further includes:
when the first multimedia information comprises a plurality of multimedia sub information adjacent in time sequence, grouping the plurality of multimedia sub information to obtain a plurality of information sets, wherein each information set corresponds to a key information point, and each multimedia sub information is a multimedia image or an audio clip;
correspondingly, the sending of the second multimedia information corresponding to the first key information point to the target terminal includes:
and sending the second multimedia information consisting of the information set corresponding to the first key information point to the target terminal.
Optionally, the sending, to a target terminal corresponding to a target user, interactive information based on state information of the target user when receiving first multimedia information within a specific time period includes:
determining a target information amount of the interactive information related to the first key information point based on the state information of the target user;
and sending the interactive information of the target information amount to the target terminal.
Optionally, before the sending the interaction information to the target terminal corresponding to the target user based on the state information of the target user when receiving the first multimedia information within the specific time period, the method further includes:
identifying first text information of each multimedia image in one or more multimedia images corresponding to the first multimedia information,
acquiring a first keyword in each first text message,
acquiring a key information point related to the content of each multimedia image based on each first keyword to obtain a key information point contained in the first multimedia information;
or converting the target audio corresponding to the first multimedia information into second text information,
acquiring a second keyword in the second text information,
and acquiring key information points related to the content of the target audio based on the second keywords to obtain the key information points contained in the first multimedia information.
Optionally, the multimedia sub information is a multimedia image, and the grouping the plurality of multimedia sub information to obtain a plurality of sub information groups includes:
judging whether the first keywords of every two adjacent multimedia images in the time sequence are the same correspondingly;
and when the first keywords of the two multimedia images are correspondingly the same, dividing the two multimedia images into the same image group.
Optionally, the grouping the multimedia sub information to obtain a plurality of sub information groups includes:
judging whether the second keywords of every two adjacent audio clips in the time sequence are the same correspondingly;
and when the second keywords of the two audio clips are correspondingly the same, dividing the two audio clips into the same audio group.
In a second aspect, there is provided an information management apparatus, the apparatus comprising:
the sending module is used for sending interaction information to a target terminal corresponding to a target user based on state information of the target user when the target user receives first multimedia information within a specific time period so as to wait for the target user to input feedback information according to the interaction information, wherein the interaction information is associated with a first key information point contained in the first multimedia information, and the state information comprises behavior state information and/or face state information of the target user;
the receiving module is used for receiving feedback information of the target user on the interactive information, which is sent by the target terminal;
the sending module is further configured to send, to the target terminal, second multimedia information corresponding to the first key information point when a matching rate of the feedback information and the specified feedback information is smaller than a matching rate threshold.
Optionally, the apparatus further comprises:
the first identification module is used for identifying a plurality of user images acquired in the specific time period to obtain the behavior state information, and the behavior state information comprises first behavior state information, second behavior state information, third behavior state information and/or fourth behavior state information;
wherein the first behavior state information is used for indicating whether the target user generates the target behavior within the specific time period;
the second behavior state information is the proportion of the duration of the target behavior generated by the target user to the total duration of the target behavior generated by the users corresponding to the plurality of user images;
the third behavior state information is a ratio of the number of times the target user generates the target behavior to the total number of times the users corresponding to the plurality of user images generate the target behavior;
the fourth behavior state information is used for indicating whether the target user generates the target behavior at a specific time point within the specific time period.
Optionally, the apparatus further comprises:
the first identification module is used for identifying a plurality of user images acquired in the specific time period to obtain the face state information, wherein the face state information comprises first face state information, second face state information, third face state information, fourth face state information and/or fifth face state information;
wherein the first face state information is used to indicate whether the target user generates the target face state within the specific time period;
the second face state information is the proportion of the duration for which the target user is in the target face state to the total duration for which the users corresponding to the plurality of user images are in the target face state;
the third face state information is a ratio of a total number of first images to a total number of second images, the first images including an image of the target user whose face state is the target face state, and the second images including an image of the target user;
the fourth face state information indicates a number of times the target user generated the target face state;
the fifth face state information is used to indicate whether the target user generates the target face state at a specific time point within the specific time period.
Optionally, the status information further includes: attendance information which reflects whether the target user is attendance within a specific time period, and the device further comprises:
the judging module is used for judging whether the target user is on duty in the specific time period based on a plurality of user images acquired in the specific time period;
and the determining module is used for determining the attendance information based on the judgment result.
Optionally, the apparatus further comprises:
the dividing module is used for dividing the first multimedia information into a plurality of information sets according to the playing state of the first multimedia information when the first multimedia information comprises a plurality of multimedia sub information adjacent in time sequence, wherein the multimedia sub information is a multimedia image or an audio clip;
the sending module is further configured to:
determining, in the first multimedia information, at least one information set in which the information related to the first key information point is located;
and sending second multimedia information consisting of the at least one information set to the target terminal.
Optionally, the first multimedia information includes a plurality of sub-information groups adjacent in time sequence, each sub-information group corresponds to a key information point,
the dividing module is specifically configured to:
if the content of two adjacent pieces of multimedia sub information in time sequence changes, creating a time stamp at the initial playing time of the multimedia sub information with the later time sequence in the two pieces of multimedia sub information, and forming one information set by the multimedia sub information between any two adjacent time stamps;
correspondingly, the sending module is specifically configured to:
determining first target multimedia sub-information with the earliest time sequence and second target multimedia sub-information with the latest time sequence in a sub-information group corresponding to the first key information point;
and sending second multimedia information consisting of the multimedia sub-information between a first timestamp and a second timestamp to the target terminal, wherein the first timestamp is the timestamp that precedes the first target multimedia sub-information in time sequence and is closest to the first target multimedia sub-information, and the second timestamp is the timestamp that follows the second target multimedia sub-information in time sequence and is closest to the second target multimedia sub-information.
Optionally, the apparatus further comprises:
the dividing module is used for grouping the multimedia sub-information to obtain a plurality of information sets when the first multimedia information comprises a plurality of multimedia sub-information which are adjacent in time sequence, wherein each information set corresponds to a key information point, and each multimedia sub-information is a multimedia image or an audio clip;
correspondingly, the sending module is specifically configured to:
and sending the second multimedia information consisting of the information set corresponding to the first key information point to the target terminal.
Optionally, the sending module is configured to:
determining a target information amount of the interactive information related to the first key information point based on the state information of the target user;
and sending the interactive information of the target information amount to the target terminal.
Correspondingly, the device further comprises:
a second identification module for identifying first text information of each multimedia image in one or more multimedia images corresponding to the first multimedia information,
an obtaining module, configured to obtain a first keyword in each of the first text messages,
the obtaining module is further configured to obtain, based on each first keyword, a key information point related to the content of each multimedia image, to obtain a key information point included in the first multimedia information;
alternatively, the apparatus further comprises:
a conversion module for converting the target audio corresponding to the first multimedia information into second text information,
the acquisition module is used for acquiring a second keyword in the second text information,
the obtaining module is further configured to obtain, based on the second keyword, a key information point related to the content of the target audio to obtain a key information point included in the first multimedia information.
In a third aspect, an information management system is provided, which includes a terminal and a server, and the server includes the information management apparatus according to the second aspect.
In a fourth aspect, a computer device is provided, comprising a processor and a memory;
wherein:
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory, and implement the information management method according to any one of the first aspect.
In a fifth aspect, a storage medium is provided, wherein a computer program is stored in the storage medium, and when being executed by a processor, the computer program realizes the information management method of any one of the first aspects.
In a sixth aspect, embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the information management method of any one of the first aspect.
The technical scheme provided by the invention has the beneficial effects that:
according to the information management method, the information management device, the information management system, the computer equipment and the storage medium, the second multimedia information is obtained by screening the first multimedia information according to the state information, and the content of the second multimedia information is less; and the second multimedia information is determined according to the feedback information of the interactive information, and has more pertinence to the target user, so that the efficiency and the management flexibility of the multimedia information sent to the target user are effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an information management system according to an embodiment of the present invention;
fig. 2 is a flowchart of an information management method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another information management method provided by an embodiment of the invention;
FIG. 4 is a flowchart of a method for dividing a first multimedia message into a plurality of message sets according to an embodiment of the present invention;
FIG. 5 is a flowchart of another method for dividing first multimedia information into a plurality of information sets according to an embodiment of the present invention;
fig. 6 is a flowchart of a method for acquiring key information points included in a multimedia image according to an embodiment of the present invention;
FIG. 7 is a structural diagram of a knowledge tree structure provided by an embodiment of the invention;
fig. 8 is a flowchart of a method for acquiring key information points included in an audio clip according to an embodiment of the present invention;
fig. 9 is a flowchart of a method for acquiring third behavior state information according to an embodiment of the present invention;
FIG. 10 is a flowchart of a method for obtaining third face state information according to an embodiment of the present invention;
fig. 11 is a flowchart of a method for acquiring attendance information according to an embodiment of the present invention;
fig. 12 is a flowchart of a method for determining a target information amount of interaction information related to a first key information point according to an embodiment of the present invention;
fig. 13 is a flowchart of a method for sending second multimedia information corresponding to a first key information point to a target terminal according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an information management apparatus according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of another information management apparatus according to an embodiment of the present invention;
fig. 16 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of an information management system related to an information management method according to an embodiment of the present invention. As shown in fig. 1, the system may include: a server 110 and at least one terminal 120.
Wherein, the server 110 and the terminal 120 can establish a connection through a wired network or a wireless network. The server 110 may obtain state information of a target user when receiving first multimedia information within a specific time period, determine the absorption degree of the target user for the first multimedia information according to the state information, send interaction information to a target terminal corresponding to the target user, and then determine whether to send second multimedia information to the target user according to feedback information of the target user for the interaction information, so that the target user receives a key information point related to the interaction information again.
Alternatively, the first multimedia information may be a multimedia image or audio. For example, the first multimedia information may be a picture, a video or an audio of a lecture in the course of the lecture.
For example, the information management method may be applied to a teaching scene. Accordingly, the multimedia image may be teaching courseware (e.g., slides used for teaching), a teaching video recorded during the teaching process, or a teaching picture taken during the teaching process. The content of the first multimedia information may include one or more of the image content of a teaching video, voice content, the image content of a lecturer, and the image content of blackboard writing. The audio may be teaching audio, or audio recorded during the teaching process.
For example, the information management method may also be applied to a conference scene, and accordingly, the multimedia image may be a conference video or a conference PPT. The audio may be a conference recording.
For example, the information management method can also be applied to a concert scene, and accordingly, the multimedia image can be a concert video or picture. The audio may be recorded for a concert.
The information management method provided by the embodiment of the invention can also be applied to other scenes in which multimedia information pushing is required, and the embodiment of the invention does not limit the scenes.
The multimedia image can be acquired by an image acquisition device arranged in a target environment, and can also be generated in advance; the audio may be captured by an audio capture device disposed in the target environment, or may be pre-generated. For example, the target environment may be a lecture environment. The target environment may be a classroom, and the image capturing device may be disposed in the middle or at the back of the classroom, and the camera of the image capturing device faces the front of the classroom, so as to record teaching videos during the course of teaching. Optionally, in order to record a teaching audio in a teaching process, the audio acquisition device may be disposed on a desk of a teacher, or the audio acquisition device may also be a headset or a hanging microphone, or may be a built-in microphone of a teaching device in a teaching environment, which is not specifically limited in this embodiment of the present invention.
The server 110 may be a single server, a server cluster composed of a plurality of servers, or a cloud computing service center. The terminal 120 may be a smartphone, a computer, a tablet computer, a multimedia player, an e-reader, a wearable device, or the like. The terminal 120 may log in to an account of the target user, so that after the server 110 sends the interactive information or the second multimedia information to the terminal 120, the target user can view it by logging in to the target account.
The following describes an information management method provided in an embodiment of the present invention. As shown in fig. 2, the method may include:
step 201, based on the state information of the target user when receiving the first multimedia information in a specific time period, sending interaction information to a target terminal corresponding to the target user, so that the target user inputs feedback information according to the interaction information.
The interactive information is associated with a first key information point contained in the first multimedia information, and the state information comprises behavior state information and/or face state information of the target user. The first key information point may be one or more key information points.
Step 202, receiving feedback information of the target user to the interactive information, which is sent by the target terminal.
And 203, when the matching rate of the feedback information and the specified feedback information is smaller than the matching rate threshold, sending second multimedia information corresponding to the first key information point to the target terminal.
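For illustration only, the following minimal Python sketch traces steps 201 to 203 end to end. Every helper here (get_state_info, build_interaction, receive_feedback, the matching_rate definition, and the 0.6 threshold) is an assumed name introduced for the example, not an interface defined by this embodiment.

```python
def manage_information(server, target_user, first_multimedia,
                       specific_period, match_threshold=0.6):
    """Sketch of steps 201-203; all helper names are hypothetical."""
    # Step 201: send interaction information tied to the first key
    # information point, chosen using the user's state (behavior and/or
    # face state) during the specific time period.
    state = server.get_state_info(target_user, specific_period)
    key_point = first_multimedia.first_key_information_point
    interaction = server.build_interaction(key_point, state)
    server.send(target_user.terminal, interaction)

    # Step 202: receive the user's feedback from the target terminal.
    feedback = server.receive_feedback(target_user.terminal)

    # Step 203: if the feedback matches the specified feedback poorly,
    # push the second multimedia information for the key point.
    if matching_rate(feedback, interaction.specified_feedback) < match_threshold:
        second_multimedia = first_multimedia.extract(key_point)
        server.send(target_user.terminal, second_multimedia)

def matching_rate(feedback, specified):
    """One possible definition: the fraction of specified items hit."""
    if not specified:
        return 1.0
    return sum(1 for item in specified if item in feedback) / len(specified)
```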
In summary, in the information management method provided in the embodiment of the present invention, the second multimedia information is obtained by screening the first multimedia information according to the state information, and the content of the second multimedia information is less; and the second multimedia information is determined according to the feedback information of the interactive information and has more pertinence to the target user, so that the efficiency and the management flexibility of the multimedia information sent to the target user are effectively improved.
Fig. 3 is a flowchart of another information management method according to an embodiment of the present invention. The method may be applied to the server 110 shown in fig. 1. For the convenience of the reader to understand, the information management method is described in the embodiment of the present invention by taking a process of performing information management on a first key information point in first multimedia information as an example, where the first key information point is one or more key information points. As shown in fig. 3, the method may include:
step 301, obtaining a key information point in the first multimedia information.
Since the subsequent interactive information is associated with the key information point in the first multimedia information, the key information point to which the first multimedia information relates needs to be determined before the interactive information is pushed.
In a first implementation manner, the key information points of the first multimedia information are pre-divided, and the content of the key information points is known, and the server can directly obtain the key information points. Such as from a designated storage location.
In a second implementation manner, a plurality of multimedia sub information included in the first multimedia information is divided into a plurality of information sets. Each information set corresponds to a key information point.
Optionally, the first multimedia information may include a plurality of multimedia sub information adjacent in time sequence, and the multimedia sub information may be a multimedia image or an audio clip, at this time, there may be a plurality of implementation manners for dividing the plurality of multimedia sub information into a plurality of information sets, and the following examples are described below:
In a first implementation manner of the division, the first multimedia information may be divided into a plurality of information sets according to the playing status of the first multimedia information, where the playing status reflects changes in the content of the first multimedia information and may differ for different types of first multimedia information. For example, if the multimedia information is a PPT presentation, the playing status is a page-turning status, and one or more pages of multimedia images in the PPT are divided into one information set. For another example, if the multimedia information is a video and timestamps are set at the image frames where the video changes, the image frames between every two adjacent timestamps can be determined as one information set.
In one implementation, the playing status may be reflected by a timestamp of the multimedia sub-information, and accordingly, as shown in fig. 4, the implementation process of dividing the first multimedia information into a plurality of information sets according to the playing status of the first multimedia information may include the following steps:
step 3011a, in the plurality of multimedia sub information adjacent in time sequence included in the first multimedia information, determining whether contents of two pieces of multimedia sub information adjacent in time sequence change.
In a first implementation manner of step 3011a, when the first multimedia information is a multimedia image and the multimedia sub information is a multimedia image, the content of the multimedia sub information can be represented by the content displayed by the multimedia sub information. At this time, for each of a plurality of pixel positions in the designated area of the multimedia image, a pixel value difference value of a pixel located at the same pixel position in two multimedia images adjacent in time series may be determined first. Then, the total number of the pixel value difference values corresponding to the plurality of pixel positions in the preset difference value range is determined. And when the total number is larger than a preset total number threshold value, determining that the image content displayed by the two multimedia images changes.
The preset total number threshold can be understood as the maximum number of pixel positions at which two multimedia images may display different content while their displayed contents are still considered the same within the error range. Therefore, when the total number is greater than the preset total number threshold, it can be determined that the image content displayed by the later multimedia image has actually changed compared with that displayed by the earlier multimedia image.
The preset difference range can be set according to actual needs. For example, for two time-sequentially adjacent multimedia images A_{t-1} and A_t with a designated area of size a × b, if the pixel values I_t(i, j) and I_{t-1}(i, j) at the pixel position in row i and column j of the designated area satisfy |I_t(i, j) - I_{t-1}(i, j)| < τ, it may be determined that the pixel value difference is within the preset difference range. Illustratively, the value of τ may be 0.5.
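As a concrete illustration of this pixel-difference check, the sketch below compares the designated areas of two time-adjacent frames. It assumes normalized grayscale arrays, and it interprets "within the preset difference range" as a per-pixel difference of at least τ (i.e., a visibly changed pixel), which is consistent with the explanation of the preset total number threshold above; the τ and threshold values are placeholders.

```python
import numpy as np

def content_changed(img_prev: np.ndarray, img_curr: np.ndarray,
                    region: tuple, tau: float = 0.5,
                    total_threshold: int = 500) -> bool:
    """Detect a content change between two time-adjacent multimedia images.

    region: (row0, row1, col0, col1) bounds of the designated a x b area.
    Assumes both images are normalized grayscale arrays of equal shape.
    """
    r0, r1, c0, c1 = region
    a = img_prev[r0:r1, c0:c1].astype(np.float64)
    b = img_curr[r0:r1, c0:c1].astype(np.float64)
    # Per-pixel difference |I_t(i, j) - I_{t-1}(i, j)| over the area.
    diff = np.abs(b - a)
    # Count pixel positions whose difference is assumed "in the preset
    # difference range", read here as at least tau (a changed pixel).
    changed_pixels = int(np.count_nonzero(diff >= tau))
    # More differing pixels than the tolerance threshold: content changed.
    return changed_pixels > total_threshold
```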
In a second implementation manner of step 3011a, when the first multimedia information is a multimedia image and the multimedia sub information is a multimedia image, the first text information displayed by each piece of multimedia sub information may be obtained, text matching may then be performed on the first text information corresponding to two time-sequentially adjacent pieces of multimedia sub information, and when the matching degree of the two pieces of text information is lower than a preset matching degree threshold, it is determined that their content has changed.
After the multimedia images are obtained, the server can perform character recognition on characters displayed in each multimedia image, or can perform recognition on characters displayed in a designated area in each multimedia image, and output first text information displayed in each multimedia image according to a character recognition result. The designated area may be an area of the multimedia image for displaying the main content of the multimedia image. For example, when the multimedia image is a teaching image, the designated area may be an area in which at least one of an image content of the multimedia image for displaying a teaching video, an image content of a teacher giving a teaching, and an image content of a blackboard-writing on a blackboard is located in the multimedia image. Alternatively, the first text information displayed by the lecture image may be acquired by an Optical Character Recognition (OCR) method.
For example, when the first multimedia information is a teaching video acquired from a video source of a player for playing the teaching video, the entire area of the multimedia image of the first multimedia information is displayed with the image content of the teaching video, and thus, the designated area may be the entire area of the multimedia image. Alternatively, when the multimedia image of the first multimedia information includes the image content of the teaching video and the wall in front of the classroom, the designated area is an area for displaying the teaching video. The teaching Video may be a signal accessed by a Video Graphics Array (VGA) Interface or a High Definition Multimedia Interface (HDMI).
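A minimal sketch of this text-matching variant is given below; the OCR step is assumed to have already produced the first text information of each image, and difflib's SequenceMatcher stands in for whatever matching-degree measure an implementation actually uses.

```python
from difflib import SequenceMatcher

def text_content_changed(text_prev: str, text_curr: str,
                         match_threshold: float = 0.8) -> bool:
    """Compare the OCR'd first text information of two adjacent images."""
    # Matching degree of the two pieces of text information.
    matching_degree = SequenceMatcher(None, text_prev, text_curr).ratio()
    # Below the preset matching degree threshold: content has changed.
    return matching_degree < match_threshold
```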
In a third implementation manner, when the first multimedia information is audio and the multimedia sub information is an audio segment, each audio segment may be converted into second text information, text matching may then be performed on the second text information corresponding to two time-sequentially adjacent pieces of multimedia sub information, and when the matching degree of the two pieces of text information is lower than a preset matching degree threshold, it is determined that their content has changed.
After the audio clip is obtained, the server may perform analog-to-digital conversion on the audio clip, extract acoustic features in the converted audio clip, and then decode the acoustic features according to a preset acoustic model and a language model to obtain second text information corresponding to the audio clip. In addition, in order to ensure the accuracy of the second text information obtained according to the audio clip, the audio clip may be preprocessed before the acoustic features in the audio clip are extracted.
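The audio variant reduces to the same comparison once each clip is transcribed. In the sketch below, asr_decode is a hypothetical speech-to-text callable standing in for the acoustic-feature extraction and acoustic/language-model decoding described above.

```python
from difflib import SequenceMatcher

def audio_content_changed(clip_prev: bytes, clip_curr: bytes,
                          asr_decode, match_threshold: float = 0.8) -> bool:
    """Compare two time-adjacent audio segments via their transcripts.

    asr_decode: hypothetical speech-to-text callable (acoustic feature
    extraction plus acoustic/language-model decoding).
    """
    text_prev, text_curr = asr_decode(clip_prev), asr_decode(clip_curr)
    return SequenceMatcher(None, text_prev, text_curr).ratio() < match_threshold
```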
Step 3012a, when the content of two multimedia sub-messages adjacent in time sequence changes, a time stamp is created at the starting playing time of the multimedia sub-message with the later time sequence in the two multimedia sub-messages, and the multimedia sub-messages between any two adjacent time stamps form an information set.
When the content of two multimedia sub-information adjacent in sequence is not changed, the key point information reflected by the two multimedia sub-information is not changed. When the content of two multimedia sub-information adjacent in the time sequence changes, the key point information reflected by the two multimedia sub-information may change, so that the time stamp may be created at the initial playing time of the multimedia sub-information with the later time sequence in the two multimedia sub-information. At this time, since the contents of the multimedia sub-information between every two time-sequentially adjacent time stamps are the same, the multimedia sub-information between every two time-sequentially adjacent time stamps can be divided into one information set.
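A minimal sketch of this segmentation step: walking the time-ordered sub-information once and opening a new information set wherever the content changes (the point where a timestamp would be created). The changed predicate can be any of the comparison functions sketched above.

```python
def split_by_timestamps(sub_items, changed):
    """Divide time-ordered multimedia sub-information into information sets.

    sub_items: sub-information (images or audio segments) in play order.
    changed:   callable(prev, curr) -> bool, e.g. one of the comparison
               functions sketched above.
    """
    info_sets = []
    for i, item in enumerate(sub_items):
        # A timestamp is created at the start of each item whose content
        # differs from its predecessor; it opens a new information set.
        if i == 0 or changed(sub_items[i - 1], item):
            info_sets.append([])
        info_sets[-1].append(item)
    return info_sets
```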
In a second implementation manner of the division, a plurality of multimedia sub-information may be grouped to obtain a plurality of information sets, and each information set corresponds to one key information point. As shown in fig. 5, the implementation process may include:
and step 3011b, obtaining the key information points included in the multimedia sub information.
Alternatively, the multimedia sub-information may be a multimedia image (e.g., a lecture image) and/or an audio clip (e.g., a lecture audio clip). The implementation manner of determining the key information points included in the multimedia image according to the multimedia image is different from the implementation manner of determining the key information points included in the audio clip according to the audio clip, and the following description is provided for each of them.
As shown in fig. 6, the implementation process of acquiring the key information points included in the multimedia image may include:
step 3011b1, based on each multimedia image, obtaining the first text information displayed by each multimedia image.
The implementation process of step 3011b1 may refer to the implementation manner of obtaining the first text information in step 3011 a.
And step 3011b2, obtaining the first keyword in the first text information.
After the first text information is obtained, word segmentation processing may be performed on the first text information to segment the text in the first text information into individual words. Then, the occurrence frequency of each word is counted, and a word whose occurrence frequency is greater than a preset frequency threshold is determined as a first keyword in the first text information. The preset frequency threshold can be set according to actual needs.
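A minimal sketch of this keyword step, assuming a word-segmentation callable is supplied (for Chinese text this could be, e.g., jieba.lcut); the frequency threshold of 3 is an arbitrary placeholder.

```python
from collections import Counter

def extract_first_keywords(text: str, tokenize, freq_threshold: int = 3):
    """Frequency-based keyword extraction from one piece of text information.

    tokenize: word-segmentation callable (for Chinese text, e.g. jieba.lcut).
    """
    counts = Counter(tokenize(text))
    # Words whose occurrence frequency exceeds the preset threshold
    # are taken as the first keywords.
    return [word for word, n in counts.items() if n > freq_threshold]
```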
And step 3011b3, obtaining key information points included in the multimedia image based on the first keyword.
In a first implementation manner of step 3011b3, for each multimedia image, each first keyword in the multimedia image may be matched against each key information node in a preset key information tree structure, and the key information point corresponding to the multimedia image is determined according to the positions of the successfully matched key information nodes in the key information tree structure. The format of the key information point may be: root node - leaf node 1 - ... - leaf node n. Each node in the format may be a node matching a first keyword, or at least the lowest node in the format is a node matching a first keyword. The key information tree structure can be obtained in advance according to application requirements. For example, when the multimedia image is a teaching image, the key information points in the teaching image may be the knowledge points taught by the teaching image; in this case, the key information tree structure is a knowledge tree structure, which may be obtained based on the teaching outline of the teaching content.
For example, please refer to the knowledge tree structure shown in fig. 7, which includes the knowledge nodes: light source, speed of light, propagation along a straight line in a uniform medium, at the interface between two media, reflection of light, law of reflection, plane mirror imaging, refraction of light, law of refraction, and convex lens and application. Assume that the first keywords displayed by the lecture image include: refraction of light and law of refraction. By comparing the first keywords with each knowledge node in the knowledge tree structure shown in fig. 7, the knowledge nodes matching the first keywords can be determined as: refraction of light and law of refraction. According to the two matched knowledge nodes, the knowledge point taught by the teaching content reflected by the teaching image can be determined as: light source - at the interface between two media - refraction of light - law of refraction, or, refraction of light - law of refraction.
It should be noted that, after determining the key information point corresponding to each multimedia image, the multimedia image may be marked according to the key information point, so as to distinguish the multimedia images including different key information points.
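For illustration, the sketch below matches first keywords against a toy key information tree and emits root-to-node paths in the "root node - leaf node 1 - ..." format; the node class and the fig. 7 fragment used in the usage example are assumptions, not the patent's data structure.

```python
class KnowledgeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def match_key_information_points(root, keywords):
    """Return root-to-node paths whose node names match the first keywords."""
    paths = []

    def walk(node, trail):
        trail = trail + [node.name]
        if node.name in keywords:  # successfully matched key information node
            paths.append(" - ".join(trail))
        for child in node.children:
            walk(child, trail)

    walk(root, [])
    return paths

# Usage with an assumed fragment of the fig. 7 knowledge tree:
tree = KnowledgeNode("light source", [
    KnowledgeNode("at the interface between two media", [
        KnowledgeNode("refraction of light",
                      [KnowledgeNode("law of refraction")]),
    ]),
])
print(match_key_information_points(
    tree, {"refraction of light", "law of refraction"}))
```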
As shown in fig. 8, the implementation process of obtaining the key information points included in the audio clip may include:
step 3011b4, convert each audio clip into second text information.
The implementation process of step 3011b4 may refer to the implementation manner of obtaining the second text information in step 3011 a.
And step 3011b5, obtaining a second keyword in the second text information.
The implementation procedure of step 3011b5 refers to the implementation procedure of step 3011b 2.
And step 3011b6, obtaining key information points included in the audio clip based on the second keyword.
The implementation process of step 3011b6 refers to the implementation process of step 3011b 3.
It should be noted that the audio clip may include a plurality of sub-audios, and a piece of sub-text information may be obtained from each sub-audio, so the corresponding second text information may include a plurality of pieces of sub-text information. In this case, the second keyword of each piece of sub-text information may be obtained separately, so as to reduce the amount of calculation for determining the keywords and further improve the processing efficiency.
Step 3012b, grouping the multimedia sub-information according to the key information points included in the multimedia sub-information to obtain a plurality of information sets.
Because the key point information is determined according to the text information of the multimedia sub information, and the key point information can reflect the content of the corresponding multimedia sub information, when the key information points included in the plurality of multimedia sub information are the same, the content reflected by the plurality of multimedia sub information can be determined to be the same, and the plurality of multimedia sub information can be divided into the same information set. By dividing the multimedia sub-information into the information sets, all the multimedia sub-information in a certain information set can be recommended to a target user when interactive information is sent to the target user in the subsequent process, so that the integrity of the multimedia sub-information corresponding to the key point information corresponding to the certain information set is ensured.
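A minimal sketch of this grouping, assuming each piece of sub-information has already been assigned its key information point in step 3011b; itertools.groupby merges only consecutive equal keys, which matches the time-adjacent grouping described here.

```python
from itertools import groupby

def group_by_key_point(sub_items, key_point_of):
    """Group time-adjacent sub-information that shares a key information point.

    key_point_of: maps a piece of sub-information to the key information
    point obtained for it in step 3011b.
    """
    # groupby only merges consecutive equal keys, so play order is kept
    # and each run of same-key-point sub-information becomes one set.
    return [list(group) for _, group in groupby(sub_items, key=key_point_of)]
```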
Step 302, obtaining the state information when the target user receives the first multimedia information in a specific time period.
Optionally, the state information may include behavior state information and/or face state information. The behavior state information reflects the behavior state of the user, such as raising a hand or raising the head; the face state information reflects the face state of the user, such as an expression. Further, the state information may also include attendance information, which reflects whether the target user is in attendance within the specific time period. The specific time period may be a time period within a specified historical duration before the time period in which the information related to the first key information point in the first multimedia information is played, and/or a time period within a specified duration after that time period. For example, the specific time period may be the time period during which the content related to the first key information point in the teaching video is played; in this case, the state information may reflect how attentively the target user viewed the content of the first key information point. Alternatively, the specific time period may be a time period within a specified historical duration before the content related to the first key information point is played; in this case, the state information reflects the target user's historical concentration before viewing that content, which to some extent indicates the concentration with which the target user views the content of the first key information point. Alternatively, the specific time period may be a time period within a specified duration after the content related to the first key information point is played; in this case, the state information may reflect the target user's concentration when thinking after watching the content of the first key information point.
The following describes implementation manners of acquiring the behavior state information, the face state information, and the attendance information, respectively.
In a first implementation manner of obtaining the state information, a plurality of user images collected in a specific time period may be identified to obtain behavior state information, where the behavior state information includes first behavior state information, second behavior state information, third behavior state information, and/or fourth behavior state information.
Wherein the first behavior state information is used to indicate whether the target user produces the target behavior within the specific time period. For example, the first behavior state information is used to indicate whether the target user heads up within the specific time period.
The second behavior state information is the ratio of the duration for which the target user produces the target behavior to the total duration for which the users corresponding to the plurality of user images produce the target behavior. For example, it is the ratio of the target user's head-up duration to the total head-up duration of the users corresponding to the plurality of user images.

The third behavior state information is the ratio of the number of times the target user produces the target behavior to the total number of times the users corresponding to the plurality of user images produce the target behavior. For example, it may be the ratio of the total number of times the target user heads up within the specific time period to the total number of times the users corresponding to the plurality of user images head up.
The fourth behavior state information is used to indicate whether the target user produces the target behavior at a specific time point within the specific time period. For example, the fourth behavior state information is used to indicate whether the target user heads up at a specific time point within the specific time period. The specific time point may be determined based on a preset rule; for example, it may be the capture time of a user image in which the total number of heads-up users is greater than a first number threshold, or the capture time of a user image in which the total number of heads-down users is less than a second number threshold.
For example, assuming that the information management method is applied to a teaching scene, the specific time period belongs to a lecture period during which content is taught to users, and the plurality of user images collected within the specific time period may be images reflecting the listening states of a plurality of users. The first, third, and/or fourth behavior state information can therefore be determined according to whether the target user and the other users in each image have their heads up, and the second behavior state information can be determined by estimating each user's head-up duration from the capture time points of the images.
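For illustration only, a minimal Python sketch of one possible preset rule for selecting the specific time points mentioned above (the thresholds, field names, and values are assumptions, not from the patent):

```python
def specific_time_points(frames, heads_up_min, heads_down_max):
    # Keep the capture times of user images in which the number of heads-up
    # users exceeds a first threshold, or the number of heads-down users
    # falls below a second threshold.
    return [f["time"] for f in frames
            if f["heads_up"] > heads_up_min or f["heads_down"] < heads_down_max]

frames = [
    {"time": 10.0, "heads_up": 3, "heads_down": 37},
    {"time": 12.0, "heads_up": 18, "heads_down": 22},
    {"time": 14.0, "heads_up": 35, "heads_down": 5},
]
print(specific_time_points(frames, heads_up_min=15, heads_down_max=10))
# [12.0, 14.0]
```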
Taking the third behavior state information as an example, the following describes an implementation manner of obtaining the behavior state information, and as shown in fig. 9, the implementation process may include:
Step 3021a, based on each of the collected user images, counting a first total number of users in attendance and a second total number of users producing the target behavior at the capture time point of each user image.
Image recognition may be performed on the user image at each capture time point to determine whether each user in the image produces the target behavior, and to count the first total number of users included in the image and the second total number of users producing the target behavior. Moreover, because the state information of users receiving the first multimedia information within the specific time period is obtained from the user images, a user image generally includes image content for every user; the users contained in a user image can therefore all be regarded as users in attendance within the specific time period.
Step 3022a, determining the behavior state coefficient of the target user at each acquisition time point based on the first total number of users and the second total number of users corresponding to each acquisition time point, and obtaining a plurality of behavior state coefficients corresponding to the plurality of acquisition time points.
Each behavior state coefficient is a numerical value reflecting whether the target user generates the target behavior at the corresponding time point.
Assume that the information management method is applied to a teaching scene. When knowledge points are being taught, a user who is listening attentively (i.e., whose class state is good) should generally be in a head-up state, whereas a user who is working on exercises attentively should be in a head-down state. Therefore, when determining the behavior state coefficient of the target user at a specified time point i, if the target user heads up at that time point, the first total number of users A, the second total number of users L, and the behavior state coefficient Si of the target user at the specified time point i should satisfy: Si = L/A − 0.5, where i is a positive integer. If the target user does not head up at the specified time point, the behavior state coefficient of the target user at that time point may be determined to be 0.
When judging whether the target user produces the target behavior, face recognition may be performed on all users producing the target behavior. If the face recognition result indicates that these users include the target user, it can be determined that the target user produces the target behavior; if not, it is determined that the target user does not produce the target behavior.
Step 3023a, determining third behavior state information based on the plurality of behavior state coefficients.
Optionally, the accumulated sum of the plurality of behavior state coefficients may be obtained, the maximum value and the minimum value among the coefficients may be selected, and the third behavior state information may then be determined based on the accumulated value, the maximum value, and the minimum value.
In one implementation manner, the accumulated value S0 of the plurality of behavior state coefficients, the maximum value Smax of the coefficients, the minimum value Smin of the coefficients, and the third behavior state information S may satisfy: S = (S0 − Smin)/(Smax − Smin).
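For illustration only, a minimal Python sketch of steps 3021a through 3023a under the formulas above (the counts are invented example values):

```python
def behavior_coefficient(target_heads_up, first_total_a, second_total_l):
    # Si = L/A - 0.5 when the target user heads up at time point i,
    # otherwise Si = 0.
    return second_total_l / first_total_a - 0.5 if target_heads_up else 0.0

def third_behavior_state(coefficients):
    # S = (S0 - Smin) / (Smax - Smin), where S0 is the accumulated sum of
    # the behavior state coefficients over all collection time points.
    s0 = sum(coefficients)
    smin, smax = min(coefficients), max(coefficients)
    if smax == smin:  # degenerate case, not covered by the patent text
        return 0.0
    return (s0 - smin) / (smax - smin)

# Coefficients at three collection time points, with A = 40 users in attendance:
coeffs = [
    behavior_coefficient(True, 40, 30),   # target user heads up, L = 30
    behavior_coefficient(False, 40, 22),  # target user does not head up
    behavior_coefficient(True, 40, 36),   # target user heads up, L = 36
]
print(third_behavior_state(coeffs))  # (0.65 - 0.0) / (0.4 - 0.0) = 1.625
```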
In a second implementation of obtaining the status information, a plurality of user images collected during a specific time period may be identified to obtain face status information, which includes first face status information, second face status information, third face status information, fourth face status information, and/or fifth face status information.
Wherein the first face state information is used to indicate whether the target user generates the target face state within the specific time period.

The second face state information is the ratio of the total duration for which the target user's face is in the target face state to the total duration for which the target user appears in the plurality of user images.

The third face state information is the ratio of the total number of first images, which include an image of the target user whose face state is the target face state, to the total number of second images, which include an image of the target user.

The fourth face state information is the number of times the target user generates the target face state.

The fifth face state information is used to indicate whether the target user generates the target face state at a specific time point within the specific time period.
For example, in the first to fifth face state information, the target face state may be a face state corresponding to an expression such as "closing the eyes", "thinking", "confusion", or "sudden realization".
For example, assuming that the information management method is applied to a teaching scene, the plurality of user images collected within the specific time period may be images reflecting the listening states of a plurality of users. The first, third, and/or fourth face state information may therefore be determined according to whether the target user and the other users produce, for example, a "confusion" expression in each image, and the second face state information may be determined by estimating the duration for which the target user produces the "confusion" expression from the capture time points of the images.
Taking the third face state information as an example, the implementation manner of obtaining the face state information is described below. Fig. 10 is a flowchart of a method for determining the face state information; as shown in fig. 10, the implementation process may include:
Step 3021b, counting, among the plurality of user images, the total number of first images in which the target user's face state is the target face state.
The target face state may be a face state indicating that the target user is in a focused state. For example, the target face state may appear as a puzzled expression or a thinking expression of the target user.
In the implementation process of step 3021b, face recognition may be performed on all face images in each user image to locate the face image of the target user. Expression recognition may then be performed on the target user's face image, and the total number of first images in which the target user's expression is in the target face state may be counted from the expression recognition results.
Step 3022b, counting a total number of second images including the target user among the plurality of user images.
Since each user image typically includes image content for every user, the total number of user images may be determined as the total number of second images when the target user is in attendance.
Step 3023b, determining third face state information based on the total number of the first images and the total number of the second images.
Optionally, a weight for the total number of the first images and a weight for the total number of the second images may be determined respectively, and the ratio of the weighted total number of the second images to the weighted total number of the first images may be determined as the third face state information. For example, when both weights are 1, the third face state information is the ratio of the total number of the second images to the total number of the first images.
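For illustration only, a minimal Python sketch of steps 3021b through 3023b with both weights set to 1 (the recognition callbacks and image records stand in for real face and expression recognition):

```python
def third_face_state(images, contains_target, in_target_state,
                     w_first=1.0, w_second=1.0):
    # First images: the target user appears with the target face state;
    # second images: the target user appears at all. The third face state
    # information is the weighted ratio of the second total to the first.
    first_total = sum(1 for img in images
                      if contains_target(img) and in_target_state(img))
    second_total = sum(1 for img in images if contains_target(img))
    if first_total == 0:  # guard not specified in the patent text
        return float("inf")
    return (w_second * second_total) / (w_first * first_total)

images = [{"has_user": True, "state": "thinking"},
          {"has_user": True, "state": "neutral"},
          {"has_user": True, "state": "thinking"},
          {"has_user": False, "state": "neutral"}]
ratio = third_face_state(images,
                         contains_target=lambda i: i["has_user"],
                         in_target_state=lambda i: i["state"] == "thinking")
print(ratio)  # 3 second images / 2 first images = 1.5
```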
In a third implementation manner of obtaining the status information, as shown in fig. 11, the implementation manner of obtaining the attendance information may include:
Step 3021c, judging whether the target user is in attendance within a specific time period based on a plurality of user images collected within the specific time period.
Face recognition may be performed on all face images in each user image to determine whether the target user's face image is among them. When the target user's face image is included among all the face images, the target user is determined to be in attendance; when it is not, the target user is determined to be absent.
Step 3022c, determining the attendance information based on the judgment result.
The attendance information can be characterized by an attendance coefficient: when the target user is determined to be in attendance, the attendance coefficient is determined to be 1; when the target user is determined to be absent, the attendance coefficient is determined to be 0.
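For illustration only, a minimal Python sketch of steps 3021c and 3022c (the recognition callback stands in for real face recognition):

```python
def attendance_coefficient(user_images, target_face_found):
    # The coefficient is 1 if face recognition finds the target user in any
    # user image collected within the specific time period, and 0 otherwise.
    return 1 if any(target_face_found(img) for img in user_images) else 0

print(attendance_coefficient(["img1", "img2"], lambda img: img == "img2"))  # 1
```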
It should be noted that, when the state information is acquired, the state information at the time the target user receives each key information point may be counted separately. In that case, the time period reflecting each key information point in the first multimedia information may be determined first, and the state information within each time period may then be determined separately; the implementation process may refer to the implementation manner of determining the state information within the specific time period in step 302.
It should be noted that the attendance information may also be obtained in other manners, such as receiving the attendance information sent by a card punch or other attendance identification device, which is not limited in the embodiment of the present invention.
Step 303, based on the state information of the target user when receiving the first multimedia information within the specific time period, sending interaction information to a target terminal corresponding to the target user, so that the target user inputs feedback information according to the interaction information.
The interactive information is associated with a first key information point contained in the first multimedia information, and the first key information point may include one or more key information points.
Optionally, in the implementation process of step 303, the interaction information related to each key information point may be sent to the target terminal corresponding to the target user according to the target user's state information within the time period reflecting that key information point. For example, assuming that the first multimedia information includes a key information point a and a key information point b, when the state information within the time period reflecting key information point a indicates that the target user's receiving state for it is poor, the interaction information related to key information point a may be sent to the target terminal.
Moreover, the target information amount of the interaction information related to each key information point can be determined according to the target user's state information, and interaction information of the target information amount can then be sent to the target terminal. For example, as shown in fig. 12, the target information amount of the interaction information related to the first key information point may be determined according to the target user's state information for each key information point; the implementation process may include:
step 3031, determining the absorption information of the target user to each key information point according to the state information of the target user.
The state information may include one or more of behavior state information, face state information, and attendance information, each of which is a quantized numerical value; correspondingly, the absorption information is also a quantized numerical value. Optionally, the following implementation manners describe the process of determining the target user's absorption information for any key information point based on the state information corresponding to that key information point:
In a first implementation manner of step 3031, when the state information corresponding to any key information point is characterized by only the behavior state information or only the face state information, that state information may be determined directly as the absorption information.
In a second implementation manner of step 3031, when the state information corresponding to any one of the key information points is characterized by behavior state information and face state information, a weighted sum of the behavior state information and the face state information is determined as absorption information.
At this time, the sum of the weight of the behavior state information and the weight of the face state information is 1, and the values of the weight of the behavior state information and the weight of the face state information can be determined according to actual needs. For example, the weight of the behavior state information may be 0.6, and the weight of the face state information may be 0.4.
In a third implementation manner of step 3031, when the state information corresponding to any key information point is characterized by one of the behavior state information and the face state information together with the attendance information, the product of that one piece of state information and the attendance information is determined as the absorption information.
In a fourth implementable manner of step 3031, when the state information corresponding to any one of the key information points is characterized by the behavior state information, the face state information, and the attendance information, determining a product of a weighted sum of the behavior state information and the face state information and the attendance information as the absorption information.
For example, assuming that the behavior state information is S1, the face state information is S2, and the attendance information is S3, the absorption information S satisfies S = S3 × (m1 × S1 + m2 × S2), where m1 is the weight of the behavior state information, m2 is the weight of the face state information, and m1 + m2 = 1.
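For illustration only, a minimal Python sketch covering the four cases of step 3031 (the example weights m1 = 0.6 and m2 = 0.4 follow the text above):

```python
def absorption(behavior=None, face=None, attendance=None, m1=0.6, m2=0.4):
    if behavior is not None and face is not None:
        s = m1 * behavior + m2 * face                    # cases 2 and 4
    else:
        s = behavior if behavior is not None else face   # cases 1 and 3
    if attendance is not None:
        s = attendance * s                               # cases 3 and 4
    return s

print(absorption(behavior=0.8, face=0.5, attendance=1))
# 1 * (0.6 * 0.8 + 0.4 * 0.5) = 0.68
```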
Step 3032, determining the weight of the target user to each key information point in the first key information points based on the absorption information.
In an implementation manner, the reciprocal of the target user's absorption information for each key information point may be determined first. The target reciprocal corresponding to any key information point among the first key information points is then screened from the at least one reciprocal, the sum of the reciprocals over all key information points is determined, and the ratio of the target reciprocal to that sum is determined as the weight of the key information point. That is, the absorption information S(i, j) of the i-th user for the j-th key information point and the weight W(i, m) of the i-th user for the m-th key information point should satisfy:

W(i, m) = (1/S(i, m)) / Σ_{j=1}^{M} (1/S(i, j)),

where M is the total number of key information points to which the first multimedia information relates.
Step 3033, determining the target information amount of the interactive information related to each key information point in the first key information points based on the weight of the target user to each key information point in the first key information points.
The target information amount of the interaction information related to any key information point among the first key information points may be equal to the product of the target user's weight for that key information point and a preset number, where the preset number may be the total number of interaction information items that can be sent to the target user. To ensure that all users receiving the first multimedia information spend as close to the same amount of time as possible inputting feedback information according to the interaction information, the total number of interaction information items sent to each user may be equal; that is, the preset number corresponding to each user may be equal.
After the target information amount of each key information point is determined, the interactive information can be sent to the target terminal according to the target information amount of each key information point in the first key information points.
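For illustration only, a minimal Python sketch of steps 3032 and 3033 (the absorption values and preset number are invented examples):

```python
def target_amounts(absorption_by_point, preset_total):
    # Step 3032: weight each key information point by the reciprocal of its
    # absorption information, normalised by the sum of the reciprocals.
    inverses = {k: 1.0 / s for k, s in absorption_by_point.items()}
    inv_sum = sum(inverses.values())
    # Step 3033: target amount = weight x preset number of interaction items.
    return {k: round(preset_total * inv / inv_sum)
            for k, inv in inverses.items()}

# A poorly absorbed key information point receives more interaction items:
print(target_amounts({"a": 0.2, "b": 0.8}, preset_total=10))  # {'a': 8, 'b': 2}
```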
Step 304, receiving feedback information of the target user on the interaction information, which is sent by the target terminal.
The target terminal may present the interaction information to the target user; the target user inputs feedback information based on the interaction information, and after receiving the feedback information, the target terminal sends it to the server. For example, the interaction information may be exercise information, and the feedback information may be answer information for the exercise information.
Step 305, when the matching rate of the feedback information and the specified feedback information is smaller than the matching rate threshold, sending second multimedia information corresponding to the first key information point to the target terminal.
The specified feedback information is feedback information that meets the requirement for the interaction information; for example, when the interaction information is exercise information, the specified feedback information is the correct answer to the exercise. When the matching rate of the feedback information and the specified feedback information is smaller than the matching rate threshold, the target user's feedback on the interaction information does not meet the requirement. Because the interaction information is determined according to the state information at the time the user receives each key information point, unqualified feedback indicates that the target user's absorption of the first key information point corresponding to the interaction information is poor. The multimedia information related to the first key information point in the first multimedia information, i.e., the second multimedia information, may therefore be sent to the target user, so that the target user can receive the information related to the first key information point again.
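For illustration only, a minimal Python sketch of the matching-rate check (the item-by-item comparison and the threshold value are assumptions):

```python
def needs_second_multimedia(feedback, specified, threshold=0.6):
    # Match the user's feedback against the specified feedback (e.g. the
    # correct answers); below the threshold, the second multimedia
    # information corresponding to the first key information point is sent.
    matches = sum(1 for f, s in zip(feedback, specified) if f == s)
    return matches / len(specified) < threshold

print(needs_second_multimedia(["B", "C", "A"], ["B", "D", "D"]))
# matching rate 1/3 < 0.6 -> True
```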
Corresponding to the first implementation manner of the division in step 301, since the first multimedia information is divided into a plurality of information sets, the information of the first key information point is in at least one information set. As shown in fig. 13, the implementation of this step 305 may include:
Step 3051a, determining at least one information set in which the information related to the first key information point in the first multimedia information is located.
Step 3052a, in each information set, determining first target multimedia sub-information with the earliest time sequence and second target multimedia sub-information with the latest time sequence in the sub-information group corresponding to the first key information point, and sending second multimedia information consisting of the multimedia sub-information between the first time stamp and the second time stamp to the target terminal.
The first timestamp is the timestamp which is before the time sequence of the first target multimedia sub-information and is closest to the time sequence of the first target multimedia sub-information, and the second timestamp is the timestamp which is after the time sequence of the second target multimedia sub-information and is closest to the time sequence of the second target multimedia sub-information.
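For illustration only, a minimal Python sketch of step 3052a for one information set (play-start times in seconds; all field names and values are assumptions):

```python
def extract_second_multimedia(sub_infos, timestamps, group_start, group_end):
    # first_ts: the timestamp closest before the earliest sub-information of
    # the group; second_ts: the timestamp closest after the latest one.
    first_ts = max((t for t in timestamps if t <= group_start), default=0.0)
    second_ts = min((t for t in timestamps if t > group_end),
                    default=float("inf"))
    return [s for s in sub_infos if first_ts <= s["start"] < second_ts]

# Sub-information playing at 0..7 s; timestamps created at content changes:
subs = [{"id": i, "start": float(i)} for i in range(8)]
result = extract_second_multimedia(subs, [2.0, 6.0],
                                   group_start=3.0, group_end=5.0)
print([s["id"] for s in result])  # [2, 3, 4, 5]
```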
Corresponding to the second implementation manner of the division in step 301, the implementation of step 305 may include: sending, to the target terminal, second multimedia information consisting of the information set corresponding to the first key information point.
Optionally, the second multimedia information may be transmitted to the target terminal in a network structure mode formed by the server and the terminal. The network structure mode can be a B/S (Browser/Server) architecture or a C/S (Client/Server) architecture.
Further, if the multimedia sub-information is multimedia images, then before the second multimedia information is sent to the target terminal, it may be further determined whether the content of every two time-adjacent multimedia images has changed; among a run of multimedia images in the second multimedia information whose content has not changed, one image is retained and the duplicate images are deleted. This avoids sending multiple duplicate multimedia images to the target terminal, thereby improving the target user's absorption efficiency.
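For illustration only, a minimal Python sketch of this de-duplication step (the content comparison callback stands in for the real image-comparison logic):

```python
def drop_duplicate_images(images, same_content):
    # Keep one multimedia image from each run of time-adjacent images whose
    # content has not changed, deleting the duplicates.
    kept = []
    for img in images:
        if not kept or not same_content(kept[-1], img):
            kept.append(img)
    return kept

frames = ["slide1", "slide1", "slide2", "slide2", "slide2", "slide3"]
print(drop_duplicate_images(frames, lambda a, b: a == b))
# ['slide1', 'slide2', 'slide3']
```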
In summary, in the information management method provided in the embodiment of the present invention, the second multimedia information is obtained by screening the first multimedia information according to the state information, so it contains less content; moreover, because the second multimedia information is determined according to the feedback information on the interaction information, it is better targeted at the target user. The efficiency and flexibility of managing the multimedia information sent to the target user are thereby effectively improved.
It should be noted that, the order of the steps of the information management method provided in the embodiment of the present invention may be appropriately adjusted, and the steps may also be increased or decreased according to the circumstances, and any method that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered in the protection scope of the present application, and therefore, the details are not described again.
The information management method provided by the embodiment of the invention can be executed by one or more execution subjects. For example, the method may be performed by one server or one cluster of servers. Alternatively, the method may be performed by a plurality of servers or a plurality of server clusters. At this time, the plurality of servers or the plurality of server clusters may constitute the information management system.
Optionally, when the implementation environment of the information management method according to the embodiment of the present invention is a teaching environment, the information management system may include: the system comprises an audio analysis system, an image analysis system, a teaching outline system, a classroom interaction system and an information management system.
The audio analysis system is used for analyzing the teaching audio to obtain the second keywords in the teaching audio. For example, the audio analysis system may be used to perform steps 3011b4 to 3011b5.
The image analysis system may include: a picture switching recognition subsystem, a picture text analysis subsystem, and a knowledge point mastery degree judging subsystem. The picture switching recognition subsystem is used for analyzing multimedia images to judge whether the contents of two time-adjacent multimedia images are the same; for example, it may be configured to perform step 3011a. The picture text analysis subsystem is used for analyzing the first multimedia information to obtain the first keywords in the first multimedia information; for example, it may be configured to perform steps 3011b1 through 3011b2. The knowledge point mastery degree judging subsystem is used for analyzing images of students listening during teaching to determine the students' absorption information for the knowledge points; for example, it may be used to perform step 302.
The teaching outline system is used for determining the knowledge points taught by the teaching materials according to the keywords in the teaching materials. For example, the teaching outline system may be used to perform step 3011b3 and/or step 3011b6.
The classroom interaction system is used for recommending specified numbers of exercises to students according to the absorption information of the students to the knowledge points, obtaining answering results of the students to the exercises, and judging the mastery degree of the students to the knowledge points according to the answering results, so that the information management system can determine whether to recommend teaching resources corresponding to the knowledge points to the students according to the mastery degree. For example, the classroom interaction system can be used to perform steps 303 through 305.
Further, the information management system may further include a video management system. The video management system stores all teaching resources and can provide them to the recording-and-broadcasting host deployed in a teaching classroom according to the classroom's teaching requirements.
An embodiment of the present invention provides an information management apparatus, as shown in fig. 14, the information management apparatus 600 may include:
the sending module 601 is configured to send interaction information to a target terminal corresponding to a target user based on state information of the target user when receiving first multimedia information within a specific time period, so that the target user inputs feedback information according to the interaction information, the interaction information is associated with a first key information point included in the first multimedia information, and the state information includes behavior state information and/or face state information of the target user.
The receiving module 602 is configured to receive feedback information of the target user on the interaction information, where the feedback information is sent by the target terminal.
The sending module 601 is further configured to send, to the target terminal, second multimedia information corresponding to the first key information point when a matching rate of the feedback information and the specified feedback information is smaller than a matching rate threshold.
In summary, in the information management apparatus provided in the embodiment of the present invention, the second multimedia information is obtained by screening the first multimedia information according to the state information, so it contains less content; moreover, because the second multimedia information is determined according to the feedback information on the interaction information, it is better targeted at the target user. The efficiency and flexibility of managing the multimedia information sent to the target user are thereby effectively improved.
Optionally, as shown in fig. 15, the apparatus 600 may further include:
the first identifying module 603 is configured to identify a plurality of user images collected in a specific time period to obtain behavior state information, where the behavior state information includes first behavior state information, second behavior state information, third behavior state information, and/or fourth behavior state information.
Wherein the first behavior state information is used for indicating whether the target user generates the target behavior within a specific time period.
The second behavior state information is the ratio of the duration of the target behavior generated by the target user to the total duration of the target behavior generated by the user corresponding to the plurality of user images.
The third behavior state information is a ratio of the number of times the target behavior is generated for the target user to the total number of times the target behavior is generated for the user corresponding to the plurality of user images.
The fourth behavior state information is used to indicate whether the target user produces the target behavior at a specific point in time within a specific time period.
Optionally, the first identifying module 603 is further configured to identify a plurality of user images collected in a specific time period, and obtain face state information, where the face state information includes first face state information, second face state information, third face state information, fourth face state information, and/or fifth face state information.
Wherein the first face state information is used to indicate whether the target user generates the target face state within the specific time period.
The second face state information is the ratio of the total duration for which the target user's face is in the target face state to the total duration for which the target user appears in the plurality of user images.
The third face state information is a ratio of a total number of the first images including the image of the target user to a total number of the second images including the image of the target user whose face state is the target face state.
The fourth face state information is the number of times the target user generates the target face state.
The fifth face state information is used to indicate whether the target user generates the target face state at a specific time point within a specific time period.
Optionally, the status information further includes attendance information, which reflects whether the target user is in attendance within a specific time period. As shown in fig. 15, the apparatus 600 may further include:
A judging module 604, configured to judge whether the target user is in attendance within the specific time period based on a plurality of user images collected within the specific time period.
A determining module 605, configured to determine attendance information based on the determination result.
Optionally, as shown in fig. 15, the apparatus 600 may further include:
the dividing module 606 is configured to, when the first multimedia information includes a plurality of multimedia sub information adjacent in time sequence, divide the first multimedia information into a plurality of information sets according to a playing state of the first multimedia information, where the multimedia sub information is a multimedia image or an audio clip.
Correspondingly, the sending module 601 is further configured to:
at least one information set in which information relating to the first key information point is located in the first multimedia information is determined.
And sending second multimedia information consisting of at least one information set to the target terminal.
Optionally, the first multimedia information includes a plurality of sub information groups adjacent in time sequence, each sub information group corresponds to a key information point, and the dividing module 606 is specifically configured to:
If the content of two multimedia sub-information adjacent in time sequence changes, a timestamp is created at the initial playing time of the multimedia sub-information whose time sequence is later of the two, and the multimedia sub-information between any two adjacent timestamps forms an information set.
Correspondingly, the sending module 601 is specifically configured to:
and determining first target multimedia sub-information with the earliest time sequence and second target multimedia sub-information with the latest time sequence in the sub-information group corresponding to the first key information point.
And sending second multimedia information consisting of the multimedia sub-information between the first timestamp and the second timestamp to the target terminal, wherein the first timestamp is the timestamp which is before the time sequence of the first target multimedia sub-information and is closest to the time sequence of the first target multimedia sub-information, and the second timestamp is the timestamp which is after the time sequence of the second target multimedia sub-information and is closest to the time sequence of the second target multimedia sub-information.
Optionally, the dividing module 606 is further configured to, when the first multimedia information includes a plurality of multimedia sub information adjacent in time sequence, group the plurality of multimedia sub information to obtain a plurality of information sets, where each information set corresponds to one key information point, and each multimedia sub information is a multimedia image or an audio clip.
Correspondingly, the sending module 601 is specifically configured to: and sending second multimedia information consisting of information sets corresponding to the first key information points to the target terminal.
Optionally, the sending module 601 is configured to:
and determining a target information amount of the interactive information related to the first key information point based on the state information of the target user.
And sending the interactive information of the target information amount to the target terminal.
Optionally, as shown in fig. 15, the apparatus 600 may further include:
A second identification module 607, configured to identify the first text information of each multimedia image in the one or more multimedia images corresponding to the first multimedia information.

An obtaining module 608, configured to obtain the first keyword in each piece of first text information.

The obtaining module 608 is further configured to obtain, based on each first keyword, the key information points related to the content of each multimedia image, to obtain the key information points included in the first multimedia information.
Alternatively, as shown in fig. 15, the apparatus may further include:
A conversion module 609, configured to convert the target audio corresponding to the first multimedia information into second text information.

An obtaining module 608, configured to obtain the second keyword in the second text information.

The obtaining module 608 is further configured to obtain, based on the second keyword, the key information points related to the content of the target audio, to obtain the key information points included in the first multimedia information.
In summary, in the information management apparatus provided in the embodiment of the present invention, the second multimedia information is obtained by screening the first multimedia information according to the state information, so it contains less content; moreover, because the second multimedia information is determined according to the feedback information on the interaction information, it is better targeted at the target user. The efficiency and flexibility of managing the multimedia information sent to the target user are thereby effectively improved.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the invention provides computer equipment, which comprises a processor and a memory; when the processor executes the computer program stored in the memory, the computer device executes the information management method provided by the embodiment of the invention.
Alternatively, the computer device may be a server. Fig. 16 is a schematic diagram illustrating a configuration of a server according to an example embodiment. The server 700 includes a Central Processing Unit (CPU)701, a system memory 704 including a Random Access Memory (RAM)702 and a Read Only Memory (ROM)703, and a system bus 705 connecting the system memory 704 and the central processing unit 701. The server 700 also includes a basic input/output system (I/O system) 706, which facilitates transfer of information between devices within the computer, and a mass storage device 707 for storing an operating system 713, application programs 714, and other program modules 715.
The basic input/output system 706 comprises a display 708 for displaying information and an input device 709, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 708 and input device 709 are connected to the central processing unit 701 through an input output controller 710 coupled to the system bus 705. The basic input/output system 706 may also include an input/output controller 710 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 710 may also provide output to a display screen, a printer, or other type of output device.
The mass storage device 707 is connected to the central processing unit 701 through a mass storage controller (not shown) connected to the system bus 705. The mass storage device 707 and its associated computer-readable media provide non-volatile storage for the server 700. That is, the mass storage device 707 may include a computer-readable medium (not shown), such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 704 and mass storage device 707 described above may be collectively referred to as memory.
According to various embodiments of the present invention, the server 700 may also be run through a remote computer connected to a network such as the Internet. That is, the server 700 may be connected to the network 712 through the network interface unit 711 connected to the system bus 705, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 711.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 701 implements the information management method provided by the embodiment of the present invention by executing the one or more programs.
The embodiment of the invention provides a storage medium which can be a nonvolatile computer readable storage medium, wherein a computer program is stored in the storage medium, and when the computer program is executed by a processor, the information management method provided by the embodiment of the invention is realized.
Embodiments of the present invention provide a computer program product including instructions, which, when running on a computer, enable the computer to execute the information management method provided by the embodiments of the present invention.
An embodiment of the present invention provides an information management system, including a terminal and a server, where the server includes any one of the information management apparatuses described above.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. An information management method, characterized in that the method comprises:
sending interaction information to a target terminal corresponding to a target user based on state information of the target user when receiving first multimedia information within a specific time period so as to wait for the target user to input feedback information according to the interaction information, wherein the interaction information is associated with a first key information point contained in the first multimedia information, and the state information comprises behavior state information and/or face state information of the target user;
receiving feedback information of the target user on the interactive information, which is sent by the target terminal;
when the matching rate of the feedback information and the specified feedback information is smaller than a matching rate threshold value, sending second multimedia information corresponding to the first key information point to the target terminal;
wherein, the first multimedia information includes a plurality of multimedia sub information adjacent in time sequence, the multimedia sub information is a multimedia image or an audio clip, before the second multimedia information corresponding to the first key information point is sent to the target terminal, the method further includes:
if the content of two adjacent multimedia sub-information in time sequence changes, creating a time stamp at the initial playing time of the multimedia sub-information with the time sequence later in the two multimedia sub-information, and forming an information set by the multimedia sub-information between any two adjacent time stamps to obtain a plurality of information sets, wherein each information set corresponds to a key information point;
the sending of the second multimedia information corresponding to the first key information point to the target terminal includes:
determining an information set corresponding to the first key information point in the first multimedia information;
determining first target multimedia sub-information with the earliest time sequence and second target multimedia sub-information with the latest time sequence in an information set corresponding to the first key information point;
and sending second multimedia information consisting of multimedia sub-information between a first timestamp and a second timestamp to the target terminal, wherein the first timestamp is a timestamp which is before the time sequence of the first target multimedia sub-information and is closest to the time sequence of the first target multimedia sub-information, and the second timestamp is a timestamp which is after the time sequence of the second target multimedia sub-information and is closest to the time sequence of the second target multimedia sub-information.
2. The method of claim 1, wherein before the sending of the interactive information to the target terminal corresponding to the target user based on the status information of the target user when receiving the first multimedia information within a specific time period, the method further comprises:
identifying a plurality of user images acquired in the specific time period to obtain the behavior state information, wherein the behavior state information comprises first behavior state information, second behavior state information, third behavior state information and/or fourth behavior state information;
wherein the first behavior state information is used for indicating whether the target user generates a target behavior within the specific time period;
the second behavior state information is the proportion of the duration of the target behavior generated by the target user to the total duration of the target behavior generated by the users corresponding to the plurality of user images;
the third behavior state information is a ratio of the number of times the target user generates the target behavior to the total number of times the target user generates the target behavior for users corresponding to the plurality of user images;
the fourth behavior state information is used for indicating whether the target user generates the target behavior at a specific time point within the specific time period.
3. The method of claim 1, wherein before the sending of the interactive information to the target terminal corresponding to the target user based on the status information of the target user when receiving the first multimedia information within a specific time period, the method further comprises:
identifying a plurality of user images acquired in the specific time period to obtain the face state information, wherein the face state information comprises first face state information, second face state information, third face state information, fourth face state information and/or fifth face state information;
wherein the first face state information is used to indicate whether the target user generates a target face state within the specific time period;
the second face state information is the proportion of the total duration for which the target user's face is in the target face state to the total duration for which the target user appears in the plurality of user images;
the third face state information is a ratio of a total number of first images to a total number of second images, the first images including an image of the target user, the second images including an image of the target user whose face state is the target face state;
the fourth face state information is a number of times the target user generates the target face state;
the fifth face state information is used to indicate whether the target user generates the target face state at a specific time point within the specific time period.
4. The method of claim 1, wherein the status information further comprises: attendance information, the attendance information being information reflecting whether the target user is in attendance within a specific time period, and before the sending of the interaction information to the target terminal corresponding to the target user based on the state information of the target user when receiving the first multimedia information within the specific time period, the method further comprises:
determining whether the target user is on duty in the specific time period based on a plurality of user images acquired in the specific time period;
and determining the attendance checking information based on the judgment result.
5. The method of claim 1, wherein sending the interaction information to the target terminal corresponding to the target user based on the status information of the target user when receiving the first multimedia information within a specific time period comprises:
determining a target information amount of the interactive information related to the first key information point based on the state information of the target user;
and sending the interactive information of the target information amount to the target terminal.
6. The method of claim 1, wherein before the sending of the interactive information to the target terminal corresponding to the target user based on the status information of the target user when receiving the first multimedia information within a specific time period, the method further comprises:
identifying first text information of each multimedia image in one or more multimedia images corresponding to the first multimedia information,
acquiring a first keyword in each first text message,
acquiring a key information point related to the content of each multimedia image based on each first keyword to obtain a key information point contained in the first multimedia information;
or converting the target audio corresponding to the first multimedia information into second text information,
acquiring a second keyword in the second text information,
and acquiring key information points related to the content of the target audio based on the second keywords to obtain the key information points contained in the first multimedia information.
7. An information management apparatus, characterized in that the apparatus comprises:
the sending module is used for sending interaction information to a target terminal corresponding to a target user based on state information of the target user when the target user receives first multimedia information within a specific time period so as to wait for the target user to input feedback information according to the interaction information, wherein the interaction information is associated with a first key information point contained in the first multimedia information, and the state information comprises behavior state information and/or face state information of the target user;
the receiving module is used for receiving feedback information of the target user on the interactive information, which is sent by the target terminal;
the sending module is further configured to send second multimedia information corresponding to the first key information point to the target terminal when a matching rate of the feedback information and the specified feedback information is smaller than a matching rate threshold;
wherein the first multimedia information comprises a plurality of multimedia sub information adjacent in time sequence, and the multimedia sub information is a multimedia image or an audio clip, and the device further comprises:
the dividing module is used for creating a time stamp at the initial playing time of the multimedia sub-information with the later time sequence in the two pieces of multimedia sub-information if the contents of the two pieces of multimedia sub-information adjacent in time sequence change, and forming an information set by the multimedia sub-information between any two adjacent time stamps to obtain a plurality of information sets, wherein each information set corresponds to a key information point;
the sending module is further configured to determine an information set corresponding to the first key information point in the first multimedia information; determining first target multimedia sub-information with the earliest time sequence and second target multimedia sub-information with the latest time sequence in an information set corresponding to the first key information point; and sending second multimedia information consisting of multimedia sub-information between a first timestamp and a second timestamp to the target terminal, wherein the first timestamp is a timestamp which is before the time sequence of the first target multimedia sub-information and is closest to the time sequence of the first target multimedia sub-information, and the second timestamp is a timestamp which is after the time sequence of the first target multimedia sub-information and is closest to the time sequence of the second target multimedia sub-information.
8. The apparatus of claim 7, further comprising:
the first identification module is used for identifying a plurality of user images acquired in the specific time period to obtain the behavior state information, and the behavior state information comprises first behavior state information, second behavior state information, third behavior state information and/or fourth behavior state information;
wherein the first behavior state information is used for indicating whether the target user generates a target behavior within the specific time period;
the second behavior state information is the proportion of the duration of the target behavior generated by the target user to the total duration of the target behavior generated by the users corresponding to the plurality of user images;
the third behavior state information is a ratio of the number of times the target user generates the target behavior to the total number of times the target user generates the target behavior for users corresponding to the plurality of user images;
the fourth behavior state information is used to indicate whether the target user produces the target behavior at a specific time point within the specific time period.
9. The apparatus of claim 7, further comprising:
a first identification module configured to identify a plurality of user images acquired within the specific time period to obtain the face state information, wherein the face state information comprises first face state information, second face state information, third face state information, fourth face state information and/or fifth face state information;
wherein the first face state information is used to indicate whether the target user generates a target face state within the specific time period;
the second face state information is a ratio of the duration for which the target user generates the target face state to the total duration for which the users in the plurality of user images generate the target face state;
the third face state information is a ratio of a total number of first images to a total number of second images, wherein a first image is an image including the target user, and a second image is an image of the target user whose face state is the target face state;
the fourth face state information is used to indicate the number of times the target user generates the target face state; and
the fifth face state information is used to indicate whether the target user generates the target face state at a specific time point within the specific time period.
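Analogously for claim 9, assuming a hypothetical per-frame classifier face_state_of(frame, user) that returns the user's face state or None when the user is absent from the frame; the third value below follows the claim's first-to-second image-count wording:

```python
# Minimal sketch, not part of the claims; face_state_of() is an assumed
# classifier returning a user's face state in a frame, or None if absent.

def face_state_info(frames, target_user, target_state, face_state_of):
    first_images = 0    # images that include the target user
    second_images = 0   # images where the target user's state is the target state
    episodes = 0        # number of times the target face state is generated
    prev_hit = False
    for frame in frames:
        state = face_state_of(frame, target_user)
        if state is not None:
            first_images += 1
        hit = state == target_state
        if hit:
            second_images += 1
            if not prev_hit:
                episodes += 1
        prev_hit = hit
    return {
        "first": second_images > 0,
        "third": first_images / second_images if second_images else 0.0,
        "fourth": episodes,
    }
```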
10. The apparatus of claim 7, wherein the state information further comprises attendance information reflecting whether the target user is on duty within a specific time period, and the apparatus further comprises:
a judging module configured to judge, based on a plurality of user images acquired within the specific time period, whether the target user is on duty within the specific time period; and
a determining module configured to determine the attendance information based on a result of the judgment.
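Claim 10's attendance judgment can be as simple as a presence-ratio test over the sampled images; the contains_user() detector and the 0.5 threshold below are assumptions for illustration only:

```python
# Minimal sketch, not part of the claims. contains_user(image, user) is an
# assumed detector; the presence threshold is an arbitrary illustrative value.

def attendance_info(frames, target_user, contains_user, min_presence=0.5):
    if not frames:
        return {"on_duty": False, "presence_ratio": 0.0}
    ratio = sum(contains_user(f, target_user) for f in frames) / len(frames)
    return {"on_duty": ratio >= min_presence, "presence_ratio": ratio}
```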
11. The apparatus of claim 7, wherein the sending module is configured to:
determining a target information amount of the interactive information related to the first key information point based on the state information of the target user;
and sending the interactive information of the target information amount to the target terminal.
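Claim 11 leaves the mapping from state information to a target information amount open; one plausible monotone rule, entirely an assumption, sends more interactive information the lower the observed engagement:

```python
# Minimal sketch, not part of the claims. Uses the duration ratio (the
# "second" behavior state information) as an engagement proxy; the linear
# rule and the max_items cap are assumptions.

def target_information_amount(state_info, max_items=10):
    engagement = state_info.get("second", 0.0)       # in [0, 1]
    return max(1, round(max_items * (1.0 - engagement)))
```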
12. The apparatus of claim 7, further comprising:
a second identification module configured to identify first text information of each multimedia image in one or more multimedia images corresponding to the first multimedia information; and
an obtaining module configured to obtain a first keyword in the first text information of each multimedia image;
the obtaining module is further configured to obtain, based on each first keyword, a key information point related to the content of each multimedia image, so as to obtain the key information points included in the first multimedia information;
alternatively, the apparatus further comprises:
a conversion module configured to convert target audio corresponding to the first multimedia information into second text information; and
the obtaining module is configured to obtain a second keyword in the second text information;
the obtaining module is further configured to obtain, based on the second keyword, a key information point related to the content of the target audio, so as to obtain the key information points included in the first multimedia information.
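Both branches of claim 12 share the same text-to-keyword-to-key-information-point chain. In the sketch below, ocr(), transcribe(), extract_keywords() and lookup() stand in for an OCR engine, a speech-to-text engine, a keyword extractor, and a keyword-to-key-information-point index; none of these is named by the claim, so all four are assumptions:

```python
# Minimal sketch, not part of the claims; all four callables are assumptions.

def key_points_from_images(images, ocr, extract_keywords, lookup):
    """First branch: OCR each multimedia image, then map keywords to points."""
    points = []
    for image in images:
        text = ocr(image)                   # first text information
        for keyword in extract_keywords(text):
            points.append(lookup(keyword))  # key information point for keyword
    return points

def key_points_from_audio(audio, transcribe, extract_keywords, lookup):
    """Second branch: transcribe the target audio, then map keywords to points."""
    text = transcribe(audio)                # second text information
    return [lookup(kw) for kw in extract_keywords(text)]
```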
13. An information management system comprising a terminal and a server, the server comprising the information management apparatus according to any one of claims 7 to 12.
14. A computer device comprising a processor and a memory, wherein:
the memory is configured to store a computer program; and
the processor is configured to execute the program stored in the memory to implement the information management method according to any one of claims 1 to 6.
15. A storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the information management method according to any one of claims 1 to 6.
CN201910646245.4A 2019-07-17 2019-07-17 Information management method, device, system, computer equipment and storage medium Active CN111327943B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910646245.4A CN111327943B (en) 2019-07-17 2019-07-17 Information management method, device, system, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910646245.4A CN111327943B (en) 2019-07-17 2019-07-17 Information management method, device, system, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111327943A CN111327943A (en) 2020-06-23
CN111327943B (en) 2022-08-02

Family

ID=71171031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910646245.4A Active CN111327943B (en) 2019-07-17 2019-07-17 Information management method, device, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111327943B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112527870B (en) * 2020-12-03 2023-09-12 北京百度网讯科技有限公司 Electronic report generation method, device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106057004A (en) * 2016-05-26 2016-10-26 广东小天才科技有限公司 Online learning method, online learning device and mobile device
CN107481568A (en) * 2017-09-19 2017-12-15 广东小天才科技有限公司 Knowledge point consolidation method and user terminal
CN107958433A (en) * 2017-12-11 2018-04-24 吉林大学 Online education human-machine interaction method and system based on artificial intelligence
CN108304793A (en) * 2018-01-26 2018-07-20 北京易真学思教育科技有限公司 On-line study analysis system and method
CN109035082A (en) * 2017-06-09 2018-12-18 曾静怡 Automatic evaluation system for education management
CN109523852A (en) * 2018-11-21 2019-03-26 合肥虹慧达科技有限公司 Learning interaction system based on visual monitoring and interaction method thereof
CN109885727A (en) * 2019-02-21 2019-06-14 广州视源电子科技股份有限公司 Data pushing method, device, electronic equipment and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6928260B2 (en) * 2001-04-10 2005-08-09 Childcare Education Institute, Llc Online education system and method
CN104935627B (en) * 2015-04-06 2018-08-03 马常群 Routing system with an online education function
CN105451039B (en) * 2015-09-15 2019-05-24 北京合盒互动科技有限公司 Multimedia information interaction method and system
CA2998956C (en) * 2015-11-26 2023-03-21 Sportlogiq Inc. Systems and methods for object tracking and localization in videos with adaptive image representation
CN106056997A (en) * 2016-08-16 2016-10-26 葫芦岛市连山区职业教育中心 Computer multimedia remote education training teaching device
CN107067851A (en) * 2017-05-27 2017-08-18 乐学汇通(北京)教育科技有限公司 On-demand interaction system and method based on video streaming
CN109493661A (en) * 2018-11-02 2019-03-19 广州睿致教育咨询有限公司 Online video teaching method

Also Published As

Publication number Publication date
CN111327943A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
WO2021088510A1 (en) Video classification method and apparatus, computer, and readable storage medium
US11151892B2 (en) Internet teaching platform-based following teaching system
US10909111B2 (en) Natural language embellishment generation and summarization for question-answering systems
CN113709384A (en) Video editing method based on deep learning, related equipment and storage medium
EP4099709A1 (en) Data processing method and apparatus, device, and readable storage medium
CN110619460A (en) Classroom quality assessment system and method based on deep learning target detection
CN113870395A (en) Animation video generation method, device, equipment and storage medium
CN110516749A Model training method, video processing method, device, medium and computing device
CN110531849A Augmented reality intelligent tutoring system based on 5G communication
CN113132741A (en) Virtual live broadcast system and method
DE102021125184A1 Personal talk recommendations using listener responses
CN111615002A (en) Video background playing control method, device and system and electronic equipment
CN111629222B (en) Video processing method, device and storage medium
CN116049557A (en) Educational resource recommendation method based on multi-mode pre-training model
CN116018789A (en) Method, system and medium for context-based assessment of student attention in online learning
CN109862375B (en) Cloud recording and broadcasting system
CN111427990A (en) Intelligent examination control system and method assisted by intelligent campus teaching
CN111327943B (en) Information management method, device, system, computer equipment and storage medium
CN109885727A Data pushing method, device, electronic equipment and system
CN112995690B (en) Live content category identification method, device, electronic equipment and readable storage medium
Sarkar et al. Avcaffe: A large scale audio-visual dataset of cognitive load and affect for remote work
CN111161592B (en) Classroom supervision method and supervising terminal
CN112040277B (en) Video-based data processing method and device, computer and readable storage medium
MacHardy et al. Engagement analysis through computer vision
CN113257060A (en) Question answering solving method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant