CN111460218A - Information processing method and device - Google Patents


Info

Publication number
CN111460218A
CN111460218A (application CN202010244732.0A)
Authority
CN
China
Prior art keywords
target
video
information
list
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010244732.0A
Other languages
Chinese (zh)
Inventor
陶嘉明
李刚
武亚强
张晓平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202010244732.0A
Publication of CN111460218A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G06F16/739 Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation

Abstract

The application discloses an information processing method and device. A target video is acquired, and when a target object determines a target requirement according to the target video, a generation rule is determined according to that requirement. A first summary list is then generated according to the target video and the generation rule, and the target video and the first summary list are displayed together. Because the generation rule corresponds to the requirement determined by the target object, the first summary list generated according to the rule is matched with the target object and satisfies its requirement, improving the target object's experience.

Description

Information processing method and device
Technical Field
The present application belongs to the field of video technologies, and in particular, to an information processing method and apparatus.
Background
With the development of information technology, users can obtain more and more information through the internet; for example, a user may obtain a video over the internet. However, because a video typically covers a broad range of content, a user who is interested in only part of it must browse the video from beginning to end to locate that part. This wastes the user's search time, fails to satisfy each user's viewing requirements, and degrades the user experience.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
an information processing method comprising:
acquiring a target video;
acquiring a target demand, wherein the target demand is determined by a target object according to the target video;
determining a generation rule according to the target requirement;
generating a first summary list according to the target video and the generation rule;
and displaying the target video and the first summary list.
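As an illustrative sketch only (not part of the claims), the five claimed steps can be modeled as follows; all names and data shapes here are hypothetical, with the target video stood in by a file name, summary information by plain strings, and the generation rule by a simple keyword predicate:

```python
from typing import Callable, List, Tuple

def determine_generation_rule(target_requirement: str) -> Callable[[str], bool]:
    # S103: turn the target requirement into a rule (here: keyword containment).
    needle = target_requirement.lower()
    return lambda text: needle in text.lower()

def generate_first_summary_list(segment_descriptions: List[str],
                                rule: Callable[[str], bool]) -> List[str]:
    # S104: keep only the segment descriptions that satisfy the generation rule.
    return [d for d in segment_descriptions if rule(d)]

def display(target_video: str, summary_list: List[str]) -> Tuple[str, List[str]]:
    # S105: stand-in for rendering; returns what would be shown together.
    return target_video, summary_list

# S101/S102: target video and target requirement are given directly here.
segments = ["opening remarks", "reading skills part 1",
            "grammar review", "reading skills part 2"]
rule = determine_generation_rule("reading skills")
shown = display("lesson.mp4", generate_first_summary_list(segments, rule))
```

The keyword rule is only one possible realization; the claims leave the concrete form of the generation rule open.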
Optionally, the first summary list includes at least one piece of first summary information, and each piece of the first summary information has a corresponding relationship with at least one video segment in the target video.
Optionally, the method further comprises:
acquiring a video segment in the target video corresponding to each piece of first summary information in the first summary list;
generating a target demand video based on the acquired video clip;
and displaying the target demand video.
Optionally, the generating a first summary list according to the target video and the generation rule includes:
acquiring a second summary list matched with the target video, wherein the second summary list comprises at least one piece of second summary information, and each piece of second summary information has a corresponding relation with at least one video segment in the target video;
determining the second summary information meeting the generation rule as first summary information;
and generating a first summary list based on the first summary information.
Optionally, the generating a first summary list according to the target video and the generation rule includes:
acquiring first characteristic information of the target video, wherein the first characteristic information corresponds to at least one video segment of the target video and can represent video content of the corresponding video segment;
determining the first characteristic information meeting the generation rule as target characteristic information;
generating first summary information corresponding to each target characteristic information;
and generating a first summary list based on the first summary information.
Optionally, the method further comprises:
acquiring second characteristic information;
the determining that the first characteristic information satisfying the generation rule is target characteristic information includes:
and determining target characteristic information having a characteristic association relation with the second characteristic information in the first characteristic information.
Optionally, the obtaining the second feature information includes:
and acquiring second characteristic information from a target database, wherein the target database has a corresponding relationship with the target object.
Optionally, the obtaining the second feature information includes:
acquiring an associated video having a corresponding relation with the target video;
and determining second characteristic information having a corresponding relation with the target object based on the associated video.
Optionally, the determining, based on the associated video, second feature information having a corresponding relationship with the target object includes:
and acquiring third characteristic information corresponding to the associated video, and determining the third characteristic information having a corresponding relation with the target object as the second characteristic information.
An information processing apparatus comprising:
a first acquisition unit configured to acquire a target video;
the second acquisition unit is used for acquiring a target demand, wherein the target demand is determined by a target object according to the target video;
the determining unit is used for determining a generating rule according to the target requirement;
the generating unit is used for generating a first summary list according to the target video and the generation rule;
and the display unit is used for displaying the target video and the first summary list.
According to the technical solutions above, the application discloses an information processing method and device: a target video is acquired, and when the target object determines a target requirement according to the target video, a generation rule is determined according to that requirement. A first summary list is generated according to the target video and the generation rule, and the target video and the first summary list are displayed. Because the generation rule corresponds to the requirement determined by the target object, the first summary list generated according to the rule is matched with the target object and satisfies its requirement, improving the target object's experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings based on the provided drawings without creative effort.
FIG. 1 is a schematic diagram illustrating a scenario of an information processing system according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating an information processing method according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating an information processing method based on an original summary list of a target video according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a method for generating a summary list based on feature information of a target video according to an embodiment of the present application;
fig. 5 is a schematic structural diagram illustrating an information processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to facilitate understanding of the information processing method of the present application, a scenario to which the method is applied is described below. Referring to fig. 1, a schematic view of a scene of an information processing system according to the present application is shown. The information processing system may include any of the information processing apparatuses provided in the embodiments of the present application, and the apparatus may be integrated in an electronic device. The electronic device may be a server or a client: the server may be a background server of a video website, of web browsing, or of instant messaging software, and the client may be a video playing device, a smart phone, or a tablet computer.
The electronic device 100 shown in fig. 1 may acquire a target video; acquire a target requirement; determine a generation rule according to the target requirement; generate a first summary list according to the target video and the generation rule; and display the target video and the first summary list.
The target video may represent a video having target identification information, such as time identification information, source device identification information, or user identification information. A video having time identification information may be a newly updated video; a video having source device identification information may be a video with a designated sender; and a video having user identification information may be a video addressed to a designated receiving user. Specifically, the target video may be a video played by a video website, a video shared or sent by a specific user, a video downloaded by the user, and so on.
In one possible implementation, the target video may be a video having an association relationship with a target object. The target object may be a recipient of the target video, i.e., a user who views or uses it. For example, if the user is a student, the target video may be a recording of one of the student's classes.
The target object may generate or select a corresponding target requirement according to the target video. The target requirement may be a requirement for a corresponding video clip, or for information related to the video content that the target object desires to obtain through the target video. The electronic device performing the information processing can acquire the target requirement when the target object triggers the target video in a specific manner. As shown in fig. 1, the target object generates an information processing instruction for the target video by long-pressing it. At this time, the electronic device 100 may present an information input box through which the target object inputs the corresponding requirement information. The input box may also take the form of a selection box offering candidate requirements for the user to choose from.
Then, the electronic device 100 determines a generation rule according to the target requirement, wherein the generation rule represents a processing rule for the target video, and the processing rule satisfies the requirement of the current target object. Specifically, the generation rule may be a rule for extracting some segments of the target video and generating summary information of the segments, a rule for extracting corresponding feature information of the target video and generating the summary information, a rule for screening original summary information of the target video, and the like.
After the generation rule is determined, a first summary list is generated according to the target video and the generation rule. The first summary list may include at least one piece of first summary information, and each piece can characterize the main content of the video segment corresponding to it. In response to a trigger operation by the user, a piece of first summary information can jump to its corresponding video segment: for example, after the user selects the first summary information, the displayed target video jumps to and plays the corresponding segment. Finally, the electronic device 100 displays the target video and the first summary list, so that the target object obtains not only the target video but also a first summary list matched to itself. When subsequently viewing the target video, the target object can locate the corresponding video segments through the first summary list, reducing the time spent viewing or searching through content it is not interested in. The target object's requirement is thus satisfied while watching or using the target video, improving its experience.
Referring to fig. 2, a schematic flow chart of the information processing method provided in the embodiment of the present application is shown, where the information processing method is applied to an electronic device such as a server or a client, and a specific flow may be as follows:
and S101, acquiring a target video.
In this embodiment, the target video may be a video played by a video website, a video shared or sent by a specific user, or a video downloaded by a user, and so on. Before processing the target video, it may be detected whether an information processing instruction is received, where the information processing instruction is used to instruct processing the target video, and the information processing instruction may be automatically generated by the system, for example, when a user receives a video shared by other users through the instant messaging software, a background server of the instant messaging software may automatically generate an information processing instruction corresponding to the target video. The information processing instruction may also be triggered manually by a user, for example, when the user browses videos of a video website, the information processing instruction may be generated in a predetermined triggering manner, such as by pressing a video for a long time or clicking a shortcut key for selection.
In a possible implementation manner, the information processing instruction may be obtained first, and then the corresponding video is obtained as the target video according to the information processing instruction. At this time, the information processing instruction may be parsed to obtain information capable of extracting the target video, and the target video may be obtained according to the information, for example, if the information processing instruction includes an instruction to process videos of a specified sender, then videos sent by the specified sender in real time may all be regarded as the target video for subsequent processing. In the embodiment of the present application, the format and the acquisition mode of the target video are not limited, as long as the corresponding video can be obtained as the target video.
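The sender-based selection described above can be sketched as follows; this is an illustrative example, with the instruction modeled as a plain dictionary and all field names hypothetical:

```python
from typing import Dict, List

def select_target_videos(instruction: Dict[str, str],
                         incoming_videos: List[Dict[str, str]]) -> List[Dict[str, str]]:
    # If the parsed instruction designates a sender, every video from that
    # sender is treated as a target video; otherwise all videos qualify.
    sender = instruction.get("sender")
    if sender is None:
        return list(incoming_videos)
    return [v for v in incoming_videos if v["sender"] == sender]

videos = [{"title": "a.mp4", "sender": "alice"},
          {"title": "b.mp4", "sender": "bob"}]
targets = select_target_videos({"sender": "alice"}, videos)
```

A real system would parse the instruction from a UI trigger or a server-side event rather than receive it as a dictionary, but the selection logic would be analogous.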
And S102, acquiring target requirements.
In the embodiment of the present application, the target object may be a receiver of the target video, a user of the target video, a person appearing in the target video, or a person associated with such a person. The target object determines the target requirement according to the target video. In one possible implementation, the target object generates the corresponding target requirement after learning the content represented by the target video. In another possible implementation, the target object may generate the target requirement without knowing the specific content of the target video; the requirement still corresponds to the target video, only the specific content or scope of the video is unknown to the target object.
The target requirement includes the target object's requirement to acquire corresponding video segments of the target video, and may also include a requirement, generated by the target object, to acquire characteristic information that is embodied by the target video.
S103, determining a generation rule according to the target requirement.
And S104, generating a first summary list according to the target video and the generation rule.
After the target object determines the target requirement according to the target video, the generation rule is determined according to the requirement information it contains. The generation rule is a rule for generating a summary list corresponding to the target requirement. Specifically, it may include a rule for extracting video segments satisfying the target requirement from the target video and then generating corresponding summary information from the content of each extracted segment; the summary information of the segments is then combined into a first summary list. The first summary list thus corresponds to the requirement determined by the target object, i.e., it is matched with the target object. Correspondingly, the generation rule may also be a rule for screening existing summary information: for example, if the target video carries original summary information, that information can be screened by the filter conditions in the generation rule to obtain a first summary list matched with the rule.
At least one piece of first summary information may be included in the first summary list, and each piece has a corresponding relationship with at least one video segment in the target video. The text content of each piece of first summary information can represent the content of its corresponding video segment; it may take the form of a link button, so that clicking the first summary information plays the corresponding segment. For example, the first summary information may be displayed as the title of the corresponding video clip. In a specific embodiment, when the target video is a teaching video, the text of the first summary information may correspond to a knowledge point or topic in the teaching video, such as "XX topic explanation".
And S105, displaying the target video and the first summary list.
After the first summary list is generated, the target video and the first summary list are displayed. According to the first summary information displayed in the list, the target object can obtain the video segment corresponding to a selected piece of first summary information. In this way, a first summary list matched with the target object is displayed alongside the target video. Compared with having to view the entire target video before finding the segments of interest, the target object can locate them much faster through the first summary list. The corresponding video segment can be presented in two ways: the segments may be extracted and stored in advance, so that clicking a piece of first summary information plays the stored clip; or the first summary information may be associated with the time axis of the target video, so that clicking it jumps the target video directly to the time point of the corresponding segment for playback.
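The two presentation strategies for a clicked summary entry can be sketched as a small dispatcher; the table contents, entry names, and clip path here are all hypothetical:

```python
from typing import Dict, Tuple, Union

# One table maps summary entries to pre-extracted clip files; the other maps
# entries to time points on the target video's own timeline.
PRE_EXTRACTED_CLIPS: Dict[str, str] = {"reading skills": "clips/reading.mp4"}
TIME_ANCHORS: Dict[str, float] = {"grammar review": 310.0}

def resolve_click(entry: str) -> Tuple[str, Union[str, float]]:
    # Decide what the player should do when a piece of first summary
    # information is clicked: play a stored clip, or seek the target video.
    if entry in PRE_EXTRACTED_CLIPS:
        return ("play_clip", PRE_EXTRACTED_CLIPS[entry])
    if entry in TIME_ANCHORS:
        return ("seek", TIME_ANCHORS[entry])
    raise KeyError(f"no video segment linked to summary entry {entry!r}")
```

Pre-extracted clips trade storage for instant playback, while timeline anchors avoid duplicating video data at the cost of a seek in the full target video.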
According to the information processing method provided by the embodiment of the application, a target video and a target requirement are acquired; a generation rule is determined according to the target requirement; a first summary list is generated according to the target video and the generation rule; and the target video and the first summary list are displayed. Because the target requirement is determined by the target object according to the target video, the resulting generation rule fits the target object, and the first summary list generated from it better matches the target object's requirement, improving the target object's experience of the video.
In order to satisfy the requirement of the target object and let it acquire the corresponding video segments more conveniently, after the first summary list is generated, the information processing method of the embodiment of the present application may further include the following processing procedure:
s201, acquiring the video clips in the target video corresponding to each piece of first summary information in the first summary list;
s202, generating a target demand video based on the acquired video clips;
and S203, displaying the target demand video.
According to the generation rule, the generated first summary list includes at least one piece of first summary information, and each piece has a corresponding relationship with at least one video segment in the target video. The video segments corresponding to each piece of first summary information can therefore be acquired. It should be noted that each piece of first summary information may correspond to more than one segment: in a possible implementation, a piece of first summary information in the first summary list corresponds to two video segments. For example, the target video is an instructor's explanation of a lesson, and the first summary list is generated for user A. One piece of first summary information in it is "reading skills", which the instructor explains in two non-adjacent video segments, so this piece of summary information corresponds to both segments.
After the video segments corresponding to the respective first summary information are acquired, the target demand video is generated from them. The order of the clips in the target demand video can be determined by the order of the corresponding pieces of first summary information in the first summary list, the clips being concatenated in that order. Correspondingly, the order of the clips can also be determined by their positions in the target video.
After the target demand video is generated, it may be displayed. The target object can directly watch or download it, so its requirement is satisfied and the required video is acquired more quickly and conveniently. In another possible implementation, the generated target demand video may be sent directly to the target object. For example, on a video download platform, after the target object determines a corresponding target requirement according to an existing video on the platform, a generation rule is determined according to that requirement, which may be a rule for generating a summary list for the requirement. After the first summary list determined by the target object is generated, the target demand video corresponding to it is generated and may be sent directly to the target object, saving the time of downloading the video from the platform. The target object can then watch and use the received target demand video, improving its experience.
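The clip-ordering choice in S202 can be sketched as follows, with clips represented as hypothetical (start, end) second offsets rather than actual video data:

```python
from typing import List, Tuple

Clip = Tuple[float, float]  # (start_s, end_s) within the target video

def build_target_demand_video(clips_per_entry: List[List[Clip]],
                              order_by_position: bool = False) -> List[Clip]:
    # clips_per_entry[i] holds the (possibly non-adjacent) clips linked to the
    # i-th piece of first summary information. By default the playlist follows
    # summary-list order; order_by_position=True instead follows the clips'
    # positions in the target video.
    playlist = [clip for clips in clips_per_entry for clip in clips]
    if order_by_position:
        playlist.sort(key=lambda clip: clip[0])
    return playlist

# First entry ("reading skills") maps to two non-adjacent clips; second entry to one.
playlist = build_target_demand_video([[(300.0, 360.0), (40.0, 80.0)],
                                      [(100.0, 140.0)]])
```

An actual implementation would then concatenate the video data for these ranges with a media tool; only the ordering logic is shown here.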
The format and content of the target video are not limited in the embodiments of the present application. When the target video has an original summary list, the target video can be further processed through that list to obtain a first summary list matched with the target object. Referring to fig. 3, a schematic flowchart of an information processing method based on the original summary list of a target video according to an embodiment of the present application is shown; the flow may include the following steps:
s301, acquiring a target video;
s302, acquiring a target requirement;
s303, determining a generation rule according to a target requirement;
s304, acquiring a second summary list matched with the target video, wherein the second summary list comprises at least one piece of second summary information, and each piece of second summary information has a corresponding relation with at least one video segment in the target video;
s305, determining second summary information meeting the generation rule as first summary information;
s306, generating a first summary list based on the first summary information;
and S307, displaying the target video and the first summary list.
In this embodiment, the target video carries a second summary list matched with it. The second summary list is the original summary list of the target video, for example a summary list automatically generated for the target video by the background server of the target video's source. It includes at least one piece of second summary information, each piece having a corresponding relationship with at least one segment of the target video; that is, each piece of second summary information can represent the content of its corresponding video segment, and clicking it jumps to that segment. It should be noted that each piece of second summary information is generated according to the content of the target video's segments and is related only to the target video, so it has general applicability.
When acquiring the first summary list matched with the target object, the second summary list of the target video can be acquired first, and the second summary information satisfying the generation rule is then determined to be first summary information. In other words, the second summary information is filtered to obtain the first summary information. Each piece of second summary information in the second summary list is generated by default by the corresponding processing platform or processor, so it relates to the content of the target video's segments but does not suit every user's requirement. Since the generation rule is determined according to the target requirement, it reflects the actual requirement of the target object. The second summary information can therefore be filtered by the generation rule to obtain the summary information satisfying the target object's requirement; that information is determined to be first summary information, and the first summary list is generated from it. A first summary list fitting the target object is thus extracted from the second summary list, and each piece of first summary information in it can receive a trigger instruction from the target object and jump to the corresponding required video segment, so that the target object can acquire the required video clips more quickly. For example, the target video includes several people, and its second summary list is generated according to the tracks of the people, i.e., each piece of second summary information contains description information of a corresponding person.
If the target requirement is to acquire the video clips of the target object, the corresponding generation rule is to acquire the summary information matched with the target object, and the description information of each person in the second summary information can be screened so that only the description information matched with the target object is kept; this description information then serves as first summary information, and the pieces of first summary information are combined into the first summary list. When the first summary list is generated from the first summary information, the pieces may be combined in the playback order of their corresponding video segments, or the list may be generated according to the correspondence relation of each piece of first summary information. The target video and the first summary list are then displayed on the corresponding electronic device.
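The filtering described above can be sketched as follows; the entries, the `person` field, and the rule predicate are hypothetical illustrations rather than anything specified by the patent:

```python
def filter_summary_list(second_list, rule):
    """Keep only the second summary information satisfying the generation
    rule; the surviving entries become the first summary information,
    in the playback order of their corresponding video segments."""
    return [entry for entry in second_list if rule(entry)]

# Hypothetical per-person summary entries for a multi-person target video.
second_list = [
    {"text": "Person A enters", "person": "A", "start_s": 10},
    {"text": "Person B speaks", "person": "B", "start_s": 45},
    {"text": "Person A leaves", "person": "A", "start_s": 90},
]

# Generation rule derived from the target requirement: keep person A only.
rule = lambda entry: entry["person"] == "A"
first_list = filter_summary_list(second_list, rule)
# first_list keeps the two entries about person A, in playback order.
```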
In another possible implementation manner, the target video carries no pre-generated or default second summary list, and the first summary list corresponding to the target object must instead be obtained from the video segments or information content of the target video itself. In this embodiment, after the target video and the target requirement are acquired and the generation rule is determined according to the target requirement, the first summary list is generated according to the target video and the generation rule. Referring to fig. 4, a flowchart of a method for generating a summary list based on feature information of a target video is shown; the method may include the following steps:
s401, acquiring a target demand, wherein the target demand is determined by a target object according to a target video;
s402, determining a generation rule according to a target requirement;
s403, acquiring first characteristic information of the target video;
s404, determining first characteristic information meeting the generation rule as target characteristic information;
s405, generating first summary information corresponding to each target feature information;
s406, generating a first summary list based on the first summary information;
and S407, displaying the target video and the first summary list.
In this embodiment, the first feature information of the target video needs to be obtained. The first feature information corresponds to at least one video segment of the target video and can represent the video content of the corresponding segment. The first feature information may be information carried by the target video that describes the content of each of its video segments, but it cannot be used directly as summary information: it cannot jump to the corresponding video segment in response to a trigger instruction from a user, because it is plain text information rather than a link button. In another possible implementation manner, the target video may be analyzed to obtain the first feature information corresponding to it; that is, the first feature information is obtained by analyzing, for example, the main content of the target video or the attribute information of the video.
Since the first feature information is generated to describe the video content of the video clips of the target video, it is associated only with the target video. To generate a first summary list corresponding to the target object, however, processing must be based on feature information that can be matched with the target object. Therefore, after the first feature information of the target video is obtained, the first feature information satisfying the generation rule is determined to be the target feature information; that is, the first feature information is screened by the generation rule to obtain the target feature information. The target feature information still only represents the video content of a corresponding video segment and cannot be used directly as first summary information; it must be processed further, generating one piece of first summary information for each piece of target feature information. For example, the target feature information is condensed into a short summary, and link information that can jump to the corresponding video segment is added to it to obtain the first summary information, so that a user can jump to the corresponding video segment through operations such as selecting the first summary information. Finally, a first summary list is generated based on the first summary information, and the target video and the first summary list are displayed on the corresponding electronic device.
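Steps S403 to S406 amount to a screen-then-generate pipeline. The sketch below assumes hypothetical feature records and rule/summary functions; none of the field names come from the patent itself:

```python
def build_first_summary_list(first_features, rule, to_summary):
    """Steps S404-S406: screen the first feature information with the
    generation rule to obtain target feature information, then generate
    one piece of first summary information per target feature."""
    target_features = [f for f in first_features if rule(f)]  # S404
    return [to_summary(f) for f in target_features]           # S405-S406

# Hypothetical first feature information extracted from the target video.
first_features = [
    {"topic": "fractions", "start_s": 0},
    {"topic": "geometry", "start_s": 300},
]

# Generation rule derived from the target requirement.
rule = lambda f: f["topic"] == "geometry"
# A summary pairs descriptive text with link information to the segment.
to_summary = lambda f: {"text": f["topic"], "link_s": f["start_s"]}

first_list = build_first_summary_list(first_features, rule, to_summary)
```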
According to this embodiment, a first summary list that meets the requirement of the target object is generated from the feature information representing the video clip content of the target video. Because the summary list is matched to the target object, it can be generated in a personalized manner, improving the user's experience.
If the target requirement of the target object cannot be matched directly with the first feature information of the target video (for example, because the requirement information of the target requirement is not direct requirement information but indirectly generated requirement information), the target feature information corresponding to the generation rule cannot be obtained directly from the first feature information. In this application scenario, the information processing method according to the embodiment of the present application further includes:
and acquiring second characteristic information.
The determining that the first feature information meeting the generation rule is the target feature information includes:
and determining target characteristic information having a characteristic association relation with the second characteristic information in the first characteristic information.
The second feature information is information different from the first feature information; it may be information associated with the target requirement of the target object, and it also has a feature matching relationship with the first feature information. For example, suppose the target object needs to acquire the explanation video for a wrong question. To obtain that explanation video, it must first be known which video segments correspond to the explanation process of the wrong question, and therefore which features belong to the wrong question or correspond to its explanation. The corresponding second feature information may thus characterize the wrong question itself or the knowledge point corresponding to the wrong question.
Then, the target feature information having a feature association relationship with the second feature information is determined within the first feature information. The first feature information can represent the video content of the video clips of the target video, and the second feature information can be matched with the target requirement of the target object; therefore, determining the information in the first feature information that has a feature association relationship with the second feature information amounts to determining the information associated with the target requirement of the target object, which yields the target feature information. First summary information corresponding to each piece of target feature information is then generated, and the summary list is generated based on the first summary information.
For example, the target video is a recording of a teacher's class. The target requirement is that the target object (say, student A) acquire the explanation videos corresponding to his or her wrong questions. Correspondingly, the first feature information of the target video may include information about each piece of content the teacher explains, while the second feature information can represent the features of the test questions corresponding to student A's wrong questions and the features of the knowledge points corresponding to those wrong questions. These features can be matched against the information about each piece of content the teacher explains, and the matching information can serve as the target feature information. Corresponding first summary information is then generated from the target feature information: its text content represents the content of the corresponding video segment, and it also includes link information that can jump to that segment, i.e., the first summary information can be presented as a link button for the corresponding video segment. Clicking the first summary information jumps playback to the corresponding video clip.
In the above example, the text content of the first summary information may be the wrong-question knowledge point and the wrong-question explanation process. When the user clicks the summary information "wrong-question knowledge point", the video clip of the target video corresponding to that knowledge point is played; correspondingly, when the user clicks the summary information "wrong-question explanation process", the video clip in which the teacher explains the wrong question is played. The pieces of first summary information are then assembled into a first summary list, so that student A can obtain each video clip matched to the corresponding wrong question from the first summary list.
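The feature association step can be sketched as a match between the two feature sets. All field names and the `related` predicate here are hypothetical illustrations under the wrong-question example:

```python
def match_target_features(first_features, second_features, related):
    """Determine the target feature information: the first feature
    information that has a feature association with any second feature
    (e.g. a lecture segment explaining a wrong question's knowledge point)."""
    return [f for f in first_features
            if any(related(f, s) for s in second_features)]

# Hypothetical lecture features (first) and wrong-question features (second).
lecture = [
    {"knowledge_point": "quadratics", "start_s": 0},
    {"knowledge_point": "trigonometry", "start_s": 600},
]
wrong_questions = [{"knowledge_point": "trigonometry"}]

# Association: the lecture segment covers the wrong question's knowledge point.
related = lambda f, s: f["knowledge_point"] == s["knowledge_point"]
targets = match_target_features(lecture, wrong_questions, related)
```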
In the embodiment of the present application, a method for acquiring second feature information is further provided, where the method includes:
second characteristic information is obtained from the target database.
The target database has a corresponding relationship with the target object. The corresponding relationship may represent the relation between the information in the target database and the target object; for example, the target database may store operation information generated by the target object, information recorded by the target object, and so on.
The second feature information retrieved from the target database may then be determined based on the target requirement generated by the target object. Because the second feature information can be matched with the requirement information in the target requirement, the target feature information determined from it matches the requirement of the user more closely, and the first summary list generated in turn can meet the requirement of the target object. For example, the target database may be an exercise set or a wrong-question set of the target object.
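A toy illustration of retrieving second feature information from a target database, assuming a hypothetical wrong-question store keyed by student (the schema is invented for illustration):

```python
# Toy target database with a corresponding relation to each target object.
wrong_question_db = {
    "student_a": [{"question_id": "q17", "knowledge_point": "trigonometry"}],
    "student_b": [{"question_id": "q03", "knowledge_point": "fractions"}],
}

def get_second_features(database, target_object):
    """Acquire the second feature information recorded for the target
    object from its target database; empty if nothing is recorded."""
    return database.get(target_object, [])

features = get_second_features(wrong_question_db, "student_a")
```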
In another possible implementation manner, the obtaining the second feature information includes:
acquiring an associated video having a corresponding relation with a target video;
and determining second characteristic information having a corresponding relation with the target object based on the associated video.
In this embodiment, the second feature information is acquired from an associated video that has a corresponding relationship with the target video. The corresponding relationship between the target video and the associated video may be a correspondence of content, of time, or of objects appearing in the videos, among others. For example, if the target video is a teacher's classroom explanation video, the corresponding associated video may be a classroom recording of the students or a recording of a student doing exercises.
Then, the second feature information having a corresponding relationship with the target object is determined based on the associated video. That relationship may hold between the feature information and the target object itself, or between the feature information and information generated by the target object. The second feature information may be obtained by actively analyzing the video content of the associated video. For example, the associated video is a classroom recording of student A, documenting both student A's classroom performance and the process of doing exercises, and it may show which of the questions student A did were answered wrongly. Second feature information corresponding to the target object (student A) can therefore be determined from the associated video; that is, the second feature information can characterize student A's wrong questions. Specifically, the information in the associated video may be identified, the information bearing a wrong-question mark obtained, and that information determined to be the second feature information.
In some embodiments, the determining, based on the associated video, second feature information having a corresponding relationship with a target object includes:
and acquiring third characteristic information corresponding to the associated video, and determining the third characteristic information having a corresponding relation with the target object as second characteristic information.
Third feature information corresponding to the associated video may be obtained first, the third feature information corresponding to the video content of at least one video segment of the associated video. The third feature information having a corresponding relationship with the target object is then determined to be the second feature information. Specifically, the second feature information may be: the third feature information corresponding to the target requirement of the target object; the third feature information that includes the target object; or the third feature information having a corresponding relationship with specific information of the target object.
For example, the associated video is a classroom recording of student A, and the third feature information may be information about the video clips of that recording, such as note-taking information and exercise information, forming an information set covering all the students. If student A's target requirement is to obtain the explanation videos for his or her wrong questions, the wrong-question information in the third feature information that corresponds to student A's wrong-question marks is determined to be the second feature information.
In another possible implementation manner, the associated video itself carries a third summary list, each piece of third summary information in which has a corresponding relationship with at least one video segment of the associated video. Each piece of third summary information may be analyzed to obtain the feature information it corresponds to, and that feature information may be screened to obtain the second feature information. For example, if the associated video is a classroom recording of a student, the third summary information of its third summary list may summarize the student listening to the lecture, answering questions, doing exercises, and so on; selecting a piece of third summary information displays the corresponding video segment of the associated video. Analyzing the third summary information yields the feature information it contains, which is then screened for the features related to the target requirement of the target object to serve as the second feature information, for example the features of the wrongly answered test questions in the recording of the student doing exercises.
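The analyze-then-screen handling of the third summary list might be sketched as follows, with hypothetical entries, tags, and predicates standing in for the real analysis:

```python
def second_features_from_third_list(third_list, extract, relevant):
    """Analyse each piece of third summary information of the associated
    video into feature information, then screen for the features related
    to the target requirement of the target object."""
    candidates = [extract(entry) for entry in third_list]
    return [c for c in candidates if relevant(c)]

# Hypothetical third summary list of a student's classroom recording.
third_list = [
    {"text": "student answers a question", "tag": "answer"},
    {"text": "student gets q17 wrong", "tag": "wrong_question"},
]

extract = lambda entry: entry                      # identity "analysis"
relevant = lambda c: c["tag"] == "wrong_question"  # target requirement
second_features = second_features_from_third_list(third_list, extract, relevant)
```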
The information processing method of the present application is described below taking a classroom video scene as an example. In a smart classroom, cameras can be installed to record lessons; the teacher can teach using an electronic device with projection, and the students can interact with the teacher in class using portable electronic devices. All the electronic information in the classroom can be recorded as video, including which operations the teacher performs on the large screen during class, which test questions are sent to the students, and which questions or knowledge points are explained at what times. The students' information is also recorded, including which test questions were done at what times and which were answered correctly or incorrectly. This information can be used to generate a customized summary list specific to the corresponding student.
In this application scenario, the target video is a classroom recording. The target requirement is student A's requirement to acquire explanation videos for weak knowledge points determined from the classroom recording. The generation rule is determined according to student A's target requirement: it is the rule by which video clips explaining student A's weak knowledge points can be extracted from the classroom recording.
Student A's weak knowledge points may include knowledge points newly taught by the teacher and knowledge points student A has not yet mastered. The latter can be determined from the questions the teacher asks, from student A's answering process on the test questions the students must complete, or from student A's own settings.
The knowledge points of the teacher's new lectures, and the questions the teacher asked in class that student A failed to answer, can be obtained from the target video (i.e., the classroom recording). Accordingly, a first summary list can be generated according to the target video and the generation rule. Specifically, the video clips of the classroom recording are analyzed to obtain the first feature information corresponding to each clip, and then the feature information satisfying the generation rule (i.e., the rule selecting clips that explain student A's weak knowledge points) is determined to be the target feature information. First summary information corresponding to the target feature information is generated, and the first summary list is generated based on it. Here the first summary information covers the knowledge points of questions student A failed to answer and the new knowledge points the teacher taught.
When the video clips corresponding to student A's weak knowledge points are obtained by analyzing student A's wrong questions, the target feature information must be determined through second feature information before it can be obtained from the target video (i.e., the classroom recording). The second feature information in this scenario may be information characterizing the wrong questions. It may be obtained from a target database (e.g., a wrong-question database or wrong-question notebook recording student A's wrong questions), that is, information characterizing student A's wrong questions may be obtained; it may also be obtained from a recording of student A doing exercises. Then, the target feature information having a feature association relationship with the second feature information is determined in the first feature information corresponding to the target video (i.e., the classroom recording); that is, the target feature information associated with student A's wrong questions is determined among the content feature information of the classroom recording. The target feature information may be the explanation information of the test questions corresponding to the wrong questions.
Specifically, when the target feature information having a feature association relationship with the second feature information is determined in the first feature information, the corresponding video clips of the target video may be extracted according to the time information of that feature information, that is, after the feature information satisfying the feature association relationship is found. For example, if a question in the student's wrong-question notebook appears in the classroom explanation video, i.e., the test question can be matched against the teacher's lesson-preparation information or the students' exercise information, then on a successful match the explanation time of the question can be obtained, and from it the video segment corresponding to that explanation time.
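This time-matching step can be sketched as follows, assuming a hypothetical explanation log that records per-question time spans in the classroom video:

```python
def clips_for_wrong_questions(wrong_questions, explanation_log):
    """Match each wrong question against the explanation log of the
    classroom video; on success, collect the explanation time span so
    the corresponding video segment can be extracted."""
    clips = []
    for q in wrong_questions:
        for entry in explanation_log:
            if entry["question_id"] == q["question_id"]:
                clips.append((entry["start_s"], entry["end_s"]))
    return clips

# Hypothetical wrong-question notebook entries and explanation log.
wrong_questions = [{"question_id": "q17"}]
explanation_log = [
    {"question_id": "q05", "start_s": 100, "end_s": 220},
    {"question_id": "q17", "start_s": 900, "end_s": 1080},
]
clips = clips_for_wrong_questions(wrong_questions, explanation_log)
# clips → [(900, 1080)]
```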
In this scenario embodiment, the summary list corresponding to a student is generated according to that student's requirement, so that when reviewing or consolidating knowledge the student can use the summary list to acquire the video clips of interest efficiently and conveniently, making study more efficient.
Referring to fig. 5, a schematic structural diagram of an information processing apparatus provided in an embodiment of the present application is shown, where the apparatus includes:
a first obtaining unit 501, configured to obtain a target video;
a second obtaining unit 502, configured to obtain a target requirement, where the target requirement is determined by a target object according to the target video;
a determining unit 503, configured to determine a generation rule according to the target requirement;
a generating unit 504, configured to generate a first summary list according to the target video and the generating rule;
a display unit 505, configured to display the target video and the first summary list.
On the basis of the above embodiment, the first summary list in the generating unit 504 includes at least one piece of first summary information, and each piece of the first summary information has a corresponding relationship with at least one video segment in the target video.
On the basis of the above embodiment, the apparatus further includes:
a third obtaining unit, configured to obtain a video segment in the target video corresponding to each piece of first summary information in the first summary list;
the demand video generation unit is used for generating a target demand video based on the acquired video clips;
and the demand video display unit is used for displaying the target demand video.
On the basis of the above embodiment, the generating unit 504 includes:
the first obtaining subunit is configured to obtain a second summary list matched with the target video, where the second summary list includes at least one piece of second summary information, and each piece of the second summary information has a corresponding relationship with at least one video segment in the target video;
a first determining subunit, configured to determine that the second summary information that satisfies the generation rule is first summary information;
and the first generation subunit is used for generating a first abstract list based on the first abstract information.
On the basis of the above embodiment, the generating unit 504 includes:
the second acquiring subunit is configured to acquire first feature information of the target video, where the first feature information corresponds to at least one video segment of the target video and can represent video content of the corresponding video segment;
a second determining subunit, configured to determine that the first feature information that satisfies the generation rule is target feature information;
the second generation subunit is used for generating first summary information corresponding to each target characteristic information;
and the third generation subunit is used for generating a first summary list based on the first summary information.
On the basis of the above embodiment, the apparatus further includes:
the fourth acquiring subunit is used for acquiring the second characteristic information;
the second determining subunit is specifically configured to:
and determining target characteristic information having a characteristic association relation with the second characteristic information in the first characteristic information.
On the basis of the foregoing embodiment, the fourth obtaining subunit is specifically configured to:
and acquiring second characteristic information from a target database, wherein the target database has a corresponding relationship with the target object.
On the basis of the foregoing embodiment, the fourth obtaining subunit is specifically configured to:
acquiring an associated video having a corresponding relation with the target video;
and determining second characteristic information having a corresponding relation with the target object based on the associated video.
On the basis of the foregoing embodiment, the determining, based on the associated video, second feature information having a correspondence relationship with the target object includes:
and acquiring third characteristic information corresponding to the associated video, and determining the third characteristic information having a corresponding relation with the target object as the second characteristic information.
An embodiment of the present application provides a storage medium having a program stored thereon, which when executed by a processor implements the information processing method.
The embodiment of the application provides a processor, wherein the processor is used for running a program, and the information processing method is executed when the program runs.
The present application also provides a computer program product adapted to execute, when run on a data processing device, a program that carries out the steps of the information processing method described above.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.

Claims (10)

1. An information processing method comprising:
acquiring a target video;
acquiring a target demand, wherein the target demand is determined by a target object according to the target video;
determining a generation rule according to the target requirement;
generating a first summary list according to the target video and the generation rule;
and displaying the target video and the first summary list.
2. The method of claim 1, wherein the first summary list comprises at least one piece of first summary information, each piece of the first summary information having a corresponding relationship with at least one video segment in the target video.
3. The method of claim 1, further comprising:
acquiring a video segment in the target video corresponding to each piece of first summary information in the first summary list;
generating a target demand video based on the acquired video clip;
and displaying the target demand video.
4. The method of claim 1, the generating a first summary list according to the target video and the generation rule, comprising:
acquiring a second summary list matched with the target video, wherein the second summary list comprises at least one piece of second summary information, and each piece of second summary information has a corresponding relation with at least one video segment in the target video;
determining the second summary information meeting the generation rule as first summary information;
and generating a first summary list based on the first summary information.
5. The method of claim 1, wherein the generating a first summary list according to the target video and the generation rule comprises:
acquiring first feature information of the target video, wherein the first feature information corresponds to at least one video segment of the target video and represents the video content of the corresponding video segment;
determining the first feature information satisfying the generation rule as target feature information;
generating first summary information corresponding to each piece of target feature information;
and generating the first summary list based on the first summary information.
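Unlike claim 4, claim 5 starts from feature information rather than ready-made summaries. The sketch below is hypothetical: the feature format (a content label per segment) and the wording of the generated summary are assumptions.

```python
# Hypothetical sketch of claim 5: per-segment feature information is
# filtered by the generation rule, and one piece of first summary
# information is generated for each surviving target feature.

def summary_list_from_features(first_feature_info, rule):
    target_features = [f for f in first_feature_info if rule(f)]
    return [f"segment {f['segment']}: {f['label']}" for f in target_features]

features = [
    {"segment": 0, "label": "whiteboard derivation"},
    {"segment": 1, "label": "code walkthrough"},
    {"segment": 2, "label": "audience questions"},
]
rule = lambda f: "code" in f["label"]
summary_list = summary_list_from_features(features, rule)
# summary_list == ["segment 1: code walkthrough"]
```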
6. The method of claim 5, further comprising:
acquiring second feature information;
wherein the determining the first feature information satisfying the generation rule as target feature information comprises:
determining, among the first feature information, the target feature information having a feature association relationship with the second feature information.
7. The method of claim 6, the obtaining second feature information comprising:
and acquiring the second feature information from a target database, wherein the target database has a corresponding relationship with the target object.
8. The method of claim 6, the obtaining second feature information comprising:
acquiring an associated video having a corresponding relationship with the target video;
and determining, based on the associated video, second feature information having a corresponding relationship with the target object.
9. The method of claim 8, wherein the determining second feature information having a correspondence relationship with the target object based on the associated video comprises:
and acquiring third feature information corresponding to the associated video, and determining the third feature information having a corresponding relationship with the target object as the second feature information.
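Claims 6 to 9 together personalize the rule: second feature information describing the target object is derived (here, from an associated video's features, per claims 8 and 9), and first feature information becomes target feature information only when associated with it. In this hypothetical sketch, shared labels stand in for the unspecified "feature association relationship".

```python
# Hypothetical sketch of claims 6-9: derive the target object's second
# feature information from an associated video's (third) features, then
# keep only the first feature information associated with it.

def second_features_from_associated_video(third_feature_info, target_object):
    # claim 9: keep the associated video's features tied to the target object
    return {f["label"] for f in third_feature_info if f["object"] == target_object}

def select_target_features(first_feature_info, second_features):
    # claim 6: first feature information associated with the second features
    return [f for f in first_feature_info if f["label"] in second_features]

third_features = [
    {"object": "viewer-42", "label": "basketball"},
    {"object": "viewer-7", "label": "cooking"},
]
first_features = [
    {"segment": 0, "label": "basketball"},
    {"segment": 1, "label": "news"},
]
second = second_features_from_associated_video(third_features, "viewer-42")
targets = select_target_features(first_features, second)
# targets == [{"segment": 0, "label": "basketball"}]
```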
10. An information processing apparatus comprising:
a first acquisition unit configured to acquire a target video;
a second acquisition unit configured to acquire a target requirement, wherein the target requirement is determined by a target object according to the target video;
a determining unit configured to determine a generation rule according to the target requirement;
a generating unit configured to generate a first summary list according to the target video and the generation rule;
and a display unit configured to display the target video and the first summary list.
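The apparatus of claim 10 can be sketched as a class whose methods mirror the claimed units. All names and the keyword-match rule are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of claim 10: one method per claimed unit
# (acquisition, determining, generating, display).

class InformationProcessingApparatus:
    def acquire_target_video(self, video):          # first acquisition unit
        self.target_video = video

    def acquire_target_requirement(self, req):      # second acquisition unit
        self.target_requirement = req

    def determine_generation_rule(self):            # determining unit
        keyword = self.target_requirement.lower()
        self.rule = lambda summary: keyword in summary.lower()

    def generate_first_summary_list(self):          # generating unit
        self.first_summary_list = [
            s for s in self.target_video["segment_summaries"] if self.rule(s)
        ]

    def display(self):                              # display unit (device-specific)
        return self.target_video, self.first_summary_list

app = InformationProcessingApparatus()
app.acquire_target_video({"segment_summaries": ["goal highlights", "interview", "goal replay"]})
app.acquire_target_requirement("goal")
app.determine_generation_rule()
app.generate_first_summary_list()
_, shown_list = app.display()
# shown_list == ["goal highlights", "goal replay"]
```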
CN202010244732.0A 2020-03-31 2020-03-31 Information processing method and device Pending CN111460218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244732.0A CN111460218A (en) 2020-03-31 2020-03-31 Information processing method and device

Publications (1)

Publication Number Publication Date
CN111460218A true CN111460218A (en) 2020-07-28

Family

ID=71680644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244732.0A Pending CN111460218A (en) 2020-03-31 2020-03-31 Information processing method and device

Country Status (1)

Country Link
CN (1) CN111460218A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431689A (en) * 2007-11-05 2009-05-13 华为技术有限公司 Method and device for generating video abstract
CN103365848A (en) * 2012-03-27 2013-10-23 华为技术有限公司 Method, device and system for inquiring videos
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment
US20170264973A1 (en) * 2016-03-14 2017-09-14 Le Holdings (Beijing) Co., Ltd. Video playing method and electronic device
CN108028962A (en) * 2015-08-21 2018-05-11 韦林克斯股份有限公司 Video service condition information is handled to launch advertisement
CN108846098A (en) * 2018-06-15 2018-11-20 上海掌门科技有限公司 A kind of information flow summarization generation and methods of exhibiting

Similar Documents

Publication Publication Date Title
CN111949822B (en) Intelligent education video service system based on cloud computing and mobile terminal and operation method thereof
CN109670110B (en) Educational resource recommendation method, device, equipment and storage medium
US8140544B2 (en) Interactive digital video library
CN111611434B (en) Online course interaction method and interaction platform
US20070202481A1 (en) Method and apparatus for flexibly and adaptively obtaining personalized study content, and study device including the same
CN107066619B (en) User note generation method and device based on multimedia resources and terminal
CN110035330A (en) Video generation method, system, equipment and storage medium based on online education
US10803491B2 (en) Digital content generation based on user feedback
US10089898B2 (en) Information processing device, control method therefor, and computer program
US20160217109A1 (en) Navigable web page audio content
CN113254708A (en) Video searching method and device, computer equipment and storage medium
KR20180125358A (en) Apparatus and method for providing education contents
CN111739358A (en) Teaching file output method and device and electronic equipment
CN102663907B (en) Video teaching system and video teaching method
CN113391745A (en) Method, device, equipment and storage medium for processing key contents of network courses
JP2019128850A (en) Information processing device, moving-image search method, generation method, and program
CN111523028A (en) Data recommendation method, device, equipment and storage medium
KR102036639B1 (en) Mobile terminal of playing moving picture lecture and method of displaying related moving picture
CN112601129B (en) Video interaction system, method and receiving terminal
CN111460218A (en) Information processing method and device
CN113420135A (en) Note processing method and device in online teaching, electronic equipment and storage medium
JP2004126401A (en) History information utilization type education method and history information utilization type education system
JP2014059452A (en) Learning device, control method of the same, learning program, and learning system
KR20200089417A (en) Method and apparatus of providing learning content based on moving pictures enabling interaction with users
KR20140048603A (en) System and method for forming customized moving picture lecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination