CN117793478A - Method, apparatus, device, medium, and program product for generating explanation information


Info

Publication number
CN117793478A
Authority
CN
China
Prior art keywords
live broadcast
target
live
video
explanation
Prior art date
Legal status
Pending
Application number
CN202211159235.6A
Other languages
Chinese (zh)
Inventor
张泓洋
Current Assignee
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202211159235.6A priority Critical patent/CN117793478A/en
Publication of CN117793478A publication Critical patent/CN117793478A/en
Pending legal-status Critical Current

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to an explanation information generation method, apparatus, device, storage medium, and program product, including: in response to an explanation creation instruction for a target object, acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from video recorded in a live broadcast room, and, for each live broadcast segment, acquiring live broadcast data corresponding to that segment; screening out live broadcast segments meeting preset requirements based on the live broadcast data to serve as target videos and displaying the target videos; and in response to a selection operation on a target video, generating explanation information for the target object based on key information in the target video. According to the embodiments of the present disclosure, a plurality of live broadcast segments associated with the target object are first obtained, higher-quality live videos are then recommended according to the live broadcast data corresponding to those segments, and text material about the target object is extracted from the screened recommended videos, so that a user can quickly review the recommended videos and be inspired to create.

Description

Method, apparatus, device, medium, and program product for generating explanation information
Technical Field
The present disclosure relates to the field of computer processing technologies, and in particular, to a method, an apparatus, a device, a storage medium, and a program product for generating explanation information.
Background
With the continuous development of internet technology, network live broadcast has emerged and become increasingly popular. For example, through webcasting, an anchor may introduce details of an object to the audience in real time, such as an article's usage effects, precautions, advantages and disadvantages, and material composition.
In order to improve the live broadcast effect, the related information of the object generally needs to be processed and authored before live broadcast, so that the anchor can introduce the object in more detail during the live broadcast and attract more viewers. However, it is difficult for the anchor to find high-quality live material among massive live videos to inspire creation.
Disclosure of Invention
In order to solve the above technical problems, the embodiments of the present disclosure provide a method, an apparatus, a device, a storage medium, and a program product for generating explanation information, which screen out recommended live broadcast segments meeting requirements according to live broadcast data, so that a user can quickly review the recommended live broadcast segments and be inspired to create higher-quality explanation information.
In a first aspect, an embodiment of the present disclosure provides a method for generating explanation information, including:
in response to an explanation creation instruction for a target object, acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from video recorded in a live broadcast room, each live broadcast segment is a video segment including an explanation object, and the explanation object is the target object or belongs to the same category as the target object;
acquiring live broadcast data corresponding to each live broadcast segment;
screening out live broadcast fragments meeting preset requirements based on the live broadcast data to serve as target videos and displaying the target videos;
and responding to the selection operation of the target video, and generating explanation information of the target object based on the key information in the target video.
In a second aspect, an embodiment of the present disclosure provides an explanation information generating apparatus, including:
the live broadcast segment acquisition module is configured to, in response to an explanation creation instruction for a target object, acquire a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from video recorded in a live broadcast room, each live broadcast segment is a video segment including an explanation object, and the explanation object is the target object or belongs to the same category as the target object;
The live broadcast data acquisition module is used for acquiring live broadcast data corresponding to each live broadcast segment;
the live video screening module is used for screening out live video segments meeting preset requirements based on the live data to serve as target videos and displaying the target videos;
and the explanation information generation module is configured to, in response to a selection operation on the target video, generate explanation information for the target object based on key information in the target video.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the explanation information generation method as described in any one of the first aspects above.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the explanation information generation method according to any one of the first aspects.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the explanation information generation method as described in any one of the first aspects above.
The embodiments of the present disclosure provide a method, an apparatus, a device, a storage medium, and a program product for generating explanation information, wherein the method includes: in response to an explanation creation instruction for a target object, acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from video recorded in a live broadcast room, and, for each live broadcast segment, acquiring live broadcast data corresponding to that segment; screening out live broadcast segments meeting preset requirements based on the live broadcast data to serve as target videos and displaying the target videos; and in response to a selection operation on a target video, generating explanation information for the target object based on key information in the target video. According to the embodiments of the present disclosure, a plurality of live broadcast segments associated with the target object are first obtained, and higher-quality live videos are then recommended according to the live broadcast data corresponding to those segments, so that a user can quickly review the recommended live broadcast segments and be inspired to create higher-quality explanation information.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of a scenario for generating explanation information according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of generating explanation information in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an object editing page in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a video recommendation page in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a video detail page in an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an explanation information generating apparatus in an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that modifications such as "a", "an", and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or messages interacted between the various devices in the disclosed embodiments are for illustrative purposes only and are not intended to limit the scope of such messages or messages.
Before the embodiments of the present disclosure are described in further detail, the terms involved in the embodiments of the present disclosure are explained below; these explanations apply throughout the following description.
"In response to" denotes the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the operation or operations may be performed in real time or with a set delay. Unless otherwise specified, there is no limitation on the order in which multiple such operations are performed.
With the continuous development of internet technology, network live broadcast has emerged and become increasingly popular. For example, through webcasting, an anchor may introduce details of an object to the audience in real time, such as an article's usage effects, precautions, advantages and disadvantages, and material composition.
In order to improve the live broadcast effect, the related information of the object generally needs to be processed and authored before live broadcast, so that the anchor can introduce the object in more detail during the live broadcast and attract more viewers. However, it is difficult for the anchor to find high-quality live material among massive live videos to inspire creation.
In order to solve the above technical problems, an embodiment of the present disclosure provides a method for generating explanation information, including: in response to an explanation creation instruction for a target object, acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from video recorded in a live broadcast room, and, for each live broadcast segment, acquiring live broadcast data corresponding to that segment; screening out live broadcast segments meeting preset requirements based on the live broadcast data to serve as target videos and displaying the target videos; and in response to a selection operation on a target video, generating explanation information for the target object based on key information in the target video.
According to the embodiments of the present disclosure, a plurality of live broadcast segments associated with the target object are first obtained, and higher-quality live videos are then recommended according to the live broadcast data corresponding to those segments, so that a user can quickly review the recommended live broadcast segments and be inspired to create higher-quality explanation information.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. It should be noted that the same reference numerals in different drawings will be used to refer to the same elements already described.
FIG. 1 shows a system that may be used to implement the explanation information generation method provided in an embodiment of the present disclosure. As shown in FIG. 1, the system 100 may include a plurality of user terminals 110, a network 120, a server 130, and a database 140. For example, the system 100 may be used to implement the explanation information generation method described in any of the embodiments of the present disclosure.
It is understood that the user terminal 110 may be any type of electronic device capable of data processing, which may include, but is not limited to: mobile handsets, stations, units, devices, multimedia computers, multimedia tablets, internet nodes, communicators, desktop computers, laptop computers, notebook computers, netbook computers, tablet computers, Personal Communication System (PCS) devices, personal navigation devices, Personal Digital Assistants (PDAs), audio/video players, digital cameras/camcorders, positioning devices, television receivers, radio broadcast receivers, electronic book devices, gaming devices, or any combination thereof, including accessories and peripherals of these devices, or any combination thereof.
The user may operate through an application installed on the user terminal 110; the application transmits user behavior data to the server 130 through the network 120, and the user terminal 110 may also receive data transmitted by the server 130 through the network 120. The embodiments of the present disclosure do not limit the hardware system or software system of the user terminal 110: for example, the user terminal 110 may be based on an ARM or X86 processor, may be provided with input/output devices such as a camera, a touch screen, and a microphone, and may run an operating system such as Windows, iOS, Linux, Android, or HarmonyOS.
For example, the application on the user terminal 110 may be a social application providing live services, such as a short video social application based on multimedia resources such as videos, pictures, and text. Through such an application on the user terminal 110, a user can live broadcast, watch or browse the live videos of other users, and perform operations such as liking, commenting, and forwarding.
The user terminal 110 may implement the explanation information generation method provided in the embodiments of the present disclosure by running a process or a thread. In some examples, the user terminal 110 may perform the explanation information generation method using an application built into it. In other examples, the user terminal 110 may perform the explanation information generation method by invoking an application program stored externally to the user terminal 110.
The network 120 may be a single network or a combination of at least two different networks. For example, the network 120 may include, but is not limited to, one or a combination of a local area network, a wide area network, a public network, a private network, and the like. The network 120 may be a computer network such as the Internet and/or various telecommunication networks (e.g., 3G/4G/5G mobile communication networks, WiFi, Bluetooth, ZigBee, etc.), to which the embodiments of the present disclosure are not limited.
The server 130 may be a single server, or a group of servers, or a cloud server, with each server within the group of servers being connected via a wired or wireless network. A server farm may be centralized, such as a data center, or distributed. The server 130 may be local or remote. The server 130 may communicate with the user terminal 110 through a wired or wireless network. Embodiments of the present disclosure are not limited to the hardware system and software system of server 130.
The database 140 may refer broadly to a device having a storage function. The database 140 is mainly used to store various data utilized, generated, and output by the user terminal 110 and the server 130 in operation. For example, taking the application on the user terminal 110 as a short video application based on multimedia resources such as video, pictures, and audio, the data stored in the database 140 may include resource data such as video and audio, as well as interactive operation data such as likes, comments, and popularity generated when the user live broadcasts through the user terminal 110.
The database 140 may be local or remote. The database 140 may include various memories, such as random access memory (RAM) and read-only memory (ROM). The storage devices mentioned above are merely examples, and the storage devices that may be used by the system 100 are not limited in this regard. The embodiments of the present disclosure do not limit the hardware system or software system of the database 140; it may be, for example, a relational database or a non-relational database.
Database 140 may be interconnected or in communication with server 130 or a portion thereof via network 120, or directly with server 130, or a combination thereof.
In some examples, database 140 may be a stand-alone device. In other examples, database 140 may also be integrated in at least one of user terminal 110 and server 130. For example, the database 140 may be provided on the user terminal 110 or on the server 130. For another example, the database 140 may be distributed, with one portion being provided on the user terminal 110 and another portion being provided on the server 130.
Fig. 2 is a flowchart of an explanation information generating method according to an embodiment of the present disclosure, where the method may be performed by an explanation information generating apparatus, and the explanation information generating apparatus may be implemented in software and/or hardware, and the explanation information generating method may be performed by the user terminal 110 described in fig. 1.
As shown in fig. 2, the explanation information generating method provided in the embodiment of the present disclosure mainly includes steps S101 to S104.
S101, in response to an explanation creation instruction for a target object, acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from video recorded in a live broadcast room, each live broadcast segment is a video segment including an explanation object, and the explanation object is the target object or belongs to the same category as the target object.
In an alternative embodiment of the present disclosure, the target object is an object that the user wants to introduce to the viewer during live broadcast. Wherein the target object may be determined by selection of the anchor user. The target object may be one or more of a person, an article, a movie fragment, a tourist attraction, a physical store, an online store, a virtual task, and the like, which are not specifically limited in the embodiments of the present disclosure. Alternatively, the embodiment of the present disclosure is described taking an example in which the target object is an article.
In one embodiment of the present disclosure, a manner of generating the explanation creation instruction is provided. Specifically, responding to the explanation creation instruction for the target object includes: in response to an object editing instruction, displaying an object editing page, wherein the object editing page includes one or more objects to be edited, and each object to be edited corresponds to an explanation creation control; and in response to a triggering operation on an explanation creation control, taking the object to be edited corresponding to that control as the target object and responding to the explanation creation instruction for the target object.
In the embodiment of the present disclosure, the object editing instruction may be an instruction input by a user to open the object editing page. The object editing control is displayed on the home page, or one of the other pages, of the live class application running on the user terminal 110. In response to a triggering operation on the object editing control, the object editing instruction is received and responded to.
In the embodiments of the present disclosure, the object editing page may be an interactive interface provided by a live class application or software on the user terminal 110; it may serve as a display interface presenting a plurality of operable visual elements to the user (i.e., the user of the user terminal 110), and may also receive operations performed by the user in the interactive interface.
In the embodiment of the present disclosure, a plurality of objects to be edited are displayed in the object editing page. When the number of objects to be edited exceeds a set number, the objects to be edited are moved upward in response to an upward sliding operation by the user on the object editing page, and moved downward in response to a downward sliding operation.
The object to be edited may be an object that can be edited by the user, for example, an object added to the object editing library. As shown in FIG. 3, a plurality of objects to be edited may be presented in the form of cards in the object editing page 30. The card corresponding to each object to be edited shows an object image 31, an object title 32, and an explanation authoring control 33. The object title is a brief introduction to the main features of the object, for example: the object name, discount information, object usage scenario, and the like. Further, if the object is an article, the card corresponding to the object to be edited also includes the selling price of the article.
In one embodiment of the present disclosure, the user terminal 110 detects a trigger operation of the explanation authoring control 33 by a user, takes an object to be edited associated with the explanation authoring control as a target object in response to the trigger operation, and responds to an explanation authoring instruction for the target object.
In the embodiment of the disclosure, the manner of responding to the explanation creation instruction can enable the user to select the target object intuitively, and is convenient for the user to operate.
In one embodiment of the present disclosure, a live broadcast segment may be understood as a video segment that includes an explanation object and is clipped from video recorded in a live broadcast room. An embodiment of the present disclosure provides a method for clipping live broadcast segments from live-room recorded video. Specifically, live-room video data is obtained, and the explanation start time and explanation end time corresponding to each explanation object during the anchor's live broadcast are obtained from the live-room video data. The video segment within the interval between the explanation start time and the explanation end time is determined as the live broadcast segment corresponding to the explanation object, wherein the explanation start time is the time at which the anchor triggers the explanation start control, and the explanation end time is the time at which the anchor triggers the explanation end control. The explanation start control and the explanation end control are controls displayed in the anchor client interface during live broadcast.
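The timestamp-based clipping described above can be sketched as follows. This is a minimal illustration rather than the patented implementation; the `ExplanationWindow` type and the function name are hypothetical stand-ins for the times at which the anchor triggers the explanation start and end controls:

```python
from dataclasses import dataclass


@dataclass
class ExplanationWindow:
    object_id: str
    start_s: float  # time (seconds) the anchor triggered the explanation start control
    end_s: float    # time (seconds) the anchor triggered the explanation end control


def cut_live_segments(
    recording_length_s: float, windows: list[ExplanationWindow]
) -> dict[str, tuple[float, float]]:
    """Map each explanation object to the (start, end) interval to cut
    from the recorded live-room video, clamped to the recording bounds."""
    segments: dict[str, tuple[float, float]] = {}
    for w in windows:
        start = max(0.0, w.start_s)
        end = min(recording_length_s, w.end_s)
        if end > start:  # discard empty or inverted intervals
            segments[w.object_id] = (start, end)
    return segments
```

A downstream step would then pass each interval to a video tool (e.g. an ffmpeg trim) to produce the actual clip files.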
In one embodiment of the present disclosure, the explanation object of a live broadcast segment associated with the target object may be the same object as the target object, for example: the explanation object and the target object are both "XX brand short skirt". Alternatively, the explanation object of an associated live broadcast segment may belong to the same category as the target object: the explanation object and the target object belong to the same secondary category, and/or to the same primary category, where a primary category contains secondary categories. For example: for a skirt, the secondary category may be bottoms and the corresponding primary category may be womenswear; or the secondary category may be skirts and the corresponding primary category may be apparel. The primary and secondary categories may be set according to actual situations and are not specifically limited in the embodiments of the present disclosure.
An embodiment of the present disclosure provides a live broadcast segment screening set, where the live broadcast segments included in the set are segments whose live broadcast time falls within a preset duration, for example: segments broadcast within the last month. The live broadcast data of the explanation object of each segment in the live broadcast scene is acquired, the live broadcast data are sorted in descending order, and the live broadcast segments corresponding to the top 100 live broadcast data are added to the live broadcast segment screening set. The live broadcast data include the number of online viewers in the live broadcast room, the number of likes in the live broadcast room, the number of comments in the live broadcast room, the order quantity or sales of the explanation object in the live broadcast room, and the like.
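The screening-set construction above (keep only recent segments, sort the live data in descending order, take the top 100) might look like the following sketch. The dictionary keys and the use of a single `live_data` engagement score are illustrative assumptions; in practice any of the listed metrics (online viewers, likes, comments, orders) could serve as the score:

```python
from datetime import datetime, timedelta


def build_screening_set(segments, now, window_days=30, top_n=100):
    """Build the live broadcast segment screening set.

    segments: list of dicts with a 'live_time' (datetime) and a 'live_data'
    value (a single engagement score, e.g. online viewer count).
    Keeps only segments broadcast within the recency window, then returns
    the top_n segments sorted by live_data in descending order.
    """
    recent = [
        s for s in segments
        if now - s["live_time"] <= timedelta(days=window_days)
    ]
    recent.sort(key=lambda s: s["live_data"], reverse=True)
    return recent[:top_n]
```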
An embodiment of the present disclosure provides a method for acquiring live broadcast segments whose explanation object is the target object. Specifically, all videos whose explanation object is the target object are obtained from the live broadcast segment screening set and deduplicated, yielding a first group of live broadcast segments, which comprises a plurality of live broadcast segments whose explanation object is the target object.
In one embodiment of the present disclosure, acquiring a plurality of live broadcast segments associated with the target object includes: taking the secondary category to which the target object belongs as a first explanation object; acquiring a plurality of live broadcast segments including the first explanation object; when the number of live broadcast segments including the first explanation object is smaller than a first number threshold, taking the primary category to which the secondary category belongs as a second explanation object; and acquiring a plurality of live broadcast segments including the second explanation object.
An embodiment of the present disclosure provides a manner of acquiring live broadcast segments whose explanation object belongs to the same category as the target object. Specifically, the secondary category corresponding to the target object is determined, and the live broadcast segments whose explanation object is any object in that secondary category are obtained from the live broadcast segment screening set. After these segments are deduplicated, their number is determined; if the number is greater than a preset number, the deduplicated segments are taken as a second group of live broadcast segments. If the number is smaller than the preset number, the live broadcast segments whose explanation object is any object in the primary category are obtained from the screening set and deduplicated, and finally the deduplicated segments corresponding to the secondary category and the deduplicated segments corresponding to the primary category are together taken as the second group of live broadcast segments. The second group of live broadcast segments comprises a plurality of live broadcast segments whose explanation object belongs to the same category as the target object.
For example: if the secondary category corresponding to the target object is "skirt", all live segments whose explanation object is a skirt are deduplicated to obtain the live segments corresponding to the skirt category. If the number of live segments corresponding to the skirt category is greater than the preset number, those segments form the second group of live segments; if the number is smaller than the preset number, all live segments whose explanation object is clothing (the primary category) are additionally acquired and deduplicated, and the skirt segments and the clothing segments together form the second group of live segments.
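The category-fallback acquisition described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: the `Segment` type, its field names, and the `FIRST_NUMBER_THRESHOLD` value are all introduced for the example.

```python
from dataclasses import dataclass

FIRST_NUMBER_THRESHOLD = 3  # stand-in for the "first number threshold" / preset number


@dataclass(frozen=True)
class Segment:
    segment_id: str
    explanation_object: str   # e.g. "skirt"
    primary_category: str     # e.g. "clothes"
    secondary_category: str   # e.g. "skirt"


def deduplicate(segments):
    """Keep one segment per segment_id (the deduplication step in the embodiment)."""
    seen, unique = set(), []
    for seg in segments:
        if seg.segment_id not in seen:
            seen.add(seg.segment_id)
            unique.append(seg)
    return unique


def acquire_segments(pool, secondary, primary):
    """Return the second group of live segments for a target object.

    Fall back from the secondary category to the primary category when
    too few deduplicated segments remain.
    """
    by_secondary = deduplicate(
        [s for s in pool if s.secondary_category == secondary])
    if len(by_secondary) >= FIRST_NUMBER_THRESHOLD:
        return by_secondary
    by_primary = deduplicate(
        [s for s in pool if s.primary_category == primary])
    # Combine the secondary-category segments with the primary-category ones.
    return deduplicate(by_secondary + by_primary)
```

With the "skirt"/"clothes" example above, a pool of two duplicate skirt segments and one coat segment falls back to the primary category and yields both unique segments.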
S102, acquiring, for each live broadcast segment, the live broadcast data corresponding to that segment.
The live broadcast data include the number of online viewers in the live broadcast room, the number of likes in the live broadcast room, the number of comments in the live broadcast room, the order quantity or sales corresponding to the explanation object in the live broadcast room, and the like.
And S103, screening out live broadcast fragments meeting preset requirements based on the live broadcast data to serve as target videos and displaying the target videos.
The target video is a live broadcast segment recommended to the user and displayed in a video recommendation page for the user to watch.
In one embodiment of the present disclosure, a live segment whose live data is greater than a data threshold is determined to be a target video. For example: the live data is the number of online viewers in the live broadcast room, the data threshold is 10,000 viewers, and live segments whose online viewer count exceeds 10,000 are taken as target videos.
In one embodiment of the present disclosure, the screening, based on the live data, of live segments meeting a preset requirement as target videos includes: sorting the live data in descending order, and taking the live segments corresponding to the top preset number of live data as target videos.
The preset number can be set according to actual conditions; optionally, the preset number is 20. The live data corresponding to all live segments are sorted in descending order, and the live segments corresponding to the top 20 live data are determined as target videos.
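The two screening strategies just described — a fixed data threshold, and a descending sort that keeps the top preset number — might look like the sketch below. The dictionary shape and the `live_data` key are assumptions introduced for illustration.

```python
def screen_by_threshold(segments, data_threshold):
    """Keep segments whose live data exceeds the threshold (e.g. 10,000 viewers)."""
    return [s for s in segments if s["live_data"] > data_threshold]


def screen_top_n(segments, preset_number=20):
    """Sort by live data in descending order and keep the top preset_number."""
    ranked = sorted(segments, key=lambda s: s["live_data"], reverse=True)
    return ranked[:preset_number]
```

Either function (or both in sequence) could serve as the "screen out live segments meeting a preset requirement" step of S103.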
In this way, the live segments with the highest live data can be recommended to the user, helping the user understand current hot spots and the preferences of the current audience, and stimulating the user's creative inspiration.
In one embodiment of the present disclosure, the target videos include a first group of target videos, a second group of target videos, and a third group of target videos. The first group of target videos consists of live segments screened from the first group of live segments whose live data meets the preset requirement, the second group of target videos consists of live segments screened from the second group of live segments whose live data meets the preset requirement, and the third group of target videos consists of live segments screened from the live segment screening set whose live data meets the preset requirement.
In one embodiment of the present disclosure, the live data includes object data of the explanation object in the live session corresponding to the live segment; the screening, based on the live data, of live segments meeting a preset requirement as target videos includes: if a plurality of live segments correspond to the same anchor user, acquiring, as target videos, the plurality of live segments in the live session with the highest object data.
In one embodiment of the present disclosure, the live data includes the live time corresponding to the live segment; the acquiring, as target videos, of the plurality of live segments in the live session with the highest object data includes: acquiring the plurality of live segments in the live session with the highest object data; and taking, as target videos, a second preset number of those live segments whose live time is closest to the current time.
In one embodiment of the present disclosure, the screening, from the first group of live segments, of live segments whose live data meets the preset requirement includes: determining the anchor user corresponding to each live segment in the first group of live segments; if the first group includes a plurality of live segments belonging to the same anchor user, respectively acquiring the object data of the live sessions to which that anchor user's live segments belong, and taking the plurality of live segments in the live session with the highest object data as the first group of target videos.
For example: live segment 1, live segment 2, live segment 3, and live segment 4 all belong to the same anchor user A; in this case, the object data of the live session to which each live segment belongs is acquired. For example: live segment 1 and live segment 2 belong to a first live session, while live segment 3 and live segment 4 belong to a second live session. The object data of the first live session and of the second live session are respectively acquired and compared, and the live segments in the live session with the higher object data are determined as target videos. For example: if the object data of the first live session is greater than that of the second live session, live segment 1 and live segment 2, included in the first live session, are determined as the first group of target videos.
Further, if the live session with the highest object data contains a plurality of corresponding live segments, the 3 live segments whose live time is closest to the current time are taken as the first group of target videos.
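The per-anchor session selection walked through in the example above can be sketched as follows. The dictionary keys (`session`, `object_data`, `live_time`) and the numeric time representation are assumptions for illustration; real live data would carry richer structure.

```python
from collections import defaultdict


def pick_session_segments(segments, now, preset_number=3):
    """Group one anchor's segments by live session, pick the session with the
    highest object data, and keep the segments closest to the current time.

    segments: dicts with "session", "object_data", and "live_time" keys.
    """
    sessions = defaultdict(list)
    for seg in segments:
        sessions[seg["session"]].append(seg)
    # Object data is per session, so any segment of a session carries it.
    best_session = max(sessions, key=lambda s: sessions[s][0]["object_data"])
    chosen = list(sessions[best_session])
    # Keep the preset number of segments whose live time is closest to now.
    chosen.sort(key=lambda seg: abs(now - seg["live_time"]))
    return chosen[:preset_number]
```

Applied to the example, the two segments of the first live session (higher object data) would be returned, ordered by closeness to the current time.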
In one embodiment of the present disclosure, the screening, from the second group of live segments, of live segments whose live data meets the preset requirement includes: determining the anchor user corresponding to each live segment in the second group of live segments; if the second group includes a plurality of live segments belonging to the same anchor user, respectively acquiring the object data of the live sessions to which that anchor user's live segments belong, and taking the plurality of live segments in the live session with the highest object data as the second group of target videos.
Further, if the live session with the highest object data contains a plurality of corresponding live segments, the 3 live segments whose live time is closest to the current time are taken as the second group of target videos.
In the embodiment of the present disclosure, if a plurality of live segments in the second group of live segments include the same explanation object, only one of those live segments is retained in the second group of target videos.
In one embodiment of the present disclosure, the screening, from the live segment screening set, of live segments whose live data meets the preset requirement includes: determining the anchor user corresponding to each live segment in the live segment screening set; if the screening set includes a plurality of live segments belonging to the same anchor user, respectively acquiring the object data of the live sessions to which that anchor user's live segments belong, and taking the plurality of live segments in the live session with the highest object data as the third group of target videos.
Further, if the live session with the highest object data contains a plurality of corresponding live segments, the 3 live segments whose live time is closest to the current time are taken as the third group of target videos.
In the embodiment of the present disclosure, if a plurality of live segments in the live segment screening set include the same explanation object, only one of those live segments is retained in the third group of target videos. Further, the live segment with the highest sales may be the one retained for the third group of target videos.
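The deduplication by explanation object, keeping the segment with the highest sales, might be sketched as below. The `explanation_object` and `sales` keys are illustrative assumptions.

```python
def dedupe_by_object(segments):
    """Keep, for each explanation object, only the segment with the highest sales."""
    best = {}
    for seg in segments:
        obj = seg["explanation_object"]
        if obj not in best or seg["sales"] > best[obj]["sales"]:
            best[obj] = seg
    return list(best.values())
```

The same helper would serve the analogous retention rule stated for the second group of target videos.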
In an embodiment of the present disclosure, the displaying the target video includes: displaying a plurality of target videos in the video display page in the form of information streams; the plurality of target videos are displayed in a moving manner in response to a longitudinal sliding operation of the video display page.
In the embodiments of the present disclosure, the video presentation page may be an interactive interface provided by a live-broadcast application or software on the user terminal 110, which may be used as a display interface for presenting a plurality of target videos to a user (i.e., the user of the user terminal 110), and may also be used to receive operations performed by the user in the interactive interface.
In one embodiment of the present disclosure, taking the user terminal 110 as a smart phone or a tablet computer as an example, the video display page may occupy all display screens of the smart phone or the tablet computer, in other words, the video display page is the entire display interface of the smart phone or the tablet computer. For another example, taking the user terminal 110 as a notebook computer, the video presentation page may occupy all of the display screen of the notebook computer, or may occupy only a portion of the display screen of the notebook computer.
As shown in fig. 4, the video presentation page includes an object presentation area 41, an explanation authoring control 42, and a video presentation area 43.
The object presentation area 41 displays key information of the target object, such as: an object picture, object attribute information, and a video shooting control, wherein the object attribute information includes the object price, the object order quantity, and the like. The video shooting control is used for jumping, in response to a trigger operation by the user, to a video shooting page, in which video can be recorded.
The explanation authoring control 42 is used for jumping, in response to a trigger operation by the user, to an explanation information editing page, in which the user can input text content. Further, the explanation authoring control 42 includes guide information for explanation information authoring, for example: topic recommendation identifiers, title recommendation identifiers, object selling-point templates, and the like. Further, if the user has previously edited and saved explanation information in the explanation information editing page, the explanation authoring control 42 further includes a recently-edited prompt, the prompt being part of the saved explanation information, for example: the first 20 characters of the saved explanation information.
In the disclosed embodiment, the video presentation area 43 is used to present a plurality of target videos in the form of an information stream (feed). As shown in fig. 4, the video presentation area 43 includes a plurality of sub-areas, each of which presents one recommended video. In response to the user's longitudinal sliding operation on the video recommendation page, the target videos are moved vertically for display. Further, in response to an upward sliding operation by the user, the explanation authoring control 42 in the video recommendation page disappears, and an explanation authoring control is inserted into the video presentation area 43 in the form of a guide card; the card style is chosen at random, and clicking the card enters the text material editing page.
Further, a video cover of the target video is displayed in each of the above sub-areas, and the video duration, video title, associated topic, and video summary are displayed on the video cover. Each sub-area also includes information such as the order quantity corresponding to the explanation object in the video.
In the embodiment of the present disclosure, the video display area 43 further includes a first video control, a second video control and a third video control; wherein the first video control is configured to display a first set of target videos in the video display area 43 in response to a triggering operation by a user. The second video control is configured to display a second set of target videos in the video display area 43 in response to a triggering operation by a user. The third video control is configured to display a third set of target videos in the video display area 43 in response to a triggering operation by a user.
S104, responding to the selection operation of the target video, and generating explanation information of the target object based on key information in the target video.
The selection operation on the target video may be a trigger operation on a video the user wants to watch. The key information of the target video includes the video title, the associated topic, and the text information obtained by converting the audio information of the recommended video. In the embodiment of the disclosure, the key information of the target video can be used directly as the explanation information corresponding to the target object; alternatively, the key information may first be edited based on an editing operation by the user and then used as the explanation information corresponding to the target object.
In one embodiment of the present disclosure, generating, in response to a selection operation of the target video, interpretation information of the target object based on key information in the target video includes: in response to a selection operation of the target video, acquiring detail information of the target video, wherein the detail information comprises at least one of the following: video information, audio information, live broadcast data, and anchor information; displaying a video detail page, wherein the video detail page comprises a video playing area and an audio text conversion control, and the video playing area is used for playing the video information and displaying live broadcast data and/or anchor information on a playing picture; responding to the triggering operation of the audio text conversion control, and converting the audio information into text information; and generating the explanation information of the target object from the text information.
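The overall flow of this embodiment — fetch the detail information, convert the audio to text, and use the text as explanation information — can be outlined as below. `transcribe` is a stand-in for whichever speech-to-text service an implementation actually uses, not an API named by the disclosure, and the dictionary keys are assumptions.

```python
def generate_explanation(detail_info, transcribe):
    """Build explanation information from a selected target video.

    detail_info: dict with an "audio" entry (and optionally "title").
    transcribe:  callable mapping audio data to text (speech-to-text stand-in).
    """
    text = transcribe(detail_info["audio"])
    # The converted text serves as the explanation information; in practice
    # the user may copy or edit part of it first, per the embodiment below.
    return {"explanation": text, "source_title": detail_info.get("title", "")}
```

A trivial usage: injecting a dummy transcriber shows the plumbing without any real speech-to-text dependency.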
In one embodiment of the present disclosure, the displayed video detail page further includes a text display area; generating the explanation information of the target object from the text information includes: displaying the text information in the text display area; and, in response to an operation on the text display area, copying part or all of the text information as the key information in the target video and generating the explanation information of the target object.
In the embodiment of the present disclosure, as shown in fig. 5, a video detail page is displayed in response to a user selection operation of a target video. The video detail page 50 comprises a video playing area 51, a text display area 52, an object information area 53, a text editing area 54, and a video display control 55.
The video playing area 51 is used for playing video pictures. In response to a click operation by the user on the video playing area 51, a playback progress bar is displayed on the video picture; the video can be fast-forwarded or rewound in response to a drag operation on the progress bar, and playback can be paused in response to an operation on a pause control. If the user performs no operation on the video playing area 51 for a preset period of time, the detail information of the video is displayed at the bottom of the video picture; in response to a further click operation by the user on the video playing area 51, the detail information disappears and the progress bar is displayed.
For example: video information, video data, anchor information, and the like are displayed at the bottom of the video picture. The video information may include the video title, video topic, video release time, and the like; in response to a click operation by the user on the video topic, the page jumps to the topic page, in which various detail information under that topic is displayed. The anchor information includes the anchor avatar and anchor nickname; in response to a trigger operation on the anchor information, the page jumps to the anchor's personal homepage, which displays various information of the anchor, for example: the anchor's avatar, nickname, published videos, liked videos, follow list, fan list, and the like. The video data includes the video play count, like count, comment count, and the like. The object information includes the order quantity of the explanation object in the video, and the like.
In one embodiment of the present disclosure, audio-to-text conversion instructions may be understood as instructions to convert audio information in video to text information. Wherein responding to the audio text conversion instruction may include: an audio text conversion control is set in the video detail page 50, and after a triggering operation of the audio text conversion control is detected, the audio text conversion instruction is responded. Responding to the audio text conversion instruction may further include: and after detecting the detail information of the acquired target video, automatically generating and responding to the audio text conversion instruction.
In one embodiment of the present disclosure, a part or all of the converted text information is used as the explanation information of the target object. Specifically, generating the explanation information of the target object from the text information includes: displaying the text information in the text display area; and in response to the operation of the text display area, copying part or all of the text information serving as key information in the target video, and generating explanation information of the target object.
In an embodiment of the present disclosure, the converted text information is displayed in the text display area upon entering the video detail page. If the video detail page is already displayed but the audio-to-text conversion has not finished, an in-conversion prompt is displayed in the text display area, indicating to the user that the conversion is still in progress and asking the user to wait. After the conversion is completed, the text information is displayed directly in the text display area.
Further, the text in the text display area is displayed in highlighted form in synchronization with the audio in the video. In the embodiment of the disclosure, the text whose audio has already been played may be highlighted, the sentence to which the currently playing audio belongs may be highlighted, or the word to which the currently playing audio belongs may be highlighted. Thus, the user can clearly see the text corresponding to the audio currently being played.
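The synchronized highlighting described here presupposes per-word timestamps, which the disclosure does not specify; assuming such timestamps are available from the audio-to-text step, locating the word to highlight at a given playback position is a simple binary search. The function and field names are illustrative.

```python
import bisect


def highlighted_word(word_starts, position):
    """Return the index of the word to highlight at the playback position.

    word_starts: sorted list of word start times in seconds.
    position:    current playback position in seconds.
    """
    i = bisect.bisect_right(word_starts, position) - 1
    return max(i, 0)  # clamp before the first word to index 0
```

Highlighting an entire sentence would work the same way with sentence start times instead of word start times.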
In the embodiment of the present disclosure, the text display area 52 expands for display in response to a click operation of the text display area 52, and text in the text display area 52 is displayed in a vertically moving form in response to a slide operation of the text display area 52. Meanwhile, the characters are displayed in a highlighted form along with the audio in the video.
In one embodiment of the present disclosure, if there is no speech in the target video and the audio information is purely background music, a no-speech prompt is displayed in the text display area 52, the no-speech prompt being used to prompt the user that there is no spoken content in the target video.
In one embodiment of the present disclosure, the text in text presentation area 52 is rich text, supporting long press selection, full selection, copy, paste, and the like.
In one embodiment of the present disclosure, a portion of the text information is selected or all of the text information is selected in response to a selection operation of the text information in the text presentation area 52. Then, a copy control is displayed in the text presentation area 52, and in response to a trigger operation on the copy control, the selected text information is determined as key information in the target video, and the explanation information of the target object is generated.
In one embodiment of the present disclosure, the object information area 53 is used to present relevant information of the object to the user, for example: the object picture, object title, object sales price, and so on.
In one embodiment of the present disclosure, when the target video is the second set of target videos or the third set of target videos, the object information area 53 further includes an object selection control, and the object decision page is displayed in response to a triggering operation of the object selection control. The object decision page is used for displaying a plurality of objects. In response to a vertical sliding of the object decision page, the plurality of objects move up and down for presentation. In response to a selection operation of an object included in the object decision page, an object corresponding to the selection operation is determined as a target object, and object information of the target object is displayed in the object information area 53.
In one embodiment of the present disclosure, the text editing area 54 is used to obtain and display text information corresponding to an input operation of a user in response to the input operation. The input operation may be a text message that is input by calling a virtual keyboard, or may be a text message copied from the text presentation area in the above embodiment, and is input to the text editing area 54 by a paste operation.
Further, the text information in the text editing area 54 may be edited in response to an editing operation on the text editing area 54. The editing operation includes text operations such as adding, deleting, copying, pasting and the like.
In the embodiment of the present disclosure, in response to a trigger operation on a text saving control included in the text editing area 54, the text information in the text editing area 54 is saved as the text material corresponding to the target object.
In one embodiment of the present disclosure, the video presentation control 55 is configured to present a plurality of recommended videos in response to a trigger operation by a user. The recommended video and the video played in the video detail page belong to the same category.
In one embodiment of the present disclosure, after the explanation information is obtained according to the explanation information obtaining manner provided in the foregoing embodiment, the obtained explanation information is stored in the client, so as to facilitate subsequent editing by the user. The acquired explanation information can be stored locally in the form of pictures, so that a user can check the explanation information conveniently in the live broadcast process.
Fig. 6 is a schematic structural diagram of an explanation information generating apparatus according to an embodiment of the present disclosure, where the present embodiment is applicable to a case of generating explanation information of a target object, the explanation information generating apparatus may be implemented in a software and/or hardware manner, and the explanation information generating apparatus may be configured in the user terminal 110 described in fig. 1.
As shown in fig. 6, the explanation information generating apparatus provided in the embodiment of the present disclosure mainly includes: a live segment acquisition module 61, a live data acquisition module 62, a live video screening module 63, and an explanation information generation module 64.
The live broadcast segment obtaining module 61 is configured to obtain, in response to an instruction for authoring an explanation for a target object, a plurality of live broadcast segments associated with the target object, where the live broadcast segments are obtained by capturing video recorded in a live broadcast room, and the live broadcast segments are video segments including an explanation object, where the explanation object is the target object, or where the explanation object and the target object belong to the same category; a live broadcast data acquisition module 62, configured to acquire, for each live broadcast segment, live broadcast data corresponding to the live broadcast segment; the live video screening module 63 is configured to screen out, based on the live data, a live video segment meeting a preset requirement as a target video and display the target video; and the explanation information generating module 64 is configured to generate explanation information of the target object based on the key information in the target video in response to the operation on the target video.
In one embodiment of the present disclosure, the apparatus comprises an instruction response module for responding to the explanation authoring instruction for the target object, the module comprising: an object editing page display unit, configured to display an object editing page in response to an object editing instruction, wherein the object editing page includes one or more objects to be edited, each object to be edited corresponding to an explanation authoring control; and an explanation authoring instruction response unit, configured to take, in response to a trigger operation on an explanation authoring control, the object to be edited corresponding to that control as the target object and to respond to the explanation authoring instruction for the target object.
In one embodiment of the present disclosure, the live segment acquisition module 61 includes: a first explanation object determining unit, configured to take, as a first explanation object, the secondary category to which the target object belongs; a live segment acquisition unit, configured to acquire a plurality of live segments including the first explanation object; and a second explanation object determining unit, configured to take, as a second explanation object, the primary category to which the secondary category belongs when the number of live segments including the first explanation object is smaller than a first number threshold; the live segment acquisition unit is further configured to acquire a plurality of live segments including the second explanation object.
In one embodiment of the present disclosure, the live data includes object data of the explanation object in the live session corresponding to the live segment; the live video screening module 63 is specifically configured to acquire, if a plurality of live segments correspond to the same anchor user, the plurality of live segments in the live session with the highest object data as target videos.
In one embodiment of the present disclosure, the live data includes the live time corresponding to the live segment; the live video screening module 63 is further configured to acquire the plurality of live segments in the live session with the highest object data, and to take, as target videos, a second preset number of those live segments whose live time is closest to the current time.
In one embodiment of the present disclosure, the apparatus includes a target video display module for displaying the target video, including: the target video display unit is used for displaying a plurality of target videos in the form of information streams in the video display page; and the target video moving unit is used for responding to the longitudinal sliding operation of the video display page, and the target videos are displayed in a moving mode.
In one embodiment of the present disclosure, the explanation information generation module 64 includes: a detail information acquisition unit configured to acquire, in response to a selection operation of the target video, detail information of the target video, wherein the detail information includes at least one of: video information, audio information, live broadcast data, and anchor information; the video detail page display unit is used for displaying a video detail page, wherein the video detail page comprises a video playing area and an audio text conversion control, and the video playing area is used for playing the video information and displaying live broadcast data and/or anchor information on a playing picture; the audio conversion unit is used for responding to the triggering operation of the audio text conversion control and converting the audio information into text information; and the explanation information generating unit is used for generating the explanation information of the target object from the text information.
In one embodiment of the present disclosure, the displayed video detail page further includes a text display area; the explanation information generation module 64 includes: a text information display unit, configured to display the text information in the text display area; and a text information copying unit, configured to copy, in response to an operation on the text display area, part or all of the text information as the key information in the target video and generate the explanation information of the target object.
In one embodiment of the present disclosure, when the explanation object included in the recommended video and the target object belong to the same category, the displayed video detail page includes an object selection control; the apparatus further comprises an object decision page display unit, configured to display an object decision page in response to a trigger operation on the object selection control, wherein the object decision page includes one or more objects to be selected; and, in response to a trigger operation on an object to be selected, the object corresponding to the trigger operation is taken as the target object.
The explanation information generating apparatus provided by the embodiment of the present disclosure may execute the steps of the explanation information generation method provided by the embodiments of the present disclosure; the execution steps and beneficial effects are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 7, a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 700 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable terminal devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703, to implement the explanation information generation method of the embodiments described in the present disclosure. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate with other devices wirelessly or by wire to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer means may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts, thereby implementing the explanation information generation method described above. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: respond to an explanation creation instruction for a target object and acquire a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from recorded videos of a live broadcast room, the live broadcast segments are video segments including an explanation object, and the explanation object is the target object or belongs to the same category as the target object; acquire live broadcast data corresponding to each live broadcast segment; screen out, based on the live broadcast data, live broadcast segments meeting preset requirements as target videos and display the target videos; and in response to a selection operation on the target video, generate explanation information of the target object based on key information in the target video.
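The screening step enumerated above can be sketched as follows. This is a hedged illustration under assumed data fields: the names `segment_id`, `anchor_id`, `heat_score`, and `live_time`, and the concrete thresholds, are hypothetical and not part of the disclosure, which leaves the live data metrics and "preset requirements" abstract.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names and thresholds below are
# assumptions for demonstration, not the disclosed implementation.
@dataclass
class LiveSegment:
    segment_id: str
    anchor_id: str
    heat_score: float  # live data for the explanation object, e.g. views or sales
    live_time: float   # timestamp of the originating live session

def screen_target_videos(segments, min_heat, max_results):
    """Keep only segments whose live data meet the preset requirement,
    then surface the strongest and most recent ones as target videos."""
    qualified = [s for s in segments if s.heat_score >= min_heat]
    # Rank by live data first, breaking ties in favor of more recent segments.
    qualified.sort(key=lambda s: (s.heat_score, s.live_time), reverse=True)
    return qualified[:max_results]
```

The returned list would then be displayed to the user (for example, as an information stream) for selection.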
Alternatively, when the above one or more programs are executed by the electronic device, the electronic device may perform other steps described in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an explanation information generating method, including: responding to an explanation creation instruction for a target object, and acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from recorded videos of a live broadcast room, the live broadcast segments are video segments including an explanation object, and the explanation object is the target object or belongs to the same category as the target object; acquiring live broadcast data corresponding to each live broadcast segment; screening out, based on the live broadcast data, live broadcast segments meeting preset requirements as target videos and displaying the target videos; and in response to a selection operation on the target video, generating explanation information of the target object based on key information in the target video.
According to one or more embodiments of the present disclosure, the present disclosure provides an explanation information generating apparatus, including: a live broadcast segment acquisition module, configured to acquire, in response to an explanation creation instruction for a target object, a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from recorded videos of a live broadcast room, the live broadcast segments are video segments including an explanation object, and the explanation object is the target object or belongs to the same category as the target object; a live broadcast data acquisition module, configured to acquire live broadcast data corresponding to each live broadcast segment; a live video screening module, configured to screen out, based on the live broadcast data, live broadcast segments meeting preset requirements as target videos and display the target videos; and an explanation information generation module, configured to generate, in response to an operation on the target video, explanation information of the target object based on key information in the target video.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the explanation information generating methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the explanation information generating methods provided by the present disclosure.
Embodiments of the present disclosure also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the explanation information generating method described above.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by interchanging the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. An explanation information generation method, comprising:
responding to an explanation creation instruction for a target object, and acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from recorded videos of a live broadcast room, the live broadcast segments are video segments comprising an explanation object, and the explanation object is the target object or belongs to the same category as the target object;
acquiring live broadcast data corresponding to each live broadcast segment;
screening out live broadcast fragments meeting preset requirements based on the live broadcast data to serve as target videos and displaying the target videos;
and responding to the selection operation of the target video, and generating explanation information of the target object based on the key information in the target video.
2. The method of claim 1, wherein the responding to an explanation creation instruction for a target object comprises:
responding to an object editing instruction, and displaying an object editing page, wherein the object editing page comprises one or more objects to be edited, and each object to be edited corresponds to one explanation creation control;
and responding to a triggering operation on the explanation creation control, taking the object to be edited corresponding to the explanation creation control as the target object, and responding to the explanation creation instruction for the target object.
3. The method of claim 1, wherein the obtaining a plurality of live fragments associated with the target object comprises:
taking the secondary category to which the target object belongs as a first explanation object;
acquiring a plurality of live broadcast segments comprising the first explanation object;
when the number of the live broadcast segments comprising the first explanation object is smaller than a first number threshold, taking the primary category to which the secondary category belongs as a second explanation object;
and acquiring a plurality of live fragments comprising the second explanation object.
4. The method of claim 1, wherein the live data includes object data of the explanation object in a live session corresponding to a live segment; the step of screening out the live broadcast segments meeting the preset requirements as target videos based on the live broadcast data comprises the following steps:
and if the plurality of live broadcast segments correspond to the same anchor user, acquiring the plurality of live broadcast segments corresponding to the live session with the highest object data as target videos.
5. The method of claim 4, wherein the live data comprises a live time corresponding to a live segment; and the acquiring the corresponding live segments in the live session with the highest object data as target videos comprises:
acquiring a plurality of live broadcast segments corresponding to the live session with the highest object data;
and taking a second preset number of live broadcast segments whose live time is closest to the current time as target videos.
6. The method of claim 1, wherein displaying the target video comprises:
displaying a plurality of target videos in a video display page in the form of an information stream;
and displaying the plurality of target videos in a moving manner in response to a longitudinal sliding operation on the video display page.
7. The method of claim 1, wherein generating the lecture information of the target object based on the key information in the target video in response to the selection operation of the target video comprises:
in response to a selection operation of the target video, acquiring detail information of the target video, wherein the detail information comprises at least one of the following: video information, audio information, live broadcast data, and anchor information;
displaying a video detail page, wherein the video detail page comprises a video playing area and an audio text conversion control, and the video playing area is used for playing the video information and displaying live broadcast data and/or anchor information on a playing picture;
responding to the triggering operation of the audio text conversion control, and converting the audio information into text information;
and generating the explanation information of the target object from the text information.
8. The method of claim 7, wherein the video detail page further comprises a text display area; and the generating the explanation information of the target object from the text information comprises:
displaying the text information in the text display area;
and in response to the operation of the text display area, copying part or all of the text information serving as key information in the target video, and generating explanation information of the target object.
9. The method of claim 7, wherein when the explanation object included in the recommended video and the target object belong to the same category, the displayed video detail page includes an object selection control;
responding to a triggering operation on the object selection control, and displaying an object decision page, wherein the object decision page comprises one or more objects to be selected;
and responding to the triggering operation of the object to be selected, and taking the object to be selected corresponding to the triggering operation as a target object.
10. An explanation information generating apparatus, comprising:
the live broadcast segment acquisition module is used for responding to an explanation creation instruction for a target object and acquiring a plurality of live broadcast segments associated with the target object, wherein the live broadcast segments are clipped from recorded videos of a live broadcast room, the live broadcast segments are video segments comprising an explanation object, and the explanation object is the target object or belongs to the same category as the target object;
the live broadcast data acquisition module is used for acquiring live broadcast data corresponding to each live broadcast segment;
the live video screening module is used for screening out live video segments meeting preset requirements based on the live data to serve as target videos and displaying the target videos;
and the explanation information generation module is used for responding to the operation on the target video and generating the explanation information of the target object based on the key information in the target video.
11. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-9.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1-9.
13. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of any of claims 1-9.
CN202211159235.6A 2022-09-22 2022-09-22 Method, apparatus, device, medium, and program product for generating explanation information Pending CN117793478A (en)

Publications (1)

Publication Number Publication Date
CN117793478A true CN117793478A (en) 2024-03-29

Family

ID=90378589

Country Status (1)

Country Link
CN (1) CN117793478A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination