CN113778717A - Content sharing method, device, equipment and storage medium - Google Patents

Content sharing method, device, equipment and storage medium

Info

Publication number
CN113778717A
CN113778717A
Authority
CN
China
Prior art keywords
content
audio
sharing
video
clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111074803.8A
Other languages
Chinese (zh)
Inventor
Fan Shuang (范爽)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111074803.8A
Publication of CN113778717A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/543 - User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a content sharing method, device, equipment and storage medium, relating to the field of computer technology, and in particular to speech technology, knowledge graphs and the like. The specific implementation scheme is as follows: acquiring an audio clip and/or a video clip selected by a user; acquiring highlight content in the audio clip and/or the video clip; generating a corresponding sharing copy based on the audio clip and/or the video clip; and sharing content based on the highlight content and the sharing copy. The content sharing method, device, equipment and storage medium can enrich sharing modes.

Description

Content sharing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and more particularly to the fields of speech technology, knowledge graphs, and the like.
Background
Content sharing is one of the important means of user growth. With the development of the mobile internet, users can access rich information content through various applications (APPs), and most APPs provide a content sharing function.
Disclosure of Invention
The disclosure provides a content sharing method, device, equipment and storage medium.
According to a first aspect of the present disclosure, there is provided a content sharing method, including:
acquiring an audio clip and/or a video clip selected by a user;
acquiring highlight content in the audio clip and/or the video clip;
generating a corresponding sharing copy based on the audio clip and/or the video clip;
and sharing content based on the highlight content and the sharing copy.
According to a second aspect of the present disclosure, there is provided a content sharing apparatus including:
the first acquisition module is used for acquiring an audio clip and/or a video clip selected by a user, and acquiring highlight content in the audio clip and/or the video clip;
the generating module is used for generating a corresponding sharing copy based on the audio clip and/or the video clip;
and the sharing module is used for sharing content based on the highlight content and the sharing copy.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
According to the technology of the present disclosure, sharing modes can be enriched.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a content sharing method according to an embodiment of the disclosure;
fig. 2 is another flowchart of a content sharing method provided according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of pre-building a material library in an embodiment of the present disclosure;
fig. 4 is a flowchart of a content sharing method provided according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of creating a library of materials based on audio content in an embodiment of the disclosure;
FIG. 6 is a schematic diagram illustrating audio content based sharing in an embodiment of the disclosure;
FIG. 7A is a schematic illustration of an interface guide in an embodiment of the present disclosure;
FIG. 7B is another schematic illustration of an interface guide in an embodiment of the disclosure;
FIG. 7C is yet another schematic illustration of an interface guide in an embodiment of the disclosure;
FIG. 7D is yet another illustration of an interface guide in an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of a content sharing apparatus according to an embodiment of the disclosure;
fig. 9 is a schematic structural diagram of a content sharing device according to an embodiment of the disclosure;
fig. 10 is a schematic structural diagram of a content sharing device according to an embodiment of the present disclosure;
fig. 11 is a block diagram of an electronic device for implementing a content sharing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The existing sharing modes mainly include the following:
1. Sharing to a third-party platform: the lowest-cost sharing mode. The user clicks a share button and, by selecting different third-party platforms, shares a page containing the current content to other social platforms; other users open the corresponding page for browsing by clicking the shared card or applet.
2. Text/link: the user clicks the share button and selects a copy-link mode to copy the page link containing the current content to the clipboard and paste it to other platforms. Other users open the corresponding page for browsing by clicking the link or copying it into a browser.
3. Downloading: for short-video/short-audio content, where the platform allows downloading, the watermarked video or audio is saved locally by downloading and then forwarded to other third-party platforms.
The existing sharing modes are mature, but they lack personalized elements; the product form also lacks novelty, personalized content cannot be added during sharing, and this is not conducive to new user acquisition and user reflow.
With the continuous upgrading of network technology, the audio and video field has gradually become a mainstream direction of internet APP development. However, for APPs centered on long video/audio, the highlight parts cannot be emphasized through the existing sharing modes, and the sharing reflow rate is low. That is, the existing sharing modes can only share the whole long video/audio; they cannot selectively share part of the content in the long video/audio and cannot emphasize the highlight parts in it.
For the audio and video field, the embodiments of the present disclosure can selectively share part of the content in audio and/or video and add a sharing copy in the sharing process, thereby providing a new sharing mode and enriching sharing modes. Meanwhile, the highlight content in the audio and/or video can be selected for sharing, and the highlight content can be presented more intuitively through the sharing copy, so that the new user acquisition and user reflow brought by sharing can be improved.
The content sharing method provided by the embodiment of the disclosure can be applied to electronic equipment. Specifically, the electronic device may include a terminal, a server, and the like.
The embodiment of the disclosure provides a content sharing method, which includes:
acquiring an audio clip and/or a video clip selected by a user;
acquiring highlight content in the audio clip and/or the video clip;
generating a corresponding sharing copy based on the audio clip and/or the video clip;
and sharing content based on the highlight content and the sharing copy.
In the embodiments of the present disclosure, content sharing can be performed based on the highlight content and the sharing copy; that is, personalized content can be added in the content sharing process, sharing modes can be enriched, and the new user acquisition and user reflow brought by sharing can be improved.
Fig. 1 is a flowchart of a content sharing method according to an embodiment of the disclosure. Referring to fig. 1, a content sharing method provided by the embodiment of the present disclosure may include:
s101, acquiring an audio clip and/or a video clip selected by a user.
The audio clip and/or video clip may be a whole long audio/video, or an audio clip and/or video clip extracted from a long audio/video.
S102, acquiring the highlight content in the audio clip and/or the video clip.
In one implementation, the highlight content may be a highlight in the audio content and/or the video content that the user selects according to his/her preference.
For example, an audio clip alone may be extracted from a long audio as the highlight content. Alternatively, a video clip alone may be extracted from a long video as the highlight content. Alternatively, an audio clip may be extracted from a long audio and a video clip extracted from a long video at the same time, with both the audio clip and the video clip taken as the highlight content.
For example, the audio clip and/or the video clip is presented to the user, the user selects a start point and an end point, and the electronic device clips part of the content in the audio clip and/or the video clip as the highlight content according to the start point and end point selected by the user.
In another implementation, the electronic device may determine, through intelligent analysis, a partial segment of the audio clip as the highlight of the audio clip, and/or a partial segment of the video clip as the highlight of the video clip. The intelligent analysis may count evaluations of different segments of the audio clip and/or the video clip by a large number of users and select the highlight segments of the audio clip and/or the video clip as the highlight content according to the statistical result.
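As one non-limiting illustration of the statistics-based option above, the following sketch counts user feedback events (for example, likes or comments attached to playback positions) per fixed-length window of the clip and returns the highest-scoring window as the highlight. The feedback format, the window length and the function name are assumptions made only for illustration and are not specified by the disclosure; Python is used here and in the later sketches.

# Minimal sketch: pick the highlight window of a clip from user feedback statistics.
from collections import Counter
from typing import Iterable, Tuple

def pick_highlight(feedback_positions: Iterable[float],
                   clip_duration: float,
                   window: float = 15.0) -> Tuple[float, float]:
    """Return (start, end) of the fixed-length window with the most user feedback."""
    votes = Counter(int(p // window) for p in feedback_positions
                    if 0 <= p < clip_duration)
    if not votes:                      # no feedback: fall back to the head of the clip
        return 0.0, min(window, clip_duration)
    best_bucket, _ = votes.most_common(1)[0]
    start = best_bucket * window
    return start, min(start + window, clip_duration)

# Example: playback positions (seconds) at which users liked or commented on the clip
print(pick_highlight([12.4, 13.0, 14.8, 47.2, 48.9, 49.1, 50.0], 60.0))  # -> (45.0, 60.0)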
S103, generating a corresponding sharing copy based on the audio clip and/or the video clip.
In one implementation, the corresponding sharing copy may be generated in real time based on the highlight content.
In another implementation, a material library may be established in advance, and after the highlight content is obtained, the sharing copy corresponding to the highlight content is obtained from the material library based on the highlight content.
S104, sharing content based on the highlight content and the sharing copy.
The highlight content and the sharing copy can be combined, and the combined content can be shared.
Specifically, sharing may be performed by sharing to a third-party platform: for example, the user clicks a share button and, by selecting different third-party platforms, shares a page containing the highlight content and the sharing copy to other social platforms; other users open the corresponding page for browsing by clicking the shared card or applet.
Alternatively, sharing may be performed by downloading: for example, the content containing the highlight content and the sharing copy is saved locally by downloading and then forwarded to other third-party platforms.
Alternatively, the user may share by text/link: for example, the user clicks the share button and selects a copy-link mode to copy the page link containing the highlight content and the sharing copy to the clipboard and paste it to another platform; other users open the corresponding page for browsing by clicking the link or copying it into a browser.
In an alternative embodiment, as shown in fig. 2, S103 may include:
s201, determining the text content of the audio clip and/or the video clip.
S202, extracting an abstract of the text content.
S203, acquiring the sharing copy corresponding to the abstract based on the abstract.
In one implementation, the corresponding sharing copy may be generated in real time based on the abstract.
In another implementation, S203 may include:
searching, based on the abstract, for a copy matching the abstract from a pre-established material library; and determining the sharing copy based on the copy matching the abstract.
The copy matching the abstract may be a copy that contains the abstract; in that case, searching for the copy matching the abstract from the pre-established material library means searching the material library for a copy that contains the abstract.
In one mode, multiple copies matching the abstract may be found in the material library, and one of them may be randomly selected as the sharing copy.
Alternatively, searching for the copy matching the abstract from the pre-established material library may be done by calculating the similarity between the abstract and each copy; when the similarity between the abstract and a copy is greater than a preset similarity threshold, that copy is determined to be a copy matching the abstract.
There may be multiple copies whose similarity is greater than the preset similarity threshold; in this case, one of them may be randomly selected as the sharing copy, or the copy with the highest similarity to the abstract may be selected as the sharing copy.
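The matching logic described above can be sketched as follows. The disclosure leaves the similarity measure open; TF-IDF cosine similarity from scikit-learn, the 0.3 threshold and the function name are assumptions used purely for illustration.

# Minimal sketch: match the abstract against copies in the material library by similarity.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_sharing_copy(summary: str, copies: list[str],
                        threshold: float = 0.3,
                        prefer_best: bool = True) -> str | None:
    vec = TfidfVectorizer().fit(copies + [summary])
    sims = cosine_similarity(vec.transform([summary]), vec.transform(copies))[0]
    matched = [(c, s) for c, s in zip(copies, sims) if s > threshold]
    if not matched:
        return None                      # fall back to generating copy in real time
    if prefer_best:                      # pick the copy most similar to the abstract
        return max(matched, key=lambda cs: cs[1])[0]
    return random.choice(matched)[0]     # or pick one of the matches at random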
In the embodiments of the present disclosure, the user can select an audio clip and/or a video clip from the audio content and/or video content and acquire the corresponding sharing copy, so that personalized elements (the text content of the clipped segment and the corresponding sharing copy) can be added to the shared content, improving the new user acquisition and user reflow brought by audio and/or video sharing.
By establishing the material library in advance, the sharing copy can be obtained directly from the material library once the abstract is obtained, which makes acquiring the sharing copy more convenient.
Pre-building a material library, as shown in fig. 3, may include:
s301, audio content and/or video content are acquired.
Audio content and/or video content may be selected from audio programs and/or video programs on the network. For example, the selection may be based on attribute information of the audio programs and/or video programs, where the attribute information may include distribution proportion, number of listeners/viewers, and the like.
S302, content information of the audio content and/or the video content is extracted.
S303, user information for the audio content and/or the video content is acquired.
S304, generating a copy corresponding to the audio content and/or the video content based on the content information and the user information.
S305, storing the copy corresponding to the audio content and/or the video content in the material library.
Content information may also be understood as static information of the audio content and/or the video content, i.e. information of the audio content and/or the video content itself.
For example, for audio content, the static information may contain information for the audio's classification, tags, domain, anchor information, etc. dimensions.
User information may also be understood as dynamic information of audio content and/or video content.
The dynamic information includes dimensions such as the popularity trend and crowd preference in a certain field. It is obtained through data mining and statistical analysis of user behavior and user portraits, combined with the corresponding static-information analysis results, and forms the dynamic-information part of the material library.
Take audio as an example: the current audio is titled "Chatting with professionals about how to manage your money", belongs to a podcast column, and has a current anchor and guests.
Current audio profile: explanations of terms such as wealth-management products, funds and bonds, fund classifications, how to buy funds, how to manage money, and so on.
The content information of the audio, i.e., the static information, may include: the current podcast audio column, column classification, single-episode classification, content tags, an audio text abstract, anchor and guest information, and so on. Related information from a knowledge graph of the corresponding field (such as finance) can be stored in combination with the content information (for example, fund classifications, fund companies and investors); knowledge graphs are a common technology in the industry.
The user information of the audio, i.e., the dynamic information, may include user behavior, user portraits, and the like.
Combining the static information and the dynamic information, and using the capability of machine-learning semantic understanding, a recommended sentence is generated (each audio generates a content library of recommended sentences), for example a witty one-liner about who earns money and who loses it.
The recommended sentences can be understood as copies, and the copies and the audio are stored correspondingly in the material library.
Multiple copies can be obtained from multiple audios, so the material library can include multiple audios and their corresponding copies.
In the process of establishing the material library, the processing of video content is similar to the processing of audio content; by processing video content with reference to the audio processing procedure, copies corresponding to multiple pieces of video content can be obtained and stored in the material library.
After the material library is established, it can be used directly to assign copies to audio clips and/or video clips during content sharing. In addition, in the process of establishing the material library in the embodiments of the present disclosure, various kinds of information of multiple pieces of audio content and/or video content, such as static information and dynamic information, are comprehensively considered, so that copies matching the content can be obtained; in this way, during content sharing, a more suitable copy can be found from the material library for synthesizing the video clip to be shared.
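As a rough illustration, an entry in such a material library might combine the static information, the dynamic information and the generated copies as sketched below. The field names and the in-memory dictionary standing in for the library are assumptions chosen for readability, not structures mandated by the disclosure.

# Minimal sketch of a material-library entry and its storage.
from dataclasses import dataclass, field

@dataclass
class MaterialEntry:
    audio_id: str
    # static information: properties of the audio/video content itself
    column: str
    tags: list[str]
    domain: str
    abstract: str
    anchor_and_guests: list[str]
    # dynamic information: statistics mined from user behavior and user portraits
    popularity_trend: dict[str, float] = field(default_factory=dict)
    audience_preferences: dict[str, float] = field(default_factory=dict)
    # copies generated from the two kinds of information above
    copies: list[str] = field(default_factory=list)

material_library: dict[str, MaterialEntry] = {}

def store_entry(entry: MaterialEntry) -> None:
    # S305: store the copies together with the audio they correspond to
    material_library[entry.audio_id] = entry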
The process of obtaining the sharing copy using the material library is illustrated below, taking an audio clip as an example:
The user can manually clip an audio segment; the abstract of the current audio clip is extracted through semantic recognition, and word segmentation is performed. The keywords are then matched online, at random, against the existing materials in the intelligent material library. For example, if a keyword in the abstract is "earning money", there may be hundreds of recommended sentences (i.e., copies) that fit the semantics, and one of them is randomly selected as the intelligent copy (i.e., the sharing copy corresponding to the audio clip) and recommended to the user.
On the basis of the embodiment shown in fig. 1, as shown in fig. 4, S104 may include:
S401, synthesizing a video clip based on the highlight content and the sharing copy.
The text content corresponding to the highlight content and the sharing copy can be used as subtitles of the highlight segment to synthesize the video clip.
For example, if the highlight content is an audio clip, the text content of the audio clip and the sharing copy corresponding to the audio clip can be obtained, the text content and the sharing copy can be used as subtitles of the audio clip, and the subtitles and the audio clip can be synthesized into a video clip.
For example, if the highlight content is a video clip, the text content of the video clip and the sharing copy corresponding to the video clip can be obtained, the text content and the sharing copy can be used as subtitles of the video clip, and the subtitles and the video clip can be synthesized into a new video clip.
According to the embodiments of the present disclosure, the user can select the length of the audio clip to be shared by himself or herself, and the electronic device automatically synthesizes, according to the content of the audio clip and in combination with the recommended materials in the material library, a video clip that can be shared directly. The method therefore provides a low-cost, attractive capability of intelligently selecting and synthesizing a short video from an audio clip, and addresses the current problems of a single sharing mode and a low reflow conversion rate.
In addition, in the process of synthesizing the video clip, a random background image can be combined, and materials such as the audio clip, the subtitles, the background image and the current product logo can be synthesized into the video clip for sharing.
For example, if the highlight content includes both an audio clip and a video clip, the text content of the audio clip and its corresponding sharing copy, and the text content of the video clip and its corresponding sharing copy, can be obtained separately. The text content and sharing copy corresponding to the audio clip are used as subtitles of the audio clip and synthesized with the audio clip into one video clip; the text content and sharing copy corresponding to the video clip are used as subtitles of the video clip and synthesized with the video clip into another video clip; and the video clip synthesized from the audio clip and the video clip re-synthesized from the original video clip are combined to obtain the video clip to be shared.
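A minimal sketch of the synthesis step for the audio case is given below: the audio clip, a background image and an SRT subtitle file holding the text content and the sharing copy are combined into a shareable MP4. FFmpeg is assumed as the encoder only for illustration; the disclosure does not mandate a particular tool, and the file paths are placeholders.

# Minimal sketch of S401: burn subtitles (text content + sharing copy) onto a still
# background and mux the audio clip into a shareable MP4 using FFmpeg.
import subprocess

def compose_share_video(audio_path: str, background_png: str,
                        subtitle_srt: str, out_mp4: str) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-loop", "1", "-i", background_png,   # still background image
        "-i", audio_path,                     # highlight audio clip
        "-vf", f"subtitles={subtitle_srt}",   # burn in text content + sharing copy
        "-shortest",                          # stop when the audio ends
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac",
        out_mp4,
    ], check=True)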
And S402, sharing the video clip.
In the embodiments of the present disclosure, artificial intelligence technology is fully exploited, personalized elements are added to the sharing scenario, the activity of user participation is improved, and a new sharing scenario and playing method are explored. The method is suitable for scenarios with long videos or long audios, can present the sharer's intent more distinctly, and to a certain extent expands the market audience of the product. Meanwhile, the highlight segments can be saved and later used as personalized recommendation data inside the client, enhancing in-client interaction capability and content richness.
In an alternative embodiment, a selection interface may be provided; the selection interface includes an option for a segment sharing mode; and the user's operation on the option is detected.
Put simply, an entrance to the segment sharing mode is provided for the user. When it is detected that the user operates the option of the segment sharing mode (the operation may be, for example, a single click or a double click), the segment sharing mode is entered: the highlight content is acquired; the sharing copy corresponding to the highlight content is acquired based on the highlight content; and content sharing is performed based on the highlight content and the sharing copy.
In this way, the user's sharing modes are enriched, and interaction with the user is facilitated.
In an alternative embodiment, the process of pre-establishing the material library may specifically be: performing offline content policy analysis, through content understanding and data mining, on podcast audio content with a high daily distribution share.
Referring to fig. 5, a process of creating a material library based on audio contents will be described in detail.
An artificial intelligence module can use artificial intelligence means related to natural language processing (NLP) to analyze the audio content offline, convert the audio content into text content with timestamp labels in combination with automatic speech recognition (ASR) technology, understand the text content based on a knowledge graph of a certain field, and store the key content to form the static-information part of the material library.
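The conversion into timestamped text can be sketched as follows. The disclosure does not name a specific ASR engine; the open-source Whisper model is used here only as a stand-in, and the function name is an assumption.

# Minimal sketch: ASR that yields text content with timestamp labels.
import whisper

def transcribe_with_timestamps(audio_path: str) -> list[dict]:
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    # each segment carries start/end timestamps and the recognized text
    return [{"start": seg["start"], "end": seg["end"], "text": seg["text"]}
            for seg in result["segments"]]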
The static information may contain information in dimensions of classification of audio, tags, domain, anchor information, etc.
The dynamic information is derived from statistics in the user dimension, i.e., statistics of user behavior and user portraits with respect to the audio, and may further cover content of the vertical field.
The static information is statistically processed and analyzed by a static-information processing module, and the dynamic information by a dynamic-information processing module, to obtain the intelligent material library, i.e., the material library.
A personalized content generation module generates a large number of short copy materials based on a machine-learning algorithm, by combining the intelligent material library with materials of a certain field from the whole network, and supplements them into the intelligent material library; this part of the data is also reviewed offline to ensure the safety of machine-generated materials. Sharing can then be performed by clipping segments and synthesizing a video in combination with the personalized content.
After the material library is established, in the content sharing process, as shown in fig. 6, audio processing such as audio cropping and audio segment transcoding is performed on the podcast audio.
The text content corresponding to the audio clip is obtained through artificial intelligence processing; an abstract of the text content is extracted through NLP; and an intelligent copy is acquired from the intelligent material library based on the abstract, the specific process being: searching, based on the abstract, for a copy matching the abstract from the pre-established material library, and determining the sharing copy based on the copy matching the abstract. A video clip is then synthesized based on the audio clip, the text content corresponding to the audio clip and the intelligent copy, and the video clip is shared.
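Tying these steps together for an audio clip, a rough end-to-end sketch might look like the following. It reuses the earlier sketches (transcribe_with_timestamps, select_sharing_copy, compose_share_video); extract_summary and write_srt are hypothetical placeholders for the summarizer and subtitle writer, pydub is assumed for audio cropping, and all file names are placeholders.

# Minimal sketch of the fig. 6 flow for one audio clip, under the assumptions above.
from pydub import AudioSegment

def share_audio_clip(source_audio: str, start_ms: int, end_ms: int,
                     copies: list[str]) -> str:
    # audio cropping: cut the user-selected segment out of the podcast audio
    clip = AudioSegment.from_file(source_audio)[start_ms:end_ms]
    clip.export("clip.mp3", format="mp3")

    # audio-to-text, abstract extraction, and copy lookup in the material library
    segments = transcribe_with_timestamps("clip.mp3")
    text = " ".join(seg["text"] for seg in segments)
    summary = extract_summary(text)                     # hypothetical summarizer
    copy = select_sharing_copy(summary, copies) or summary

    # write text content + sharing copy as subtitles, then compose the video
    write_srt(segments, copy, "clip.srt")               # hypothetical SRT writer
    compose_share_video("clip.mp3", "background.png", "clip.srt", "share.mp4")
    return "share.mp4"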
The embodiments of the present disclosure provide a convenient, featured highlight-sharing capability for the long-audio product form of podcasts, and the capability can be quickly introduced into APPs to verify the product function (i.e., various types of APPs can implement this function). By means of the user portrait and the audio content tags, the user's emotion and mood while listening to the audio are automatically captured and converted into a personalized sharing copy and a corresponding sharing video clip, which can be shared to other third-party platforms. This helps improve user activity, new-user conversion capability and interest in participation, and at the same time improves the product's daily active users and user retention rate.
Specifically, in the application process, a sharing entrance can be provided for the user. As shown in fig. 7A, an "audio clip" option may be provided on the APP interface; after the user clicks the share button, the option is displayed, and the user clicks the option to enter the audio-clip sharing mode. Specifically, while the whole audio is playing, the user may select a start time and an end time, as shown in fig. 7B, i.e., select one time point in the audio as the start time and another time point as the end time, and then click the confirm button; when the electronic device detects that the confirm button has been clicked, it receives the start time and the end time. The audio segment corresponding to the start time and end time is clipped, i.e., audio cropping is performed, and the clipped audio segment is then transcoded to obtain the text content corresponding to the audio segment. Next, an abstract of the text content may be extracted; a copy matching the abstract is searched for in the pre-established material library based on the abstract; and the sharing copy is determined based on the copy matching the abstract. A video clip is synthesized based on the text content and the sharing copy. As shown in fig. 7C, in the process of synthesizing the video clip, an interface may be provided to the user, on which the user may select materials for the synthesis of the video clip. After the video clip is synthesized, it can be shared: as shown in fig. 7D, the user may click the download option to download the video clip locally and then share it, or click a third-party platform option to share the video clip directly to that third-party platform.
In the embodiments of the present disclosure, based on audio content understanding and artificial intelligence strategies, and combined with the capability of a video editor, a personalized sharing copy of the current user for the audio is automatically generated based on the user portrait, and the finally synthesized video clip is shared directly.
An embodiment of the present disclosure further provides a content sharing apparatus, as shown in fig. 8, the content sharing apparatus may include:
a first obtaining module 801, configured to obtain an audio clip and/or a video clip selected by a user, and obtain highlight content in the audio clip and/or the video clip;
a first generating module 802, configured to generate a corresponding sharing copy based on the audio clip and/or the video clip;
and a sharing module 803, configured to share content based on the highlight content and the sharing copy.
Optionally, the first generating module 802 is specifically configured to obtain the audio clip and/or video clip selected by the user; determine the text content of the audio clip and/or video clip; extract an abstract of the text content; and obtain the sharing copy corresponding to the abstract based on the abstract.
Optionally, the first generating module 802 is specifically configured to search, based on the abstract, for a copy matching the abstract from a pre-established material library, and determine the sharing copy based on the copy matching the abstract.
Optionally, the sharing module 803 is specifically configured to synthesize a video clip based on the highlight content and the sharing copy, and share the video clip.
Optionally, as shown in fig. 9, the apparatus further includes:
a second obtaining module 901, configured to obtain audio content and/or video content;
an extracting module 902, configured to extract content information of the audio content and/or the video content;
a third obtaining module 903, configured to obtain user information for the audio content and/or the video content;
a second generating module 904, configured to generate a copy corresponding to the audio content and/or the video content based on the content information and the user information;
and a storage module 905, configured to store the copy corresponding to the audio content and/or the video content in a material library.
Optionally, as shown in fig. 10, the apparatus further includes:
a providing module 1001, configured to provide a selection interface, the selection interface including an option for a segment sharing mode;
the detecting module 1002 is configured to detect an operation of the option by the user.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 11 shows a schematic block diagram of an example electronic device 1100 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the device 1100 includes a computing unit 1101, which may perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a random access memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the device 1100 may also be stored. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in device 1100 connect to I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, and the like; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108 such as a magnetic disk, optical disk, or the like; and a communication unit 1109 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1101 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the respective methods and processes described above, such as the content sharing method. For example, in some embodiments, the content sharing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the content sharing method described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the content sharing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A content sharing method, comprising:
acquiring an audio clip and/or a video clip selected by a user;
acquiring highlight content in the audio clip and/or the video clip;
generating a corresponding sharing copy based on the audio clip and/or the video clip;
and sharing content based on the highlight content and the sharing copy.
2. The method of claim 1, wherein the generating a corresponding sharing copy based on the audio clip and/or the video clip comprises:
determining the text content of the audio clip and/or the video clip;
extracting an abstract of the text content;
and acquiring the sharing copy corresponding to the abstract based on the abstract.
3. The method according to claim 2, wherein the acquiring the sharing copy corresponding to the abstract based on the abstract comprises:
searching, based on the abstract, for a copy matching the abstract from a pre-established material library;
and determining the sharing copy based on the copy matching the abstract.
4. The method of claim 3, wherein the sharing content based on the highlight content and the sharing copy comprises:
synthesizing a video clip based on the highlight content and the sharing copy;
and sharing the video clip.
5. The method of claim 3, further comprising:
acquiring audio content and/or video content;
extracting content information of the audio content and/or the video content;
acquiring user information for the audio content and/or the video content;
generating a copy corresponding to the audio content and/or the video content based on the content information and the user information;
and storing the copy corresponding to the audio content and/or the video content in a material library.
6. The method of any of claims 1 to 5, further comprising:
providing a selection interface; the selection interface comprises an option of a segment sharing mode;
and detecting the operation of the user on the option.
7. A content sharing apparatus, comprising:
the first acquisition module is used for acquiring an audio clip and/or a video clip selected by a user, and acquiring highlight content in the audio clip and/or the video clip;
the first generation module is used for generating a corresponding sharing copy based on the audio clip and/or the video clip;
and the sharing module is used for sharing content based on the highlight content and the sharing copy.
8. The apparatus according to claim 7, wherein the first generation module is specifically configured to acquire the audio clip and/or video clip selected by the user; determine the text content of the audio clip and/or video clip; extract an abstract of the text content; and acquire the sharing copy corresponding to the abstract based on the abstract.
9. The apparatus according to claim 8, wherein the first generation module is specifically configured to search, based on the abstract, for a copy matching the abstract from a pre-established material library, and determine the sharing copy based on the copy matching the abstract.
10. The apparatus of claim 9, wherein the sharing module is specifically configured to synthesize a video clip based on the highlight content and the sharing copy, and share the video clip.
11. The apparatus of claim 9, further comprising:
the second acquisition module is used for acquiring audio content and/or video content;
the extraction module is used for extracting the content information of the audio content and/or the video content;
a third obtaining module, configured to obtain user information for the audio content and/or the video content;
the second generation module is used for generating a copy corresponding to the audio content and/or the video content based on the content information and the user information;
and the storage module is used for storing the copy corresponding to the audio content and/or the video content in a material library.
12. The apparatus of any of claims 7 to 11, further comprising:
a providing module for providing a selection interface; the selection interface comprises an option of a segment sharing mode;
and the detection module is used for detecting the operation of the user on the options.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202111074803.8A 2021-09-14 2021-09-14 Content sharing method, device, equipment and storage medium Pending CN113778717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111074803.8A CN113778717A (en) 2021-09-14 2021-09-14 Content sharing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113778717A 2021-12-10

Family

ID=78843539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111074803.8A Pending CN113778717A (en) 2021-09-14 2021-09-14 Content sharing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113778717A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092439A1 (en) * 2015-11-30 2017-06-08 香港欢乐谷科技有限公司 Method and device for editing a file
CN105959828A (en) * 2016-06-27 2016-09-21 乐视控股(北京)有限公司 Audio/video sharing method and device, audio/video playing method and device and electronic equipment
CN112579826A (en) * 2020-12-07 2021-03-30 北京字节跳动网络技术有限公司 Video display and processing method, device, system, equipment and medium
CN113157153A (en) * 2021-02-07 2021-07-23 北京字节跳动网络技术有限公司 Content sharing method and device, electronic equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Jiaojiao, "Improving the 'Four Forces' of Mainstream-Media Short-Video News: Taking People's Daily as an Example", Today's Mass Media (今传媒), No. 01

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449327A (en) * 2021-12-31 2022-05-06 北京百度网讯科技有限公司 Video clip sharing method and device, electronic equipment and readable storage medium
CN114449327B (en) * 2021-12-31 2024-03-26 北京百度网讯科技有限公司 Video clip sharing method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN111143610B (en) Content recommendation method and device, electronic equipment and storage medium
US9400833B2 (en) Generating electronic summaries of online meetings
US10560734B2 (en) Video segmentation and searching by segmentation dimensions
AU2014309040B9 (en) Presenting fixed format documents in reflowed format
CN109154943B (en) Server-based conversion of automatically played content to click-to-play content
US10116981B2 (en) Video management system for generating video segment playlist using enhanced segmented videos
CN109275047B (en) Video information processing method and device, electronic equipment and storage medium
CN113079417B (en) Method, device and equipment for generating bullet screen and storage medium
CN115982376B (en) Method and device for training model based on text, multimode data and knowledge
CN107566906B (en) Video comment processing method and device
CN111050191B (en) Video generation method and device, computer equipment and storage medium
CN111263186A (en) Video generation, playing, searching and processing method, device and storage medium
KR20150095663A (en) Flat book to rich book conversion in e-readers
CN111177462A (en) Method and device for determining video distribution timeliness
CN113038175B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN113407775B (en) Video searching method and device and electronic equipment
CN113778717A (en) Content sharing method, device, equipment and storage medium
US20170132198A1 (en) Provide interactive content generation for document
CN114880498B (en) Event information display method and device, equipment and medium
CN115357755A (en) Video generation method, video display method and device
US11074939B1 (en) Disambiguation of audio content using visual context
CN113923479A (en) Audio and video editing method and device
CN114238689A (en) Video generation method, video generation device, electronic device, storage medium, and program product
CN113965798A (en) Video information generating and displaying method, device, equipment and storage medium
CN111259181B (en) Method and device for displaying information and providing information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination