CN111158924B - Content sharing method and device, electronic equipment and readable storage medium - Google Patents

Content sharing method and device, electronic equipment and readable storage medium

Info

Publication number
CN111158924B
CN111158924B (application CN201911212878.0A)
Authority
CN
China
Prior art keywords
content
information
image
user
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911212878.0A
Other languages
Chinese (zh)
Other versions
CN111158924A (en)
Inventor
刘俊启
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911212878.0A priority Critical patent/CN111158924B/en
Publication of CN111158924A publication Critical patent/CN111158924A/en
Application granted granted Critical
Publication of CN111158924B publication Critical patent/CN111158924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • G06F16/345Summarisation for human users
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/538Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/638Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/65Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • G06F16/739Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/543User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]

Abstract

The application discloses a content sharing method and apparatus, an electronic device, and a readable storage medium, and relates to content identification technology. According to embodiments of the application, target content is analyzed to obtain content summary information of the target content, and the content summary information is displayed; in response to receiving the user's confirmation of the content summary information, the confirmed content summary information is taken as the user viewpoint, and the user viewpoint and the resource description information of the target content are sent to a location specified by the user. The user does not need to repeatedly review the shared content in order to manually edit viewpoint content related to it, so sharing efficiency and user experience are improved. In addition, the user viewpoint content attached to the shared content makes other users more likely to view the shared content and improves interaction based on it.

Description

Content sharing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to Internet technologies, in particular to content identification technology, and provides a content sharing method and apparatus, an electronic device, and a readable storage medium.
Background
With the popularization of the mobile Internet, browsing web content on a mobile phone has become commonplace. At the same time, the variety of applications (apps) on mobile phones keeps growing, and social applications are one such category. Social applications let users exchange information, including video, pictures, text, and voice, with friends and with groups. A user may share the content currently being browsed within the social application (e.g., to the social application's internal space, or to friends and groups within it) or to a third-party application (e.g., to the third-party application's space, or to friends and groups within it).
Currently, whether the user shares browsing content within the social application or to a third-party application, the shared link typically contains only a title. If the user wants to express a viewpoint on the shared content, he or she has to review the shared content repeatedly and manually edit the corresponding viewpoint text, so sharing efficiency is low, the sharing effect is poor, and the user experience suffers.
Disclosure of Invention
Aspects of the present application provide a content sharing method, apparatus, electronic device, and readable storage medium, so as to improve sharing efficiency and user experience.
In one aspect of the present application, there is provided a content sharing method, including:
analyzing the target content to obtain content abstract information of the target content;
displaying the content abstract information;
and in response to receiving the user's confirmation of the content summary information, taking the confirmed content summary information as the user viewpoint and sending the user viewpoint and the resource description information of the target content to a location specified by the user.
In the aspect above and any possible implementation thereof, there is further provided an implementation in which, after the displaying of the content summary information, the method further includes:
modifying the content summary information of the target content according to a modification instruction input by the user, where the modification instruction includes modification information.
In the aspect above and any possible implementation thereof, there is further provided an implementation in which, after the displaying of the content summary information and the resource description information of the target content, the method further includes:
deleting the content summary information of the target content according to a deletion instruction input by the user.
In the aspect above and any possible implementation thereof, the sending of the user viewpoint and the resource description information to the user-specified location, in response to receiving the user's confirmation of the content summary information and with the confirmed content summary information as the user viewpoint, includes:
storing the content summary information confirmed by the user and the resource description information of the target content in a clipboard in response to receiving the user's confirmation of the content summary information;
and, in response to a sharing instruction issued when the user selects a sharing object, obtaining the confirmed content summary information and the resource description information of the target content from the clipboard, taking the confirmed content summary information as the user viewpoint, and sending the user viewpoint and the resource description information of the target content to the location corresponding to the sharing object.
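The clipboard-based two-phase flow described above can be sketched as follows. The in-memory `clipboard` dict and the function names are hypothetical stand-ins for a platform clipboard API, not the patent's implementation:

```python
# Hypothetical in-memory clipboard sketch of the two-phase flow:
# confirmation stores the data; the later share instruction retrieves it.
clipboard = {}

def on_confirm(summary, resource_info):
    # Phase 1: the user confirmed the summary -> stash viewpoint + resource
    clipboard["viewpoint"] = summary
    clipboard["resource"] = resource_info

def on_share(target_location):
    # Phase 2: the user picked a sharing object -> read back and send
    payload = {"viewpoint": clipboard["viewpoint"],
               "resource": clipboard["resource"],
               "to": target_location}
    return payload  # a real app would hand this to the target application

on_confirm("Worth reading.", {"title": "Demo", "url": "https://example.com"})
sent = on_share("friend:alice")
```

Decoupling the two phases through the clipboard lets the confirmation happen in the browsing context while the actual send happens later, in whatever sharing dialog the user opens.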
In the aspect above and any possible implementation thereof, the target content includes any one or more of the following: web pages, text, images, audio, and video; or
the content summary information includes any one or more of the following: text, images, audio, and video.
In the aspect above and any possible implementation thereof, when the target content is text, analyzing the target content to obtain its content summary information includes: performing content identification and information extraction on the text using natural language processing technology to generate content summary information of the text; or
when the target content is an image, it includes: performing content identification on the image and obtaining content summary information of the image based on the content identification result; or
when the target content is a web page, it includes: performing content identification on the web page and obtaining content summary information of the web page based on the content identification result; or
when the target content is audio, it includes: performing content identification on the audio and obtaining content summary information of the audio based on the content identification result; or
when the target content is video, it includes: performing content identification on the video and obtaining content summary information of the video based on the content identification result.
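As a rough illustration of the text branch, here is a toy extractive summarizer that scores sentences by word frequency. A production system would use a trained NLP model, which the patent does not specify; this sketch only shows the identify-then-extract shape of the step:

```python
import re
from collections import Counter

def extract_summary(text, max_sentences=1):
    """Toy extractive summarizer standing in for the NLP step:
    score each sentence by the corpus frequency of its words,
    then keep the top-scoring sentences in document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w]
                                       for w in re.findall(r"\w+", sentences[i].lower())))
    keep = sorted(scored[:max_sentences])  # restore original order
    return " ".join(sentences[i] for i in keep)

summary = extract_summary("Cats sleep a lot. Cats sleep daily. Dogs bark.")
```

The sentence containing the most frequent words ("Cats sleep a lot.") wins, which is the intuition behind simple extractive summarization.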
In the aspect above and any possible implementation thereof, performing content identification on the web page and obtaining its content summary information based on the content identification result includes:
performing content identification and information extraction on the text in the web page using natural language processing technology to generate content summary information of the web page; or
classifying the content in the web page, performing content identification and information extraction on the text in the web page to obtain first information, extracting the non-text content in the web page to obtain second information, and obtaining the content summary information of the web page from the first information and the second information.
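The classify-then-extract branch for web pages can be sketched with Python's standard-library HTML parser. Treating visible text as the "first information" and image `src` attributes as the "second information" is an illustrative assumption; the patent does not prescribe a parser or a specific split:

```python
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Collect visible text (first information) and image references
    (second information) from a web page, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.text_parts, self.images = [], []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
        elif tag == "img":
            self.images.append(dict(attrs).get("src", ""))
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text_parts.append(data.strip())

p = PageExtractor()
p.feed('<html><body><h1>Title</h1><p>Body text.</p>'
       '<img src="pic.jpg"><script>var x=1;</script></body></html>')
```

The text parts would then feed the NLP summarization step, while the image references (or other non-text content) could be combined into the page's summary.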
In the aspect above and any possible implementation thereof, performing content identification on the audio and obtaining its content summary information based on the content identification result includes:
performing content identification and information extraction on the text attached to the audio using natural language processing technology to generate content summary information of the audio; or
converting the audio into text and performing content identification and information extraction on the converted text using natural language processing technology to generate content summary information of the audio; or
converting the audio into text and performing content identification and information extraction on both the converted text and the text attached to the audio using natural language processing technology to generate content summary information of the audio.
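The third branch above (combining the audio's attached text with the converted transcript) can be sketched as follows. The `speech_to_text` stub stands in for a real ASR engine, which the patent does not name, and the first-sentence condensation is a trivial placeholder for the NLP step:

```python
def speech_to_text(audio_bytes):
    """Stand-in for a real ASR engine (e.g. a cloud speech API).
    Here we pretend the audio decodes to a fixed transcript."""
    return "The quarterly results exceeded expectations. Sales grew fast."

def summarize_audio(audio_bytes, caption=""):
    # Combine text attached to the audio (e.g. its caption) with the
    # converted transcript, then condense the combined text.
    transcript = speech_to_text(audio_bytes)
    combined = (caption + " " + transcript).strip()
    # Trivial condensation stand-in: keep the first sentence.
    return combined.split(". ")[0] + "."

summary = summarize_audio(b"...", caption="Earnings call.")
```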
In the aspect above and any possible implementation thereof, performing content identification on the image and obtaining its content summary information based on the content identification result includes: performing feature extraction and classification on the image using image processing technology, and obtaining the content summary information of the image based on the classification result.
In the aspect above and any possible implementation thereof, obtaining the content summary information of the image based on the classification result includes:
obtaining text summary information of the image based on the classification result, and selecting at least a partial region image of the image based on the classification result;
and obtaining the content summary information based on the text summary information of the image and the at least partial region image.
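A minimal sketch of this image branch follows: classify, produce a text summary from the label, and select a sub-region. The dominant-channel "classifier" and the fixed top-left crop are deliberately naive stand-ins for a trained vision model and a saliency-based region selection:

```python
def classify_image(pixels):
    """Stand-in for the feature-extraction-and-classification step:
    label a tiny RGB grid by its dominant color channel."""
    r = sum(p[0] for row in pixels for p in row)
    g = sum(p[1] for row in pixels for p in row)
    b = sum(p[2] for row in pixels for p in row)
    return "sky" if b >= max(r, g) else ("plant" if g >= r else "sunset")

def image_summary(pixels):
    label = classify_image(pixels)
    text_summary = f"An image that appears to show a {label} scene."
    # Per the claim, also select at least a partial region of the image;
    # here simply the top-left quadrant.
    region = [row[: len(row) // 2] for row in pixels[: len(pixels) // 2]]
    return text_summary, region

blue_grid = [[(10, 20, 200)] * 4 for _ in range(4)]  # mostly-blue 4x4 image
summary, region = image_summary(blue_grid)
```

The final content summary information would combine both outputs: the generated text and the selected region image.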
In the aspect above and any possible implementation thereof, performing content identification on the video and obtaining its content summary information based on the content identification result includes:
selecting multiple frames of images from the video;
taking each frame in the multiple frames as the current image, and performing feature extraction and classification on the current image using image processing technology to obtain a content identification result of the current image;
and obtaining the content summary information of the video based on the content identification results of the multiple frames.
In the aspect above and any possible implementation thereof, selecting multiple frames of images from the video includes:
segmenting the video to obtain multiple segmented videos, and selecting a preset number of images from each of the segmented videos to obtain the multiple frames; or
segmenting the video to obtain multiple segmented videos, and selecting a preset number of images from any one or more of the segmented videos to obtain the multiple frames; or
randomly selecting images from the video to obtain the multiple frames.
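The first sampling branch (segment the video, take a preset number of frames per segment) can be sketched over a plain frame list. Taking the first frames of each segment is a deterministic stand-in for whatever per-segment selection a real system would use:

```python
def sample_frames(frames, segments=3, per_segment=1):
    """Split the frame list into equal segments and take a fixed
    number of frames from the start of each segment."""
    if segments <= 0 or not frames:
        return []
    seg_len = max(1, len(frames) // segments)
    picked = []
    for s in range(segments):
        segment = frames[s * seg_len:(s + 1) * seg_len]
        picked.extend(segment[:per_segment])  # deterministic stand-in for "select"
    return picked

frames = list(range(12))  # pretend these are frame indices of a short clip
chosen = sample_frames(frames, segments=3, per_segment=2)
```

Segment-wise sampling spreads the chosen frames across the whole clip, so the per-frame recognition results cover the video's beginning, middle, and end rather than one burst.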
In the aspect above and any possible implementation thereof, obtaining the content summary information of the video based on the content identification results of the multiple frames includes:
obtaining text summary information of the multiple frames based on their content identification results, and selecting, based on those results, at least one frame or a region image within at least one frame;
and obtaining the content summary information based on the text summary information of the multiple frames and the at least one frame or the region image within at least one frame.
In another aspect of the present application, there is provided a content sharing apparatus, including:
a content analysis unit, configured to analyze target content to obtain content summary information of the target content;
an interaction unit, configured to display the content summary information;
and a sharing unit, configured to, in response to receiving the user's confirmation of the content summary information, take the confirmed content summary information as the user viewpoint and send the user viewpoint and the resource description information of the target content to a location specified by the user.
In the aspect above and any possible implementation thereof, the interaction unit is further configured to modify the content summary information of the target content according to a modification instruction input by the user, where the modification instruction includes modification information.
In the aspect above and any possible implementation thereof, the interaction unit is further configured to delete the content summary information of the target content according to a deletion instruction input by the user.
In the aspect above and any possible implementation thereof, the sharing unit is specifically configured to:
store the content summary information confirmed by the user and the resource description information of the target content in a clipboard in response to receiving the user's confirmation of the content summary information;
and, in response to a sharing instruction issued when the user selects a sharing object, obtain the confirmed content summary information and the resource description information of the target content from the clipboard, take the confirmed content summary information as the user viewpoint, and send the user viewpoint and the resource description information of the target content to the location corresponding to the sharing object.
In the aspect above and any possible implementation thereof, the target content includes any one or more of the following: web pages, text, images, audio, and video; or
the content summary information includes any one or more of the following: text, images, audio, and video.
In the aspect above and any possible implementation thereof, when the target content is text, the content analysis unit is configured to perform content identification and information extraction on the text using natural language processing technology and generate content summary information of the text; or
when the target content is an image, the content analysis unit is configured to perform content identification on the image and obtain content summary information of the image based on the content identification result; or
when the target content is a web page, the content analysis unit is configured to perform content identification on the web page and obtain content summary information of the web page based on the content identification result; or
when the target content is audio, the content analysis unit is configured to perform content identification on the audio and obtain content summary information of the audio based on the content identification result; or
when the target content is video, the content analysis unit is configured to perform content identification on the video and obtain content summary information of the video based on the content identification result.
In the aspect above and any possible implementation thereof, when the target content is a web page, the content analysis unit is specifically configured to:
perform content identification and information extraction on the text in the web page using natural language processing technology to generate content summary information of the web page; or
classify the content in the web page, perform content identification and information extraction on the text in the web page to obtain first information, extract the non-text content in the web page to obtain second information, and obtain the content summary information of the web page from the first information and the second information.
In the aspect above and any possible implementation thereof, when the target content is audio, the content analysis unit is specifically configured to:
perform content identification and information extraction on the text attached to the audio using natural language processing technology to generate content summary information of the audio; or
convert the audio into text and perform content identification and information extraction on the converted text using natural language processing technology to generate content summary information of the audio; or
convert the audio into text and perform content identification and information extraction on both the converted text and the text attached to the audio using natural language processing technology to generate content summary information of the audio.
In the aspect above and any possible implementation thereof, when the target content is an image, the content analysis unit is specifically configured to perform feature extraction and classification on the image using image processing technology, and obtain the content summary information of the image based on the classification result.
In the aspect above and any possible implementation thereof, when the target content is video, the content analysis unit is specifically configured to:
select multiple frames of images from the video;
take each frame in the multiple frames as the current image, and perform feature extraction and classification on the current image using image processing technology to obtain a content identification result of the current image;
and obtain the content summary information of the video based on the content identification results of the multiple frames.
In the aspect above and any possible implementation thereof, when selecting multiple frames of images from the video, the content analysis unit is specifically configured to:
segment the video to obtain multiple segmented videos, and select a preset number of images from each of the segmented videos to obtain the multiple frames; or
segment the video to obtain multiple segmented videos, and select a preset number of images from any one or more of the segmented videos to obtain the multiple frames; or
randomly select images from the video to obtain the multiple frames.
In another aspect of the present application, there is provided an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any of the aspects and possible implementations described above.
In another aspect of the application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the aspects and any possible implementation described above.
As can be seen from the above technical solutions, in embodiments of the present application, the target content is analyzed to obtain its content summary information, and the content summary information is displayed; then, in response to receiving the user's confirmation of the content summary information, the confirmed content summary information is taken as the user viewpoint and sent, together with the resource description information of the target content, to the location specified by the user. When the user shares content, summary information related to the shared content can thus be generated automatically as the user viewpoint, and the user does not need to repeatedly review the shared content to manually edit viewpoint content related to it, so sharing efficiency and user experience are improved. In addition, the user viewpoint content attached to the shared content makes other users more likely to view the shared content and improves interaction based on it.
In addition, with the technical solution provided by the application, after the content summary information and the resource description information of the target content are displayed, the content summary information of the target content can be modified according to a modification instruction input by the user. This allows the automatically generated content summary information to be adjusted and optimized, meets the user's need for personalized viewpoint expression, and further improves the user experience.
In addition, with the technical solution provided by the application, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, which meets the user's personalized requirements and further improves the user experience.
In addition, with the technical solution provided by the application, the target content can include any one or more of the following: web pages, text, images, audio, and video. Content summary information can be generated automatically for all of these content types and shared along with the content as the sharing user's viewpoint, which improves sharing efficiency and user experience, and increases both the likelihood that other users will view the shared content and the interaction based on it.
In addition, with the technical solution provided by the application, the automatically generated content summary information can include any one or more of the following: text, images, audio, and video, which enriches the expression of the user viewpoint and further improves the user experience.
Other effects of the above aspects or possible implementations will be described below in connection with specific embodiments.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. It is obvious that the drawings in the following description show some embodiments of the present application, and that a person skilled in the art can derive other drawings from them without inventive effort. The drawings are intended only to aid understanding of the present solution and are not to be construed as limiting the application. In the drawings:
fig. 1A is a flowchart illustrating a content sharing method according to an embodiment of the present application;
fig. 1B is a flowchart illustrating a content sharing method according to another embodiment of the present application;
FIGS. 1C-1F are diagrams illustrating exemplary target content and content summary information according to embodiments of the present application;
fig. 2A is a schematic structural diagram of a content sharing device according to an embodiment of the application;
fig. 2B is a schematic structural diagram of a content sharing device according to another embodiment of the present application;
fig. 3 is a schematic diagram of an electronic device for implementing the content sharing method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below with reference to the accompanying drawings, in which various details of the embodiments are included to facilitate understanding; these details are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
It will be apparent that the described embodiments are some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terminal according to the embodiment of the present application may include, but is not limited to, a mobile phone, a personal digital assistant (Personal Digital Assistant, PDA), a wireless handheld device, a tablet computer, a personal computer (Personal Computer, PC), an MP3 player, an MP4 player, a wearable device (e.g., smart glasses, smart watches, smart bracelets, etc.), a smart home device (e.g., smart speaker device, smart television, smart air conditioner, etc.), and so on.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1A is a flowchart of a content sharing method according to an embodiment of the application. As shown in fig. 1A, the method includes the following steps.
101. Analyze the target content to obtain content summary information of the target content.
Optionally, in a possible implementation of this embodiment, the target content may include, but is not limited to, any one or more of the following: web pages, text, images, audio, video, etc.
Optionally, in a possible implementation manner of this embodiment, the content summary information may include, but is not limited to, any one or more of the following contained in the target content: text, images, audio, video, etc.
102. Display the content summary information.
103. In response to receiving confirmation information from the user on the content summary information, take the content summary information confirmed by the user as the user viewpoint, and send the user viewpoint and the resource description information of the target content to the location specified by the user.
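The flow of steps 101-103 can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the patent's implementation: a naive first-sentence extractor stands in for the real content analysis, and the "location" is just a returned payload.

```python
def analyze_content(target_content: str) -> str:
    """Step 101: derive content summary information from the target
    content. Placeholder: use the first sentence as the summary."""
    return target_content.split(".")[0].strip() + "."

def share(target_content: str, resource_description: str, confirm):
    """Steps 102-103: display the summary; on user confirmation, send
    the confirmed summary as the user viewpoint together with the
    resource description information of the target content."""
    summary = analyze_content(target_content)   # 101: analyze
    print(summary)                               # 102: display to the user
    confirmed = confirm(summary)                 # user confirms (or cancels)
    if confirmed is None:
        return None
    return {"user_viewpoint": confirmed,         # 103: payload to send
            "resource_description": resource_description}

payload = share("Automatic summaries speed up sharing. More text follows.",
                "https://example.com/article-1",
                confirm=lambda s: s)  # the user accepts the summary as-is
```

The `confirm` callback models the interactive confirmation of step 103; a callback that edits the string models the user modifying the summary before confirming it.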
Optionally, in a possible implementation of this embodiment, the user-specified location may include, but is not limited to, any of the following: an application platform, or a friend or group in an application. The application in the embodiment of the application may be any application, such as WeChat, QQ, Weibo, and the like. The application may be the application in which the target content is located, or a third-party application. The application platform may be a space provided by an application, such as a WeChat album, QQ Space, a Weibo space, and so forth.
The execution body of 101 to 103 may be part or all of an application located in the local terminal, or a functional unit such as a plug-in or a software development kit (Software Development Kit, SDK) provided in an application located in the local terminal, or a processing engine located in a server on the network side, or a distributed system located on the network side, for example, a processing engine or a distributed system in an intelligent home service platform on the network side; this embodiment is not particularly limited in this respect.
It will be appreciated that the application may be a native program (native app) installed on the terminal, or may also be a web page program (webApp) of a browser on the terminal, which is not limited in this embodiment.
Therefore, when a user shares content, summary information related to the shared content can be automatically generated as the user viewpoint, without repeatedly viewing the shared content to manually edit viewpoint content related to it, which can improve sharing efficiency and user experience. In addition, user viewpoint content related to the shared content increases the likelihood that other users will view the shared content and improves the interaction effect based on the shared content.
Optionally, in the content sharing method provided by another embodiment of the present application, before 101 or after 101, the method may further include:
receiving a content sharing request for the target content sent by a user;
and generating resource description information of the target content.
The resource description information of the target content may include the link address of the target content, and may further include related information such as a main title, a sub-title, a release date, and a release platform of the target content. For example, in one specific example, the resource description information of one target content may include: "… - Baidu Tieba".
Based on the resource description information of the target content, the user can read the full text of the target content by clicking the link address therein, and can know the content type, source and the like of the target content based on the further included related information.
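A small sketch of assembling such resource description information follows; the field names (`link_address`, `main_title`, and so on) are illustrative choices, not defined by the patent.

```python
def build_resource_description(link_address: str, **related) -> dict:
    """The link address is mandatory; related information such as the
    main title, sub-title, release date, and platform is optional and
    only included when present."""
    info = {"link_address": link_address}
    for key in ("main_title", "sub_title", "release_date", "platform"):
        if related.get(key):
            info[key] = related[key]
    return info

desc = build_resource_description("https://example.com/post/42",
                                  main_title="Example Post",
                                  platform="Example Platform")
```

A receiver can open `link_address` to read the full text and use the remaining fields to judge the content type and source before clicking.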
Alternatively, the content summary information displayed via 102 is editable information; the user may directly confirm the displayed content summary information as the user viewpoint, or may modify it and confirm the modified content summary information as the user viewpoint. In a specific application, when the content summary information is displayed, an interactive interface for the user to input a confirmation operation may also be displayed; the interactive interface may be, for example, a "confirm" button, a "√" (check) button, or a similar button for implementing a sharing function, which is not limited in this embodiment.
Optionally, in a possible implementation manner of this embodiment, after 102, the content summary information of the target content may be modified according to a modification instruction input by the user, where the modification instruction includes modification information.
Therefore, the content summary information of the target content can be modified according to a modification instruction input by the user, realizing optimization and adjustment of the automatically generated content summary information, so that the user's personalized viewpoint expression requirements can be met and user experience further improved.
Optionally, in a possible implementation manner of this embodiment, after 102, the content summary information of the target content may be deleted according to a deletion instruction input by the user.
Therefore, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, thereby meeting the personalized requirements of the user and further improving user experience.
Fig. 1B is a flowchart illustrating a content sharing method according to another embodiment of the present application. As shown in fig. 1B, in one possible implementation of the present embodiment, 103 may include:
1031. In response to receiving confirmation information from the user on the content summary information, store the content summary information confirmed by the user and the resource description information of the target content into a clipboard.
1032. In response to a sharing instruction sent by the user selecting a sharing object, obtain the content summary information confirmed by the user and the resource description information of the target content from the clipboard, take the content summary information confirmed by the user as the user viewpoint, and send the user viewpoint and the resource description information of the target content to the location corresponding to the sharing object.
This embodiment provides a specific implementation scheme for sending the user viewpoint and the resource description information of the target content to the location specified by the user: the content summary information confirmed by the user and the resource description information of the target content are first stored in the clipboard, and are then obtained from the clipboard based on the sharing instruction sent by the user and sent to the location corresponding to the sharing object.
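The 1031/1032 hand-off can be sketched as follows. This is a simplified illustration under stated assumptions: an in-memory object stands in for the system clipboard, and "sending" just returns the assembled message.

```python
class Clipboard:
    """In-memory stand-in for the system clipboard used in 1031/1032."""
    def __init__(self):
        self._data = None
    def write(self, data):
        self._data = data
    def read(self):
        return self._data

def on_confirm(clipboard, confirmed_summary, resource_description):
    # 1031: store the confirmed summary and the resource description.
    clipboard.write({"user_viewpoint": confirmed_summary,
                     "resource_description": resource_description})

def on_share(clipboard, sharing_object):
    # 1032: fetch the stored data back and address it to the location
    # corresponding to the selected sharing object.
    data = clipboard.read()
    return {"to": sharing_object, **data}

cb = Clipboard()
on_confirm(cb, "Great overview of battery design.", "https://example.com/a")
message = on_share(cb, "friend:alice")
```

Decoupling the two steps through the clipboard lets confirmation happen in one application and sharing in another (for example, a third-party application), which matches the scheme described above.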
Optionally, in a possible implementation manner of this embodiment, when the target content is text, in 101, content identification and information extraction may be performed on the text by using natural language processing (Natural Language Processing, NLP) technology, so as to generate content summary information of the text.
Because NLP combines computer science and artificial intelligence techniques, performing content identification and information extraction on the text based on NLP technology in this embodiment enables semantic understanding, context association, key information extraction, and the like of the text content, so that the generated content summary information can more accurately represent the key information of the text, improving sharing efficiency.
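The patent does not specify an algorithm, but a classic frequency-based extractive summarizer illustrates the kind of key-information extraction meant here; treat this as a toy sketch, not the claimed NLP technology.

```python
import re
from collections import Counter

def extractive_summary(text: str, max_sentences: int = 1) -> str:
    """Score each sentence by the corpus frequency of its words and
    keep the top-scoring sentences in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
                 if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    def score(i):
        return sum(freq[w] for w in re.findall(r"\w+", sentences[i].lower()))
    ranked = sorted(range(len(sentences)), key=score, reverse=True)
    keep = sorted(ranked[:max_sentences])  # restore document order
    return " ".join(sentences[i] for i in keep)

text = ("Lithium batteries power phones. Stable lithium batteries are hard "
        "to design. Engineers keep improving them.")
summary = extractive_summary(text)
```

A production system would add semantic understanding and context association (for example, via a trained model), but the pipeline shape — segment, score, select — is the same.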
Optionally, in one possible implementation manner of this embodiment, when the target content is a web page, in 101, content identification is performed on the web page, and content summary information of the web page is obtained based on a content identification result.
For example, the text in the web page may be identified and extracted by using NLP technology to generate the content summary information of the web page. Alternatively, the content in the web page may first be classified: content identification and information extraction are performed on the text in the web page to obtain first information (text information), and key content extraction is performed on the non-text content in the web page (such as pictures, videos, links, and two-dimensional codes) to obtain second information (non-text information); the content summary information of the web page is then obtained from the first information and the second information. In this way, the generated content summary information can include both text information and non-text key content (such as a poster or two-dimensional code), making the content summary information richer, further improving user experience, and increasing the likelihood that other users view the shared content and the interaction effect based on the shared content.
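Separating a page into first (text) and second (non-text) information can be sketched with the standard-library HTML parser; the tag choices below are an assumption about which non-text elements count as key content.

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Split a web page into first information (text) and second
    information (non-text key content, here image/video sources)."""
    def __init__(self):
        super().__init__()
        self.text_parts, self.non_text = [], []
    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())
    def handle_starttag(self, tag, attrs):
        if tag in ("img", "video"):          # treated as key non-text content
            src = dict(attrs).get("src")
            if src:
                self.non_text.append((tag, src))

page = ('<h1>Battery news</h1><p>A stable design emerges.</p>'
        '<img src="poster.png">')
scanner = PageScanner()
scanner.feed(page)
first_information = " ".join(scanner.text_parts)  # text information
second_information = scanner.non_text             # non-text information
```

The first information would then feed the NLP summarizer, while the second information (a poster image, a two-dimensional code) is attached to the summary as-is.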
Optionally, in a possible implementation manner of this embodiment, when the target content is audio, in 101, content identification may be performed on the audio, and content summary information of the audio is obtained based on a content identification result.
For example, the content summary information of the audio may be generated by performing content identification and information extraction on the text attached to the audio (e.g., song names, singers, lyrics, etc.) using NLP technology. Alternatively, an audio-to-text conversion technology may be adopted to convert the audio into text, and content identification and information extraction performed on the converted text using NLP technology to generate the content summary information of the audio. Alternatively, the audio may be converted into text, and content identification and information extraction performed on both the converted text and the text attached to the audio using NLP technology to generate the content summary information of the audio.
Therefore, when the audio has no attached text content, or its text content is not rich enough, an audio-to-text conversion technology can be adopted to convert the audio into text before generating the content summary information of the audio, so that the content summary information is richer, further improving the sharing and interaction effects.
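The third variant — combining attached text with a transcript — can be sketched as below. The transcriber is a labeled placeholder (a real system would call a speech-recognition service), and "summarization" is reduced to truncation for brevity.

```python
def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for an audio-to-text engine; a real system would
    call a speech-recognition service here."""
    return "transcript of the spoken audio content"

def audio_summary(attached_text: str, audio_bytes: bytes,
                  max_len: int = 120) -> str:
    """Combine text already attached to the audio (song name, artist,
    lyrics) with the transcript, then summarize (here: truncate)."""
    transcript = transcribe(audio_bytes)
    combined = f"{attached_text}. {transcript}" if attached_text else transcript
    return combined[:max_len]

summary = audio_summary("Song: Example Tune by A. Singer", b"\x00\x01")
```

When `attached_text` is empty, the summary falls back to the transcript alone, matching the case where the audio carries no text content.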
Alternatively, in one possible implementation manner of this embodiment, when the target content is an image, in 101, content identification may be performed on the image, and content summary information of the image is obtained based on a content identification result.
For example, the image may be subjected to feature extraction and classification (e.g., animal, human, vehicle, building, flowers and trees, etc.) using image processing technology, and the content summary information of the image obtained based on the classification result.
When obtaining the content summary information of the image based on the classification result, text summary information of the image may be obtained based on the classification result, at least a partial region image of the image selected based on the classification result, and the content summary information then obtained from the text summary information of the image and the selected region image.
Therefore, the content summary information shared as the user viewpoint can include both text and key region images from the image, making the content summary information richer and further improving the sharing and interaction effects.
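A toy version of "classify, then emit text plus a key region" follows; the brightness-based classifier and the fixed region are stand-ins for a real feature-extraction model, labeled as such.

```python
def classify_image(pixels):
    """Stand-in classifier: a real system would run feature extraction
    and a trained model; here mean brightness picks the label, and a
    fixed (top, left, bottom, right) box marks the key region."""
    flat = [v for row in pixels for v in row]
    label = "building" if sum(flat) / len(flat) > 127 else "animal"
    return label, (0, 0, 2, 2)

def image_summary(pixels):
    """Combine text summary information with a cropped region image."""
    label, (t, l, b, r) = classify_image(pixels)
    return {"text": f"An image that appears to show a {label}.",
            "region_image": [row[l:r] for row in pixels[t:b]]}

bright = [[200, 200, 200, 200] for _ in range(4)]  # 4x4 grayscale image
summary = image_summary(bright)
```

The returned dictionary mirrors the composite summary described above: a sentence for the viewpoint text plus a cropped region image to attach alongside it.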
Optionally, in a possible implementation manner of this embodiment, when the target content is a video, in 101, content identification may be performed on the video, and content summary information of the video is obtained based on a content identification result.
For example, multiple frames of images can be selected from the video, then each of these frames is taken as a current image, feature extraction and classification are performed on the current image using image processing technology to obtain a content identification result of the current image, and the content summary information of the video is then obtained based on the content identification results of the multiple frames of images.
When selecting multiple frames of images from the video, the video may be segmented to obtain a plurality of segmented videos, and a preset number of images selected from each of the segmented videos to obtain the multiple frames of images.
Alternatively, the video may be segmented to obtain a plurality of segmented videos, and a preset number of images selected from any one or more of the segmented videos to obtain the multiple frames of images.
Alternatively, the multiple frames of images may be obtained by randomly selecting images from the video.
In a specific implementation process, the video may be segmented by, for example, average segmentation or random segmentation, or may be segmented according to scenes, with images of the same scene grouped into one segmented video, or segmented in other manners, which is not limited in this embodiment.
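Average segmentation with one representative frame per segment can be sketched briefly; frames are modeled as a plain list, and picking the middle frame of each segment is an illustrative choice, not mandated by the patent.

```python
def segment_video(frames, num_segments):
    """Average segmentation: split the frame sequence into roughly
    equal-length segmented videos."""
    size = max(1, len(frames) // num_segments)
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def select_frames(frames, num_segments=3):
    """Pick a preset number of frames (here: the middle frame) from
    each segmented video."""
    return [seg[len(seg) // 2] for seg in segment_video(frames, num_segments)]

frames = list(range(12))        # frame indices standing in for images
chosen = select_frames(frames)  # one representative frame per segment
```

Scene-based segmentation would replace `segment_video` with a function that cuts at scene boundaries; the downstream selection logic stays the same.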
When obtaining the content summary information of the video based on the content identification results of the multiple frames of images, text summary information of the multiple frames may be obtained based on those results, and at least one frame image, or a region image within at least one frame, may be selected based on those results; the content summary information is then obtained from the text summary information of the multiple frames together with the selected frame image(s) or region image(s).
Fig. 1C-1F are exemplary diagrams of target content and content summary information in an embodiment of the application.
As shown in figs. 1C-1E, for a video titled "Design a stable lithium battery in challenge!", the three frames of images shown in figs. 1C-1E are the multiple frames of images selected according to the embodiment of the application, and the content summary information of the video generated according to the embodiment of the application is: "We carefully choose what life service to use, as that will decide … our future!" The user chooses to directly use this content summary information as the user viewpoint; fig. 1F is an example of the content display effect of sharing the user viewpoint and the resource description information of the video to the WeChat friend circle according to the embodiment of the present application.
According to the technical scheme provided by the application, the content summary information of the target content is obtained by analyzing the target content and is displayed, so that, in response to receiving the user's confirmation of the content summary information, the content summary information confirmed by the user is taken as the user viewpoint and sent together with the resource description information of the target content to the location specified by the user. Summary information related to the shared content can thus be automatically generated as the user viewpoint when the user shares content, without repeatedly viewing the shared content to manually edit the viewpoint content, which improves sharing efficiency and user experience; in addition, user viewpoint content related to the shared content increases the likelihood that other users view the shared content and improves the interaction effect based on the shared content.
In addition, after the content summary information and the resource description information of the target content are displayed using the technical scheme provided by the application, the content summary information of the target content can be modified according to a modification instruction input by the user, realizing optimization and adjustment of the automatically generated content summary information, so that the user's personalized viewpoint expression requirements can be met and user experience further improved.
In addition, by adopting the technical scheme provided by the application, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, thereby meeting the personalized requirements of the user and further improving user experience.
In addition, by adopting the technical scheme provided by the application, the target content may include any one or more of the following: web pages, text, images, audio, and video. Content summary information can be automatically generated for various types of content and shared together as the viewpoint of the sharing user, improving sharing efficiency and user experience, and increasing the likelihood that other users view the shared content and the interaction effect based on the shared content.
In addition, by adopting the technical scheme provided by the application, the automatically generated content summary information may include any one or more of the following: text, images, audio, and video, which enriches the expression of the user viewpoint and further improves user experience.
Fig. 2A is a schematic structural diagram of a content sharing device according to an embodiment of the application. As shown in fig. 2A, the content sharing apparatus 200 of this embodiment may include a content analysis unit 201, an interaction unit 202, and a sharing unit 203. The content analysis unit 201 is configured to analyze target content to obtain content summary information of the target content; the interaction unit 202 is configured to display the content summary information; and the sharing unit 203 is configured to, in response to receiving the user's confirmation of the content summary information, take the content summary information confirmed by the user as the user viewpoint and send the user viewpoint and the resource description information of the target content to the location specified by the user.
It should be noted that part or all of the execution body of the content sharing apparatus provided in this embodiment may be an application located at a local terminal, or may be a functional unit such as a plug-in or a software development kit (Software Development Kit, SDK) disposed in an application located at the local terminal, or may be a processing engine located in a server on a network side, or may be a distributed system located on the network side, for example, a processing engine or a distributed system in a test platform on the network side, which is not limited in this embodiment.
It will be appreciated that the application may be a native program (native app) installed on the terminal, or may also be a web page program (webApp) of a browser on the terminal, which is not limited in this embodiment.
Optionally, in a possible implementation manner of this embodiment, the interaction unit 202 may be further configured to receive a content sharing request sent by the user for the target content. Fig. 2B is a schematic structural diagram of a content sharing device according to another embodiment of the present application; as shown in fig. 2B, the content sharing device 200 of this embodiment may further include a generating unit 204 configured to generate the resource description information of the target content.
Optionally, in a possible implementation manner of this embodiment, the interaction unit 202 may be further configured to modify the content summary information of the target content according to a modification instruction input by the user, where the modification instruction includes modification information.
Optionally, in a possible implementation manner of this embodiment, the interaction unit 202 may be further configured to delete the content summary information of the target content according to a deletion instruction input by the user.
In a specific implementation process, the sharing unit 203 is specifically configured to: in response to receiving the user's confirmation of the content summary information, store the content summary information confirmed by the user and the resource description information of the target content into a clipboard; and, in response to a sharing instruction sent by the user selecting a sharing object, obtain the content summary information confirmed by the user and the resource description information of the target content from the clipboard, take the content summary information confirmed by the user as the user viewpoint, and send the user viewpoint and the resource description information of the target content to the location corresponding to the sharing object.
Optionally, in a possible implementation of this embodiment, the target content may include, but is not limited to, any one or more of the following: web pages, text, images, audio, video, etc.; alternatively, the content summary information may include, but is not limited to, any one or more of the following: text, images, audio, video, etc.
Optionally, in one possible implementation manner of this embodiment, the target content is text, and the content analysis unit 201 is configured to perform content identification and information extraction on the text by using NLP technology, so as to generate content summary information of the text; or, the target content is an image, and the content analysis unit 201 is configured to perform content recognition on the image, and obtain content summary information of the image based on a content recognition result; or, the target content is a web page, and the content analysis unit 201 is configured to identify content of the web page, and obtain content summary information of the web page based on a content identification result; or, the target content is audio, the content analysis unit 201 is configured to perform content recognition on the audio, and obtain content summary information of the audio based on a content recognition result; or, the target content is a video, and the content analysis unit 201 is configured to perform content identification on the video, and obtain content summary information of the video based on a content identification result.
In a specific implementation process, the target content is a web page, and the content analysis unit 201 is specifically configured to: performing content identification and information extraction on texts in the webpage by using an NLP technology to generate content abstract information of the webpage; or classifying the content in the webpage, identifying the content and extracting the information from the text in the webpage to obtain first information, and extracting the non-text in the webpage to obtain second information; and obtaining the content abstract information of the webpage from the first information and the second information.
In a specific implementation, when the target content is audio, the content analysis unit 201 is specifically configured to: perform content identification and information extraction on the text attached to the audio using NLP technology to generate the content summary information of the audio; or convert the audio into text, and perform content identification and information extraction on the converted text using NLP technology to generate the content summary information of the audio; or convert the audio into text, and perform content identification and information extraction on both the converted text and the text attached to the audio using NLP technology to generate the content summary information of the audio.
In a specific implementation process, when the target content is an image, the content analysis unit 201 is specifically configured to perform feature extraction and classification on the image using image processing technology, and obtain the content summary information of the image based on the classification result.
For example, the content analysis unit 201 is further specifically configured to: perform feature extraction and classification on the image using image processing technology; obtain text summary information of the image based on the classification result, and select at least a partial region image of the image based on the classification result; and obtain the content summary information from the text summary information of the image and the at least partial region image.
In a specific implementation, when the target content is a video, the content analysis unit 201 is specifically configured to: select multiple frames of images from the video; take each of these frames as a current image, and perform feature extraction and classification on the current image using image processing technology to obtain a content identification result of the current image; and obtain the content summary information of the video based on the content identification results of the multiple frames of images.
For example, when the content analysis unit 201 selects a plurality of frames of images from the video, the content analysis unit is specifically configured to: segmenting the video to obtain a plurality of segmented videos; selecting a preset number of images from each segmented video in the plurality of segmented videos respectively to obtain the multi-frame images; or segmenting the video to obtain a plurality of segmented videos; selecting a preset number of images from any one or more segmented videos in the segmented videos respectively to obtain the multi-frame images; or randomly selecting images from the video to obtain the multi-frame images.
Specifically, the content analysis unit 201 is specifically configured to: select multiple frames of images from the video; take each of these frames as a current image, and perform feature extraction and classification on the current image using image processing technology to obtain a content identification result of the current image; obtain text summary information of the multiple frames based on their content identification results, and select at least one frame image, or a region image within at least one frame, based on those results; and obtain the content summary information from the text summary information of the multiple frames and the selected frame image(s) or region image(s).
Optionally, in a possible implementation of this embodiment, the user-specified location may include, but is not limited to, any of the following: an application platform, friends or groups in the application, and so on.
It should be noted that, the method in the embodiment corresponding to fig. 1A-1B may be implemented by the content sharing device provided in this embodiment. For detailed description, reference may be made to the relevant content in the corresponding embodiment of fig. 1A-1B, which is not repeated here.
According to the technical scheme provided by the application, the content summary information of the target content is obtained by analyzing the target content and is displayed, so that, in response to receiving the user's confirmation of the content summary information, the content summary information confirmed by the user is taken as the user viewpoint and sent together with the resource description information of the target content to the location specified by the user. Summary information related to the shared content can thus be automatically generated as the user viewpoint when the user shares content, without repeatedly viewing the shared content to manually edit the viewpoint content, which improves sharing efficiency and user experience; in addition, user viewpoint content related to the shared content increases the likelihood that other users view the shared content and improves the interaction effect based on the shared content.
In addition, after the content summary information and the resource description information of the target content are displayed using the technical scheme provided by the application, the content summary information of the target content can be modified according to a modification instruction input by the user, realizing optimization and adjustment of the automatically generated content summary information, so that the user's personalized viewpoint expression requirements can be met and user experience further improved.
In addition, by adopting the technical scheme provided by the application, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, thereby meeting the personalized requirements of the user and further improving user experience.
In addition, by adopting the technical scheme provided by the application, the target content may include any one or more of the following: web pages, text, images, audio, and video. Content summary information can be automatically generated for various types of content and shared together as the viewpoint of the sharing user, improving sharing efficiency and user experience, and increasing the likelihood that other users view the shared content and the interaction effect based on the shared content.
In addition, by adopting the technical scheme provided by the application, the automatically generated content summary information may include any one or more of the following: text, images, audio, and video, which enriches the expression of the user viewpoint and further improves user experience.
Other effects of the above aspects or possible implementations will be described below in connection with specific embodiments.
According to an embodiment of the present application, there is also provided an electronic device and a non-transitory computer-readable storage medium storing computer instructions.
Fig. 3 is a schematic diagram of an electronic device for implementing the content sharing method according to an embodiment of the present application. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 3, the electronic device includes: one or more processors 301, a memory 302, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a graphical user interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, together with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 3, one processor 301 is taken as an example.
Memory 302 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the content sharing method provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the content sharing method provided by the present application.
The memory 302, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and units, such as program instructions/units corresponding to the content sharing method in the embodiment of the present application (e.g., the acquisition unit 201, the association unit 202, and the control unit 203 shown in fig. 2). The processor 301 executes the non-transitory software programs, instructions, and units stored in the memory 302 to perform various functional applications and data processing of the server, that is, to implement the content sharing method in the above method embodiment.
Memory 302 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the electronic device implementing the content sharing method provided by the embodiment of the present application, and the like. In addition, memory 302 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 302 may optionally include memory remotely located with respect to processor 301, which may be connected via a network to an electronic device implementing the content sharing methods provided by embodiments of the present application. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the content sharing method may further include: an input device 303 and an output device 304. The processor 301, memory 302, input device 303, and output device 304 may be connected by a bus or other means; connection by a bus is taken as an example in fig. 3.
The input device 303 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the content sharing method provided by embodiments of the present application, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, one or more mouse buttons, a track ball, a joystick, or the like. The output device 304 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display apparatus may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display apparatus may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (24)

1. A content sharing method, comprising:
analyzing target content to obtain content summary information of the target content; wherein the target content includes at least one of a web page, text, an image, audio, and video, and the content summary information includes at least one of text, an image, audio, and video included in the target content; when the target content is a web page, the analyzing the target content to obtain content summary information of the target content includes: classifying the content in the web page, performing content identification and information extraction on text in the web page to obtain first information, and performing key content extraction on non-text content in the web page to obtain second information; and obtaining the content summary information of the web page from the first information and the second information;
Displaying the content abstract information;
and in response to receiving confirmation information of the user on the content summary information, taking the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and resource description information of the target content to a location specified by the user.
2. The method of claim 1, wherein after displaying the content summary information, further comprising:
and modifying the content summary information of the target content according to a modification instruction received from the user, wherein the modification instruction comprises modification information.
3. The method of claim 1, wherein after displaying the content summary information, further comprising:
and deleting the content summary information of the target content according to a deletion instruction received from the user.
4. The method according to claim 1, wherein, in response to receiving the confirmation information of the user on the content summary information, taking the content summary information confirmed by the user as a user viewpoint and sending the user viewpoint and the resource description information of the target content to the location specified by the user comprises:
storing the content summary information confirmed by the user and the resource description information of the target content into a clipboard in response to receiving the confirmation information of the user on the content summary information;
And responding to a sharing instruction sent by a user selecting a sharing object, acquiring content abstract information confirmed by the user and resource description information of the target content from the clipboard, taking the content abstract information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a position corresponding to the sharing object.
5. The method according to any one of claims 1-4, wherein the target content is text, and the analyzing the target content to obtain content summary information of the target content comprises: performing content identification and information extraction on the text by using a natural language processing technology to generate content summary information of the text; or,

the target content is an image, and the analyzing the target content to obtain content summary information of the target content comprises: performing content recognition on the image, and obtaining content summary information of the image based on a content recognition result; or,

the target content is audio, and the analyzing the target content to obtain content summary information of the target content comprises: performing content recognition on the audio, and obtaining content summary information of the audio based on a content recognition result; or,

the target content is video, and the analyzing the target content to obtain content summary information of the target content comprises: performing content recognition on the video, and obtaining content summary information of the video based on a content recognition result.
6. The method of claim 5, wherein the performing content recognition on the audio and obtaining content summary information of the audio based on the content recognition result comprises:

performing content identification and information extraction on text of the audio by using a natural language processing technology to generate content summary information of the audio; or,

converting the audio into text, and performing content identification and information extraction on the converted text by using a natural language processing technology to generate content summary information of the audio; or,

converting the audio into text, and performing content identification and information extraction on both the converted text and text of the audio by using a natural language processing technology to generate content summary information of the audio.
7. The method of claim 5, wherein the performing content recognition on the image, and obtaining content summary information of the image based on the content recognition result, comprises:
And extracting and classifying the characteristics of the image by utilizing an image processing technology, and obtaining the content abstract information of the image based on a classification result.
8. The method of claim 7, wherein the obtaining content digest information of the image based on the classification result comprises:
obtaining text abstract information of the image based on the classification result, and selecting at least partial area image of the image based on the classification result;
and obtaining the content abstract information based on the text abstract information of the image and the at least partial area image.
9. The method of claim 5, wherein the performing content recognition on the video, and obtaining content summary information of the video based on the content recognition result, comprises:
selecting multi-frame images from the video;
taking each frame of image in the multi-frame images as a current image, and carrying out feature extraction and classification on the current image by utilizing an image processing technology to obtain a content identification result of the current image;
and obtaining the content abstract information of the video based on the content identification result of the multi-frame image.
10. The method of claim 9, wherein selecting a plurality of frames of images from the video comprises:
segmenting the video to obtain a plurality of segmented videos, and selecting a preset number of images from each of the plurality of segmented videos to obtain the multi-frame images; or,

segmenting the video to obtain a plurality of segmented videos, and selecting a preset number of images from any one or more of the plurality of segmented videos to obtain the multi-frame images; or,
and randomly selecting images from the video to obtain the multi-frame images.
11. The method according to claim 9, wherein the obtaining content digest information of the video based on the content recognition result of the multi-frame image includes:
obtaining text abstract information of the multi-frame images based on the content identification result of the multi-frame images, and selecting at least one frame of images or region images in at least one frame of images based on the content identification result of the multi-frame images;
and obtaining the content abstract information based on the text abstract information of the multi-frame image and the at least one frame image or the regional image in the at least one frame image.
12. A content sharing apparatus, comprising:
The content analysis unit is used for analyzing target content to obtain content summary information of the target content; wherein the target content includes at least one of a web page, text, an image, audio, and video, and the content summary information includes at least one of text, an image, audio, and video included in the target content; when the target content is a web page, the content analysis unit is used for classifying the content in the web page, performing content identification and information extraction on text in the web page to obtain first information, and performing key content extraction on non-text content in the web page to obtain second information; and obtaining the content summary information of the web page from the first information and the second information;
the interaction unit is used for displaying the content abstract information;
and the sharing unit is used for responding to the received confirmation information of the user on the content abstract information, taking the content abstract information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to the position appointed by the user.
13. The apparatus of claim 12, wherein the interaction unit is further configured to modify the content summary information of the target content according to a modification instruction received from the user, the modification instruction comprising modification information.
14. The apparatus of claim 12, wherein the interaction unit is further configured to delete the content summary information of the target content according to a deletion instruction received from the user.
15. The apparatus of claim 12, wherein the sharing unit is configured to
Storing the content summary information confirmed by the user and the resource description information of the target content into a clipboard in response to receiving the confirmation information of the user on the content summary information;
and responding to a sharing instruction sent by a user selecting a sharing object, acquiring content abstract information confirmed by the user and resource description information of the target content from the clipboard, taking the content abstract information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a position corresponding to the sharing object.
16. The apparatus according to any one of claims 12-15, wherein the target content is text, and the content analysis unit is used for performing content identification and information extraction on the text by using a natural language processing technology to generate content summary information of the text; or,

the target content is an image, and the content analysis unit is used for performing content recognition on the image and obtaining content summary information of the image based on a content recognition result; or,

the target content is audio, and the content analysis unit is used for performing content recognition on the audio and obtaining content summary information of the audio based on a content recognition result; or,
the target content is a video, and the content analysis unit is used for carrying out content identification on the video and obtaining content abstract information of the video based on a content identification result.
17. The apparatus according to claim 16, wherein the target content is audio, and the content analysis unit is specifically used for:
performing content identification and information extraction on text of the audio by using a natural language processing technology to generate content summary information of the audio; or,

converting the audio into text, and performing content identification and information extraction on the converted text by using a natural language processing technology to generate content summary information of the audio; or,
and converting the audio into text, and carrying out content identification and information extraction on the converted text and the text on the audio by using a natural language processing technology to generate content abstract information of the audio.
18. The apparatus according to claim 16, wherein the target content is an image, and the content analysis unit is specifically used for:
And extracting and classifying the characteristics of the image by utilizing an image processing technology, and obtaining the content abstract information of the image based on a classification result.
19. The apparatus according to claim 18, wherein the content analysis unit is specifically used for:
Extracting and classifying the characteristics of the image by utilizing an image processing technology;
obtaining text abstract information of the image based on the classification result, and selecting at least partial area image of the image based on the classification result;
and obtaining the content abstract information based on the text abstract information of the image and the at least partial area image.
20. The apparatus according to claim 16, wherein the target content is video, and the content analysis unit is specifically used for:
Selecting multi-frame images from the video;
taking each frame of image in the multi-frame images as a current image, and carrying out feature extraction and classification on the current image by utilizing an image processing technology to obtain a content identification result of the current image;
And obtaining the content abstract information of the video based on the content identification result of the multi-frame image.
21. The apparatus according to claim 20, wherein, when selecting the multi-frame images from the video, the content analysis unit is used for:
segmenting the video to obtain a plurality of segmented videos, and selecting a preset number of images from each of the plurality of segmented videos to obtain the multi-frame images; or,

segmenting the video to obtain a plurality of segmented videos, and selecting a preset number of images from any one or more of the plurality of segmented videos to obtain the multi-frame images; or,
and randomly selecting images from the video to obtain the multi-frame images.
22. The apparatus according to claim 21, wherein the content analysis unit is specifically used for:
Selecting multi-frame images from the video;
taking each frame of image in the multi-frame images as a current image, and carrying out feature extraction and classification on the current image by utilizing an image processing technology to obtain a content identification result of the current image;
obtaining text abstract information of the multi-frame images based on the content identification result of the multi-frame images, and selecting at least one frame of images or region images in at least one frame of images based on the content identification result of the multi-frame images;
And obtaining the content abstract information based on the text abstract information of the multi-frame image and the at least one frame image or the regional image in the at least one frame image.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-11.
CN201911212878.0A 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium Active CN111158924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212878.0A CN111158924B (en) 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911212878.0A CN111158924B (en) 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111158924A CN111158924A (en) 2020-05-15
CN111158924B true CN111158924B (en) 2023-09-22

Family

ID=70556294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212878.0A Active CN111158924B (en) 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111158924B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694984B (en) * 2020-06-12 2023-06-20 百度在线网络技术(北京)有限公司 Video searching method, device, electronic equipment and readable storage medium
CN113157153A (en) * 2021-02-07 2021-07-23 北京字节跳动网络技术有限公司 Content sharing method and device, electronic equipment and computer readable storage medium
CN115119069A (en) * 2021-03-17 2022-09-27 阿里巴巴新加坡控股有限公司 Multimedia content processing method, electronic device and computer storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452470A (en) * 2007-10-18 2009-06-10 石忠民 Method and apparatus for a web search engine generating summary-style search results
CN102404107A (en) * 2010-09-13 2012-04-04 腾讯科技(深圳)有限公司 Method, device, transmitting end and receiving end all capable of guaranteeing safety of inputted content
CN102567532A (en) * 2011-12-30 2012-07-11 奇智软件(北京)有限公司 Information distribution method and information distribution device
CN103207892A (en) * 2013-03-12 2013-07-17 百度在线网络技术(北京)有限公司 Method and device for sharing document through network
CN104731959A (en) * 2015-04-03 2015-06-24 北京威扬科技有限公司 Video abstraction generating method, device and system based on text webpage content
CN106331328A (en) * 2016-08-17 2017-01-11 北京小米移动软件有限公司 Information prompting method and device
CN107451139A (en) * 2016-05-30 2017-12-08 北京三星通信技术研究有限公司 File resource methods of exhibiting, device and corresponding smart machine
CN107831974A (en) * 2017-11-30 2018-03-23 腾讯科技(深圳)有限公司 information sharing method, device and storage medium
CN108133707A (en) * 2017-11-30 2018-06-08 百度在线网络技术(北京)有限公司 A kind of content share method and system
CN108363749A (en) * 2018-01-29 2018-08-03 上海星佑网络科技有限公司 Method and apparatus for information processing
CN108520014A (en) * 2018-03-21 2018-09-11 广东欧珀移动通信有限公司 Information sharing method, device, mobile terminal and computer-readable medium
CN110175323A (en) * 2018-05-31 2019-08-27 腾讯科技(深圳)有限公司 Method and device for generating message abstract

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8836771B2 (en) * 2011-04-26 2014-09-16 Echostar Technologies L.L.C. Apparatus, systems and methods for shared viewing experience using head mounted displays

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Easily Mastering Yahoo Favorites; Wang Zhijun; Computer Knowledge and Technology (Academic Exchange); 2006-10-26 (Issue 28); pp. 109-110 *

Also Published As

Publication number Publication date
CN111158924A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111158924B (en) Content sharing method and device, electronic equipment and readable storage medium
US10134194B2 (en) Marking up scenes using a wearable augmented reality device
CN111860167B (en) Face fusion model acquisition method, face fusion model acquisition device and storage medium
CN114787813A (en) Context sensitive avatar captions
US20110239148A1 (en) Method and Apparatus for Indicating Historical Analysis Chronicle Information
WO2020187012A1 (en) Communication method, apparatus and device, and group creation method, apparatus and device
CN111680517B (en) Method, apparatus, device and storage medium for training model
JP6986187B2 (en) Person identification methods, devices, electronic devices, storage media, and programs
CN112752121B (en) Video cover generation method and device
CN110557699B (en) Intelligent sound box interaction method, device, equipment and storage medium
CN111565143B (en) Instant messaging method, equipment and computer readable storage medium
CN113746874B (en) Voice package recommendation method, device, equipment and storage medium
WO2016000536A1 (en) Method for activating application program, user terminal and server
CN112215924A (en) Picture comment processing method and device, electronic equipment and storage medium
CN112527115A (en) User image generation method, related device and computer program product
US11048387B1 (en) Systems and methods for managing media feed timelines
CN110909241B (en) Information recommendation method, user identification recommendation method, device and equipment
CN110109594B (en) Drawing data sharing method and device, storage medium and equipment
CN111353070B (en) Video title processing method and device, electronic equipment and readable storage medium
US20210271725A1 (en) Systems and methods for managing media feed timelines
US20180349932A1 (en) Methods and systems for determining persona of participants by the participant use of a software product
CN112843681A (en) Virtual scene control method and device, electronic equipment and storage medium
CN105100435A (en) Application method and device of mobile communication
CN103490982A (en) Message processing method and device
CN114221923B (en) Message processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant