CN111158924A - Content sharing method and device, electronic equipment and readable storage medium - Google Patents

Content sharing method and device, electronic equipment and readable storage medium

Info

Publication number
CN111158924A
CN111158924A
Authority
CN
China
Prior art keywords
content
information
image
user
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911212878.0A
Other languages
Chinese (zh)
Other versions
CN111158924B (en)
Inventor
刘俊启
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911212878.0A priority Critical patent/CN111158924B/en
Publication of CN111158924A publication Critical patent/CN111158924A/en
Application granted granted Critical
Publication of CN111158924B publication Critical patent/CN111158924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 9/00 Arrangements for program control, e.g. control units
                    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
                        • G06F 9/46 Multiprogramming arrangements
                            • G06F 9/54 Interprogram communication
                                • G06F 9/543 User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]
                • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/10 File systems; File servers
                        • G06F 16/17 Details of further file system functions
                            • G06F 16/176 Support for shared access to files; File sharing support
                    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
                        • G06F 16/34 Browsing; Visualisation therefor
                            • G06F 16/345 Summarisation for human users
                        • G06F 16/35 Clustering; Classification
                    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
                        • G06F 16/53 Querying
                            • G06F 16/538 Presentation of query results
                        • G06F 16/55 Clustering; Classification
                    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
                        • G06F 16/63 Querying
                            • G06F 16/638 Presentation of query results
                        • G06F 16/65 Clustering; Classification
                    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
                        • G06F 16/73 Querying
                            • G06F 16/738 Presentation of query results
                                • G06F 16/739 Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
                        • G06F 16/75 Clustering; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a content sharing method and device, an electronic device, and a readable storage medium, and relates to content identification technology. The method analyzes target content to obtain its content summary information, displays that summary information, and, upon receiving the user's confirmation of it, treats the user-confirmed summary as the user's viewpoint and sends the viewpoint together with the resource description information of the target content to a location specified by the user. The user therefore does not need to repeatedly review the shared content in order to manually compose viewpoint text about it, which improves sharing efficiency and user experience. In addition, the viewpoint content attached to the shared content increases the likelihood that other users will view the shared content and improves interaction based on it.

Description

Content sharing method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to internet technologies, and in particular, to a content sharing method and apparatus, an electronic device, and a readable storage medium.
Background
With the popularization of the mobile internet, browsing network content on a mobile phone has become the norm. At the same time, the types of applications (APPs) on mobile phones keep increasing, and social applications are among them. Social applications are used to exchange information, including video, pictures, text, and voice, between a user and friends, and between a user and groups. A user may share the currently browsed content within a social application (for example, post it to the application's internal space, or send it to a friend or a group within the application), or share it to a third-party application (for example, post it to the third-party application's internal space, or send it to a friend or a group within the third-party application).
Currently, when a user shares browsed content within a social application or to a third-party application, the sharing link generally carries only a title. If the user wants to express a viewpoint about the shared content, the user has to repeatedly review that content in order to manually compose the corresponding viewpoint text, so sharing efficiency is low, the sharing effect is poor, and the user experience suffers.
Disclosure of Invention
Aspects of the present disclosure provide a content sharing method and apparatus, an electronic device, and a readable storage medium, so as to improve sharing efficiency and improve user experience.
One aspect of the present application provides a content sharing method, including:
analyzing the target content to obtain content summary information of the target content;
displaying the content summary information;
and in response to receiving confirmation information of the user on the content summary information, treating the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a location specified by the user.
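The three steps above (analyze, display, confirm and send) can be sketched in Python as follows. Everything here is an illustrative stand-in, not part of the claimed method: `analyze` fakes a summarizer by taking the first sentence, and `confirm` is an injected callback representing the user's confirmation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ShareRequest:
    user_viewpoint: str        # the user-confirmed content summary
    resource_description: str  # e.g. title and link of the target content

def analyze(target_content: str) -> str:
    """Hypothetical analyzer: the first sentence stands in for a real
    summarization model."""
    return target_content.split(". ")[0].strip()

def share_content(target_content: str, resource_description: str,
                  confirm: Callable[[str], bool]) -> Optional[ShareRequest]:
    summary = analyze(target_content)   # step 1: analyze the target content
    print(f"Summary: {summary}")        # step 2: display the summary
    if confirm(summary):                # step 3: wait for user confirmation
        # the confirmed summary becomes the user viewpoint to be sent
        return ShareRequest(summary, resource_description)
    return None
```

If the user declines, nothing is sent; a real implementation would also handle the modify and delete instructions described below.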
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where after displaying the content summary information, the method further includes:
and modifying the content summary information of the target content according to a modification instruction input by the user, where the modification instruction includes modification information.
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where after displaying the content summary information and the resource description information of the target content, the method further includes:
and deleting the content summary information of the target content according to a deletion instruction input by the user.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where, in response to receiving confirmation information of the user on the content summary information, treating the content summary information confirmed by the user as a user viewpoint and sending the user viewpoint and the resource description information to a location specified by the user includes:
in response to receiving confirmation information of the user on the content summary information, storing the content summary information confirmed by the user and the resource description information of the target content in a clipboard;
and in response to receiving a sharing instruction sent by the user selecting a sharing object, acquiring the content summary information confirmed by the user and the resource description information of the target content from the clipboard, treating the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a location corresponding to the sharing object.
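A minimal sketch of this clipboard-mediated handoff. A plain dict stands in for the system clipboard (a real app would use the platform clipboard API), and all function names are illustrative assumptions:

```python
# A dict stands in for the system clipboard in this sketch.
clipboard = {}

def on_user_confirm(summary: str, resource_description: str) -> None:
    """On confirmation, stash the confirmed summary and the resource
    description of the target content in the clipboard."""
    clipboard["viewpoint"] = summary
    clipboard["resource"] = resource_description

def on_share_instruction(share_target: str) -> str:
    """On a sharing instruction, read both items back from the clipboard
    and send them to the location of the chosen sharing object."""
    viewpoint = clipboard["viewpoint"]
    resource = clipboard["resource"]
    return f"to={share_target}: {viewpoint} ({resource})"
```

The clipboard decouples confirmation from sharing, so the share target (a friend, a group, a third-party app) can be chosen after the viewpoint is fixed.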
The above-described aspects and any possible implementations further provide an implementation in which the target content includes any one or more of: web pages, text, images, audio, video; or,
the content summary information comprises any one or more of the following items: text, image, audio, video.
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the target content is a text, and the analyzing the target content to obtain the content summary information of the target content includes: performing content identification and information extraction on the text by using a natural language processing technology to generate content summary information of the text; or,
the target content is an image, and the analyzing the target content to obtain the content summary information of the target content includes: performing content identification on the image, and obtaining content summary information of the image based on a content identification result; or,
the target content is a web page, and the analyzing the target content to obtain the content summary information of the target content includes: performing content identification on the web page, and obtaining content summary information of the web page based on a content identification result; or,
the target content is audio, and the analyzing the target content to obtain the content summary information of the target content includes: performing content identification on the audio, and obtaining content summary information of the audio based on a content identification result; or,
the target content is a video, and the analyzing the target content to obtain the content summary information of the target content includes: performing content identification on the video, and obtaining content summary information of the video based on a content identification result.
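The five branches above amount to a dispatch on content type. A compact sketch, in which each analyzer callable is a hypothetical stand-in for the corresponding recognition technology:

```python
def analyze_target(content_type: str, content, analyzers: dict):
    """Route the target content to the analyzer registered for its type;
    each analyzer returns content summary information."""
    if content_type not in analyzers:
        raise ValueError(f"unsupported content type: {content_type}")
    return analyzers[content_type](content)

# Stand-in analyzers, one per branch described above.
analyzers = {
    "text":  lambda t: t[:20],           # NLP summarization stand-in
    "image": lambda i: "image summary",  # image recognition stand-in
    "web":   lambda w: "page summary",
    "audio": lambda a: "audio summary",
    "video": lambda v: "video summary",
}
```

New content types can be supported by registering another analyzer, without touching the dispatch logic.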
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the performing content identification on the web page and obtaining content summary information of the web page based on a content identification result includes:
performing content identification and information extraction on the text in the web page by using a natural language processing technology to generate content summary information of the web page; or,
classifying the content in the web page, performing content identification and information extraction on the text in the web page to obtain first information, and performing key content extraction on the non-text content in the web page to obtain second information; and obtaining the content summary information of the web page according to the first information and the second information.
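The second variant (classify page content, then extract first information from the text and second information from the non-text) can be sketched with the standard-library HTML parser. The "extraction" steps here are crude stand-ins: first paragraph for NLP extraction, first image reference for key-content extraction.

```python
from html.parser import HTMLParser

class PageSplitter(HTMLParser):
    """Crude classification of page content into text and non-text."""
    def __init__(self):
        super().__init__()
        self.text_parts = []   # textual content of the page
        self.image_srcs = []   # non-text content: image references

    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.image_srcs.append(dict(attrs).get("src", ""))

def summarize_page(html: str) -> str:
    parser = PageSplitter()
    parser.feed(html)
    # first information: stand-in for NLP identification/extraction on the text
    first_info = parser.text_parts[0] if parser.text_parts else ""
    # second information: stand-in for key-content extraction on the non-text
    second_info = parser.image_srcs[0] if parser.image_srcs else ""
    return f"{first_info} [{second_info}]" if second_info else first_info
```

A production system would replace both stand-ins with real models, but the classify-then-combine shape stays the same.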
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the performing content identification on the audio and obtaining content summary information of the audio based on a content identification result includes:
performing content identification and information extraction on the text on the audio by using a natural language processing technology to generate content abstract information of the audio; alternatively, the first and second electrodes may be,
converting the audio into a text, and performing content identification and information extraction on the text obtained by conversion by using a natural language processing technology to generate content abstract information of the audio; alternatively, the first and second electrodes may be,
and converting the audio into a text, and performing content identification and information extraction on the text obtained by conversion and the text on the audio by using a natural language processing technology to generate the content abstract information of the audio.
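The third variant above (combine the speech-to-text transcript with any text attached to the audio, then summarize) can be sketched as follows. `transcribe` and `extract_summary` are injected stand-ins for a real speech-recognition model and an NLP summarizer; neither name comes from the patent.

```python
def summarize_audio(transcribe, extract_summary, audio_bytes, attached_text=""):
    """Combine the ASR transcript with text attached to the audio
    (e.g. embedded lyrics or a description), then summarize the result."""
    transcript = transcribe(audio_bytes)          # audio converted to text
    combined = " ".join(t for t in (attached_text, transcript) if t)
    return extract_summary(combined)
```

With `attached_text=""` this degenerates to the second variant (transcript only), which is why the three branches can share one implementation.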
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the performing content identification on the image and obtaining content summary information of the image based on a content identification result includes: performing feature extraction and classification on the image by using an image processing technology, and obtaining the content summary information of the image based on the classification result.
The above-described aspect and any possible implementation manner further provide an implementation manner, where obtaining content summary information of the image based on the classification result includes:
obtaining text summary information of the image based on the classification result, and selecting at least a partial region image of the image based on the classification result;
and obtaining the content summary information based on the text summary information of the image and the at least partial region image.
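The two-part image summary described above (a text label plus a selected region) can be sketched with injected stand-ins. `classify` is assumed to return a label and a bounding box, and `crop` extracts that region; both are hypothetical placeholders for a real vision model.

```python
def summarize_image(classify, crop, image):
    """Build the content summary of an image from its classification
    result: text summary information plus a selected region image."""
    label, box = classify(image)    # text summary from classification
    region = crop(image, box)       # at least a partial region of the image
    return {"text": label, "region": region}
```

A usage example with a toy 3x3 "image" and a fixed classifier:

```python
fake_classify = lambda img: ("cat", (0, 0, 2, 2))
fake_crop = lambda img, box: [row[box[0]:box[2]] for row in img[box[1]:box[3]]]
summary = summarize_image(fake_classify, fake_crop, [[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```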
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the performing content identification on the video and obtaining content summary information of the video based on a content identification result includes:
selecting a plurality of frames of images from the video;
taking each frame image in the multiple frames of images in turn as a current image, and performing feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image;
and obtaining the content summary information of the video based on the content identification results of the multiple frames of images.
The above aspect and any possible implementation manner further provide an implementation manner, where selecting a plurality of frames of images from the video includes:
segmenting the video to obtain a plurality of segmented videos, and selecting a preset number of images from each of the segmented videos to obtain the multiple frames of images; or,
segmenting the video to obtain a plurality of segmented videos, and selecting a preset number of images from any one or more of the segmented videos to obtain the multiple frames of images; or,
randomly selecting images from the video to obtain the multiple frames of images.
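The first selection strategy (segment the video, take a preset number of frames per segment) can be sketched over a list of frames; the function and parameter names are illustrative, not from the patent.

```python
def sample_frames(frames: list, num_segments: int, per_segment: int) -> list:
    """Split the video (a list of frames here) into num_segments pieces
    and take the first per_segment frames of each piece."""
    seg_len = max(1, len(frames) // num_segments)
    sampled = []
    for start in range(0, len(frames), seg_len):
        sampled.extend(frames[start:start + per_segment])
    return sampled
```

Taking frames from every segment spreads the sample across the whole video, so the summary is less likely to miss a scene than purely random selection.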
The above-described aspect and any possible implementation manner further provide an implementation manner, where the obtaining content summary information of the video based on the content identification result of the multiple frames of images includes:
obtaining text summary information of the multi-frame images based on the content identification results of the multi-frame images, and selecting at least one frame of image or an area image in the at least one frame of image based on the content identification results of the multi-frame images;
and obtaining the content summary information based on the text summary information of the multiple frames of images and the at least one frame of image or the area image in the at least one frame of image.
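The aggregation step above (derive text summary information from the per-frame results and pick at least one representative frame) can be sketched with majority voting over frame labels, a deliberately simple stand-in for a real aggregation model:

```python
from collections import Counter

def summarize_video(frame_results: list) -> dict:
    """frame_results: per-frame content identification labels.
    The text summary is the majority label; the selected key frame is
    the first frame carrying that label."""
    label, _ = Counter(frame_results).most_common(1)[0]
    key_frame_index = frame_results.index(label)
    return {"text": label, "key_frame": key_frame_index}
```

The returned text and key frame together form the content summary information of the video, mirroring the text-plus-image structure used for still images.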
Another aspect of the present application provides a content sharing apparatus, including:
the content analysis unit is used for analyzing the target content to obtain the content summary information of the target content;
the interaction unit is used for displaying the content abstract information;
and the sharing unit is configured to, in response to receiving confirmation information of the user on the content summary information, treat the content summary information confirmed by the user as a user viewpoint and send the user viewpoint and the resource description information of the target content to a location specified by the user.
The above-mentioned aspects and any possible implementation manners further provide an implementation manner, where the interaction unit is further configured to modify the content summary information of the target content according to a modification instruction input by the user, where the modification instruction includes modification information.
The above-mentioned aspects and any possible implementation manners further provide an implementation manner, where the interaction unit is further configured to delete the content summary information of the target content according to a deletion instruction input by the user.
The above-mentioned aspects and any possible implementation manners further provide an implementation manner, where the sharing unit is specifically configured to:
in response to receiving confirmation information of the user on the content summary information, store the content summary information confirmed by the user and the resource description information of the target content in a clipboard;
and in response to receiving a sharing instruction sent by the user selecting a sharing object, acquire the content summary information confirmed by the user and the resource description information of the target content from the clipboard, treat the content summary information confirmed by the user as a user viewpoint, and send the user viewpoint and the resource description information of the target content to a location corresponding to the sharing object.
The above-described aspects and any possible implementations further provide an implementation in which the target content includes any one or more of the following: web pages, text, images, audio, video; or,
the content summary information is any one or more of the following: text, image, audio, video.
The foregoing aspects and any possible implementations further provide an implementation, where the target content is a text, and the content analysis unit is configured to perform content identification and information extraction on the text by using a natural language processing technology to generate content summary information of the text; or,
the target content is an image, and the content analysis unit is configured to perform content identification on the image and obtain content summary information of the image based on a content identification result; or,
the target content is a web page, and the content analysis unit is configured to perform content identification on the web page and obtain content summary information of the web page based on a content identification result; or,
the target content is audio, and the content analysis unit is configured to perform content identification on the audio and obtain content summary information of the audio based on a content identification result; or,
the target content is a video, and the content analysis unit is configured to perform content identification on the video and obtain content summary information of the video based on a content identification result.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where the target content is a web page, and the content analysis unit is specifically configured to:
perform content identification and information extraction on the text in the web page by using a natural language processing technology to generate content summary information of the web page; or,
classify the content in the web page, perform content identification and information extraction on the text in the web page to obtain first information, perform key content extraction on the non-text content in the web page to obtain second information, and obtain the content summary information of the web page according to the first information and the second information.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where the target content is audio, and the content analysis unit is specifically configured to:
perform content identification and information extraction on the text attached to the audio by using a natural language processing technology to generate content summary information of the audio; or,
convert the audio into a text, and perform content identification and information extraction on the converted text by using a natural language processing technology to generate content summary information of the audio; or,
convert the audio into a text, and perform content identification and information extraction on both the converted text and the text attached to the audio by using a natural language processing technology to generate the content summary information of the audio.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where the target content is an image, and the content analysis unit is specifically configured to perform feature extraction and classification on the image by using an image processing technology, and obtain the content summary information of the image based on the classification result.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where the target content is a video, and the content analysis unit is specifically configured to:
select multiple frames of images from the video;
take each frame image in the multiple frames of images in turn as a current image, and perform feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image;
and obtain the content summary information of the video based on the content identification results of the multiple frames of images.
The foregoing aspects and any possible implementations further provide an implementation where, when selecting multiple frames of images from the video, the content analysis unit is specifically configured to:
segment the video to obtain a plurality of segmented videos, and select a preset number of images from each of the segmented videos to obtain the multiple frames of images; or,
segment the video to obtain a plurality of segmented videos, and select a preset number of images from any one or more of the segmented videos to obtain the multiple frames of images; or,
randomly select images from the video to obtain the multiple frames of images.
In another aspect of the present invention, an electronic device is provided, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the aspects and any possible implementation described above.
In another aspect of the invention, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the above described aspects and any possible implementation.
According to the technical solution, content summary information of the target content is obtained by analyzing the target content, the summary information is displayed, and upon receiving the user's confirmation of it, the user-confirmed summary is treated as the user's viewpoint and sent, together with the resource description information of the target content, to a location specified by the user. Summary information related to the shared content can thus be generated automatically as the user's viewpoint when the user shares content, so the user does not need to repeatedly review the shared content in order to manually compose viewpoint text about it, which improves sharing efficiency and user experience. In addition, the viewpoint content attached to the shared content increases the likelihood that other users will view the shared content and improves interaction based on it.
In addition, with the technical solution provided by the application, after the content summary information and the resource description information of the target content are displayed, the content summary information of the target content can be modified according to a modification instruction input by the user, realizing optimized adjustment of the automatically generated content summary information, meeting the user's personalized viewpoint expression needs, and further improving the user experience.
In addition, with the technical solution provided by the application, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, meeting the user's personalized needs and further improving the user experience.
In addition, with the technical solution provided by the present application, the target content may include any one or more of the following: web pages, text, images, audio, and video, so that content summary information can be automatically generated for various types of content and shared as the sharing user's viewpoint, improving sharing efficiency and user experience, and increasing the likelihood that other users will view the shared content as well as the interaction effect based on it.
In addition, with the technical solution provided by the application, the automatically generated content summary information includes any one or more of the following: text, images, audio, and video, which enriches the expression of the user's viewpoint and further improves the user experience.
Further effects of the above aspects or possible implementations will be described below in connection with specific embodiments.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive labor. The drawings are only for the purpose of illustrating the technical solutions and are not to be construed as limiting the present application. In the drawings:
fig. 1A is a schematic flowchart of a content sharing method according to an embodiment of the present application;
fig. 1B is a schematic flowchart of a content sharing method according to another embodiment of the present application;
FIGS. 1C-1F are diagrams of an example of target content and content summary information in an embodiment of the present application;
fig. 2A is a schematic structural diagram of a content sharing device according to an embodiment of the present disclosure;
fig. 2B is a schematic structural diagram of a content sharing device according to another embodiment of the present application;
fig. 3 is a schematic diagram of an electronic device for implementing the content sharing method provided in the embodiments of the present application.
Detailed Description
Exemplary embodiments of the present application are described below with reference to the accompanying drawings, including various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terminal involved in the embodiments of the present application may include, but is not limited to, a mobile phone, a Personal Digital Assistant (PDA), a wireless handheld device, a tablet computer (tablet computer), a Personal Computer (PC), an MP3 player, an MP4 player, a wearable device (e.g., smart glasses, smart watch, smart bracelet, etc.), a smart home device (e.g., smart speaker device, smart television, smart air conditioner, etc.), and the like.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Fig. 1A is a schematic flowchart of a content sharing method according to an embodiment of the present application. As shown in fig. 1A, the method includes the following steps.
101. Analyzing the target content to obtain the content summary information of the target content.
Optionally, in a possible implementation manner of this embodiment, the target content may include, but is not limited to, any one or more of the following: web pages, text, images, audio, video, or any other shareable content.
Optionally, in a possible implementation manner of this embodiment, the content summary information may include, but is not limited to, any one or more of the following: text, images, audio, and video contained in the target content.
102. Displaying the content summary information.
103. In response to receiving confirmation information of the user on the content summary information, taking the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a location specified by the user.
Optionally, in a possible implementation manner of this embodiment, the location specified by the user may include, but is not limited to, any of the following: an application platform, or a friend or group in an application. The application in the embodiments of the present application may be any application, for example, WeChat, QQ, or Weibo. The application may be the application in which the target content is located, or may be a third-party application. The application platform may be a space of an application, such as WeChat Moments, Qzone, or a Weibo page.
It should be noted that part or all of the execution subjects of 101 to 103 may be an application located at the local terminal, or may also be a functional unit such as a plug-in or Software Development Kit (SDK) set in the application located at the local terminal, or may also be a processing engine located in a server on the network side, or may also be a distributed system located on the network side, for example, a processing engine or a distributed system in a smart home service platform on the network side, which is not particularly limited in this embodiment.
It is to be understood that the application may be a native application (native app) installed on the terminal, or may be a web program (webApp) running in a browser on the terminal, which is not limited in this embodiment.
Therefore, when the user shares the content, the summary information related to the shared content can be automatically generated to serve as the user viewpoint, the user viewpoint content related to the shared content does not need to be edited manually by repeatedly checking the shared content, the sharing efficiency can be improved, and the user experience can be improved; in addition, based on the user viewpoint content related to the shared content, the possibility that other users view the shared content and the interaction effect based on the shared content are improved.
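The three-step flow described above (101 to 103) can be sketched as follows. This is an illustrative sketch only: the function names, the dict-based data shapes, the naive truncation-based summarizer, and the example link are assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of steps 101-103; all names and shapes are illustrative.
def analyze_content(target_content):
    """Step 101: derive content summary info (here: naive truncation
    stands in for the real analysis described in the text)."""
    return target_content[:50]

def share(target_content, confirm, destination):
    summary = analyze_content(target_content)   # 101: analyze target content
    if confirm(summary):                        # 102-103: display and confirm
        resource_desc = {"link": "https://example.com/item"}  # assumed link
        destination.append({"viewpoint": summary,
                            "resource": resource_desc})
        return True
    return False
```

If the user declines (the `confirm` callback returns `False`), nothing is sent, mirroring the confirmation-gated flow of step 103.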
Optionally, in the content sharing method provided in another embodiment of the present application, before 101 or after 101, the method may further include:
receiving a content sharing request for the target content sent by a user;
and generating the resource description information of the target content.
The resource description information of the target content may include the link address of the target content, and may further include related information such as the main title, subtitle, release date, and release platform of the target content. For example, in one specific example, the resource description information of one target content may be: Man … - Baidu Tieba.
Based on the resource description information of the target content, the user can read the full text of the target content by clicking the link address therein, and can also know the content type, source and the like of the target content based on the further included related information.
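A minimal sketch of assembling such resource description information, assuming a simple dict-based shape in which only the link address is mandatory (the field names and example values are illustrative, not from the patent):

```python
def build_resource_description(link, title=None, subtitle=None,
                               publish_date=None, platform=None):
    """The link address is mandatory; the related fields are optional
    and included only when available, as described in the text."""
    desc = {"link": link}
    optional = {"title": title, "subtitle": subtitle,
                "publish_date": publish_date, "platform": platform}
    desc.update({k: v for k, v in optional.items() if v is not None})
    return desc
```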
Optionally, the content summary information displayed at 102 is editable information, and the user may directly confirm the displayed content summary information as the user viewpoint, or modify the displayed content summary information, and then confirm the modified content summary information as the user viewpoint. In a specific application, an interactive interface for a user to input a confirmation operation may be displayed when the content summary information is displayed, and the interactive interface may be, for example, a "confirmation" button, a "√" button, or a "share" button or a similar button for implementing a share function, which is not limited in this embodiment.
Optionally, in a possible implementation manner of this embodiment, after 102, the content summary information of the target content may also be modified according to a modification instruction input by the user, where the modification instruction includes modification information.
Therefore, the content summary information of the target content can be modified according to the modification instruction input by the user, realizing optimization and adjustment of the automatically generated content summary information, so that the personalized viewpoint expression requirements of the user can be met, and user experience is further improved.
Optionally, in a possible implementation manner of this embodiment, after 102, the content summary information of the target content may also be deleted according to a deletion instruction input by the user.
Therefore, the automatically generated content summary information of the target content can be deleted according to the deletion instruction input by the user, so that the personalized requirements of the user are met, and user experience is further improved.
Fig. 1B is a schematic flow chart of a content sharing method according to another embodiment of the present application. As shown in fig. 1B, in a possible implementation manner of this embodiment, 103 may include:
1031. In response to receiving confirmation information of the user on the content summary information, storing the content summary information confirmed by the user and the resource description information of the target content in a clipboard.
1032. In response to receiving a sharing instruction sent by the user selecting a sharing object, acquiring the content summary information confirmed by the user and the resource description information of the target content from the clipboard, taking the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a location corresponding to the sharing object.
This embodiment provides a specific implementation of sending the user viewpoint and the resource description information of the target content to the location specified by the user: the content summary information confirmed by the user and the resource description information of the target content are first stored in a clipboard, and then, based on the sharing instruction sent by the user, they are retrieved from the clipboard and sent to the location corresponding to the sharing object.
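Steps 1031 and 1032 can be sketched with an in-memory clipboard stand-in; the `Clipboard` class and the `send` callback are assumptions for illustration, not the system clipboard API the patent would actually use:

```python
class Clipboard:
    """In-memory stand-in for the system clipboard."""
    def __init__(self):
        self._payload = None
    def store(self, payload):
        self._payload = payload
    def fetch(self):
        return self._payload

def on_confirm(clipboard, summary, resource_desc):
    # 1031: persist the confirmed summary and resource description
    clipboard.store({"viewpoint": summary, "resource": resource_desc})

def on_share(clipboard, send):
    # 1032: retrieve from the clipboard and send to the chosen share object
    payload = clipboard.fetch()
    send(payload["viewpoint"], payload["resource"])
```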
Optionally, in a possible implementation manner of this embodiment, when the target content is a text, in 101, content identification and information extraction may be performed on the text by using a Natural Language Processing (NLP) technology, so as to generate content summary information of the text.
Because NLP combines computer science and artificial intelligence techniques, performing content identification and information extraction on the text based on NLP technology in this embodiment enables semantic understanding, context association, key information extraction, and the like of the text content, so that the generated content summary information more accurately represents the key information of the text, improving sharing efficiency.
Optionally, in a possible implementation manner of this embodiment, when the target content is a web page, in 101, content identification is performed on the web page, and content summary information of the web page is obtained based on a content identification result.
For example, NLP technology may be used to perform content identification and information extraction on the text in the web page to generate the content summary information of the web page. Alternatively, the content in the web page may first be classified: content identification and information extraction are performed on the text in the web page to obtain first information, which is text information, and key content extraction is performed on the non-text content in the web page (for example, pictures, videos, links, and two-dimensional codes) to obtain second information, which is non-text information; the content summary information of the web page is then obtained from the first information and the second information. In this way, the generated content summary information can include both text information and non-text key content (such as a poster or two-dimensional code), making it richer, further improving user experience, and increasing the possibility that other users view the shared content as well as the interaction effect based on the shared content.
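The classification of page content into text-derived first information and non-text second information might look like the following sketch, where simple truncation and first-item selection stand in for the NLP extraction and key-content selection the text describes (the element representation is an assumption):

```python
def summarize_web_page(elements):
    """elements: (kind, payload) pairs, e.g. ("text", "..."), ("image", url).
    Returns (first_info, second_info) per the classification described above."""
    texts = [payload for kind, payload in elements if kind == "text"]
    non_texts = [payload for kind, payload in elements if kind != "text"]
    first_info = " ".join(texts)[:80]   # stand-in for NLP-based extraction
    second_info = non_texts[:1]         # stand-in for key-content selection
    return first_info, second_info
```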
Optionally, in a possible implementation manner of this embodiment, when the target content is an audio, in 101, content identification may be performed on the audio, and content summary information of the audio is obtained based on a content identification result.
For example, the NLP technology may be used to perform content identification and information extraction on the text (e.g., song title, artist, lyrics, etc.) on the audio to generate the content summary information of the audio. Or, an audio-to-text conversion technology may be adopted to convert the audio into a text, and an NLP technology is used to perform content identification and information extraction on the text obtained by conversion, so as to generate the content summary information of the audio. Or, the audio may also be converted into a text, and by using NLP technology, content identification and information extraction are performed on the converted text and the text on the audio, so as to generate content summary information of the audio.
Therefore, when no text content exists on the audio or the text content is not rich enough, the audio can be converted into the text by adopting an audio-to-text conversion technology, and then the content abstract information of the audio is generated, so that the content abstract information is richer, and the sharing effect and the interaction effect are further improved.
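The three audio paths just listed (on-audio text only, transcript only, or both combined) can be sketched as one function; the `transcribe` callback stands in for a speech-to-text engine, and truncation stands in for NLP summarization, both as illustrative assumptions:

```python
def summarize_audio(metadata_text=None, audio=None, transcribe=None):
    """Combine text on the audio (song title, artist, lyrics) with an
    optional transcript of the audio itself."""
    parts = []
    if metadata_text:
        parts.append(metadata_text)
    if audio is not None and transcribe is not None:
        parts.append(transcribe(audio))   # audio-to-text conversion path
    return " / ".join(parts)[:80]         # stand-in for NLP summarization
```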
Optionally, in a possible implementation manner of this embodiment, when the target content is an image, in 101, content identification may be performed on the image, and content summary information of the image is obtained based on a content identification result.
For example, image processing techniques may be used to extract and classify features of the images (e.g., animals, people, vehicles, buildings, flowers, trees, etc.), and to derive summary information of the content of the images based on the classification.
When the content summary information of the image is obtained based on the classification result, text summary information of the image may be obtained based on the classification result, at least a partial region image of the image may be selected based on the classification result, and the content summary information may then be obtained based on the text summary information and the selected region image.
Therefore, the content summary information shared as the user viewpoint can comprise the key area images in the characters and the images, so that the content summary information is richer, and the sharing effect and the interaction effect are further improved.
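A sketch of pairing text summary information with a key region image, as described above; the `classify` and `crop` callbacks are assumed stand-ins for an image-processing backend, and the text template is illustrative:

```python
def summarize_image(image, classify, crop):
    """classify(image) -> (label, region); the summary pairs a text line
    with the cropped key region of the image."""
    label, region = classify(image)
    return {"text": "Image of: " + label,
            "region_image": crop(image, region)}
```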
Optionally, in a possible implementation manner of this embodiment, when the target content is a video, in 101, content identification may be performed on the video, and content summary information of the video is obtained based on a content identification result.
For example, multiple frames of images may be selected from the video, then each frame of image in the multiple frames of images is used as a current image, an image processing technology is used to perform feature extraction and classification on the current image to obtain a content identification result of the current image, and then content summary information of the video is obtained based on the content identification result of the multiple frames of images.
When a plurality of frames of images are selected from the video, the video can be segmented to obtain a plurality of segmented videos; and respectively selecting a preset number of images from each segmented video in the segmented videos to obtain the multi-frame images.
Or, the video may be segmented to obtain a plurality of segmented videos; and respectively selecting a preset number of images from any one or more segmented videos in the plurality of segmented videos to obtain the multi-frame images.
Alternatively, images may be randomly selected from the video to obtain the plurality of frames of images.
In a specific implementation process, the video may be segmented, for example, by performing average segmentation or random segmentation on the video, or may also be segmented according to scenes, and an image of the same scene is divided into a segmented video, or may also be segmented in other manners, which is not particularly limited in this embodiment.
When the content abstract information of the video is obtained based on the content identification result of the multiple frames of images, the text abstract information of the multiple frames of images can be obtained based on the content identification result of the multiple frames of images, at least one frame of image or an area image in the at least one frame of image is selected based on the content identification result of the multiple frames of images, and then the content abstract information is obtained based on the text abstract information of the multiple frames of images and the at least one frame of image or the area image in the at least one frame of image.
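Segment-based frame selection, one of the options listed above, can be sketched as follows; equal-length segmentation and a fixed per-segment count are illustrative choices, and the function operates on an already-decoded frame list rather than a real video stream:

```python
def select_frames(frames, num_segments=3, per_segment=1):
    """Split the frame sequence into roughly equal-length segments and
    take the first `per_segment` frames of each segment."""
    seg_len = max(1, len(frames) // num_segments)
    selected = []
    for start in range(0, len(frames), seg_len):
        selected.extend(frames[start:start + per_segment])
    return selected
```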
Fig. 1C to 1F are diagrams illustrating examples of target content and content summary information in an embodiment of the present application.
As shown in FIGS. 1C-1E, three frames of images from a video titled "Designing a stable lithium battery amid challenges! 97-year-old World War II veteran becomes the oldest Nobel prize winner" serve as the target content in an embodiment of the present application. The three frames shown in FIGS. 1C-1E are the multi-frame images selected based on the embodiment of the present application, and the content summary information of the video generated based on the embodiment is: "We have to choose carefully what to use as a life service, as that will decide … our future!" The user chooses to directly use this content summary information as the user viewpoint. FIG. 1F shows an example of the content display effect of sharing the user viewpoint and the resource description information of the video to a WeChat friend circle based on the embodiment of the present application.
According to the technical solutions provided by the above embodiments, the target content is analyzed to obtain its content summary information, the content summary information is displayed, and, in response to receiving confirmation information of the user on the content summary information, the confirmed content summary information is taken as a user viewpoint and sent, together with the resource description information of the target content, to the location specified by the user. In this way, when the user shares content, summary information related to the shared content can be automatically generated to serve as the user viewpoint, without the user repeatedly reviewing the shared content to manually edit viewpoint content, which improves sharing efficiency and user experience. In addition, the user viewpoint content related to the shared content increases the possibility that other users view the shared content and improves the interaction effect based on the shared content.
In addition, by adopting the technical solution provided by the present application, after the content summary information and the resource description information of the target content are displayed, the content summary information of the target content can be modified according to a modification instruction input by the user, realizing optimized adjustment of the automatically generated content summary information, so that the personalized viewpoint expression requirements of the user can be met, and user experience is further improved.
In addition, by adopting the technical solution provided by the present application, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, so that the personalized requirements of the user are met, and user experience is further improved.
In addition, with the technical solution provided by the present application, the target content may include any one or more of the following: web pages, text, images, audio, and video, so that content summary information can be automatically generated for various types of content and shared as the viewpoint of the sharing user, which improves sharing efficiency and user experience, and increases the possibility that other users view the shared content as well as the interaction effect based on the shared content.
In addition, by adopting the technical scheme provided by the application, the automatically generated content summary information comprises any one or more of the following items: text, image, audio and video, so that the expression content of the user viewpoint can be enriched, and the user experience is further improved.
Fig. 2A is a schematic structural diagram of a content sharing apparatus according to an embodiment of the present application. As shown in fig. 2A, the content sharing apparatus 200 of this embodiment may include a content analysis unit 201, an interaction unit 202, and a sharing unit 203. The content analysis unit 201 is configured to analyze target content to obtain content summary information of the target content; the interaction unit 202 is configured to display the content summary information; the sharing unit 203 is configured to, in response to receiving confirmation information of the user on the content summary information, take the content summary information confirmed by the user as a user viewpoint and send the user viewpoint and the resource description information of the target content to a location specified by the user.
It should be noted that, part or all of the execution main body of the content sharing apparatus provided in this embodiment may be an application located at the local terminal, or may also be a functional unit such as a plug-in or Software Development Kit (SDK) set in the application located at the local terminal, or may also be a processing engine located in a server on the network side, or may also be a distributed system located on the network side, for example, a processing engine or a distributed system in a test platform on the network side, which is not particularly limited in this embodiment.
It is to be understood that the application may be a native application (native app) installed on the terminal, or may be a web program (webApp) running in a browser on the terminal, which is not limited in this embodiment.
Optionally, in a possible implementation manner of this embodiment, the interaction unit 202 may be further configured to receive a content sharing request for the target content, where the content sharing request is sent by a user. Fig. 2B is a schematic structural diagram of a content sharing apparatus according to another embodiment of the present application; as shown in fig. 2B, the content sharing apparatus 200 of this embodiment may further include a generating unit 204 configured to generate the resource description information of the target content.
Optionally, in a possible implementation manner of this embodiment, the interaction unit 202 may be further configured to modify the content summary information of the target content according to a modification instruction input by the user, where the modification instruction includes modification information.
Optionally, in a possible implementation manner of this embodiment, the interaction unit 202 may be further configured to delete the content summary information of the target content according to a deletion instruction input by the user.
In a specific implementation process, the sharing unit 203 is specifically configured to: in response to receiving confirmation information of the user on the content summary information, storing the content summary information confirmed by the user and the resource description information of the target content in a clipboard; and in response to receiving a sharing instruction sent by a user selecting a sharing object, acquiring the content abstract information confirmed by the user and the resource description information of the target content from the clipboard, and sending the user viewpoint and the resource description information of the target content to a position corresponding to the sharing object by taking the content abstract information confirmed by the user as a user viewpoint.
Optionally, in a possible implementation manner of this embodiment, the target content may include, but is not limited to, any one or more of the following: web pages, text, images, audio, video, etc.; alternatively, the content summary information may include, but is not limited to, any one or more of the following: text, images, audio, video, etc.
Optionally, in a possible implementation manner of this embodiment, the target content is a text, and the content analysis unit 201 is configured to perform content identification and information extraction on the text by using an NLP technology, so as to generate content summary information of the text; or, the target content is an image, and the content analysis unit 201 is configured to perform content identification on the image, and obtain content summary information of the image based on a content identification result; or, the target content is a web page, and the content analysis unit 201 is configured to perform content identification on the web page, and obtain content summary information of the web page based on a content identification result; or, the target content is an audio, and the content analysis unit 201 is configured to perform content identification on the audio, and obtain content summary information of the audio based on a content identification result; or, the target content is a video, and the content analysis unit 201 is configured to perform content identification on the video, and obtain content summary information of the video based on a content identification result.
In a specific implementation process, the target content is a web page, and the content analysis unit 201 is specifically configured to: performing content identification and information extraction on the text in the webpage by using an NLP technology to generate content abstract information of the webpage; or classifying the content in the webpage, performing content identification and information extraction on the text in the webpage to obtain first information, and performing key content extraction on the non-text in the webpage to obtain second information; and obtaining the content abstract information of the webpage according to the first information and the second information.
In a specific implementation process, the target content is an audio, and the content analysis unit 201 is specifically configured to: performing content identification and information extraction on the text on the audio by using an NLP technology to generate content abstract information of the audio; or, converting the audio into a text, and performing content identification and information extraction on the text obtained by conversion by using an NLP technology to generate content abstract information of the audio; or, the audio is converted into a text, and the text obtained by conversion and the text on the audio are subjected to content identification and information extraction by using an NLP technology to generate the content abstract information of the audio.
In a specific implementation process, the target content is an image, and the content analysis unit 201 is specifically configured to perform feature extraction and classification on the image by using an image processing technology, and obtain content summary information of the image based on the classification result.
For example, the content analysis unit 201 is further specifically configured to perform feature extraction and classification on the image by using an image processing technology; obtaining text abstract information of the image based on the classification result, and selecting at least partial region image of the image based on the classification result; and obtaining the content abstract information based on the text abstract information of the image and the at least partial region image.
In a specific implementation process, the target content is a video, and the content analysis unit 201 is specifically configured to: selecting a plurality of frames of images from the video; respectively taking each frame image in the multi-frame images as a current image, and performing feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image; and obtaining the content abstract information of the video based on the content identification result of the multi-frame image.
For example, when the content analysis unit 201 selects a plurality of frames of images from the video, it is specifically configured to: segmenting the video to obtain a plurality of segmented videos; respectively selecting a preset number of images from each segmented video in the segmented videos to obtain the multi-frame images; or segmenting the video to obtain a plurality of segmented videos; selecting a preset number of images from any one or more segmented videos in the plurality of segmented videos respectively to obtain the multi-frame images; or randomly selecting images from the video to obtain the multi-frame images.
Specifically, the content analysis unit 201 is specifically configured to: selecting a plurality of frames of images from the video; respectively taking each frame image in the multi-frame images as a current image, and performing feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image; obtaining text summary information of the multi-frame images based on the content identification results of the multi-frame images, and selecting at least one frame of image or an area image in the at least one frame of image based on the content identification results of the multi-frame images; and obtaining the content summary information based on the text summary information of the multiple frames of images and the at least one frame of image or the area image in the at least one frame of image.
Optionally, in a possible implementation manner of this embodiment, the location specified by the user may include, but is not limited to, any of the following: an application platform, a friend or group in an application, and the like.
It should be noted that the method in the embodiment corresponding to fig. 1A to fig. 1B may be implemented by the content sharing apparatus provided in this embodiment. For detailed description, reference may be made to relevant contents in the embodiments corresponding to fig. 1A to fig. 1B, and details are not repeated here.
According to the technical solutions provided by the above embodiments, the content sharing apparatus analyzes the target content to obtain its content summary information, displays the content summary information, and, in response to receiving confirmation information of the user on the content summary information, takes the confirmed content summary information as a user viewpoint and sends it, together with the resource description information of the target content, to the location specified by the user. In this way, when the user shares content, summary information related to the shared content can be automatically generated to serve as the user viewpoint, without the user repeatedly reviewing the shared content to manually edit viewpoint content, which improves sharing efficiency and user experience. In addition, the user viewpoint content related to the shared content increases the possibility that other users view the shared content and improves the interaction effect based on the shared content.
In addition, by adopting the technical solution provided by the present application, after the content summary information and the resource description information of the target content are displayed, the content summary information of the target content can be modified according to a modification instruction input by the user, realizing optimized adjustment of the automatically generated content summary information, so that the personalized viewpoint expression requirements of the user can be met, and user experience is further improved.
In addition, by adopting the technical solution provided by the present application, the automatically generated content summary information of the target content can be deleted according to a deletion instruction input by the user, so that the personalized requirements of the user are met, and user experience is further improved.
In addition, with the technical solution provided by the present application, the target content may include any one or more of the following: web pages, text, images, audio, and video, so that content summary information can be automatically generated for various types of content and shared as the viewpoint of the sharing user, which improves sharing efficiency and user experience, and increases the possibility that other users view the shared content as well as the interaction effect based on the shared content.
In addition, by adopting the technical scheme provided by the present application, the automatically generated content summary information may include any one or more of the following: text, images, audio, and video, so that the expression of the user viewpoint can be enriched and user experience further improved.
Further effects of the above aspects or possible implementations will be described below in connection with specific embodiments.
The present application also provides an electronic device and a non-transitory computer readable storage medium having computer instructions stored thereon, according to embodiments of the present application.
Fig. 3 is a schematic view of an electronic device for implementing the content sharing method according to the embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 3, the electronic apparatus includes: one or more processors 301, memory 302, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a Graphical User Interface (GUI) on an external input/output apparatus, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 3, one processor 301 is taken as an example.
Memory 302 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the content sharing method provided by the present application. A non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform a content sharing method provided herein.
The memory 302 is a non-transitory computer readable storage medium, and can be used to store non-transitory software programs, non-transitory computer executable programs, and units, such as program instructions/units (for example, the obtaining unit 201, the associating unit 202, and the control unit 203 shown in fig. 2) corresponding to the content sharing method in the embodiment of the present application. The processor 301 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and units stored in the memory 302, that is, implements the content sharing method in the above method embodiment.
The memory 302 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data and the like created according to use of the electronic device that implements the content sharing method provided by the embodiment of the present application. Further, the memory 302 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 302 may optionally include a memory remotely located from the processor 301, and these remote memories may be connected to an electronic device implementing the content sharing method provided by the embodiments of the present application via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the content sharing method may further include: an input device 303 and an output device 304. The processor 301, the memory 302, the input device 303 and the output device 304 may be connected by a bus or in other manners; fig. 3 illustrates connection by a bus as an example.
The input device 303 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic device implementing the content sharing method provided by the embodiment of the present application, such as an input device like a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, etc. The output devices 304 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, an Application Specific Integrated Circuit (ASIC), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (28)

1. A method for sharing content, comprising:
analyzing the target content to obtain content abstract information of the target content;
displaying the content summary information;
and in response to receiving confirmation information of the user on the content summary information, taking the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a position specified by the user.
2. The method of claim 1, wherein after displaying the content summary information, further comprising:
and modifying the content summary information of the target content according to a modification instruction input by the user, wherein the modification instruction comprises modification information.
3. The method of claim 1, wherein after displaying the content summary information and the resource description information of the target content, further comprising:
and deleting the content summary information of the target content according to a deletion instruction input by the user.
4. The method according to claim 1, wherein the sending the user viewpoint and the resource description information to the location specified by the user with the content summary information confirmed by the user as the user viewpoint in response to receiving the confirmation information of the content summary information by the user comprises:
in response to receiving confirmation information of the user on the content summary information, storing the content summary information confirmed by the user and the resource description information of the target content in a clipboard;
and in response to receiving a sharing instruction sent by the user selecting a sharing object, acquiring the content summary information confirmed by the user and the resource description information of the target content from the clipboard, taking the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a position corresponding to the sharing object.
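The clipboard-mediated flow in claim 4 can be illustrated with a minimal sketch; the dict-based in-memory `clipboard` and the function names are assumptions for illustration only:

```python
# Illustrative sketch of the clipboard-mediated flow: the confirmed summary
# and resource description wait in a clipboard until a share object is
# chosen. The dict-based clipboard and all names are assumptions.

clipboard = {}

def on_confirm(summary, resource):
    # Step 1: park the confirmed summary and resource description.
    clipboard["viewpoint"] = summary
    clipboard["resource"] = resource

def on_share(share_object):
    # Step 2: retrieve from the clipboard and send to the chosen object.
    payload = (clipboard["viewpoint"], clipboard["resource"])
    return {"to": share_object, "payload": payload}

on_confirm("A concise take", "https://example.com/item")
sent = on_share("friend-chat")
```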
5. The method of any one of claims 1-4, wherein the target content comprises any one or more of: web pages, text, images, audio, video; or,
the content summary information comprises any one or more of the following items: text, image, audio, video.
6. The method of claim 5, wherein the target content is a text, and analyzing the target content to obtain content summary information of the target content comprises: performing content identification and information extraction on the text by using a natural language processing technology to generate content summary information of the text; or,
the target content is an image, and the analyzing the target content to obtain the content summary information of the target content comprises: performing content identification on the image, and obtaining content summary information of the image based on a content identification result; or,
the target content is a webpage, and the analyzing the target content to obtain the content summary information of the target content comprises: performing content identification on the webpage, and obtaining content summary information of the webpage based on a content identification result; or,
the target content is audio, and the analyzing the target content to obtain the content summary information of the target content comprises: performing content identification on the audio, and obtaining content summary information of the audio based on a content identification result; or,
the target content is a video, and the analyzing the target content to obtain the content summary information of the target content comprises: performing content identification on the video, and obtaining content summary information of the video based on a content identification result.
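A type-based dispatch like the one claimed can be sketched as follows; the handlers are trivial stand-ins for real NLP, image, and speech recognition, and all names are assumptions:

```python
# Illustrative dispatcher: each content type routes to its own recognizer
# before summary generation. Handlers are placeholders for real models.

def summarize_text(text):
    return text[:40]                      # placeholder for NLP extraction

def summarize_image(image):
    return "image of " + image["label"]   # placeholder for a vision model

def summarize_audio(audio):
    return summarize_text(audio["transcript"])

def summarize_video(video):
    return "video: " + video["title"]

def summarize_webpage(page):
    return summarize_text(page["body"])

HANDLERS = {
    "text": summarize_text,
    "image": summarize_image,
    "audio": summarize_audio,
    "video": summarize_video,
    "webpage": summarize_webpage,
}

def content_summary(kind, payload):
    return HANDLERS[kind](payload)
```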
7. The method of claim 6, wherein the identifying the content of the web page, and obtaining the summary information of the content of the web page based on the content identification result comprises:
performing content identification and information extraction on the text in the webpage by using a natural language processing technology to generate content summary information of the webpage; or,
classifying the content in the webpage, performing content identification and information extraction on the text in the webpage to obtain first information, and performing key content extraction on the non-text content in the webpage to obtain second information; and obtaining the content summary information of the webpage according to the first information and the second information.
8. The method of claim 6, wherein the content recognition of the audio and obtaining the content summary information of the audio based on the content recognition result comprises:
performing content identification and information extraction on the text on the audio by using a natural language processing technology to generate content summary information of the audio; or,
converting the audio into a text, and performing content identification and information extraction on the converted text by using a natural language processing technology to generate content summary information of the audio; or,
converting the audio into a text, and performing content identification and information extraction on the converted text and the text on the audio by using a natural language processing technology to generate the content summary information of the audio.
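The three alternatives of claim 8 can be sketched as follows; `transcribe` and `first_sentence` are placeholder stand-ins for real speech-to-text and NLP summarization, and the dict-based audio object is an assumption:

```python
# Sketch of the three audio-summary alternatives; all names are illustrative.

def transcribe(audio):
    return audio.get("speech", "")        # placeholder speech-to-text

def first_sentence(text):
    return text.split(".")[0].strip()     # placeholder NLP summarizer

def audio_summary(audio, mode):
    caption = audio.get("caption", "")    # text attached to the audio
    if mode == "caption":                 # alternative 1: attached text only
        return first_sentence(caption)
    if mode == "transcript":              # alternative 2: converted text only
        return first_sentence(transcribe(audio))
    # alternative 3: combine attached text and converted text
    return first_sentence(caption + " " + transcribe(audio))
```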
9. The method according to claim 6, wherein the performing content recognition on the image and obtaining content summary information of the image based on the content recognition result comprises:
and utilizing an image processing technology to extract and classify the features of the image, and obtaining the content summary information of the image based on the classification result.
10. The method according to claim 9, wherein the obtaining content summary information of the image based on the classification result comprises:
obtaining text summary information of the image based on the classification result, and selecting at least a partial region image of the image based on the classification result;
and obtaining the content summary information based on the text summary information of the image and the at least partial region image.
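Claims 9-10 combine a text summary derived from classification labels with a selected image region. A minimal sketch, with a stubbed classifier returning fixed labels, scores, and bounding boxes (all values and names illustrative):

```python
# Classify the image, derive a text summary from confident labels, and keep
# the highest-scoring region as the partial region image. Stubbed classifier.

def classify(image):
    # A real system would run a vision model here.
    return [
        {"label": "dog", "score": 0.9, "box": (10, 10, 60, 60)},
        {"label": "ball", "score": 0.4, "box": (70, 70, 90, 90)},
    ]

def image_summary(image):
    regions = classify(image)
    # Text summary: labels of confidently classified regions.
    text = ", ".join(r["label"] for r in regions if r["score"] > 0.5)
    # Partial region image: the box of the best-scoring region.
    best = max(regions, key=lambda r: r["score"])
    return {"text": text, "region": best["box"]}
```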
11. The method according to claim 6, wherein the content recognition of the video and obtaining the content summary information of the video based on the content recognition result comprises:
selecting a plurality of frames of images from the video;
respectively taking each frame image in the multi-frame images as a current image, and performing feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image;
and obtaining the content summary information of the video based on the content identification results of the multi-frame images.
12. The method of claim 11, wherein said selecting a plurality of frames of images from said video comprises:
segmenting the video to obtain a plurality of segmented videos; respectively selecting a preset number of images from each segmented video in the plurality of segmented videos to obtain the multi-frame images; or,
segmenting the video to obtain a plurality of segmented videos; selecting a preset number of images from any one or more segmented videos in the plurality of segmented videos respectively to obtain the multi-frame images; or,
and randomly selecting images from the video to obtain the multi-frame images.
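The first alternative of claim 12 (segment the video, then take a preset number of frames per segment) can be sketched with frames modeled as indices; the function name and parameters are illustrative:

```python
# Split the video into equal segments and take a preset number of frames
# from the start of each segment. Frames are modeled as integer indices.

def select_frames(total_frames, segments, per_segment):
    seg_len = total_frames // segments
    picked = []
    for s in range(segments):
        start = s * seg_len
        stop = min(start + per_segment, start + seg_len)
        picked.extend(range(start, stop))   # first frames of this segment
    return picked

frames = select_frames(total_frames=100, segments=4, per_segment=2)
```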
13. The method according to claim 11, wherein the obtaining content summary information of the video based on the content identification result of the multiple frames of images comprises:
obtaining text summary information of the multi-frame images based on the content identification results of the multi-frame images, and selecting at least one frame of image or an area image in the at least one frame of image based on the content identification results of the multi-frame images;
and obtaining the content summary information based on the text summary information of the multiple frames of images and the at least one frame of image or the area image in the at least one frame of image.
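Merging per-frame recognition results into the video's content summary, as in claim 13, can be sketched as follows; the data shapes are assumptions, and a single representative frame stands in for the claimed "at least one frame of image":

```python
# Collect distinct per-frame labels as the text summary and keep the most
# confident frame as the representative image. Data shapes are illustrative.

def video_summary(frame_results):
    # frame_results: list of (frame_index, labels, confidence) tuples.
    labels = []
    for _, frame_labels, _ in frame_results:
        for label in frame_labels:
            if label not in labels:        # keep first-seen order, no dups
                labels.append(label)
    # Representative frame: the one with the highest confidence.
    key_frame = max(frame_results, key=lambda r: r[2])[0]
    return {"text": " / ".join(labels), "key_frame": key_frame}

summary = video_summary([(0, ["beach"], 0.6), (40, ["beach", "surfer"], 0.9)])
```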
14. A content sharing apparatus, comprising:
the content analysis unit is used for analyzing the target content to obtain the content summary information of the target content;
the interaction unit is used for displaying the content summary information;
and the sharing unit is used for responding to the received confirmation information of the user on the content summary information, taking the content summary information confirmed by the user as a user viewpoint, and sending the user viewpoint and the resource description information of the target content to a position specified by the user.
15. The apparatus of claim 14, wherein the interaction unit is further configured to modify the content summary information of the target content according to a modification instruction input by the user, and the modification instruction comprises modification information.
16. The apparatus of claim 14, wherein the interaction unit is further configured to delete the content summary information of the target content according to a deletion instruction input by the user.
17. The apparatus according to claim 14, wherein the sharing unit is specifically configured to
in response to receiving confirmation information of the user on the content summary information, store the content summary information confirmed by the user and the resource description information of the target content in a clipboard;
and in response to receiving a sharing instruction sent by the user selecting a sharing object, acquire the content summary information confirmed by the user and the resource description information of the target content from the clipboard, take the content summary information confirmed by the user as a user viewpoint, and send the user viewpoint and the resource description information of the target content to a position corresponding to the sharing object.
18. The apparatus according to any one of claims 14-17, wherein the target content comprises any one or more of: web pages, text, images, audio, video; or,
the content summary information comprises any one or more of the following items: text, image, audio, video.
19. The apparatus according to claim 18, wherein the target content is a text, and the content analysis unit is configured to perform content identification and information extraction on the text by using a natural language processing technique to generate content summary information of the text; or,
the target content is an image, and the content analysis unit is configured to perform content identification on the image and obtain content summary information of the image based on a content identification result; or,
the target content is a webpage, and the content analysis unit is configured to perform content identification on the webpage and obtain content summary information of the webpage based on a content identification result; or,
the target content is audio, and the content analysis unit is configured to perform content identification on the audio and obtain content summary information of the audio based on a content identification result; or,
the target content is a video, and the content analysis unit is configured to perform content identification on the video and obtain content summary information of the video based on a content identification result.
20. The apparatus according to claim 19, wherein the target content is a web page, and the content analysis unit is specifically configured to
perform content identification and information extraction on the text in the webpage by using a natural language processing technology to generate content summary information of the webpage; or,
classify the content in the webpage, perform content identification and information extraction on the text in the webpage to obtain first information, and perform key content extraction on the non-text content in the webpage to obtain second information; and obtain the content summary information of the webpage according to the first information and the second information.
21. The apparatus according to claim 19, wherein the target content is audio, and wherein the content analysis unit is specifically configured to
perform content identification and information extraction on the text on the audio by using a natural language processing technology to generate content summary information of the audio; or,
convert the audio into a text, and perform content identification and information extraction on the converted text by using a natural language processing technology to generate content summary information of the audio; or,
convert the audio into a text, and perform content identification and information extraction on the converted text and the text on the audio by using a natural language processing technology to generate the content summary information of the audio.
22. The apparatus according to claim 19, wherein the target content is an image and the content analysis unit is specifically configured to
perform feature extraction and classification on the image by using an image processing technology, and obtain the content summary information of the image based on a classification result.
23. The apparatus according to claim 22, wherein the content analysis unit is specifically configured to
perform feature extraction and classification on the image by using an image processing technology;
obtain text summary information of the image based on the classification result, and select at least a partial region image of the image based on the classification result;
and obtain the content summary information based on the text summary information of the image and the at least partial region image.
24. The apparatus according to claim 19, wherein the target content is a video and the content analysis unit is specifically configured to
select a plurality of frames of images from the video;
respectively take each frame image in the multi-frame images as a current image, and perform feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image;
and obtain the content summary information of the video based on the content identification results of the multi-frame images.
25. The apparatus according to claim 24, wherein, in selecting the plurality of frames of images from the video, the content analysis unit is specifically configured to
segment the video to obtain a plurality of segmented videos, and respectively select a preset number of images from each segmented video in the plurality of segmented videos to obtain the multi-frame images; or,
segment the video to obtain a plurality of segmented videos, and select a preset number of images from any one or more segmented videos in the plurality of segmented videos to obtain the multi-frame images; or,
randomly select images from the video to obtain the multi-frame images.
26. The apparatus according to claim 24, wherein the content analysis unit is specifically configured to
select a plurality of frames of images from the video;
respectively take each frame image in the multi-frame images as a current image, and perform feature extraction and classification on the current image by using an image processing technology to obtain a content identification result of the current image;
obtain text summary information of the multi-frame images based on the content identification results of the multi-frame images, and select at least one frame of image or a region image in the at least one frame of image based on the content identification results of the multi-frame images;
and obtain the content summary information based on the text summary information of the multi-frame images and the at least one frame of image or the region image in the at least one frame of image.
27. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-13.
28. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-13.
CN201911212878.0A 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium Active CN111158924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212878.0A CN111158924B (en) 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111158924A true CN111158924A (en) 2020-05-15
CN111158924B CN111158924B (en) 2023-09-22

Family

ID=70556294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212878.0A Active CN111158924B (en) 2019-12-02 2019-12-02 Content sharing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111158924B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694984A (en) * 2020-06-12 2020-09-22 百度在线网络技术(北京)有限公司 Video searching method and device, electronic equipment and readable storage medium
CN113157153A (en) * 2021-02-07 2021-07-23 北京字节跳动网络技术有限公司 Content sharing method and device, electronic equipment and computer readable storage medium
CN113626585A (en) * 2021-08-27 2021-11-09 京东方科技集团股份有限公司 Abstract generation method and device, electronic equipment and storage medium
CN115119069A (en) * 2021-03-17 2022-09-27 阿里巴巴新加坡控股有限公司 Multimedia content processing method, electronic device and computer storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452470A (en) * 2007-10-18 2009-06-10 石忠民 Method and apparatus for a web search engine generating summary-style search results
CN102404107A (en) * 2010-09-13 2012-04-04 腾讯科技(深圳)有限公司 Method, device, transmitting end and receiving end all capable of guaranteeing safety of inputted content
CN102567532A (en) * 2011-12-30 2012-07-11 奇智软件(北京)有限公司 Information distribution method and information distribution device
US20120274750A1 (en) * 2011-04-26 2012-11-01 Echostar Technologies L.L.C. Apparatus, systems and methods for shared viewing experience using head mounted displays
CN103207892A (en) * 2013-03-12 2013-07-17 百度在线网络技术(北京)有限公司 Method and device for sharing document through network
CN104731959A (en) * 2015-04-03 2015-06-24 北京威扬科技有限公司 Video abstraction generating method, device and system based on text webpage content
CN106331328A (en) * 2016-08-17 2017-01-11 北京小米移动软件有限公司 Information prompting method and device
CN107451139A (en) * 2016-05-30 2017-12-08 北京三星通信技术研究有限公司 File resource display method and device, and corresponding smart device
CN107831974A (en) * 2017-11-30 2018-03-23 腾讯科技(深圳)有限公司 Information sharing method, device and storage medium
CN108133707A (en) * 2017-11-30 2018-06-08 百度在线网络技术(北京)有限公司 Content sharing method and system
CN108363749A (en) * 2018-01-29 2018-08-03 上海星佑网络科技有限公司 Method and apparatus for information processing
CN108520014A (en) * 2018-03-21 2018-09-11 广东欧珀移动通信有限公司 Information sharing method, device, mobile terminal and computer-readable medium
CN110175323A (en) * 2018-05-31 2019-08-27 腾讯科技(深圳)有限公司 Method and device for generating message abstract


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG, ZHIJUN: "Easily Mastering Yahoo Bookmarks", Computer Knowledge and Technology (Academic Exchange), no. 28, 26 October 2006 (2006-10-26), pages 109-110 *


Also Published As

Publication number Publication date
CN111158924B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN112616063B (en) Live broadcast interaction method, device, equipment and medium
US11138207B2 (en) Integrated dynamic interface for expression-based retrieval of expressive media content
US20170083524A1 (en) Platform and dynamic interface for expression-based retrieval of expressive media content
CN111158924B (en) Content sharing method and device, electronic equipment and readable storage medium
CN115443641A (en) Combining first user interface content into a second user interface
WO2020187012A1 (en) Communication method, apparatus and device, and group creation method, apparatus and device
CN114787813A (en) Context sensitive avatar captions
US20170083519A1 (en) Platform and dynamic interface for procuring, organizing, and retrieving expressive media content
CN107977928B (en) Expression generation method and device, terminal and storage medium
US20170083520A1 (en) Selectively procuring and organizing expressive media content
CN104298429A (en) Information presentation method based on input and input method system
CN105204886B (en) Method, user terminal and server for activating an application program
CN111565143B (en) Instant messaging method, equipment and computer readable storage medium
CN113746874B (en) Voice package recommendation method, device, equipment and storage medium
CN112752121B (en) Video cover generation method and device
US20220092071A1 (en) Integrated Dynamic Interface for Expression-Based Retrieval of Expressive Media Content
CN111177462B (en) Video distribution timeliness determination method and device
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN112818224B (en) Information recommendation method and device, electronic equipment and readable storage medium
CN110909241B (en) Information recommendation method, user identification recommendation method, device and equipment
US11048387B1 (en) Systems and methods for managing media feed timelines
US20210271725A1 (en) Systems and methods for managing media feed timelines
CN112843681A (en) Virtual scene control method and device, electronic equipment and storage medium
CN111666498A (en) Friend recommendation method based on interactive information, related device and storage medium
CN111918073A (en) Management method and device of live broadcast room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant