CN110708574B - Method and device for publishing information


Info

Publication number
CN110708574B
CN110708574B (application CN201911011648.8A)
Authority
CN
China
Prior art keywords
video
picture
user
information
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911011648.8A
Other languages
Chinese (zh)
Other versions
CN110708574A (en)
Inventor
敦戎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lianshang Network Technology Co Ltd
Original Assignee
Shanghai Lianshang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lianshang Network Technology Co Ltd filed Critical Shanghai Lianshang Network Technology Co Ltd
Priority to CN201911011648.8A priority Critical patent/CN110708574B/en
Publication of CN110708574A publication Critical patent/CN110708574A/en
Application granted granted Critical
Publication of CN110708574B publication Critical patent/CN110708574B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Abstract

The embodiments of the present application disclose a method and device for publishing information. One embodiment of the method comprises: determining a video designated by a user according to an operation performed by the user on an information publishing interface; if a picture extraction operation on the video is detected, acquiring one or more pictures extracted from the video; and if an information publishing operation is detected, publishing the extracted picture or pictures on an information flow page. In this embodiment, the pictures to be published are extracted from the video automatically, so the user does not need to take screenshots manually; the manual work of publishing pictures from a video is simplified, and a more convenient way of publishing information is provided.

Description

Method and device for publishing information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and equipment for publishing information.
Background
An information stream generally refers to an ordered collection of information; for example, an information stream may be a set of items sorted by publication time. Currently, various types of applications (e.g., social software, picture-sharing software) provide an information flow page on which such streams can be displayed.
Typically, a user may publish information on an information flow page. The published content may include text, pictures, videos, and the like. Text published on the page can be read directly. For a published picture, the user can browse its thumbnail directly, or click it to view an enlarged, clear version. For a published video, the user can browse a thumbnail video frame bearing a play mark, or spend some time loading and playing the video in order to browse its content. This way of publishing information on an information flow page has become conventional thinking in the art.
Disclosure of Invention
The embodiments of the present application provide a method and device for publishing information.
In a first aspect, an embodiment of the present application provides a method for publishing information, applied to a terminal, comprising: determining a video designated by a user according to an operation performed by the user on an information publishing interface; if a picture extraction operation on the video is detected, acquiring a picture extracted from the video; and if an information publishing operation is detected, publishing the extracted one or more pictures on an information flow page.

In a second aspect, an embodiment of the present application provides a method for publishing information, applied to a server, comprising: receiving a picture extraction request for a video sent by a terminal, wherein the video is determined according to an operation performed by a user on an information publishing interface; extracting a picture from the video; sending the extracted picture to the terminal; and if an information publishing request from the terminal is received, publishing the extracted one or more pictures indicated by the request on an information flow page.

In a third aspect, an embodiment of the present application provides a device for publishing information, disposed in a terminal, comprising: a determination unit configured to determine a video designated by a user according to an operation performed by the user on an information publishing interface; an acquisition unit configured to acquire a picture extracted from the video if a picture extraction operation on the video is detected; and a publishing unit configured to publish the extracted one or more pictures on an information flow page if an information publishing operation is detected.

In a fourth aspect, an embodiment of the present application provides a device for publishing information, disposed at a server, comprising: a receiving unit configured to receive a picture extraction request for a video sent by a terminal, wherein the video is determined according to an operation performed by a user on an information publishing interface; an extraction unit configured to extract a picture from the video; a sending unit configured to send the extracted picture to the terminal; and a publishing unit configured to publish the extracted one or more pictures indicated by an information publishing request on an information flow page if such a request is received from the terminal.
In a fifth aspect, an embodiment of the present application provides a computer device, comprising: one or more processors; and a storage device having one or more programs stored thereon. The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect, or the method described in any implementation of the second aspect.
In a sixth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method described in any of the implementation manners in the first aspect, or implements the method described in any of the implementation manners in the second aspect.
According to the method and device for publishing information provided by the embodiments of the present application, a video designated by the user is determined when the user is detected performing an operation on the information publishing interface; a picture extracted from the video is acquired when a picture extraction operation on the video is detected; and the extracted one or more pictures are published on an information flow page when an information publishing operation is detected. The embodiments break through the conventional thinking in the field: by extracting pictures from a video and publishing them on the information flow page, a summary of the video can be shown quickly, instead of requiring the video to be played before its content can be browsed. Moreover, because the pictures to be published are extracted from the video automatically, the user does not need to take screenshots manually; the manual work of publishing pictures from a video is simplified, and a more convenient way of publishing information is provided.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which some embodiments of the present application may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for publishing information in accordance with the present application;
FIG. 3 is a flow diagram of yet another embodiment of a method for publishing information in accordance with the present application;
FIG. 4 is a flow diagram of another embodiment of a method for publishing information in accordance with the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method for publishing information in accordance with the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use with the computer device of some embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for publishing information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include devices 101 and 102 and a network 103. The network 103 is the medium used to provide communication links between the devices 101 and 102, and may include various connection types, such as wired or wireless communication links, or fiber optic cables, to name a few.
The devices 101 and 102 may be hardware or software that supports network connectivity to provide various network services. When a device is hardware, it may be any of various electronic devices, including but not limited to smart phones, tablets, laptop computers, desktop computers, servers, and the like; such a hardware device may be implemented as a distributed group of multiple devices or as a single device. When a device is software, it may be installed in the electronic devices listed above and implemented, for example, as multiple pieces of software or software modules providing a distributed service, or as a single piece of software or software module. No specific limitation is made here.
In practice, a device may provide a respective network service by installing a respective client application or server application. After the device has installed the client application, it may be embodied as a client in network communications. Accordingly, after the server application is installed, it may be embodied as a server in network communications.
As an example, in fig. 1, device 101 is embodied as a client and device 102 is embodied as a server. For example, the device 101 may be a client installed with an application provided with an information flow page, and the device 102 may be a server of the application provided with the information flow page.
It should be noted that the method for publishing information provided in the embodiment of the present application may be executed by the device 101, and may also be executed by the device 102.
It should be understood that the number of networks and devices in fig. 1 is merely illustrative. There may be any number of networks and devices, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for publishing information in accordance with the present application is shown. The method for distributing information may comprise the steps of:
step 201, determining the video designated by the user according to the operation executed by the user on the information publishing interface.
In the present embodiment, in the case where an operation performed by the user on the information distribution interface is detected, the terminal (e.g., the device 101 shown in fig. 1) may determine the video specified by the user according to the operation performed by the user on the information distribution interface.
Typically, a user may open a client application provided with an information flow page and enter that page. The information flow page may be provided with a publishing entry button; when the user clicks it, the information publishing interface is entered. In some embodiments, a video selection button may be disposed on the information publishing interface. When the user clicks the video selection button, a list of videos stored locally or at the server can be displayed for the user to choose from; the video the user selects from the list is determined as the video designated by the user. In some embodiments, a video information input box may be arranged on the information publishing interface. When the user enters video information in the box, the video indicated by that information is determined as the video designated by the user. The video information input by the user may include, but is not limited to, a video identifier, a video download link, and the like.
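The two designation paths above (pick from a list, or type video information) can be sketched as a small dispatch function. This is an illustrative sketch only; the argument names (`selected_video`, `video_info`, `resolve`) are hypothetical and do not appear in the patent.

```python
def designated_video(selected_video=None, video_info=None, resolve=None):
    """Return the video the user designated on the publishing interface.

    A video picked from the displayed list takes effect directly;
    otherwise the video information typed into the input box (e.g. a
    video identifier or download link) is resolved to a video via the
    caller-supplied `resolve` function.
    """
    if selected_video is not None:
        return selected_video            # chosen via the video selection button
    if video_info is not None:
        return resolve(video_info)       # indicated via the video information input box
    return None                          # nothing designated yet
```

For example, `designated_video(video_info="vid-123", resolve=fetch_by_id)` would return whatever the (hypothetical) `fetch_by_id` lookup yields for that identifier.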
Step 202, if the picture extraction operation on the video is detected, obtaining the picture extracted from the video.
In the present embodiment, in the case where a picture extraction operation on a video is detected, a terminal can acquire a picture extracted from the video. The number of pictures extracted from the video at a time may be determined by a default number of pictures preset by a user or a number of pictures input when the user performs a picture extraction operation.
Generally, a picture extraction button can be arranged on the information publishing interface; when the user clicks it, the pictures extracted from the video can be acquired. In some embodiments, the terminal extracts the pictures itself. For example, if the video designated by the user is stored locally, the terminal may extract pictures from it directly; if it is not stored locally, the terminal may first download the video according to the video information input by the user and then extract pictures from it. In other embodiments, the terminal may send a picture extraction request for the video to a server (e.g., the device 102 shown in fig. 1) and receive the pictures the server extracts. For example, if the video is stored locally, the terminal may upload it to the server together with the picture extraction request; the server extracts pictures from the video and sends them back to the terminal. Alternatively, the terminal may send the video information along with the request; the server first looks the video up locally according to that information, or downloads it, then extracts pictures from it, and finally sends the extracted pictures to the terminal.
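The server-side branching just described (video attached to the request, video found locally, or video downloaded via the supplied information) can be sketched as follows. All names here (`request` keys, `local_videos`, `download`, `extract`) are illustrative assumptions, not identifiers from the patent.

```python
def handle_picture_request(request, local_videos, download, extract):
    """Server-side sketch of handling a picture extraction request.

    `request` is a dict from the terminal; `local_videos` maps video
    identifiers to videos stored at the server; `download` fetches a
    video from a link; `extract` produces pictures from a video.
    """
    if "video" in request:
        # The terminal uploaded the video together with the request.
        video = request["video"]
    elif request.get("video_id") in local_videos:
        # The server finds the video locally from the video information.
        video = local_videos[request["video_id"]]
    else:
        # Fall back to downloading the video via the supplied link.
        video = download(request["download_link"])
    # Extract pictures and return them for sending back to the terminal.
    return extract(video)
```

In practice `extract` would be real frame extraction (e.g. decoding the video and sampling frames); here it is left as a pluggable function.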
In general, the terminal may extract pictures from the video based on various reference information, including but not limited to user preference information, the video type, and other users' selection records. In some embodiments, the terminal first determines the video type of the video and then extracts pictures using a picture extraction algorithm matched to that type, where different video types can correspond to different picture extraction algorithms. In some embodiments, the terminal first obtains records of at least one other user's selections of at least one picture in the video, then extracts pictures based on those records: it can extract the pictures selected by the other users and/or pictures whose playing time is close to those selections.
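The selection-record approach can be sketched as choosing the moments other users picked most often, plus a frame close in playing time to each. This is a minimal illustrative sketch; the function and parameter names, and the one-second "nearby" window, are assumptions not specified in the patent.

```python
from collections import Counter

def timestamps_from_selections(selection_records, limit=4, window=1):
    """Pick frame timestamps (in seconds) to extract, based on other
    users' selection records.

    The most frequently selected moments come first; each is followed
    by a moment `window` seconds later, standing in for "a picture
    close to the playing time of the picture selected by other users".
    """
    counts = Counter(selection_records)
    picked = []
    for t, _ in counts.most_common():
        picked.append(t)
        neighbor = t + window
        # Add a nearby frame if it is not itself a recorded selection.
        if neighbor not in counts and neighbor not in picked:
            picked.append(neighbor)
        if len(picked) >= limit:
            break
    return picked[:limit]
```

For instance, if two other users both selected the frame at 10 s and one selected the frame at 25 s, the sketch returns the 10 s frame, its neighbor, then the 25 s frame and its neighbor.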
In addition, the user can edit the extracted pictures. Generally, the terminal may present the extracted pictures on the information publishing interface. If the terminal detects a selection operation on one or more of the displayed pictures, it can set the selected pictures to an editable state; if it then detects an editing operation on the selected pictures, it can edit them accordingly. The editing operation may include, but is not limited to, at least one of: a retain operation, a delete operation, a position adjustment operation, and the like. For example, the terminal may display the extracted pictures in a preset area of the information publishing interface, with a retain button and/or a delete button provided near that area. If the user selects one or more of the displayed pictures and then clicks the retain button, the terminal deletes the unselected pictures from the information publishing interface; if the user instead clicks the delete button, the terminal deletes the selected pictures. The preset area may be a designated area of the information publishing interface, including but not limited to its lower area, upper area, and the like.
It should be noted that the user may also acquire pictures extracted from the video repeatedly, with the pictures acquired each time being different. In general, the number of times pictures are extracted from the video is determined by the number of times the user performs the picture extraction operation. In some embodiments, if a picture extraction operation is detected again, the terminal acquires pictures extracted from the video that differ from the previously extracted ones.
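One simple way to guarantee that repeat extraction requests yield different pictures is to track what has already been served. This is an illustrative sketch under that assumption; the patent only requires the behavior, not this mechanism.

```python
def extract_fresh(candidates, already_served, k):
    """Return up to `k` candidate frames not returned before, and
    record them so the next extraction request skips them.

    `candidates` is an ordered list of candidate frames (e.g. frame
    timestamps); `already_served` is a mutable set shared across
    requests for the same video.
    """
    fresh = [c for c in candidates if c not in already_served]
    batch = fresh[:k]
    already_served.update(batch)
    return batch
```

Calling it repeatedly with the same candidate list walks through the candidates without repetition, returning an empty batch once they are exhausted.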
In step 203, if the information publishing operation is detected, the extracted one or more pictures are published on the information flow page. In some embodiments, the information flow page is a page of a social space in a social application.
In this embodiment, in the case where the information distribution operation is detected, the terminal may distribute the extracted one or more pictures on the information flow page. Specifically, in the case of detecting the information publishing operation, the terminal may send the extracted one or more pictures to the server, and the server may publish the extracted one or more pictures on the information flow page.
Generally, an information publishing button can be arranged on the information publishing interface. When the user clicks it, the extracted one or more pictures can be published on the information flow page. Specifically, the extracted pictures can be displayed on the information publishing interface: if the user clicks the information publishing button directly, all the extracted pictures are published on the information flow page; if the user first retains or deletes one or more pictures and then clicks the button, the pictures that were not deleted are published.
In some embodiments, the extracted one or more pictures and the video from which they were derived may be published together in one message on the information flow page. After browsing the pictures, a user who is interested in the video can then play it directly.
According to the method for publishing information provided by this embodiment, a video designated by the user is determined when the user is detected performing an operation on the information publishing interface; a picture extracted from the video is acquired when a picture extraction operation on the video is detected; and the extracted one or more pictures are published on an information flow page when an information publishing operation is detected. This breaks through the conventional thinking in the field: by extracting pictures from a video and publishing them on the information flow page, a summary of the video can be shown quickly, instead of requiring the video to be played before its content can be browsed. Moreover, because the pictures to be published are extracted from the video automatically, the user does not need to take screenshots manually; the manual work of publishing pictures from a video is simplified, and a more convenient way of publishing information is provided.
With further reference to FIG. 3, shown is a flow 300 that is yet another embodiment of a method for publishing information in accordance with the present application. The method for distributing information may comprise the steps of:
step 301, determining a video designated by a user according to an operation executed by the user on an information publishing interface.
In this embodiment, the specific operation of step 301 has been described in detail in step 201 in the embodiment shown in fig. 2, and is not described herein again.
Step 302, if the picture extraction operation on the video is detected, user preference information is obtained.
In the present embodiment, in the case where a picture extraction operation on a video is detected, the terminal may acquire user preference information. The user preference information may include, but is not limited to, preferred content, preferred color system, and the like. The preferred content may include, but is not limited to, people, animals, scenes, and the like. The preferred color system may include, but is not limited to, a cold color system, a warm color system, an intermediate color system, and the like.
Generally, the terminal may acquire the user preference information by at least one of:
1. and acquiring default preference information preset by a user.
Generally, the user sets one piece of default preference information. For example, the user may set the warm color system as the default preference information.
2. The method comprises the steps of determining the video type of a video, and obtaining first preference information matched with the video type from a first preference information set configured by a user in advance.
The first preference information may include, but is not limited to, preferred content, a preferred color system, and the like, and corresponds to video types one to one; different video types may correspond to different first preference information. In some embodiments, the preferred content indicates a preference regarding the image content contained in the pictures extracted from the video, such as, but not limited to, "person", "animal or scenery", "building", "city", "nature", and the like; the preferred color system indicates the color system of the pictures extracted from the video, such as, but not limited to, "warm color system", "cold color system", "intermediate color system", and the like.
For example, suppose three pieces of first preference information have the preferred contents "person", "animal or scenery", and "building", respectively, and the video types include nature documentary, romance drama, and epic drama: the romance drama type corresponds to the first preference information whose preferred content is "person"; the nature documentary type corresponds to that whose preferred content is "animal or scenery"; and the epic drama type corresponds to that whose preferred content is "building". When pictures are extracted based on this first preference information, pictures containing person images are more likely to be extracted from romance drama videos, pictures containing animal or scenery images from nature documentary videos, and pictures containing building images from epic drama videos. In some embodiments, one video type may correspond to multiple pieces of first preference information; for instance, a streetscape type video may simultaneously correspond to two pieces of first preference information whose preferred contents are "building" and "person".
Generally, the video type of the video may be determined in two ways: firstly, acquiring a video type input or selected by a user; and secondly, acquiring the video type of the video from the website for acquiring the video by the user.
3. And acquiring second preference information matched with the video from a second preference information set pre-configured by the user.
The second preference information may include, but is not limited to, preferred content, a preferred color system, and the like, and corresponds to videos one to one; different videos may correspond to different second preference information. Its configured content may be similar to that of the first preference information; the main difference is that the first preference information corresponds to a video type, while the second preference information corresponds to an individual video.
4. Obtaining a history selection record of the extracted history pictures of the user, and determining user preference information based on the history selection record.
In general, the terminal may perform statistical analysis on the user's historical selection records to determine the user preference information. For example, suppose a statistical analysis of a user's historical selection records for a video yields the following: among the pictures obtained the first time the video was extracted, warm color system pictures accounted for 58%, cold color system pictures 38%, and intermediate color system pictures 4%; among the extracted historical pictures the user then selected, warm color system pictures accounted for 42%, cold color system pictures 52%, and intermediate color system pictures 6%. Since the share of cold color system pictures rose after user selection far more than that of warm or intermediate color system pictures, the preferred color system corresponding to this video is determined to be the cold color system.
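The statistical analysis in the example above can be reproduced by comparing each color system's share before and after user selection. The "largest rise in share" criterion is an assumption consistent with the worked example, not a formula stated in the patent.

```python
def preferred_color_system(extracted_counts, selected_counts):
    """Determine the preferred color system from a historical selection
    record.

    `extracted_counts` maps each color system to how many extracted
    pictures fell in it; `selected_counts` does the same for the
    pictures the user went on to select. The system whose share rises
    the most after selection is taken as preferred.
    """
    total_extracted = sum(extracted_counts.values())
    total_selected = sum(selected_counts.values())
    gains = {
        system: selected_counts[system] / total_selected
                - extracted_counts[system] / total_extracted
        for system in extracted_counts
    }
    # The color system with the largest increase in share wins.
    return max(gains, key=gains.get)
```

With the numbers from the example (58/38/4 extracted vs. 42/52/6 selected), the cold color system's share rises by 14 percentage points while the others rise by 2 or fall, so "cold" is returned.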
Step 303, extracting pictures matching the user preference information from the video.
In this embodiment, the terminal may extract a picture matching the user preference information from the video.
Generally, the terminal may first analyze the pictures in the video to determine which match the user preference information, and then extract a certain number of them. The number of pictures extracted from the video at a time may be determined by a default number preset by the user or by a number the user inputs when performing the picture extraction operation.
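Step 303 can be sketched as a filter-then-sample pass over analyzed frames. The frame representation below, (timestamp, content tags), is hypothetical; in practice the tags would come from some image analysis of each frame, which the patent does not specify.

```python
def pick_matching_frames(frames, preferred_content, count):
    """Extract `count` frames matching the user's preferred content.

    `frames` is a list of (timestamp, tags) pairs, where `tags` is a
    set of content labels for the frame. Frames whose tags contain the
    preferred content are kept, then `count` of them are taken, evenly
    spread over the matches so the result spans the video.
    """
    matched = [f for f in frames if preferred_content in f[1]]
    if len(matched) <= count:
        return matched
    step = len(matched) / count
    return [matched[int(i * step)] for i in range(count)]
```

Spreading the picks evenly is one reasonable design choice; a real implementation might instead rank matches by how strongly they fit the preference.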
Step 304, if the information publishing operation is detected, publishing the extracted one or more pictures on an information flow page.
In this embodiment, the specific operation of step 304 has been described in detail in step 203 in the embodiment shown in fig. 2, and is not described herein again.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the method for publishing information in this embodiment highlights the step of extracting pictures based on the user preference information. Accordingly, the pictures extracted from the video by the scheme described in this embodiment conform to the user's preference, so that targeted picture extraction is realized.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for publishing information in accordance with the present application is shown. The method for publishing information may comprise the following steps:
step 401, receiving a picture extraction request for a video sent by a terminal.
In this embodiment, a server (e.g., the device 102 shown in fig. 1) may receive a picture extraction request for a video sent by a terminal (e.g., the device 101 shown in fig. 1). The video can be a video determined according to an operation performed by a user on the information publishing interface.
Specifically, in the case where an operation performed by the user on the information publishing interface is detected, the terminal may determine the video specified by the user according to that operation. In the case where a picture extraction operation on the video is detected, the terminal may send a picture extraction request for the video to the server. At this time, the server may receive the picture extraction request.
Typically, a user may open a client application provided with an information flow page and enter the information flow page. A publishing entry key may be arranged on the information flow page. When the user clicks the publishing entry key, the information publishing interface may be accessed. A picture extraction key may be arranged on the information publishing interface. When the user clicks the picture extraction key, a picture extraction request for the video may be sent to the server. In some embodiments, a video selection key may be arranged on the information publishing interface. When the user clicks the video selection key, a list of locally stored videos may be displayed for selection by the user. When the user selects a video from the video list, the selected video may be determined as the video designated by the user. In this case, the terminal generally sends the video to the server at the same time as sending the picture extraction request for the video. In some embodiments, a video information input box may be arranged on the information publishing interface. When the user inputs video information in the video information input box, the video indicated by the input video information may be determined as the video designated by the user. In this case, the terminal generally sends the video information to the server at the same time as sending the picture extraction request for the video. The video information input by the user may include, but is not limited to, a video identifier, a video download link, and the like.
In step 402, a picture is extracted from a video.
In this embodiment, the server may extract a picture from the video. The number of pictures extracted from the video at a time may be determined by a default number of pictures preset by a user or a number of pictures input when the user performs a picture extraction operation.
Generally, in the case of receiving a video from a terminal, a server may directly extract a picture from the video. In the case of receiving video information from the terminal, the server may first search for a video locally according to the video information, or download a video according to the video information, and then extract a picture from the video.
In general, a server may extract pictures from a video based on a variety of reference information. The reference information may include, but is not limited to, user preference information, video type, and other user's selection records, among others. In some embodiments, the server may first determine the video type of the video; and then extracting pictures from the video based on a picture extraction algorithm matched with the video type. Wherein, different video types can correspond to different picture extraction algorithms. In some embodiments, the server may first obtain a selection record of at least one picture in the video by at least one other user; pictures are then extracted from the video based on the selected recording. The server side can extract at least one picture selected by other users and/or pictures close to the playing time of the picture selected by the at least one other user from the video.
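Of the reference-information variants above, the other-users'-selection branch can be sketched as follows. The two-second window and the flat list-of-timestamps representation are assumptions; the patent only says pictures "close to the playing time" of the pictures selected by at least one other user:

```python
from typing import List

def extract_from_selections(frame_times: List[float],
                            selected_times: List[float],
                            window: float = 2.0) -> List[float]:
    """Keep frames whose playing time falls within `window` seconds of a
    picture that at least one other user previously selected (this also
    keeps the selected timestamps themselves, when they are candidates)."""
    return [t for t in frame_times
            if any(abs(t - s) <= window for s in selected_times)]

# Candidate frames sampled once per second over a 60 s video; other users
# previously selected pictures at the 10 s and 40 s marks.
times = [float(t) for t in range(0, 60)]
near = extract_from_selections(times, selected_times=[10.0, 40.0])
```

With these figures, `near` contains the ten frames at 8–12 s and 38–42 s.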
It should be noted that the user may also repeatedly acquire the pictures extracted from the video for multiple times, and the pictures acquired each time are different. In general, the number of times a picture extracted from a video is acquired may be determined by the number of times a user performs a picture extraction operation. In some embodiments, if the picture extraction request is received again, the server may extract a picture different from the previously extracted picture from the video.
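A minimal no-repeat extraction, as described above, can be sketched by remembering which pictures were already served. Identifying frames by index and keeping `served` in memory are assumptions; a real server would persist this state per user session:

```python
class PictureExtractor:
    """Return a fresh batch of frame ids on every request, never repeating
    a picture that an earlier request already returned."""

    def __init__(self, frame_ids):
        self.frame_ids = list(frame_ids)
        self.served = set()

    def next_batch(self, count):
        # Skip everything served before, then take the next `count` frames.
        batch = [f for f in self.frame_ids if f not in self.served][:count]
        self.served.update(batch)
        return batch

extractor = PictureExtractor(range(9))
first = extractor.next_batch(4)   # first picture extraction request
second = extractor.next_batch(4)  # repeated request: disjoint from the first
```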
And step 403, sending the extracted picture to the terminal.
In this embodiment, the server may send the extracted picture to the terminal.
In step 404, if an information publishing request from the terminal is received, the extracted one or more pictures indicated by the information publishing request are published on the information flow page. In some embodiments, the information flow page is a page of a social space in a social application.
In this embodiment, in the case of receiving an information publishing request from the terminal, the server may publish the extracted one or more pictures indicated by the information publishing request on the information flow page.
Specifically, in the case where the information publishing operation is detected, the terminal may send an information publishing request to the server. At this time, the server may receive the information publishing request.
Generally, an information publishing key can be arranged on the information publishing interface. When the user clicks the information publishing key, the information publishing request can be sent to the server. In the case of receiving the information publishing request, the server may publish the extracted one or more pictures on the information flow page. Specifically, the extracted picture can be displayed on the information publishing interface. If the user directly clicks the publishing key, all the extracted pictures can be published on the information flow page. If the user selects to reserve or delete one or more pictures and then clicks the publishing key, the pictures which are not deleted can be published on the information flow page.
In some embodiments, the extracted one or more pictures and the video from which they are derived may be published together in one message of the information flow page. After browsing the one or more pictures, a user who is interested in the video can play it directly.
According to the method for publishing information provided by the above embodiment of the application, in the case of receiving a picture extraction request for a video sent by a terminal, pictures are extracted from the video and sent to the terminal; and in the case of receiving an information publishing request from the terminal, the extracted one or more pictures indicated by the information publishing request are published on an information flow page. The embodiment of the application breaks through the inertial thinking in the field: by extracting pictures from a video and publishing them on an information flow page, a summary of the video can be shown quickly, without the video having to be played in order to browse its content. Moreover, since the pictures are extracted from the video and published automatically, the user does not need to capture screenshots manually; the manual operation of publishing pictures from a video is simplified, and a more convenient information publishing mode is provided.
With further reference to FIG. 5, shown is a flow 500 of yet another embodiment of a method for publishing information in accordance with the present application. The method for releasing the information is applied to the server and can comprise the following steps:
step 501, receiving a picture extraction request for a video sent by a terminal.
In this embodiment, the specific operation of step 501 has been described in detail in step 401 in the embodiment shown in fig. 4, and is not described herein again.
Step 502, user preference information is obtained.
In this embodiment, the server may obtain the user preference information. Wherein the user preference information may include, but is not limited to, at least one of: preferred content, preferred color systems, and the like. The preferred content may include, but is not limited to, people, animals, scenes, and the like. The preferred color system may include, but is not limited to, a cold color system, a warm color system, an intermediate color system, and the like.
Generally, the server may obtain the user preference information by at least one of the following methods:
1. and acquiring default preference information preset by a user.
Generally, the user may preset default preference information. For example, the user may set the warm color system as the default preference information.
2. The method comprises the steps of determining the video type of a video, and obtaining first preference information matched with the video type from a first preference information set configured by a user in advance.
The first preference information may include, but is not limited to, preference content, preference color system, and the like, and corresponds to a video type one to one. Different video types may correspond to different first preference information. In some embodiments, the preference content may indicate a preference for the image content contained in the picture extracted from the video, such as, but not limited to, "people", "animals or scenery", "buildings", "cities", "nature", etc.; the preferred color system may indicate a color system of a picture extracted from the video, such as the preferred color system may include, but is not limited to, "warm color system", "cold color system", "intermediate color system", and the like.
For example, suppose three different items of first preference information have preference contents of "person", "animal or scenery", and "building", respectively, and the video types include nature documentary, romance drama, and epic drama. The romance-drama type corresponds to the first preference information whose preference content is "person"; the nature-documentary type corresponds to the first preference information whose preference content is "animal or scenery"; and the epic-drama type corresponds to the first preference information whose preference content is "building". When pictures are extracted based on the first preference information, pictures containing person images are more likely to be extracted from videos of the romance-drama type, pictures containing animal or scenery images from videos of the nature-documentary type, and pictures containing building images from videos of the epic-drama type. In some embodiments, one video type may correspond to a plurality of items of first preference information; for example, a video of the urban-drama type may simultaneously correspond to two items of first preference information whose preference contents are "building" and "person".
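The one-to-one (or one-to-many) correspondence between video types and first preference information can be sketched as a simple lookup table. All the type and content names below are illustrative, as is the fallback default:

```python
# Hypothetical first-preference-information set keyed by video type; a type
# may map to several preference contents at once.
FIRST_PREFERENCES = {
    "romance drama": ["person"],
    "nature documentary": ["animal or scenery"],
    "epic drama": ["building"],
    "urban drama": ["building", "person"],
}

def matched_preferences(video_type, default=("person",)):
    """Obtain the first preference information matching the video type,
    falling back to a default when the type is not configured."""
    return FIRST_PREFERENCES.get(video_type, list(default))
```

A picture-extraction step would then favor frames whose detected content appears in `matched_preferences(video_type)`.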
Generally, the video type of the video may be determined in either of two ways: first, acquiring the video type input or selected by the user; second, acquiring the video type of the video from the website from which the user obtained the video.
3. And acquiring second preference information matched with the video from a second preference information set pre-configured by the user.
The second preference information may include, but is not limited to, preferred content, preferred color system, and the like, and corresponds to the video one to one. Different videos may correspond to different second preference information. The second preference information may be similar to the first preference information in configuration content, and the first preference information is mainly different from the second preference information in that: the first preference information corresponds to a video type and the second preference information corresponds to a video.
4. Obtaining a history selection record of the extracted history pictures of the user, and determining user preference information based on the history selection record.
In general, the server may perform statistical analysis on the user's historical selection records to determine the user preference information. For example, suppose a statistical analysis of a user's historical selection records for a video yields the following results: among the pictures first extracted from the video, 58% belong to a warm color system, 38% to a cold color system, and 4% to an intermediate color system; among the extracted historical pictures the user then selected, 42% belong to the warm color system, 52% to the cold color system, and 6% to the intermediate color system. Since the proportion of cold-color-system pictures increased far more after user selection than that of warm-color-system and intermediate-color-system pictures, the preferred color system corresponding to the video is determined to be the cold color system.
In step 503, a picture matching with the user preference information is extracted from the video.
In this embodiment, the server may extract a picture matching the user preference information from the video.
Generally, the server may first analyze the pictures in the video, determine the pictures matching with the user preference information, and then extract a certain number of pictures from the determined pictures. The number of pictures extracted from the video at a time may be determined by a default number of pictures preset by a user or a number of pictures input when the user performs a picture extraction operation.
And step 504, sending the extracted picture to the terminal.
Step 505, if an information publishing request from the terminal is received, publishing the extracted one or more pictures indicated by the information publishing request on the information flow page.
In the present embodiment, the specific operations of steps 504 and 505 have been described in detail in steps 403 and 404 in the embodiment shown in fig. 4, and are not described herein again.
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 4, the flow 500 of the method for publishing information in this embodiment highlights the step of extracting pictures based on the user preference information. Accordingly, the pictures extracted from the video by the scheme described in this embodiment conform to the user's preference, so that targeted picture extraction is realized.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use in implementing a computer device (e.g., devices 101, 102 shown in FIG. 1) of an embodiment of the present application is shown. The computer device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or electronic device. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a determination unit, an acquisition unit, and a publishing unit. Here, the names of these units do not constitute a limitation to the units themselves in this case; for example, the determination unit may also be described as "a unit that determines a video designated by a user according to an operation performed by the user on an information publishing interface". As another example, it can be described as: a processor includes a receiving unit, an extracting unit, a sending unit, and a publishing unit. The names of these units likewise do not constitute a limitation to the units themselves; for example, the receiving unit may also be described as "a unit for receiving a picture extraction request for a video sent by a terminal".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the computer device described in the above embodiments; or may exist separately and not be incorporated into the computer device. The computer readable medium carries one or more programs which, when executed by the computing device, cause the computing device to: determining a video designated by a user according to an operation executed by the user on an information publishing interface; if the picture extraction operation on the video is detected, obtaining a picture extracted from the video; and if the information publishing operation is detected, publishing the extracted one or more pictures on the information flow page. Or cause the computer device to: receiving a picture extraction request for a video sent by a terminal, wherein the video is determined according to an operation executed by a user on an information publishing interface; extracting pictures from the video; sending the extracted picture to a terminal; and if the information release request from the terminal is received, releasing the extracted one or more pictures indicated by the information release request on the information flow page.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (17)

1. A method for releasing information is applied to a terminal and comprises the following steps:
determining a video appointed by a user according to an operation executed by the user on an information publishing interface;
if the picture extraction operation on the video is detected, obtaining a picture extracted from the video;
if the information publishing operation is detected, publishing the extracted one or more pictures on an information flow page;
wherein the obtaining of the picture extracted from the video comprises:
acquiring a selection record of at least one picture in the video from at least one other user;
based on the selection record, a picture is extracted from the video.
2. The method of claim 1, wherein after said obtaining the picture extracted from the video, further comprising:
and if the picture extraction operation is detected again, acquiring a picture which is extracted from the video and is different from the picture extracted before.
3. The method of claim 1, wherein the obtaining a picture extracted from the video comprises:
sending a picture extraction request for the video to a server, and receiving the picture extracted from the video sent by the server.
4. The method of claim 1, wherein after said obtaining the picture extracted from the video, further comprising:
displaying the extracted picture on the information publishing interface;
if the selected operation of the displayed one or more pictures is detected, setting the selected pictures to be in an editable state;
and if the selected picture is detected to be edited, editing the selected picture.
5. The method of claim 4, wherein the editing operation comprises at least one of: retention operation, deletion operation, and position adjustment operation.
6. The method of claim 1, wherein the obtaining a picture extracted from the video comprises:
acquiring user preference information;
extracting pictures matched with the user preference information from the video;
the obtaining mode of the user preference information comprises at least one of the following modes:
acquiring default preference information preset by the user;
determining the video type of the video, and acquiring first preference information matched with the video type from a first preference information set pre-configured by the user, wherein the first preference information corresponds to the video type one to one;
acquiring second preference information matched with the video from a second preference information set pre-configured by the user, wherein the second preference information corresponds to the video one to one;
obtaining a history selection record of the user for the extracted history pictures, and determining the user preference information based on the history selection record.
7. The method of claim 6, wherein the video type of the video is determined in a manner comprising at least one of:
acquiring the video type input or selected by the user;
and acquiring the video type of the video from the website for acquiring the video by the user.
8. The method of claim 6, wherein the user preference information comprises at least one of: preferred content, preferred color system.
9. The method of claim 1, wherein the obtaining a picture extracted from the video comprises:
determining a video type of the video;
and extracting pictures from the video based on picture extraction algorithms matched with the video types, wherein different video types correspond to different picture extraction algorithms.
10. The method of claim 1, wherein said extracting pictures from the video based on the selected recording comprises:
and extracting the pictures selected by the at least one other user and/or the pictures close to the playing time of the pictures selected by the at least one other user from the video.
11. The method according to one of claims 1 to 10, wherein the number of pictures extracted at a time from the video is determined by a default number of pictures preset by the user or a number of pictures input when the user performs a picture extraction operation.
12. A method for publishing information is applied to a server and comprises the following steps:
receiving a picture extraction request for a video sent by a terminal, wherein the video is determined according to an operation executed by a user on an information publishing interface;
extracting pictures from the video;
sending the extracted picture to the terminal;
if an information release request from the terminal is received, releasing the extracted one or more pictures indicated by the information release request on an information flow page;
wherein the extracting pictures from the video comprises:
acquiring a selection record of at least one picture in the video from at least one other user;
based on the selection record, a picture is extracted from the video.
13. The method of claim 12, wherein after said extracting a picture from said video, further comprising:
and if the picture extraction request is received again, extracting a picture different from the picture extracted before from the video.
14. The method of claim 12, wherein the extracting a picture from the video comprises:
acquiring user preference information;
extracting pictures matched with the user preference information from the video;
the obtaining mode of the user preference information comprises at least one of the following modes:
acquiring default preference information preset by the user;
determining the video type of the video, and acquiring first preference information matched with the video type from a first preference information set pre-configured by the user, wherein the first preference information corresponds to the video type one to one;
acquiring second preference information matched with the video from a second preference information set pre-configured by the user, wherein the second preference information corresponds to the video one to one;
obtaining a history selection record of the user for the extracted history pictures, and determining the user preference information based on the history selection record.
15. The method of claim 12, wherein the extracting a picture from the video comprises:
determining a video type of the video;
and extracting pictures from the video based on picture extraction algorithms matched with the video types, wherein different video types correspond to different picture extraction algorithms.
16. A computer device, comprising:
one or more processors;
a storage device on which one or more programs are stored;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11 or the method of any one of claims 12-15.
17. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-11 or the method of any one of claims 12-15.
CN201911011648.8A 2019-10-23 2019-10-23 Method and device for publishing information Active CN110708574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911011648.8A CN110708574B (en) 2019-10-23 2019-10-23 Method and device for publishing information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911011648.8A CN110708574B (en) 2019-10-23 2019-10-23 Method and device for publishing information

Publications (2)

Publication Number Publication Date
CN110708574A CN110708574A (en) 2020-01-17
CN110708574B true CN110708574B (en) 2022-01-21

Family

ID=69202124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911011648.8A Active CN110708574B (en) 2019-10-23 2019-10-23 Method and device for publishing information

Country Status (1)

Country Link
CN (1) CN110708574B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108102A (en) * 2018-01-02 2018-06-01 联想(北京)有限公司 Image recommendation method and electronic equipment
CN109547852A (en) * 2017-09-21 2019-03-29 江苏华夏知识产权服务有限公司 Method for converting a video into a poster
CN109714629A (en) * 2019-01-30 2019-05-03 南华大学 Method and system for generating stop-motion animation
CN110248207A (en) * 2018-03-08 2019-09-17 株式会社理光 Image presence display server, display method, recording medium, and display system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4153963B2 (en) * 2006-06-12 2008-09-24 オリンパスメディカルシステムズ株式会社 Endoscope insertion shape detection device
KR20090042506A (en) * 2007-10-26 2009-04-30 주식회사 크레듀 A device and method for layering moving picture
US8307395B2 (en) * 2008-04-22 2012-11-06 Porto Technology, Llc Publishing key frames of a video content item being viewed by a first user to one or more second users
CN104333773A (en) * 2013-12-18 2015-02-04 乐视网信息技术(北京)股份有限公司 Video recommendation method and server
CN106055996B (en) * 2016-05-18 2021-03-16 维沃移动通信有限公司 Multimedia information sharing method and mobile terminal
CN106791909B (en) * 2016-12-01 2020-03-17 中央电视台 Video data processing method and device and server
CN109729426B (en) * 2017-10-27 2022-03-01 优酷网络技术(北京)有限公司 Method and device for generating video cover image
CN109936763B (en) * 2017-12-15 2022-07-01 腾讯科技(深圳)有限公司 Video processing and publishing method
CN108989609A (en) * 2018-08-10 2018-12-11 北京微播视界科技有限公司 Video cover generation method, device, terminal device and computer storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Design and Implementation of a Video Key-Frame Extractor in a Broadcast Monitoring System"; Ke Lei, Pang Long; Video Engineering (《电视技术》); 2015-06-17; full text *

Also Published As

Publication number Publication date
CN110708574A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
US10313726B2 (en) Distributing media content via media channels based on associated content being provided over other media channels
US9460752B2 (en) Multi-source journal content integration systems and methods
CN108833787B (en) Method and apparatus for generating short video
US11800201B2 (en) Method and apparatus for outputting information
CN111447489A (en) Video processing method and device, readable medium and electronic equipment
JP2009181468A (en) Image search log collection system, image search log collection method and program
CN109743245B (en) Method and device for creating group
CN109862100B (en) Method and device for pushing information
US10628955B2 (en) Information processing device, information processing method, and program for identifying objects in an image
US10674183B2 (en) System and method for perspective switching during video access
WO2023051294A1 (en) Prop processing method and apparatus, and device and medium
KR20180111981A (en) Edit real-time content with limited interaction
CN109168012B (en) Information processing method and device for terminal equipment
JP2007310596A (en) Service providing device, computer program and recording medium
CN108038172B (en) Search method and device based on artificial intelligence
CN109241344B (en) Method and apparatus for processing information
US10264324B2 (en) System and method for group-based media composition
CN112040312A (en) Split-screen rendering method, device, equipment and storage medium
WO2023134617A1 (en) Template selection method and apparatus, and electronic device and storage medium
CN110708574B (en) Method and device for publishing information
CN113220381A (en) Click data display method and device
CN112016280B (en) File editing method and device and computer readable medium
CN113473236A (en) Processing method and device for screen recording video, readable medium and electronic equipment
CN112463998A (en) Album resource processing method, apparatus, electronic device and storage medium
CN110703971A (en) Method and device for publishing information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant