CN112258214A - Video delivery method and device and server - Google Patents

Video delivery method and device and server

Info

Publication number
CN112258214A
CN112258214A (application CN202011003534.1A)
Authority
CN
China
Prior art keywords
video
prediction data
user
delivery
effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011003534.1A
Other languages
Chinese (zh)
Inventor
李银辉 (Li Yinhui)
刘旭东 (Liu Xudong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011003534.1A priority Critical patent/CN112258214A/en
Publication of CN112258214A publication Critical patent/CN112258214A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0242 Determining effectiveness of advertisements
    • G06Q30/0245 Surveys
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0277 Online advertisement

Abstract

The disclosure relates to a video delivery method, a video delivery apparatus, and a server, and belongs to the technical field of computer applications. The delivery method includes: acquiring uploaded material; generating a video from the material; acquiring delivery effect prediction data corresponding to the video; and displaying the delivery effect prediction data corresponding to the video to the user, so that the user can choose whether to deliver the video or to deliver the video after modification according to the delivery effect prediction data corresponding to the video. With this video delivery method, the delivery effect prediction data corresponding to the video can be obtained and displayed to the user before the video is delivered. Compared with the related art, in which the user can obtain the delivery effect only after the video has been delivered for a long time, the method lets the user know the expected delivery effect of the video in advance and deliver or modify the video according to the delivery effect prediction data, which improves both the delivery effect and the delivery efficiency of the video.

Description

Video delivery method and device and server
Technical Field
The present disclosure relates to the field of computer application technologies, and in particular, to a video delivery method, an apparatus, and a server.
Background
Currently, with the development of internet technology, delivering advertisements to users over the network has advantages such as wide coverage and strong immediacy, and is therefore widely used; for example, advertisements can be delivered to users on media such as web pages and application programs (APPs). However, in the related art, a user cannot know the production quality or the expected delivery effect of an advertisement before it is delivered, and can only obtain the delivery effect after the advertisement has been delivered for a long time, which results in poor delivery effect and low delivery efficiency.
Disclosure of Invention
The present disclosure provides a video delivery method, apparatus, server, storage medium, and computer program product, so as to at least solve the problem of poor delivery effect and low delivery efficiency in the related art. The technical solution of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a video delivery method, including: acquiring uploaded material; generating a video from the material; acquiring delivery effect prediction data corresponding to the video; and displaying the delivery effect prediction data corresponding to the video to the user, so that the user can choose whether to deliver the video or to deliver the video after modification according to the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the acquiring of the delivery effect prediction data corresponding to the video includes: searching, according to the video, a database for actual delivery effect data of a sample video matching the video; and determining the actual delivery effect data of the sample video matching the video as the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the searching, according to the video, the database for the actual delivery effect data of the sample video matching the video includes: performing identification processing on the video to obtain the industry, type, and content corresponding to the video; and searching the database for actual delivery effect data of a sample video that has the same industry and type as the video and whose content matches that of the video.
In an embodiment of the present disclosure, the method further includes: acquiring a target case acquisition request of the user for the video; searching the database for a set number of sample videos that have the same industry and type as the video and the highest actual delivery effect data; and displaying the found set number of sample videos to the user as target cases, so that the user can modify the video according to the target cases and then deliver the modified video.
In an embodiment of the present disclosure, the acquiring of the delivery effect prediction data corresponding to the video includes: acquiring a delivery effect prediction data acquisition request of the user for the video; and acquiring the delivery effect prediction data corresponding to the video according to the delivery effect prediction data acquisition request.
In an embodiment of the present disclosure, the delivery effect prediction data includes click-through rate prediction data and/or conversion rate prediction data.
According to a second aspect of the embodiments of the present disclosure, there is provided a video delivery apparatus, including: a first acquisition module configured to acquire uploaded material; a generation module configured to generate a video from the material; a second acquisition module configured to acquire delivery effect prediction data corresponding to the video; and a first display module configured to display the delivery effect prediction data corresponding to the video to the user, so that the user can choose whether to deliver the video or to deliver the video after modification according to the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the second acquisition module includes: a search unit configured to search, according to the video, a database for actual delivery effect data of a sample video matching the video; and a determination unit configured to determine the actual delivery effect data of the sample video matching the video as the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the search unit includes: an identification subunit configured to perform identification processing on the video to obtain the industry, type, and content corresponding to the video; and a search subunit configured to search the database for actual delivery effect data of a sample video that has the same industry and type as the video and whose content matches that of the video.
In an embodiment of the present disclosure, the apparatus further includes: a third acquisition module configured to acquire a target case acquisition request of the user for the video; a search module configured to search the database for a set number of sample videos that have the same industry and type as the video and the highest actual delivery effect data; and a second display module configured to display the found set number of sample videos to the user as target cases, so that the user can modify the video according to the target cases and then deliver the modified video.
In an embodiment of the present disclosure, the second acquisition module includes: a request acquisition unit configured to acquire a delivery effect prediction data acquisition request of the user for the video; and a data acquisition unit configured to acquire, according to the delivery effect prediction data acquisition request, the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the delivery effect prediction data includes click-through rate prediction data and/or conversion rate prediction data.
According to a third aspect of the embodiments of the present disclosure, there is provided a server, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the video delivery method described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein instructions in the storage medium, when executed by a processor of a server, enable the server to perform the video delivery method described above.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product which, when executed by a processor of a server, enables the server to perform the video delivery method described above.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: the delivery effect prediction data corresponding to the video can be obtained and displayed to the user before the video is delivered. Compared with the related art, in which the user can obtain the delivery effect only after the video has been delivered for a long time, the user can learn the expected delivery effect of the video in advance and can deliver or modify the video according to the delivery effect prediction data, which helps improve the delivery effect and delivery efficiency of the video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow chart illustrating a video delivery method according to an exemplary embodiment.
Fig. 2 is a scene diagram illustrating a video delivery method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating the acquisition of delivery effect prediction data corresponding to a video in a video delivery method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating another way of acquiring delivery effect prediction data corresponding to a video in a video delivery method according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating the search for actual delivery effect data of a sample video matching a video in a video delivery method according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating steps performed after delivery effect prediction data corresponding to a video is displayed to a user in a video delivery method according to an exemplary embodiment.
Fig. 7 is a flow chart illustrating another video delivery method in accordance with an exemplary embodiment.
Fig. 8 is a block diagram illustrating a video delivery apparatus according to an example embodiment.
Fig. 9 is a block diagram of another video delivery apparatus according to an example embodiment.
FIG. 10 is a block diagram illustrating a server in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a video delivery method according to an exemplary embodiment. The method is used in a server and, as shown in fig. 1, includes the following steps.
In step S101, the uploaded material is acquired.
It should be noted that the execution subject of the video delivery method of the present disclosure is a server. The video delivery method of the embodiment of the present disclosure may be executed by the video delivery apparatus of the embodiment of the present disclosure, and the video delivery apparatus of the embodiment of the present disclosure may be configured in any server to execute the video delivery method of the embodiment of the present disclosure.
In the embodiments of the present disclosure, the uploaded material includes, but is not limited to, material that the user has submitted to the delivery platform and from which no video has yet been generated. The delivery platform includes, but is not limited to, an application program (APP) served by the server, a web page, or a platform on a terminal device, which is not limited herein. The material includes, but is not limited to, text, pictures, audio, video, and the like.
In step S102, a video is generated from the material.
It can be understood that video advertisements are common at present and, compared with image-and-text advertisements, are more engaging and interactive.
In the embodiments of the present disclosure, generating the video according to the material may be done in at least the following two possible ways.
Mode 1: the user's editing operations on the material are obtained, and the video is generated according to the material and the user's editing operations on the material.
It is understood that after the user uploads the material, the material may be edited on the delivery platform.
For example, if the material includes a plurality of pictures, the user may perform editing operations such as cropping the pictures, setting the display time of each picture, adding background music, and adding text; if the material includes a plurality of videos, the user may perform editing operations such as adjusting the order of the videos, adding background music, setting transition effects, and adding text; if the material includes a plurality of pictures and a plurality of pieces of text, the user may perform editing operations such as choosing the picture on which each piece of text is displayed, setting the display position and display time of the text on that picture, and setting its font size and font style.
Further, the server corresponding to the delivery platform can obtain the user's editing operations on the material and then generate the video according to the material and those editing operations.
In this way, the video is generated according to the user's editing operations on the material, which offers high flexibility.
Mode 2: identification processing is performed on the material, a corresponding video template is searched for in a video template library according to the identification result, and the video is generated according to the material and the video template.
Optionally, the video template library may be calibrated according to actual conditions and preset in the storage space of the server.
It can be understood that materials of different types can correspond to different video templates. For example, if the material includes only text, it can correspond to a text template; if the material includes only pictures, it can correspond to a picture template; and if the material includes both pictures and text, it can correspond to a comprehensive template. It should be noted that the video templates may also include other types, which are not limited herein.
In a specific implementation, the identification processing of the material may include performing image recognition and text recognition on the material, and the identification result includes, but is not limited to, the type of each material and the number of materials of each type. For example, if the material includes only one video, the identification result may be one video; if the material includes 3 pictures and 2 pieces of text, the identification result may be 3 pictures and 2 texts.
In this way, the video can be generated automatically from the material and the video template, which offers high efficiency.
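To make mode 2 concrete, the following is a minimal Python sketch of template selection from the identification result; the material fields, template keys, and selection rules are illustrative assumptions rather than the patent's actual implementation.

```python
# Illustrative sketch of mode 2 (template matching); material fields and
# template keys are assumed names, not the patent's actual implementation.

def identify_materials(materials):
    """Identification result: the type of each material and the count per type."""
    counts = {}
    for m in materials:
        counts[m["type"]] = counts.get(m["type"], 0) + 1
    return counts                      # e.g. {"picture": 3, "text": 2}

def pick_template(identification_result):
    """Map the identification result to a template key in the template library."""
    kinds = set(identification_result)
    if kinds == {"text"}:
        return "text_template"
    if kinds == {"picture"}:
        return "picture_template"
    return "comprehensive_template"    # mixed pictures, text, video, etc.

materials = [{"type": "picture"}, {"type": "picture"}, {"type": "text"}]
print(pick_template(identify_materials(materials)))  # -> comprehensive_template
```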
In step S103, the delivery effect prediction data corresponding to the video is acquired.
It will be appreciated that the delivery effect prediction data can be used to characterize the expected delivery effect of the video.
In a specific implementation, the video can be input into a trained prediction model to obtain the delivery effect prediction data corresponding to the video. The prediction model can be calibrated according to actual conditions and is preset in the storage space of the server.
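As a minimal sketch of this model-based path, the snippet below feeds extracted video features to a trained model; the feature-extraction step and the model's predict interface are assumptions (a scikit-learn-style estimator is used purely for illustration).

```python
# Sketch of the model-based prediction path; the feature extraction and the
# model interface are assumptions, not the patent's actual prediction model.

def predict_delivery_effect_with_model(video, model, extract_features):
    """Return predicted delivery effect data (e.g. CTR/CVR) for one video."""
    features = extract_features(video)     # frames, text, audio features, ...
    return model.predict([features])[0]    # assumed scikit-learn-style API
```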
In step S104, the delivery effect prediction data corresponding to the video is displayed to the user, so that the user can choose whether to deliver the video or to deliver the video after modification according to the delivery effect prediction data corresponding to the video.
In a specific implementation, the delivery effect prediction data corresponding to the video can be displayed to the user directly through the delivery platform. For example, the delivery effect prediction data can be displayed in the display area corresponding to the video on the delivery platform, or a reminder message carrying the delivery effect prediction data can be sent to the user through the delivery platform. Alternatively, a reminder message carrying the delivery effect prediction data can be sent to the user through a terminal device bound to the delivery platform, where the terminal device may include a mobile phone, a tablet computer, and the like.
It can be understood that after the user obtains the delivery effect prediction data corresponding to the video, the user may analyze the data to decide whether to deliver the video directly or to deliver it after modification. If the delivery effect indicated by the prediction data is poor, meaning the current video is unlikely to perform well, the user can modify the video before delivering it so as to improve the delivery effect; if the delivery effect indicated by the prediction data is good, meaning the current video is likely to perform well, the user can deliver the video directly.
With the video delivery method provided by the embodiments of the present disclosure, the delivery effect prediction data corresponding to the video can be obtained and displayed to the user before the video is delivered. Compared with the related art, in which the user can obtain the delivery effect only after the video has been delivered for a long time, the user can learn the expected delivery effect of the video in advance and can deliver or modify the video according to the prediction data, which helps improve the delivery effect and delivery efficiency of the video.
In the embodiments of the present disclosure, as shown in fig. 2, a user may upload material to the delivery platform corresponding to the server; the server corresponding to the delivery platform may generate a video from the material and feed back the delivery effect prediction data corresponding to the video to the user through the delivery platform; and if the user chooses to deliver the video, the delivery platform may deliver the video to the target objects.
On the basis of any of the above embodiments, the delivery effect prediction data in step S103 may include click-through rate (CTR) prediction data and/or conversion rate (CVR) prediction data.
The click-through rate prediction data can be used to characterize the click-through rate of the video. Optionally, the click-through rate may be obtained by dividing the click volume of the video by its impression volume. It should be noted that the click volume is the total number of times the video is clicked, and the impression volume is the total number of times the video is shown.
The conversion rate prediction data can be used to characterize the conversion rate of the video. Optionally, the conversion rate may be obtained by dividing the conversion volume of the video by its click volume. The conversion volume includes, but is not limited to, the total number of predetermined behaviors brought about by the video or the total number of people performing the predetermined behaviors; the predetermined behaviors can be calibrated according to actual conditions and preset in the storage space of the server.
For example, if a client wants target objects to register on an APP through video delivery, the predetermined behavior may be defined as registering on the APP, and the conversion volume may be the total number of registration behaviors brought about by the video or the total number of registered users brought about by the video; if the client wants target objects to shop at a certain store through video delivery, the predetermined behavior may be defined as shopping at that store, and the conversion volume may be the total number of shopping behaviors brought about by the video or the total number of shoppers brought about by the video.
It will be appreciated that higher click-through rate prediction data and/or conversion rate prediction data indicate a better expected delivery effect of the video.
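Both metrics reduce to simple ratios. The sketch below uses made-up counts, chosen so the results line up with the 13.6% and 6.6% example given later in the description.

```python
# Worked example of the two metrics defined above; the counts are made-up numbers.

def click_through_rate(clicks, impressions):
    """CTR: total clicks on the video divided by total times it was shown."""
    return clicks / impressions if impressions else 0.0

def conversion_rate(conversions, clicks):
    """CVR: predetermined behaviors (e.g. registrations) divided by total clicks."""
    return conversions / clicks if clicks else 0.0

print(click_through_rate(136, 1000))  # 0.136  -> 13.6%
print(conversion_rate(9, 136))        # ~0.066 -> about 6.6%
```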
On the basis of any of the above embodiments, as shown in fig. 3, the acquiring of the delivery effect prediction data corresponding to the video in step S103 may include:
in step S201, a delivery effect prediction data acquisition request of the user for the video is acquired.
In the embodiments of the present disclosure, the user can choose whether to acquire the delivery effect prediction data corresponding to the video, and can send a delivery effect prediction data acquisition request for the video to the server through the delivery platform.
In a specific implementation, a selection menu is preset in the display area corresponding to the video on the delivery platform, so that the user can choose whether to acquire the delivery effect prediction data corresponding to the video, and the user's operations on the selection menu can be monitored. If it is detected that the user chooses to acquire the delivery effect prediction data corresponding to the video, an acquisition request of the user for the delivery effect prediction data of the video is triggered; otherwise, if no such choice is detected, the acquisition request is not triggered.
In step S202, the delivery effect prediction data corresponding to the video is acquired according to the delivery effect prediction data acquisition request.
In this way, the delivery effect prediction data corresponding to the video is acquired only when an acquisition request of the user for the delivery effect prediction data of the video has been received, which helps save the computing resources of the server.
On the basis of any of the above embodiments, as shown in fig. 4, the acquiring of the delivery effect prediction data corresponding to the video in step S103 may further include:
in step S301, actual delivery effect data of a sample video matching the video is searched for in a database according to the video.
In the embodiments of the present disclosure, a database can be pre-built in the server to store sample videos and their corresponding actual delivery effect data.
In a specific implementation, a sample video matching the video can be searched for in the database according to the video, and then the actual delivery effect data of that sample video can be obtained.
Optionally, the sample video with the highest matching degree with the video may be searched for in the database, and then its actual delivery effect data may be obtained. The matching degree between the video and a sample video can be obtained through a matching degree model, which can be calibrated according to actual conditions and is preset in the storage space of the server.
In step S302, the actual delivery effect data of the sample video matching the video is determined as the delivery effect prediction data corresponding to the video.
It can be understood that, since the delivery effect of the video is unlikely to differ greatly from that of the sample video matching it, the actual delivery effect data of the matching sample video may be taken as the delivery effect prediction data corresponding to the video.
In this way, the delivery effect prediction data corresponding to the video can be determined according to the actual delivery effect data of the sample video matching the video, thereby realizing prediction of the delivery effect.
Optionally, the acquiring of the delivery effect prediction data corresponding to the video in step S103 may further include obtaining the matching degree between each sample video in the database and the video, sorting the sample videos in descending order of matching degree, obtaining the actual delivery effect data of the top N sample videos, and then determining the delivery effect prediction data corresponding to the video according to the actual delivery effect data of the top N sample videos, where N is an integer greater than or equal to 1.
In a specific implementation, the average of the actual delivery effect data of the top N sample videos may be taken as the delivery effect prediction data corresponding to the video.
In this way, the actual delivery effect data of multiple sample videos with a high matching degree with the video can be considered together when determining the delivery effect prediction data corresponding to the video.
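The retrieval-based prediction described above can be sketched as a rank-and-average over the sample database; the tag-overlap matching function below is only a stand-in for the patent's matching degree model, and the field names are assumptions.

```python
# Sketch of retrieval-based prediction: rank sample videos by matching degree
# and average the actual delivery effect data of the top N. The tag-overlap
# matching function is a stand-in for the patent's matching degree model.

def predict_delivery_effect(video, samples, match_degree, n=3):
    """samples: list of (sample_video, actual_effect); returns the average
    actual effect of the n samples that best match `video`."""
    ranked = sorted(samples, key=lambda s: match_degree(video, s[0]), reverse=True)
    top = ranked[:n]
    return sum(actual for _, actual in top) / len(top) if top else 0.0

def match_degree(video, sample):
    return len(set(video["tags"]) & set(sample["tags"]))

video = {"tags": {"makeup", "lipstick"}}
samples = [
    ({"tags": {"makeup", "lipstick"}}, 0.14),
    ({"tags": {"makeup"}}, 0.10),
    ({"tags": {"food"}}, 0.02),
]
print(predict_delivery_effect(video, samples, match_degree, n=2))  # ~0.12
```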
On the basis of any of the above embodiments, as shown in fig. 5, the searching, in step S301, for the actual delivery effect data of the sample video matching the video in the database according to the video may include:
in step S401, identification processing is performed on the video to obtain the industry, type, and content corresponding to the video.
In the embodiments of the present disclosure, performing identification processing on the video to obtain its industry, type, and content may include performing image recognition and text recognition on the video to obtain the industry and type corresponding to the video, and performing artificial intelligence (AI) recognition on the video to obtain the content of the video.
In the embodiments of the present disclosure, the industries corresponding to videos include, but are not limited to, makeup, food, games, digital products, apparel, home furnishing, and the like; the types corresponding to videos include, but are not limited to, picture, text, audio, comprehensive, and the like; and the content corresponding to a video includes, but is not limited to, pictures, text, audio, special effects, transition effects, end frames, and the like.
For example, for a video promoting lipstick that is generated from material including pictures, text, video, and so on, the industry corresponding to the video is makeup, the type is comprehensive, and the content includes, but is not limited to, pictures, text, audio, end frames, and the like.
In step S402, actual delivery effect data of a sample video that has the same industry and type as the video and whose content matches that of the video is searched for in the database.
In this way, the delivery effect prediction data corresponding to the video is determined according to the actual delivery effect data of a sample video with the same industry and type and matching content, which makes the prediction more accurate.
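The industry and type constraint amounts to a simple pre-filter applied before content matching, roughly as sketched below; the field names are illustrative assumptions.

```python
# Sketch of the candidate filtering step: keep only sample videos whose
# recognized industry and type equal the video's, then match on content.
# Field names are illustrative assumptions.

def candidate_samples(video, samples):
    return [s for s in samples
            if s["industry"] == video["industry"] and s["type"] == video["type"]]

video = {"industry": "makeup", "type": "comprehensive"}
samples = [
    {"industry": "makeup", "type": "comprehensive", "id": 1},
    {"industry": "food",   "type": "comprehensive", "id": 2},
]
print([s["id"] for s in candidate_samples(video, samples)])  # [1]
```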
On the basis of any of the above embodiments, after the delivery effect prediction data corresponding to the video is displayed to the user in step S104, a delivery effect data threshold may also be obtained. If the delivery effect prediction data is greater than or equal to the threshold, the predicted delivery effect of the video is high, that is, the video is expected to perform well, and a reminder message suggesting delivering the video is sent to the user; if the delivery effect prediction data is less than the threshold, the predicted delivery effect of the video is low, that is, the video is expected to perform poorly, and a reminder message suggesting modifying the video is sent to the user. The delivery effect data threshold can be calibrated according to actual conditions and is preset in the storage space of the server.
In this way, whether to send the user a reminder to deliver the video or to modify it is decided according to the relationship between the delivery effect prediction data and the delivery effect data threshold, which helps the user choose whether to deliver the video or to deliver it after modification.
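A minimal sketch of this threshold check follows; the threshold value and the message wording are assumptions, and in practice the threshold would be calibrated and preset in the server's storage space as described above.

```python
# Sketch of the threshold-based reminder; the threshold and message wording
# are assumptions (the patent presets a calibrated threshold on the server).

DELIVERY_EFFECT_THRESHOLD = 0.10  # assumed value

def delivery_reminder(prediction, threshold=DELIVERY_EFFECT_THRESHOLD):
    if prediction >= threshold:
        return f"Predicted delivery effect {prediction:.1%} is good: the video can be delivered."
    return f"Predicted delivery effect {prediction:.1%} is low: consider modifying the video first."

print(delivery_reminder(0.136))  # suggests delivering the video
print(delivery_reminder(0.066))  # suggests modifying the video first
```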
On the basis of any of the above embodiments, as shown in fig. 6, after the delivery effect prediction data corresponding to the video is displayed to the user in step S104, the method may further include:
in step S501, a target case acquisition request of the user for the video is acquired.
In the embodiments of the present disclosure, a target case is, for example, a video that has the same industry and type as the video and a good delivery effect.
In the embodiments of the present disclosure, the user can choose whether to acquire target cases corresponding to the video, and can send a target case acquisition request for the video to the server through the delivery platform. In this way, target cases corresponding to the video are acquired only when a target case acquisition request of the user for the video has been received, which helps save the computing resources of the server.
Optionally, after the delivery effect prediction data corresponding to the video is displayed to the user in step S104, the target cases corresponding to the video may also be displayed to the user directly, so that the user can choose whether to deliver the video or to deliver it after modification according to both the delivery effect prediction data and the target cases corresponding to the video.
In step S502, a set number of sample videos that have the same industry and type as the video and the highest actual delivery effect data are searched for in the database.
In the embodiments of the present disclosure, the set number may be calibrated according to actual conditions, for example set to 10, and may be preset in the storage space of the server.
In step S503, the found set number of sample videos are displayed to the user as target cases, so that the user can modify the video according to the target cases and then deliver the modified video.
In this way, sample videos that have the same industry and type as the video and a good delivery effect can be selected from the database and displayed to the user as target cases, so that the user can deliver the video after modifying it according to the target cases, which improves the efficiency and quality of the user's modifications.
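Target case selection can be sketched as a filter-and-sort over the database, as below; the set number of 10 follows the example above, and the field names are assumptions.

```python
# Sketch of target-case selection: among sample videos with the same industry
# and type as the video, return the set number with the highest actual
# delivery effect data. Field names are illustrative assumptions.

def target_cases(video, samples, set_number=10):
    same = [s for s in samples
            if s["industry"] == video["industry"] and s["type"] == video["type"]]
    same.sort(key=lambda s: s["actual_effect"], reverse=True)
    return same[:set_number]

video = {"industry": "makeup", "type": "comprehensive"}
samples = [
    {"id": 1, "industry": "makeup", "type": "comprehensive", "actual_effect": 0.12},
    {"id": 2, "industry": "makeup", "type": "comprehensive", "actual_effect": 0.18},
    {"id": 3, "industry": "food",   "type": "comprehensive", "actual_effect": 0.30},
]
print([s["id"] for s in target_cases(video, samples, set_number=2)])  # [2, 1]
```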
Optionally, after the found set number of sample videos are displayed to the user as target cases in step S503, a video modification suggestion may also be derived from the video and the target cases and displayed to the user, so that the user can modify the video according to the suggestion and then deliver it.
In this way, the user can modify the video according to the suggested modifications, which helps save the user's modification time and improve the modification result.
Fig. 7 is a flowchart illustrating another video delivery method according to an exemplary embodiment. As shown in fig. 7, the method is used in a server and includes the following steps.
In step S601, the uploaded material is acquired.
In step S602, a video is generated from the material.
For the detailed implementation process and principle of steps S601 to S602, reference may be made to the detailed description of the above embodiments, which is not repeated here.
In step S603, the cover picture and the video frames corresponding to the video are acquired.
In step S604, identification processing is performed on the acquired cover picture and video frames to obtain the industry, type, and content corresponding to the video.
In a specific implementation, if the delivery effect prediction data is cover click-through rate prediction data, identification processing is performed on the acquired cover picture of the video to obtain the pictures, text, and the like included in the cover picture; if the delivery effect prediction data is conversion rate prediction data, identification processing can be performed on the acquired video frames to obtain the pictures, text, special effects, transition effects, end frames, and the like included in the video.
In step S605, sample videos that have the same industry and type as the video and matching content are searched for in the database, the matching degree between each such sample video and the video is obtained, and the actual delivery effect data of the sample video with the highest matching degree is determined as the delivery effect prediction data corresponding to the video.
In step S606, the delivery effect prediction data corresponding to the video is displayed to the user, so that the user can choose whether to deliver the video or to deliver it after modification according to the delivery effect prediction data corresponding to the video.
In a specific implementation, if the delivery effect prediction data is cover click-through rate prediction data, sample videos that have the same industry and type as the video and whose cover pictures and text match those of the video can be searched for in the database, the matching degree of each sample video with the video can be obtained, and the actual cover click-through rate data of the sample video with the highest matching degree can be determined as the cover click-through rate prediction data corresponding to the video.
In a specific implementation, if the delivery effect prediction data is conversion rate prediction data, sample videos that have the same industry and type as the video and whose pictures, text, special effects, transition effects, end frames, and so on match those of the video can be searched for in the database, the matching degree of each sample video with the video can be obtained, and the actual conversion rate data of the sample video with the highest matching degree can be determined as the conversion rate prediction data corresponding to the video.
For example, if the actual click-through rate data and the actual conversion rate data of the sample video matching the video are 13.6% and 6.6%, respectively, the click-through rate prediction data and conversion rate prediction data of the video can be determined to be 13.6% and 6.6%, respectively.
In step S607, it is recognized that the user chooses to deliver the video.
In step S608, it is recognized that the user chooses to deliver the video after modification.
In the embodiments of the present disclosure, the user can choose to deliver the video or to deliver it after modification, and can send a delivery request or a modification request for the video to the server through the delivery platform.
In a specific implementation, a selection menu is preset in the display area corresponding to the video on the delivery platform, so that the user can choose to deliver the video or to deliver it after modification, and the user's operations on the selection menu can be monitored. If it is detected that the user chooses to deliver the video, a delivery request of the user for the video is triggered; otherwise, if it is detected that the user chooses to deliver the video after modification, a modification request of the user for the video is triggered.
In step S609, a set number of sample videos that have the same industry and type as the video and the highest actual delivery effect data are searched for in the database.
In step S610, the found set number of sample videos are displayed to the user as target cases, so that the user can modify the video according to the target cases and then deliver the modified video.
In the embodiments of the present disclosure, if it is recognized that the user chooses to deliver the video, meaning that the user wants to deliver it directly, the process can end; if it is recognized that the user chooses to deliver the video after modification, meaning that the user wants to modify the video before delivering it, target cases can be searched for in the database and displayed to the user so that the user can deliver the video after modifying it according to the target cases.
In a specific implementation, if the delivery effect prediction data is cover click-through rate prediction data, a set number of sample videos that have the same industry and type as the video and the highest actual cover click-through rate data can be searched for in the database and displayed to the user as target cases, so that the user can modify the video according to the target cases and then deliver it.
In a specific implementation, if the delivery effect prediction data is conversion rate prediction data, a set number of sample videos that have the same industry and type as the video and the highest actual conversion rate data can be searched for in the database and displayed to the user as target cases, so that the user can modify the video according to the target cases and then deliver it.
For the detailed implementation process and principle of the above steps, reference may be made to the detailed description of the above embodiments, which is not repeated here.
With the video delivery method provided by the embodiments of the present disclosure, after the delivery effect prediction data corresponding to the video is displayed to the user, if it is recognized that the user chooses to modify the video before delivering it, target cases can be displayed to the user, so that the user can modify the video according to the target cases and then deliver it, which improves the efficiency and quality of the user's modifications.
Fig. 8 is a block diagram illustrating a video delivery apparatus according to an exemplary embodiment. Referring to fig. 8, the apparatus 700 includes: a first acquisition module 71, a generation module 72, a second acquisition module 73, and a first display module 74.
The first acquisition module 71 is configured to acquire uploaded material.
The generation module 72 is configured to generate a video from the material.
The second acquisition module 73 is configured to acquire delivery effect prediction data corresponding to the video.
The first display module 74 is configured to display the delivery effect prediction data corresponding to the video to the user, so that the user can choose whether to deliver the video or to deliver it after modification according to the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the second acquisition module 73 includes a search unit configured to search, according to the video, a database for actual delivery effect data of a sample video matching the video; and a determination unit configured to determine the actual delivery effect data of the sample video matching the video as the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the search unit includes an identification subunit configured to perform identification processing on the video to obtain the industry, type, and content corresponding to the video; and a search subunit configured to search the database for actual delivery effect data of a sample video that has the same industry and type as the video and whose content matches that of the video.
In an embodiment of the present disclosure, referring to fig. 9, the apparatus 700 further includes a third acquisition module 75 configured to acquire a target case acquisition request of the user for the video; a search module 76 configured to search the database for a set number of sample videos that have the same industry and type as the video and the highest actual delivery effect data; and a second display module 77 configured to display the found set number of sample videos to the user as target cases, so that the user can modify the video according to the target cases and then deliver it.
In an embodiment of the present disclosure, the second acquisition module 73 includes a request acquisition unit configured to acquire a delivery effect prediction data acquisition request of the user for the video; and a data acquisition unit configured to acquire, according to the delivery effect prediction data acquisition request, the delivery effect prediction data corresponding to the video.
In an embodiment of the present disclosure, the delivery effect prediction data includes click-through rate prediction data and/or conversion rate prediction data.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
With the video delivery apparatus provided by the embodiments of the present disclosure, the delivery effect prediction data corresponding to the video can be obtained and displayed to the user before the video is delivered. Compared with the related art, in which the user can obtain the delivery effect only after the video has been delivered for a long time, the user can learn the expected delivery effect of the video in advance and can deliver or modify the video according to the delivery effect prediction data, which helps improve the delivery effect and delivery efficiency of the video.
Fig. 10 illustrates a block diagram of a server 800 for video delivery, according to an example embodiment.
As shown in fig. 10, the server 800 includes:
a memory 810 and a processor 820, and a bus 830 connecting different components (including the memory 810 and the processor 820), wherein the memory 810 stores a computer program, and when the processor 820 executes the program, the video delivery method according to the embodiments of the present disclosure is implemented.
Bus 830 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
The server 800 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by server 800 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 810 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)840 and/or cache memory 850. The server 800 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 860 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 10, and commonly referred to as a "hard drive"). Although not shown in FIG. 10, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 830 by one or more data media interfaces. Memory 810 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 880 having a set (at least one) of program modules 870 may be stored, for example, in memory 810, such program modules 870 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 870 generally perform the functions and/or methodologies of embodiments described in this disclosure.
The server 800 may also communicate with one or more external devices 890 (e.g., keyboard, pointing device, display 891, etc.), with one or more devices that enable a user to interact with the server 800, and/or with any devices (e.g., network card, modem, etc.) that enable the server 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 892. Also, the server 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network such as the Internet) via a network adapter 893. As shown in FIG. 10, the network adapter 893 communicates with the other modules of the server 800 via a bus 830. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the server 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 820 executes various functional applications and data processing by executing programs stored in the memory 810.
It should be noted that, for the implementation process and the technical principle of the server in this embodiment, reference is made to the foregoing explanation of the video delivery method in the embodiment of the present disclosure, and details are not described here again.
The server provided by the embodiment of the disclosure can execute the video delivery method, and can acquire and display the delivery effect prediction data corresponding to the video to the user before video delivery.
In order to implement the above embodiments, the present disclosure also provides a storage medium.
Wherein the instructions in the storage medium, when executed by a processor of the server, enable the server to perform the video delivery method as previously described.
To implement the above embodiments, the present disclosure also provides a computer program product, which when executed by a processor of a server, enables the server to execute the video delivery method as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video delivery method, comprising:
acquiring uploaded material;
generating a video from the material;
acquiring delivery effect prediction data corresponding to the video; and
displaying the delivery effect prediction data corresponding to the video to the user, so that the user can choose whether to deliver the video or to deliver the video after modification according to the delivery effect prediction data corresponding to the video.
2. The video delivery method according to claim 1, wherein the acquiring of the delivery effect prediction data corresponding to the video comprises:
searching, according to the video, a database for actual delivery effect data of a sample video matching the video; and
determining the actual delivery effect data of the sample video matching the video as the delivery effect prediction data corresponding to the video.
3. The video delivery method according to claim 2, wherein the searching, according to the video, the database for the actual delivery effect data of the sample video matching the video comprises:
performing identification processing on the video to obtain the industry, type, and content corresponding to the video; and
searching the database for actual delivery effect data of a sample video that has the same industry and type as the video and whose content matches that of the video.
4. The video delivery method according to claim 3, further comprising:
acquiring a target case acquisition request of the user for the video;
searching the database for a set number of sample videos that have the same industry and type as the video and the highest actual delivery effect data; and
displaying the found set number of sample videos to the user as target cases, so that the user can modify the video according to the target cases and then deliver the modified video.
5. The video delivery method according to claim 1, wherein the acquiring of the delivery effect prediction data corresponding to the video comprises:
acquiring a delivery effect prediction data acquisition request of the user for the video; and
acquiring the delivery effect prediction data corresponding to the video according to the delivery effect prediction data acquisition request.
6. The video delivery method according to claim 1, wherein the delivery effect prediction data comprises click-through rate prediction data and/or conversion rate prediction data.
7. A video delivery apparatus, comprising:
a first acquisition module configured to acquire uploaded material;
a generation module configured to generate a video from the material;
a second acquisition module configured to acquire delivery effect prediction data corresponding to the video; and
a first display module configured to display the delivery effect prediction data corresponding to the video to the user, so that the user can choose whether to deliver the video or to deliver the video after modification according to the delivery effect prediction data corresponding to the video.
8. A server, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the video delivery method of any of claims 1 to 6.
9. A storage medium, wherein instructions in the storage medium, when executed by a processor of a server, enable the server to perform the video delivery method according to any one of claims 1 to 6.
10. A computer program product which, when executed by a processor of a server, enables the server to perform the video delivery method according to any one of claims 1 to 6.
CN202011003534.1A 2020-09-22 2020-09-22 Video delivery method and device and server Pending CN112258214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011003534.1A CN112258214A (en) 2020-09-22 2020-09-22 Video delivery method and device and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011003534.1A CN112258214A (en) 2020-09-22 2020-09-22 Video delivery method and device and server

Publications (1)

Publication Number Publication Date
CN112258214A true CN112258214A (en) 2021-01-22

Family

ID=74232858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011003534.1A Pending CN112258214A (en) 2020-09-22 2020-09-22 Video delivery method and device and server

Country Status (1)

Country Link
CN (1) CN112258214A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127624A (en) * 2007-09-27 2008-02-20 腾讯科技(深圳)有限公司 Demonstration method and system for advertisement server, advertisement originality
CN103268560A (en) * 2013-04-19 2013-08-28 杭州电子科技大学 Before-release advertising effect evaluation method based on electroencephalogram indexes
CN105608604A (en) * 2015-12-30 2016-05-25 合一网络技术(北京)有限公司 Continuous calculation method of brand advertisement effectiveness optimization
CN106096995A (en) * 2016-05-31 2016-11-09 腾讯科技(深圳)有限公司 Advertising creative processing method and advertising creative processing means
CN107480124A (en) * 2017-07-05 2017-12-15 小草数语(北京)科技有限公司 Advertisement placement method and device
CN107871244A (en) * 2016-09-28 2018-04-03 腾讯科技(深圳)有限公司 The detection method and device of a kind of advertising results
CN109389429A (en) * 2018-09-29 2019-02-26 北京奇虎科技有限公司 A kind of production method and device of rich-media ads
CN109391826A (en) * 2018-08-07 2019-02-26 上海奇邑文化传播有限公司 A kind of video generates system and its generation method online
CN109963174A (en) * 2019-01-28 2019-07-02 北京奇艺世纪科技有限公司 Flow index of correlation predictor method, device and computer readable storage medium
CN110324676A (en) * 2018-03-28 2019-10-11 腾讯科技(深圳)有限公司 Data processing method, media content put-on method, device and storage medium
CN110472879A (en) * 2019-08-20 2019-11-19 秒针信息技术有限公司 A kind of appraisal procedure of resource impact, device, electronic equipment and storage medium
CN110807126A (en) * 2018-08-01 2020-02-18 腾讯科技(深圳)有限公司 Method, device, storage medium and equipment for converting article into video
CN110958472A (en) * 2019-12-16 2020-04-03 咪咕文化科技有限公司 Video click rate rating prediction method and device, electronic equipment and storage medium
CN111144937A (en) * 2019-12-20 2020-05-12 北京达佳互联信息技术有限公司 Advertisement material determination method, device, equipment and storage medium
CN111160983A (en) * 2019-12-31 2020-05-15 众安在线财产保险股份有限公司 Advertisement putting effect evaluation method and device, computer equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570403A (en) * 2021-06-30 2021-10-29 北京达佳互联信息技术有限公司 Promotion information release effect prediction method and device and server
CN114418651A (en) * 2022-01-26 2022-04-29 北京数智新天信息技术咨询有限公司 Intelligent popularization decision-making method and device and electronic equipment
CN114501105A (en) * 2022-01-29 2022-05-13 腾讯科技(深圳)有限公司 Video content generation method, device, equipment, storage medium and program product
CN114501105B (en) * 2022-01-29 2023-06-23 腾讯科技(深圳)有限公司 Video content generation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
CN112258214A (en) Video delivery method and device and server
CN109241242B (en) Live broadcast room topic recommendation method and device, server and storage medium
CN107908641B (en) Method and system for acquiring image annotation data
CN108595520B (en) Method and device for generating multimedia file
CN110740389A (en) Video positioning method and device, computer readable medium and electronic equipment
US20150235264A1 (en) Automatic entity detection and presentation of related content
CN112235632A (en) Video processing method and device and server
CN111475632A (en) Question processing method and device, electronic equipment and storage medium
CN111723235B (en) Music content identification method, device and equipment
CN111309200A (en) Method, device, equipment and storage medium for determining extended reading content
CN116611401A (en) Document generation method and related device, electronic equipment and storage medium
EP4099711A1 (en) Method and apparatus and storage medium for processing video and timing of subtitles
CN113419798B (en) Content display method, device, equipment and storage medium
US20210407166A1 (en) Meme package generation method, electronic device, and medium
CN113411517B (en) Video template generation method and device, electronic equipment and storage medium
CN113901244A (en) Label construction method and device for multimedia resource, electronic equipment and storage medium
CN114117090A (en) Resource display method and device and server
CN112288452A (en) Advertisement preview method and device, electronic equipment and storage medium
CN116385597B (en) Text mapping method and device
CN113840177B (en) Live interaction method and device, storage medium and electronic equipment
CN117217831B (en) Advertisement putting method and device, storage medium and electronic equipment
EP3764304A1 (en) System and method for assessing quality of media files
WO2022201515A1 (en) Server, animation recommendation system, animation recommendation method, and program
CN115658938A (en) Multimedia searching method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination