CN114996553A - Dynamic video cover generation method - Google Patents

Dynamic video cover generation method

Info

Publication number
CN114996553A
CN114996553A
Authority
CN
China
Prior art keywords
dynamic video
target
static
mobile phone
video
Prior art date
Legal status
Pending
Application number
CN202210519403.1A
Other languages
Chinese (zh)
Inventor
陈羽飞
刘奎龙
詹鹏鑫
许佳琪
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210519403.1A priority Critical patent/CN114996553A/en
Publication of CN114996553A publication Critical patent/CN114996553A/en
Priority to PCT/CN2023/093327 priority patent/WO2023217194A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9536Search customisation based on social or collaborative filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Abstract

The application provides a dynamic video cover generation method comprising the following steps: acquiring static materials related to a video subject; classifying the acquired static materials into corresponding static material categories according to their content; arranging and combining the static materials according to a predetermined strategy and the obtained categories, where the predetermined strategy at least includes using different arrangement orders for different target customer objects; and composing the arranged and combined static materials into a whole to generate the dynamic video cover. By generating the dynamic video cover from static materials according to a predetermined strategy, the method effectively reduces the production cost of high-quality dynamic video covers, enables personalized cover display for different consumer demands, and thereby improves user click-through rate and browsing volume.

Description

Dynamic video cover generation method
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for generating a dynamic video cover, an electronic device, and a storage medium.
Background
With the rapid development of internet technology, e-commerce platforms play an increasingly important role in daily life, and the ways in which goods are displayed on them have grown correspondingly richer. Because the display form of a commodity has a crucial influence on consumers' click-through rate, purchase rate, conversion rate, user stickiness, and the like, exploring better commodity display modes is increasingly significant for e-commerce platforms.
In the prior art, e-commerce platforms mainly display commodities as pictures and videos. When search and recommendation interfaces show a commodity only as a picture, consumers cannot intuitively perceive its quality. An automatically played audio-visual dynamic video cover is far more intuitive: it can effectively improve consumers' understanding of the commodity in the shortest time, promote purchasing decisions, and greatly improve click-through rate and purchase conversion rate. Therefore, more and more shops replace the original picture covers with dynamic short-video covers for commodity display and introduction, achieving dynamic, three-dimensional commodity presentation through automatic playback of the short videos.
In practice, the dynamic video covers on an e-commerce platform are usually shot and uploaded by merchants. As a result, the coverage of high-quality dynamic short videos is low: shooting high-quality short videos is expensive, and the aspects of the commodity that merchants choose to highlight often do not match what users actually care about. How to generate high-quality dynamic video covers at low cost, and to display them in a personalized way for different consumer demands, has therefore become an urgent problem for e-commerce platforms.
Disclosure of Invention
The invention provides a dynamic video cover generation method that addresses two technical problems of existing approaches: the high cost of generating dynamic video covers, and the inability to display covers in a targeted manner for different consumer demands. The invention further provides a dynamic video cover generation apparatus, an electronic device, and a storage medium. The technical scheme is as follows:
the embodiment of the application provides a method for generating a dynamic video cover, which comprises the following steps:
acquiring static materials related to a video subject;
classifying the acquired static materials related to the video subject into corresponding static material categories according to their content;
arranging and combining the static materials according to a predetermined strategy and the obtained static material categories, where the predetermined strategy at least includes using different arrangement orders for different target customer objects;
and composing the arranged and combined static materials into a whole to generate the dynamic video cover.
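The four steps above can be sketched as a minimal pipeline. The material names, category labels, and strategy format below are illustrative assumptions, not taken from this application:

```python
# Minimal sketch of the four steps: acquire -> classify -> arrange -> compose.
# All names, categories, and the strategy format are illustrative assumptions.

def classify(materials):
    """Step 2: bucket each static material by its content category."""
    buckets = {}
    for m in materials:
        buckets.setdefault(m["category"], []).append(m)
    return buckets

def arrange(buckets, strategy):
    """Step 3: order materials by a strategy (an ordered list of
    category names chosen for one target customer group)."""
    return [buckets[cat][0] for cat in strategy if cat in buckets]

def compose(sequence):
    """Step 4: join the ordered materials into one cover; a real
    implementation would render video segments, not labels."""
    return " -> ".join(m["name"] for m in sequence)

# Step 1: acquired static materials (hypothetical).
materials = [
    {"name": "beauty-camera text", "category": "function"},
    {"name": "appearance photo", "category": "design"},
    {"name": "sales report", "category": "sales"},
]
print(compose(arrange(classify(materials), ["sales", "function"])))
# sales report -> beauty-camera text
```

The sketch only shows the data flow; rendering, timing, and transitions are outside its scope.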
Optionally, multiple versions of the dynamic video cover can be generated from the acquired static materials under different predetermined policies; multiple different versions may also be generated under the same predetermined policy when different static materials are selected.
Optionally, the step of composing the arranged and combined static materials into a whole to generate the dynamic video cover includes:
before the dynamic video cover is generated, applying deformation processing to the presentation of the static materials related to the video subject.
Optionally, the deformation processing of the presentation of the static materials includes: beautifying the acquired static materials by selecting a template with a design style.
Optionally, the number of the dynamic video covers is at least a first number, and the method further comprises:
acquiring the first number of dynamic video covers;
extracting a plurality of sample clients from the target customer group and performing a first estimation test on the first number of dynamic video covers;
and obtaining a target dynamic video cover matched with the target group according to the test feedback data and a predetermined evaluation standard.
Optionally, after determining the target dynamic video cover, further performing the following steps:
acquiring the target dynamic video cover;
fine-tuning the target dynamic video cover to obtain a plurality of fine-tuned target dynamic video covers;
extracting a plurality of sample clients from the target customer group and performing a second estimation test on the plurality of fine-tuned target dynamic video covers;
and obtaining the fine-tuned dynamic video cover matched with the target group according to the test feedback data.
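The two-round test-and-refine flow in these optional steps can be sketched as follows; the 0/1 feedback scores, the fine-tuning operations, and all names are hypothetical:

```python
def estimate_test(covers, feedback):
    """One estimation round: average each cover's click feedback over the
    sample clients and return the best-scoring cover."""
    return max(covers, key=lambda c: sum(feedback[c]) / len(feedback[c]))

def fine_tune(cover):
    """Produce small variants of the winning cover (hypothetical tweaks)."""
    return [cover + "+brighter", cover + "+shorter"]

# Round 1: pick a target cover from the first batch (0/1 = click or not).
target = estimate_test(["A", "B"], {"A": [1, 0, 1], "B": [0, 0, 1]})

# Round 2: re-test fine-tuned variants of the target.
variants = fine_tune(target)
best = estimate_test(variants, {variants[0]: [1, 1], variants[1]: [1, 0]})
print(target, best)  # A A+brighter
```

In a real system the feedback would come from live user behavior and a predetermined evaluation standard, not a fixed table.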
Optionally, the target customer group is a customer group with target attributes obtained after classifying the customer group.
Optionally, the classifying the customer group includes: and dividing the customers with the same demand characteristic label into the same class of customer groups.
Optionally, the target attribute is an attribute labeled according to a requirement characteristic label of the client.
Optionally, the number of dynamic video covers is a second number greater than the first number, and the method further includes: before performing the first estimation test on a customer group with a specific target attribute, selecting, by a predetermined method and according to that target attribute, the first number of dynamic video covers for testing from the second number of covers.
In the dynamic video cover generation method provided by this application, static materials related to a video subject are acquired and classified into corresponding categories by content; the materials are then arranged and combined according to a predetermined strategy and the obtained categories, where the strategy at least includes using different arrangement orders for different target customer objects; finally the arranged and combined materials are composed into a whole to generate the dynamic video cover. The method uses existing static materials as the source for video production, and the same group of static materials can yield multiple versions of the cover for different target customer groups under different predetermined strategies. This greatly reduces the cost of producing dynamic video covers, allows covers to be generated in a targeted way for the demands of different customer groups, and improves user click-through rate and browsing volume.
In addition, this application also provides an apparatus implementing the dynamic video cover generation method, an electronic device, and a computer-readable storage medium, which have the same beneficial effects.
It should be understood that this section is not intended to identify key or critical features of the application, nor to limit its scope. Features of the present application will become readily apparent from the following description of exemplary embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic view of an application scenario for a method for generating a dynamic video cover according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for generating a dynamic video cover according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method for generating a target dynamic video cover according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a method for generating a post-trimming dynamic video cover matched to a target group according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an apparatus for generating a dynamic video cover according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a storage medium according to an embodiment of the present application;
the accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Detailed Description
In order that the objects, aspects, and advantages of the embodiments of the present application become more apparent, numerous specific details are set forth in the following description to provide a thorough understanding of the present invention. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; these embodiments are provided so that this disclosure will be thorough and complete.
It should be noted that the terms "first", "second", "third" and the like in the embodiments and drawings of the present application are used to distinguish similar objects and do not indicate any specific order or sequence among them. Such data may be interchanged under appropriate circumstances, so that the embodiments described herein may be practiced in sequences other than those illustrated or described.
The terms "comprises," "comprising," and "having," and any variations thereof, in this application are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, unless explicitly stated otherwise, the term "or" covers all possible combinations except where infeasible. For example, if it is stated that a database may include A or B, the database may include A, or B, or both A and B, unless specifically stated or infeasible. As another example, if it is stated that a database may include A, B, or C, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C, unless specifically stated or infeasible.
First, some technical terms related to the present application are explained:
the AB test is to make two (A/B) or a plurality of (A/B/n) versions for a Web or App interface or process, respectively and randomly access the versions by visitor groups (target population) with the same (similar) composition in the same time dimension, collect user experience data and service data of each group, finally analyze and evaluate the best version, and formally adopt the version. The method has the functions of eliminating disputes of different opinions in the design of customer experience (UX) and determining the optimal scheme according to the actual effect; through a comparison test, the real reason of the problem is found, and the product design and operation level is improved; establishing a closed loop process of data driving and continuous optimization; through A/B test, the release risk of new products or new characteristics is reduced, and guarantee is provided for product innovation.
Click-Through-Rate (CTR) prediction is one of the core algorithms of computational advertising. It predicts, for each ad, whether a user will click. The prediction depends on many factors, such as historical click-through rate, ad position, time, and user attributes. A CTR prediction model is obtained by training on large amounts of historical data, comprehensively considering these factors and features. CTR prediction models are therefore widely used in personalized recommendation, information retrieval, online advertising, and related fields to learn and predict user feedback, where feedback mainly includes clicks, favorites, purchases, and so on.
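A common form of CTR prediction model is a logistic regression over such features; the sketch below assumes a hypothetical three-feature vector (historical CTR, position score, peak-hour flag) and hand-picked weights:

```python
import math

def predict_ctr(features, weights, bias):
    """Logistic-regression-style CTR estimate; features could encode
    historical CTR, ad position, time of day, user attributes, etc."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature vector: [historical_ctr, position_score, is_peak_hour]
p = predict_ctr([0.05, 0.8, 1.0], [2.0, 0.5, 0.3], -2.0)
print(0.0 < p < 1.0)  # True: the model outputs a click probability
```

Real models learn the weights from historical click logs rather than using fixed values as here.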
Recommendation algorithms screen and filter massive amounts of information and display the information a user cares about and is most interested in directly to that user, improving efficiency while saving the user the time of filtering information. By connecting users and information, they help users find information valuable to them and expose information to the people interested in it, a win-win for information providers and users. Recommendation algorithms can be roughly divided into three categories: content-based, collaborative-filtering, and knowledge-based.
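A content-based recommendation, the first of the three categories, can be sketched as scoring items by tag overlap with a user's interest profile; the item names and tags below are invented for illustration:

```python
def recommend(user_tags, items, top_k=2):
    """Content-based sketch: rank items by tag overlap with the user's
    interest profile and return the top matches."""
    ranked = sorted(items, key=lambda item: -len(user_tags & item["tags"]))
    return [item["name"] for item in ranked[:top_k]]

items = [
    {"name": "phone", "tags": {"electronics", "camera"}},
    {"name": "novel", "tags": {"books"}},
    {"name": "drone", "tags": {"electronics", "camera", "outdoor"}},
]
print(recommend({"camera", "electronics"}, items))  # ['phone', 'drone']
```

Collaborative filtering would instead score items from other users' behavior rather than item tags.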
First, an application scenario of the solution provided in the present application will be described below.
Fig. 1 is a schematic view of an application scenario for a dynamic video cover generation method according to an embodiment of the present application. As shown in Fig. 1, the scenario includes a client 101 and a server 102; the number of clients 101 is not limited. The client 101 obtains static materials related to a video subject and sends them to the server 102; the server 102 generates a dynamic video cover from the static materials according to a predetermined policy and returns it to the client 101, which displays it to consumers for browsing and purchasing. The specific implementation can be seen in the schemes of the following embodiments.
It should be noted that Fig. 1 is only a schematic diagram of an application scenario for the method provided in the embodiment of the present application; the embodiment limits neither the devices included in Fig. 1 nor the positional relationships between them. For example, the scenario may further include a data storage device, which may be external to the client 101 or server 102, or internal memory integrated in either. The client 101 can be a smart phone, computer, television, wearable device, multimedia player, e-reader, or other device on which shopping application software is installed; the server 102 may be a single server, a server cluster, or a cloud computing service center. The client 101 and server 102 may be connected via wireless network communication, wired communication, or optical-fiber communication.
It is understood that the number of devices of the client 101 and the server 102 in fig. 1 may vary according to the actual needs.
Next, the dynamic video cover generation method will be described with reference to specific embodiments.
A first embodiment of the present application provides a method for generating a dynamic video cover.
Fig. 2 is a flowchart of a method for generating a dynamic video cover according to an embodiment of the present application, which can be used in the implementation environment shown in fig. 1. As shown in fig. 2, the method of the embodiment of the present application includes the following steps:
s201, obtaining static materials related to the video main body.
This step gathers the basic information used in this embodiment to generate a dynamic video cover, namely the static materials related to the video subject.
The video subject is the target object to be displayed in video form, including people, objects, events, and the like; in product-promotion settings the video subject is typically a commodity. This embodiment is limited to such commodity-promotion scenarios, where the commodity is displayed and sold on the corresponding e-commerce platform. Static materials are materials used to display the characteristics or information of the commodity.
Illustratively, the video subject can be any commodity displayed for sale on an e-commerce platform, such as a mobile phone, a book, a vehicle, bread, or an apple; the e-commerce platform is any of the shopping application platforms commonly used in daily life, and the application does not limit its specific form. Taking a mobile phone as an example, the static materials can be a text introduction of the phone's beauty function, a picture of the phone's appearance, a sales data report for three consecutive quarters, and other materials related to the phone.
The acquired static materials are drawn from the commodity's existing materials, including but not limited to commodity information, pictures, videos, and audio in various forms. Commodity information can be text and diagrams from the commodity introduction, materials selected for the user from a dedicated material-management library, or text extracted from pictures by character recognition; commodity pictures can be real photographs, virtual images constructed by application software, or hand-drawn images; commodity videos can come from short-video materials in network applications or be recorded by the user; commodity audio can be actual recordings or materials from an existing audio library.
Illustratively, the materials obtained in this step are: the text introduction of the phone's beauty function ("the xxx model phone's beauty camera helps you return to age 18"), a picture of the phone's appearance, a real-shot vlog (video weblog) short video of the phone, the seller's audio commentary about the phone, and the phone's sales data report for three consecutive quarters.
S202, classifying the acquired static materials related to the video subject into corresponding static material categories according to their content;
the step is used for classifying the static materials collected in the previous step.
And before the dynamic video cover is generated, carrying out deformation processing on the expression method of the static material related to the video main body.
In this step, in order to enhance the user performance of the dynamic video cover, after a plurality of related static materials of the video main body are obtained, the method for expressing the static materials needs to be subjected to transformation processing to obtain transformed static materials, and then the dynamic video cover is made on the transformed static materials according to a predetermined strategy.
Specifically, in this step, the expression method of the static material related to the video theme is mainly subjected to deformation processing, and the obtained static material is beautified by selecting a template using a design style. The design style includes but is not limited to the deformation of the commodity information, the font size, the color, the animation and the like of the commodity character information; cutout, processing, animation, special effect enhancement and the like of the commodity picture; transition, material playing, playing area adjustment and the like of the commodity video; cutting, noise reduction, reverberation, speed change, tone changing processing and the like of commodity audio. The static materials related to various senses such as vision, hearing and the like are beautified and adjusted through the template of the design style so as to enhance the visual perception of consumers. In addition, the existing static material is used as the material for making the dynamic video cover, so that the cost for making the video can be reduced to a certain extent.
Illustratively, deformation processing is applied to the acquired static materials: the beauty-function text ("the xxx model phone's beauty camera helps you return to age 18"), the appearance picture, the real-shot vlog (video weblog) short video, the seller's audio commentary, and the three-quarter sales report. Material 1 (beauty-function text): the text is rendered in Song typeface at size 20 with a gradient color, and after deformation it hops rightward across the frame at 0.2 seconds per frame. Material 2 (appearance picture): the picture is deformed by defogging, contrast enhancement, and lossless enlargement. Material 3 (real-shot vlog short video): deformation is achieved by clipping, splicing, adding a filter, and adding background music. Material 4 (seller's audio commentary): deformation is achieved through loudspeaker, filter, and equalizer processing. Material 5 (sales report): the three-quarter sales data are deformed into a three-dimensional bar chart.
Deformation of the presentation increases the customer's overall sensory impression of the phone, and because existing materials are deformed rather than newly shot, production cost stays low, lowering the threshold for high-quality short-video production.
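One way to picture the template-based deformation described above is as merging a design-style template into a material's render specification; the field names and values below are assumptions, not the application's actual format:

```python
def apply_style(material, template):
    """Merge a design-style template (font, size, color, animation) into
    a material's render specification without altering its content."""
    styled = dict(material)
    styled.update(template)
    return styled

text_material = {"type": "text", "content": "xxx model beauty camera"}
song_style = {"font": "Song", "size": 20, "color": "gradient",
              "animation": {"kind": "hop-right", "speed_s_per_frame": 0.2}}
styled = apply_style(text_material, song_style)
print(styled["font"], styled["animation"]["kind"])  # Song hop-right
```

Picture, video, and audio deformations would similarly be expressed as template parameters driving the corresponding processing steps.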
In embodiments of the present disclosure, the categories into which static materials may fall are all the possible categories by which the content of a static material can be classified. A category expresses a selling point or characteristic of the commodity. Possible categories include, but are not limited to: commodity price, function, sales volume, cost performance, quality, brand, after-sales service, concept, process/production-area advantage, configuration, core consumer group, packaging advantage, added value, originality, copyright, and distribution service; the embodiments of the application are not specifically limited.
Illustratively, following the example in the steps above, the categories for the commodity mobile phone may include appearance, brand, function, sales, cost performance, after-sales service, price, configuration, and copyright.
Illustratively, following the example in step S201, this step classifies the deformed and beautified materials into corresponding categories by their content: the beauty-function text ("the xxx model phone's beauty camera") is classified into the commodity-function category; the appearance picture into the appearance-design category; the real-shot vlog (video weblog) short video into the cost-performance category; the seller's audio commentary into the after-sales-service category; and the three-quarter sales report into the sales-volume category.
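The classification of materials into categories by content could, for example, be keyword-driven; the keyword lists below are illustrative placeholders:

```python
# Illustrative keyword lists; a production system might use a trained
# classifier instead of keyword matching.
CATEGORY_KEYWORDS = {
    "function": ["camera", "beauty", "battery"],
    "design": ["appearance", "look", "color"],
    "sales": ["sales", "report", "quarter"],
}

def categorize(description):
    """Assign a material to the category whose keywords it mentions most."""
    best, best_hits = "other", 0
    for category, words in CATEGORY_KEYWORDS.items():
        hits = sum(word in description.lower() for word in words)
        if hits > best_hits:
            best, best_hits = category, hits
    return best

print(categorize("3-quarter sales data report"))  # sales
```

Non-text materials (pictures, video, audio) would need their own feature extraction before any such matching.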
S203, arranging and combining the static materials according to a predetermined strategy and the obtained static material categories; the predetermined strategy at least includes using different arrangement orders for different target customer objects.
This step is the core step of this embodiment, preparing, from the static materials collected in the previous steps, the arrangement from which the dynamic video cover is generated.
The predetermined strategy is the specific manner of selecting, arranging, and combining static materials according to their categories: which categories of materials are selected, and in what order they are arranged. For example, predetermined policy A might be: commodity price - core consumer group - sales volume; the selected static materials are the commodity price, core consumer-group information, and sales-volume information, arranged in that order.
Similarly, there may also be a predetermined policy B: product profile-process/place-of-origin advantages-distribution service; a predetermined policy C: after-sale service-cost performance-commodity function; a predetermined policy D: commodity price-commodity function; and a predetermined policy E: cost performance-brand. In short, various policies can be established as the situation requires; a certain category of element may also appear multiple times, and a more specific selection strategy may further specify which element within each category is selected. The policy expressions above are merely simplified representations; a policy may also include a more detailed description of how materials are selected and arranged. In the present embodiment, only the simplified policy descriptions above are used.
Illustratively, according to the example in the above steps, the predetermined policies for the mobile phone commodity are: predetermined policy A: mobile phone sales volume-mobile phone function-mobile phone appearance; predetermined policy B: mobile phone brand-mobile phone price-mobile phone after-sale service; predetermined policy C: mobile phone after-sale service-mobile phone appearance-mobile phone cost performance; and so on.
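The policy notation above can be sketched in code. The following is a minimal illustrative sketch only; the category names, dictionary structure and function name are assumptions for illustration and are not part of the disclosure:

```python
# A predetermined policy is modeled here as an ordered list of material
# category slots. Policy ids and category names below are illustrative.
STRATEGIES = {
    "A": ["sales_volume", "product_function", "appearance_design"],
    "B": ["brand", "price", "after_sale_service"],
    "C": ["after_sale_service", "appearance_design", "cost_performance"],
}

def strategy_slots(strategy_id):
    """Return the ordered category slots of a predetermined policy."""
    return list(STRATEGIES[strategy_id])
```

Ordering matters in this representation: policy A and a hypothetical reversed policy would select the same categories but arrange them differently, which is exactly the distinction the predetermined policies are meant to capture.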
The predetermined policy may refer not only to the arrangement order itself; it also includes adopting different arrangement and combination orders for different target customer objects. Different target customer objects refer to target customer groups with different attributes; since the main information of interest differs between target customer groups with different attributes, different arrangement and combination orders can be adopted to highlight the main information each group cares about. Specifically, the arrangement and combination order for target customer objects with different attributes may be selected by means of an AB test, which will be described in detail later.
In this step, in order to combine the deformed static materials described above, each deformed static material is matched, according to its own category, into the slots of a predetermined policy, thereby forming a corresponding static material group. The static material groups generated under different predetermined policies differ particularly obviously; even under the same predetermined policy, different static material groups arise when different static materials are adopted.
Illustratively, according to the example in the above steps, a combination is made as follows. First, the predetermined policy A of the mobile phone commodity (mobile phone sales volume-mobile phone function-mobile phone appearance) and the predetermined policy C (mobile phone after-sale service-mobile phone appearance-mobile phone cost performance) are selected. Next, the materials from the example in S202-1 (the beautification function text information of the commodity mobile phone, "xxx-model mobile phone beautification camera shooting, helps you return to 18 years old"; the appearance picture of the commodity mobile phone; the real-shot vlog short video (video weblog) of the commodity mobile phone; the seller's audio commentary static material on the commodity mobile phone; and the three-consecutive-quarter sales data report of the commodity mobile phone) are combined according to the predetermined policy A and the predetermined policy C to form static material groups. Finally, the corresponding static material group A is composed of: the three-consecutive-quarter sales data report of the commodity mobile phone, the beautification function text information "xxx-model mobile phone beautification camera shooting, helps you return to 18 years old", and the appearance picture of the commodity mobile phone. The corresponding static material group C is composed of: the seller's audio commentary static material on the commodity mobile phone, the appearance picture of the commodity mobile phone, and the real-shot vlog short video (video weblog) of the commodity mobile phone.
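The slot-filling described above can be sketched as follows. This is an illustrative sketch under assumed data structures (a dict from category to candidate materials); the selection of the first candidate per slot is a placeholder for the "more specific selection strategy" the text mentions:

```python
def build_material_group(categorized, slots):
    """Fill each category slot of a predetermined policy with one material.

    categorized: dict mapping a category name to a list of static materials.
    slots: ordered category list of the policy.
    Returns the ordered static material group, or None if a slot is empty
    (i.e. this policy cannot be applied to the available materials).
    """
    group = []
    for category in slots:
        candidates = categorized.get(category, [])
        if not candidates:
            return None
        # Placeholder choice; a finer selection strategy may apply here.
        group.append(candidates[0])
    return group
```

Different policies applied to the same categorized pool thus yield visibly different groups, and a policy whose categories are missing from the pool simply produces no group.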
And S204, integrating the arranged and combined static materials to generate the dynamic video cover.
The dynamic video cover comprises a plurality of shots, and each shot can express, alone or in combination, the selling points and characteristics of the commodity. A video production generation unit matches, transforms and edits the combined static material group into a whole that serves as the dynamic video cover; of course, the use of various general video generation tools for generating dynamic video is not excluded. Because different arrangement and combination orders are adopted for different target customer groups, a variety of dynamic video covers can be generated in this step.
Illustratively, according to the example in step S203 above, the formed static material group A (the three-consecutive-quarter sales data report of the commodity mobile phone, the beautification function text information "xxx-model mobile phone beautification camera", and the appearance picture of the commodity mobile phone) and static material group C (the seller's audio commentary static material on the commodity mobile phone, the appearance picture of the commodity mobile phone, and the real-shot vlog short video (video weblog) of the commodity mobile phone) are input into the video production generation unit, which accordingly generates a dynamic cover A and a dynamic cover C. The dynamic cover A presents content related to the sales volume, functions and appearance of the commodity mobile phone, and the dynamic cover C presents content related to its after-sale service, appearance and cost performance.
Obviously, there are multiple generation strategies for the above dynamic video covers, so that multiple versions of dynamic video covers can be generated for the same video; even under the same generation strategy, multiple fine-tuned versions can arise from different specific material selections. Therefore, after a plurality of dynamic video covers have been generated for the same video main body, their actual effect may need to be evaluated, so that the most effective version is finally selected among the possible versions. For this reason, in the embodiment of the present disclosure, the method may further include the following steps:
referring to fig. 3, fig. 3 is a flowchart illustrating a method for generating a target motion video cover according to an embodiment of the present disclosure.
S301, acquiring the first number of dynamic video covers.
The number of generated dynamic video covers is at least a first number. The first number of dynamic video covers should be understood as a plurality of dynamic video covers; the embodiment does not limit the specific number.
As described above, since the static material of the video main body comes from a wide range of sources and varies in form and content, the number of dynamic video covers that can be formed under the same predetermined policy is variable, as is the number that can be generated under different predetermined policies. In this step, it is assumed that the acquired first number of dynamic video covers were generated according to a plurality of different predetermined policies. In other embodiments, a certain number of dynamic video covers may also be generated according to the same predetermined policy, in which case they are different fine-tuned versions under that policy.
In the initial step of generating dynamic video covers, the number of dynamic video covers generated is a second number greater than the first number.
Illustratively, the predetermined policies for the mobile phone commodity obtained in step S202 are: predetermined policy A: mobile phone sales volume-mobile phone function-mobile phone appearance; predetermined policy B: mobile phone brand-mobile phone price-mobile phone after-sale service; and predetermined policy C: mobile phone after-sale service-mobile phone appearance-mobile phone cost performance. Suppose the total number of dynamic video covers generated under these three policies is 300, that is, the second number is 300. In this step, the number of acquired dynamic video covers is 15, that is, the first number is 15: 5 dynamic video covers of predetermined policy A (mobile phone sales volume-mobile phone function-mobile phone appearance), 5 of predetermined policy B (mobile phone brand-mobile phone price-mobile phone after-sale service), and 5 of predetermined policy C (mobile phone after-sale service-mobile phone appearance-mobile phone cost performance). There may be a number of ways to select the first number of dynamic video covers from the second number of initial dynamic video covers, including using a specially trained machine learning model, expert selection, or even random extraction. The second number of initial dynamic video covers may themselves be automatically generated according to the predetermined policies using a machine learning model.
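The random-extraction variant of this selection can be sketched as below. This is only one of the selection methods named above (a trained model or expert review could replace it); the function name and data shapes are assumptions for illustration:

```python
import random

def select_for_testing(covers_by_strategy, per_strategy, seed=0):
    """Randomly draw a fixed number of covers per predetermined policy.

    covers_by_strategy: dict mapping a policy id to its generated covers
    (together these make up the second number). The result holds the first
    number of covers, spread evenly across policies.
    """
    rng = random.Random(seed)  # fixed seed for reproducible extraction
    selected = {}
    for strategy, covers in covers_by_strategy.items():
        selected[strategy] = rng.sample(covers, min(per_strategy, len(covers)))
    return selected
```

With 100 covers per policy for three policies (second number 300) and `per_strategy=5`, the selection yields 15 covers in total, matching the example's first number.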
S302, extracting, for the first number of dynamic video covers, a plurality of sample customers from the target customer group, and performing a first pre-estimation test.
The target customer group is a customer group with target attributes after the customer group is classified. The target attribute refers to the attribute marked according to the requirement characteristic label of the client.
Before the first pre-estimation test is performed on a customer group having a specific target attribute, a first number of dynamic video covers for testing is selected, by a predetermined method and according to that specific target attribute, from the second number of dynamic video covers, which is greater than the first number.
It should be understood here that the purpose of the dynamic video cover on an e-commerce application platform is to present the most core and most attractive qualities of the commodity to the consumer customer in a short period of time. Different customers are interested in different qualities of the same commodity. For example, some customers are sensitive to price, some have high requirements on after-sale service, and some rely more on soft strengths such as brand influence and sales volume. Therefore, users with different demand characteristics for the same commodity need to be labeled and classified.
The customer groups are classified as follows: the purchasing behavior of each customer on a plurality of shopping application platforms is analyzed, the customer's purchase demand characteristics are labeled (the customer may also label the attention points and demand points of commodities to be purchased), and customers with the same demand characteristic labels are then classified into the same group.
Illustratively, for a certain shopping application platform where the commodity is a mobile phone, customers whose purchase demand label is sales volume are divided into one class of customer group, customers whose purchase demand label is price into another, customers whose purchase demand labels are product function and sales volume into another, and customers whose purchase demand labels are brand, price and process/place-of-origin advantages into yet another. The division can take many forms; the classification process may refer to the related prior art and is not repeated in the present disclosure.
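The grouping rule just described (customers with identical demand characteristic labels fall into the same group) can be sketched as follows; the label names and data shapes are assumptions for illustration:

```python
def group_customers(customers):
    """Group customers whose demand characteristic labels are identical.

    customers: iterable of (customer_id, labels) pairs, where labels is a
    set of demand characteristic labels. frozenset is used as the group key
    so that {"brand", "price"} and {"price", "brand"} land in one group.
    """
    groups = {}
    for customer_id, labels in customers:
        groups.setdefault(frozenset(labels), []).append(customer_id)
    return groups
```

A group keyed by a single label (e.g. sales volume) and a group keyed by a label combination (e.g. brand plus price) coexist naturally in this representation, matching the example's division forms.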
It should be noted here that the target customer group is the customer group that matches the test content of the first pre-estimation test. A plurality of sample customers are extracted from the target customer group to perform the first pre-estimation test.
The first pre-estimation test is a test performed according to the content characteristics of the dynamic video cover. In this step, the dynamic video cover generated according to the predetermined policy is matched with the corresponding target customer group according to its content characteristics, and some sample customers are extracted to perform the first pre-estimation test on the dynamic video cover. The form of the first pre-estimation test is not particularly limited in the present disclosure; for example, it may take the form of an AB test.
Illustratively, according to the example in step S301 above, a first pre-estimation test is performed on the 15 acquired dynamic video covers. Taking the dynamic video covers of predetermined policy A as an example, the first pre-estimation test is performed on the 5 acquired covers of predetermined policy A (mobile phone sales volume-mobile phone function-mobile phone appearance). According to the content characteristics of these covers (mobile phone sales volume, mobile phone function and mobile phone appearance), the matched target customer group is the group whose attention points cover these three characteristics; no weight analysis of the three content characteristics is performed for this target customer group. Suppose the target customer group concerned with these three characteristics contains 150000 customers; 500 sample customers are randomly extracted, and the first pre-estimation test on the content of the dynamic video covers is carried out in the form of an AB test. The 5 covers of mobile phone sales volume-mobile phone function-mobile phone appearance are randomly displayed to the 500 sample customers, who browse and purchase the commodity mobile phone according to personal preference and interest. In the same way, the first pre-estimation test is performed on the 5 dynamic video covers of predetermined policy B (mobile phone brand-mobile phone price-mobile phone after-sale service) and on the 5 dynamic video covers of predetermined policy C (mobile phone after-sale service-mobile phone appearance-mobile phone cost performance).
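The random display step of the AB test can be sketched as below: each sampled customer is assigned one candidate cover uniformly at random. This is a minimal sketch under assumed names; real AB test infrastructure would also log exposures and feedback:

```python
import random

def assign_ab_test(sample_customers, covers, seed=0):
    """Randomly show one candidate cover to each sampled customer (AB test).

    Returns a dict mapping each sample customer to the single cover
    displayed to them during the test period.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    return {customer: rng.choice(covers) for customer in sample_customers}
```

With 500 sample customers and 5 candidate covers, each cover is shown to roughly 100 customers, giving every version comparable exposure in the test.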
And S303, obtaining a target dynamic video cover matched with the target group according to the test feedback data and a preset evaluation standard.
A comprehensive evaluation is made according to the test data fed back by the first pre-estimation test, such as the sample users' browsing duration data during the test period, purchase order data, order payment proportion data, data on users forwarding and recommending the commodity, repeat-purchase data of purchasers, and evaluation data from purchasers.
The predetermined evaluation standard is to score the test data fed back by the first pre-estimation test and to take the dynamic video cover with the highest-ranked composite score as the target dynamic video cover for such users.
Illustratively, according to the example in step S302, the browsing and purchasing status of the commodity mobile phone by the 500 sample customers is tracked through a third-party platform, and among the 5 dynamic video covers of predetermined policy A, the cover with the highest composite performance score is taken as the target dynamic video cover of the mobile phone sales volume-mobile phone function-mobile phone appearance target customer group. In the same way, the target dynamic video cover of the mobile phone brand-mobile phone price-mobile phone after-sale service target customer group can be determined from the 5 covers of predetermined policy B, and that of the mobile phone after-sale service-mobile phone appearance-mobile phone cost performance target customer group from the 5 covers of predetermined policy C.
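The composite scoring can be sketched as a weighted sum of the feedback metrics listed above. The metric names and weights below are illustrative assumptions; the disclosure leaves the exact evaluation standard open:

```python
def composite_score(metrics, weights):
    """Weighted sum of feedback metrics for one candidate cover."""
    return sum(weights[name] * value for name, value in metrics.items())

def pick_target_cover(feedback, weights):
    """Return the cover id whose feedback yields the highest composite score.

    feedback: dict mapping a cover id to its metric dict
    (e.g. browsing duration, order rate).
    """
    return max(feedback, key=lambda cover: composite_score(feedback[cover], weights))
```

Under such a standard, a cover that doubles both browsing duration and order rate cleanly outranks its sibling versions, and the winner per target customer group becomes that group's target dynamic video cover.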
Since the determined target dynamic video covers differ according to the differently classified target customer groups, the dynamic video cover of a commodity to be purchased that each user sees on the shopping application platform is targeted in real time. This increases user stickiness and the click rate and browsing volume of the commodities to be purchased, and further improves the commodities' exposure and the users' purchasing behavior.
Optionally, after the first pre-estimation test data are obtained, the order or weight ratio of the logical expression of the commodity selling points in a predetermined policy can be optimized by combining methods such as CTR (click-through rate) pre-estimation and recommendation algorithms.
Optionally, after the first pre-estimation test data are obtained, methods such as CTR pre-estimation and recommendation algorithms may intervene in the step of generating dynamic video covers from the static materials according to the predetermined policies, so that the predetermined policies on which users perform well are predicted in advance. This can reduce the second number of generated dynamic video covers and, at the same time, greatly reduce the number of dynamic video covers that need to be launched in the first pre-estimation test process.
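The pruning described in this optional step can be sketched as follows. The CTR predictor is represented by an arbitrary scoring callable; this stands in for whatever CTR pre-estimation model or recommendation method is actually used, which the disclosure does not specify:

```python
def prune_by_predicted_ctr(candidate_covers, predict_ctr, keep):
    """Keep only the covers with the highest predicted click-through rate.

    predict_ctr: callable scoring one cover (a stand-in for a trained CTR
    pre-estimation model). Covers are ranked by predicted CTR, descending,
    and only the top `keep` are generated/launched for testing.
    """
    ranked = sorted(candidate_covers, key=predict_ctr, reverse=True)
    return ranked[:keep]
```

Applying this before generation shrinks the second number; applying it before the test launch shrinks the first number, which is the efficiency gain the paragraph describes.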
After determining the target motion video cover, in an embodiment of the present disclosure, the method further includes the following steps:
referring to fig. 4, fig. 4 is a flowchart illustrating a method for generating a trimmed dynamic video cover matched with a target group according to an embodiment of the present application, which can be used in the implementation environment shown in fig. 1;
S401, acquiring the target dynamic video cover.
The target dynamic video cover matched with the target group is acquired. It should be understood here that the target customer group has preliminarily ranked and screened the content characteristics of the dynamic video covers according to its own satisfaction and personal preference; however, no corresponding screening has been performed on the expression form of those content characteristics. Therefore, further optimization and adjustment are required to obtain the fine-tuned dynamic video cover best matched with the target group.
Illustratively, according to the example in the above step S303, the target dynamic video cover of the mobile phone sales volume-mobile phone function-mobile phone appearance target customer group is obtained.
S402, adjusting the target dynamic video covers to obtain a plurality of fine-tuned target dynamic video covers.
Adjusting the target dynamic video cover refers to adjusting the expression form of its content characteristics, so that the content characteristics the target group cares about are presented through different expression forms. The fine-tuning of expression form can be understood as, for example, changing the target dynamic video into the form of a skit, of a commentary, of a vlog diary, of a talk show, of a same-genre imitation, and other such adjustments.
Illustratively, according to the example in step S401, the expression form of the target dynamic video cover of the mobile phone sales volume-mobile phone function-mobile phone appearance target customer group is fine-tuned. The mobile phone target dynamic video cover may be fine-tuned into, for example: a skit-type mobile phone target dynamic video cover, a real-person real-shot vlog-diary-type cover, an expert-commentary-type cover, a multi-user-evaluation talk-show-type cover, and the like; for example, 4 fine-tuned mobile phone commodity target dynamic video covers are obtained.
And S403, extracting a plurality of sample clients from the target client group for the fine-tuned target dynamic video covers, and performing a second estimation test.
The second pre-estimation test is a test performed according to the expression form of the content characteristics of the dynamic video cover. In this step, a plurality of sample customers are again extracted from the target customer group, and the target dynamic video covers with fine-tuned expression forms are released to them for the second pre-estimation test. The form of the second pre-estimation test is not particularly limited in the present disclosure; for example, it may take the form of an AB test.
Illustratively, according to the example in step S402, 300 customers are randomly extracted from the mobile phone sales volume-mobile phone function-mobile phone appearance target customer group as sample customers, and the 4 fine-tuned mobile phone target dynamic video covers are released to them for the second pre-estimation AB test.
S404, obtaining the fine-tuned dynamic video cover matched with the target group according to the test feedback data.
A comprehensive evaluation is made according to the test data fed back by the second pre-estimation test, such as the sample users' browsing duration data during the test period, purchase order data, order payment proportion data, data on users forwarding and recommending the commodity, repeat-purchase data of purchasers, and evaluation data from purchasers. The fine-tuned dynamic video cover with the highest-ranked composite score is taken as the one matched with such users.
Illustratively, according to the example in step S402, the test data of the 4 fine-tuned covers (the skit-type mobile phone target dynamic video cover, the real-person real-shot vlog-diary-type cover, the expert-commentary-type cover, the multi-user-evaluation talk-show-type cover, and so on) are comprehensively analyzed and compared. If, for example, the fine-tuned skit-type mobile phone target dynamic video cover ranks first in composite score, it is taken as the fine-tuned dynamic video cover of the mobile phone sales volume-mobile phone function-mobile phone appearance target customer group, and this cover is displayed to that target customer group.
Optionally, after the second pre-estimation test data are obtained, methods such as CTR pre-estimation and recommendation algorithms may likewise intervene in the step of generating dynamic video covers from the static materials according to the predetermined policies, so as to predict in advance the expression forms on which users perform well. This reduces the number of fine-tuned dynamic video covers that need to be generated and released during testing, thereby further improving the efficiency of the second pre-estimation test.
The second embodiment above provides a method for generating a dynamic video cover; correspondingly, a third embodiment of the present application further provides an apparatus applying the method for generating a dynamic video cover. Since the apparatus embodiment is substantially similar to the method embodiment, it is described relatively simply; for details of the relevant technical features, reference may be made to the corresponding description of the method embodiment provided above. The following description of the apparatus embodiment is merely illustrative.
Referring to fig. 5 to understand the embodiment, fig. 5 is a block diagram of the units of the apparatus provided in this embodiment. As shown in fig. 5, the apparatus provided in this embodiment includes:
a material acquisition unit 501 configured to acquire a still material related to a video main body.
A material processing unit 502 configured to classify the acquired static materials related to the video main body into corresponding static material categories according to their content properties.
The material transformation unit 503 is configured to arrange and combine the static materials according to the obtained static material categories according to a predetermined policy; the predetermined strategy at least comprises adopting different permutation and combination orders aiming at different target client objects.
A video generation unit 504 configured to integrate the combined static material and generate the dynamic video cover.
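The cooperation of the four units of fig. 5 can be sketched as a simple pipeline. This is a minimal illustrative sketch only; the class name, callable interfaces and placeholder implementations in the usage below are assumptions, not the apparatus itself:

```python
class DynamicCoverPipeline:
    """Wires the four units of fig. 5: 501 -> 502 -> 503 -> 504."""

    def __init__(self, acquire, classify, arrange, generate):
        self.acquire = acquire      # material acquisition unit 501
        self.classify = classify    # material processing unit 502
        self.arrange = arrange      # material transformation unit 503
        self.generate = generate    # video generation unit 504

    def run(self, video_subject, strategy):
        materials = self.acquire(video_subject)
        categorized = self.classify(materials)
        group = self.arrange(categorized, strategy)
        return self.generate(group)
```

Running the same pipeline with different `strategy` arguments (the different predetermined policies) is what yields the multiple cover versions described above.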
Optionally, according to different predetermined policies, a multi-version dynamic video cover can be generated from the obtained static material related to the video main body; multiple different versions of the dynamic video cover may also be generated for the same predetermined strategy due to different selections of the static material.
Optionally, the step of integrating the arranged and combined static materials to generate the dynamic video cover includes:
and before the dynamic video cover is generated, carrying out deformation processing on the expression method of the static material related to the video main body.
Optionally, the deforming the expression method of the static material related to the video main body includes: beautifying the acquired static material by selecting a template using a design style.
Optionally, the number of the dynamic video covers is at least a first number, and the method further comprises:
acquiring the first number of dynamic video covers;
extracting a plurality of sample clients from the target client group for the dynamic video covers with the first quantity, and performing a first pre-estimation test;
and obtaining a target dynamic video cover matched with the target group according to the test feedback data and a preset evaluation standard.
Optionally, after determining the target dynamic video cover, further performing the following steps:
acquiring the target dynamic video cover;
adjusting the target dynamic video covers to obtain a plurality of fine-tuned target dynamic video covers;
extracting a plurality of sample clients from the target client group for the plurality of fine-tuned target dynamic video covers, and performing a second estimation test;
and obtaining the fine-tuned dynamic video cover matched with the target group according to the test feedback data.
Optionally, the target customer group is a customer group with a target attribute obtained after classifying the customer group.
Optionally, the classifying the customer group includes: and dividing the customers with the same requirement characteristic label into the same class of customer groups.
Optionally, the target attribute is an attribute labeled according to a requirement characteristic label of the client.
Optionally, the number of the dynamic video covers is a second number greater than the first number, and the method further includes: before the first pre-estimation test is performed on a customer group having a specific target attribute, selecting, by a predetermined method and according to that specific target attribute, a first number of dynamic video covers for testing from the second number of dynamic video covers, which is greater than the first number.
In the foregoing embodiment, a method for generating a dynamic video cover and an application apparatus of the method for generating a dynamic video cover are provided, and in addition, a fourth embodiment of the present application further provides an electronic device, where the embodiment of the electronic device is as follows:
please refer to fig. 6 for understanding the present embodiment, fig. 6 is a schematic view of an electronic device provided in the present embodiment.
As shown in fig. 6, the electronic apparatus includes: a processor 601; a memory 602;
a memory 602 for storing an application program, which when read and executed by the processor 601, performs the following operations:
static material associated with a video subject is obtained.
Classifying the obtained static materials related to the video main body into corresponding static material categories according to the content properties;
according to a preset strategy, arranging and combining the static materials according to the obtained static material categories; the preset strategy at least comprises that different permutation and combination sequences are adopted for different target client objects;
and integrating the combined static materials to generate the dynamic video cover.

Optionally, according to different predetermined policies, a multi-version dynamic video cover can be generated from the obtained static material related to the video main body; multiple different versions of the dynamic video cover may also be generated for the same predetermined strategy due to different selections of the static material.
Optionally, the integrating the combined static material to generate the dynamic video cover includes:
and before the dynamic video cover is generated, carrying out deformation processing on the expression method of the static material related to the video main body.
Optionally, the deforming the expression method of the static material related to the video main body includes: beautifying the acquired static material by selecting a template using a design style.
Optionally, the number of the dynamic video covers is at least a first number, and the method further comprises:
acquiring the first number of dynamic video covers;
extracting a plurality of sample clients from the target client group for the first number of dynamic video covers, and performing a first estimation test;
and obtaining a target dynamic video cover matched with the target group according to the test feedback data and a preset evaluation standard.
Optionally, after determining the target dynamic video cover, further performing the following steps:
acquiring the target dynamic video cover;
adjusting the target dynamic video covers to obtain a plurality of fine-tuned target dynamic video covers;
extracting a plurality of sample clients from the target client group for the plurality of fine-tuned target dynamic video covers, and performing a second estimation test;
and obtaining the fine-tuned dynamic video cover matched with the target group according to the test feedback data.
Optionally, the target customer group is a customer group with a target attribute obtained after classifying the customer group.
Optionally, the classifying the customer group includes: and dividing the customers with the same requirement characteristic label into the same class of customer groups.
Optionally, the target attribute is an attribute labeled according to a requirement characteristic label of the client.
Optionally, the number of the dynamic video covers is a second number greater than the first number, and the method further includes: before the first pre-estimation test is performed on a customer group having a specific target attribute, selecting, by a predetermined method and according to that specific target attribute, a first number of dynamic video covers for testing from the second number of dynamic video covers, which is greater than the first number.
To facilitate understanding of the embodiments, it is noted that the dynamic video cover generation method provided by the embodiments of the present disclosure is generally executed by a computer device with certain computing capability, including: a terminal device, which may be a User Equipment (UE), a mobile device, a user terminal, a handheld device, a computing device, a vehicle-mounted device, or a wearable device; or a server or other processing device. In some possible implementations, the dynamic video cover generation method may be implemented by a processor calling computer-readable instructions stored in a memory.
A fifth embodiment of the present application, corresponding to the above-described embodiments, provides a storage medium having one or more computer instructions stored thereon; when executed by a processor, the instructions implement the method of the second embodiment.
Fig. 7 is a schematic structural diagram of the storage medium provided in the fifth embodiment of the present application. As shown in Fig. 7, the storage medium 700 of the present application includes: a computer-readable storage medium 701 and a processor 702.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, without departing from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method for generating a dynamic video cover, comprising:
acquiring static materials related to a video subject;
classifying the acquired static materials related to the video subject into corresponding static material categories according to their content properties;
arranging and combining the static materials according to a predetermined strategy and the obtained static material categories, wherein the predetermined strategy at least comprises adopting different arrangement and combination orders for different target customer objects;
and integrating the arranged and combined static materials to generate the dynamic video cover.
2. The method of claim 1, further comprising: generating, from the acquired static materials related to the video subject, multiple versions of the dynamic video cover according to different predetermined strategies; and, for the same predetermined strategy, generating a plurality of different versions of the dynamic video cover through different selections of static materials.
3. The method of claim 1, wherein integrating the arranged and combined static materials to generate the dynamic video cover comprises:
before the dynamic video cover is generated, transforming the presentation of the static material related to the video subject.
4. The method of claim 3, wherein transforming the presentation of the static material related to the video subject comprises: beautifying the acquired static material by applying a template with a selected design style.
5. The method of claim 1, wherein the number of dynamic video covers is at least a first number, the method further comprising:
acquiring the first number of dynamic video covers;
extracting a plurality of sample clients from the target customer group for the first number of dynamic video covers, and performing a first estimation test;
and obtaining, according to the test feedback data and a preset evaluation standard, a target dynamic video cover matched with the target customer group.
6. The method of claim 5, wherein after the target dynamic video cover is determined, the following steps are further performed:
acquiring the target dynamic video cover;
adjusting the target dynamic video cover to obtain a plurality of fine-tuned target dynamic video covers;
extracting a plurality of sample clients from the target customer group for the plurality of fine-tuned target dynamic video covers, and performing a second estimation test;
and obtaining, according to the test feedback data, the fine-tuned dynamic video cover matched with the target customer group.
7. The method according to claim 5 or 6, wherein the target customer group is a customer group with a target attribute obtained by classifying the customer base.
8. The method of claim 7, wherein classifying the customer base comprises: dividing customers with the same demand feature label into the same class of customer group.
9. The method of claim 7, wherein the target attribute is an attribute labeled according to the demand feature label of the customer.
10. The method of claim 5 or 6, wherein the number of dynamic video covers is a second number greater than the first number, the method further comprising: before the first estimation test is performed on a customer group having a specific target attribute, selecting, by a predetermined method and according to the specific target attribute, the first number of dynamic video covers for testing from the second number of dynamic video covers.
11. An apparatus for dynamic video cover generation, comprising:
a material acquisition unit configured to acquire static materials related to a video subject;
a material processing unit configured to classify the acquired static materials related to the video subject into corresponding static material categories according to their content properties;
a material transformation unit configured to arrange and combine the static materials according to a predetermined strategy and the obtained static material categories, wherein the predetermined strategy at least comprises adopting different arrangement and combination orders for different target customer objects;
and a video generation unit configured to integrate the arranged and combined static materials and generate the dynamic video cover.
12. An electronic device for dynamic video cover generation, comprising: a memory and a processor;
the memory is configured to store program instructions;
and the processor is configured to invoke the program instructions in the memory to perform the method of any one of claims 1 to 10.
13. A storage medium for dynamic video cover generation, storing a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 10.
CN202210519403.1A 2022-05-13 2022-05-13 Dynamic video cover generation method Pending CN114996553A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210519403.1A CN114996553A (en) 2022-05-13 2022-05-13 Dynamic video cover generation method
PCT/CN2023/093327 WO2023217194A1 (en) 2022-05-13 2023-05-10 Dynamic video cover generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210519403.1A CN114996553A (en) 2022-05-13 2022-05-13 Dynamic video cover generation method

Publications (1)

Publication Number Publication Date
CN114996553A true CN114996553A (en) 2022-09-02

Family

ID=83028045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210519403.1A Pending CN114996553A (en) 2022-05-13 2022-05-13 Dynamic video cover generation method

Country Status (2)

Country Link
CN (1) CN114996553A (en)
WO (1) WO2023217194A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023217194A1 (en) * 2022-05-13 2023-11-16 阿里巴巴(中国)有限公司 Dynamic video cover generation method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5789734B1 (en) * 2015-02-25 2015-10-07 楽天株式会社 Information processing method, program, storage medium, and information processing apparatus
CN109729426B (en) * 2017-10-27 2022-03-01 优酷网络技术(北京)有限公司 Method and device for generating video cover image
CN111784431A (en) * 2019-11-18 2020-10-16 北京沃东天骏信息技术有限公司 Video generation method, device, terminal and storage medium
CN111369434B (en) * 2020-02-13 2023-08-25 广州酷狗计算机科技有限公司 Method, device, equipment and storage medium for generating spliced video covers
CN111984821A (en) * 2020-06-22 2020-11-24 汉海信息技术(上海)有限公司 Method and device for determining dynamic cover of video, storage medium and electronic equipment
CN114996553A (en) * 2022-05-13 2022-09-02 阿里巴巴(中国)有限公司 Dynamic video cover generation method


Also Published As

Publication number Publication date
WO2023217194A1 (en) 2023-11-16

Similar Documents

Publication Publication Date Title
Drott Music as a Technology of Surveillance
CA2700030C (en) Touchpoint customization system
Napoli Audience evolution: New technologies and the transformation of media audiences
US9420319B1 (en) Recommendation and purchase options for recommemded products based on associations between a user and consumed digital content
US8799814B1 (en) Automated targeting of content components
US8126763B2 (en) Automatic generation of trailers containing product placements
Marshall Do people value recorded music?
Blythe et al. Critical methods and user generated content: the iPhone on YouTube
US9213989B2 (en) Product catalog dynamically tailored to user-selected media content
US20150095782A1 (en) System and methods for providing user generated video reviews
KR20190096952A (en) System and method for streaming personalized media content
FR3016459A1 (en)
CA2955707C (en) Digital consumer data model and customer analytic record
CN108140041A (en) It is clustered for the viewing time of video search
US20140344070A1 (en) Context-aware video platform systems and methods
CN113742567A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
US10136189B2 (en) Method and system for re-aggregation and optimization of media
CN113535991A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
WO2023217194A1 (en) Dynamic video cover generation method
CN115203539A (en) Media content recommendation method, device, equipment and storage medium
CN114862516A (en) Document recommendation method, storage medium, and program product
WO2016125166A1 (en) Systems and methods for analyzing video and making recommendations
US20150227970A1 (en) System and method for providing movie file embedded with advertisement movie
CN110198460A (en) Choosing method and device, storage medium, the electronic device of media information
US20220038757A1 (en) System for Real Time Internet Protocol Content Integration, Prioritization and Distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination