CN115209232A - Video processing method and device, electronic equipment and storage medium - Google Patents

Video processing method and device, electronic equipment and storage medium

Info

Publication number
CN115209232A
CN115209232A (application CN202211113839.7A)
Authority
CN
China
Prior art keywords
preset
video
videos
text information
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211113839.7A
Other languages
Chinese (zh)
Other versions
CN115209232B (en)
Inventor
李银辉
葛姝悦
张玕
胡柳婷
张祎玢
吴帆
刘兆峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202211113839.7A priority Critical patent/CN115209232B/en
Publication of CN115209232A publication Critical patent/CN115209232A/en
Application granted granted Critical
Publication of CN115209232B publication Critical patent/CN115209232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to a video processing method and apparatus, an electronic device, and a storage medium, and relates to the technical field of the internet. The method comprises the following steps: performing preset processing on a first video to obtain a plurality of first key pictures, wherein one first key picture comprises at least one of item information of an item corresponding to the first video and text information of the item corresponding to the first video; acquiring style templates of a plurality of second videos, wherein the type of the item corresponding to each second video is the same as the type of the item corresponding to the first video; and processing the plurality of first key pictures based on the style template of each second video in the plurality of second videos to generate a plurality of target videos, wherein one style template corresponds to one target video. With the method and apparatus, the server can rapidly generate target videos of various styles, which improves video generation efficiency.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Currently, a server may process one original video to generate a plurality of videos.
However, to generate the multiple videos, the user must trigger the user equipment multiple times before the server can modify the original video one copy at a time, which makes the video generation process complicated and, in turn, the video generation efficiency low.
Disclosure of Invention
The present disclosure provides a video processing method, a video processing apparatus, an electronic device, and a storage medium, which solve the following technical problem in the related art: during the generation of multiple videos, the user must trigger the user device multiple times before the server can modify the original video one copy at a time, so the video generation process is complicated and the video generation efficiency is low.
The technical solutions of the embodiments of the present disclosure are as follows:
According to a first aspect of embodiments of the present disclosure, a video processing method is provided. The method may comprise the following steps: performing preset processing on a first video to obtain a plurality of first key pictures, wherein one first key picture comprises at least one of item information of an item corresponding to the first video and text information of the item corresponding to the first video; acquiring style templates of a plurality of second videos, wherein the type of the item corresponding to each second video is the same as the type of the item corresponding to the first video; and processing the plurality of first key pictures based on the style template of each second video in the plurality of second videos to generate a plurality of target videos, wherein one style template corresponds to one target video.
Optionally, the preset second video is one of the plurality of second videos, and the preset target video is the target video corresponding to the style template of the preset second video, where the video processing method further includes: acquiring slice header information of the preset second video and slice trailer information of the preset second video. Processing the plurality of first key pictures based on the style template of the preset second video to generate the preset target video specifically includes: processing the plurality of first key pictures based on the style template of the preset second video to obtain a plurality of second key pictures, wherein the style template of each second key picture in the plurality of second key pictures is the same as the style template of the preset second video; and combining the slice header information of the preset second video, the slice trailer information of the preset second video, and the plurality of second key pictures to generate the preset target video.
Optionally, the video processing method further includes: when the recommendation parameter of a preset target video is smaller than or equal to a parameter threshold value, determining whether a preset element meets a preset condition, wherein the preset element is an element included in a preset second key picture, the preset second key picture is a key picture included in the preset target video, the preset target video is one of the target videos, and the recommendation parameter is used for representing a recommendation effect of recommending the preset target video to a user account; and when the preset element does not meet the preset condition, processing the preset element according to a preset rule.
Optionally, the video processing method further includes: determining a first parameter of a preset second video, a second parameter of the preset second video and a third parameter of the preset second video, wherein the preset second video is one of the plurality of second videos, and the first parameter of the preset second video is used for representing the interest degree of the user account in the preset second video; the second parameter of the preset second video is used for representing the interest degree of the user account in the article corresponding to the preset second video; the third parameter of the preset second video is used for representing the similarity degree of the preset second video and other generated videos; and determining the recommended parameters of the preset target video according to the first parameters of the preset second video, the second parameters of the preset second video and the third parameters of the preset second video.
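The paragraph above combines three per-video parameters into one recommendation parameter but does not fix a formula. The following Python sketch shows one hypothetical aggregation; the weights and the idea of rewarding low similarity are illustrative assumptions, not the embodiment's actual computation:

```python
def recommendation_parameter(p_interest_video, p_interest_item, p_similarity,
                             weights=(0.4, 0.4, 0.2)):
    """Hypothetical aggregation of the three parameters in [0, 1]:
    higher interest in the video or item raises the score, while higher
    similarity to other generated videos lowers it."""
    w1, w2, w3 = weights
    return w1 * p_interest_video + w2 * p_interest_item + w3 * (1.0 - p_similarity)

# Example: strong interest in the video (0.8) and item (0.6), but the
# preset second video is very similar (0.9) to other generated videos.
score = recommendation_parameter(0.8, 0.6, 0.9)
```

A score computed this way could then be compared against the parameter threshold mentioned in the preceding paragraph.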
Optionally, the preset element includes the text information, the text information includes first text information and second text information, the preset condition includes a first preset condition, a second preset condition and a third preset condition, the first preset condition is used to determine whether the text information is located within a first preset region, the second preset condition is used to determine whether a definition of the text information is greater than or equal to a definition threshold, the third preset condition is used to determine whether a distance between the first text information and the second text information is less than or equal to a distance threshold, and the video processing method further includes: and when the text information is located outside the first preset area, or the definition of the text information is smaller than the definition threshold, or the distance between the first text information and the second text information is larger than the distance threshold, determining that the preset element does not meet the preset condition.
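The three text conditions above (inside the first preset region, definition above a threshold, text boxes close enough) can be sketched as follows. The rectangle representation `(x1, y1, x2, y2)`, the reading of "within the region" as full containment, and the center-to-center distance are all assumptions the patent text does not spell out:

```python
def inside(box, region):
    # Full containment of an axis-aligned box (x1, y1, x2, y2) in a region.
    return (box[0] >= region[0] and box[1] >= region[1]
            and box[2] <= region[2] and box[3] <= region[3])

def text_meets_conditions(box1, box2, region, definition,
                          definition_threshold=0.5, distance_threshold=100.0):
    """First condition: both text boxes lie within the first preset region.
    Second condition: the text definition meets the definition threshold.
    Third condition: the (center-to-center) distance between the first and
    second text information is at most the distance threshold."""
    def center(b):
        return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
    c1, c2 = center(box1), center(box2)
    distance = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5
    return (inside(box1, region) and inside(box2, region)
            and definition >= definition_threshold
            and distance <= distance_threshold)

# Two text boxes well inside a 200x200 region, 30 px apart.
ok = text_meets_conditions((10, 10, 50, 30), (10, 40, 50, 60),
                           (0, 0, 200, 200), definition=0.8)
bad = text_meets_conditions((10, 10, 50, 30), (10, 40, 50, 60),
                            (0, 0, 200, 200), definition=0.3)
```

If any of the three checks fails, the preset element does not satisfy the preset condition, matching the disjunction in the paragraph above.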
Optionally, the preset element includes a preset control, the preset control includes a first control and a second control, the preset condition includes a fourth preset condition and a fifth preset condition, the fourth preset condition is used to determine whether the definition of the preset control is greater than or equal to a definition threshold, the fifth preset condition is used to determine whether an overlapping area exists between the first control and the second control, and the video processing method further includes: and when the definition of the preset control is smaller than the definition threshold or an overlapping area exists between the first control and the second control, determining that the preset element does not meet the preset condition.
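The fifth condition above amounts to an axis-aligned rectangle overlap test between the two controls. A minimal Python sketch, again assuming a `(x1, y1, x2, y2)` bounding-box representation that the patent does not prescribe:

```python
def overlaps(a, b):
    # Two axis-aligned rectangles (x1, y1, x2, y2) overlap unless one lies
    # entirely to the left/right of, or entirely above/below, the other.
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def controls_meet_conditions(ctrl1, ctrl2, definition, definition_threshold=0.5):
    """Fourth condition: the control definition meets the definition threshold.
    Fifth condition: no overlapping area exists between the two controls."""
    return definition >= definition_threshold and not overlaps(ctrl1, ctrl2)
```

When either check fails, the preset element does not satisfy the preset condition and would be processed according to the preset rule.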
Optionally, the video processing method further includes: and when the recommended parameter of the first video is smaller than or equal to the parameter threshold, sending prompt information to the client, wherein the prompt information comprises a preset reason.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing apparatus. The apparatus may include: the device comprises a processing module and an acquisition module; the processing module is configured to perform preset processing on a first video to obtain a plurality of first key pictures, wherein one first key picture comprises at least one of item information of an item corresponding to the first video and text information of the item corresponding to the first video; the acquisition module is configured to acquire style templates of a plurality of second videos, wherein the types of the articles corresponding to the second videos are the same as the type of the article corresponding to the first video; the processing module is further configured to process the plurality of first key pictures based on a style template of each of the plurality of second videos to generate a plurality of target videos, wherein one style template corresponds to one target video.
Optionally, the preset second video is one of the plurality of second videos, and the preset target video is a target video corresponding to a style template of the preset second video; the obtaining module is further configured to obtain the slice header information of the preset second video and the slice trailer information of the preset second video; the processing module is specifically configured to process the plurality of first key pictures based on the style template of the preset second video to obtain a plurality of second key pictures, wherein the style template of each second key picture in the plurality of second key pictures is the same as the style template of the preset second video; the processing module is specifically configured to combine the slice header information of the preset second video, the slice trailer information of the preset second video, and the plurality of second key pictures to generate the preset target video.
Optionally, the video processing apparatus further comprises a determining module; the determining module is configured to determine whether a preset element meets a preset condition when a recommendation parameter of a preset target video is smaller than or equal to a parameter threshold, where the preset element is an element included in a preset second key picture, the preset second key picture is a key picture included in the preset target video, the preset target video is one of the plurality of target videos, and the recommendation parameter is used for representing a recommendation effect of recommending the preset target video to a user account; the processing module is further configured to process the preset element according to a preset rule when the preset element does not meet the preset condition.
Optionally, the determining module is further configured to determine a first parameter of a preset second video, a second parameter of the preset second video, and a third parameter of the preset second video, where the preset second video is one of the plurality of second videos, and the first parameter of the preset second video is used to represent a degree of interest of the user account in the preset second video; the second parameter of the preset second video is used for representing the interest degree of the user account in the article corresponding to the preset second video; the third parameter of the preset second video is used for representing the similarity degree of the preset second video and other generated videos; and determining the recommended parameters of the preset target video according to the first parameters of the preset second video, the second parameters of the preset second video and the third parameters of the preset second video.
Optionally, the preset element includes the text information, the text information includes first text information and second text information, the preset condition includes a first preset condition, a second preset condition and a third preset condition, the first preset condition is used to determine whether the text information is located within a first preset area, the second preset condition is used to determine whether the definition of the text information is greater than or equal to a definition threshold, and the third preset condition is used to determine whether a distance between the first text information and the second text information is less than or equal to a distance threshold; the determining module is further configured to determine that the preset element does not satisfy the preset condition when the text information is located outside the first preset region, or the definition of the text information is smaller than the definition threshold, or the distance between the first text information and the second text information is larger than the distance threshold.
Optionally, the preset element includes a preset control, the preset control includes a first control and a second control, the preset condition includes a fourth preset condition and a fifth preset condition, the fourth preset condition is used to determine whether the definition of the preset control is greater than or equal to a definition threshold, and the fifth preset condition is used to determine whether an overlapping area exists between the first control and the second control; the determining module is further configured to determine that the preset element does not satisfy the preset condition when the definition of the preset control is smaller than the definition threshold or an overlapping area exists between the first control and the second control.
Optionally, the video processing apparatus further comprises a sending module; the sending module is configured to send prompt information to the client when the recommended parameter of the first video is smaller than or equal to a parameter threshold, wherein the prompt information comprises a preset reason.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, which may include: a processor and a memory configured to store processor-executable instructions; wherein the processor is configured to execute the instructions to implement any of the above-described optional video processing methods of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having instructions stored thereon, which, when executed by a processor of an electronic device, enable the electronic device to perform any one of the above-mentioned optional video processing methods of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when run on a processor of an electronic device, cause the electronic device to perform the optional video processing method of any one of the first aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
Based on any of the above aspects, in the present disclosure, the server may perform preset processing on the first video to obtain a plurality of first key pictures and acquire the style templates of a plurality of second videos; the server may then process the plurality of first key pictures based on the style template of each of the plurality of second videos to generate a plurality of target videos. Because the plurality of target videos are obtained by the server processing the first video (specifically, the plurality of first key pictures included in the first video) based on the style templates of the plurality of second videos, and those style templates differ from one another, the generated target videos are diverse. In other words, the server can quickly generate target videos of various styles, which improves the efficiency of video generation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating a video processing system provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating a video processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart diagram illustrating a further video processing method provided by the embodiment of the present disclosure;
fig. 4 is a schematic flow chart illustrating a further video processing method provided by the embodiment of the present disclosure;
fig. 5 is a schematic flow chart diagram illustrating another video processing method provided by the embodiment of the present disclosure;
fig. 6 is a schematic flow chart illustrating a further video processing method provided by the embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating a preset second key picture provided in an embodiment of the present disclosure;
fig. 8 is a schematic flow chart illustrating a further video processing method provided by the embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating another preset second key picture provided in an embodiment of the present disclosure;
fig. 10 is a schematic diagram illustrating another preset second key picture provided in the embodiment of the present disclosure;
fig. 11 is a schematic flow chart illustrating a further video processing method provided by the embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a video processing apparatus provided in an embodiment of the present disclosure;
fig. 13 shows a schematic structural diagram of another video processing apparatus provided in an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
It should be noted that, the user information (including but not limited to user device information, user personal information, user behavior information, etc.) and data (including but not limited to style templates of a plurality of second videos, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
In the related art, a server can process one original video to generate a plurality of videos. However, during the generation of the plurality of videos, the user must trigger the user equipment multiple times before the server can modify the original video one copy at a time, so the video generation process is complicated and the video generation efficiency is low.
Based on this, the embodiments of the present disclosure provide a video processing method, because the plurality of target videos are obtained by processing, by the server, a first video (specifically, a plurality of first key pictures included in the first video) based on the style templates of a plurality of second videos, and the style templates of the plurality of second videos are different, the generated plurality of target videos can have diversity, that is, the server can quickly generate target videos of various styles, thereby improving the efficiency of video generation.
The video processing method, the video processing apparatus, the electronic device, and the storage medium provided by the embodiments of the present disclosure are applied to video processing (and video generation) scenarios. After the server performs preset processing on the first video to obtain a plurality of first key pictures, the method provided by the embodiments of the present disclosure may be used to acquire style templates of a plurality of second videos and to process the plurality of first key pictures based on the style template of each of the plurality of second videos to generate a plurality of target videos.
The following provides an exemplary description of a video processing method according to an embodiment of the present disclosure with reference to the accompanying drawings:
fig. 1 is a schematic view of a video processing system according to an embodiment of the present disclosure, as shown in fig. 1, the video processing system may include a terminal 101 and a server 102, and the terminal 101 may establish a connection with the server 102 through a wired network or a wireless network.
The terminal 101 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, or a virtual reality (VR) device; the present disclosure does not particularly limit the specific form of the terminal 101. The terminal 101 may perform human-computer interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, a handwriting device, and the like.
The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, big data, and artificial intelligence platforms.
Specifically, the server 102 may generate a plurality of videos and recommend the plurality of videos to the terminal 101.
The user account corresponding to the terminal 101 (which may be understood as the user account using the terminal 101) may click and view the plurality of videos.
In the following embodiments, an electronic device that executes a video processing method provided by the embodiments of the present disclosure is taken as an example of a server, and the video processing method provided by the embodiments of the present disclosure is described.
As shown in fig. 2, the video processing method provided by the embodiment of the present disclosure may include S101-S103.
S101, the server conducts preset processing on the first video to obtain a plurality of first key pictures.
Wherein, a first key picture comprises at least one of the item information of the item corresponding to the first video and the text information of the item corresponding to the first video.
It should be understood that the server performs the preset processing on the first video to obtain a plurality of video frames (equivalently, a plurality of pictures, with one video frame corresponding to one picture). The server may then determine the plurality of first key pictures from the plurality of pictures, i.e., the pictures that include at least one of the item information of the item corresponding to the first video and the text information of that item. Optionally, the preset processing may be a frame extraction process.
In the embodiment of the present disclosure, the item corresponding to the first video may also be understood as an item included in the first video. The item information of the item corresponding to the first video may include an identifier of the item corresponding to the first video, a form of the item corresponding to the first video, and the like. The text information of the article corresponding to the first video can explain the article corresponding to the first video; specifically, the text information may include a name of the item corresponding to the first video, a function of the item corresponding to the first video, activity details of the item corresponding to the first video, and the like. It can be understood that the text information can help the user account to quickly acquire key information of an article corresponding to the first video, so that the user account triggers a behavior related to the first video.
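S101 can be sketched as a frame-sampling pass followed by a filter that keeps only frames carrying item information or text information. The dict-based frame representation, the field names, and the sampling interval are illustrative assumptions; the embodiment does not fix a concrete data model:

```python
def extract_key_pictures(frames, frame_interval=5):
    """The preset (frame extraction) processing of S101: sample every
    `frame_interval`-th frame, then keep the frames that include item
    information or text information of the item, i.e. the first key pictures."""
    sampled = frames[::frame_interval]
    return [f for f in sampled if f.get("item_info") or f.get("text_info")]

# A toy first video: ten frames, two of which show the item.
first_video = [{"idx": i} for i in range(10)]
first_video[0]["item_info"] = "item-123"          # hypothetical item identifier
first_video[5]["text_info"] = "50% off this week"  # hypothetical activity detail

key_pictures = extract_key_pictures(first_video, frame_interval=5)
```

With this toy input, both sampled frames carry information about the item, so both are retained as first key pictures.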
S102, the server obtains style templates of a plurality of second videos.
And the types of the articles corresponding to the plurality of second videos are the same as the types of the articles corresponding to the first videos.
In the embodiment of the present disclosure, a style template of a video may represent a position of item information of an item corresponding to the video in the video, a position of text information included in the video, and the like. Specifically, the position of the item information in the video may represent an area where the item information is located in a certain picture included in the video.
In an alternative implementation manner, the server may store a style template of each of the plurality of second videos in a database of the server, and the server may obtain the style template of each of the second videos from the database.
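S102 can be sketched as a lookup that selects, from the server-side store, the style templates of second videos whose item type matches the first video's. The in-memory list standing in for the database and all field names are assumptions for illustration:

```python
def get_style_templates(database, first_video_item_type):
    """Return the style template of every second video whose corresponding
    item has the same type as the item corresponding to the first video."""
    return [
        video["style_template"]
        for video in database
        if video["item_type"] == first_video_item_type
    ]

# Hypothetical stored second videos; a style template records positions of
# item information and text information within the video's pictures.
database = [
    {"id": "v1", "item_type": "shoes", "style_template": {"text_pos": "top"}},
    {"id": "v2", "item_type": "shoes", "style_template": {"text_pos": "bottom"}},
    {"id": "v3", "item_type": "bags",  "style_template": {"text_pos": "left"}},
]
templates = get_style_templates(database, "shoes")
```

Only the two "shoes" videos match, so two style templates are retrieved.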
S103, the server processes the first key pictures based on the style template of each second video in the second videos to generate a plurality of target videos.
Wherein one style template corresponds to one target video.
It should be understood that the process in which the server processes the plurality of first key pictures based on the style template of one second video (e.g., the preset second video) to generate one target video (e.g., the preset target video) may be understood as the server replacing the style template of each first key picture (i.e., the style template of the first video) with the style template of the preset second video. That is, the style template of each replaced first key picture is the same as the style template of the preset second video.
In one implementation manner of the embodiment of the present disclosure, the server may combine the plurality of first key pictures and other pictures (i.e., pictures other than the plurality of first key pictures in the plurality of pictures, which may be understood as pictures that do not include the item information of the item corresponding to the first video or the text information of the item corresponding to the first video) to generate the target video.
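The template replacement and the combination described in S103 can be sketched as follows; treating a target video as a list of picture dicts and a style template as a dict are simplifying assumptions, not the embodiment's actual format:

```python
def apply_style_template(key_pictures, style_template):
    # Replace each first key picture's style template (the first video's
    # style) with the second video's style template.
    return [{**pic, "style": style_template} for pic in key_pictures]

def generate_target_videos(key_pictures, other_pictures, style_templates):
    """One style template -> one target video: the re-styled key pictures
    combined with the remaining, unmodified pictures of the first video."""
    return [
        {"frames": apply_style_template(key_pictures, template) + other_pictures,
         "style": template}
        for template in style_templates
    ]

key_pictures = [{"idx": 0, "item_info": "item-123"}]
other_pictures = [{"idx": 1}, {"idx": 2}]  # pictures without item/text info
targets = generate_target_videos(key_pictures, other_pictures,
                                 [{"text_pos": "top"}, {"text_pos": "bottom"}])
```

Two style templates yield two target videos, each three pictures long, differing only in the style applied to the key picture.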
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S101-S103, the server may perform preset processing on the first video to obtain a plurality of first key pictures, and obtain a style template of a plurality of second videos, and then the server may process the plurality of first key pictures based on the style template of each of the plurality of second videos to generate a plurality of target videos. In the embodiment of the present disclosure, because the plurality of target videos are obtained by processing, by the server, a first video (specifically, a plurality of first key pictures included in the first video) based on the style templates of a plurality of second videos, and the style templates of the plurality of second videos are different, the plurality of generated target videos can have diversity, that is, the server can quickly generate target videos of various styles, thereby improving the efficiency of video generation.
In an implementation manner of the embodiment of the present disclosure, the preset second video is one of the plurality of second videos, and the preset target video is a target video corresponding to a style template of the preset second video. With reference to fig. 2, as shown in fig. 3, the video processing method provided by the embodiment of the present disclosure further includes S104.
S104, the server acquires the slice header information of the preset second video and the slice trailer information of the preset second video.
It should be understood that the slice header information may be understood as a first segment of the preset second video, and the first segment may include one or more video frames of the preset second video. The slice trailer information may be understood as a last segment of the preset second video, and the last segment may also include one or more video frames of the preset second video.
Continuing with fig. 3, the server processes the plurality of first key pictures based on the style template of the preset second video to generate a preset target video, specifically including S1031-S1032.
And S1031, the server processes the plurality of first key pictures based on a style template of a preset second video to obtain a plurality of second key pictures.
And the style template of each second key picture in the plurality of second key pictures is the same as the style template of the preset second video.
With reference to the description of the above embodiment, it should be understood that the server processes the plurality of first key pictures based on the style template of the preset second video, that is, replaces the style template of each of the plurality of first key pictures (i.e., the style template of the above first video) with the style template of the preset second video to generate the plurality of second key pictures.
S1032, the server combines the slice header information of the preset second video, the slice trailer information of the preset second video, and the plurality of second key pictures to generate the preset target video.
In an optional implementation manner, when the first video does not have slice header information and slice trailer information, the server may determine slice header information of a preset second video as slice header information of the preset target video, determine slice trailer information of the preset second video as slice trailer information of the preset target video, and combine the plurality of second key pictures to generate the preset target video.
In another optional implementation manner, when the first video has slice header information and slice trailer information, the server may replace the slice header information of the first video with the slice header information of the preset second video, replace the slice trailer information of the first video with the slice trailer information of the preset second video, and then aggregate the plurality of second key pictures to generate the preset target video.
It should be noted that, the sequence of the plurality of second key pictures included in the preset target video is the same as the sequence of the plurality of first key pictures included in the first video.
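The assembly in S1031-S1032 and the ordering note above can be sketched as follows; the list-of-segments representation is purely illustrative:

```python
def assemble_target_video(second_key_pictures, header, trailer):
    # The header and trailer always come from the preset second video,
    # whether or not the first video had its own; the second key
    # pictures keep the order of the first key pictures.
    return [header] + list(second_key_pictures) + [trailer]

preset_target = assemble_target_video(["kp1", "kp2", "kp3"],
                                      "header2", "trailer2")
```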
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S104 and S1031 to S1032, the server may obtain the title information of the preset second video and the trailer information of the preset second video; then, the server may process the plurality of first key pictures based on a style template of a preset second video to obtain a plurality of second key pictures, and combine the slice header information of the preset second video, the slice trailer information of the preset second video, and the plurality of second key pictures to generate a target video. In the embodiment of the disclosure, the server can process a plurality of first key pictures based on a uniform style, and can accurately and effectively generate each second key picture. Moreover, the server can combine the second key pictures with unified slice header information and slice trailer information based on each second key picture, so that the generation efficiency of the target video can be improved.
With reference to fig. 2, as shown in fig. 4, the video processing method provided by the embodiment of the present disclosure further includes S105-S106.
And S105, when the recommended parameter of the preset target video is smaller than or equal to the parameter threshold, the server determines whether the preset element meets the preset condition.
The preset element is an element included in a preset second key picture, the preset second key picture is a key picture included in the preset target video, the preset target video is one of the target videos, and the recommendation parameter is used for representing a recommendation effect of recommending the preset target video to the user account.
It should be understood that the server may determine a recommended parameter of one target video (e.g., a preset target video) of the plurality of target videos. When the recommendation parameter of the preset target video is smaller than or equal to the parameter threshold, it is indicated that the recommendation parameter of the preset target video is smaller, that is, the recommendation effect of the server for recommending the preset target video to the user account is poor. At this time, the server may determine whether a preset element (i.e., an element included in the preset second key picture) satisfies a preset condition.
In an optional implementation manner, the elements (i.e., the preset elements) included in the preset second key picture may include the article information of the article corresponding to the first video, the text information of the article corresponding to the first video, a preset control (e.g., a sticker control or a special effect control), and the like.
Optionally, when the recommendation parameter of the preset target video is greater than the parameter threshold, it is indicated that the recommendation parameter of the preset target video is greater, that is, the recommendation effect of recommending the preset target video to the user account by the server is better.
And S106, when the preset element does not meet the preset condition, the server processes the preset element according to a preset rule.
It should be understood that when the preset element does not satisfy the preset condition, it indicates that the quality of the preset element is low, that is, the recommended parameter of the preset target video is small due to the low quality of the preset element. At this time, the server may process the preset element according to the preset rule to improve the quality of the preset element, so that the preset element may satisfy the preset condition, and further, the recommendation parameter of the preset target video may be improved.
Optionally, when the preset element meets the preset condition, it is indicated that the quality of the preset element is higher, it may not be that the recommended parameter of the preset target video is lower due to the quality of the preset element, and at this time, it may be that the recommended parameter of the preset target video is lower due to other factors, for example, the type of the article corresponding to the first video is not matched with the type of the user account.
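The decision flow of S105-S106, together with the two optional branches above, might be sketched as below; the function name, threshold value, and return strings are illustrative and not part of the disclosure:

```python
def maybe_fix_element(recommend_param, threshold, element_ok, fix):
    # Above the threshold: recommendation effect is good, nothing to do.
    if recommend_param > threshold:
        return "good recommendation effect; nothing to do"
    # Element satisfies the preset condition: the low parameter must
    # come from another factor (e.g. item type vs. account mismatch).
    if element_ok:
        return "element fine; look for other causes"
    # Element fails the preset condition: process it per the preset rule.
    return fix()

outcome = maybe_fix_element(0.2, 0.5, element_ok=False,
                            fix=lambda: "element processed")
```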
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S105-S106, when the recommended parameter of the preset target video is less than or equal to the parameter threshold, it indicates that the recommended parameter of the preset target video is small, and at this time, the server may determine whether a preset element (i.e., an element included in the key picture included in the preset target video) satisfies a preset condition. When the preset element does not satisfy the preset condition, it indicates that the quality of the preset element is low, and the server may process the preset element. In the embodiment of the disclosure, when the quality of the preset element is low, the server may process the preset element according to the preset rule, and may promote the recommended parameter of the preset target video by improving the quality of the preset element, so as to determine a target video in which the user may be more interested.
With reference to fig. 4, as shown in fig. 5, the video processing method provided by the embodiment of the present disclosure further includes S107-S108.
S107, the server determines a first parameter of a preset second video, a second parameter of the preset second video and a third parameter of the preset second video.
The preset second video is one of the second videos, and the first parameter of the preset second video is used for representing the interest degree of the user account in the preset second video; the second parameter of the preset second video is used for representing the interest degree of the user account in the article corresponding to the preset second video; the third parameter of the preset second video is used for representing the similarity degree of the preset second video and other generated videos.
In an optional implementation manner, the first parameter of the preset second video may be a ratio between the playing times of the preset second video and the display times of the preset second video; the second parameter of the preset second video may be a ratio between the browsing times of the article corresponding to the preset second video and the playing times of the preset second video; the third parameter of the preset second video may be determined from the usage of the style template of the preset second video, and specifically, when the style template of the preset second video is used less, the third parameter of the preset second video is higher.
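Under the optional implementation above, the three parameters could be computed as simple ratios. The `max_usage` normalization and the inversion used for the third parameter are assumptions about how "less usage means a higher parameter" might be realized:

```python
def first_parameter(play_count, display_count):
    # Interest of the user account in the preset second video.
    return play_count / display_count

def second_parameter(item_browse_count, play_count):
    # Interest of the user account in the corresponding article.
    return item_browse_count / play_count

def third_parameter(template_usage, max_usage):
    # A less-used style template yields a higher third parameter,
    # i.e. the video is less similar to other generated videos.
    return 1 - template_usage / max_usage
```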
Optionally, the server may send the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video to the client, so that the client may display the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video.
Illustratively, table 1 below is an example of a parameter display of multiple types of videos provided by embodiments of the present disclosure. Specifically, the first parameter of type X is A%, the second parameter of type X is C%, and the third parameter of type X is E%; the first parameter of type Y is B%, the second parameter of type Y is D%, and the third parameter of type Y is F%.
TABLE 1

Video type | First parameter | Second parameter | Third parameter
Type X     | A%              | C%               | E%
Type Y     | B%              | D%               | F%
S108, the server determines the recommended parameters of the preset target video according to the first parameters of the preset second video, the second parameters of the preset second video and the third parameters of the preset second video.
It should be understood that the preset target video is a target video generated based on the preset second video.
In this embodiment of the disclosure, the server may determine the recommended parameter of the preset second video according to a first parameter of the preset second video, a second parameter of the preset second video, and a third parameter of the preset second video, and determine the recommended parameter of the preset second video as the recommended parameter of the preset target video.
In an optional implementation manner, the server may determine one of a first parameter of the preset second video, a second parameter of the preset second video, or a third parameter of the preset second video as the recommended parameter of the preset target video.
In another optional implementation manner, the server may also determine a sum of the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video as the recommended parameter of the preset target video.
In another optional implementation manner, the server may further determine, as the recommended parameter of the preset target video, an average value of the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video.
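The three alternative determinations in S108 (a single parameter, their sum, or their average) can be sketched as one hypothetical helper:

```python
def recommend_param(p1, p2, p3, mode="average"):
    # Three optional ways to derive the recommended parameter of the
    # preset target video from the preset second video's parameters.
    if mode == "single":
        return p1                    # any one of the three parameters
    if mode == "sum":
        return p1 + p2 + p3
    return (p1 + p2 + p3) / 3        # average value
```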
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as seen from S107-S108, the server may determine a first parameter of the preset second video, a second parameter of the preset second video, and a third parameter of the preset second video, and determine a recommended parameter of the preset target video according to the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video. In the embodiment of the disclosure, the first parameter of the preset second video, the second parameter of the preset second video and the third parameter of the preset second video are data which can be directly determined and belong to the preset second video, and the server determines the recommended parameter of the preset target video based on the data which can be directly determined and belong to the preset second video, that is, the recommended parameter of the preset target video can be predicted based on the existing data, and the recommended parameter of each target video can be accurately determined.
In an implementation manner of the embodiment of the present disclosure, the preset element includes the text information, the text information includes first text information and second text information, the preset condition includes a first preset condition, a second preset condition, and a third preset condition, the first preset condition is used to determine whether the text information is located within a first preset region, the second preset condition is used to determine whether a definition of the text information is greater than or equal to a definition threshold, and the third preset condition is used to determine whether a distance between the first text information and the second text information is less than or equal to a distance threshold. With reference to fig. 4, as shown in fig. 6, the video processing method provided by the embodiment of the present disclosure further includes S109.
S109, when the text information is located outside the first preset area, or the definition of the text information is smaller than the definition threshold, or the distance between the first text information and the second text information is greater than the distance threshold, the server determines that the preset element does not satisfy the preset condition.
In the embodiment of the present disclosure, the first preset area may be understood as a safe area of the text information, and when the text information is located outside the safe area, it indicates that the quality of the text information is low, that is, the server may determine that the preset element does not satisfy the preset condition.
Optionally, the first text information may be understood as a main text in the preset second key picture, where the main text is used to describe a type of an item corresponding to the first video, a name of the item corresponding to the first video, a function of the item corresponding to the first video, and the like. The second text information may be understood as secondary text in the preset second key picture, where the secondary text is used to describe price information of the item corresponding to the first video, activity details of the item corresponding to the first video, and the like.
In an alternative implementation, the size of the words included in the first text information is larger than the size of the words included in the second text information.
It can be understood that, when the preset element includes text information, the text information includes first text information and second text information, and the preset conditions include a first preset condition, a second preset condition and a third preset condition, the server processes the preset element according to a preset rule, that is, adjusts the position information of the first text information and the position information of the second text information, so that the text information (including the first text information and the second text information) is located within the first preset region, and adjusts the definition of the text information, so that the definition of the text information is greater than or equal to the definition threshold.
In an optional implementation manner, the server may determine the position information of the first preset region in the preset second key picture, and further determine whether the text information is located within the first preset region. For example, as shown in fig. 7, assuming that the first text information is article information, the second text information is discount information, and the text information (including the first text information and the second text information) is located outside the first preset area, the server determines that the preset element (specifically, the text information) does not satisfy the preset condition.
Optionally, the preset rule may include: a size range of words included in the text information and a number range of words included in the text information.
Optionally, the preset rule may include a preset correspondence, where the preset correspondence is used to represent a correspondence between the color of the font included in the text information and the color of the background corresponding to the text information. Specifically, the color of the font included in the text information may be complementary to the color of the background corresponding to the text information; for example, assuming that the color of the background corresponding to the text information is a light color, the server determines that the color of the font included in the text information is a dark color. Optionally, the server may further add a black outline to the font included in the text information to improve the display effect of the font included in the text information.
In an optional implementation manner, the text information in the preset second key picture and the background of the preset second key picture may belong to a whole. At this time, the server may layer the preset second key picture, specifically, divide the preset second key picture into at least two layers (including a text layer and a background layer), and then the server may process the text layer according to a preset rule, so as to enable the text information to satisfy a preset condition.
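The three text checks of S109 might look like the following sketch, where the text boxes and the safe area are axis-aligned rectangles `(x, y, w, h)` and `definition` stands in for a sharpness score the disclosure leaves unspecified:

```python
def inside_region(box, region):
    # True if the text box lies entirely within the safe area.
    bx, by, bw, bh = box
    rx, ry, rw, rh = region
    return (rx <= bx and ry <= by
            and bx + bw <= rx + rw and by + bh <= ry + rh)

def text_satisfies_conditions(first_box, second_box, safe_area,
                              definition, definition_threshold,
                              distance, distance_threshold):
    # Fails when any text lies outside the safe area, the text is too
    # blurry, or the two text blocks are too far apart.
    return (inside_region(first_box, safe_area)
            and inside_region(second_box, safe_area)
            and definition >= definition_threshold
            and distance <= distance_threshold)
```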
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S109, when the text information is located outside the first preset area, or the definition of the text information is smaller than the definition threshold, or the distance between the first text information and the second text information is greater than the distance threshold, it indicates that the quality of the text information (including the first text information and the second text information) is low, and the server may determine that the text information does not satisfy the first preset condition, the second preset condition, or the third preset condition, that is, the server may determine that the preset element does not satisfy the preset condition. In the embodiment of the disclosure, the server may determine whether the text information is located within the first preset region, whether the definition of the text information is greater than or equal to the definition threshold, and whether the distance between the first text information and the second text information is less than or equal to the distance threshold, so as to determine whether the preset element satisfies the preset condition, and may accurately and effectively determine whether the elements included in each key picture satisfy the preset condition. Further, the text information can be optimized in a targeted manner, specifically, the preset element (specifically, the text information) is processed according to the preset rule so that the preset element can satisfy the preset condition; in this way, whether the preset element needs to be optimized can be conveniently and quickly determined, and the recommended parameter of the target video can be improved from the perspective of element quality.
In an implementation manner of the embodiment of the present disclosure, the preset element includes a preset control, the preset control includes a first control and a second control, the preset condition includes a fourth preset condition and a fifth preset condition, the fourth preset condition is used to determine whether a definition of the preset control is greater than or equal to a definition threshold, and the fifth preset condition is used to determine whether an overlapping area exists between the first control and the second control. With reference to fig. 4, as shown in fig. 8, the video processing method provided by the embodiment of the present disclosure further includes S110.
S110, when the definition of the preset control is smaller than a definition threshold value or an overlapping area exists between the first control and the second control, the server determines that the preset element does not meet the preset condition.
In conjunction with the description of the above embodiment, it should be understood that the preset control may include a sticker control, a special effect control, and the like, and for example, the sticker control may exist in the preset second key picture in the form of a card, an arrow, and the like.
It can be understood that, when the definition of the preset control is smaller than the definition threshold, it is indicated that the preset control is relatively fuzzy, the quality of the preset element (specifically, the preset control) is relatively low, and at this time, the server may determine that the preset element (i.e., the preset control) does not satisfy the preset condition (specifically, the fourth preset condition). Further, the server may process the preset control according to the preset rule, specifically, improve the definition of the preset control.
In the embodiment of the present disclosure, when there is an overlapping area between the first control and the second control, it indicates that the first control occludes the second control (or the second control occludes the first control). At this time, the terminal used by the user account may not be able to completely display the first control and the second control, which affects the touch operations of the user account on the first control and the second control, so that the related functions of the first control and the second control cannot be triggered; the server may therefore determine that the preset element (specifically, the preset control) does not satisfy the preset condition (specifically, the fifth preset condition). At this time, the server may process the preset elements (i.e., the first control and the second control) according to a preset rule, specifically, adjust the position information of the first control or the position information of the second control so that there is no overlapping area between the first control and the second control.
For example, as shown in fig. 9, assuming that the first control is a control 201, the second control is a control 202, and an overlapping area exists between the control 201 and the control 202, the server determines that the first control and the second control do not satisfy the fifth preset condition, that is, the server determines that the preset element does not satisfy the preset condition.
Accordingly, as shown in fig. 10, the server may adjust the position information of the control 202, specifically, move the control 202 upward, so that there is no overlapping area between the adjusted control 202 and the control 201. At this time, the preset elements (specifically, the first control and the second control) satisfy the fifth preset condition.
Optionally, the first control may be an arrow control with a direction, and the second control may be a link control; at this time, the server may determine whether the arrow control points to the link control. When the arrow control does not point to the link control, the server may determine that the preset element (specifically, the arrow control and/or the link control) does not satisfy the preset condition.
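The fifth preset condition (an overlapping area between two controls) and the repositioning illustrated in fig. 10 can be sketched as an axis-aligned rectangle test plus an upward move; the coordinates and step size are illustrative:

```python
def overlaps(a, b):
    # Axis-aligned rectangles (x, y, w, h): true if the areas intersect.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax < bx + bw and bx < ax + aw
            and ay < by + bh and by < ay + ah)

def move_up_until_clear(moving, fixed, step=1):
    # Shift the moving control upward (decreasing y) until there is no
    # overlapping area, mirroring the adjustment of control 202.
    x, y, w, h = moving
    while overlaps((x, y, w, h), fixed):
        y -= step
    return (x, y, w, h)

control_201 = (0, 0, 10, 10)
control_202 = (5, 5, 10, 10)
adjusted = move_up_until_clear(control_202, control_201)
```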
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S110, when the definition of the preset control is smaller than the definition threshold, it indicates that the definition of the preset control is low. In the embodiment of the disclosure, based on the definition of the preset control and on whether an overlapping area exists between the controls included in the preset control (namely, the first control and the second control), the server can accurately and effectively determine whether the element (specifically, the control) included in each key picture satisfies the preset condition. Then, the preset control can be processed so that the processed preset control satisfies the preset conditions (including the fourth preset condition and the fifth preset condition); in this way, the preset element can be conveniently and quickly optimized, and the recommended parameter of the target video can be promoted from the perspective of element quality in a targeted manner.
With reference to fig. 2, as shown in fig. 11, the video processing method provided by the embodiment of the present disclosure further includes S111.
And S111, when the recommended parameter of the first video is smaller than or equal to the parameter threshold value, the server sends prompt information to the client.
Wherein, the prompt message comprises a preset reason.
In conjunction with the description of the above embodiment, it should be understood that when the recommended parameter of the first video is less than or equal to the parameter threshold, it indicates that the recommended parameter of the first video is smaller, i.e., the user account has less interest in the first video. At this time, the server determines a reason (i.e., a preset reason) causing the recommended parameter of the first video to be smaller, and transmits prompt information including the preset reason to the client so that the client can display the prompt information.
Optionally, the preset reason may include poor quality of an element (i.e., the text information and/or the preset control), a mismatch between the type of the item corresponding to the first video and the type of the user account, and the like.
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S111, when the recommended parameter of the first video is less than or equal to the parameter threshold, it indicates that the recommended parameter of the first video is smaller, that is, the user account has a smaller interest level in the first video. At this time, the server may send a prompt message to the client, where the prompt message includes a preset reason. In the embodiment of the disclosure, when the user account has a small interest level in a certain video (for example, a first video), the server may determine a reason (that is, a preset reason) causing the small interest level, and send a prompt message including the preset reason to the client, so that the reason causing the small interest level can be conveniently and quickly displayed to the user, and further, the user account may determine how to optimize the first video based on the reason, and the like.
It is understood that, in practical implementation, the electronic device according to the embodiments of the present disclosure may include one or more hardware structures and/or software modules for implementing the corresponding video processing methods, and these hardware structures and/or software modules may constitute an electronic device/server. Those of skill in the art will readily appreciate that the present disclosure can be implemented in hardware or a combination of hardware and computer software for implementing the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Based on such understanding, the embodiment of the present disclosure also provides a video processing apparatus, and fig. 12 shows a schematic structural diagram of the video processing apparatus provided by the embodiment of the present disclosure. As shown in fig. 12, the video processing apparatus 30 may include: a processing module 301 and an acquisition module 302.
The processing module 301 is configured to perform preset processing on a first video to obtain a plurality of first key pictures, where one first key picture includes at least one of item information of an item corresponding to the first video and text information of the item corresponding to the first video.
An obtaining module 302 configured to obtain style templates of a plurality of second videos, where types of items corresponding to the plurality of second videos are the same as types of items corresponding to the first video.
The processing module 301 is further configured to process the plurality of first key pictures based on a style template of each of the plurality of second videos to generate a plurality of target videos, wherein one style template corresponds to one target video.
Optionally, the preset second video is one of the second videos, and the preset target video is a target video corresponding to the style template of the preset second video.
The obtaining module 302 is further configured to obtain the slice header information of the preset second video and the slice trailer information of the preset second video.
The processing module 301 is specifically configured to process the plurality of first key pictures based on the style template of the preset second video to obtain a plurality of second key pictures, where the style template of each second key picture in the plurality of second key pictures is the same as the style template of the preset second video.
The processing module 301 is further specifically configured to combine the slice header information of the preset second video, the slice trailer information of the preset second video, and the plurality of second key pictures to generate the preset target video.
Optionally, the video processing apparatus 30 further comprises a determining module 303.
The determining module 303 is configured to determine whether a preset element meets a preset condition when a recommendation parameter of a preset target video is less than or equal to a parameter threshold, where the preset element is an element included in a preset second key picture, the preset second key picture is a key picture included in the preset target video, the preset target video is one of the plurality of target videos, and the recommendation parameter is used for representing a recommendation effect of recommending the preset target video to a user account.
The processing module 301 is further configured to process the preset element according to a preset rule when the preset element does not satisfy the preset condition.
Optionally, the determining module 303 is further configured to determine a first parameter of a preset second video, a second parameter of the preset second video, and a third parameter of the preset second video, where the preset second video is one of the plurality of second videos, and the first parameter of the preset second video is used to represent the interest level of the user account in the preset second video; the second parameter of the preset second video is used for representing the interest degree of the user account in the article corresponding to the preset second video; the third parameter of the preset second video is used for representing the similarity degree of the preset second video and other generated videos.
The determining module 303 is further configured to determine the recommendation parameter of the preset target video according to the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video.
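The disclosure only states that the recommendation parameter is determined "according to" the three parameters; it does not give a formula. One plausible sketch is a weighted sum in which similarity to already-generated videos counts against the recommendation (the weights are arbitrary assumptions):

```python
def recommendation_parameter(interest_in_video: float,
                             interest_in_item: float,
                             similarity_to_others: float,
                             weights=(0.4, 0.4, 0.2)) -> float:
    """Hypothetical combination of the first, second, and third parameters
    of the preset second video. Inputs are assumed normalized to [0, 1];
    higher similarity to other generated videos lowers the result."""
    w1, w2, w3 = weights
    return w1 * interest_in_video + w2 * interest_in_item + w3 * (1.0 - similarity_to_others)
```

The resulting value would then be compared against the parameter threshold described above to decide whether the preset elements need checking.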
Optionally, the preset element includes the text information, the text information includes first text information and second text information, the preset condition includes a first preset condition, a second preset condition and a third preset condition, the first preset condition is used to determine whether the text information is located within a first preset region, the second preset condition is used to determine whether the definition of the text information is greater than or equal to a definition threshold, and the third preset condition is used to determine whether a distance between the first text information and the second text information is less than or equal to a distance threshold.
The determining module 303 is further configured to determine that the preset element does not satisfy the preset condition when the text information is located outside the first preset region, or the definition of the text information is smaller than the definition threshold, or the distance between the first text information and the second text information is larger than the distance threshold.
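The three text-information conditions can be sketched as below. The coordinate representation, the unit square as the first preset region, and the concrete threshold values are assumptions for illustration; the logic (inside-region, definition at or above threshold, inter-text distance at or below threshold) follows the description above.

```python
from dataclasses import dataclass

@dataclass
class TextInfo:
    x: float
    y: float
    definition: float  # sharpness score; higher means clearer

def text_conditions_met(first: TextInfo, second: TextInfo,
                        region=(0.0, 0.0, 1.0, 1.0),
                        definition_threshold=0.5,
                        distance_threshold=0.3) -> bool:
    """First condition: text inside the first preset region; second: definition
    at or above the threshold; third: first/second text within the distance
    threshold. Returns False if any condition fails."""
    x0, y0, x1, y1 = region
    inside = all(x0 <= t.x <= x1 and y0 <= t.y <= y1 for t in (first, second))
    clear = all(t.definition >= definition_threshold for t in (first, second))
    distance = ((first.x - second.x) ** 2 + (first.y - second.y) ** 2) ** 0.5
    return inside and clear and distance <= distance_threshold
```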
Optionally, the preset element includes a preset control, the preset control includes a first control and a second control, the preset condition includes a fourth preset condition and a fifth preset condition, the fourth preset condition is used to determine whether the definition of the preset control is greater than or equal to a definition threshold, and the fifth preset condition is used to determine whether an overlapping area exists between the first control and the second control.
The determining module 303 is further configured to determine that the preset element does not satisfy the preset condition when the definition of the preset control is smaller than the definition threshold or an overlapping area exists between the first control and the second control.
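The control checks reduce to a definition threshold plus an axis-aligned rectangle intersection test. The rectangle model and threshold value are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Control:
    left: float
    top: float
    right: float
    bottom: float
    definition: float

def controls_overlap(a: Control, b: Control) -> bool:
    """Standard axis-aligned rectangle intersection test."""
    return (a.left < b.right and b.left < a.right
            and a.top < b.bottom and b.top < a.bottom)

def control_conditions_met(a: Control, b: Control,
                           definition_threshold=0.5) -> bool:
    """Fourth condition: both controls meet the definition threshold;
    fifth condition: no overlapping area between the first and second control."""
    clear = (a.definition >= definition_threshold
             and b.definition >= definition_threshold)
    return clear and not controls_overlap(a, b)
```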
Optionally, the video processing apparatus 30 further comprises a sending module 304.
A sending module 304, configured to send a prompt message to the client when the recommendation parameter of the first video is less than or equal to the parameter threshold, where the prompt message includes a preset reason.
As described above, the present disclosure may divide the video processing apparatus into functional modules according to the above method example. An integrated module may be implemented in the form of hardware or in the form of a software functional module. It should further be noted that the module division in the embodiments of the present disclosure is schematic and reflects only a logical division of functions; other division manners are possible in actual implementation. For example, each function may be assigned its own functional module, or two or more functions may be integrated into one processing module.
With regard to the video processing apparatus in the foregoing embodiment, the specific manner in which each module performs operations and the beneficial effects thereof have been described in detail in the foregoing method embodiment, and are not described herein again.
Fig. 13 is a schematic structural diagram of another video processing apparatus provided by the present disclosure. As shown in fig. 13, the video processing device 40 may include at least one processor 401 and a memory 403 for storing processor-executable instructions. Wherein the processor 401 is configured to execute instructions in the memory 403 to implement the video processing method in the above-described embodiments.
In addition, the video processing device 40 may also include a communication bus 402 and at least one communication interface 404.
Processor 401 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the disclosed aspects.
Communication bus 402 may include a path that transfers information between the above components.
The communication interface 404 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 403 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random-access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), optical disk storage (including a compact disk read-only memory (CD-ROM), laser disk, digital versatile disk, Blu-ray disk, and the like), magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and connected to the processor by a bus, or may be integrated with the processor.
The memory 403 is used for storing instructions for executing the disclosed solution, and is controlled by the processor 401. The processor 401 is configured to execute instructions stored in the memory 403 to implement the functions of the disclosed method.
In particular implementations, processor 401 may include one or more CPUs, such as CPU0 and CPU1 in fig. 13, as one embodiment.
In particular implementations, video processing device 40 may include multiple processors, such as processor 401 and processor 407 in fig. 13, as one embodiment. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In one implementation, the video processing apparatus 40 may further include an output device 405 and an input device 406. An output device 405 is in communication with the processor 401 and may display information in a variety of ways. For example, the output device 405 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 406 is in communication with the processor 401 and can accept user input in a variety of ways. For example, the input device 406 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
Those skilled in the art will appreciate that the configuration shown in fig. 13 does not constitute a limitation of the video processing apparatus 40, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In addition, the present disclosure also provides a computer-readable storage medium comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the video processing method provided as the above embodiment.
In addition, the present disclosure also provides a computer program product comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the video processing method as provided in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A video processing method, comprising:
presetting a first video to obtain a plurality of first key pictures, wherein one first key picture comprises at least one of item information of an item corresponding to the first video and text information of the item corresponding to the first video;
acquiring style templates of a plurality of second videos, wherein the types of articles corresponding to the second videos are the same as the types of articles corresponding to the first videos;
and processing the plurality of first key pictures based on a style template of each of the plurality of second videos to generate a plurality of target videos, wherein one style template corresponds to one target video.
2. The method of claim 1, wherein the preset second video is one of the plurality of second videos, and the preset target video is a target video corresponding to a style template of the preset second video, and the method further comprises:
acquiring the slice header information of the preset second video and the slice trailer information of the preset second video;
processing the plurality of first key pictures based on the style template of the preset second video to generate the preset target video, including:
processing the plurality of first key pictures based on the style template of the preset second video to obtain a plurality of second key pictures, wherein the style template of each second key picture in the plurality of second key pictures is the same as the style template of the preset second video;
and combining the slice header information of the preset second video, the slice trailer information of the preset second video and the plurality of second key pictures to generate the preset target video.
3. The video processing method of claim 1, wherein the method further comprises:
when the recommendation parameter of a preset target video is smaller than or equal to a parameter threshold value, determining whether a preset element meets a preset condition, wherein the preset element is an element included in a preset second key picture, the preset second key picture is a key picture included in the preset target video, the preset target video is one of the target videos, and the recommendation parameter is used for representing a recommendation effect of recommending the preset target video to a user account;
and when the preset element does not meet the preset condition, processing the preset element according to a preset rule.
4. The video processing method of claim 3, wherein the method further comprises:
determining a first parameter of a preset second video, a second parameter of the preset second video and a third parameter of the preset second video, wherein the preset second video is one of the plurality of second videos, and the first parameter of the preset second video is used for representing the interest degree of the user account in the preset second video; the second parameter of the preset second video is used for representing the interest degree of the user account in the article corresponding to the preset second video; the third parameter of the preset second video is used for representing the similarity degree of the preset second video and other generated videos;
and determining the recommendation parameter of the preset target video according to the first parameter of the preset second video, the second parameter of the preset second video, and the third parameter of the preset second video.
5. The video processing method according to claim 3, wherein the preset element includes the text information, the text information includes first text information and second text information, the preset conditions include a first preset condition, a second preset condition and a third preset condition, the first preset condition is used for determining whether the text information is located within a first preset area, the second preset condition is used for determining whether the definition of the text information is greater than or equal to a definition threshold, the third preset condition is used for determining whether the distance between the first text information and the second text information is less than or equal to a distance threshold, and the method further comprises:
and when the text information is located outside the first preset area, or the definition of the text information is smaller than the definition threshold, or the distance between the first text information and the second text information is larger than the distance threshold, determining that the preset element does not meet the preset condition.
6. The video processing method according to claim 3, wherein the preset element comprises a preset control, the preset control comprises a first control and a second control, the preset condition comprises a fourth preset condition and a fifth preset condition, the fourth preset condition is used for determining whether the definition of the preset control is greater than or equal to a definition threshold, the fifth preset condition is used for determining whether an overlapping area exists between the first control and the second control, and the method further comprises:
and when the definition of the preset control is smaller than the definition threshold or an overlapping area exists between the first control and the second control, determining that the preset element does not meet the preset condition.
7. The video processing method according to any of claims 1-6, wherein the method further comprises:
and when the recommendation parameter of the first video is smaller than or equal to a parameter threshold, sending prompt information to a client, wherein the prompt information comprises a preset reason.
8. A video processing apparatus, comprising: the device comprises a processing module and an acquisition module;
the processing module is configured to perform preset processing on a first video to obtain a plurality of first key pictures, wherein one first key picture comprises at least one of item information of an item corresponding to the first video and text information of the item corresponding to the first video;
the obtaining module is configured to obtain style templates of a plurality of second videos, wherein types of articles corresponding to the plurality of second videos are the same as the type of article corresponding to the first video;
the processing module is further configured to process the plurality of first key pictures based on a style template of each of the plurality of second videos to generate a plurality of target videos, wherein one style template corresponds to one target video.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory configured to store the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon, wherein the instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any of claims 1-7.
CN202211113839.7A 2022-09-14 2022-09-14 Video processing method and device, electronic equipment and storage medium Active CN115209232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211113839.7A CN115209232B (en) 2022-09-14 2022-09-14 Video processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115209232A true CN115209232A (en) 2022-10-18
CN115209232B CN115209232B (en) 2023-01-20

Family

ID=83573372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211113839.7A Active CN115209232B (en) 2022-09-14 2022-09-14 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115209232B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675420A (en) * 2019-08-22 2020-01-10 华为技术有限公司 Image processing method and electronic equipment
US20200410034A1 (en) * 2019-06-26 2020-12-31 Wangsu Science & Technology Co., Ltd. Video generating method, apparatus, server, and storage medium
CN113473182A (en) * 2021-09-06 2021-10-01 腾讯科技(深圳)有限公司 Video generation method and device, computer equipment and storage medium
CN113556484A (en) * 2021-07-16 2021-10-26 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN113613059A (en) * 2021-07-30 2021-11-05 杭州时趣信息技术有限公司 Short-cast video processing method, device and equipment
WO2021238931A1 (en) * 2020-05-26 2021-12-02 北京字节跳动网络技术有限公司 Video watermark processing method and apparatus, information transmission method, electronic device and storage medium
CN113849686A (en) * 2021-09-13 2021-12-28 北京达佳互联信息技术有限公司 Video data acquisition method and device, electronic equipment and storage medium
CN114827752A (en) * 2022-04-25 2022-07-29 中国平安人寿保险股份有限公司 Video generation method, video generation system, electronic device, and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU HAITAO et al.: "Adversarial Video Generation Method Based on Multimodal Input", Journal of Computer Research and Development *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant