CN113473178B - Video processing method, video processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113473178B
CN113473178B
Authority
CN
China
Prior art keywords
video
target
new
target video
content
Prior art date
Legal status
Active
Application number
CN202110734130.8A
Other languages
Chinese (zh)
Other versions
CN113473178A (en)
Inventor
范爽
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110734130.8A priority Critical patent/CN113473178B/en
Publication of CN113473178A publication Critical patent/CN113473178A/en
Application granted granted Critical
Publication of CN113473178B publication Critical patent/CN113473178B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/232 Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/42653 Internal components of the client for processing graphics
    • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The disclosure provides a video processing method, a video processing device, an electronic device, and a computer-readable storage medium, relating to the field of internet technology and in particular to the field of video processing. The specific implementation scheme is as follows: determining a target video; acquiring at least one video material matched with the target video; determining a target video material from the at least one video material based on a first input, and synthesizing the target video material with the target video to generate a new video; and replying to the target video with the new video as the reply content of the target video.

Description

Video processing method, video processing device, electronic equipment and computer readable storage medium
Technical Field
The disclosure relates to the field of internet technology, in particular to the field of video processing, and specifically to a video processing method, a video processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of internet technology, producing and browsing videos has become a popular form of entertainment. Short videos are increasingly favored by users because of their short duration, simple production process, and other advantages. Currently, users usually make short videos by shooting, clipping, and similar means, which requires them to collect and organize video material on their own in order to create a video.
Disclosure of Invention
The present disclosure provides a video processing method, apparatus, electronic device, and computer-readable storage medium.
According to an aspect of the present disclosure, there is provided a video processing method including:
determining a target video;
acquiring at least one video material matched with the target video;
determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
and replying to the target video by taking the new video as the reply content of the target video.
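The four steps above can be sketched in plain Python as follows. This is a toy illustration of the flow, not the patented implementation; every function and field name (tag-based matching, `sources`, etc.) is a hypothetical stand-in:

```python
# Sketch of the four-step flow: determine target, match materials,
# synthesize a new video, reply with it. All names are illustrative.

def match_materials(target_video, library):
    """Return materials from the library that share a tag with the target."""
    return [m for m in library if set(m["tags"]) & set(target_video["tags"])]

def synthesize(target_video, material):
    """Combine the target video with the chosen material into a new video."""
    return {
        "tags": sorted(set(target_video["tags"]) | set(material["tags"])),
        "sources": [target_video["id"], material["id"]],
    }

def reply_with_video(target_video, new_video, replies):
    """Attach the new video as reply content of the target video."""
    replies.setdefault(target_video["id"], []).append(new_video)

# Usage: a sunset clip matched against a small material library.
target = {"id": "v1", "tags": ["sunset", "beach"]}       # step S101
library = [
    {"id": "m1", "tags": ["sunset", "music"]},
    {"id": "m2", "tags": ["cartoon"]},
]
candidates = match_materials(target, library)            # step S102
chosen = candidates[0]                                   # step S103 (first input)
new_video = synthesize(target, chosen)
replies = {}
reply_with_video(target, new_video, replies)             # step S104
```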
According to another aspect of the present disclosure, there is provided a video processing apparatus including:
the determining module is used for determining a target video;
the acquisition module is used for acquiring at least one video material matched with the target video;
the synthesizing module is used for determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
and the reply module is used for replying to the target video by taking the new video as the reply content of the target video.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described in one aspect above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the above aspect.
According to a further aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to the above aspect.
According to the present disclosure, on the basis of the target video, a new video can be quickly synthesized from a target video material determined from the recommended video materials, so the user does not need to spend a great deal of time and effort acquiring video material by shooting or searching, making the creation of a new video more convenient. In addition, the new video can be used to reply to the target video, which provides a new interaction mode: video publishers can communicate with each other through videos, giving video users a better experience.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a video processing method provided in accordance with an embodiment of the present disclosure;
FIG. 2a is one of the interface schematic diagrams of an electronic device implementing the video processing method provided by embodiments of the present disclosure;
FIG. 2b is a second interface schematic diagram of an electronic device implementing the video processing method provided by embodiments of the present disclosure;
fig. 3 is a schematic view of a video processing method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of another video processing method provided in accordance with another embodiment of the present disclosure;
fig. 5 is a block diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a video processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides a video processing method. Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the disclosure, as shown in fig. 1, the video processing method includes the following steps:
step S101, determining a target video.
The video processing method provided by the disclosure may be applied to electronic devices such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a smart wearable device.
Optionally, the target video may be the video currently being played by the electronic device; or it may be a video determined based on a user operation, for example, a video selected by the user from videos to be played; alternatively, the target video may be a video that the electronic device automatically determines based on a certain setting, for example, a video whose content includes a specified person.
Step S102, at least one video material matched with the target video is obtained.
The video material may include video, text, pictures, audio, and the like. The electronic device may acquire at least one video material matched with the target video from a video material library; the video material library may be pre-stored in the electronic device, or it may be stored on a preset server from which the electronic device acquires video material online.
In the embodiment of the disclosure, acquiring at least one video material matched with the target video can be realized based on the video content of the target video. For example, if the video content of the target video includes a specific building, at least one video material matching that building may be obtained from the video material library; for example, a video or picture that also includes the building may be used as matching video material. For another example, if the video content of the target video includes children, materials such as children's songs, cartoon videos, and cartoon pictures in the video material library may be used as video materials matching the target video.
Optionally, the step S102 may include:
extracting video features of the target video based on content understanding technology;
at least one video material matching the video feature is obtained.
The specific implementation of the content understanding technology may follow the related art and is not repeated in this disclosure. After determining a target video, the electronic device can apply content understanding to the video content of the target video to obtain its video features; a video feature may be a video tag, a video author, a video key frame, video background music, key video content, etc. of the target video.
For example, the video feature may be the key video content of the target video, where the key video content may be extracted based on content understanding technology; for instance, video content whose number of occurrences in the target video is greater than a preset number may be determined to be key video content. Assuming the key video content includes a sunset, video material matching the sunset, such as videos, audio, and pictures that include a sunset, may be obtained from the video material library. Alternatively, the video feature may be the video author, and video material may be obtained from videos published by that author, for example by using audio, pictures, etc. that the author has previously used as matching video material. Of course, these video features and matching video materials are merely examples; other possible forms exist that this disclosure does not enumerate.
In the embodiment of the disclosure, the video features of the target video are extracted through content understanding technology, and then at least one matching video material is acquired based on those features. The obtained video material therefore fits the target video more closely, the material can be recommended to the user quickly, and the user does not need to spend time and effort shooting or searching for material, making the creation of a new video more convenient.
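A toy stand-in for this feature-extraction-then-matching step is sketched below, treating a video as a sequence of per-frame labels and calling a label "key content" when its occurrence count exceeds a preset number, mirroring the description above. The frame-label representation and all names are illustrative assumptions, not the disclosure's actual content-understanding model:

```python
from collections import Counter

# Toy content-understanding step: a video is a list of per-frame label
# sets; a label is "key content" when it appears in more than
# `min_count` frames (the "preset number" from the description).
def extract_key_content(frame_labels, min_count=2):
    counts = Counter(label for frame in frame_labels for label in frame)
    return {label for label, n in counts.items() if n > min_count}

def materials_matching(features, library):
    """Return library materials that share at least one feature."""
    return [m for m in library if features & set(m["tags"])]

# Usage: "sunset" appears in 3 of 4 frames, so it is key content.
frames = [{"sunset", "sea"}, {"sunset"}, {"sunset", "boat"}, {"sea"}]
features = extract_key_content(frames, min_count=2)
library = [{"id": "a1", "tags": ["sunset", "audio"]},
           {"id": "p1", "tags": ["mountain"]}]
matched = materials_matching(features, library)
```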
Step S103, determining a target video material from the at least one video material based on a first input, and combining the target video material with the target video to generate a new video.
Alternatively, the first input may be a user input operation received by the electronic device. For example, while the target video is in a playing state, the at least one video material may be displayed in the display interface of the electronic device, with the video materials and the target video belonging to different display layers. Alternatively, a specific icon may be displayed in the display interface, and when the user clicks the icon, the at least one video material is displayed. The user may then select a target video material from the displayed materials; for example, when a click input by the user on a certain video material is received, that material is determined to be the target video material. Of course, the first input may also take other forms, for example a specific sliding track or a voice input.
In the embodiment of the disclosure, after the target video material is determined, it is synthesized with the target video to generate a new video. That is, the new video is generated on the basis of the target video combined with the target video material.
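The synthesis step can be sketched as follows, with videos represented as lists of frame layers. The two modes shown, overlaying a material on every frame and splicing a clip after the target, are hypothetical illustrations of "combining" consistent with the splicing operations in the classification, not the patent's actual editing pipeline:

```python
# Illustrative synthesis: the new video keeps the target's frames and
# either layers the material over each frame (e.g. a picture/sticker)
# or splices it on as an extra segment (e.g. a clip).
def combine(target_frames, material, mode="overlay"):
    if mode == "overlay":          # material shown over every frame
        return [frame + [material] for frame in target_frames]
    if mode == "append":           # material spliced after the target
        return target_frames + [[material]]
    raise ValueError(f"unknown mode: {mode}")

# Usage: overlay a sticker material onto a two-frame target video.
target_frames = [["scene-1"], ["scene-2"]]
new_video = combine(target_frames, "sunset-sticker")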
Step S104, using the new video as the reply content of the target video, reply to the target video.
In the embodiment of the disclosure, the target video may be a video with a reply function, where the reply function may be implemented by leaving a message on the target video, posting a comment, posting a bullet-screen comment, and the like.
After the target video and the target video material are synthesized into a new video, the new video is used as the reply content of the target video to reply to it. For example, the new video may be posted in the form of a comment so that the publisher of the target video can see it; or the new video may be sent to the publisher of the target video as a new message, so that the publisher can receive and view it.
It should be noted that when the new video is posted as the reply content of the target video, the playing end of the target video can display the new video promptly, so that the publisher of the target video sees it in time. Further, when the new video is displayed on the playing end of the target video, that playing end may also apply the steps of this disclosure: taking the new video as the target video, obtaining at least one video material matched with it, determining a target video material from among them, and synthesizing the determined material with the new video to generate a further new video, which in turn can be posted as reply content. In this way, the video publisher and the viewer can interact and communicate through video creation.
In the embodiment of the disclosure, after a target video is determined, at least one video material matched with it is obtained; a target video material is then determined from the at least one video material and synthesized with the target video to generate a new video; and the new video is used as the reply content of the target video to reply to it. On the basis of the target video, a new video can thus be quickly synthesized from a target video material chosen from the recommended materials, without the user spending a great deal of time and effort acquiring material by shooting or searching, making the creation of new videos more convenient. In addition, because the new video replies to the target video, a new interaction mode is provided: video publishers can communicate with each other through videos, giving video users a better experience.
Optionally, the step S104 may include:
performing content auditing on the new video;
and under the condition that the new video passes the auditing, taking the new video as the reply content of the target video, and replying the target video.
In the embodiment of the disclosure, when a new video is generated from the target video and the target video material, it is not posted as a reply directly; the new video must first undergo a content audit to ensure that it is safe and compliant.
Auditing the content of the new video may mean checking whether its video content meets preset requirements, for example that the content is not politically sensitive, not uncivil, and so on. If the video content of the new video meets the preset requirements, the new video is determined to have passed the audit, and the electronic device then uses it as the reply content of the target video to reply to the target video.
In the embodiment of the disclosure, the generated new video undergoes a content audit, so that only a new video that passes the audit can be posted as a reply to the target video, thereby safeguarding the video and network environment and ensuring safe operation of the video network.
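The audit-then-reply gate can be sketched as below. The label-based compliance check is a deliberately simplified stand-in; real content auditing would use classifiers or human review, and the banned-label set and field names are hypothetical:

```python
# Sketch of the audit gate: a new video is posted as a reply only
# when its content labels pass a (toy) compliance check.
BANNED = {"politically-sensitive", "uncivil"}

def passes_audit(new_video):
    return not (set(new_video["content_labels"]) & BANNED)

def reply_if_compliant(target_id, new_video, replies):
    """Post the new video as a reply only if it passes the audit."""
    if passes_audit(new_video):
        replies.setdefault(target_id, []).append(new_video)
        return True
    return False

# Usage: one compliant and one non-compliant new video.
replies = {}
ok_video = {"content_labels": ["sunset", "music"]}
bad_video = {"content_labels": ["uncivil"]}
posted = reply_if_compliant("v1", ok_video, replies)
rejected = reply_if_compliant("v1", bad_video, replies)
```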
Further, after content review of the new video, the method further comprises:
if the new video passes the audit, performing video content screening on the new video;
and using the video content screened from the new video as candidate video material, and storing the candidate video material in a video material library.
It can be understood that if the new video passes the audit, the generated new video meets the preset requirements and may be posted as the reply content of the target video; in this case, video content screening may be performed on the new video. For example, video features of the new video may be extracted based on content understanding technology and used as the screened video content; or the new video may be screened based on video quality, for example by selecting content whose bit rate is greater than a preset bit rate; alternatively, the new video may be screened for content that includes target text, images, audio, etc.
In the embodiment of the disclosure, after the approved new video is screened, the video content screened from it is used as candidate video material and stored in the video material library. The screened content can later be selected as material for another video and synthesized with it to generate yet another new video. In this way, the size of the video material library can be effectively expanded, the sources of its materials become more flexible and convenient, the generated new videos are put to effective use, and the user does not need to collect video material manually to expand the library.
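The bit-rate-based screening route mentioned above can be sketched as follows. The segment representation, threshold value, and field names are illustrative assumptions:

```python
# Sketch of the screening step: from an approved new video, keep the
# segments whose bit rate exceeds a threshold and store them in the
# material library as candidate materials.
def screen_segments(segments, min_bitrate=2000):
    return [s for s in segments if s["bitrate"] > min_bitrate]

def store_candidates(segments, library):
    for s in segments:
        library.append({"id": s["id"], "tags": s["tags"], "source": "screened"})

# Usage: only the high-bit-rate segment becomes a candidate material.
library = []
segments = [{"id": "s1", "bitrate": 3000, "tags": ["sunset"]},
            {"id": "s2", "bitrate": 1500, "tags": ["noise"]}]
store_candidates(screen_segments(segments), library)
```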
Optionally, in an embodiment of the disclosure, the at least one video material includes any one of the following:
video material in the video material library whose matching degree with the target video is greater than a first preset value;
video material in the video material library that has been selected for synthesis with a video to generate a new video more frequently than a second preset value;
video material in the video material library whose video bit rate is greater than a third preset value;
and video material matched with the target video, obtained from the video material library based on the user portrait of the current user.
In the embodiment of the disclosure, after the target video is determined, at least one matching video material is acquired, and this can be done in several ways. For example, video material in the library whose matching degree with the target video is greater than the first preset value may be used as matching material; the matching degree may refer to the similarity of video content, or to the correlation with the target video. For instance, if the target video is a game video, video material that includes a cartoon character may be considered highly correlated with it. Of course, the matching degree between a video material and the target video may be determined in other ways, which this disclosure does not detail.
The matching degree between a video material and the target video can be determined through a machine learning algorithm together with a partial randomization strategy, for example using a pre-trained material matching-degree model; the model may be trained with methods from the related art, which are not detailed here.
Alternatively, video material in the library that has been selected for synthesis with a video to generate a new video more frequently than the second preset value may be determined to be material matching the target video. For example, if a video material has been selected from the library more than 5 times to synthesize new videos, it may be considered more popular or more generally applicable; if the second preset value is 5, that material may be determined to match the target video.
Alternatively, video material in the library whose video bit rate is greater than the third preset value may be used as material matching the target video. The video bit rate can represent video quality: material whose bit rate exceeds a certain preset value can be regarded as material of better quality, and determining it as material matching the target video helps ensure that the generated new video also has good quality.
Alternatively, video material matching the target video can be obtained from the video material library based on the user portrait of the current user, where the current user is the viewer watching the target video. In the embodiment of the disclosure, the target video may be played in a specific video application, and the current user is the user using or running that application; the electronic device can thus acquire the current user's portrait through the application, including the user's gender, age, browsing history, and so on. For example, if the current user is over 50 years old, audio aimed at middle-aged and elderly listeners, health-preservation videos, and the like in the material library can be used as material matching the target video; or, if the current user's browsing history consists mainly of children's cartoons, children's songs, cartoon pictures, and the like in the library may be determined to be matching material. In this way, the video material matched with the target video is determined from the current user's portrait, so the determined material fits the current user more closely and offers a better selection from which to choose the target video material.
In the embodiment of the disclosure, after the target video is determined, at least one matching video material can be acquired in various ways, making the acquisition of video material richer and more flexible.
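The four listed selection routes can be combined in a single selector, sketched below. The thresholds stand in for the "preset values", and the scoring function and user-interest set are hypothetical stand-ins for the matching-degree model and user portrait:

```python
# Sketch combining the four selection routes: match score, reuse
# frequency, bit rate, and user-portrait fit. A material qualifies
# if it clears any one of the (illustrative) thresholds.
def select_materials(library, match_score, user_interests,
                     min_score=0.7, min_uses=5, min_bitrate=2000):
    selected = []
    for m in library:
        if (match_score(m) > min_score            # matching degree
                or m["times_used"] > min_uses     # reuse frequency
                or m["bitrate"] > min_bitrate     # video quality
                or set(m["tags"]) & user_interests):  # user portrait
            selected.append(m)
    return selected

# Usage: m1 matches by score, m2 by reuse count, m3 by nothing.
library = [
    {"id": "m1", "tags": ["sunset"], "times_used": 1, "bitrate": 1000},
    {"id": "m2", "tags": ["cartoon"], "times_used": 8, "bitrate": 1000},
    {"id": "m3", "tags": ["news"], "times_used": 0, "bitrate": 1000},
]
picked = select_materials(
    library,
    match_score=lambda m: 0.9 if "sunset" in m["tags"] else 0.1,
    user_interests={"cartoon"},
)
```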
In the embodiments of the present disclosure, the target video may be determined in various ways, for example by determining the video currently being played as the target video. Alternatively, determining the target video may include:
acquiring a playing video in a playing state;
and if a second input acting on the playing video is received, determining the playing video as a target video.
It will be appreciated that the method provided by the disclosure applies to electronic devices, which may play videos through a video application. A playing video in the playing state is not determined to be the target video until a second input acting on it is received. Optionally, the second input may be a specific input operation, such as a preset sliding track or a preset voice input, or a specific operation performed on a preset virtual key of the playing-video interface.
As shown in fig. 2a, in one scenario, for a video playing on the electronic device, when user A has not yet replied to the playing video and a single-click input by the user on a button above the playing-video interface is received, the playing video is determined to be the target video. Alternatively, as shown in fig. 2b, in another scenario, the current user A and the publisher B of the playing video are already in a battle state, that is, the current user has replied to the playing video or its publisher has replied to the current user; in this case, if a click input by the current user on a "reply" button in the video interface is received, the playing video is determined to be the target video.
It can be understood that, for a playing video in the playing state, the playing video is determined as the target video only when the second input acting on the playing video is received from the user, which avoids target videos being determined through user misoperation and makes the determination of the target video more accurate.
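As a concrete illustration of the determination logic above, the following minimal sketch (all class, method, and input names here are assumptions for illustration, not taken from the disclosure) shows a playing video being promoted to the target video only upon a recognized second input:

```python
# Sketch: a playing video only becomes the "target video" after an
# explicit second input (e.g. a tap on a preset key, a preset sliding
# track, or a preset voice command), guarding against misoperation.

class Player:
    def __init__(self):
        self.playing_video = None   # video currently in the playing state
        self.target_video = None    # set only after an explicit second input

    def play(self, video_id):
        self.playing_video = video_id

    def on_second_input(self, input_kind):
        # Only a preset kind of input promotes the playing video.
        if input_kind in {"tap_reply_key", "preset_swipe", "preset_voice"}:
            self.target_video = self.playing_video
        return self.target_video
```

An unrecognized touch leaves the target video undetermined, matching the accuracy argument above.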
Referring to fig. 3, taking battle group 1 as an example: for a video published by video author A, video author B can generate response video 1 by combining video materials with the main video, thereby responding to the main video. After receiving response video 1 from video author B, video author A can generate response video 2 by combining video materials with response video 1, thereby responding to video author B. Video author B can then generate response video 3 on the basis of response video 2 and respond to video author A; video author A can continue with response video 4 on the basis of response video 3; and video author B can continue with response video 5 on the basis of response video 4. Similarly, other video authors, such as video author C and video author D, may exchange video responses with video author A in the same manner, which is not described herein again. In this way, video publishers interact by creating new videos, providing a more flexible video interaction mode and making video creation simpler and more convenient.
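The alternating reply chain of fig. 3 can be sketched as follows; the function name and the synthetic video labels are illustrative assumptions, and each response video is built from the previous video plus a matched material:

```python
# Sketch of a two-author "battle group" reply chain: authors alternate,
# and every new response video is synthesized from the previous video
# in the chain combined with a video material.

def build_battle_chain(main_author, responder, main_video, rounds):
    chain = [(main_author, main_video)]
    prev = main_video
    for i in range(1, rounds + 1):
        author = responder if i % 2 == 1 else main_author
        new_video = f"response_{i}({prev}+material_{i})"
        chain.append((author, new_video))
        prev = new_video
    return chain
```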
Referring to fig. 4, fig. 4 is a flowchart of another video processing method provided in another embodiment of the present disclosure. As shown in fig. 4, once the main video work is determined, content understanding is performed on the main video work based on a content understanding technology to obtain video features of the main video work; material recommendation is performed based on the video features; the recommended materials and the main video work are edited and synthesized into a new video; the new video is released through a User Generated Content (UGC) video release device; a security audit and a quality audit are then performed on the new video; and when the new video meets the preset security and quality requirements, the video reply is published, that is, the new video is posted as the reply content of the main video work. For the specific implementation of this embodiment, reference may be made to the description of the method embodiment of fig. 1; the same technical effects can be achieved and are not described herein again.
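The fig. 4 pipeline can be expressed as a small orchestration function. The disclosure does not name concrete APIs, so every stage is injected as a placeholder callable here; this is a sketch of the control flow, not an implementation:

```python
# Sketch of the fig. 4 flow: content understanding -> material
# recommendation -> editing/synthesis -> audit -> publish as reply.
# The new video is only published when the audit passes.

def process_main_video(main_video, understand, recommend, edit,
                       audit_ok, publish_reply):
    features = understand(main_video)         # content understanding
    materials = recommend(features)           # material recommendation
    new_video = edit(main_video, materials)   # video editing / synthesis
    if audit_ok(new_video):                   # security + quality audit
        publish_reply(new_video)              # reply to the main video
        return new_video
    return None                               # rejected by the audit
```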
The embodiment of the disclosure also provides a video processing device.
Referring to fig. 5, fig. 5 is a block diagram of a video processing apparatus according to an embodiment of the disclosure. As shown in fig. 5, the video processing apparatus 500 includes:
a determining module 501, configured to determine a target video;
an obtaining module 502, configured to obtain at least one video material matched with the target video;
a synthesizing module 503, configured to determine a target video material from the at least one video material based on a first input, and synthesize the target video material with the target video to generate a new video;
and a reply module 504, configured to reply to the target video by using the new video as reply content of the target video.
Optionally, the determining module 501 is further configured to:
acquiring a playing video in a playing state;
and if a second input acting on the playing video is received, determining the playing video as a target video.
Optionally, the reply module 504 is further configured to:
performing content auditing on the new video;
and under the condition that the new video passes the auditing, taking the new video as the reply content of the target video, and replying the target video.
Optionally, the video processing apparatus 500 further includes a filtering module configured to:
under the condition that the new video passes the auditing, video content screening is carried out on the new video;
and taking the video content screened from the new video as an alternative video material, and storing the alternative video material into a video material library.
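The screening-and-archiving step above can be sketched as follows; the segment representation and the quality predicate are assumptions, since the disclosure leaves the screening criteria unspecified:

```python
# Sketch: after the new video passes auditing, screen its content and
# store the qualifying segments in the material library as alternative
# video materials for later synthesis.

def archive_screened_content(new_video_segments, quality_ok, material_library):
    kept = [seg for seg in new_video_segments if quality_ok(seg)]
    material_library.extend(kept)
    return kept
```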
Optionally, the obtaining module 502 is further configured to:
extracting video features of the target video based on content understanding technology;
at least one video material matching the video feature is obtained.
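One possible way to match materials against the extracted video features is a simple similarity ranking. The disclosure does not specify the content-understanding model or the similarity measure, so the feature vectors and the dot-product score below are purely illustrative:

```python
# Sketch: rank library materials by dot-product similarity between the
# target video's feature vector and each material's feature vector,
# returning the top-k material names.

def match_materials(video_feature, library_features, top_k=3):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    scored = sorted(library_features.items(),
                    key=lambda kv: dot(video_feature, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]
```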
Optionally, the at least one video material includes any one of:
in the video material library, video materials whose matching degree with the target video is greater than a first preset value;
in the video material library, video materials whose frequency of being synthesized with videos to generate new videos is greater than a second preset value;
in the video material library, video materials whose video bit rate is greater than a third preset value;
and video materials matched with the target video, acquired from the video material library based on the user profile of the current user.
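The four alternative selection rules listed above can be sketched as a single filter function. The threshold defaults, dictionary keys, and the `rule` switch are all illustrative assumptions; the disclosure only specifies the criteria themselves:

```python
# Hedged sketch of the four material-selection alternatives:
# matching degree > first preset value, synthesis frequency > second
# preset value, bit rate > third preset value, or overlap with the
# current user's profile.

def select_materials(library, rule, first_preset=0.8,
                     second_preset=100, third_preset=2_000_000,
                     user_profile=None):
    # library: list of dicts with assumed keys "match" (matching degree
    # with the target video), "uses" (times the material was synthesized
    # into a new video), "bitrate", and "tags".
    if rule == "match":
        return [m for m in library if m["match"] > first_preset]
    if rule == "popular":
        return [m for m in library if m["uses"] > second_preset]
    if rule == "bitrate":
        return [m for m in library if m["bitrate"] > third_preset]
    if rule == "profile":
        interests = set(user_profile["interests"])
        return [m for m in library if interests & set(m["tags"])]
    raise ValueError(f"unknown rule: {rule}")
```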
It should be noted that the video processing apparatus 500 provided in this embodiment can implement all the technical solutions of the foregoing video processing method embodiments, and can therefore achieve at least all of the same technical effects, which are not described herein again.
In the technical solutions of the present disclosure, the acquisition, storage, and application of the user personal information involved all conform to the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, a video processing method. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When a computer program is loaded into RAM 603 and executed by computing unit 601, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the video processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. A video processing method, comprising:
determining a target video;
acquiring at least one video material matched with the target video;
determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
taking the new video as the reply content of the target video, and replying the target video;
displaying a first response video posted by a playing end of the new video in reply to the new video, acquiring a video material matched with the first response video, and synthesizing the video material and the first response video to generate a second response video;
taking the second response video as reply content of the first response video, and replying to the first response video;
wherein the playing end of the new video is a publishing end of the target video.
2. The method of claim 1, wherein the determining the target video comprises:
acquiring a playing video in a playing state;
and if a second input acting on the playing video is received, determining the playing video as a target video.
3. The method of claim 1, wherein replying to the target video with the new video as reply content of the target video, comprises:
performing content auditing on the new video;
and under the condition that the new video passes the auditing, taking the new video as the reply content of the target video, and replying the target video.
4. A method according to claim 3, further comprising:
under the condition that the new video passes the auditing, video content screening is carried out on the new video;
and taking the video content screened from the new video as an alternative video material, and storing the alternative video material into a video material library.
5. The method of claim 1, wherein the acquiring at least one video material that matches the target video comprises:
extracting video features of the target video based on content understanding technology;
at least one video material matching the video feature is obtained.
6. The method of any of claims 1-5, wherein the at least one video material comprises any of:
in the video material library, video materials whose matching degree with the target video is greater than a first preset value;
in the video material library, video materials whose frequency of being synthesized with videos to generate new videos is greater than a second preset value;
in the video material library, video materials whose video bit rate is greater than a third preset value;
and video materials matched with the target video, acquired from the video material library based on the user profile of the current user.
7. A video processing apparatus comprising:
the determining module is used for determining a target video;
the acquisition module is used for acquiring at least one video material matched with the target video;
the synthesizing module is used for determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
the reply module is used for replying the target video by taking the new video as reply content of the target video;
the synthesizing module is further configured to: display a first response video posted by a playing end of the new video in reply to the new video, acquire a video material matched with the first response video, and synthesize the video material and the first response video to generate a second response video;
the reply module is further configured to: take the second response video as reply content of the first response video, and reply to the first response video;
wherein the playing end of the new video is a publishing end of the target video.
8. The apparatus of claim 7, wherein the means for determining is further for:
acquiring a playing video in a playing state;
and if a second input acting on the playing video is received, determining the playing video as a target video.
9. The apparatus of claim 7, wherein the reply module is further to:
performing content auditing on the new video;
and under the condition that the new video passes the auditing, taking the new video as the reply content of the target video, and replying the target video.
10. The apparatus of claim 9, further comprising a screening module to:
under the condition that the new video passes the auditing, video content screening is carried out on the new video;
and taking the video content screened from the new video as an alternative video material, and storing the alternative video material into a video material library.
11. The apparatus of claim 7, wherein the acquisition module is further to:
extracting video features of the target video based on content understanding technology;
at least one video material matching the video feature is obtained.
12. The apparatus of any of claims 7-11, wherein the at least one video material comprises any of:
in the video material library, video materials whose matching degree with the target video is greater than a first preset value;
in the video material library, video materials whose frequency of being synthesized with videos to generate new videos is greater than a second preset value;
in the video material library, video materials whose video bit rate is greater than a third preset value;
and video materials matched with the target video, acquired from the video material library based on the user profile of the current user.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202110734130.8A 2021-06-30 2021-06-30 Video processing method, video processing device, electronic equipment and computer readable storage medium Active CN113473178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110734130.8A CN113473178B (en) 2021-06-30 2021-06-30 Video processing method, video processing device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113473178A CN113473178A (en) 2021-10-01
CN113473178B true CN113473178B (en) 2023-06-16

Family

ID=77874353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110734130.8A Active CN113473178B (en) 2021-06-30 2021-06-30 Video processing method, video processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113473178B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8818175B2 (en) * 2010-03-08 2014-08-26 Vumanity Media, Inc. Generation of composited video programming
CN103928039B (en) * 2014-04-15 2016-09-21 北京奇艺世纪科技有限公司 A kind of image synthesizing method and device
CN109963166A (en) * 2017-12-22 2019-07-02 上海全土豆文化传播有限公司 Online Video edit methods and device
CN117251091A (en) * 2020-12-25 2023-12-19 北京字节跳动网络技术有限公司 Information interaction method, device, equipment, storage medium and program product


Similar Documents

Publication Publication Date Title
US11483268B2 (en) Content navigation with automated curation
CN107193792B (en) Method and device for generating article based on artificial intelligence
US9086776B2 (en) Modifying avatar attributes
US10356025B2 (en) Identifying and splitting participants into sub-groups in multi-person dialogues
CN105204886B (en) A kind of method, user terminal and server activating application program
CN104866275B (en) Method and device for acquiring image information
JP7394809B2 (en) Methods, devices, electronic devices, media and computer programs for processing video
CN112818224B (en) Information recommendation method and device, electronic equipment and readable storage medium
CN112749300B (en) Method, apparatus, device, storage medium and program product for video classification
US11943181B2 (en) Personality reply for digital content
CN107924398B (en) System and method for providing a review-centric news reader
US20230237255A1 (en) Form generation method, apparatus, and device, and medium
CN112581162A (en) Information content display method, device, storage medium and terminal
US20180026922A1 (en) Messaging as a graphical comic strip
CN115357755B (en) Video generation method, video display method and device
CN112843681B (en) Virtual scene control method and device, electronic equipment and storage medium
CN106021279B (en) Information display method and device
US20230198791A1 (en) Group contact lists generation
CN113473178B (en) Video processing method, video processing device, electronic equipment and computer readable storage medium
CN113873323B (en) Video playing method, device, electronic equipment and medium
CN114969427A (en) Singing list generation method and device, electronic equipment and storage medium
CN114564581A (en) Text classification display method, device, equipment and medium based on deep learning
CN113923477A (en) Video processing method, video processing device, electronic equipment and storage medium
CN113778717A (en) Content sharing method, device, equipment and storage medium
CN113766257B (en) Live broadcast data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant