CN114630057B - Method and device for determining special effect video, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114630057B
CN114630057B
Authority
CN
China
Prior art keywords
video frame
special effect
processed
determining
audio
Prior art date
Legal status
Active
Application number
CN202210238163.8A
Other languages
Chinese (zh)
Other versions
CN114630057A (en)
Inventor
陈嘉俊
全浩
阮翔鸿
周栩彬
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210238163.8A
Publication of CN114630057A
Application granted
Publication of CN114630057B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; studio devices; studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the disclosure provides a method, an apparatus, an electronic device and a storage medium for determining a special effect video. The method includes the following steps: in response to a special effect triggering operation, determining a blurred video frame corresponding to the current video frame to be processed, and determining an audio special effect consistent with the audio information of the current video frame to be processed; using the blurred video frame as the background image of the current special effect video frame, and the audio special effect as the foreground image of the current special effect video frame; and obtaining the target special effect video by splicing the special effect video frames of the video frames to be processed. With this technical scheme, the user's voice is presented in a visual form, enhancing the interest of the special effect video, while the video frames to be processed shot by the user are blurred, meeting the user's personalized needs.

Description

Method and device for determining special effect video, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of video processing, in particular to a method, a device, electronic equipment and a storage medium for determining special effect video.
Background
With the development of network technology, more and more application programs enter the life of users, and especially a series of software capable of shooting short videos is deeply favored by users. For example, a user may take a video through application software and publish the video to a particular platform or share it with other users.
However, in the prior art, the video special effects that applications provide to users are not rich enough, the video content shot by users lacks interest, and the personalized needs of users are not considered during video shooting, which reduces the user experience.
Disclosure of Invention
The method, apparatus, electronic device and storage medium for determining a special effect video provided by the present disclosure not only present the user's voice in a visual form, enhancing the interest of the special effect video, but also blur the video frames to be processed that the user shoots, meeting the user's personalized needs.
In a first aspect, an embodiment of the present disclosure provides a method for determining a special effect video, including:
in response to a special effect triggering operation, determining a blurred video frame corresponding to the current video frame to be processed; determining an audio special effect consistent with the audio information of the current video frame to be processed;
using the blurred video frame as a background image of the current special effect video frame, and using the audio special effect as a foreground image of the current special effect video frame;
and obtaining the target special effect video by splicing the special effect video frames of the video frames to be processed.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for determining a special effect video, including:
the blurred video frame determining module, configured to, in response to the special effect triggering operation, determine a blurred video frame corresponding to the current video frame to be processed, and determine an audio special effect consistent with the audio information of the current video frame to be processed;
the special effect video frame generation module is used for taking the blurred video frame as a background image of the current special effect video frame and taking the audio special effect as a foreground image of the current special effect video frame;
and the target special effect video generation module is used for obtaining target special effect video through the special effect video frame splicing processing of each video frame to be processed.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of determining special effects video as described in any of the embodiments of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a storage medium containing computer-executable instructions for performing the method of determining special effects video as described in any of the disclosed embodiments when executed by a computer processor.
According to the technical scheme of the embodiment of the disclosure, in response to the special effect triggering operation, the blurred video frame corresponding to the current video frame to be processed and the audio special effect consistent with the audio information of the current video frame to be processed are determined. The blurred video frame is used as the background image and the audio special effect as the foreground image, so that the current special effect video frame is constructed; the special effect video frames are then spliced to obtain the target special effect video. The user's voice is thus presented in a visual form, enhancing the interest of the special effect video; at the same time, the video frames to be processed shot by the user are blurred, meeting the user's personalized needs and improving the user's experience in making special effect videos.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flowchart of a method for determining a special effect video according to a first embodiment of the disclosure;
fig. 2 is a schematic structural diagram of an apparatus for determining a special effect video according to a second embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first", "second", and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of the functions performed by these devices, modules, or units. It should also be noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before the technical scheme is introduced, an application scenario of the embodiment of the disclosure may be illustrated. For example, when a user shoots a video through application software or makes a video call with other users, the user may want the captured video content to be more interesting. At the same time, some users may have personalized requirements for the captured picture; for example, some users with social anxiety do not want to show their personal features in the video. It can be understood that these users want to hide all or part of the content in the captured picture (for example, the user's own facial image). In this case, according to the technical scheme of this embodiment, the video picture is blurred while the user's audio special effects are superimposed on each video frame, so as to obtain a special effect video with richer picture content that also effectively meets the user's personalized needs.
Example 1
Fig. 1 is a schematic flowchart of a method for determining a special effect video according to an embodiment of the present disclosure. The embodiment is applicable to generating a more interesting special effect video while meeting the personalized needs of a user. The method may be performed by an apparatus for determining a special effect video, which may be implemented in the form of software and/or hardware, optionally by an electronic device such as a mobile terminal, a PC terminal, or a server.
As shown in fig. 1, the method includes:
s110, responding to special effect triggering operation, and determining a fuzzy video frame corresponding to the current video frame to be processed; and determining the audio special effect consistent with the audio information of the current video frame to be processed.
The apparatus performing the method for determining a special effect video provided by the embodiment of the disclosure may be integrated in application software supporting the special effect video processing function, and the software may be installed in an electronic device; optionally, the electronic device may be a mobile terminal, a PC terminal, or the like. The application software may be a type of software for image/video processing; its specific form is not described in detail here, as long as image/video processing can be implemented. The method may also be implemented by a specially developed application program that supports adding and displaying special effects, or be integrated in a corresponding page, so that a user can process the special effect video through a page integrated in the PC terminal.
In this embodiment, in the application software or the application program supporting the special effect video processing function, a control for triggering the special effect may be developed in advance, and when the control is detected to be triggered by the user, the response to the special effect triggering operation may be performed.
In this embodiment, the current video frame to be processed may be a frame of image captured by the electronic device with application software installed in response to the special effect triggering operation, or may be a frame of image in the currently played video, and the corresponding blurred video frame is a frame of image obtained after the current video frame to be processed is subjected to blurring processing.
It will be appreciated that blurred video frames do not show the content of the picture as clearly as the video frames to be processed, but rather reduce the definition of the picture, i.e. the visual effect of blurring the picture, thereby hiding all or part of the information in the picture of each frame. For example, after processing a plurality of frames of video frames to be processed containing user face information to obtain corresponding blurred video frames, the appearance of the user cannot be accurately identified only through the blurred video frames.
In practical application, there are various ways to determine the blurred video frame. Optionally: a blur filter is added to the current video frame to be processed to obtain the blurred video frame; or Gaussian blur processing is performed on the current video frame to be processed to obtain the blurred video frame; or the current video frame to be processed is input into a blur processing model to obtain the blurred video frame; or, if the current video frame to be processed includes a target object, the target object is blurred to obtain the blurred video frame. These ways are described separately below.
In the first way of determining the blurred video frame, the blur filter may be a filter developed in advance in the application software. Specifically, after the blur filter is added to a video frame to be processed, a blurring effect may be produced in all areas of each frame of image, or only in areas of higher definition or stronger contrast, so as to obtain the corresponding blurred video frame. It should be understood by those skilled in the art that when only part of the image is blurred, this may be achieved by adjusting parameters related to the pixels of different areas in the image, which is not described herein.
In the second way of determining the blurred video frame, Gaussian blur, also known as Gaussian smoothing, reduces image noise and the level of detail; the resulting image has a visual effect as if it were observed through frosted glass. Based on this, it can be understood that the corresponding blurred video frame is obtained after Gaussian blur processing of the current video frame to be processed.
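As an illustration of this second way, a separable Gaussian blur can be sketched with NumPy alone; the function names and the choice of sigma are illustrative, not part of the patent:

```python
import numpy as np

def gaussian_kernel_1d(sigma: float, radius: int) -> np.ndarray:
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

def gaussian_blur(frame: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Blur an H x W (x C) frame with a separable Gaussian filter.

    Separability lets two cheap 1-D convolutions (rows, then columns)
    replace one expensive 2-D convolution.
    """
    kernel = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    out = frame.astype(np.float64)
    # mode="same" keeps the frame size; edges are implicitly zero-padded.
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, out)
    return out.clip(0, 255).astype(frame.dtype)
```

In practice a library routine such as OpenCV's `cv2.GaussianBlur` would typically be used; the sketch only shows the underlying operation.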
In the third way of determining the blurred video frame, the blur processing model may be a pre-trained neural network model integrated in the relevant application software, used at least for generating blurred video frames. It will be appreciated that the input of the model is the acquired current video frame to be processed, and the output of the model is the corresponding blurred video frame. It should be understood by those skilled in the art that the blur processing model may be trained on a corresponding training set and validation set; when the loss function of the model converges, training is complete and the model may be integrated in the application. The specific training process is not described in this embodiment.
In the fourth way of determining the blurred video, a target object may be set in advance in the application, for example, image data including face information of a specific user is input into the application as the target object, and further, when the application responds to a special effect triggering operation and recognizes the face information of the specific user in the display interface, the current video frame to be processed may be automatically subjected to blurring processing, so as to obtain a corresponding blurred video frame. It will be appreciated that in this mode, when the target object is not recognized in the display interface, blurring processing is not performed on each frame image.
In this embodiment, while the current video frame to be processed is acquired, the user's audio information also needs to be acquired in order to present the user's voice in a visual form in the special effect video. For example, voice information uttered by the user is collected by a microphone on the electronic device while the video is being shot. It is understood that the audio information includes at least audio features for characterizing the audio content or audio characteristics. In practical application, the audio features may include voiceprint spectrum features, and the corresponding audio special effect may be a dynamic spectrum of the sound wave (such as dynamic ripples); when the precision reaches a certain level, the sound wave spectrum can at least characterize a specific user and the voice information uttered by that user. Meanwhile, when the volume of the audio changes, the dynamic ripples in the voiceprint voice fluctuate accordingly: the higher the volume, the larger the fluctuation of the dynamic ripples, and correspondingly, the lower the volume, the smaller the fluctuation. It should be noted that, in order to unify the visual and auditory effects of the finally generated special effect video, the voiceprint voice also needs to be consistent with the audio information of the current video frame to be processed.
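The volume-to-ripple relationship described above (louder audio, larger fluctuation) can be sketched as follows; `ripple_amplitudes` and its parameters are illustrative names, and the mapping from RMS volume to ripple amplitude is an assumption, since the patent does not specify one:

```python
import numpy as np

def ripple_amplitudes(samples: np.ndarray, sample_rate: int, fps: int,
                      max_amplitude: float = 1.0) -> np.ndarray:
    """Per-video-frame ripple amplitude driven by audio volume.

    RMS volume is computed over the audio samples belonging to each
    video frame and normalized, so louder audio yields a larger ripple
    fluctuation and quieter audio a smaller one.
    """
    hop = sample_rate // fps            # audio samples per video frame
    n_frames = len(samples) // hop
    rms = np.array([np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))
                    for i in range(n_frames)])
    peak = rms.max() if rms.size and rms.max() > 0 else 1.0
    return max_amplitude * rms / peak   # scaled to [0, max_amplitude]
```

A renderer would then draw each frame's ripple with the returned amplitude, keeping the visual effect in sync with the audio of that frame.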
Further, after the audio information is parsed and the audio features are obtained, an audio special effect corresponding to the audio features of the audio information may be constructed, where the audio special effect may be a cartoon map for representing the audio features, such as a pre-constructed animal cartoon map, or may be waves corresponding to a voiceprint spectrum, which is understood by those skilled in the art that the audio special effect may be pre-created according to actual needs, such as a feature pattern related to a country or a more visual note pattern, etc., which is not limited herein specifically.
In this embodiment, the display form of the audio special effect includes dynamic display and/or static display. Specifically, the dynamic display presents the audio features as an animation, while the static display presents the audio features statically on the display interface. When the application dynamically displays the pre-built cartoon map of a small animal, the map continuously jumps up and down in the display interface along with the change of the audio features. When the application statically displays the ripples corresponding to the voiceprint spectrum, it may splice multiple segments of ripples into a whole based on the timestamps and display it on the display interface. It can be understood that, since the size of the display interface is usually limited, only one segment of the ripples corresponding to the voiceprint spectrum is displayed at any moment; only when the application detects that the user selects a specific moment, or performs a drag operation on the display area of the ripples, is the segment of ripples corresponding to that moment or drag operation displayed.
Optionally, the audio information corresponding to the current video frame to be processed is processed based on a voiceprint feature extraction model to obtain the voiceprint voice. The voiceprint feature extraction model may be a pre-trained model integrated in the application. It can be understood that after the user's audio information is collected, it can be input into the voiceprint feature extraction model to obtain the corresponding voiceprint voice. Of course, in practical application, the voiceprint voice output by the voiceprint feature extraction model may be a sound wave spectrum with various animation effects; for example, the output sound wave spectrum may be presented in various shapes or colors. Those skilled in the art should understand that the specific visual effect of the voiceprint voice may be selected according to the actual situation, and the embodiments of the disclosure are not limited in detail here.
S120, taking the blurred video frame as a background image of the current special effect video frame, and taking the audio special effect as a foreground image of the current special effect video frame.
In this embodiment, after determining the blurred video frame and the audio special effects, the special effect video frame can be constructed based on the above information. Specifically, the special effect video frame includes a background image and a foreground image, the foreground image is superimposed and displayed on the background image, and all or part of the area of the background image can be blocked, so that the constructed special effect video frame has a hierarchical sense, and the process of determining the background image is described below.
Optionally, in the process of determining the background image, a superimposed background image corresponding to the current video frame to be processed and a target transparency corresponding to the superimposed background image may be determined; and superposing the superposed background image on the fuzzy video frame according to the target transparency to serve as a background image.
The superimposed background image may be an image preset by a user through application, or may be an image automatically selected according to information such as brightness and color of a video frame to be processed. Specifically, the image may be a solid color image, such as a solid black image or a solid gray image, and when the solid color image is selected as the superimposed background image, the visual effect presented by the finally obtained special effect video frame is softer. It will be appreciated that the automatically selected image is more adapted to the picture presented by the video frame to be processed.
In the actual application process, the pixel mean value can be determined according to the pixel value of each pixel point in the current video frame to be processed; and determining an overlapped background image corresponding to the current video frame to be processed based on the pixel mean value. For example, the current video frame to be processed is analyzed, so that the pixel value of each pixel point in the picture is determined, further, the pixels of all channels (R, G, B) are averaged to obtain the pixel average value reflecting the average brightness of the current video frame to be processed, and based on the pixel average value, the image with the corresponding color can be determined to be used as the superimposed background image. It is understood that a solid color overlay background image is determined based on the pixel mean.
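The pixel-mean computation described here can be sketched as a minimal NumPy function; the function name is illustrative:

```python
import numpy as np

def solid_overlay_from_mean(frame: np.ndarray) -> np.ndarray:
    """Solid-color superimposed background derived from the frame's pixel mean.

    All R, G and B values are averaged into a single brightness level,
    and a solid image of that level is returned, so the overlay adapts
    to the average brightness of the video frame to be processed.
    """
    mean = float(frame.mean())            # average over all pixels and channels
    return np.full_like(frame, round(mean))
```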
In order to make the finally generated special effect video present a better visual effect, the target transparency of the image needs to be determined while the superimposed background image is determined, and likewise, the user can set the target transparency in advance by applying, and the application can dynamically select the corresponding target transparency according to the information such as the brightness, the color and the like of the video frame to be processed, which is not described in detail herein in the embodiment of the disclosure. And finally, superposing the superposed background image on the fuzzy video frame according to the target transparency to obtain the background image of the special effect video frame. It should also be noted that the superimposed background image is provided with a certain transparency.
For example, when a blurred video frame is determined, a gray image may be automatically selected as the superimposed background image according to the blurred video frame; meanwhile, the target transparency of the gray image is determined to be 50% according to parameters preset by the user. On this basis, the gray image can be adjusted to 50% transparency and then superimposed on the blurred video frame, and the superimposed image is used as the background image.
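The gray-image example can be sketched as standard alpha compositing; the `alpha = 1 - transparency` convention is an assumption, since the patent does not define how target transparency maps to blending weights:

```python
import numpy as np

def composite_background(blurred: np.ndarray, overlay: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    """Superimpose the overlay background on the blurred video frame.

    alpha is the overlay's opacity, taken here as 1 - transparency, so
    a 50% target transparency corresponds to alpha = 0.5.
    """
    out = alpha * overlay.astype(np.float64) + (1.0 - alpha) * blurred.astype(np.float64)
    return out.round().astype(blurred.dtype)
```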
In this embodiment, after the background image is determined, the audio special effect corresponding to the moment corresponding to the background information can be used as the foreground image and superimposed on the background image, so as to construct the special effect video frame. It should be understood by those skilled in the art that when there are a plurality of video frames to be processed, the application may determine, for each video frame to be processed, a corresponding special effect video frame based on the scheme of the present embodiment, which is not described herein.
S130, obtaining the target special effect video through special effect video frame splicing processing of each video frame to be processed.
In this embodiment, after determining a plurality of special effect video frames corresponding to a plurality of to-be-processed video frames, a sequence corresponding to the plurality of special effect video frames may be determined according to a timestamp carried by each to-be-processed video frame, so that the plurality of special effect video frames are spliced according to the sequence to obtain the target special effect video. It can be understood that in the target special effect video, the content in each video frame to be processed can be displayed in a blurred visual effect, and meanwhile, the voiceprint voice at the corresponding moment can be displayed at a specific position on the upper layer of the picture, namely, the audio of the user at the moment is displayed in a visual form.
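The splicing step, ordering the special effect frames by the timestamps carried by their source frames, can be sketched as follows; `EffectFrame` and `image_id` are illustrative stand-ins for real frame data:

```python
from dataclasses import dataclass

@dataclass
class EffectFrame:
    timestamp_ms: int   # timestamp carried by the source video frame to be processed
    image_id: str       # illustrative stand-in for the composited frame data

def splice(frames: list[EffectFrame]) -> list[EffectFrame]:
    """Order special effect frames by source timestamp to form the target video."""
    return sorted(frames, key=lambda f: f.timestamp_ms)
```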
In this embodiment, since the video frame to be processed is blurred and only the voiceprint voice is displayed, the personalized needs of the user are satisfied, but the viewing experience of other users may be affected. Therefore, in order to improve the attraction of the special effect video to other users, a target character model may be added to the special effect video. This process is described below.
Specifically, a target character model is superimposed in a foreground image; acquiring limb action information and/or facial expression of a target object in a current video frame to be processed; the target character model is adjusted to match limb movements and/or facial expressions.
Wherein the target character model may be a pre-set static or dynamic 3D model that may present at least an anthropomorphic representation, such as a virtual cartoon character. At the same time, the target character model may be superimposed in the foreground image, for example, with the target character model presented above or below the voiceprint speech. Those skilled in the art will appreciate that the target character model also occludes all or part of the background image, thereby making the constructed special effect video frame more hierarchical.
In this embodiment, when the target character model is a dynamic model, in order to further improve the interest of the special effect video, the motion of the model in the special effect video may be matched with the real motion of the user. For example, when it is determined through a key point recognition technology that the user's arm is raised in the current video frame to be processed, the arm of the target character model is adaptively adjusted, that is, the virtual character's arm is raised; in the next video frame to be processed, if it is determined that the user's arm has been put down, the virtual character's arm is adaptively adjusted to be put down. Based on this, in the finally generated special effect video, the target character model makes actions basically consistent with the actual actions of the user. It should be noted that, in practical application, the application may further capture the user's facial expressions in the video frame to be processed, so as to adaptively adjust the expression of the target character model, which is not described in detail here.
It can be understood that in any frame of the special effect video, the limb motion and/or facial expression of the target character model are matched with the limb motion and/or facial expression of the target object in the video frame to be processed.
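The frame-by-frame matching of the model to the user can be sketched as a minimal state diff; the limb-state representation ("raised"/"lowered") is an illustrative assumption about what a key point recognizer might report:

```python
def model_adjustments(prev_pose: dict[str, str],
                      detected_pose: dict[str, str]) -> dict[str, str]:
    """Limbs of the character model that must change in this frame.

    Only limbs whose newly detected state differs from the model's
    previous state are adjusted, so an arm raised in one frame and put
    down in the next is tracked frame by frame without redundant updates.
    """
    return {limb: state for limb, state in detected_pose.items()
            if prev_pose.get(limb) != state}
```

The same diff could drive facial-expression updates by treating expression labels as additional keys.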
In the practical application process, on the one hand, the special effect video may be generated in real time according to the scheme of the embodiment, for example, in the process of video call of multiple users. On the other hand, the existing video can be subjected to post-processing to generate the special effect video. The specific video obtained by post-processing will be described below.
Specifically, if the target special effect video is generated in a video recording mode, text information corresponding to the audio information is determined, and the text information is displayed in a target area of the playing interface when the target special effect video is played.
Specifically, the video recording mode is a mode in which a user records video on their own, as distinct from a video call mode among multiple users. In this mode, the user may shoot special effect videos based on functions provided by the application, and further process, store, or share the generated special effect videos.
When the target special effect video is generated in the video recording mode, in order to facilitate sharing and improve the intelligence of the application, text information corresponding to the audio information may be determined based on a pre-trained speech recognition model and/or semantic recognition model, and displayed in a target area of the playing interface. To avoid the text information being occluded and degrading the viewing experience, the target area is located in the foreground image of each special effect video frame. Meanwhile, to prevent the text information from occluding the voiceprint voice in each frame of the special effect image, and to avoid visual confusion between the two elements for the viewer, the text information and the voiceprint voice corresponding to the audio information are displayed in the playing interface in a differentiated manner: for example, the text information is displayed at the bottom of the playing interface in white in a specific font, while the voiceprint voice is superimposed horizontally and in multiple colors at the center of the background image, so that it appears at the center of the playing interface.
It should be understood by those skilled in the art that the audio information corresponding to each frame of the special effect video may differ; accordingly, the text information displayed in the playing interface changes continuously as the video plays. By displaying the text information corresponding to the audio information in the playing interface, the technical effect of automatically adding subtitles to the special effect video is achieved, avoiding the problem that unclear audio input by one user degrades the viewing experience of other users.
It should be noted that, after the corresponding text information is generated for the target special effect video, in order to avoid the audio information, text information, and voiceprint voice in the special effect video falling out of sync, the three elements may be adjusted to be displayed synchronously. A control for adjusting the timestamps of the three elements is developed in advance in the application. After the corresponding text information is determined for the audio information of each frame in the special effect video, if the text information in one or more frames of the generated video is found to be out of sync with the actual audio, the timestamps of the text information for those frames can be adjusted by applying an adjustment operation to this control. It can be appreciated that, when the voiceprint voice in one or more frames is out of sync with the actual audio, its timestamp can be adjusted in the same manner, which the embodiments of the present disclosure do not repeat here.
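The timestamp adjustment described above amounts to shifting the affected entries' timestamps by a correction offset. A minimal sketch under that assumption (the data layout and function name are illustrative, not from the patent):

```python
def shift_text_timestamps(subtitles, offset_s, start=None, end=None):
    """Shift subtitle timestamps by offset_s seconds.

    Only entries whose original time falls in [start, end) are moved,
    mirroring a correction applied to "one or more frames"; pass
    start=end=None to shift every entry.
    """
    out = []
    for t, text in subtitles:
        in_range = (start is None or t >= start) and (end is None or t < end)
        out.append((round(t + offset_s, 3) if in_range else t, text))
    return out

subs = [(0.0, "hello"), (1.5, "world"), (3.0, "bye")]
# The middle entry lags the audio by half a second; pull it back.
fixed = shift_text_timestamps(subs, -0.5, start=1.0, end=2.0)
# fixed == [(0.0, "hello"), (1.0, "world"), (3.0, "bye")]
```

The same shift applied to voiceprint-voice entries would realize the analogous correction mentioned for the foreground effect.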
In this embodiment, the application may also present the target keywords associated with the audio information on the display interface. Optionally, updating at least one target keyword on the display interface according to the historical audio information of each historical video frame to be processed before the current video frame to be processed.
Specifically, after determining the user's audio information, the application may parse the audio information based on a pre-trained speech processing model and extract keywords from it. A keyword may be one of the words with the highest frequency of occurrence in the audio information, or a word that appears in the audio information and matches content in a pre-built keyword lexicon. In practice, a keyword may also be a current trending word appearing in the audio information, or a professional term in a certain field. Those skilled in the art will understand that the extraction rules for keywords may be set according to actual requirements, and the embodiments of the present disclosure are not specifically limited here.
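The two extraction rules named above (highest-frequency words, lexicon matches) can be combined in a few lines. This is a sketch over an already-tokenized transcript; the tokenizer and the speech processing model it stands downstream of are outside its scope, and all names are illustrative:

```python
from collections import Counter

def extract_keywords(transcript_words, lexicon, top_k=2):
    """Keywords = the top-k most frequent words, plus any word that
    also appears in the pre-built keyword lexicon."""
    counts = Counter(transcript_words)
    frequent = {w for w, _ in counts.most_common(top_k)}
    in_lexicon = set(transcript_words) & lexicon
    return frequent | in_lexicon

words = ["video", "effect", "video", "blur", "effect", "video", "patent"]
kws = extract_keywords(words, lexicon={"patent"}, top_k=2)
# kws == {"video", "effect", "patent"}
```

Trending words or domain terms would simply be further sets unioned in the same way.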
In this embodiment, after the application determines the keywords in the audio information, the keywords may be displayed on the display interface, where they serve at least to locate historical video frames. Specifically, when a trigger operation on a target keyword is detected, playback jumps to the historical target special effect video frame corresponding to that keyword; the historical target special effect video frame is the video frame obtained after special effect processing of a historical video frame to be processed.
It can be understood that, when the application detects a trigger operation by the user on any keyword, that keyword is the target keyword, and the application can automatically jump from the currently played special effect video frame to the historical special effect video frame containing that keyword. In this way, when the user later recalls content associated with a certain keyword, the corresponding special effect video frame can be located quickly. For example, suppose the phrase "technical scheme" appears in the user's audio information, is determined by the application to be a keyword, and is displayed on the display interface. When the user later plays back the generated special effect video and only wants to watch the part in which "technical scheme" appears, the user clicks the displayed phrase; upon detecting the trigger operation, the application automatically jumps to the historical special effect video frame in which "technical scheme" first appears and continues playback from that frame.
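The "jump to first appearance" behavior amounts to an index from keyword to the earliest frame whose transcript contains it. A minimal sketch, with hypothetical names and a per-frame word-list transcript assumed:

```python
def build_keyword_index(frame_transcripts):
    """Map each keyword to the first special effect frame it appears in."""
    index = {}
    for frame_no, words in enumerate(frame_transcripts):
        for w in words:
            index.setdefault(w, frame_no)  # keep only the earliest frame
    return index

def jump_to(keyword, index, current_frame):
    """Return the frame to seek to; stay put if the keyword is unknown."""
    return index.get(keyword, current_frame)

transcripts = [["hello"], ["technical", "scheme"], ["scheme", "details"]]
idx = build_keyword_index(transcripts)
target = jump_to("scheme", idx, current_frame=5)
# target == 1 — the first frame in which "scheme" was spoken
```

Playback would then resume from `target` rather than the current position.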
According to the technical scheme of this embodiment, in response to a special effect triggering operation, the blurred video frame corresponding to the current video frame to be processed and the audio special effect consistent with its audio information are determined; the blurred video frame is used as the background image and the audio special effect as the foreground image to construct the current special effect video frame, and the special effect video frames are then spliced to obtain the target special effect video. The user's voice is thereby presented in a visual manner, enhancing the interest of the special effect video; at the same time, blurring the video frames shot by the user meets personalized requirements and improves the user's experience when producing special effect videos.
Example two
Fig. 2 is a schematic structural diagram of an apparatus for determining a special effect video according to a second embodiment of the present disclosure, where, as shown in fig. 2, the apparatus includes: a blurred video frame determination module 210, a special effects video frame generation module 220, and a target special effects video generation module 230.
A blurred video frame determination module 210, configured to determine a blurred video frame corresponding to the current video frame to be processed in response to the special effect triggering operation; and determining the audio special effect consistent with the audio information of the current video frame to be processed.
The special effect video frame generating module 220 is configured to take the blurred video frame as a background image of the current special effect video frame and the audio special effect as a foreground image of the current special effect video frame.
The target special effect video generation module 230 is configured to obtain a target special effect video by performing a splicing process on special effect video frames of each video frame to be processed.
Based on the above aspects, the blurred video frame determining module 210 includes a blurred video frame determining unit and an audio special effect determining unit.
The blurred video frame determining unit is configured to add a blur filter to the current video frame to be processed to obtain the blurred video frame; or, to perform Gaussian blur processing on the current video frame to be processed to obtain the blurred video frame; or, to input the current video frame to be processed into a blur processing model to obtain the blurred video frame; or, if the current video frame to be processed includes a target object, to blur the target object to obtain the blurred video frame.
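As a minimal sketch of the blur step, the pure-Python box blur below stands in for the Gaussian blur the unit mentions (a real implementation would weight neighbours with a Gaussian kernel, e.g. via an image library). The grayscale nested-list frame and function name are illustrative assumptions:

```python
def box_blur(img):
    """3x3 box blur on a grayscale image (list of rows), clamping at edges.

    Each output pixel is the integer mean of its in-bounds 3x3 neighbourhood,
    which smears detail the same way the embodiment's blur filter would.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc // n
    return out

frame = [[0, 0, 0],
         [0, 90, 0],
         [0, 0, 0]]
blurred = box_blur(frame)
# The bright centre pixel spreads out: blurred[1][1] == 10, blurred[0][0] == 22
```

Blurring only the pixels inside a detected target object's mask, instead of the whole frame, would realize the fourth option listed above.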
And the audio special effect determining unit is used for processing the audio information corresponding to the current video frame to be processed based on the voiceprint feature extraction model to obtain the audio special effect.
On the basis of the technical schemes, the audio special effects correspond to the audio characteristics of the audio information; the audio features include voiceprint spectral features; the display form of the audio special effect comprises dynamic display and/or static display, wherein the dynamic display is used for displaying audio characteristics based on animation, and the static display is used for displaying the audio characteristics on a display interface in a static mode.
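One plausible reduction of the audio information to a bar-style voiceprint display is a per-window RMS amplitude, sketched below. This is an assumption for illustration only: the patent's voiceprint feature extraction model is a trained model, not this hand-rolled envelope, and every name here is hypothetical.

```python
import math

def voiceprint_bars(samples, n_bars=4):
    """Reduce PCM samples to per-window RMS amplitudes — one value per
    bar of a simple voiceprint-style foreground visualisation."""
    window = max(1, len(samples) // n_bars)
    bars = []
    for i in range(0, window * n_bars, window):
        chunk = samples[i:i + window]
        bars.append(round(math.sqrt(sum(s * s for s in chunk) / len(chunk)), 3))
    return bars

# A steady sine tone yields four roughly equal bars near 1/sqrt(2).
tone = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
bars = voiceprint_bars(tone, n_bars=4)
```

Animating `bars` frame by frame gives the dynamic display form; rendering one fixed set gives the static display form.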
Based on the above technical solutions, the special effect video frame generating module 220 includes a superimposed background image determining unit and a background image determining unit.
And the superimposed background image determining unit is used for determining a superimposed background image corresponding to the current video frame to be processed and a target transparency corresponding to the superimposed background image.
And the background image determining unit is used for superposing the superposed background image on the blurred video frame according to the target transparency to serve as the background image.
Optionally, the superimposed background image determining unit is further configured to determine a pixel mean value according to the pixel value of each pixel point in the current video frame to be processed, and to determine the superimposed background image corresponding to the current video frame to be processed based on the pixel mean value.
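The two units above can be sketched together: average the frame's pixels to pick the superimposed background colour, then composite it over the blurred frame at the target transparency. A toy flat-pixel-list sketch with illustrative names:

```python
def mean_color(pixels):
    """Average RGB over all pixel points of the current frame to be processed."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

def blend_over(background_px, overlay_color, alpha):
    """Composite the solid superimposed background over one blurred-frame
    pixel at the target transparency: out = a*overlay + (1-a)*background."""
    return tuple(round(alpha * o + (1 - alpha) * b)
                 for o, b in zip(overlay_color, background_px))

frame = [(10, 20, 30), (30, 40, 50)]   # toy 2-pixel frame to be processed
overlay = mean_color(frame)            # (20, 30, 40)
out = blend_over((100, 100, 100), overlay, alpha=0.25)
# out == (80, 82, 85)
```

Applying `blend_over` to every pixel of the blurred video frame yields the final background image.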
On the basis of the technical schemes, the device for determining the special effect video further comprises a target character model determining module.
A target character model determination module for superimposing a target character model in the foreground image; acquiring limb action information and/or facial expression of a target object in the current video frame to be processed; the target character model is adjusted to match the limb movements and/or the facial expressions.
On the basis of the technical schemes, the device for determining the special effect video further comprises a target keyword updating module.
And the target keyword updating module is used for updating at least one target keyword on the display interface according to the historical audio information of each historical video frame to be processed before the current video frame to be processed.
On the basis of the technical schemes, the device for determining the special effect video further comprises a jump module.
The jump module is configured to, when a trigger operation on a target keyword is detected, jump to the historical target special effect video frame corresponding to the target keyword and play it; the historical target special effect video frame is the video frame obtained after special effect processing of a historical video frame to be processed.
On the basis of the technical schemes, the device for determining the special effect video further comprises a text information determining module.
The text information determining module is used for determining text information corresponding to the audio information if the target special effect video frame is generated in a video recording mode; displaying the text information in a target area of a playing interface when the target special effect video is played; the target area is located in a foreground image of each special effect video frame to be processed.
On the basis of the technical schemes, the device for determining the special effect video further comprises an adjusting module.
And the adjusting module is configured to adjust the audio information, the text information, and the audio special effect to be displayed synchronously.
On the basis of the technical schemes, the device for determining the special effect video further comprises a distinguishing display module.
And the differentiated display module is configured to display the text information and the audio special effect corresponding to the audio information in a differentiated manner in the playing interface.
According to the technical scheme provided by this embodiment, in response to a special effect triggering operation, the blurred video frame corresponding to the current video frame to be processed and the audio special effect consistent with its audio information are determined; the blurred video frame is used as the background image and the audio special effect as the foreground image to construct the current special effect video frame, and the special effect video frames are then spliced to obtain the target special effect video. The user's voice is thereby presented in a visual manner, enhancing the interest of the special effect video; at the same time, blurring the video frames shot by the user meets personalized requirements and improves the user's experience when producing special effect videos.
The device for determining the special effect video provided by the embodiment of the disclosure can execute the method for determining the special effect video provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
Example III
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the disclosure. Referring now to fig. 3, a schematic diagram of an electronic device (e.g., a terminal device or server in fig. 3) 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage means 306 into a random access memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications means 309, or installed from storage means 306, or installed from ROM 302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present disclosure and the method for determining a special effect video provided by the foregoing embodiment belong to the same inventive concept, and technical details not described in detail in the present embodiment may be referred to the foregoing embodiment, and the present embodiment has the same beneficial effects as the foregoing embodiment.
Example IV
The embodiment of the present disclosure provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method for determining a special effect video provided by the above embodiment.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
responding to the special effect triggering operation, and determining a fuzzy video frame corresponding to the current video frame to be processed; determining an audio special effect consistent with the audio information of the current video frame to be processed;
the blurred video frame is used as a background image of the current special effect video frame, and the audio special effect is used as a foreground image of the current special effect video frame;
And obtaining the target special effect video through special effect video frame splicing processing of each video frame to be processed.
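The three claimed steps above can be sketched end to end as one pipeline: per frame, blur the raw frame into the background, visualize the audio into the foreground, composite the two, then splice the results in order. The stage functions are injected as toy stand-ins — everything here is an illustrative assumption, not the patent's implementation:

```python
def make_effect_frame(raw_frame, audio_chunk, blur, audio_effect, composite):
    """One frame of the claimed scheme: blurred frame as background,
    audio special effect as foreground, composited into one effect frame."""
    background = blur(raw_frame)
    foreground = audio_effect(audio_chunk)
    return composite(background, foreground)

def make_effect_video(frames, audio_chunks, **stages):
    """Splice the per-frame results, in order, into the target effect video."""
    return [make_effect_frame(f, a, **stages)
            for f, a in zip(frames, audio_chunks)]

video = make_effect_video(
    frames=["f0", "f1"], audio_chunks=["a0", "a1"],
    blur=lambda f: f + "*blur",            # stand-in for Gaussian blur
    audio_effect=lambda a: a + "*viz",     # stand-in for voiceprint effect
    composite=lambda bg, fg: (bg, fg))     # stand-in for layer compositing
# video == [("f0*blur", "a0*viz"), ("f1*blur", "a1*viz")]
```

Swapping real blur, voiceprint-visualisation, and compositing functions into `stages` yields the target special effect video the claim describes.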
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method comprising:
responding to the special effect triggering operation, and determining a fuzzy video frame corresponding to the current video frame to be processed; determining an audio special effect consistent with the audio information of the current video frame to be processed;
the blurred video frame is used as a background image of the current special effect video frame, and the audio special effect is used as a foreground image of the current special effect video frame;
and obtaining the target special effect video through special effect video frame splicing processing of each video frame to be processed.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, adding a blur filter to the current video frame to be processed to obtain the blurred video frame; or,
performing Gaussian blur processing on the current video frame to be processed to obtain the blurred video frame; or,
inputting the current video frame to be processed into a blur processing model to obtain the blurred video frame; or,
if the current video frame to be processed includes a target object, blurring the target object to obtain the blurred video frame.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, processing the audio information corresponding to the current video frame to be processed based on the voiceprint feature extraction model to obtain the audio special effect.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, the audio special effects correspond to audio features of the audio information; the audio features include voiceprint spectral features; the display form of the audio special effect comprises dynamic display and/or static display, wherein the dynamic display is used for displaying audio characteristics based on animation, and the static display is used for displaying the audio characteristics on a display interface in a static mode.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, determining a superimposed background image corresponding to the current video frame to be processed and a target transparency corresponding to the superimposed background image;
and superposing the superposed background image on the fuzzy video frame according to the target transparency to serve as the background image.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, determining a pixel mean value according to the pixel value of each pixel point in the current video frame to be processed;
and determining an overlapped background image corresponding to the current video frame to be processed based on the pixel mean value.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, overlaying a target character model in the foreground image;
acquiring limb action information and/or facial expression of a target object in the current video frame to be processed;
the target character model is adjusted to match the limb movements and/or the facial expressions.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, updating at least one target keyword on the display interface according to the historical audio information of each historical video frame to be processed before the current video frame to be processed.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
Optionally, when a trigger operation on a target keyword is detected, jumping to the historical target special effect video frame corresponding to the target keyword and playing it;
the historical target special effect video frame is based on a video frame obtained after special effect processing of a historical to-be-processed video frame.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, if the target special effect video frame is generated in a video recording mode, determining text information corresponding to the audio information;
displaying the text information in a target area of a playing interface when the target special effect video is played;
the target area is located in a foreground image of each special effect video frame to be processed.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effects video, the method further comprising:
optionally, adjusting the audio information, the text information and the audio special effect to be displayed synchronously.
According to one or more embodiments of the present disclosure, there is provided a method of determining a special effect video, the method further comprising:
optionally, displaying the text information corresponding to the audio information and the audio special effect in the playing interface in a distinguishable manner.
According to one or more embodiments of the present disclosure, there is provided an apparatus for determining a special effect video, the apparatus comprising:
the blurred video frame determining module is used for determining, in response to a special effect triggering operation, a blurred video frame corresponding to the current video frame to be processed, and determining an audio special effect consistent with the audio information of the current video frame to be processed;
the special effect video frame generation module is used for taking the blurred video frame as a background image of the current special effect video frame and taking the audio special effect as a foreground image of the current special effect video frame;
and the target special effect video generation module is used for obtaining the target special effect video by splicing the special effect video frames of the respective video frames to be processed.
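Taken together, the three modules describe a per-frame pipeline: blur the frame for the background, composite the audio-effect foreground over it, then splice the results. The sketch below is an illustrative approximation only (it substitutes a simple box blur for whatever blur the modules actually use, treats any non-black pixel of the audio-effect layer as foreground, and all names are hypothetical):

```python
import numpy as np

def box_blur(frame: np.ndarray, k: int = 3) -> np.ndarray:
    """Stand-in blur step (a real pipeline might use a Gaussian blur)."""
    pad = k // 2
    padded = np.pad(frame.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(frame.shape, dtype=np.float32)
    for dy in range(k):            # accumulate the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return (out / (k * k)).astype(np.uint8)

def make_effect_frame(frame: np.ndarray, audio_overlay: np.ndarray) -> np.ndarray:
    """Blurred frame as background, audio-effect overlay as foreground."""
    background = box_blur(frame)
    mask = audio_overlay.any(axis=-1, keepdims=True)  # non-black = foreground
    return np.where(mask, audio_overlay, background)

def splice(effect_frames) -> np.ndarray:
    """Splice the per-frame results into the target special effect video."""
    return np.stack(effect_frames, axis=0)
```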
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments in which the above features are interchanged with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. A method of determining a special effect video, comprising:
in response to a special effect triggering operation, determining a blurred video frame corresponding to a current video frame to be processed; determining an audio special effect consistent with audio information of the current video frame to be processed;
taking the blurred video frame as a background image of a current special effect video frame, and taking the audio special effect as a foreground image of the current special effect video frame;
obtaining a target special effect video by splicing the special effect video frames of the respective video frames to be processed;
wherein the taking the blurred video frame as the background image of the current special effect video frame comprises:
determining a superimposed background image corresponding to the current video frame to be processed and a target transparency corresponding to the superimposed background image;
and superimposing the superimposed background image onto the blurred video frame according to the target transparency, to serve as the background image.
2. The method of claim 1, wherein the determining a blurred video frame corresponding to a current video frame to be processed comprises:
adding a blur filter to the current video frame to be processed to obtain the blurred video frame; or,
performing Gaussian blur processing on the current video frame to be processed to obtain the blurred video frame; or,
inputting the current video frame to be processed into a blur processing model to obtain the blurred video frame; or,
if the current video frame to be processed comprises a target object, blurring the target object to obtain the blurred video frame.
3. The method of claim 1, wherein said determining an audio special effect consistent with audio information of the current video frame to be processed comprises:
and processing the audio information corresponding to the current video frame to be processed based on the voiceprint feature extraction model to obtain the audio special effect.
4. A method according to any one of claims 1-3, wherein the audio effects correspond to audio features of the audio information; the audio features include voiceprint spectral features; the display form of the audio special effect comprises dynamic display and/or static display, wherein the dynamic display is used for displaying audio characteristics based on animation, and the static display is used for displaying the audio characteristics on a display interface in a static mode.
5. The method of claim 1, wherein the determining the superimposed background image corresponding to the current video frame to be processed comprises:
determining a pixel mean value according to the pixel value of each pixel point in the current video frame to be processed;
and determining the superimposed background image corresponding to the current video frame to be processed based on the pixel mean value.
6. The method as recited in claim 1, further comprising:
superimposing a target character model in the foreground image;
acquiring limb movement information and/or a facial expression of a target object in the current video frame to be processed;
and adjusting the target character model to match the limb movements and/or the facial expression.
7. The method as recited in claim 1, further comprising:
updating at least one target keyword on a display interface according to the historical audio information of each historical video frame to be processed before the current video frame to be processed;
updating at least one target keyword on a display interface according to the historical audio information of each historical video frame to be processed before the current video frame to be processed, wherein the updating comprises the following steps:
after the audio information is determined, analyzing the audio information based on a pre-trained speech processing model, extracting keywords from the audio information, and displaying the keywords in the display interface, wherein the displayed keywords are used at least for locating historical video frames.
8. The method as recited in claim 7, further comprising:
when a triggering operation on a target keyword is detected, jumping to a historical target special effect video frame corresponding to the target keyword and playing it;
the historical target special effect video frame is a video frame obtained after special effect processing of a historical video frame to be processed.
9. The method as recited in claim 1, further comprising:
if the target special effect video frame is generated in a video recording mode, determining text information corresponding to the audio information;
displaying the text information in a target area of a playing interface when the target special effect video is played;
the target area is located in a foreground image of each special effect video frame to be processed.
10. The method as recited in claim 9, further comprising:
and adjusting the audio information, the text information and the audio special effect to be displayed synchronously.
11. The method as recited in claim 10, further comprising:
and displaying the text information corresponding to the audio information and the audio special effect in the playing interface in a distinguishable manner.
12. An apparatus for determining a special effect video, comprising:
the blurred video frame determining module is used for determining, in response to a special effect triggering operation, a blurred video frame corresponding to the current video frame to be processed, and determining an audio special effect consistent with the audio information of the current video frame to be processed;
The special effect video frame generation module is used for taking the blurred video frame as a background image of the current special effect video frame and taking the audio special effect as a foreground image of the current special effect video frame;
the target special effect video generation module is used for obtaining the target special effect video by splicing the special effect video frames of the respective video frames to be processed;
the special effect video frame generation module comprises a superimposed background image determination unit and a background image determination unit;
the superimposed background image determining unit is used for determining a superimposed background image corresponding to the current video frame to be processed and a target transparency corresponding to the superimposed background image;
the background image determining unit is used for superimposing the superimposed background image onto the blurred video frame according to the target transparency, to serve as the background image.
13. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of determining special effects video of any of claims 1-11.
14. A storage medium containing computer executable instructions for performing the method of determining special effects video of any one of claims 1-11 when executed by a computer processor.
CN202210238163.8A 2022-03-11 2022-03-11 Method and device for determining special effect video, electronic equipment and storage medium Active CN114630057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210238163.8A CN114630057B (en) 2022-03-11 2022-03-11 Method and device for determining special effect video, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210238163.8A CN114630057B (en) 2022-03-11 2022-03-11 Method and device for determining special effect video, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114630057A CN114630057A (en) 2022-06-14
CN114630057B true CN114630057B (en) 2024-01-30

Family

ID=81902466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210238163.8A Active CN114630057B (en) 2022-03-11 2022-03-11 Method and device for determining special effect video, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114630057B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082301B (en) * 2022-08-22 2022-12-02 中关村科学城城市大脑股份有限公司 Customized video generation method, device, equipment and computer readable medium
CN115623146A (en) * 2022-09-29 2023-01-17 北京字跳网络技术有限公司 Method and device for generating special effect video, electronic equipment and storage medium

Citations (11)

Publication number Priority date Publication date Assignee Title
CN105049911A (en) * 2015-07-10 2015-11-11 西安理工大学 Video special effect processing method based on face identification
CN107911739A (en) * 2017-10-25 2018-04-13 北京川上科技有限公司 A kind of video acquiring method, device, terminal device and storage medium
CN109729297A (en) * 2019-01-11 2019-05-07 广州酷狗计算机科技有限公司 The method and apparatus of special efficacy are added in video
WO2020097888A1 (en) * 2018-11-15 2020-05-22 深圳市欢太科技有限公司 Video processing method and apparatus, electronic device, and computer-readable storage medium
CN111405361A (en) * 2020-03-27 2020-07-10 咪咕文化科技有限公司 Video acquisition method, electronic equipment and computer readable storage medium
CN112215762A (en) * 2019-07-12 2021-01-12 阿里巴巴集团控股有限公司 Video image processing method and device and electronic equipment
WO2021027631A1 (en) * 2019-08-09 2021-02-18 北京字节跳动网络技术有限公司 Image special effect processing method and apparatus, electronic device, and computer-readable storage medium
CN112749613A (en) * 2020-08-27 2021-05-04 腾讯科技(深圳)有限公司 Video data processing method and device, computer equipment and storage medium
CN113422977A (en) * 2021-07-07 2021-09-21 上海商汤智能科技有限公司 Live broadcast method and device, computer equipment and storage medium
CN114071028A (en) * 2020-07-30 2022-02-18 北京字节跳动网络技术有限公司 Video generation and playing method and device, electronic equipment and storage medium
CN114079817A (en) * 2020-08-20 2022-02-22 北京达佳互联信息技术有限公司 Video special effect control method and device, electronic equipment and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
DE102007048973B4 (en) * 2007-10-12 2010-11-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a multi-channel signal with voice signal processing
KR102148006B1 (en) * 2019-04-30 2020-08-25 주식회사 카카오 Method and apparatus for providing special effects to video
CN111770375B (en) * 2020-06-05 2022-08-23 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and storage medium


Non-Patent Citations (1)

Title
An Analysis of Bullet-Comment Video Advertising and Creative Mid-Roll Advertising; Li Yue; Media Forum (Issue 16); full text *

Also Published As

Publication number Publication date
CN114630057A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN111476871B (en) Method and device for generating video
CN114630057B (en) Method and device for determining special effect video, electronic equipment and storage medium
CN110898429B (en) Game scenario display method and device, electronic equipment and storage medium
JP7473676B2 (en) AUDIO PROCESSING METHOD, APPARATUS, READABLE MEDIUM AND ELECTRONIC DEVICE
CN113225606B (en) Video barrage processing method and device
CN109600559B (en) Video special effect adding method and device, terminal equipment and storage medium
CN114598823B (en) Special effect video generation method and device, electronic equipment and storage medium
CN109496295A (en) Multimedia content generation method, device and equipment/terminal/server
US20240273794A1 (en) Image processing method, training method for an image processing model, electronic device, and medium
CN111669502A (en) Target object display method and device and electronic equipment
CN111246196B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN115002359A (en) Video processing method and device, electronic equipment and storage medium
CN114697568B (en) Special effect video determining method and device, electronic equipment and storage medium
CN115205164A (en) Training method of image processing model, video processing method, device and equipment
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN114422698A (en) Video generation method, device, equipment and storage medium
CN111626922B (en) Picture generation method and device, electronic equipment and computer readable storage medium
CN113875227A (en) Information processing apparatus, information processing method, and program
CN113706673B (en) Cloud rendering frame platform applied to virtual augmented reality technology
CN113905177A (en) Video generation method, device, equipment and storage medium
CN116847147A (en) Special effect video determining method and device, electronic equipment and storage medium
CN110807728B (en) Object display method and device, electronic equipment and computer-readable storage medium
CN114765692B (en) Live broadcast data processing method, device, equipment and medium
CN115243097B (en) Recording method and device and electronic equipment
CN117934769A (en) Image display method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant