WO2023121571A2 - Video generation method and device - Google Patents

Video generation method and device

Info

Publication number
WO2023121571A2
WO2023121571A2 (PCT/SG2022/050927)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
features
neural network
target
Prior art date
Application number
PCT/SG2022/050927
Other languages
English (en)
French (fr)
Other versions
WO2023121571A3 (zh)
Inventor
施亦纯
杨骁
沈晓辉
Original Assignee
脸萌有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 脸萌有限公司
Publication of WO2023121571A2 publication Critical patent/WO2023121571A2/zh
Publication of WO2023121571A3 publication Critical patent/WO2023121571A3/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • This technology can be used, for example, to generate special effects that make videos more engaging.
  • multiple video frames in the video need to be generated based on two images, and then a video with gradual changes between the two images is obtained.
  • the quality of the multiple video frames generated based on two images, especially the image quality of the intermediate frames of the video, needs to be improved.
  • SUMMARY Embodiments of the present disclosure provide a video generation method and device to solve the problem that the image quality of intermediate frames of the video needs to be improved when the video is generated based on a small number of images.
  • In the first aspect, an embodiment of the present disclosure provides a video generation method, including: extracting a first image feature from a first image; obtaining a plurality of intermediate image features through nonlinear interpolation according to the first image feature and a second image feature, where the second image feature is an image feature of a second image; and performing image reconstruction through an image generation model based on the first image feature, the second image feature and the plurality of intermediate image features to generate a target video, wherein the target video is used to show the process of gradually changing from the first image to the second image.
  • In the second aspect, an embodiment of the present disclosure provides a method for determining a model, including: training a neural network based on a plurality of training images and an image generation model, where the neural network is used to learn a deviation of image feature adjustment based on the feature space of the image generation model; wherein one training process of the neural network includes: generating target image features based on the image features of a first training image and the image features of a second training image; performing a preliminary adjustment on the target image features based on the feature space; learning, through the neural network, the target deviation corresponding to the preliminary adjustment, and readjusting the preliminarily adjusted target image features according to the target deviation; and adjusting the model parameters of the neural network according to the target deviation, the readjusted target image features, the first training image and the second training image.
  • an embodiment of the present disclosure provides a video generation device, including: an extraction unit, configured to extract a first image feature from a first image; an interpolation unit, configured to obtain a plurality of intermediate image features through nonlinear interpolation according to the first image feature and a second image feature, where the second image feature is an image feature of a second image; and a video generation unit, configured to perform image reconstruction through an image generation model based on the first image feature, the second image feature and the plurality of intermediate image features to generate a target video, wherein the target video is used to show a process of gradual change from the first image to the second image.
  • an embodiment of the present disclosure provides a model determination device, including: a training unit, configured to train a neural network according to a plurality of training images and an image generation model, where the neural network is used to learn a deviation of image feature adjustment based on the feature space of the image generation model; wherein one training process of the neural network includes: generating target image features according to the image features of a first training image and the image features of a second training image; preliminarily adjusting the target image features based on the feature space; learning, through the neural network, the target deviation corresponding to the preliminary adjustment, and readjusting the preliminarily adjusted target image features according to the target deviation; and adjusting the model parameters of the neural network according to the target deviation, the readjusted target image features, the first training image and the second training image.
  • an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the video generation method described in the first aspect or various possible designs of the first aspect, or executes the model determination method described in the second aspect or various possible designs of the second aspect.
  • embodiments of the present disclosure provide a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the video generation method described in the first aspect or various possible designs of the first aspect, or the model determination method described in the second aspect or various possible designs of the second aspect, is implemented.
  • a computer program product is provided; the computer program product includes computer-executable instructions, and when a processor executes the computer-executable instructions, the video generation method described in the first aspect or various possible designs of the first aspect, or the model determination method described in the second aspect or various possible designs of the second aspect, is implemented.
  • a computer program is provided; when the computer program is executed by a processor, the video generation method described in the first aspect or various possible designs of the first aspect can be realized.
  • The video generation method and device obtain a plurality of intermediate image features through nonlinear interpolation according to the first image feature of the first image and the second image feature of the second image, and perform image reconstruction through an image generation model based on the first image feature, the second image feature and the plurality of intermediate image features to generate a target video, where the target video is used to show a process of gradation from the first image to the second image. Therefore, through the nonlinear interpolation method, the quality of the intermediate image features is improved; on the basis of ensuring the similarity between the intermediate frames of the target video and the first image and the second image, the image quality of the intermediate frames of the target video is improved, and thus the video quality of the target video is improved.
  • FIG. 1 is a schematic diagram of an application scenario applicable to an embodiment of the present disclosure
  • Fig. 2 is a schematic flowchart of a video generation method provided by an embodiment of this disclosure
  • Fig. 3a is a schematic flowchart of a video generation method provided by an embodiment of this disclosure
  • FIG. 3b is a schematic flow diagram of adjusting the third image feature based on the feature space of the image generation model and the neural network, provided by an embodiment of the present disclosure
  • FIG. 4 is an example diagram of a framework of nonlinear interpolation based on feature space and neural network provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of a model determination method provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a training framework of a neural network provided by an embodiment of the present disclosure
  • FIG. 7 is a structural block diagram of a video generation device provided by an embodiment of the present disclosure
  • FIG. 8 is a structural block diagram of a model determination device provided by an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a video generation method and device: based on the first image feature of the first image and the second image feature of the second image, multiple intermediate image features are obtained through nonlinear interpolation; based on the first image feature, the second image feature and the plurality of intermediate image features, image reconstruction is performed through an image generation model to generate a target video, where the target video is used to show the process of gradually changing from the first image to the second image.
  • the change process of a real video picture is a nonlinear change. Therefore, compared with the linear interpolation method, the embodiments of the present disclosure use nonlinear interpolation to improve the quality of the intermediate image features and the quality of the intermediate frames of the target video, so that the video picture of the target video presents nonlinear changes and is therefore more authentic and beautiful.
  • improving the quality of the intermediate image feature includes: improving the authenticity of the intermediate image feature, and improving the similarity between the intermediate image and the first image and the second image.
  • Improving the quality of the intermediate frames of the target video includes: improving the aesthetics and authenticity of the intermediate frames, and increasing the similarity between the intermediate frames and the first image and the second image. Referring to FIG. 1, FIG. 1 is a schematic diagram of an application scenario applicable to an embodiment of the present disclosure.
  • the involved devices include a video generating device 101, where the video generating device 101 may be a terminal or a server.
  • FIG. 1 takes the video generating device 101 as an example.
  • the two images may be processed to generate a video for displaying the gradient effect between the two images.
  • the device involved in the application scenario further includes an image acquisition device 102, wherein the image acquisition device 102 can also be a terminal or a server, for example, the terminal collects the image input by the user, or the terminal collects the image in the current scene through the camera.
  • the server collects images published on the network and allowed to be used by the public from the network.
  • FIG. 1 takes the image acquisition device 102 as a terminal as an example.
  • the image acquisition device 102 sends the captured image to the video generation device 101, and the video generation device 101 generates a video showing the gradual change from the captured image to another image (from the image acquisition device 102 or from other devices), or a video showing the gradual change from another image to the captured image.
  • the video generation device 101 and the image acquisition device 102 may be the same or different devices.
  • the video generation device 101 and the image acquisition device 102 are the same device, for example: the user uses a mobile phone to take a selfie, obtains a selfie avatar, and selects another image on the mobile phone; the mobile phone generates a video based on the user's selfie avatar and the image selected by the user, The video content of the video is the process of changing from the user's selfie avatar to the image selected by the user.
  • the video generation device 101 and the image acquisition device 102 are different devices, for example: the user uses a mobile phone to take a selfie, obtains a selfie portrait, and selects another image on the mobile phone; the mobile phone sends the selfie image and the image selected by the user to the server, and the server generates a video and returns the video to the mobile phone.
  • the video content of the video is the process of changing from the user's selfie avatar to the image selected by the user.
  • the terminal may be a personal digital assistant (PDA) device, a handheld device (such as a smart phone, a tablet computer), a computing device (such as a personal computer (personal computer, PC)), a vehicle device, a wearable device (such as smart watches, smart bracelets), and smart home devices (such as smart display devices), etc.
  • the server may be a distributed server, a centralized server, a cloud server, and the like.
  • the execution subject of multiple embodiments of the present disclosure may be an electronic device, and the electronic device may be a terminal or a server. Referring to FIG. 2, FIG. 2 is a first schematic flowchart of a video generation method provided by an embodiment of the present disclosure. As shown in Figure 2, the video generation method includes:
  • the first image may be an image input by the user, an image from other devices, or an image captured by the current execution device.
  • the terminal may acquire the first image input by the user, or acquire the first image captured by the camera on the terminal.
  • the server may receive the first image sent by the terminal and input by the user.
  • the first image feature is an image feature of the first image.
  • an encoder is used to encode the first image to obtain the first image feature.
  • the first image feature specifically refers to the image feature obtained after the first image is encoded.
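  • Purely as an illustrative sketch (the encoder architecture below is a hypothetical placeholder, not the encoder used in this disclosure), extracting the first image feature can be pictured as passing the first image through an encoder network that outputs a latent code:

```python
# Minimal sketch, assuming a GAN-inversion-style encoder; `Encoder` is a toy stand-in.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy placeholder for an image encoder that maps an image to a latent code."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.backbone(image)

encoder = Encoder()
first_image = torch.rand(1, 3, 256, 256)    # placeholder for the user-provided first image
first_image_feature = encoder(first_image)  # the "first latent code"
```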
  • the second image feature is an image feature of the second image.
  • the second image is different from the first image.
  • the second image feature specifically refers to an image feature obtained after the second image is encoded.
  • multiple images and the image features obtained by encoding the multiple images may be stored in advance, and the second image feature may be obtained from the stored image features of the plurality of images.
  • the user may designate the second image among the plurality of pre-stored images, and the image characteristics of the second image, that is, the second image features, are obtained from the image features of the plurality of images. In another manner, the second image feature may be acquired from the image features of the plurality of images in a preset order (for example, image storage order) or randomly.
  • For example, in response to the user's operation of inputting the first image, multiple images for the user to select are displayed on the terminal; the user selects the second image among the multiple images and inputs on the terminal a request to generate a video that gradually changes from the first image to the second image; the terminal, in response to the request, acquires the image features of the second image, that is, the second image features, from the pre-stored image features of the multiple images.
  • the second image input by the user, sent by other devices, or captured by the current execution device may be obtained, and the second image may be encoded to obtain the second image features.
  • the first image feature and the second image feature are used as two known quantities in the nonlinear interpolation process, and a preset nonlinear interpolation method is used to perform nonlinear interpolation, so that the interpolation function, that is, the interpolation curve, is obtained.
  • on the interpolation curve, sampling is performed between the point corresponding to the first image feature and the point corresponding to the second image feature to obtain multiple intermediate image features.
  • the intermediate image feature is used to generate the intermediate frame of the video.
  • equal interval sampling is performed on the interpolation curve, so that the degree of change between adjacent intermediate image features obtained by interpolation is similar, and the quality of the subsequently generated video is improved.
  • image reconstruction is performed through an image generation model to generate a target video, wherein the target video is used to show a process of gradation from the first image to the second image.
  • the image generation model may be a neural network for image generation or image reconstruction, its input data is encoded image features, and its output data is a reconstructed image.
  • the trained image generation model disclosed on the Internet can be used, or the neural network can be trained through training data (including multiple training images) to obtain the image generation model, and the training process of the model is not limited.
  • the first image feature, the second image feature and the plurality of intermediate image features can be respectively input into the image generation model to obtain the reconstructed image corresponding to the first image feature, the reconstructed image corresponding to the second image feature, and the reconstructed image corresponding to each intermediate image feature.
  • the multiple reconstructed images may be sorted and combined according to the distribution sequence of the first image feature, the second image feature and the intermediate image feature on the interpolation curve to obtain the target video.
  • the first frame image is a reconstructed image corresponding to the first image feature
  • the last frame image is a reconstructed image corresponding to the second image feature
  • the intermediate frames are the reconstructed images corresponding to the intermediate image features.
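  • As a minimal sketch of this assembly step (the `generator` callable that maps a latent code to an image, and the dummy stand-in below, are illustrative assumptions rather than the disclosure's exact generator), the frames can be ordered along the interpolation curve and concatenated into the target video:

```python
# Hedged sketch: reconstruct one frame per latent code and order the frames
# from the first image feature, through the intermediate features, to the second.
import numpy as np

def generate_target_video(generator, first_feat, intermediate_feats, second_feat):
    """Return the ordered list of reconstructed frames for the gradual-change video."""
    ordered_feats = [first_feat, *intermediate_feats, second_feat]
    frames = [generator(w) for w in ordered_feats]  # one reconstructed image per feature
    return frames  # frames can then be written out with any video encoder

# usage with a toy "generator" that just squashes and reshapes the latent code
dummy_generator = lambda w: np.tanh(w).reshape(1, -1)
frames = generate_target_video(
    dummy_generator,
    np.zeros(512), [np.full(512, 0.3), np.full(512, 0.6)], np.ones(512),
)
```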
  • In this embodiment, nonlinear interpolation is performed to obtain a plurality of intermediate image features; based on the first image feature, the second image feature and the plurality of intermediate image features, image reconstruction is performed through an image generation model, and the reconstructed images are output by the image generation model.
  • Performing nonlinear interpolation on the image features obtained by encoding the two images can improve the authenticity of the interpolated intermediate image features and the similarity between the intermediate image features and the image features of the two original images, thereby improving the authenticity and aesthetics of the intermediate frames of the video and the similarity between the intermediate frames and the first and last frame images, which improves the video quality.
  • the image generation model is a generative adversarial network (GAN), so that the advantages of GAN in image generation are used to improve the image reconstruction quality of the image generation model.
  • the image generation model is a style-based architecture for GANs (style-based architecture for GANs, StyleGAN) model or a StyleGAN2 model. Therefore, by utilizing the advantages of the StyleGAN model or the StyleGAN2 model in image generation, the image reconstruction quality of the image generation model is improved, and the image frame quality of the target video is improved.
  • the feature space of the image generation model and the neural network can be used to assist the nonlinear interpolation.
  • FIG. 3a is a schematic flowchart of the video generation method provided by an embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 2. As shown in FIG. 3a, the video generation method includes:
  • S302: Generate a third image feature according to the first image feature and the second image feature, where the second image feature is an image feature of the second image.
  • the acquisition process of the second image feature may refer to the foregoing embodiments, and details are not repeated here.
  • an average value of the first image feature and the second image feature is determined, and the average value is the third image feature.
  • the feature values of corresponding positions on the first image feature and the second image feature may be added and then averaged to obtain the average value of the first image feature and the second image feature.
  • the first image feature and the second image feature are weighted and summed to obtain the third image feature.
  • weights respectively corresponding to the first image feature and the second image feature may be preset.
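  • A hedged sketch of generating the third image feature follows (the weight `alpha` is an illustrative assumption; `alpha = 0.5` reduces to the plain element-wise average described above):

```python
import numpy as np

def make_third_feature(w1: np.ndarray, w2: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Element-wise weighted sum of two latent codes; alpha=0.5 gives the plain average."""
    return alpha * w1 + (1.0 - alpha) * w2

w1 = np.random.randn(512)                           # first image feature (latent code)
w2 = np.random.randn(512)                           # second image feature
w3 = make_third_feature(w1, w2)                     # average of the two features
w3_weighted = make_third_feature(w1, w2, alpha=0.7) # with a preset weight
```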
  • the feature space of the image generation model can be understood as the input space of the image generation model, and the feature samples in the input space conform to a certain probability distribution.
  • the feature space of the image generation model is a latent space corresponding to the image generation model; the first image and the second image are encoded by an encoder, and the obtained image features are latent codes, that is, the first image feature may be called the first latent code, and the second image feature may be called the second latent code.
  • the third image features may be adjusted based on the feature samples in the feature space of the image generation model, so that the third image features are closer to the feature samples in the feature space, so that The image quality of the reconstructed image obtained by performing image reconstruction based on the third image feature is improved, that is, the image quality of the intermediate frame is improved.
  • the third image feature is adjusted again through the neural network model to improve the similarity between the third image feature and the first image feature and the second image feature.
  • the neural network needs to be trained so that the neural network can learn the deviation of image feature adjustment based on the feature space, and the specific training process refers to the subsequent embodiments.
  • the neural network is a fully connected neural network. Therefore, when the learning task of the neural network is single and the input data and output data are both image features, the accuracy of adjusting the third image feature is improved through the fully connected neural network with more network parameters. In a possible implementation manner, refer to FIG. 3b, which is a schematic flowchart of adjusting the third image feature (i.e., S303) sequentially based on the feature space of the image generation model and the neural network, provided by an embodiment of the present disclosure.
  • the process of adjusting the third image feature based on the feature space of the image generation model and the neural network in turn includes: S3031, obtain the average image feature in the feature space; S3032, perform a preliminary adjustment on the third image feature according to the average image feature; S3033, input the first image feature and the second image feature into the neural network to obtain the output data of the neural network, where the output data reflects the deviation of the preliminary adjustment; S3034, according to the output data, adjust the preliminarily adjusted third image feature again.
  • the output data of the neural network reflects the feature deviation generated after preliminary adjustment of the third image feature based on the average image feature of the feature space.
  • the average image feature in the feature space can be determined based on the probability distribution that the feature space conforms to.
  • the probability distribution that the feature space conforms to is, for example, a Gaussian distribution.
  • the average image feature is used to preliminarily adjust the third image feature, so that the third image feature is close to the average image feature, and the quality of the third image feature is improved.
  • input the first image feature and the second image feature to the neural network to obtain the output data of the neural network, which is also the image feature.
  • According to the output data, the preliminarily adjusted third image feature is adjusted again, so that the third image feature is close to the first image feature and the second image feature, and the similarity between the third image feature and the first image feature and the second image feature is improved.
  • the preliminary adjustment of the third image feature according to the average image feature includes: determining the average value of the third image feature and the average image feature, and determining the pre-adjusted third image feature as the average value. Therefore, by solving the mean value of the third image feature and the average image feature, the feature clipping (that is, preliminary adjustment) of the third image feature is realized.
  • readjusting the preliminarily adjusted third image feature includes: adding the output data to the preliminarily adjusted third image feature to obtain the readjusted third image feature. Therefore, by adding the feature deviation brought about by the preliminary adjustment process, as learned by the neural network, to the preliminarily adjusted third image feature, the similarity between the third image feature and the first image feature and the second image feature is improved.
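  • The two-stage adjustment described above can be sketched as follows. This is a minimal illustration: estimating the average image feature by averaging mapped Gaussian samples, and the toy mapping and deviation networks, are assumptions for illustration rather than the disclosure's exact implementation.

```python
import torch
import torch.nn as nn

def average_latent(mapping: nn.Module, latent_dim: int = 512, n_samples: int = 10000) -> torch.Tensor:
    """Estimate the average image feature of the feature space by averaging samples
    drawn from the Gaussian prior and passed through a mapping (an assumed approach)."""
    z = torch.randn(n_samples, latent_dim)
    with torch.no_grad():
        return mapping(z).mean(dim=0)

def adjust_third_feature(w3, w_avg, w1, w2, deviation_net: nn.Module):
    """Preliminary adjustment (feature clipping) followed by the learned re-adjustment."""
    w3_clipped = (w3 + w_avg) / 2.0                          # mean of w3 and the average feature
    deviation = deviation_net(torch.cat([w1, w2], dim=-1))   # output data of the neural network
    return w3_clipped + deviation                            # add the learned deviation back

# toy stand-ins: an identity "mapping" and a small fully connected deviation network
mapping = nn.Identity()
deviation_net = nn.Linear(1024, 512)
w1, w2 = torch.randn(512), torch.randn(512)
w3 = (w1 + w2) / 2.0
w_avg = average_latent(mapping)
w3_adjusted = adjust_third_feature(w3, w_avg, w1, w2, deviation_net)
```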
  • S304 Perform nonlinear interpolation according to the first image feature, the second image feature, and the adjusted third image feature to obtain multiple intermediate image features.
  • the first image feature, the second image feature and the third image feature are used as three known quantities; an interpolation curve is obtained through the nonlinear interpolation mode, and a plurality of intermediate image features are obtained by sampling on the interpolation curve. Therefore, in addition to the first image feature and the second image feature, the nonlinear interpolation process also uses the third image feature, which has higher quality and high similarity to the first image feature and the second image feature; this can effectively improve the accuracy of the nonlinear interpolation and the quality of the intermediate image features.
  • the nonlinear interpolation method adopts cubic spline interpolation.
  • S304 includes: obtaining an interpolation curve through cubic spline interpolation according to the first image feature, the second image feature and the third image feature; and sampling on the interpolation curve to obtain multiple intermediate image features. Therefore, the accuracy of the nonlinear interpolation is improved by using cubic spline interpolation, and the quality of the intermediate image features is improved.
  • the third image feature, the first image feature, and the second image feature may be input into cubic spline interpolation together to obtain an interpolation function, that is, an interpolation curve. Furthermore, sampling is performed on the interpolation curve to obtain multiple intermediate image features.
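  • A minimal sketch of this cubic spline interpolation and equal-interval sampling follows; the anchor positions (0, 0.5, 1) and the frame count are illustrative assumptions, not values given in the disclosure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_latents(w1, w3, w2, num_frames: int = 30) -> np.ndarray:
    """Fit a cubic spline through the three latent codes and sample it at equal intervals.
    Each latent dimension is interpolated independently along the curve parameter t."""
    anchors = np.stack([w1, w3, w2])           # shape (3, latent_dim)
    t_anchors = np.array([0.0, 0.5, 1.0])      # first, adjusted-middle, second
    spline = CubicSpline(t_anchors, anchors, axis=0)
    t_samples = np.linspace(0.0, 1.0, num_frames)  # equal-interval sampling on the curve
    return spline(t_samples)                   # shape (num_frames, latent_dim)

w1, w2 = np.random.randn(512), np.random.randn(512)
w3 = (w1 + w2) / 2.0                           # stand-in for the adjusted third feature
intermediate_features = interpolate_latents(w1, w3, w2)[1:-1]  # drop the two endpoints
```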
  • S305 Based on the first image feature, the second image feature and a plurality of intermediate image features, perform image reconstruction through the image generation model to generate a target video, wherein the target video is used to show the process of gradation from the first image to the second image.
  • the implementation principle and technical effect of S305 can refer to the foregoing embodiments, and will not be repeated here.
  • FIG. 4 is an example diagram of a framework of nonlinear interpolation based on feature space and neural network provided by an embodiment of the present disclosure.
  • hidden code 1, hidden code 2 and the average value are used for spline interpolation to obtain multiple interpolation results (that is, multiple intermediate image features).
  • image features may also be adjusted based on the feature space alone, that is, the characteristic deviation caused by the feature space adjustment is ignored.
  • In order to improve the nonlinear interpolation effect, the neural network needs to be trained in advance, so that the neural network can learn the deviation of image feature adjustment based on the feature space of the image generation model. Below, an example of neural network training is provided.
  • FIG. 5 is an exemplary flowchart of a model determination method provided by an embodiment of the present disclosure. As shown in Figure 5, the model determination method includes:
  • S501 Train a neural network according to a plurality of training images and an image generation model, and the neural network is used to learn a deviation for image feature adjustment based on the feature space of the image generation model.
  • S501 includes the following steps:
  • target image features according to the image features of the first training image and the image features of the second training image sign.
  • two training images may be obtained from multiple training images, and for the convenience of distinction, the two training images are respectively referred to as a first training image and a second training image.
  • the two training images may be encoded by an encoder to obtain image features of the first training image and image features of the second training image. Perform feature fusion processing on the image features of the first training image and the image features of the second training image to obtain target image features.
  • performing feature fusion processing on the image features of the first training image and the image features of the second training image to obtain the target image features includes: determining the average value of the image features of the first training image and the image features of the second training image, where the average value is the target image feature.
  • the image feature of the first training image and the feature value of the corresponding position on the image feature of the second training image may be added and averaged to obtain the average value.
  • the image features of the first training image and the image features of the second training image are weighted and summed to obtain the target image features. Wherein, weights respectively corresponding to the image features of the first training image and the image features of the second training image may be preset.
  • the average image feature in the feature space can be determined based on the probability distribution that the feature space conforms to.
  • the preliminary adjustment to the target image feature makes the target image feature close to the average image feature, and improves the quality of the target image feature.
  • the preliminary adjustment of the target image feature according to the average image feature includes: determining the mean value of the target image feature and the average image feature, and determining the target image feature after preliminary adjustment as the mean value. Therefore, by solving the mean value of the target image feature and the average image feature, the feature clipping (that is, preliminary adjustment) of the target image feature is realized.
  • the image features of the first training image and the image features of the second training image are input to the neural network to obtain the output data of the neural network, that is, the target deviation corresponding to the preliminary adjustment is obtained through learning.
  • the target image features after the preliminary adjustment are adjusted again, so that the target image features are close to the image features of the first training image and the image features of the second training image, that is, to improve the similarity between the target image features and the image features of the first training image and the second training image.
  • readjusting the pre-adjusted target image features according to the target deviation includes: adding the target deviation to the pre-adjusted target image features to obtain the re-adjusted target image features. Therefore, by adding the characteristic deviation generated in the preliminary adjustment process learned by the neural network to the initially adjusted target image features, the target image features and the image features of the first training image and the image features of the second training image are improved. Similarity of features.
  • the training error of the neural network can be determined based on the target deviation, the readjusted target image features, the first training image and the second training image, and the model parameters of the neural network can be adjusted based on the training error.
  • the training error is determined based on the difference between the readjusted target image feature and the image feature of the first training image, and/or the difference between the readjusted target image feature and the image feature of the second training image.
  • the neural network is trained based on a regularization constraint and a similarity constraint. The regularization constraint is used to minimize the difference between the image features adjusted based on the neural network and the image features adjusted based on the feature space; the similarity constraint is used to minimize the difference between the image features adjusted based on the neural network (that is, the readjusted target image features) and the image features of the first training image and the image features of the second training image.
  • S5014 includes: determining the target optimization function of the neural network through the regularization constraint and the similarity constraint; and adjusting the model parameters of the neural network based on the target optimization function, the target deviation, the readjusted target image features, the first training image and the second training image.
  • the objective optimization function of the neural network can be determined in advance according to regularization constraints and similarity constraints.
  • the function value of the target optimization function, that is, the training error of the neural network, is determined.
  • the model parameters of the neural network are optimized.
  • the optimization algorithm is, for example, a gradient descent algorithm.
  • the target image features can be input into the image generation model to obtain the intermediate reconstructed image (that is, the reconstructed image corresponding to the target image features), and then feature extraction is performed on the first training image, the second training image and the intermediate reconstructed image respectively through a feature extraction network, to obtain the image features of the first training image, the image features of the second training image, and the image features of the intermediate reconstructed image.
  • a face feature extraction network may be used to perform feature extraction on these images.
  • X1 and X2 respectively denote the first training image and the second training image
  • W1 denotes the image feature obtained after encoding the first training image
  • W2 denotes the image feature obtained after encoding the second training image
  • W3 represents the target image feature
  • f(·) represents the neural network
  • G(·) represents the image generation model
  • e(·) represents the feature extraction network
  • the remaining symbol denotes a preset parameter.
  • FIG. 6 is a schematic diagram of a neural network training framework provided by an embodiment of the present disclosure.
  • As shown in FIG. 6, the training process includes: first, determining the average value of hidden code 1 (the image feature obtained after input image 1 is encoded) and hidden code 2 (the image feature obtained after input image 2 is encoded); in the feature space of the image generation model, performing feature clipping (that is, the preliminary adjustment) on the average value to obtain the clipped average value; then, inputting hidden code 1 and hidden code 2 into the neural network, where the part of the training error corresponding to the regularization constraint can be determined according to the characteristic deviation output by the neural network; next, adding the characteristic deviation output by the neural network to the clipped average value and inputting the result into the image generation model to obtain the reconstructed image; finally, extracting the image features of the reconstructed image, input image 1 and input image 2 through the feature extraction network, so as to determine the part of the training error corresponding to the similarity constraint.
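  • Putting the pieces of this training framework together, one training step can be sketched as follows. The mean-squared-error losses, the weighting factor `lam`, and the toy stand-in networks are illustrative assumptions, not the disclosure's exact objective; only the deviation network's parameters are optimized here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def training_step(x1, x2, w1, w2, w_avg,
                  deviation_net, generator, feat_extractor, optimizer, lam=1.0):
    """One sketched training step of the deviation network in the Fig. 6 style."""
    w3 = (w1 + w2) / 2.0                        # target image feature: average of the two codes
    w3_clipped = (w3 + w_avg) / 2.0             # preliminary adjustment (feature clipping)
    deviation = deviation_net(torch.cat([w1, w2], dim=-1))
    w3_readjusted = w3_clipped + deviation      # add the learned deviation back

    reg_loss = deviation.pow(2).mean()          # regularization constraint: stay close to the
                                                # feature-space-only adjustment
    recon = generator(w3_readjusted)            # intermediate reconstructed image
    f_rec = feat_extractor(recon)
    sim_loss = (F.mse_loss(f_rec, feat_extractor(x1)) +
                F.mse_loss(f_rec, feat_extractor(x2)))  # similarity constraint

    loss = reg_loss + lam * sim_loss            # target optimization function (training error)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                            # gradient-descent update of deviation_net
    return loss.item()

# toy stand-ins so the sketch runs end to end
deviation_net = nn.Linear(1024, 512)
generator = nn.Linear(512, 3 * 64 * 64)         # placeholder "image generation model"
feat_extractor = nn.Linear(3 * 64 * 64, 128)    # placeholder "feature extraction network"
optimizer = torch.optim.Adam(deviation_net.parameters(), lr=1e-4)
w1, w2, w_avg = torch.randn(512), torch.randn(512), torch.zeros(512)
x1, x2 = torch.randn(3 * 64 * 64), torch.randn(3 * 64 * 64)
loss_value = training_step(x1, x2, w1, w2, w_avg,
                           deviation_net, generator, feat_extractor, optimizer)
```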
  • FIG. 7 is a structural block diagram of a video generation device provided in an embodiment of the present disclosure. For ease of description, only parts related to the embodiments of the present disclosure are shown. Referring to FIG. 7, the video generation device includes: an extraction unit 701, an interpolation unit 702 and a video generation unit 703.
  • the extraction unit 701 is used to extract the first image features from the first image; the interpolation unit 702 is used to obtain a plurality of intermediate image features through nonlinear interpolation according to the first image features and the second image features, and the second image features is the image feature of the second image; the video generation unit 703 is configured to perform image reconstruction through an image generation model based on the first image feature, the second image feature and a plurality of intermediate image features, and generate a target video, wherein the target video is used for Show the process of gradation from the first image to the second image.
  • the interpolation unit 702 is further configured to: generate a third image feature according to the first image feature and the second image feature; sequentially adjust the third image feature based on the feature space of the image generation model and the neural network, the neural network It is used to learn the deviation of image feature adjustment based on the feature space; according to the first image feature, the second image feature and the adjusted third image feature, nonlinear interpolation is performed to obtain multiple intermediate image features.
  • the interpolation unit 702 is further configured to: obtain the average image feature in the feature space; perform preliminary adjustment to the third image feature according to the average image feature; input the first image feature and the second image feature into the neural network , to obtain the output data of the neural network, the output data reflects the deviation of the initial adjustment; according to the output data, the third image feature after the initial adjustment is adjusted again.
  • the interpolation unit 702 is further configured to: determine the mean value of the third image feature and the average image feature; determine the preliminarily adjusted third image feature as the mean value.
  • the neural network is trained based on regularization constraints and similarity constraints, the regularization constraints are used to minimize the difference between the image features adjusted based on the neural network and the image features adjusted based on the feature space, and the similarity constraints are used The purpose is to minimize the difference between the image features adjusted based on the neural network, the image features of the first training image, and the image features of the second training image.
  • the interpolation unit 702 is further configured to: obtain an interpolation curve through cubic spline interpolation according to the first image feature, the second image feature and the third image feature; perform sampling on the interpolation curve to obtain multiple intermediate images feature.
  • the image generation model is a StyleGAN model or a StyleGAN2 model.
  • FIG. 8 is a structural block diagram of a model determination device provided in an embodiment of the present disclosure.
  • the model determination device includes: a training unit 801 .
  • the training unit 801 is configured to train a neural network according to a plurality of training images and an image generation model, and the neural network is used to learn deviations for image feature adjustment based on the feature space of the image generation model.
  • a training process of the neural network includes: generating target image features based on the image features of the first training image and the image features of the second training image; making preliminary adjustments to the target image features based on the feature space; learning, through the neural network, the target deviation corresponding to the preliminary adjustment, and readjusting the preliminarily adjusted target image features according to the target deviation; and adjusting the model parameters of the neural network according to the target deviation, the readjusted target image features, the first training image and the second training image.
  • the training unit 801 is further configured to: determine the target optimization function of the neural network through the regularization constraint and the similarity constraint; and adjust the model parameters of the neural network based on the target optimization function, the target deviation, the readjusted target image features, the first training image and the second training image; where the regularization constraint is used to minimize the difference between the readjusted target image features and the preliminarily adjusted target image features, and the similarity constraint is used to minimize the difference between the readjusted target image features, the image features of the first training image, and the image features of the second training image.
  • the model determination device provided in this embodiment can be used to execute the technical solutions of the above-mentioned embodiments related to the model determination method, and its implementation principle and technical effect are similar, so this embodiment will not repeat them here.
  • FIG. 9 shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure.
  • the electronic device 900 may be a terminal device or a server.
  • the terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP) and a vehicle-mounted terminal (such as a vehicle-mounted navigation terminal), and fixed terminals such as a digital TV and a desktop computer.
  • the electronic device 900 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 901, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903.
  • various programs and data necessary for the operation of the electronic device 900 are also stored.
  • the processing device 901, the ROM 902 and the RAM 903 are connected to each other through a bus 904.
  • an input/output (I/O) interface 905 is also connected to the bus 904.
  • the following devices can be connected to the I/O interface 905: an input device 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; an output device 907 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a storage device 908 including, for example, a magnetic tape and a hard disk; and a communication device 909.
  • the communication device 909 may allow the electronic device 900 to perform wireless or wired communication with other devices to exchange data. While FIG. 9 shows the electronic device 900 having various devices, it should be understood that implementing or possessing all of the illustrated devices is not a requirement; more or fewer devices may alternatively be implemented or provided.
  • the processes described above with reference to the flowcharts can be implemented as computer software programs.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 909 , or from storage means 908 , or from ROM 902 .
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof.
  • More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program codes therein.
  • the propagated data signal can take various forms, including but not limited to electromagnetic signal, optical signal or any suitable combination of the above.
  • the computer-readable signal medium can also be any computer-readable medium other than the computer-readable storage medium, and it can send, propagate or transmit the program for use by or in conjunction with the instruction execution system, apparatus or device.
  • the program code contained on the computer-readable medium can be transmitted by any appropriate medium, including but not limited to: wires, optical cables, radio frequency (radio frequency, RF), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be contained in the above-mentioned electronic device, or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is made to perform the methods shown in the above-mentioned embodiments.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer, or, alternatively, can be connected to an external computer (for example, via the Internet using an Internet service provider).
  • the flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by systems, methods and computer program products according to various embodiments of the present disclosure.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logic functions.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts can be implemented by a dedicated hardware-based system that performs specified functions or operations , or may be implemented by a combination of special purpose hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by means of software or by means of hardware.
  • the acquisition unit may also be described as "the unit for acquiring the target audio".
  • the functions described herein above may be performed at least in part by one or more hardware logic components.
  • exemplary types of hardware logic components include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by or in combination with an instruction execution system, device, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, Random Access Memory (RAM), Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • A method for generating a video is provided, including: extracting a first image feature from a first image; obtaining a plurality of intermediate image features through nonlinear interpolation according to the first image feature and a second image feature, where the second image feature is an image feature of a second image; and performing image reconstruction through an image generation model based on the first image feature, the second image feature and the plurality of intermediate image features to generate a target video, wherein the target video is used to show a process of gradual change from the first image to the second image.
  • obtaining a plurality of intermediate image features through nonlinear interpolation according to the first image feature and the second image feature includes: generating a third image feature according to the first image feature and the second image feature; sequentially adjusting the third image feature based on the feature space of the image generation model and a neural network, where the neural network is used to learn a deviation of image feature adjustment based on the feature space; and performing nonlinear interpolation according to the first image feature, the second image feature and the adjusted third image feature to obtain the plurality of intermediate image features.
  • adjusting the third image feature based on the feature space of the image generation model and the neural network in turn, where the neural network is used to learn the deviation of image feature adjustment based on the feature space, includes: obtaining an average image feature in the feature space; performing a preliminary adjustment on the third image feature according to the average image feature; inputting the first image feature and the second image feature into the neural network to obtain output data of the neural network, the output data reflecting the deviation of the preliminary adjustment; and readjusting the preliminarily adjusted third image feature according to the output data.
  • the performing a preliminary adjustment on the third image feature according to the average image feature includes: determining a mean of the third image feature and the average image feature, and determining the preliminarily adjusted third image feature to be that mean.
  • the neural network is trained based on a regularization constraint and a similarity constraint; the regularization constraint is used to minimize the difference between the image feature adjusted based on the neural network and the image feature adjusted based on the feature space, and the similarity constraint is used to minimize the difference between the image feature adjusted based on the neural network and the image features of a first training image and a second training image.
  • the performing nonlinear interpolation based on the first image feature, the second image feature, and the third image feature to obtain the plurality of intermediate image features includes: obtaining an interpolation curve by cubic spline interpolation according to the first image feature, the second image feature, and the third image feature; and sampling on the interpolation curve to obtain the plurality of intermediate image features.
  • the image generation model is a StyleGAN model or a StyleGAN2 model.
  • a model determination method, including: training a neural network according to a plurality of training images and an image generation model, where the neural network is used to learn a deviation of image feature adjustment performed based on the feature space of the image generation model.
  • one training process of the neural network includes: generating a target image feature according to the image feature of a first training image and the image feature of a second training image; performing a preliminary adjustment on the target image feature based on the feature space; learning, through the neural network, a target deviation corresponding to the preliminary adjustment, and readjusting the preliminarily adjusted target image feature according to the target deviation; and adjusting the model parameters of the neural network according to the target deviation, the readjusted target image feature, the first training image, and the second training image.
  • the adjusting the model parameters of the neural network according to the target deviation, the readjusted target image feature, the first training image, and the second training image includes: determining a target optimization function of the neural network through the regularization constraint and the similarity constraint; and adjusting the model parameters of the neural network based on the target optimization function, the target deviation, the readjusted target image feature, the first training image, and the second training image; wherein the regularization constraint is used to minimize the difference between the readjusted target image feature and the preliminarily adjusted target image feature, and the similarity constraint is used to minimize the difference between the readjusted target image feature and the image features of the first training image and the second training image (this objective is restated in symbolic form in the sketch following this list).
  • a video generation device, including: an extraction unit, configured to extract a first image feature from a first image; an interpolation unit, configured to obtain a plurality of intermediate image features through nonlinear interpolation according to the first image feature and a second image feature, the second image feature being an image feature of a second image; and a video generation unit, configured to perform image reconstruction through an image generation model based on the first image feature, the second image feature, and the plurality of intermediate image features to generate a target video, wherein the target video is used to show a process of gradual change from the first image to the second image.
  • a model determination device, including: a training unit, configured to train a neural network according to a plurality of training images and an image generation model, where the neural network is used to learn a deviation of image feature adjustment performed based on the feature space of the image generation model.
  • the training unit is configured to: generate a target image feature according to the image feature of a first training image and the image feature of a second training image; perform a preliminary adjustment on the target image feature based on the feature space; learn, through the neural network, a target deviation corresponding to the preliminary adjustment, and readjust the preliminarily adjusted target image feature according to the target deviation; and adjust the model parameters of the neural network according to the target deviation, the readjusted target image feature, the first training image, and the second training image.
  • an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video generation method described in the first aspect or the various possible designs of the first aspect, or performs the model determination method described in the second aspect or the various possible designs of the second aspect.
  • a computer-readable storage medium stores computer-executable instructions; when a processor executes the computer-executable instructions, the video generation method described in the first aspect or the various possible designs of the first aspect is implemented, or the model determination method described in the second aspect or the various possible designs of the second aspect is implemented.
  • a computer program product is provided, the computer program product including computer-executable instructions; when a processor executes the computer-executable instructions, the video generation method described in the first aspect or the various possible designs of the first aspect is implemented, or the model determination method described in the second aspect or the various possible designs of the second aspect is implemented.
  • a computer program is provided; when the computer program is executed by a processor, the video generation method described in the first aspect or the various possible designs of the first aspect is implemented, or the model determination method described in the second aspect or the various possible designs of the second aspect is implemented.
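
Stated symbolically, the target optimization function referred to above can be written as follows. This is only a restatement of the two constraints in clean notation, using the symbols defined later in the description: x1 and x2 are the two training images, w1 and w2 their encoded image features, w3 the target image feature, f the neural network, G the image generation model, φ the feature extraction network, and λ a preset weighting parameter.

```latex
\min_{f}\ \mathcal{L} \;=\;
  \underbrace{\bigl\|\phi\bigl(G(f(w_1,w_2)+w_3)\bigr)-\phi(x_1)\bigr\|_2
  \;+\;\bigl\|\phi\bigl(G(f(w_1,w_2)+w_3)\bigr)-\phi(x_2)\bigr\|_2}_{\text{similarity constraint}}
  \;+\;\underbrace{\lambda\,\bigl\|f(w_1,w_2)\bigr\|}_{\text{regularization constraint}}
```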

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Television Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present disclosure provide a video generation method and device. The method includes: extracting a first image feature from a first image; obtaining a plurality of intermediate image features through nonlinear interpolation according to the first image feature and a second image feature, where the second image feature is an image feature of a second image; and performing image reconstruction through an image generation model based on the first image feature, the second image feature, and the plurality of intermediate image features to generate a target video, where the target video is used to show a process of gradual change from the first image to the second image. Nonlinear interpolation thus improves the quality of the intermediate image features, which in turn improves the image quality of the intermediate frames of the target video, that is, the overall video quality of the target video.
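
As a minimal sketch of the pipeline summarized in the abstract, the Python fragment below shows one way the steps could be wired together. The encoder, generator, and interpolation routine are assumptions of this sketch (an image-to-latent encoder, a latent-to-image generator such as a StyleGAN2-style model, and a nonlinear interpolation scheme like the cubic-spline one described later), not fixed implementations from the disclosure.

```python
import torch

def generate_morph_video(img1, img2, encoder, generator, interpolate, num_frames=30):
    """Encode two images, obtain intermediate latent features by (nonlinear)
    interpolation, and decode every feature back into an image frame.

    `encoder`, `generator` and `interpolate` are supplied by the caller and
    are assumed to be pretrained / predefined; `interpolate(w1, w2, n)` should
    return n intermediate features ordered along the interpolation curve."""
    with torch.no_grad():
        w1 = encoder(img1)                    # first image feature
        w2 = encoder(img2)                    # second image feature
        intermediates = interpolate(w1, w2, num_frames - 2)
        features = [w1, *intermediates, w2]   # ordered along the interpolation curve
        frames = [generator(w) for w in features]
    # The ordered reconstructions form the target video: the first frame
    # reconstructs the first image and the last frame the second image.
    return frames
```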

Description

视频 生成 方法 及 设备 相关申请交叉引用 本申请要求于 2021年 12月 24日提交中国专利局、 申请号为 202111609441.8、 发明名称 为 “视频生成方法及设备” 的中国专利申请的优先权, 其全部内容通过引用并入本文。 技术领域 本公开实施例涉及计算机技术领域, 尤其涉及一种视频生成方法及设备。 背景技术 在 目前的计算机视觉技术、 深度学习技术中, 基于两张图像可以生成两张图像之间渐变 的视频, 例如, 向深度学习模型, 输入两张人脸图像, 生成两张人脸图像之间渐变的视频, 该视频中的视频帧从一张人脸图像渐变为另一张人脸图像。 该技术例如可以用于特效生成, 提高视频的趣味性。 在视频生成的过程中, 需要基于两张图像来生成视频中的多个视频帧, 进而得到该两张 图像之间渐变的视频。 然而, 目前基于两张图像生成多个视频帧的质量, 尤其是视频的中间 帧的图像质量有待提高。 发明内容 本公开实施例提供一种视频生成方法及设备, 以解决在基于数量较少的图像生成视频时 视频的中间帧的图像质量有待提高的问题。 第一方面, 本公开实施例提供一种视频生成方法, 包括: 在第一图像中, 提取第一图像特征; 根据所述第一图像特征和第二图像特征, 通过非线性插值得到多个中间图像特征, 所述 第二图像特征为第二图像的图像特征; 基于所述第一图像特征、 所述第二图像特征和所述多个中间图像特征, 通过图像生成模 型进行图像重建, 生成目标视频, 其中, 所述目标视频用于展现从所述第一图像渐变至所述 第二图像的过程。 第二方面, 本公开实施例提供一种模型确定方法, 包括: 根据多个训练图像和图像生成模型, 训练神经网络, 所述神经网络用于学习基于所述图 像生成模型的特征空间进行图像特征调整的偏差; 其中, 所述神经网络的一次训练过程包括: 根据第一训练图像的图像特征和第二训练图像的图像特征, 生成目标图像特征; 基于所述特征空间, 对所述目标图像特征进行初步调整; 通过所述神经网络学习所述初步调整对应的目标偏差, 并根据所述目标偏差, 对初步调 整后的目标图像特征进行再次调整; 根据 所述目标偏差、 再次调整后的目标图像特征、 所述第一训练图像和所述第二训 练图像, 调整所述神经网络的模型参数。 第三方面, 本公开实施例提供一种视频生成设备, 包括: 提取单元, 用于在第一图像中, 提取第一图像特征; 插值单元, 用于根据所述第一图像特征和第二图像特征, 通过非线性插值得到多个中间 图像特征, 所述第二图像特征为第二图像的图像特征; 视频生成单元, 用于基于所述第一图像特征、 所述第二图像特征和所述多个中间图像特 征, 通过图像生成模型进行图像重建, 生成目标视频, 其中, 所述目标视频用于展现从所述 第一图像渐变至所述第二图像的过程。 第四方面, 本公开实施例提供一种模型确定设备, 包括: 训练单元, 用于根据多个训练图像和图像生成模型, 训练神经网络, 所述神经网络用于 学习基于所述图像生成模型的特征空间进行图像特征调整的偏差; 其中, 所述神经网络的一次训练过程包括: 根据第一训练图像的图像特征和第二训练图像的图像特征, 生成目标图像特征; 基于所述特征空间, 对所述目标图像特征进行初步调整; 通过所述神经网络学习所述初步调整对应的目标偏差, 并根据所述目标偏差, 对初步调 整后的目标图像特征进行再次调整; 根据所述目标偏差、 再次调整后的目标图像特征、 所述第一训练图像和所述第二训练图 像, 调整所述神经网络的模型参数。 第五方面, 本公开实施例提供一种电子设备, 包括: 至少一个处理器和存储器; 所述存储器存储计算机执行指令; 所述至少一个处理器执行所述存储器存储的计算机执行指令, 使得所述至少一个处理器 执行如第一方面或第一方面各种可能的设计所述的视频生成方法, 或者, 使得所述至少一个 处理器执行如第二方面或第二方面各种可能的设计所述的模型确定方法。 第六方面, 本公开实施例提供一种计算机可读存储介质, 所述计算机可读存储介质中存 储有计算机执行指令, 当处理器执行所述计算机执行指令时, 实现如第一方面或第一方面各 种可能的设计所述的视频生成方法, 或者, 实现如第二方面或第二方面各种可能的设计所述 的模型确定方法。 第七方面, 根据本公开的一个或多个实施例, 提供了一种计算机程序产品, 所述计算机 程序产品包含计算机执行指令, 当处理器执行所述计算机执行指令时, 实现如第一方面或第 一方面各种可能的设计所述的视频生成方法, 或者, 实现如第二方面或第二方面各种可能的 设计所述的模型确定方法。 第八方面, 根据本公开的一个或多个实施例, 提供了一种计算机程序, 所述计算机程序 被处理器执行时, 实现如第一方面或第一方面各种可能的设计所述的视频生成方法, 或者, 实现如第二方面或第二方面各种可能的设计所述的模型确定方法。 本实施 例提供的视频生成方法及设备, 根据第一图像的第一图像特征和第二图像的 第二图像特征, 通过非线性插值得到多个中间图像特征, 基于第一图像特征、 第二图像 特征和多个中间图像特征 , 通过图像生成模型进行图像重建, 生成目标视频, 其中, 目 标视频用于展示从第一 图像渐变至第二图像的过程。 从而, 通过非线性插值方式, 提高 中间图像特征的质量, 在保证目标视频的中间帧与第一图像、 第二图像的相似度的基础 上, 提高目标视频的中间帧的图像质量, 进而提高目标视频的视频质量。 附图说明 为了更清楚地说明本公开实施例或现有技术中的技术方案, 下面将对实施例或现有技术 描述中所需要使用的附图作一简单地介绍, 显而易见地, 下面描述中的附图是本公开的一些 实施例, 对于本领域普通技术人员来讲, 在不付出创造性劳动性的前提下, 还可以根据这些 附图获得其他的附图。 图 1为本公开实施例适用的应用场景的示意图; 图 2为本公开实施例提供的视频生成方法的流程示意图一; 图 3a为本公开实施例提供的视频生成方法的流程示意图二; 图 3b为本公开实施例提供的依次基于图像生成模型的特征空间和神经网络调整第三图像 特征的流程示意图; 图 4为本公开实施例提供的基于特征空间和神经网络的非线性插值的框架示例图; 图 5为本公开实施例提供的模型确定方法的流程示意图; 图 6为本公开实施例提供的神经网络的训练框架示意图; 图 7为本公开实施例提供的视频生成设备的结构框图; 图 8为本公开实施例提供的模型确定设备的结构框图; 图 9为本公开实施例提供的电子设备的硬件结构示意图。 具体实施方式 为使本 公开实施例的目的、 技术方案和优点更加清楚, 下面将结合本公开实施例中 的附图, 对本公开实施例中的技术方案进行清楚、 完整地描述, 显然, 所描述的实施例 是本公开一部分实施例 , 而不是全部的实施例。 基于本公开中的实施例, 本领域普通技 术人员在没有作出创造性劳动 前提下所获得的所有其他实施例, 都属于本公开保护的范 围。 在生成 两张输入图像之间的渐变视频时, 通常的, 对两种输入图像的图像特征进行 线性插值, 得到中间图像特征, 利用中间图像特征生成视频的中间帧。 该方式可以保证 视频帧的连续性、 相似性, 但是线性插值后的中间图像特征往往不符合真实视频中视频 画面的图像特征的分布规律 (或变化规律), 导致中间帧的图像质量不佳, 美观性、 真实 性不足。 为解 决上述问题, 本公开实施例提供了一种视频生成方法及设备, 基于第一图像的 第一图像特征和第二 图像的第二图像特征, 通过非线性插值, 得到多个中间图像特征, 基于第一图像特征、 第二图像特征和多个中间图像特征, 通过图像生成模型进行图像重 建, 生成目标视频。 其中, 目标视频用于展现从第一图像渐变至第二图像的过程。 真实 的视频画面的变化过程为非线性 变化, 因此, 相较于线性插值方式, 本公开实施例采用 非线性插值, 提高了中间图像特征的质量, 提高了目标视频的中间帧的质量, 使得目标 视频的视频画面呈现非线性变化 , 更具真实性、 美观性。 其中, 提高中间图像特征的质 量包括: 提高中间图像特征的真实性、 提高中间图像与第一图像和第二图像的相似度。 提高目标视频的中间帧 的质量包括: 提高了中间帧的美观性和真实性、 提高了中间帧与 第一图像和第二图像的相似度。 参考 图 1 , 图 1为本公开实施例适用的应用场景的示意图。 如 图 1所示, 在该应用场景中, 涉及的设备包括视频生成设备 101 , 其中, 
视频生成 设备 101可以为终端或者服务器, 图 1 以视频生成设备 101为服务器为例。 在视频生成 设备 101 ± , 可对两张图像进行处理, 生成用于展示两张图像之间渐变效果的视频。 在一种实施例 中, 该应用场景涉及的设备还包括图像采集设备 102, 其中, 图像采集 设备 102 也可以为终端或者服务器, 例如, 终端采集用户输入的图像, 或者终端通过摄 像头采集当前场景下的 图像, 又如, 服务器从网络上采集在网络上公开并允许公众使用 的图像。 其中, 图 1 以图像采集设备 102为终端为例。 图像采集设备 102将采集的图像 发送至视频生成设备 101 ,由视频生成设备 101生成用于展示由采集的图像渐变至另一图 像 (来自图像采集设备 102 或者来自其他设备) 的视频, 或者另一图像渐变至采集的图 像的视频。 其 中, 视频生成设备 101与图像采集设备 102可以为相同或不同设备。 视频生成设备 101 与图像采集设备 102为相同设备时, 例如: 用户使用手机进行自 拍, 得到自拍头像, 并在手机上选中另一张图像; 手机基于用户的自拍头像和用户选中 的图像生成视频, 该视频的视频内容为从用户的自拍头像渐变至用户所选中的图像的过 程。 视频生成设备 101 与图像采集设备 102为不同设备时, 例如: 用户使用手机进行自 拍, 得到自拍头像, 并在手机上选中另一张图像; 手机将自拍图像和用户选中的图像发 送至服务器, 服务器生成视频并将该视频返回给手机, 该视频的视频内容为从用户的自 拍头像渐变至用户所选中的图像的过程。 其 中, 终端可以是个人数字处理 (personal digital assistant, PDA) 设备、 手持设备 (例如智能手机、 平板电脑)、 计算设备 (例如个人电脑 (personal computer, PC))、 车 载设备、 可穿戴设备 (例如智能手表、 智能手环)、 以及智能家居设备 (例如智能显示设 备) 等。 服务器可以是分布式服务器、 集中式服务器、 云服务器等。 下面 , 提供本公开的多个实施例。 其中, 本公开的多个实施例的执行主体可以为电 子设备, 电子设备可以为终端或者服务器。 参考 图 2, 图 2为本公开实施例提供的视频生成方法的流程示意图一。 如图 2所示, 该视频生成方法包括:
5201、 在第一图像中, 提取第一图像特征。 其 中, 第一图像可为用户输入的图像、 来自其他设备的图像或者当前执行设备拍摄 到的图像。 例如, 在当前执行设备为终端时, 终端可获取用户输入的第一图像, 或者获 取终端上的摄像头拍摄 的第一图像。 又如, 在当前执行设备为服务器时, 服务器可接收 终端发送的来自用户输入的第一图像。 其 中, 第一图像特征为第一图像的图像特征。 本实施 例中, 采用编码器对第一图像进行编码, 得到第一图像特征, 此时, 第一图 像特征具体指第一图像经过编码后得到的图像特征。
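
A minimal sketch of the encoding in step S201, assuming a pretrained image-to-latent encoder matched to the chosen image generation model; the preprocessing shown here (resize to 256x256, normalization to [-1, 1]) is only an assumption and should follow whatever the chosen encoder expects.

```python
import torch
from torchvision import transforms
from PIL import Image

# Assumed preprocessing; adapt to the encoder actually used.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

def extract_image_feature(image_path: str, encoder: torch.nn.Module) -> torch.Tensor:
    """Encode one image into its image feature (latent code), as in S201.
    `encoder` is assumed to be a pretrained image-to-latent network."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)          # add a batch dimension
    with torch.no_grad():
        w = encoder(x)                        # the image feature / latent code
    return w
```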
5202、根据第一图像特征和第二图像特征,通过非线性插值得到多个中间图像特征, 第二图像特征为第二图像的图像特征。 其 中, 第二图像与第一图像为不同图像。 其 中, 第二图像特征具体指第二图像经过编码后得到的图像特征。 一示例 中, 可预先存储多个图像和多个图像经过编码后得到的图像特征。 从存储的 多个图像的图像特征中 , 获取第二图像特征。 一种方式中, 可由用户在预先存储的多个 图像中指定第二图像, 从该多个图像的图像特征中, 获取第二图像的图像特征, 即第二 图像特征; 另一种方式中, 可按预设顺序 (例如图像存储顺序) 或者随机在多个图像的 图像特征中获取第二图像特征。 例如 : 响应于用户输入第一图像的操作, 在终端上显示多个供用户选择的图像; 用 户在多个图像中选择第二 图像, 并在终端上输入生成从第一图像渐变至第二图像的视频 的请求; 终端响应于该请求, 从预先存储的多个图像的图像特征中获取第二图像的图像 特征, 即第二图像特征。 又一示 例中, 可获取用户输入的、 其他设备发送的或者当前执行设备拍摄的第二图 像, 对第二图像进行编码, 得到第二图像特征。 本实施 例中, 在获得第二图像特征后, 将第一图像特征和第二图像特征作为非线性 插值过程中的两个巳知量, 采用预设的非线性插值方法, 进行非线性插值, 得到插值函 数, 即得到插值曲线。 在插值曲线上, 在第一图像特征所对应的点与第二图像特征所对 应的点之间进行采样, 得到多个中间图像特征。 其中, 中间图像特征用于生成视频的中 间帧。 在一种 实施例中, 在插值曲线上进行等间隔采样, 使得插值得到的相邻中间图像特 征之间的变化程度相近, 提高后续生成的视频的质量。
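
To make the sampling in step S202 concrete: given any interpolation function w(t) fitted through the known features, the intermediate features are read off at equally spaced parameter values between the two endpoints. The sketch below is a simplified illustration that uses plain linear interpolation as a stand-in curve; the spline-based nonlinear curve of the disclosure is sketched further below, and the feature dimension of 512 is an assumption.

```python
import numpy as np

def sample_curve(curve, num_intermediate):
    """Sample `num_intermediate` features at equal intervals on an
    interpolation curve.  `curve` maps a scalar t in [0, 1] to a feature
    vector; t = 0 and t = 1 correspond to the two input image features."""
    ts = np.linspace(0.0, 1.0, num_intermediate + 2)[1:-1]   # exclude the endpoints
    return [curve(t) for t in ts]

# Stand-in curve: linear interpolation between two feature vectors.
def linear_curve(w1, w2):
    return lambda t: (1.0 - t) * w1 + t * w2

w1 = np.random.randn(512)            # placeholder first image feature
w2 = np.random.randn(512)            # placeholder second image feature
intermediates = sample_curve(linear_curve(w1, w2), num_intermediate=28)
```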
S203、 基于第一图像特征、 第二图像特征和多个中间图像特征, 通过图像生成模型 进行图像重建, 生成目标视频, 其中, 目标视频用于展现从第一图像渐变至第二图像的 过程。 其 中, 图像生成模型可为用于图像生成或者图像重建的神经网络, 其输入数据为编 码后的图像特征, 其输出数据为重建图像。 可以采用网络上公开的训练好的图像生成模 型, 也可以通过训练数据 (包括多个训练图像) 对神经网络进行训练, 得到图像生成模 型, 对该模型的训练过程不做限制。 本实施 例中, 在得到多个中间图像特征后, 可将第一图像特征、 第二图像特征和多 个中间图像特征分别输入至 图像生成模型, 得到第一图像特征对应的重建图像、 第二图 像特征对应的重建图像和各 中间图像特征分别对应的重建图像。 可按照第一图像特征、 第二图像特征和中间 图像特征在插值曲线上的分布顺序, 对该多个重建图像进行排序组 合, 得到目标视频。其中, 在目标视频中, 第一帧图像为第一图像特征对应的重建图像, 最后一帧图像为第二图像特征对应的重建图像, 中间帧为中间图像特征对应的重建图像。 本公 开实施例中, 基于第一图像经编码得到的第一图像特征和第二图像经编码得到 的第二图像特征, 进行非线性插值, 得到多个中间图像特征, 基于第一图像特征、 第二 图像特征和多个中间 图像特征, 通过图像生成模型进行图像重建, 基于图像生成模型输 出重建图像。 因此, 利用对两个图像经编码得到的图像特征进行非线性插值的方式, 提高插值得 到的中间图像特征的真实性和 中间图像特征与两个原始图像的图像特征的相似度, 进而 提高视频的中间帧的真实性 、 美观性, 提高了中间帧与第一帧图像和最后一帧图像的相 似度, 提高了视频质量。 关于 图像生成模型, 有以下一些可选的实施例: 在一些实施例 中, 图像生成模型为生成式对抗网络 (generative adversarial networks, GAN), 从而, 利用 GAN在图像生成方面的优势, 提高图像生成模型的图像重建质量, 提高目标视频的图像帧的质量。 在一些实施例 中, 图像生成模型为风格生成式对抗网络 (style-based architecture for GANs, StyleGAN )模型或者 StyleGAN2模型。从而,利用 StyleGAN模型或者 StyleGAN2 模型在图像生成方面的优势, 提高图像生成模型的图像重建质量, 提高目标视频的图像 帧的质量。 关于 非线性插值过程, 在一些实施例中, 可采用图像生成模型的特征空间、 神经网 络来辅助非线性插值。 后续, 通过实施例对该辅助过程进行描述。 参照 图 3a, 图 3a为本公开实施例提供的视频生成方法的流程示例图二。 如图 3a所 示, 该视频生成方法包括:
5301、 在第一图像中, 提取第一图像特征。 其 中, S301的实现原理和技术效果可参照前述实施例, 不再赘述。
5302、 根据第一图像特征和第二图像特征, 生成第三图像特征, 第二图像特征为第 二图像的图像特征。 其 中, 第二图像特征的获取过程可参照前述实施例, 不再赘述。 一示例 中,确定第一图像特征与第二图像特征的平均值,该平均值即第三图像特征。 具体的, 可将第一图像特征与第二图像特征上相应位置的特征值相加后求平均, 得到第 一图像特征与第二图像特征的平均值。 又一示 例中, 对第一图像特征与第二图像特征进行加权求和, 得到第三图像特征。 其中, 可预先设置第一图像特征、 第二图像特征分别对应的权重。
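
Both variants of step S302, the element-wise average and the weighted sum of the two image features, reduce to a few lines; the following is a minimal sketch, with the weight `alpha` standing in for the preset weights mentioned above.

```python
import torch

def make_third_feature(w1: torch.Tensor, w2: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Generate the third image feature from the first and second features.

    alpha = 0.5 gives the element-wise mean described in the first example;
    any other preset weight gives the weighted-sum variant."""
    return alpha * w1 + (1.0 - alpha) * w2
```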
5303、 依次基于图像生成模型的特征空间和神经网络, 调整第三图像特征, 神经网 络用于学习基于特征空间进行图像特征调整的偏差。 其 中, 图像生成模型的特征空间可理解为图像生成模型的输入空间, 该输入空间中 的特征样本符合一定的概率分布。 在一种 实施例中, 图像生成模型为生成式对抗网络时, 图像生成模型的特征空间为 图像生成模型对应的隐空间 (latent space), 通过编码器对第一图像、 第二图像进行编码 所得到的图像特征为隐编码 (latent code), 即第一图像特征可以称为第一隐编码, 第二图 像特征可称为第二隐编码。 本实施 例中, 在得到第三图像特征后, 可先基于图像生成模型的特征空间中的特征 样本, 对第三图像特征进行调整, 使得第三图像特征更靠近特征空间中的特征样本, 以 提高基于第三图像特征进行 图像重建所得到的重建图像的图像质量, 即提高中间帧的图 像质量。 本实施 例中, 考虑到基于特征空间对第三图像特征进行的调整可能存在一定偏差, 使得第三图像特征与第一图像特征、第二图像特征的相似度下降, 因此,为解决该问题, 在基于特征空间对第三 图像特征进行调整后, 通过神经网络模型, 对第三图像特征进行 再次调整, 以提高第三图像特征与第一图像特征、 第二图像特征的相似度。 其 中, 需要对神经网络进行训练, 使得神经网络可以学习基于特征空间进行图像特 征调整的偏差, 具体训练过程参照后续实施例。 在一种实施例 中, 神经网络为全链接神经网络。从而, 在神经网络的学习任务单一、 输入数据和输出数据均为 图像特征的情况下, 通过网络参数较多的全链接神经网络, 提 高对第三图像特征进行调整的准确性。 在一种可 能的实现方式中, 参考图 3b, 图 3b为本公开实施例提供的依次基于图像生 成模型的特征空间和神经网络调整第三图像特征(即 S303)的流程示意图。如图 3b所示, 依次基于图像生成模型的特征空间和神经网络调整第三图像特征的过程 (即 S303的一种 可能的实现方式) 包括:
S3031、 获取特征空间中的平均图像特征; S3032、 根据平均图像特征, 对第三图像 特征进行初步调整; S3033、 将第一图像特征和第二图像特征, 输入神经网络, 得到神经 网络的输出数据, 输出数据反映初步调整的偏差; S3034、 根据输出数据, 对初步调整后 的第三图像特征进行再次调整。 其 中, 神经网络的输出数据反映基于特征空间的平均图像特征对第三图像特征进行 初步调整后产生的特征偏差。 本实施例 中,可基于特征空间所符合的概率分布,确定特征空间中的平均图像特征。 其中, 特征空间所符合的概率分布例如高斯分布。 在确定平均图像特征后, 利用平均图 像特征, 对第三图像特征的初步调整, 使得第三图像特征靠近该平均图像特征, 提高第 三图像特征的质量。 再将第一图像特征和第二图像特征输入至神经网络, 得到神经网络 的输出数据, 神经网络的输出数据也为图像特征。 基于神经网络的输出数据, 对初步调 整后的第三图像特征进行再次 调整, 以使得第三图像特征靠近第一图像特征和第二图像 特征, 提高第三图像特征与第一图像特征、 第二图像特征的相似度。 在一种 实施例中, 根据平均图像特征, 对第三图像特征进行初步调整, 包括: 确定 第三图像特征与平均图像特征的均值,确定初步调整后的第三图像特征为该均值。从而, 通过求解第三图像特征与平均 图像特征的均值的方式, 实现对第三图像特征的特征裁剪 (即初步调整)。 在一种 实施例中, 根据输出数据, 对初步调整后的第三图像特征进行再次调整, 包 括: 将输出数据与初步调整后的第三图像特征相加, 得到再次调整后的第三图像特征。 从而, 通过在初步调整后的第三图像特征上加上神经网络学习到的初步调整过程中所带 来的特征偏差的方式, 提高第三图像特征与第一图像特征、 第二图像特征的相似度。
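
Putting steps S3031 to S3034 together, the adjustment of the third image feature might look like the sketch below. The average latent `w_avg` is assumed to be obtained from the generation model's feature space (for a StyleGAN-style model it is commonly the average of many mapped latent samples), and `OffsetNet` is only one possible fully connected architecture for the network that learns the deviation of the preliminary adjustment; both are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class OffsetNet(nn.Module):
    """Fully connected network that learns, from the two input image
    features, the deviation caused by the preliminary adjustment."""
    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, w1, w2):
        return self.net(torch.cat([w1, w2], dim=-1))

def adjust_third_feature(w3, w1, w2, w_avg, offset_net):
    # S3031/S3032: preliminary adjustment, pulling the third feature toward
    # the average image feature of the generation model's feature space.
    w3_clipped = 0.5 * (w3 + w_avg)
    # S3033: the neural network outputs the deviation of that adjustment.
    deviation = offset_net(w1, w2)
    # S3034: readjust by adding the learned deviation back.
    return w3_clipped + deviation
```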
S304、 根据第一图像特征、 第二图像特征和调整后的第三图像特征, 进行非线性插 值, 得到多个中间图像特征。 本实施例 中,在得到第一图像特征、第二图像特征和最终调整后的第三图像特征后, 将第一图像特征、第二图像特征和第三图像特征作为三个巳知量,通过非线性插值方式, 得到插值曲线, 在插值曲线上采样得到多个中间图像特征。 从而, 除第一图像特征和第 二图像特征之外, 在非线性插值过程中还利用到了质量较高且与第一图像特征和第二图 像特征相似度较高的第三 图像特征, 有效地提高了非线性插值的准确性, 提高了中间图 像特征的质量。 在 一种可能 的实现方式 中, 非线性插值方式采 用三次样条 插值 ( cubic spline interpolation) o 此时, S304 包括: 根据第一图像特征、 第二图像特征和第三图像特征, 通过三次样条插值得到插值 曲线; 在插值曲线上进行采样, 得到多个中间图像特征。 从 而, 利用三次样条插值, 提高非线性插值的准确性, 提高中间图像特征的质量。 具体 的, 可将第三图像特征与第一图像特征、 第二图像特征一起输入至三次样条插 值中, 得到插值函数, 即得到插值曲线。 进而, 在插值曲线上进行采样, 得到多个中间 图像特征。
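
A minimal sketch of the cubic-spline variant of step S304, using `scipy.interpolate.CubicSpline`. The choice of library routine and the placement of the three known features at parameter values 0, 0.5 and 1 are assumptions of this sketch; the spline is fitted independently along each feature dimension and then sampled at equal intervals.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def cubic_spline_intermediates(w1, w2, w3_adjusted, num_intermediate):
    """Fit a cubic spline through the first, (adjusted) third and second
    image features and sample intermediate features on it (S304)."""
    knots = np.array([0.0, 0.5, 1.0])                        # assumed parameter positions
    values = np.stack([w1, w3_adjusted, w2], axis=0)         # shape (3, dim)
    spline = CubicSpline(knots, values, axis=0)
    ts = np.linspace(0.0, 1.0, num_intermediate + 2)[1:-1]   # equal-interval sampling
    return spline(ts)                                        # shape (num_intermediate, dim)

w1, w2, w3 = (np.random.randn(512) for _ in range(3))        # placeholder features
intermediates = cubic_spline_intermediates(w1, w2, w3, num_intermediate=28)
```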
S305、 基于第一图像特征、 第二图像特征和多个中间图像特征, 通过图像生成模型 进行图像重建, 生成目标视频, 其中, 目标视频用于展现从第一图像渐变至第二图像的 过程。 其 中, S305的实现原理和技术效果可参照前述实施例, 不再赘述。 本 公开实施例中, 基于第一图像经编码得到的第一图像特征和第二图像经编码得到 的第二图像特征,采用基于特征空间和神经网络的非线性插值,得到多个中间图像特征, 有效地提高了非线性插值 的准确性, 进而提高了中间图像特征的质量, 提高了视频的中 间帧的图像质量, 进而提高了视频质量。 作为示例 的, 参考图 4, 图 4为本公开实施例提供的基于特征空间和神经网络的非线 性插值的框架示例图。 如图 4所示, 先确定隐编码 1 (此时相当于第一图像特征)和隐编 码 2 (此时相当于第二图像特征) 的平均值 (此时相当于第三图像特征), 基于特征空间 对该平均值进行裁剪, 得到裁剪后的平均值 (此时相当于初步调整的第三图像特征); 接 着, 将隐编码 1和隐编码 2输入神经网络, 得到神经网络输出的特征偏差; 接着, 在裁 剪后的平均值上加上该特征偏 差 (此时相当于得到再次调整后的第三图像特征)。 如此, 最后将隐编码 1、 隐编码 2和该平均值用于样条插值, 得到多个插值结果(即多个中间图 像特征)。 需要说明的是, 上述实施例提供了结合特征空间和神经网络对图像特征进行调整的 方案, 在实际应用中, 也可以单独基于特征空间对图像特征进行调整, 即忽视特征空间 进行调整所带来的特征偏差。 在 一些实施例中, 为提高非线性插值效果, 需要预先对神经网络进行训练, 使得神 经网络能够学习到基于 图像生成模型的特征空间进行图像特征调整的偏差。 下面, 提供 神经网络训练的实施例。 需要说明的是, 神经网络的训练过程与前述实施例中的视频生成过程, 可以在同一 设备上执行, 也可以在不同设备上执行。 参照 图 5, 图 5为本公开实施例提供的模型确定方法的流程示例图。 如图 5所示, 该 模型确定方法包括:
S501、 根据多个训练图像和图像生成模型, 训练神经网络, 神经网络用于学习基于 图像生成模型的特征空间进行图像特征调整的偏差 。 其 中, 在神经网络的一次训练过程中, S501包括如下步骤:
S5011、 根据第一训练图像的图像特征和第二训练图像的图像特征, 生成目标图像特 征。 本实施 例中, 在每次训练过程中, 可从多个训练图像中获取两个训练图像, 为了便 于区分, 将两个训练图像分别称为第一训练图像和第二训练图像。 可通过编码器对两个 训练图像进行编码, 得到第一训练图像的图像特征和第二训练图像的图像特征。 对第一 训练图像的图像特征和第二训练图像的图像特征进行特征融合处理 , 得到目标图像特征。 一示例 中, 对第一训练图像的图像特征和第二训练图像的图像特征进行特征融合处 理, 得到目标图像特征, 包括: 确定第一训练图像的图像特征与第二训练图像的图像特 征的平均值, 该平均值即目标图像特征。 具体的, 可将第一训练图像的图像特征与第二 训练图像的图像特征上相应位置的特征值相加后求平均, 得到该平均值。 又一示例 中, 对第一训练图像的图像特征和第二训练图像的图像特征进行加权求和, 得到目标图像特征。 其中, 可预先设置第一训练图像的图像特征、 第二训练图像的图像 特征分别对应的权重。
S5012、 基于特征空间, 对目标图像特征进行初步调整。 本实施例 中,可基于特征空间所符合的概率分布,确定特征空间中的平均图像特征。 利用该平均图像特征, 对目标图像特征的初步调整, 使得目标图像特征靠近该平均图像 特征, 提高目标图像特征的质量。 在一种 实施例中, 根据平均图像特征, 对目标图像特征进行初步调整, 包括: 确定 目标图像特征与平均图像特征的均值,确定初步调整后的目标图像特征为该均值。从而, 通过求解目标图像特征与平均 图像特征的均值的方式, 实现对目标图像特征的特征裁剪 (即初步调整)。
S5013、 通过神经网络学习初步调整对应的目标偏差, 并根据目标偏差, 对初步调整 后的目标图像特征进行再次调整。 本实施 例中, 将第一训练图像的图像特征和第二训练图像的图像特征输入至神经网 络, 得到神经网络的输出数据, 即学习得到初步调整对应的目标偏差。 基于神经网络学 习得到的初步调整对应 的目标偏差, 对初步调整后的目标图像特征进行再次调整, 以使 得目标图像特征靠近第一训练 图像的图像特征和第二训练图像的图像特征, 即提高目标 图像特征与第一训练图像的图像特征、 第二训练图像的图像特征的相似度。 在一种 实施例中, 根据目标偏差, 对初步调整后的目标图像特征进行再次调整, 包 括: 将目标偏差与初步调整后的目标图像特征相加, 得到再次调整后的目标图像特征。 从而, 通过在初步调整后的目标图像特征上加上神经网络学习到的初步调整过程中所产 生的特征偏差的方式, 提高目标图像特征与第一训练图像的图像特征、 第二训练图像的 图像特征的相似度。
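
Combining steps S5011 to S5013 with the parameter update described in the next step, one training iteration might be sketched as follows. The encoder, the generator `G`, the feature extraction network `phi` (for example a face feature extractor when the training images are faces), and the average latent `w_avg` are assumed to be given and kept frozen; only the offset network `f` is updated, and `lam` stands for the preset weighting parameter of the regularization term. This is a sketch of the stated objective, not a definitive implementation.

```python
import torch

def train_step(f, optimizer, x1, x2, encoder, G, phi, w_avg, lam=0.01):
    """One training iteration of the offset network `f` (S5011-S5014).
    `encoder`, `G` and `phi` are frozen pretrained modules."""
    with torch.no_grad():
        w1 = encoder(x1)                       # feature of the first training image
        w2 = encoder(x2)                       # feature of the second training image
        w3 = 0.5 * (w1 + w2)                   # S5011: target image feature
        w3_clipped = 0.5 * (w3 + w_avg)        # S5012: preliminary adjustment

    deviation = f(w1, w2)                      # S5013: learned target deviation
    w3_readjusted = w3_clipped + deviation     # S5013: readjusted target feature

    recon = G(w3_readjusted)                   # intermediate reconstructed image

    # Similarity constraint: the reconstruction should stay close to both
    # training images in the feature-extraction space.
    sim_loss = (phi(recon) - phi(x1)).norm(dim=-1).mean() \
             + (phi(recon) - phi(x2)).norm(dim=-1).mean()
    # Regularization constraint: keep the learned deviation small, i.e. keep
    # the readjusted feature close to the preliminarily adjusted one.
    reg_loss = deviation.norm(dim=-1).mean()

    loss = sim_loss + lam * reg_loss           # S5014: target optimization function
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```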
S5014、根据目标偏差、再次调整后的目标图像特征、第一训练图像和第二训练图像, 调整神经网络的模型参数。 本实施 例中, 可基于目标偏差、 再次调整后的目标图像特征、 第一训练图像和第二 训练图像, 确定神经网络的训练误差, 基于该训练误差, 调整神经网络的模型参数。 例 如, 基于再次调整后的目标图像特征与第一训练图像的图像特征之间的差异、 和 /或再次 调整后的目标图像特征与第二训练图像的图像特征之间的差异 , 确定训练误差。 一示例 中, 神经网络基于正则约束和相似度约束训练得到, 正则约束用于最小化基 于神经网络调整后的图像特征与基 于特征空间调整后的图像特征 (即初步调整后的目标 图像特征) 之间的差异, 相似度约束用于最小化基于神经网络调整后的图像特征 (即再 次调整后的 目标图像特征) 与第一训练图像的图像特征、 第二训练图像的图像特征之间 的差异。 此时 , S5014包括: 通过正则约束和相似度约束, 确定神经网络的目标优化函数; 基 于目标优化函数、 目标偏差、 再次调整后的目标图像特征、 第一训练图像和第二训练图 像, 调整所述神经网络的模型参数。 具体 的, 可预先根据正则约束和相似度约束, 确定神经网络的目标优化函数。 在神 经网络的训练过程中, 基于目标偏差、 第一训练图像和第二训练图像, 确定目标优化函 数的函数值, 即神经网络的训练误差。 基于该训练误差, 对神经网络的模型参数进行优 化。 其中, 优化算法例如为梯度下降算法。 具体 的, 由于前述实施例所提到的图像特征均为编码后的图像特征。 为提高模型训 练的准确性, 可在得到再次调整后的目标图像特征后, 将目标图像特征分别输入图像生 成模型,得到中间重建图像 (即目标图像特征所对应的重建图像),再通过特征提取网络, 分别对第一训练图像、 第二训练图像和中间重建图像进行特征提取, 得到第一训练图像 的图像特征、 第二训练图像的图像特征、 中间重建图像的图像特征。 例如, 当第一训练 图像、 第二训练图像、 中间重建图像均为人脸图像时, 可采用人脸特征提取网络, 对这 些图像进行特征提取 。 接着, 确定中间重建图像的图像特征与第一训练图像的图像特征 (由特征提取网络所提取到的特征) 的差异、 中间重建图像的图像特征与第二训练图像 的图像特征 (由特征提取网络提取的特征) 的差异, 根据该两种差异以及根据神经网络 的输出数据, 确定训练误差。 一示例 中, 神经网络的目标优化函数可表示为: minL =忡 (G(f(W], W2) + w3)) - ① &)忙 +忡 (G(f(W], W2) + w3)) - ^(x2)||2
+ X||f(w1, w2)|| 其 中, X]、 X2分别表示第一训练图像、第二训练图像, W]表示第一训练图像编码后得 到的图像特征, W2表示第二训练图像编码后得到的图像特征, W3表示目标图像特征, f( ) 表示神经网络, G( )表示图像生成模型, e( )表示特征提取网络, 入为预设参数。 其 中, 忡 (G(f(W], W2) + w3)) - ① (X1)||2 + ||$(G(f(w1, w2) + w3)) - $(x2)||2为相似度 约束, 入 ||f(W], W2)||为正则约束。 如此 , 重复执行上述步骤, 对神经网络进行多次调整。 作为示例 的, 参考图 6, 图 6为本公开实施例提供的神经网络的训练框架示意图。 如 图 6所示, 训练过程包括: 先确定隐编码 1 (输入图像 1经编码后得到的图像特征) 与隐 编码 2(输入图像 2经编码后得到的图像特征)的平均值;基于图像生成模型的特征空间, 对该平均值进行特征裁剪 (即进行初步调整), 得到裁剪后的平均值; 接着, 将隐编码 1 和隐编码 2输入神经网络, 根据神经网络输出的特征偏差, 可以确定正则约束这部分的 训练误差; 接着, 在裁剪后的平均值上加上神经网络输出的特征偏差, 再将该平均值输 入图像生成模型, 得到重建图像; 最后, 通过特征提取网络确定该重建图像与输入图像 1 的特征差异、 该重建图像与输入图像 2 的特征差异, 基于该两种特征差异, 确定相似度 约束这部分的训练误差 。 如此, 基于正则约束这部分的训练误差和相似度约束这部分的 训练误差, 调整神经网络的模型参数。 对应于上文实施例 的视频生成方法, 图 7为本公开实施例提供的视频生成设备的结 构框图。 为了便于说明, 仅示出了与本公开实施例相关的部分。 参照图 7, 视频生成设备 包括: 提取单元 701和插值单元 702。 提取单元 701 , 用于在第一图像中, 提取第一图像特征; 插值单元 702, 用于根据第一图像特征和第二图像特征, 通过非线性插值得到多个中 间图像特征, 第二图像特征为第二图像的图像特征; 视频生成单元 703 , 用于基于第一图像特征、 第二图像特征和多个中间图像特征, 通 过图像生成模型进行 图像重建, 生成目标视频, 其中, 目标视频用于展现从第一图像渐 变至第二图像的过程。 在一些实施例 中, 插值单元 702还用于: 根据第一图像特征和第二图像特征, 生成 第三图像特征; 依次基于图像生成模型的特征空间和神经网络, 调整第三图像特征, 神 经网络用于学习基于特征空 间进行图像特征调整的偏差; 根据第一图像特征、 第二图像 特征和调整后的第三图像特征, 进行非线性插值, 得到多个中间图像特征。 在一些实施例 中, 插值单元 702还用于: 获取特征空间中的平均图像特征; 根据平 均图像特征, 对第三图像特征进行初步调整; 将第一图像特征和第二图像特征, 输入神 经网络, 得到神经网络的输出数据, 输出数据反映初步调整的偏差; 根据输出数据, 对 初步调整后的第三图像特征进行再次调整。 在一些实施例 中,插值单元 702还用于:确定第三图像特征与平均图像特征的均值; 确定初步调整后的第三图像特征为均值。 在一 些实施例中, 神经网络基于正则约束和相似度约束训练得到, 正则约束用于最 小化基于神经网络调整后的 图像特征与基于特征空间调整后的图像特征之间的差异, 相 似度约束用于最小化基于神经 网络调整后的图像特征与第一训练图像的图像特征、 第二 训练图像的图像特征之间的差异。 在一些实施例 中, 插值单元 702还用于: 根据第一图像特征、 第二图像特征和第三 图像特征, 通过三次样条插值得到插值曲线; 在插值曲线上进行采样, 得到多个中间图 像特征。 在一些实施例 中, 图像生成模型为 StyleGAN模型或者 StyleGAN2模型。 本实施 例提供的视频生成设备, 可用于执行上述与视频生成方法相关的实施例的技 术方案, 其实现原理和技术效果类似, 本实施例此处不再赘述。 对应于上文实施例 的模型确定方法, 图 8 为本公开实施例提供的模型确定设备的结 构框图。 为了便于说明, 仅示出了与本公开实施例相关的部分。 参照图 8 , 模型确定设备 包括: 训练单元 801。 训练单元 801 , 用于根据多个训练图像和图像生成模型, 训练神经网络, 神经网络用 于学习基于图像生成模型的特征空间进行图像特征调整的偏差 。 其 中, 神经网络的一次训练过程包括: 根据第一训练图像的图像特征和第二训练图 像的图像特征, 生成目标图像特征; 基于特征空间, 对目标图像特征进行初步调整; 通 过神经网络学习初步调整对应 的目标偏差, 并根据目标偏差, 对初步调整后的目标图像 特征进行再次调整 ; 根据目标偏差、 再次调整后的目标图像特征、 第一训练图像和第二 训练图像, 调整神经网络的模型参数。 在 一些实施例中, 训练单元 801 还用于: 通过正则约束和相似度约束, 确定神经网 络的目标优化函数 ; 基于目标优化函数、 目标偏差、 再次调整后的目标图像特征、 第一 训练图像和第二训练 图像, 调整神经网络的模型参数; 其中, 正则约束用于最小化再次 调整后的 目标图像特征与初步调整后的目标图像特征之间的差异, 相似度约束用于最小 化再次调整后的 目标图像特征与第一训练图像的图像特征、 第二训练图像的图像特征之 间的差异。 本 实施例提供的模型确定设备, 可用于执行上述与模型确定方法相关的实施例的技 术方案, 其实现原理和技术效果类似, 本实施例此处不再赘述。 参考 图 9, 其示出了适于用来实现本公开实施例的电子设备 900的结构示意图, 该电 子设备 900可以为终端设备或服务器。其中,终端设备可以包括但不限于诸如移动电话、 笔记本电脑、 数字广播接收器、 个人数字助理 (personal digital assistant, PDA)、 平板电 脑 (portable android device, PAD)、便携式多媒体播放器 (portable media player, PMP)、 车载终端 (例如车载导航终端) 等等的移动终端以及诸如数字 TV、 台式计算机等等的固 定终端。 图 9 示出的电子设备仅仅是一个示例, 不应对本公开实施例的功能和使用范围 带来任何限制。 如图 9所示, 电子设备 900可以包括处理装置 (例如中央处理器、 图形处理器等) 901 , 其可以根据存储在只读存储器 (read only memory, ROM) 902中的程序或者从存储 装置 908加载到随机访问存储器 ( random access memory, RAM) 903中的程序而执行各 种适当的动作和处理。 在 RAM 903中, 还存储有电子设备 900操作所需的各种程序和数 据。处理装置 90KROM 902以及 RAM 903通过总线 904彼此相连。输入 /输出 (input/output, I/O) 接口 905也连接至总线 904 o 通 常, 以下装置可以连接至 I/O接口 905: 包括例如触摸屏、 触摸板、 键盘、 鼠标、 摄像头、麦克风、加速度计、陀螺仪等的输入装置 906;包括例如液晶显示器 (Liquid Crystal Display, LCD)、 扬声器、 振动器等的输出装置 907; 包括例如磁带、 硬盘等的存储装置 908; 以及通信装置 909。 通信装置 909可以允许电子设备 900与其他设备进行无线或有 线通信以交换数据。 虽然图 9示出了具有各种装置的电子设备 900, 但是应理解的是, 并 不要求实施或具备所有示出的装置。 可以替代地实施或具备更多或更少的装置。 特 别地, 根据本公开的实施例, 上文参考流程图描述的过程可以被实现为计算机软 件程序。 例如, 本公开的实施例包括一种计算机程序产品, 其包括承载在计算机可读介 质上的计算机程序 , 该计算机程序包含用于执行流程图所示的方法的程序代码。 在这样 的实施例中, 该计算机程序可以通过通信装置 909 从网络上被下载和安装, 或者从存储 装置 908被安装, 或者从 ROM 902被安装。 在该计算机程序被处理装置 901执行时, 执 行本公开实施例的方法中限定的上述功能 。 需要说明的是, 本公开上述的计算机可读介质可以是计算机可读信号介质或者计算 机可读存储介质或者是上述 
两者的任意组合。 计算机可读存储介质例如可以是一一但不 限于一一电、 磁、 光、 电磁、 红外线、 或半导体的系统、 装置或器件, 或者任意以上的 组合。 计算机可读存储介质的更具体的例子可以包括但不限于: 具有一个或多个导线的 电连接、 便携式计算机磁盘、 硬盘、 随机访问存储器 (RAM)、 只读存储器 (ROM)、 可 擦除可编程只读存储器 ( erasable programmable read-only memory, EPROM)、 光纤、 便 携式紧凑磁盘只读存储器 ( compact disc read-only memory, CD-ROM 光存储器件、 磁 存储器件、 或者上述的任意合适的组合。 在本公开中, 计算机可读存储介质可以是任何 包含或存储程序的有形介质 , 该程序可以被指令执行系统、 装置或者器件使用或者与其 结合使用。 而在本公开中, 计算机可读信号介质可以包括在基带中或者作为载波一部分 传播的数据信号, 其中承载了计算机可读的程序代码。 这种传播的数据信号可以采用多 种形式, 包括但不限于电磁信号、 光信号或上述的任意合适的组合。 计算机可读信号介 质还可以是计算机可读存储介质 以外的任何计算机可读介质, 该计算机可读信号介质可 以发送、 传播或者传输用于由指令执行系统、 装置或者器件使用或者与其结合使用的程 序。 计算机可读介质上包含的程序代码可以用任何适当的介质传输, 包括但不限于: 电 线、 光缆、 射频 ( radio frequency , RF) 等等, 或者上述的任意合适的组合。 上述计 算机可读介质可以是上述电子设备中所包含的; 也可以是单独存在, 而未装 配入该电子设备中。 上述计 算机可读介质承载有一个或者多个程序, 当上述一个或者多个程序被该电子 设备执行时, 使得该电子设备执行上述实施例所示的方法。 可 以以一种或多种程序设计语言或其组合来编写用于执行本公开 的操作的计算机程 序代码, 上述程序设计语言包括面向对象的程序设计语言一诸如 Java、 Smalltalk, C++, 还包括常规的过程式程序设计语言一诸如 “ C”语言或类似的程序设计语言。 程序代码可 以完全地在用户计算机上执行 、 部分地在用户计算机上执行、 作为一个独立的软件包执 行、 部分在用户计算机上部分在远程计算机上执行、 或者完全在远程计算机或服务器上 执行。 在涉及远程计算机的情形中, 远程计算机可以通过任意种类的网络一一包括局域 网 ( local area network, LAN) 或广域网 ( wide area network, WAN) 一连接到用户计算 机, 或者, 可以连接到外部计算机 (例如利用因特网服务提供商来通过因特网连接)。 附图中的流程图和框图, 图示了按照本公开各种实施例的系统、 方法和计算机程序 产品及计算机程序的可能实现 的体系架构、 功能和操作。 在这点上, 流程图或框图中的 每个方框可以代表一个模块、 程序段、 或代码的一部分, 该模块、 程序段、 或代码的一 部分包含一个或多个用于实现规 定的逻辑功能的可执行指令。 也应当注意, 在有些作为 替换的实现中, 方框中所标注的功能也可以以不同于附图中所标注的顺序发生。 例如, 两个接连地表示的方框实际上可以基本并行地执行, 它们有时也可以按相反的顺序执行, 这依所涉及的功能而 定。 也要注意的是, 框图和 /或流程图中的每个方框、 以及框图和 / 或流程图中的方框的组合, 可以用执行规定的功能或操作的专用的基于硬件的系统来实 现, 或者可以用专用硬件与计算机指令的组合来实现。 描述 于本公开实施例中所涉及到的单元可以通过软件的方式实现 , 也可以通过硬件 的方式来实现。 其中, 单元的名称在某种情况下并不构成对该单元本身的限定, 例如, 获取单元还可以被描述为 “获取目标音频的单元”。 本文 中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执 行。 例如, 非限 制性地 , 可以使用的示 范类型 的硬件逻 辑部件 包括: 现场可编程门阵 列 (field-programmable gate array, FPGA) 、 专用集成电路 ( application specific integrated circuit, ASIC) 、 专用标准产品 (application specific standard parts, ASSP) 、 片上系统 (system on chip, SOC)、复杂可编程逻辑设备 ( complex programmable logic device, CPLD) 等等。 在本 公开的上下文中, 机器可读介质可以是有形的介质, 其可以包含或存储以供指 令执行系统、 装置或设备使用或与指令执行系统、 装置或设备结合地使用的程序。 机器 可读介质可以是机器可读信号介 质或机器可读储存介质。 机器可读介质可以包括但不限 于电子的、 磁性的、 光学的、 电磁的、 红外的、 或半导体系统、 装置或设备, 或者上述 内容的任何合适组合。 机器可读存储介质的更具体示例会包括基于一个或多个线的电气 连接、 便携式计算机盘、 硬盘、 随机存取存储器 (RAM)、 只读存储器 (ROM)、 可擦除 可编程只读存储器 (EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器 ( CD-ROM), 光学储存设备、 磁储存设备、 或上述内容的任何合适组合。 第一 方面, 根据本公开的一个或多个实施例, 提供了一种视频生成方法, 包括: 在 第一图像中, 提取第一图像特征; 根据所述第一图像特征和第二图像特征, 通过非线性 插值得到多个中间图像特征 , 所述第二图像特征为第二图像的图像特征; 基于所述第一 图像特征、 所述第二图像特征和所述多个中间图像特征, 通过图像生成模型进行图像重 建, 生成目标视频, 其中, 所述目标视频用于展现从所述第一图像渐变至所述第二图像 的过程。 根据本 公开的一个或多个实施例, 所述根据所述第一图像特征和第二图像特征, 通 过非线性插值, 得到多个中间图像特征, 包括: 根据所述第一图像特征和所述第二图像 特征, 生成第三图像特征; 依次基于所述图像生成模型的特征空间和神经网络, 调整所 述第三图像特征, 所述神经网络用于学习基于所述特征空间进行图像特征调整的偏差; 根据所述第一图像特征、所述第二图像特征和调整后的第三图像特征,进行非线性插值, 得到所述多个中间图像特征。 根据本 公开的一个或多个实施例, 所述依次基于所述图像生成模型的特征空间和神 经网络, 调整所述第三图像特征, 所述神经网络用于学习基于所述特征空间进行图像特 征调整的偏差, 包括: 获取所述特征空间中的平均图像特征; 根据所述平均图像特征, 对所述第三图像特征进行初步 调整; 将所述第一图像特征和所述第二图像特征, 输入所 述神经网络, 得到所述神经网络的输出数据, 所述输出数据反映所述初步调整的偏差; 根据所述输出数据, 对初步调整后的第三图像特征进行再次调整。 根据本 公开的一个或多个实施例, 所述根据所述平均图像特征, 对所述第三图像特 征进行初步调整, 包括: 确定所述第三图像特征与所述平均图像特征的均值; 确定所述 初步调整后的第三图像特征为所述均值。 根据本 公开的一个或多个实施例, 所述神经网络基于正则约束和相似度约束训练得 到, 所述正则约束用于最小化基于所述神经网络调整后的图像特征与基于所述特征空间 调整后的图像特征之间 的差异, 所述相似度约束用于最小化基于所述神经网络调整后的 图像特征与第一训练图像的图像特征、 第二训练图像的图像特征之间的差异。 根据本 公开的一个或多个实施例, 所述基于所述第一图像特征、 所述第二图像特征 和所述第三图像特征, 进行非线性插值, 得到所述多个中间图像特征, 包括: 根据所述 第一图像特征、 所述第二图像特征和所述第三图像特征, 通过三次样条插值得到插值曲 线; 在所述插值曲线上进行采样, 得到所述多个中间图像特征。 根 据本公开 的一个或多个 实施例, 所述图像生成模型为 StyleGAN 模型或者 StyleGAN2模型。 第二 方面, 根据本公开的一个或多个实施例, 提供了一种模型确定方法, 包括: 根 据多个训练图像和图像生成模 型, 训练神经网络, 所述神经网络用于学习基于所述图像 生成模型的特征空间进行 图像特征调整的偏差。 其中, 所述神经网络的一次训练过程包 括: 
根据第一训练图像的图像特征和第二训练图像的图像特征, 生成目标图像特征; 基 于所述特征空间, 对所述目标图像特征进行初步调整; 通过所述神经网络学习所述初步 调整对应的 目标偏差, 并根据所述目标偏差, 对初步调整后的目标图像特征进行再次调 整; 根据所述目标偏差、 再次调整后的目标图像特征、 所述第一训练图像和所述第二训 练图像, 调整所述神经网络的模型参数。 根据本 公开的一个或多个实施例, 所述根据所述目标偏差、 再次调整后的目标图像 特征和所述第一训练 图像和所述第二训练图像, 调整所述神经网络的模型参数, 包括: 通过正则约束和相似度约束 , 确定所述神经网络的目标优化函数; 基于所述目标优化函 数、 所述目标偏差、 所述再次调整后的目标图像特征、 所述第一训练图像和所述第二训 练图像, 调整所述神经网络的模型参数; 其中, 所述正则约束用于最小化所述再次调整 后的目标图像特征与所述初步 调整后的目标图像特征之间的差异, 所述相似度约束用于 最小化所述再次调整后 的目标图像特征与所述第一训练图像的图像特征、 所述第二训练 图像的图像特征之间的差异。 第三 方面, 根据本公开的一个或多个实施例, 提供一种视频生成设备, 包括: 提取 单元, 用于在第一图像中, 提取第一图像特征; 插值单元, 用于根据所述第一图像特征 和第二图像特征, 通过非线性插值得到多个中间图像特征, 所述第二图像特征为第二图 像的图像特征; 视频生成单元, 用于基于所述第一图像特征、 所述第二图像特征和所述 多个中间图像特征, 通过图像生成模型进行图像重建, 生成目标视频, 其中, 所述目标 视频用于展现从所述第一图像渐变至所述第二图像的过程 。 第 四方面, 根据本公开的一个或多个实施例, 提供一种模型确定设备, 包括: 训练 单元, 用于根据多个训练图像和图像生成模型, 训练神经网络, 所述神经网络用于学习 基于所述图像生成模型 的特征空间进行图像特征调整的偏差。 其中, 在所述神经网络的 一次训练过程中, 训练模块用于: 根据第一训练图像的图像特征和第二训练图像的图像 特征, 生成目标图像特征; 基于所述特征空间, 对所述目标图像特征进行初步调整; 通 过所述神经网络学习所述初步调整对 应的目标偏差, 并根据所述目标偏差, 对初步调整 后的目标图像特征进行再次调整 ; 根据所述目标偏差、 再次调整后的目标图像特征、 所 述第一训练图像和所述第二训练图像, 调整所述神经网络的模型参数。 第五 方面, 根据本公开的一个或多个实施例, 提供了一种电子设备, 包括: 至少一 个处理器和存储器; 所述存储器存储计 算机执行指令; 所述 至少一个处理器执行所述存储器存储的计算机执行指令, 使得所述至少一个处 理器执行如上第一方面或第一 方面各种可能的设计所述的视频生成方法, 或者, 使得所 述至少一个处理器执行如上第二方面或第二方面各种可能的设计所述 的模型确定方法。 第六 方面, 根据本公开的一个或多个实施例, 提供了一种计算机可读存储介质, 所 述计算机可读存储介质中存储有计算机执行指令, 当处理器执行所述计算机执行指令时, 实现如上第一方面或第一方面 各种可能的设计所述的视频生成方法, 或者, 实现如上第 二方面或第二方面各种可能的设计所述的模型确定方法。 第七 方面, 根据本公开的一个或多个实施例, 提供了一种计算机程序产品, 所述计 算机程序产品包含计算机执行 指令, 当处理器执行所述计算机执行指令时, 实现如第一 方面或第一方面各种可能 的设计所述的视频生成方法, 或者, 实现如第二方面或第二方 面各种可能的设计所述的模型确定方法。 第八方面 , 根据本公开的一个或多个实施例, 提供了一种计算机程序, 所述计算机程 序被处理器执行时,实现如第一方面或第一方面各种可能的设计所述的视频生成方法,或者, 实现如第二方面或第二方面各种可能的设计所述的模型确定方法。 以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。 本领域技术人员 应当理解, 本公开中所涉及的公开范围, 并不限于上述技术特征的特定组合而成的技术 方案, 同时也应涵盖在不脱离上述公开构思的情况下, 由上述技术特征或其等同特征进 行任意组合而形成的其它技术 方案。 例如上述特征与本公开中公开的 (但不限于) 具有 类似功能的技术特征进行互相替换而形成的技术方案 。 此外 , 虽然采用特定次序描绘了各操作, 但是这不应当理解为要求这些操作以所示 出的特定次序或以顺序次序来执 行。 在一定环境下, 多任务和并行处理可能是有利的。 同样地, 虽然在上面论述中包含了若干具体实现细节, 但是这些不应当被解释为对本公 开的范围的限制。 在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个 实施例中。 相反地, 在单个实施例的上下文中描述的各种特征也可以单独地或以任何合 适的子组合的方式实现在多个实施例中。 尽管 巳经采用特定于结构特征和 /或方法逻辑动作的语言描述了本主题, 但是应当理 解所附权利要求书中所限定 的主题未必局限于上面描述的特定特征或动作。 相反, 上面 所描述的特定特征和动作仅仅是实现权利要求书的示例形式 。

Claims

1、 一种视频生成方法, 包括: 在第一 图像中, 提取第一图像特征; 根据所述第一 图像特征和第二图像特征, 通过非线性插值得到多个中间图像特征, 所述第二图像特征为第二图像的图像特征; 基于所述第一 图像特征、 所述第二图像特征和所述多个中间图像特征, 通过图像生 成模型进行图像重建, 生成目标视频, 其中, 所述目标视频用于展现从所述第一图像渐 变至所述第二图像的过程。
2、 根据权利要求 1所述的视频生成方法, 所述根据所述第一图像特征和第二图像特 征, 通过非线性插值得到多个中间图像特征, 包括: 根据所述第一 图像特征和所述第二图像特征, 生成第三图像特征; 依次基于所述 图像生成模型的特征空间和神经网络, 调整所述第三图像特征, 所述 神经网络用于学习基于所述特征空间进行图像特征调整的偏差; 根据所述第一 图像特征、 所述第二图像特征和调整后的第三图像特征, 进行非线性 插值, 得到所述多个中间图像特征。
3、 根据权利要求 2所述的视频生成方法, 所述依次基于所述图像生成模型的特征空 间和神经网络, 调整所述第三图像特征, 所述神经网络用于学习基于所述特征空间进行 图像特征调整的偏差, 包括: 获取所述特征 空间中的平均图像特征; 根据所述平均 图像特征, 对所述第三图像特征进行初步调整; 将所述第一 图像特征和所述第二图像特征, 输入所述神经网络, 得到所述神经网络 的输出数据, 所述输出数据反映所述初步调整的偏差; 根据所述输 出数据, 对初步调整后的第三图像特征进行再次调整。
4、 根据权利要求 3所述的视频生成方法, 所述根据所述平均图像特征, 对所述第三 图像特征进行初步调整, 包括: 确定所述第三 图像特征与所述平均图像特征的均值; 确定所述初步调整后 的第三图像特征为所述均值。
5、 根据权利要求 2至 4任一项所述的视频生成方法, 所述神经网络基于正则约束和 相似度约束训练得到, 所述正则约束用于最小化基于所述神经网络调整后的图像特征与 基于所述特征空间调整后的图像特征之间的差异, 所述相似度约束用于最小化基于所述 神经网络调整后的图像特征与第一训练图像的图像特征、 第二训练图像的图像特征之间 的差异。
6、 根据权利要求 2至 5任一项所述的视频生成方法, 所述根据所述第一图像特征、 所述第二图像特征和调整后的第三图像特征, 进行非线性插值, 得到所述多个中间图像 特征, 包括: 根据所述第一 图像特征、 所述第二图像特征和所述第三图像特征, 通过三次样条插 值得到插值曲线; 在所述插值 曲线上进行采样, 得到所述多个中间图像特征。
7、根据权利要求 1至 6任一项所述的视频生成方法,所述图像生成模型为 StyleGAN 模型或者 StyleGAN2模型。
8、 一种模型确定方法, 包括: 根据 多个训练图像和图像生成模型, 训练神经网络, 所述神经网络用于学习基于所 述图像生成模型的特征空间进行图像特征调整的偏差; 其 中, 所述神经网络的一次训练过程包括: 根据第一训练 图像的图像特征和第二训练图像的图像特征, 生成目标图像特征; 基于所述特征 空间, 对所述目标图像特征进行初步调整; 通过所述神经 网络学习所述初步调整对应的目标偏差, 并根据所述目标偏差, 对初 步调整后的目标图像特征进行再次调整; 根据所述 目标偏差、 再次调整后的目标图像特征、 所述第一训练图像和所述第二训 练图像, 调整所述神经网络的模型参数。
9、 根据权利要求 8所述的模型确定方法, 其特征在于, 所述根据所述目标偏差、 再 次调整后的目标图像特征、 所述第一训练图像和所述第二训练图像, 调整所述神经网络 的模型参数, 包括: 通过正则约束和相 似度约束, 确定所述神经网络的目标优化函数; 基于所述 目标优化函数、 所述目标偏差、 所述再次调整后的目标图像特征、 所述第 一训练图像和所述第二训练图像, 调整所述神经网络的模型参数; 其 中, 所述正则约束用于最小化所述再次调整后的目标图像特征与所述初步调整后 的目标图像特征之间的差异, 所述相似度约束用于最小化所述再次调整后的目标图像特 征与所述第一训练图像的图像特征、 所述第二训练图像的图像特征之间的差异。
10、 一种视频生成设备, 包括: 提取单元 , 用于在第一图像中, 提取第一图像特征; 插值单元 , 用于根据所述第一图像特征和第二图像特征, 通过非线性插值得到多个 中间图像特征, 所述第二图像特征为第二图像的图像特征; 视频生成单元 , 用于基于所述第一图像特征、 所述第二图像特征和所述多个中间图 像特征, 通过图像生成模型进行图像重建, 生成目标视频, 其中, 所述目标视频用于展 现从所述第一图像渐变至所述第二图像的过程。
11、 一种模型确定设备, 包括: 训练单元 , 用于根据多个训练图像和图像生成模型, 训练神经网络, 所述神经网络 用于学习基于所述图像生成模型的特征空间进行图像特征调整 的偏差; 其 中, 所述神经网络的一次训练过程包括: 根据第一训练 图像的图像特征和第二训练图像的图像特征, 生成目标图像特征; 基于所述特征 空间, 对所述目标图像特征进行初步调整; 通过所述神经 网络学习所述初步调整对应的目标偏差, 并根据所述目标偏差, 对初 步调整后的目标图像特征进行再次调整; 根据所述 目标偏差、 再次调整后的目标图像特征、 所述第一训练图像和所述第二训 练图像, 调整所述神经网络的模型参数。
12、 一种电子设备, 包括: 至少一个处理器和存储器; 所述存储器存储计 算机执行指令; 所述至少一个 处理器执行所述存储器存储的计算机执行指令, 使得所述至少一个处 理器执行如权利要求 1至 7任一项所述的视频生成方法, 或者, 使得所述至少一个处理 器执行如权利要求 8或 9所述的模型确定方法。
13、一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令, 当处理器执行所述计算机执行指令时, 实现如权利要求 1至 7任一项所述的视频生成方 法, 或者, 实现如权利要求 8或 9所述的模型确定方法。
14、 一种计算机程序产品, 所述计算机程序产品包含计算机执行指令, 当处理器执 行所述计算机执行指令时, 实现如权利要求 1至 7任一项所述的视频生成方法, 或者, 实现如权利 8或 9所述的模型确定方法。
15、 一种计算机程序, 所述计算机程序被处理器执行时, 实现如权利要求 1至 7任 一项所述的视频生成方法, 或者, 实现如权利 8或 9所述的模型确定方法。
PCT/SG2022/050927 2021-12-24 2022-12-22 视频生成方法及设备 WO2023121571A2 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111609441.8 2021-12-24
CN202111609441.8A CN114255169A (zh) 2021-12-24 2021-12-24 视频生成方法及设备

Publications (2)

Publication Number Publication Date
WO2023121571A2 true WO2023121571A2 (zh) 2023-06-29
WO2023121571A3 WO2023121571A3 (zh) 2023-09-21

Family

ID=80797847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2022/050927 WO2023121571A2 (zh) 2021-12-24 2022-12-22 视频生成方法及设备

Country Status (2)

Country Link
CN (1) CN114255169A (zh)
WO (1) WO2023121571A2 (zh)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10614347B2 (en) * 2018-01-25 2020-04-07 Adobe Inc. Identifying parameter image adjustments using image variation and sequential processing
CN109905624B (zh) * 2019-03-01 2020-10-16 北京大学深圳研究生院 一种视频帧插值方法、装置及设备

Also Published As

Publication number Publication date
CN114255169A (zh) 2022-03-29
WO2023121571A3 (zh) 2023-09-21

Similar Documents

Publication Publication Date Title
CN112419151B (zh) 图像退化处理方法、装置、存储介质及电子设备
US11482257B2 (en) Image display method and apparatus
CN110021052B (zh) 用于生成眼底图像生成模型的方法和装置
US20230421716A1 (en) Video processing method and apparatus, electronic device and storage medium
US20240013359A1 (en) Image processing method, model training method, apparatus, medium and device
US20240045641A1 (en) Screen sharing display method and apparatus, device, and storage medium
US11785195B2 (en) Method and apparatus for processing three-dimensional video, readable storage medium and electronic device
CN110519645B (zh) 视频内容的播放方法、装置、电子设备及计算机可读介质
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
CN112752118A (zh) 视频生成方法、装置、设备及存储介质
US20230132137A1 (en) Method and apparatus for converting picture into video, and device and storage medium
CN111967397A (zh) 人脸影像处理方法和装置、存储介质和电子设备
CN113038176B (zh) 视频抽帧方法、装置和电子设备
CN113923378A (zh) 视频处理方法、装置、设备及存储介质
US20230224429A1 (en) Video generation method, video playing method, video generation device, video playing device, electronic apparatus and computer-readable storage medium
WO2023098649A1 (zh) 视频生成方法、装置、设备及存储介质
WO2023093481A1 (zh) 基于傅里叶域的超分图像处理方法、装置、设备及介质
WO2023035973A1 (zh) 视频处理方法、装置、设备及介质
WO2023121571A2 (zh) 视频生成方法及设备
JP2023550970A (ja) 画面の中の背景を変更する方法、機器、記憶媒体、及びプログラム製品
CN113905177A (zh) 视频生成方法、装置、设备及存储介质
CN113706663A (zh) 图像生成方法、装置、设备及存储介质
CN111083518B (zh) 一种追踪直播目标的方法、装置、介质和电子设备
WO2023093838A1 (zh) 超分图像处理方法、装置、设备及介质
TWI822032B (zh) 影片播放系統、可攜式影片播放裝置及影片增強方法

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2022912123

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022912123

Country of ref document: EP

Effective date: 20240620