CN113242451A - Video generation method and device - Google Patents

Video generation method and device

Info

Publication number
CN113242451A
CN113242451A
Authority
CN
China
Prior art keywords
video
task
replacement
image
replaced
Prior art date
Legal status
Pending
Application number
CN202110496247.7A
Other languages
Chinese (zh)
Inventor
郑杰
何涛
王林霄
顾力源
查如琳
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202110496247.7A
Publication of CN113242451A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24: Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2405: Monitoring of the internal components or processes of the server, e.g. server load
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455: Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video generation method and a video generation device, wherein the video generation method comprises the following steps: receiving a video generation task, wherein the video generation task comprises a video identifier and an image identifier; replacing an object to be replaced in the video to be replaced, which is indicated by the video identifier, based on the replacement image indicated by the image identifier, to obtain a replacement video, and calling a monitoring thread to acquire a task state of the video generation task in the replacement process; and sending the replacement video to the client in the case that the task state is determined to be task success. According to the method, the monitoring thread is called to acquire the task state of the video generation task while the replacement video is being generated, so the task state can be acquired in time without affecting the video generation task, acquiring the task state does not take a long time, and the efficiency of acquiring the task state is improved; by acquiring the task state in time, the user can know the progress of the video generation task in real time, which improves the user experience.

Description

Video generation method and device
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video generation method. The application also relates to a video generating device, a computing device and a computer readable storage medium.
Background
With the development of video processing technology, creators in the field of video creation are highly enthusiastic about object replacement in videos. However, because the underlying algorithms are not widely understood and no platform offers such creation tools, it is difficult for ordinary users to replace an object in a video and obtain a replacement video.
The existing software can replace the face in the video through simple operation, namely, the face in the image uploaded by the user is used for replacing the object to be replaced in the video to be replaced, so that the replacement video is obtained. In addition, in the process of generating the replacement video, if the client wants to acquire the video generation progress, the client can send a progress acquisition instruction to the server, and the server can execute the progress acquisition instruction to acquire the video generation progress and feed back the video generation progress to the client.
However, in the above method, the server must execute not only the video generation task but also the progress acquisition instruction. Since the video generation task is being executed, the server may not be able to execute the progress acquisition instruction in time, so acquiring the video generation progress takes a long time and the user experience is degraded.
Disclosure of Invention
In view of this, the present application provides a video generation method. The application also relates to a video generation apparatus, a computing device, and a computer-readable storage medium, to solve the problem in the prior art that acquiring the video generation progress takes a long time.
According to a first aspect of embodiments of the present application, there is provided a video generation method, including:
receiving a video generation task, wherein the video generation task comprises a video identifier and an image identifier;
replacing an object to be replaced in the video to be replaced, which is indicated by the video identification, based on the replacement image, which is indicated by the image identification, to obtain a replacement video, and calling a monitoring thread to acquire a task state of the video generation task in the replacement process;
and sending the replacement video to a client under the condition that the task state is determined to be task success.
According to a second aspect of embodiments of the present application, there is provided a video generation apparatus, including:
the receiving module is configured to receive a video generation task, wherein the video generation task comprises a video identifier and an image identifier;
the replacing module is configured to replace an object to be replaced in the video to be replaced, which is indicated by the video identification, based on the replacing image indicated by the image identification to obtain a replacing video, and call a monitoring thread to acquire a task state of the video generating task in a replacing process;
a sending module configured to send the replacement video to a client if it is determined that the task status is task successful.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the video generation method when executing the instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the video generation method.
The video generation method provided by the application receives a video generation task, wherein the video generation task comprises a video identifier and an image identifier; replaces an object to be replaced in the video to be replaced, which is indicated by the video identifier, based on the replacement image indicated by the image identifier, to obtain a replacement video, and calls a monitoring thread to acquire a task state of the video generation task in the replacement process; and sends the replacement video to a client in the case that the task state is determined to be task success. According to the method, the monitoring thread is called to acquire the task state of the video generation task while the replacement video is being generated, so the task state can be acquired in time without affecting the video generation task, acquiring the task state does not take a long time, and the efficiency of acquiring the task state is improved; by acquiring the task state in time, the user can know the progress of the video generation task in real time, which improves the user experience.
Drawings
Fig. 1 is a flowchart of a video generation method according to an embodiment of the present application;
fig. 2 is a process flow diagram of a video generation method applied to human body replacement according to an embodiment of the present application;
fig. 3 is a schematic diagram of a video to be replaced according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative video provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many other ways than those described herein, and those skilled in the art can make similar adaptations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
Replacement video: a video obtained by replacing the object to be replaced in the video to be replaced with the replacement image uploaded by the user.
Image verification: an intelligent content auditing scheme based on deep learning, which accurately filters images carrying illegal content and realizes an automatic auditing function.
Task queue: a list of tasks serviced by one or more threads. If the task queue has only one service thread, all tasks are executed in the order in which they are written into the list. If the task queue has multiple service threads, the order in which the tasks are executed is not fixed.
Message queue: a queue for storing task states.
Monitoring thread: a thread created by the server for acquiring the task state of the video generation task.
Face driving: processing the video to be replaced to generate a video sequence, so that the face in the replacement image is animated according to the motion in the video to be replaced.
Task scheduling: scheduling a processor to execute a task or a system command according to certain constraints. For example, if the task queue is time dependent, task scheduling may mean scheduling the processor to execute the task.
Gateway layer: responsible for scheduling the task-related function interfaces.
File storage system: used for storing the uploaded replacement image.
AI layer: used for performing compliance and legality auditing of the replacement image with an AI algorithm.
Algorithm gateway layer: a gateway layer used for interacting with the algorithm model layer; it sends the request data of the gateway layer to the algorithm model layer, and after the algorithm model layer finishes executing a task it writes the task information into the message queue for the gateway layer to monitor and consume.
Algorithm model layer: stores the algorithm models, and can execute a video generation task according to the replacement image and the video to be replaced to generate a replacement video.
H5 page: a page displayed in the client on which the video to be replaced and the replacement image are determined and the composite video is generated.
Next, a brief description is given of an application scenario provided in the embodiment of the present application.
In the field of video creation, creators are highly enthusiastic about face driving, face changing and the like, and have many imaginative ideas, but their limited understanding of the underlying algorithms makes it difficult for them to put their creativity into practice. The threshold for creating face-driven videos in the current market is high, and there is no function or platform that allows an ordinary user to create face-driven videos with a low learning cost and simple operations.
In order to solve the above problems, the present application provides a video generation method, which is low in learning cost and simple in operation for a user, and in the process of generating a replacement video, a monitor thread is specially called to obtain a task state of a video generation task and feed the task state back to a client, so that the user can know the progress of the video generation task in real time, and further user experience can be improved.
In the present application, a video generation method is provided, and the present application relates to a video generation apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Fig. 1 shows a flowchart of a video generation method provided in an embodiment of the present application, where the method is applied to a server, and specifically includes the following steps:
step 102: receiving a video generation task, wherein the video generation task comprises a video identifier and an image identifier.
Specifically, the video identifier may be used to uniquely identify a video, and may be a letter, a number, a symbol, or the like. The image identifier may be used to uniquely identify an image, and may likewise be a letter, a number, a symbol, or the like. For example, the video identifier may be V1 and the image identifier may be P2.
In a specific implementation, the server may receive a video generation task from the client, where the video generation task includes a video identifier and an image identifier, so that the server may determine that an object to be replaced in a video indicated by the video identifier is replaced by the image indicated by the image identifier.
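As a purely illustrative sketch (not part of the patent), the video generation task received by the server can be thought of as a small payload carrying the two identifiers; the class and field names below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VideoGenerationTask:
    """Hypothetical shape of a video generation task; names are illustrative only."""
    video_id: str  # identifies the video to be replaced, e.g. "V1"
    image_id: str  # identifies the replacement image, e.g. "P2"

# A task as the server might receive it from the client.
task = VideoGenerationTask(video_id="V1", image_id="P2")
print(task)
```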
In an embodiment of the present application, before receiving the video generation task, the method further includes: receiving and storing the replacement image; and sending the storage address of the replacement image to a client.
In a specific implementation, before receiving a video generation task, the server may receive a replacement image from the client, store the replacement image in a file storage system of the server, and return a storage address to the client, so that the client may subsequently acquire the replacement image from the server. Wherein the file storage system may be used to store replacement images.
In some embodiments, if a user wants to replace an object to be replaced in a video to be replaced, the user may select the video to be replaced from a video list displayed by the client, determine a replacement image based on the video to be replaced, and upload the replacement image to the client; the client may then determine the video to be replaced and the replacement image. In a specific implementation, the user may select a video of interest from the video list as the video to be replaced, and may click to enter the detail page of the video and play it to learn its content. Then, an image is selected from the album, or an image is taken directly, as the replacement image. If the video to be replaced selected by the user is a face-based video (such as a singing video), a face image may be selected as the replacement image; if the video to be replaced selected by the user is a body-based video (such as a dance video), an image including a human body may be selected as the replacement image.
In other embodiments, if the user wants to replace the object to be replaced in the video to be replaced, the user can upload the image as the replacement image by himself. The client can then extract feature points of the replacement image to realize target recognition of the replacement image, and determine a video to be replaced according to the recognition result. For example, if a human face is identified in the replacement image, a video based on the human face (such as a singing video) may be recommended, and if a human body is identified in the replacement image, a video based on the human body (such as a dance video) may be recommended. In this manner, the client may determine the video to be replaced and the replacement image.
As an example, after receiving the replacement image, the client may upload the replacement image to the server for storage. Since the client has not sent the video generation task to the server at this time, the server may receive the replacement image from the client before receiving the video generation task, and the server may store the replacement image in the file storage system, return the storage address of the replacement image to the client, and then the subsequent client may obtain the replacement image from the server based on the storage address.
As another example, the client and the server may exchange data through a gateway layer. After receiving the replacement image, the client can send the replacement image to the gateway layer, and the gateway layer sends it to the server. After receiving the replacement image, the server can store it in the file storage system and send its storage address to the gateway layer, and the gateway layer returns the storage address to the client; after the replacement video is subsequently generated, the replacement image can be obtained from the server as the cover of the replacement video. The gateway layer is responsible for scheduling the task-related function interfaces. In the above example, the gateway layer is responsible for invoking the interface of the server's file storage system to store the replacement image into the file storage system.
In one embodiment of the present application, after receiving and storing the replacement image, the method further includes: receiving an image checking instruction; and verifying the replacement image based on the image verification instruction, acquiring a verification result and sending the verification result to the client.
The image verification instruction is an instruction for verifying the replacement image. The verification result may include both pass and fail cases.
In specific implementation, after receiving and storing the replacement image, the server may further receive an image verification instruction, where the image verification instruction may include an image identifier, and the server may verify the replacement image indicated by the image identifier based on the image verification instruction to determine whether the replacement image is compliant or not, and whether the replacement image is available for video generation or not, so as to avoid generating a non-compliant replacement video and waste of server resources.
In some embodiments, in order to ensure that the used replacement image is valid and valid, after the client uploads the replacement image to the server, an image verification instruction for verifying the replacement image may be generated and sent to the server, and then the server may receive the image verification instruction, perform image verification on the replacement image based on an AI algorithm of mass data, and send a verification result to the client. Or, the server may be configured with an AI layer, where the AI layer is configured to perform compliance legal audit on the image by using an AI algorithm, and after receiving the image verification instruction, the server may send the replacement image to the AI layer, perform image verification on the replacement image through the AI layer, and send a verification result to the client.
In other embodiments, the client generates an image verification instruction for verifying the replacement image and then sends the image verification instruction to the gateway layer, the gateway layer may call the AI layer of the server and send the image verification instruction to the AI layer, the AI layer may obtain the replacement image from the file storage system, perform image verification on the replacement image through an AI algorithm, send a verification result to the gateway layer, and send the verification result to the client through the gateway layer.
As an example, the replacement image may be verified by algorithms such as image classification, object detection and face recognition. Alternatively, the replacement image may be verified by a pornography detection model and a politically sensitive content detection model, and the verification result is returned to the client.
In the embodiment of the application, a function allowing users to upload custom replacement images is opened. The replacement images uploaded by users can be automatically checked and verified based on massive image information, ensuring that users generate videos with legal and valid replacement images, and avoiding the waste of server resources caused by generating invalid replacement videos from invalid replacement images.
In an embodiment of the application, after receiving the verification result, if the verification result is pass, the client may create a video generation task based on the video to be replaced and the replacement image and send the video generation task to the server; accordingly, the server may receive the video generation task. If the verification result is fail, the client does not create the video generation task.
As one example, a client and server may communicate data through a gateway layer. The client can create a video generation task based on the video to be replaced and the replacement image, and send the video generation task to the gateway layer, so that the gateway layer can send the video generation task to the algorithm gateway layer of the server, the algorithm gateway layer can interact with the algorithm model layer, and send the video generation task to the algorithm model layer, so that the server can be considered to receive the video generation task. In addition, after receiving the video generation task, the gateway layer may record the image identifier and the video identifier of the video generation task.
Further, after receiving the video generation task, the algorithm model layer may generate a task identifier of the video generation task and send the task identifier to the algorithm gateway layer, and then the algorithm gateway layer may send the task identifier to the gateway layer, and the gateway layer may record the task identifier.
In one embodiment of the application, before creating the video generation task, the client can also perform image recognition on the replacement image to obtain a recognition result; determine, in the case that recognition is determined to be successful based on the recognition result, whether the recognition result matches the video type of the video to be replaced; and if the recognition result matches the video type of the video to be replaced, create a video generation task based on the video to be replaced and the replacement image.
Specifically, the recognition result may be the object recognized in the replacement image. Image recognition is performed on the replacement image; if the recognition result is a human face or a human body, the recognition can be considered successful, and if no human face or human body can be recognized, the recognition can be considered failed. The video types may include two types: face videos and human body videos.
In a specific implementation, to avoid a situation in which a replacement video cannot be generated from the acquired replacement image, image recognition may first be performed on the replacement image to obtain a recognition result. If recognition is successful, the replacement image can be considered to contain a human face or a human body, so face or body replacement is possible; however, if the video to be replaced does not match the replacement image, replacement still cannot be performed. Therefore, it is further necessary to determine whether the video type of the video to be replaced matches the recognition result, and if the recognition result matches the video type of the video to be replaced, a video generation task may be created based on the video to be replaced and the replacement image.
Further, the recognition result may not match the video type of the video to be replaced: if the recognition result is a human face and the video type is a human body video, or the recognition result is a human body and the video type is a face video, the two do not match. In this case, an image may be newly acquired as a new replacement image based on the video type of the video to be replaced, and a video generation task may be created based on the video to be replaced and the new replacement image. For example, if the video type of the video to be replaced is a face video, a face image may be acquired as the new replacement image; if the video type is a human body video, a human body image may be acquired as the new replacement image.
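A minimal sketch of the matching check described above, assuming simple string labels for the recognition result and the video type (all names here are illustrative, not taken from the patent):

```python
def matches_video_type(recognition_result: str, video_type: str) -> bool:
    """Return True when the recognized subject suits the subject of the video.

    recognition_result: "face" or "body", i.e. what was recognized in the replacement image.
    video_type: "face_video" (e.g. a singing video) or "body_video" (e.g. a dance video).
    """
    return ((recognition_result == "face" and video_type == "face_video")
            or (recognition_result == "body" and video_type == "body_video"))

# Create the video generation task only when the replacement image matches the video type.
if matches_video_type("face", "face_video"):
    print("create the video generation task")
else:
    print("acquire a new replacement image that matches the video type")
```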
In the embodiment of the application, after the client side obtains the replacement image, the replacement image can be uploaded to the server to be stored, the image verification instruction is generated and sent to the server, the server can perform image verification on the replacement image after receiving the image verification instruction, and the verification result is sent to the client side. And the client can create a video generation task after determining that the replacement image meets the requirements based on the verification result, and the video generation task is sent to the server, so that the server can receive the video generation task. And a video generation task is created under the condition that the replacement image meets the requirement, so that the replacement image is ensured to be a legally compliant image, and the waste of server resources caused by the fact that the server generates an invalid replacement video based on the illegal replacement image is avoided.
Step 104: and replacing the object to be replaced in the video to be replaced indicated by the video identification based on the replacement image indicated by the image identification to obtain a replacement video, and calling a monitoring thread to acquire the task state of the video generation task in the replacement process.
Specifically, the task state is used for representing the processing progress of the video generation task. As one example, the task state may include wait, execute, cancel, success, failure, etc. states.
In specific implementation, the server can generate a replacement video based on the video to be replaced and the replacement image through an algorithm model, and can call a monitoring thread to acquire a task state of a video generation task in the process of generating the replacement video. For example, the algorithmic model may be a face-driven model.
As an example, a monitoring thread is a thread created by a server for monitoring the task state of a video generation task.
In an embodiment of the present application, replacing an object to be replaced in a video to be replaced, which is indicated by the video identifier, based on the replacement image, which is indicated by the image identifier, and obtaining a specific implementation of the replacement video may include: performing frame division processing on the video to be replaced to obtain a plurality of video frames; performing target identification on each video frame in the plurality of video frames, and taking the video frame with the object to be replaced as a target video frame; replacing the object to be replaced in the target video frame based on the replacement image to obtain a replacement video frame; and generating the replacement video based on the replacement video frame and the video frames of the video to be replaced except the target video frame.
Specifically, the object to be replaced may be a subject object in the video to be replaced. For example, if the video to be replaced is a singing video, the object to be replaced may be a singer. If the video to be replaced is a dance video, the object to be replaced may be a dancer.
In specific implementation, a video to be replaced can be divided into a plurality of video frames by frames, each video frame can be subjected to target detection by a target detector based on a deep convolutional network, and the video frame with a detected object to be replaced serves as a target video frame. And then replacing the object to be replaced in the target video frame with a replacement image to obtain a replacement video frame, and splicing the replacement video frame and the video frames except the target video frame in the video to be replaced according to a time sequence to obtain a replacement video.
As an example, the video to be replaced and the replacement image may be input into an algorithm model layer of the server, where the algorithm model layer stores a video generation model, and a video generation task may be performed according to the replacement image and the video to be replaced to obtain the replacement video. Namely, the replacement image and the video to be replaced are input into the video generation model, and the replacement video can be output.
It should be noted that the video generation model may be a pre-trained neural network model.
In the embodiment of the application, a target video frame needing object replacement is determined from a video to be replaced through target detection, then an object to be replaced in the target video frame is replaced by a replacement image, a replacement video frame can be obtained, the target video frame in the video to be replaced is replaced by the replacement video frame, and then a replacement video can be obtained. Therefore, the replacement of the object to be replaced in the video to be replaced can be realized.
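A rough sketch of the per-frame replacement flow described above, using OpenCV only for frame splitting and re-assembly; `detect_target` and `replace_object` are stand-ins for the deep-network detector and the replacement step, so this is an assumption-laden outline rather than the patented algorithm.

```python
import cv2

def generate_replacement_video(video_path, replacement_image_path, output_path,
                               detect_target, replace_object):
    """Split the video into frames, replace the target in frames where it is detected,
    and splice all frames back together in time order.

    detect_target(frame) -> bool and replace_object(frame, image) -> frame are
    placeholders for the deep-learning components; they are assumptions here.
    """
    replacement_image = cv2.imread(replacement_image_path)
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if detect_target(frame):                              # target video frame
            frame = replace_object(frame, replacement_image)  # replacement video frame
        writer.write(frame)                                   # keep the original time order
    capture.release()
    writer.release()
```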
In an embodiment of the present application, replacing the object to be replaced in the target video frame based on the replacement image, a specific implementation of obtaining the replacement video frame may include: carrying out three-dimensional reconstruction on the object to be replaced in the target video frame to obtain a three-dimensional model of the object to be replaced, and carrying out three-dimensional reconstruction on the replacement object in the replacement image to obtain a three-dimensional model of the replacement object; extracting a plurality of feature points of the object to be replaced in the target video frame, and determining feature parameters of the object to be replaced based on the three-dimensional model of the object to be replaced; constructing a target image based on the three-dimensional model of the replacement object, the plurality of feature points and the feature parameters; and replacing the object to be replaced in the target video frame based on the target image to obtain a replacement video frame.
The characteristic parameters may include expression parameters and/or action parameters. If the object to be replaced is a human face, the characteristic parameters may be expression parameters, and if the object to be replaced is a human body, the characteristic parameters may be motion parameters or motion parameters and expression parameters.
In a specific implementation, generating the replacement video based on the replacement image and the video to be replaced may be implemented by a video generation model, where the video generation model may include a three-dimensional reconstruction module, a video preprocessing module, a two-dimensional image generation module, and a video editing module. The three-dimensional reconstruction module can be used to three-dimensionally reconstruct the object to be replaced in the target video frame to obtain a three-dimensional model of the object to be replaced, and to three-dimensionally reconstruct the replacement object in the replacement image to obtain a three-dimensional model of the replacement object. A plurality of feature points of the object to be replaced in the target video frame can be extracted through the video preprocessing module, and the feature parameters of the object to be replaced are determined based on the three-dimensional model of the object to be replaced. The target image is constructed based on the three-dimensional model of the replacement object, the plurality of feature points and the feature parameters through the two-dimensional image generation module. The target image replaces the object to be replaced in the target video frame through the video editing module to obtain a replacement video frame.
As an example, for each target video frame, the two-dimensional image generation module may adjust and align a three-dimensional model of a replacement object with an object to be replaced in the target video frame through feature points, perform deformation adjustment on the three-dimensional model according to feature parameters of the object to be replaced in the target video frame, and project the aligned and feature-adjusted three-dimensional model onto a two-dimensional image, so as to obtain a target image.
As an example, for each target video frame, the video editing module may replace an object to be replaced in the target video frame with a target image, and perform smoothing and color blurring processing on the target image to avoid color mutation and other distortion phenomena, so that a replacement video frame may be obtained.
In the embodiment of the application, the replacement video can be generated through the three-dimensional reconstruction module, the video preprocessing module, the two-dimensional image generation module and the video editing module in the video generation model, and the video generation model is trained in advance, so that the generation time of the replacement video can be shortened, and the generation efficiency of the replacement video can be improved.
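Purely as an illustration of how the four modules of the video generation model might be chained for each target video frame (every module here is an injected placeholder; the patent does not disclose their internals):

```python
class VideoGenerationModel:
    """Illustrative composition of the four modules described above.

    reconstruct_3d, preprocess, generate_2d and edit are injected callables that
    stand in for the three-dimensional reconstruction, video preprocessing,
    two-dimensional image generation and video editing modules.
    """

    def __init__(self, reconstruct_3d, preprocess, generate_2d, edit):
        self.reconstruct_3d = reconstruct_3d
        self.preprocess = preprocess
        self.generate_2d = generate_2d
        self.edit = edit

    def replace_frame(self, target_frame, replacement_image):
        # 3D models of the object to be replaced and of the replacement object.
        model_to_replace = self.reconstruct_3d(target_frame)
        model_replacement = self.reconstruct_3d(replacement_image)
        # Feature points and feature (expression/motion) parameters of the object to be replaced.
        feature_points, feature_params = self.preprocess(target_frame, model_to_replace)
        # Project the aligned, parameter-adjusted 3D model to a two-dimensional target image.
        target_image = self.generate_2d(model_replacement, feature_points, feature_params)
        # Paste the target image into the frame with smoothing and colour blending.
        return self.edit(target_frame, target_image)
```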
In an embodiment of the present application, after receiving the video generation task, the method further includes: and writing the video generation task into a task queue.
In a specific implementation, after receiving the video generation task, the server may write the video generation task into a task queue, where the task queue is used to store the video generation task.
As an example, the server may create multiple service threads for servicing the task queue. In this case, if the server receives a plurality of video generation tasks, the plurality of video generation tasks may be written into the task queue in the order of reception, and the plurality of service threads may acquire the video generation tasks from the task queue in parallel to execute them. Namely, the task scheduling of the application is to arrange a plurality of service threads of the server to execute the tasks in parallel.
Therefore, under the condition that the video generation tasks are more, the service threads process the video generation tasks in parallel, the time for processing the video generation tasks can be greatly shortened, the processing efficiency of the server is improved, and the efficiency for generating the replacement video is further improved.
In an embodiment of the present application, before replacing an object to be replaced in a video to be replaced, which is indicated by the video identifier, based on the replacement image, the method further includes: acquiring the video generation task from the task queue; and acquiring the video to be replaced indicated by the video identification, and acquiring the replacement image indicated by the image identification.
In a specific implementation, since the video generation task is stored in the task queue, before the video generation task is executed, the service thread may be called to acquire the video generation task from the task queue, and acquire a video to be replaced based on the video identifier and acquire a replacement image based on the image identifier, so as to facilitate subsequent processing.
As an example, after receiving a video generation task, a server may store the video generation task in a task queue, and if a plurality of video generation tasks are stored in the task queue, a server thread may be called to concurrently acquire the video generation task from the task queue, and acquire a replacement image from a file storage system according to an image identifier in the video generation task, and acquire a video to be replaced according to the video identifier, so as to perform video synthesis based on the replacement image and the video to be replaced, and obtain a replacement video.
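A minimal sketch of the task queue and parallel service threads described in the two embodiments above, using Python's standard library; the helper functions and the four-thread figure are assumptions for illustration.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

# Placeholder helpers; in the patent these correspond to the file storage system and
# the algorithm model layer, whose real interfaces are not disclosed.
def fetch_video(video_id): return f"video:{video_id}"
def fetch_image(image_id): return f"image:{image_id}"
def synthesize_replacement_video(video, image): return f"replacement({video}, {image})"

task_queue = queue.Queue()  # stores received video generation tasks

def service_thread():
    """One of several service threads that consume tasks from the queue in parallel."""
    while True:
        task = task_queue.get()  # next video generation task: (video_id, image_id)
        if task is None:         # sentinel used in this sketch to stop the worker
            task_queue.task_done()
            break
        video = fetch_video(task[0])                 # video to be replaced, by identifier
        image = fetch_image(task[1])                 # replacement image, from file storage
        synthesize_replacement_video(video, image)   # execute the video generation task
        task_queue.task_done()

with ThreadPoolExecutor(max_workers=4) as pool:      # e.g. four service threads
    for _ in range(4):
        pool.submit(service_thread)
    task_queue.put(("V1", "P2"))                     # write a received task into the queue
    for _ in range(4):
        task_queue.put(None)                         # stop every worker in this sketch
```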
In an embodiment of the present application, a specific implementation of invoking a monitoring thread to obtain a task state of the video generation task in the replacement process may include: and under the condition that the task state of the video generation task changes, calling the monitoring thread to acquire the task state of the video generation task.
In a specific implementation, the video generation task can be executed by the algorithm model layer of the server. In the process of executing the video generation task, if the task state changes, the algorithm model layer can feed a new task state back to the algorithm gateway layer; correspondingly, the server can call the monitoring thread through the gateway layer, so that the gateway layer can receive the task state of the video generation task sent by the algorithm gateway layer.
It should be noted that, in this implementation, the gateway layer may be considered as a module configured in the server.
As an example, when a video generation task has not yet been taken out of the task queue, its task state may be recorded as waiting. When the video generation task is taken out of the task queue and the algorithm model layer starts to execute it, the monitoring thread can be called to acquire the task state of the video generation task, and the recorded task state may be executing; the task state can continue to be recorded during execution, for example a synthesis progress of 50 percent, 80 percent and the like. If execution of the video generation task is completed and the replacement video is successfully generated, the monitoring thread can be called to acquire the task state, and the recorded task state may be success. If the replacement video cannot be generated due to network or other reasons during execution and synthesis can no longer continue, the monitoring thread can be called to acquire the task state, and the recorded task state may be failure. If the user temporarily does not want to replace the image, a task cancellation instruction, which may include a task identifier, can be sent to the server through the client. During execution of the video generation task, generation of the replacement video may be stopped because the task cancellation instruction is received; therefore, when the task cancellation instruction is received, the monitoring thread can be called to acquire the task state, and the recorded task state may be cancelled.
In the embodiment of the application, the server calls the monitoring thread to be specially used for acquiring the task state of the video generation task, the task state information of the video generation task can be acquired in real time, the service thread for executing the video generation task is not occupied, and the processing efficiency of the server can be improved.
In an embodiment of the present application, after the monitoring thread is called to obtain the task state of the video generation task in the replacement process, the method further includes: and writing the acquired task state and the task identifier of the video generation task into a message queue.
In a specific implementation, after the monitoring thread is called to obtain the task state of the video generation task, the task state and the task identifier of the video generation task can be written into the message queue, so that the client can obtain the task state of the video generation task from the message queue based on the task identifier.
As an example, the message queue may include task identifiers of a plurality of video generation tasks and a task state of each video generation task, and after acquiring the task state, the gateway layer may determine the task state of the video generation task indicated by the task identifier from the message queue, and update the task state of the video generation task with the acquired task state.
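A schematic sketch of the monitoring thread and message queue interaction described above; the message queue is simplified to a dictionary keyed by task identifier, and the state-change events are assumed to arrive on an in-process queue rather than from a real algorithm gateway layer.

```python
import queue
import threading

state_change_events = queue.Queue()  # state changes reported while tasks are executed
message_queue = {}                   # latest task state per task identifier (simplified)
message_queue_lock = threading.Lock()

def monitoring_thread():
    """Dedicated thread that only records task states, so the service threads that
    execute video generation tasks are never interrupted."""
    while True:
        event = state_change_events.get()
        if event is None:            # sentinel used in this sketch to stop the monitor
            break
        task_id, task_state = event
        with message_queue_lock:
            message_queue[task_id] = task_state  # write/update the state in the message queue

monitor = threading.Thread(target=monitoring_thread)
monitor.start()

# States the algorithm model layer might report while executing task S1.
state_change_events.put(("S1", "executing"))
state_change_events.put(("S1", "progress 50%"))
state_change_events.put(("S1", "success"))
state_change_events.put(None)
monitor.join()
print(message_queue["S1"])  # -> success
```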
In an embodiment of the present application, after writing the obtained task state and the task identifier of the video generation task into a message queue, the method further includes: receiving a task state acquisition instruction, wherein the task state acquisition instruction comprises a task identifier; and acquiring the task state of the video generation task indicated by the task identifier from the message queue and sending the task state to the client.
In a specific implementation, the server may receive a task state obtaining instruction from the client, obtain, based on a task identifier included in the task state obtaining instruction, a task state of the video generation task indicated by the task identifier from the message queue, and send the task state to the client, so that the client may obtain the task state of the video generation task.
As an example, if the gateway layer is a module configured in the server, the gateway layer receives a task state obtaining instruction from the client, and obtains a task state of a corresponding video generation task from the message queue according to the task identifier, and sends the task state to the client.
For example, assuming that the task identifier is S1, the task state corresponding to the task identifier S1 may be acquired from the message queue; assuming that the task state corresponding to S1 is executing, it may be determined that the video generation task S1 is currently being executed.
In some embodiments, to facilitate a user to know the generation progress of the replacement video in real-time, the client may poll the task status of the video generation task. As an example, a specific implementation of polling the task state of the video generation task may include: and sending a task state acquisition instruction to the server and receiving a task state returned by the server. And updating the task state of the video generation task indicated by the task identifier in the task list based on the task state returned by the server.
In specific implementation, after the client generates a video generation task, a task list can be created, and the video generation task is sent to the gateway layer, and the gateway layer can record an image identifier and a video identifier of the video generation task. And after receiving the video generation task, the server can create a task identifier for the video generation task and send the task identifier to the gateway, and the gateway can record the task identifier and send the task identifier to the client, so that the client can store the task identifier in the task list. The client can periodically send a task state acquisition instruction to the server, the server acquires the task state from the message queue and sends the task state to the gateway layer based on the task identifier in the task state acquisition instruction, and the gateway layer sends the task state to the client. After receiving the task state sent by the gateway layer, the client can update the task state of the video generation task in the task list based on the task state.
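A sketch of the client-side polling described above; `get_task_state` stands in for the round trip through the gateway layer (task state acquisition instruction in, task state out) and is an assumption, not an API from the patent.

```python
import time

def poll_task_state(task_id, get_task_state, interval_seconds=2.0):
    """Periodically ask the server for the task state, update a local task list,
    and stop once the task reaches a terminal state."""
    task_list = {task_id: "waiting"}
    terminal_states = {"success", "failure", "cancelled"}
    while task_list[task_id] not in terminal_states:
        time.sleep(interval_seconds)
        task_list[task_id] = get_task_state(task_id)  # send the task state acquisition instruction
        print(f"task {task_id}: {task_list[task_id]}")
    return task_list[task_id]

# Example with a fake server whose task succeeds on the third poll.
responses = iter(["executing", "progress 80%", "success"])
final_state = poll_task_state("S1", lambda _task_id: next(responses), interval_seconds=0.01)
```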
In the embodiment of the application, the server can generate the replacement video based on the video to be replaced and the replacement image, in the replacement process, the monitoring thread can be independently called to obtain the task state of the video generation task, the task state is written into the message queue, when the task state obtaining instruction of the client side including the task identifier is received, the task state of the video generation task indicated by the task identifier is obtained from the message queue and is sent to the client side, the client side can obtain the task state of the video generation task in real time, so that a user can monitor the generation progress of the replacement video in real time, and user experience is improved.
Step 106: and sending the replacement video to a client under the condition that the task state is determined to be the task success.
In a specific implementation, if it is determined that the task state is a task success, it may be considered that the object to be replaced in the video to be replaced has been successfully replaced with the replacement image, so as to obtain a replacement video, and therefore, the replacement video may be obtained and sent to the client.
Further, under the condition that it is determined that the task state is the task success, a task success notification may be sent to the client, and after receiving the task success notification, the client may update the task state of the task in the task list to the task success.
In an embodiment of the present application, after the monitoring thread is called to obtain the task state of the video generation task in the replacement process, the method further includes: and sending a task failure notice to the client when the task state is determined to be the task failure.
In a specific implementation, if the task state is a task failure, it may be considered that the object to be replaced in the video to be replaced is not successfully replaced with the replacement image, the video after replacement is not available, a task failure notification may be sent to the client, and after receiving the task failure notification, the client may update the task state of the task in the task list to be a task failure.
As an example, if a user wants to cancel a video generation task in the process of executing the video generation task by a server, the user may click a "cancel" option displayed by a client, and the client may generate a task cancel instruction, where the task cancel instruction includes a task identifier, and send the task cancel instruction to the server, and the server may determine the video generation task based on the task identifier, stop executing the video generation task, and send a completion notification to the client to inform the client that the video generation task has been cancelled.
The video generation method provided by the application receives a video generation task, wherein the video generation task comprises a video identifier and an image identifier; replaces an object to be replaced in the video to be replaced, which is indicated by the video identifier, based on the replacement image indicated by the image identifier, to obtain a replacement video, and calls a monitoring thread to acquire a task state of the video generation task in the replacement process; and sends the replacement video to a client in the case that the task state is determined to be task success. According to the method, the monitoring thread is called to acquire the task state of the video generation task while the replacement video is being generated, so the task state can be acquired in time without affecting the video generation task, acquiring the task state does not take a long time, and the efficiency of acquiring the task state is improved; by acquiring the task state in time, the user can know the progress of the video generation task in real time, which improves the user experience.
The following describes the video generation method further by taking an application of the video generation method provided by the present application in human body replacement as an example, with reference to fig. 2. Fig. 2 shows a processing flow chart of a video generation method applied to human body replacement according to an embodiment of the present application, which may specifically include the following steps:
step 202: in response to the user's operation, the client determines a video to be replaced.
In this embodiment, the operations performed by the client can be regarded as operations performed on the H5 page.
For example, multiple videos may be presented on the H5 page of the client, from which the user selects one video as the video to be replaced. Referring to fig. 3, fig. 3 is a screenshot of a video to be replaced.
Step 204: the client receives the replacement image.
For example, the user may click the add-photo control shown in fig. 3, select an image from the album or take one directly, and upload it to the client as the replacement image.
Step 206: and the client uploads the received replacement image to the gateway layer.
Step 208: the gateway layer uploads the replacement image to a file storage system of the server.
Step 210: the file storage system of the server stores the replacement image and returns the storage address of the replacement image to the gateway layer.
Step 212: and the gateway layer sends the storage address of the replacement image to the client.
Step 214: and the client sends an image checking instruction to the gateway layer.
For example, a user may click the composite-video control in fig. 3, and the client may receive a video generation request. To avoid the generated replacement video being unusable due to a non-compliant replacement image, the replacement image may be verified; that is, the client may send an image verification instruction to the gateway layer.
Step 216: and the gateway layer sends the image checking instruction to an AI layer of the server.
Step 218: the AI layer of the server verifies the replacement image.
Step 220: and the AI layer of the server sends the verification result to the gateway layer.
Step 222: and the gateway layer sends the checking result to the client.
Illustratively, in order to ensure that the used replacement image is valid and effective, after the client uploads the replacement image to the file storage system of the server through the gateway layer, an image verification instruction for verifying the replacement image can be generated and sent to the AI layer of the server, and then the server can receive the image verification instruction, perform image verification on the replacement image based on an AI algorithm of mass data at the AI layer, and send the verification result to the client through the gateway layer.
Step 224: and if the client determines that the replacement image meets the requirements based on the verification result, generating a video generation task and sending the video generation task to the gateway layer.
Step 226: and the gateway layer receives the video generation task and records the image identification and the video identification of the video generation task.
Step 228: and the gateway layer sends the video generation task to an algorithm gateway layer of the server.
Step 230: and the algorithm gateway layer of the server sends the video generation task to the algorithm model layer.
Step 232: and the algorithm model layer generates a task identifier of the video generation task and returns the task identifier to the algorithm gateway layer.
Step 234: and the algorithm gateway layer sends the task identifier to the gateway layer, and the gateway layer can record the task identifier.
Step 236: and the gateway layer sends the task identification to the client.
Step 238: the algorithm model layer performs the video generation task.
Step 240: and the server starts a monitoring thread through the gateway layer and is used for receiving the task state of the video generation task sent by the algorithm gateway layer, searching the corresponding video generation task according to the task identifier included in the received task state and updating the task state.
For example, assuming that the task identifier is a, the task corresponding to the task identifier is originally in execution, the task state is obtained by the monitoring thread, and the new task state is successful, the task state of the video generation task corresponding to the task identifier a may be modified to be successful.
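A minimal sketch of the monitoring thread described in step 240 is given below, assuming the gateway layer keeps task states in an in-memory dictionary and receives updates through a queue; the dictionary is only an illustrative simplification of the message queue mentioned later in the application.

import queue
import threading

task_states = {}               # task identifier -> task state
state_updates = queue.Queue()  # updates pushed by the algorithm gateway layer


def monitor_loop():
    while True:
        update = state_updates.get()            # e.g. {"task_id": "A", "state": "successful"}
        task_id = update["task_id"]
        task_states[task_id] = update["state"]  # find the task by its identifier and update its state


monitor_thread = threading.Thread(target=monitor_loop, daemon=True)
monitor_thread.start()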
Step 242: the client sends a task state acquisition instruction to the gateway layer, wherein the task state acquisition instruction includes the task identifier.
Step 244: the gateway layer receives the task state acquisition instruction, queries the corresponding task state according to the task identifier, and sends the task state to the client.
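From the client side, steps 242 to 246 amount to polling the gateway layer until the task finishes. The helper below is an illustrative sketch; the state names and the polling interval are assumptions.

import time


def wait_for_task(task_id: str, get_state, interval: float = 1.0, timeout: float = 300.0) -> str:
    """Polls the gateway layer (the task state acquisition instruction) until the task ends."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_state(task_id)  # steps 242/244: query the task state by task identifier
        if state in ("successful", "failed"):
            return state
        time.sleep(interval)
    return "timeout"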
Step 246: if it is determined that the task state is task successful, the gateway layer sends a task success notification and the replacement video to the client.
For example, referring to fig. 4, fig. 4 is a screenshot of one frame of the replacement video. The user may then publish the replacement video for other users to view.
In the video generation method provided by the application, a video generation task is received, wherein the video generation task includes a video identifier and an image identifier; an object to be replaced in the video to be replaced indicated by the video identifier is replaced based on the replacement image indicated by the image identifier to obtain a replacement video, and a monitoring thread is called during the replacement process to acquire the task state of the video generation task; and the replacement video is sent to a client if it is determined that the task state is task successful. In this method, the monitoring thread is called to acquire the task state while the replacement video is being generated, so the task state can be obtained in time without affecting the video generation task and without a long acquisition delay, which improves the efficiency of acquiring the task state. Because the task state is obtained in time, the user can follow the progress of the video generation task in real time, which improves the user experience.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video generating apparatus, and fig. 5 shows a schematic structural diagram of a video generating apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a receiving module 502 configured to receive a video generation task, wherein the video generation task includes a video identifier and an image identifier;
a replacing module 504, configured to replace an object to be replaced in the video to be replaced indicated by the video identifier based on the replacement image indicated by the image identifier, to obtain a replacement video, and to call a monitoring thread to acquire a task state of the video generation task in a replacing process;
a sending module 506 configured to send the replacement video to a client if it is determined that the task status is task successful.
Optionally, the replacing module 504 is configured to:
perform frame division processing on the video to be replaced to obtain a plurality of video frames;
perform target identification on each of the plurality of video frames, and take a video frame containing the object to be replaced as a target video frame;
replace the object to be replaced in the target video frame based on the replacement image to obtain a replacement video frame; and
generate the replacement video based on the replacement video frame and the video frames of the video to be replaced other than the target video frame (a minimal sketch of this flow follows the list).
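The following sketch summarizes that frame-level flow. The helpers passed in (detect_object, replace_object) stand for the target identification and replacement models; they are hypothetical parameters, not functions defined by the application.

def generate_replacement_video(frames, replacement_image, detect_object, replace_object):
    """Replace the object only in frames that contain it and keep all other frames unchanged."""
    output_frames = []
    for frame in frames:                 # frames obtained by frame division processing
        if detect_object(frame):         # target identification
            output_frames.append(replace_object(frame, replacement_image))
        else:
            output_frames.append(frame)  # non-target frames are carried over as-is
    return output_frames                 # to be spliced back into the replacement video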
Optionally, the replacing module 504 is configured to:
perform three-dimensional reconstruction on the object to be replaced in the target video frame to obtain a three-dimensional model of the object to be replaced, and perform three-dimensional reconstruction on the object in the replacement image to obtain a three-dimensional model of the replacement image;
extract a plurality of feature points of the object to be replaced in the target video frame, and determine feature parameters of the object to be replaced based on the three-dimensional model of the object to be replaced;
construct a target image based on the three-dimensional model of the replacement image, the plurality of feature points and the feature parameters; and
replace the object to be replaced in the target video frame based on the target image to obtain the replacement video frame (a structural sketch follows the list).
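The structural sketch below mirrors that sequence of operations. Every callable it receives (reconstruct_3d, extract_feature_points, derive_feature_parameters, render_target_image, paste) is a hypothetical placeholder for the reconstruction and rendering models the module relies on.

def build_replacement_frame(target_frame, replacement_image,
                            reconstruct_3d, extract_feature_points,
                            derive_feature_parameters, render_target_image, paste):
    """Builds one replacement video frame from one target video frame."""
    model_of_target = reconstruct_3d(target_frame)            # 3D model of the object to be replaced
    model_of_replacement = reconstruct_3d(replacement_image)  # 3D model from the replacement image
    points = extract_feature_points(target_frame)             # feature points of the object to be replaced
    params = derive_feature_parameters(model_of_target)       # feature parameters from its 3D model
    target_image = render_target_image(model_of_replacement, points, params)
    return paste(target_frame, target_image)                  # replace the object in the target video frame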
Optionally, the receiving module 502 is further configured to:
write the video generation task into a task queue.
Optionally, the replacing module 504 is further configured to:
acquire the video generation task from the task queue; and
acquire the video to be replaced indicated by the video identifier and the replacement image indicated by the image identifier (a minimal task queue sketch follows the list).
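A minimal task queue for these two optional steps could look like the sketch below, using Python's standard queue module; the task dictionary layout is an assumption.

import queue

task_queue: "queue.Queue[dict]" = queue.Queue()


def enqueue_video_generation_task(video_id: str, image_id: str) -> None:
    """Receiving side: write the video generation task into the task queue."""
    task_queue.put({"video_id": video_id, "image_id": image_id})


def next_video_generation_task() -> dict:
    """Replacing side: take the next task; the video and image are then loaded by their identifiers."""
    return task_queue.get()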
Optionally, the replacing module 504 is configured to:
call the monitoring thread to acquire the task state of the video generation task when the task state of the video generation task changes.
Optionally, the replacing module 504 is further configured to:
write the acquired task state and the task identifier of the video generation task into a message queue.
Optionally, the replacing module 504 is further configured to:
receive a task state acquisition instruction, wherein the task state acquisition instruction includes a task identifier; and
acquire, from the message queue, the task state of the video generation task indicated by the task identifier and send the task state to the client (an illustrative sketch follows the list).
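The three optional steps above can be pictured with the following sketch, which keeps only the latest state per task identifier; the dictionary-backed store is an illustrative stand-in for the message queue named in the application.

from collections import defaultdict

latest_task_state = defaultdict(lambda: "unknown")


def publish_task_state(task_id: str, state: str) -> None:
    """Written whenever the monitoring thread observes a change in the task state."""
    latest_task_state[task_id] = state


def handle_state_request(task_id: str) -> str:
    """Answers a task state acquisition instruction with the latest known state."""
    return latest_task_state[task_id]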
Optionally, the receiving module 502 is further configured to:
receive and store the replacement image; and
send the storage address of the replacement image to a client.
Optionally, the receiving module 502 is further configured to:
receive an image verification instruction; and
verify the replacement image based on the image verification instruction, acquire a verification result, and send the verification result to the client.
Optionally, the replacing module 504 is further configured to:
send a task failure notification to the client when it is determined that the task state is task failure.
The video generation apparatus provided by the application likewise receives a video generation task, wherein the video generation task includes a video identifier and an image identifier; replaces an object to be replaced in the video to be replaced indicated by the video identifier based on the replacement image indicated by the image identifier to obtain a replacement video, calling a monitoring thread during the replacement process to acquire the task state of the video generation task; and sends the replacement video to a client if it is determined that the task state is task successful. Because the monitoring thread is called to acquire the task state while the replacement video is being generated, the task state can be obtained in time without affecting the video generation task and without a long acquisition delay, which improves the efficiency of acquiring the task state; the user can therefore follow the progress of the video generation task in real time, which improves the user experience.
The above is a schematic scheme of a video generating apparatus of the present embodiment. It should be noted that the technical solution of the video generation apparatus belongs to the same concept as the technical solution of the video generation method, and for details that are not described in detail in the technical solution of the video generation apparatus, reference may be made to the description of the technical solution of the video generation method.
Fig. 6 illustrates a block diagram of a computing device 600 provided according to an embodiment of the present application. The components of the computing device 600 include, but are not limited to, a memory 610 and a processor 620. The processor 620 is coupled to the memory 610 via a bus 630 and a database 650 is used to store data.
Computing device 600 also includes an access device 640, which enables computing device 600 to communicate via one or more networks 660. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the internet. Access device 640 may include one or more of any type of network interface (e.g., a network interface card (NIC)), whether wired or wireless, such as an IEEE 802.11 wireless local area network (WLAN) interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 600, as well as other components not shown in FIG. 6, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 6 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 600 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 600 may also be a mobile or stationary server.
The processor 620 implements the steps of the video generation method described above when executing the instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video generation method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video generation method.
An embodiment of the present application further provides a computer readable storage medium, which stores computer instructions, and when the instructions are executed by a processor, the instructions implement the steps of the video generation method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the video generation method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the video generation method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (14)

1. A video generation method is applied to a server and comprises the following steps:
receiving a video generation task, wherein the video generation task comprises a video identifier and an image identifier;
replacing an object to be replaced in the video to be replaced, which is indicated by the video identification, based on the replacement image, which is indicated by the image identification, to obtain a replacement video, and calling a monitoring thread to acquire a task state of the video generation task in the replacement process;
and sending the replacement video to a client under the condition that the task state is determined to be the task success.
2. The video generation method of claim 1, wherein replacing an object to be replaced in the video to be replaced indicated by the video identification based on the replacement image indicated by the image identification to obtain a replacement video, comprises:
performing frame division processing on the video to be replaced to obtain a plurality of video frames;
performing target identification on each video frame in the plurality of video frames, and taking the video frame with the object to be replaced as a target video frame;
replacing the object to be replaced in the target video frame based on the replacement image to obtain a replacement video frame;
and generating the replacement video based on the replacement video frame and the video frames of the video to be replaced except the target video frame.
3. The video generation method of claim 2, wherein replacing the object to be replaced in the target video frame based on the replacement image to obtain a replacement video frame comprises:
carrying out three-dimensional reconstruction on the object to be replaced in the target video frame to obtain a three-dimensional model of the object to be replaced, and carrying out three-dimensional reconstruction on the object to be replaced in the replacement image to obtain a three-dimensional model of the object to be replaced;
extracting a plurality of feature points of the object to be replaced in the target video frame, and determining feature parameters of the object to be replaced based on a three-dimensional model of the object to be replaced;
constructing a target image based on the three-dimensional model of the replacement image, the plurality of feature points and the feature parameters;
and replacing the object to be replaced in the target video frame based on the target image to obtain a replaced video frame.
4. The video generation method of any of claims 1-3, wherein after receiving the video generation task, further comprising:
and writing the video generation task into a task queue.
5. The video generation method of claim 4, wherein before replacing the object to be replaced in the video to be replaced indicated by the video identification based on the replacement image indicated by the image identification, further comprising:
acquiring the video generation task from the task queue;
and acquiring the video to be replaced indicated by the video identification, and acquiring the replacement image indicated by the image identification.
6. The video generation method of any of claims 1-3, wherein invoking a monitor thread to obtain a task state of the video generation task in an alternative process comprises:
and under the condition that the task state of the video generation task changes, calling the monitoring thread to acquire the task state of the video generation task.
7. The video generation method of claim 6, wherein after invoking the monitor thread to obtain the task state of the video generation task in the replacement process, further comprising:
and writing the acquired task state and the task identifier of the video generation task into a message queue.
8. The video generation method of claim 7, wherein after writing the acquired task state and the task identification of the video generation task into a message queue, further comprising:
receiving a task state acquisition instruction, wherein the task state acquisition instruction comprises a task identifier;
and acquiring the task state of the video generation task indicated by the task identifier from the message queue and sending the task state to the client.
9. The video generation method of claim 1, wherein, prior to receiving the video generation task, further comprising:
receiving and storing the replacement image;
and sending the storage address of the replacement image to a client.
10. The video generation method of claim 9, wherein after receiving and storing the replacement image, further comprising:
receiving an image checking instruction;
and verifying the replacement image based on the image verification instruction, acquiring a verification result and sending the verification result to the client.
11. The video generation method of claim 1, wherein after invoking the monitor thread to obtain the task state of the video generation task in the replacement process, further comprising:
and sending a task failure notice to the client when the task state is determined to be the task failure.
12. A video generation apparatus, comprising:
the receiving module is configured to receive a video generation task, wherein the video generation task comprises a video identifier and an image identifier;
the replacing module is configured to replace an object to be replaced in the video to be replaced, which is indicated by the video identification, based on the replacing image indicated by the image identification to obtain a replacing video, and call a monitoring thread to acquire a task state of the video generating task in a replacing process;
a sending module configured to send the replacement video to a client if it is determined that the task status is task successful.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-11 when executing the instructions.
14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 11.
CN202110496247.7A 2021-05-07 2021-05-07 Video generation method and device Pending CN113242451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110496247.7A CN113242451A (en) 2021-05-07 2021-05-07 Video generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110496247.7A CN113242451A (en) 2021-05-07 2021-05-07 Video generation method and device

Publications (1)

Publication Number Publication Date
CN113242451A true CN113242451A (en) 2021-08-10

Family

ID=77132274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110496247.7A Pending CN113242451A (en) 2021-05-07 2021-05-07 Video generation method and device

Country Status (1)

Country Link
CN (1) CN113242451A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113825018A (en) * 2021-11-22 2021-12-21 环球数科集团有限公司 Video processing management platform based on image processing
CN113923515A (en) * 2021-09-29 2022-01-11 马上消费金融股份有限公司 Video production method and device, electronic equipment and storage medium
CN115174812A (en) * 2022-07-01 2022-10-11 维沃移动通信有限公司 Video generation method, video generation device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080019576A1 (en) * 2005-09-16 2008-01-24 Blake Senftner Personalizing a Video
CN105118082A (en) * 2015-07-30 2015-12-02 科大讯飞股份有限公司 Personalized video generation method and system
CN107798108A (en) * 2017-10-30 2018-03-13 中国联合网络通信集团有限公司 A kind of asynchronous task querying method and equipment
CN111857919A (en) * 2020-07-16 2020-10-30 北京字节跳动网络技术有限公司 Video processing method, device, terminal equipment and medium
CN111866508A (en) * 2020-07-13 2020-10-30 腾讯科技(深圳)有限公司 Video processing method, device, medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN113242451A (en) Video generation method and device
CN107756395B (en) Control system, method and device of intelligent robot
CN104040467B (en) Along with the content consumption of individual's reaction
CN111476871B (en) Method and device for generating video
CN102771082B (en) There is the communication session between the equipment of mixed and interface
US20180341878A1 (en) Using artificial intelligence and machine learning to automatically share desired digital media
US11196962B2 (en) Method and a device for a video call based on a virtual image
CN106572139B (en) Multi-terminal control method, terminal, server and system
CN107360007A (en) Conference implementation method and device and electronic equipment
JP2022002074A (en) Conference reservation method, apparatus, device, and medium realized by computer
US11228683B2 (en) Supporting conversations between customers and customer service agents
CN108363999A (en) Operation based on recognition of face executes method and apparatus
KR101698739B1 (en) Video editing systems and a driving method using video project templates
CN111651731A (en) Method for converting entity product into digital asset and storing same on block chain
US20240185877A1 (en) Method for providing speech video and computing device for executing the method
CN114637450A (en) Automatic processing method and system of business process and electronic equipment
CN110415318B (en) Image processing method and device
CN110446118B (en) Video resource preprocessing method and device and video resource downloading method and device
US11539915B2 (en) Transmission confirmation in a remote conference
CN115484474A (en) Video clip processing method, device, electronic equipment and storage medium
CN111757115A (en) Video stream processing method and device
CN111931465A (en) Method and system for automatically generating user manual based on user operation
US20240184860A1 (en) Methods and arrangements for providing impact imagery
Minev Amplifying Human Content Expertise with Real-World Machine-Learning Workflows
US20230101254A1 (en) Creation and use of digital humans

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210810