CN111447389B - Video generation method, device, terminal and storage medium - Google Patents
- Publication number
- CN111447389B CN111447389B CN202010324210.1A CN202010324210A CN111447389B CN 111447389 B CN111447389 B CN 111447389B CN 202010324210 A CN202010324210 A CN 202010324210A CN 111447389 B CN111447389 B CN 111447389B
- Authority
- CN
- China
- Prior art keywords
- image
- target
- background image
- target window
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Abstract
The embodiments of the present application provide a video generation method, apparatus, terminal, and storage medium. The method includes: acquiring a background image and a foreground image; displaying the foreground image in a target window in the background image to obtain a composite image, wherein the target window contains a target object; and generating a target video according to the composite images. In this technical solution, a foreground image obtained from the network or from local storage on the terminal replaces the environment image or interface image normally captured during video recording, which enriches the background of the recording and improves the diversity and flexibility of video recording.
Description
Technical Field
The embodiments of the present application relate to the field of internet technology, and in particular to a video generation method, apparatus, terminal, and storage medium.
Background
With the continuous development of video technology, a variety of video sharing platforms have emerged. A user can record a video on a terminal and upload the recorded video to a video sharing platform for other users to watch.
In the related art, a recording application is installed on the terminal. When the user triggers the recording application to record a video, the application either calls a camera assembly to capture multiple frames of images, or calls a screen recording application to capture multiple frames of images, thereby implementing the video recording.
In the related art, the images captured by the terminal during video recording are usually environment images (images describing the environment in which the terminal is located) or interface images (images describing the interface currently displayed by the terminal), so the background of the recorded video is relatively monotonous.
Disclosure of Invention
The embodiments of the present application provide a video generation method, apparatus, terminal, and storage medium, which can enrich the background of video recording. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a video generation method, where the method includes:
acquiring a background image and acquiring a foreground image;
displaying the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object;
and generating a target video according to the composite image.
In another aspect, an embodiment of the present application provides a video generating apparatus, where the apparatus includes:
the image acquisition module is used for acquiring a background image;
the image obtaining module is used for obtaining a foreground image;
the image synthesis module is used for displaying the foreground image in a target window in the background image to obtain a synthesized image, wherein the target window comprises the target object;
and the video generation module is used for generating a target video according to the synthetic image.
In yet another aspect, a terminal is provided, the terminal comprising a processor and a memory, the memory storing a computer program, the computer program being loaded and executed by the processor to implement the video generation method of the above aspect.
In yet another aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the video generation method of the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
By providing a display window in the background image, and displaying in that window both a foreground image obtained from the network or from local storage on the terminal and the recorded object, a composite image is obtained, and multiple frames of composite images form the recorded video. Because the foreground image is obtained from the network or from local terminal storage, its variety is greater and its content richer. Compared with the related art, in which only environment images or interface images are captured during recording, the technical solution provided by the embodiments of the present application enriches the background of video recording and improves the diversity and flexibility of video recording.
Drawings
FIG. 1 is a flow chart of a video generation method shown in an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a video generation method shown in another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of generating a composite image, shown in one exemplary embodiment of the present application;
FIG. 4 is a block diagram of a video generation apparatus shown in another exemplary embodiment of the present application;
FIG. 5 is a block diagram of a terminal shown in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
The embodiments of the present application provide a video generation method: a display window is provided in the background image; a foreground image obtained from the network or from local storage on the terminal, together with the recorded object, is displayed in that window to obtain a composite image; and multiple frames of composite images form the recorded video.
In the technical solution provided by the embodiments of the present application, the execution subject of each step may be a terminal, such as a smartphone, a tablet computer, a laptop, or a desktop computer. In some embodiments, a recording application is installed on the terminal, and the execution subject of each step may also be that recording application.
Referring to fig. 1, a flowchart of a video generation method according to an embodiment of the present application is shown. The method may include the following steps:
Step 101: acquire a background image.
The background image includes a target object, which represents the recorded object. The recorded object may be a person, an animal, an article, a virtual character, or the like, which is not limited in the embodiments of the present application.
There may be one background image or multiple background images; the number is not limited in the embodiments of the present application. The terminal is provided with hardware or software having an image capture function, through which images are captured.
In some embodiments, the terminal is configured with hardware having an image capture function (such as a camera) through which the terminal captures a background image. It should be noted that the hardware with the image capturing function may be hardware of the terminal itself, or may be hardware that is independent from the terminal and has a wired connection or a wireless connection with the terminal.
In some embodiments, the terminal is provided with software having an image capture function (such as screen recording software), through which the terminal captures the background image. The background image captured by the screen recording software is the content displayed on the terminal screen.
In other embodiments, the terminal acquires images through the camera and the screen recording software at the same time, and synthesizes the images acquired by the camera and the screen recording software to obtain a background image.
Step 102: acquire a foreground image.
There may be one foreground image or multiple foreground images; the number is not limited in the embodiments of the present application. In one possible implementation, there is a single foreground image, that is, every composite image is synthesized using the same foreground image. In another possible implementation, the number of foreground images equals the number of background images, that is, each composite image is synthesized using a different foreground image.
In some embodiments, the foreground image is a frame of image in the foreground video, and the terminal acquires the foreground video first and then extracts the foreground image from the foreground video. The terminal can obtain the foreground video from a local preset storage path. The terminal can also acquire the foreground video from the network. For example, the terminal may obtain the foreground video from a server corresponding to the recording application.
The execution sequence of acquiring the background image and the foreground image is not limited in the embodiment of the application. The terminal can acquire the background image first and then acquire the foreground image, can acquire the foreground image first and then acquire the background image, and can acquire the background image and acquire the foreground image simultaneously.
When the technical scheme provided by the embodiment of the application is applied to a live broadcast scene, the terminal can acquire the foreground video and extract the foreground image from the foreground video, and then acquire the background image. When the technical scheme provided by the embodiment of the application is applied to a short video recording scene, the terminal can acquire the background image, acquire the foreground video and extract the foreground image from the foreground video.
Step 103: display the foreground image in a target window in the background image to obtain a composite image.
The target window includes a target object. The target window is a preset region for displaying the target object in the foreground image and the background image. The number of the synthetic images may be determined according to the number of the background images, for example, the number of the synthetic images is the same as the number of the background images.
The composition of the composite image may be determined according to the size of the target window. When the size of the target window is the same as that of the display window used to display the background image, the composite image is composed of the foreground image and the target object. When the target window is smaller than the display window, the composite image is composed of the foreground image, the target object, and the part of the background image outside the target window.
In some embodiments, step 103 may be implemented as the following sub-step: replace the display content in the region of the target window other than the target object with the foreground image.
That is, the terminal replaces the display content in the region of the target window other than the target object with the foreground image to obtain the composite image.
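As a concrete illustration, this replacement sub-step can be sketched in a few lines of Python. The sketch below is hypothetical (the patent gives no implementation): images are modeled as 2-D lists of pixel values, and `object_mask` marks the pixels belonging to the target object; all names are illustrative.

```python
def composite_by_replacement(background, foreground, object_mask, window):
    """Replace window content outside the target object with the foreground.

    background  -- H x W image (list of rows of pixel values)
    foreground  -- image at least as large as the target window
    object_mask -- H x W booleans, True where the target object is
    window      -- (top, left, height, width) of the target window
    """
    top, left, h, w = window
    composite = [row[:] for row in background]  # copy; keep background intact
    for y in range(top, top + h):
        for x in range(left, left + w):
            if not object_mask[y][x]:  # the target object stays visible
                composite[y][x] = foreground[y - top][x - left]
    return composite
```

Pixels inside the window that belong to the target object are left untouched, so the recorded object is never blocked by the foreground image.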
In other embodiments, step 103 may be replaced with the following sub-step: display the foreground image superimposed on top of the display content in the target window.
The transparency of the region where the foreground image overlaps the target object is a preset value, which may be set according to actual requirements. For example, when the preset value is 1, the overlapping region of the foreground image and the target object is completely transparent, and the target object is displayed through the overlapping region.
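A minimal sketch of this overlay variant, under the assumption that the target window covers the whole background image; the blend rule and all names are illustrative, not from the patent. A transparency of 1.0 in the overlap keeps the target object fully visible, matching the example above.

```python
def composite_by_overlay(background, foreground, object_mask, alpha=1.0):
    """Superimpose the foreground over the window content.

    `alpha` is the preset transparency of the foreground in the region
    overlapping the target object (1.0 = fully transparent, so the
    object shows through).
    """
    h, w = len(background), len(background[0])
    out = [row[:] for row in background]
    for y in range(h):
        for x in range(w):
            if object_mask[y][x]:
                # blend the foreground over the object pixel by transparency
                out[y][x] = round(alpha * background[y][x]
                                  + (1 - alpha) * foreground[y][x])
            else:
                out[y][x] = foreground[y][x]
    return out
```

With `alpha` between 0 and 1 the object would show through partially, which corresponds to the edge transparencies described for the mask later in the text.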
Step 104: generate a target video according to the composite images.
The terminal combines the multiple composite images frame by frame to obtain the target video.
To sum up, in the technical solution provided by the embodiments of the present application, a display window is provided in the background image, and a foreground image obtained from the network or from local storage on the terminal is displayed in that window together with the recorded object to obtain a composite image; multiple frames of composite images form the recorded video.
Please refer to fig. 2, which shows a flowchart of a video generation method provided by an embodiment of the present application. The method comprises the following steps:
The background image includes a target object.
Referring collectively to FIG. 3, a schematic diagram of an interface for generating a composite image according to an embodiment of the present application is shown. The background image 31 includes a target object 311.
To avoid the foreground image blocking the target object when the composite image is generated, the target object needs to be marked before the composite image is generated.
In some embodiments, the terminal marks the target object in the background image by the following sub-steps:
(1) Carrying out graying processing on the background image to obtain a grayed background image;
the graying processing is to convert a background image into a grayscale image, and the grayscale image is to represent an image of each pixel point by 8-bit (0-255) grayscale values. The gray values are used to represent the shades of the colors.
In the embodiments of the present application, the terminal may perform graying processing on the background image using the averaging method, the maximum-minimum averaging method, the weighted averaging method, or the like, to obtain the grayed background image.
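The three averaging methods named above can be sketched per pixel as follows. The weights in the weighted variant are the common ITU-R BT.601 luma coefficients — an assumption for illustration, since the patent does not specify which weights are used.

```python
def to_gray(pixel, method="weighted"):
    """Convert an (R, G, B) pixel to an 8-bit gray value."""
    r, g, b = pixel
    if method == "average":
        # plain averaging method
        return round((r + g + b) / 3)
    if method == "max_min_average":
        # maximum-minimum averaging method
        return round((max(r, g, b) + min(r, g, b)) / 2)
    # weighted averaging method; BT.601 weights are an assumed choice
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

Applying `to_gray` to every pixel of the background image yields the grayed background image used in the next sub-step.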
(2) And carrying out image segmentation on the background image subjected to the gray processing to obtain a background image marked with the target object.
The background image with the target object marked may be a single-channel mask image, in which the transparency of the target object is 1 (completely opaque), the transparency of the region other than the target object is 0 (completely transparent), and the transparency of the edges of the target object is between 0 and 1.
When the target object is a portrait or a virtual character, a human-body segmentation technique may be used to segment the grayed background image. The algorithm used for the image segmentation may be an image edge segmentation algorithm, an image threshold segmentation algorithm, a region-based segmentation algorithm, a morphological watershed algorithm, or the like, which is not limited in the embodiments of the present application.
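Of the segmentation algorithms listed, threshold segmentation is the simplest to sketch. The fragment below is purely illustrative: it assumes the target object is brighter than a threshold (the patent does not say which side of the threshold the object lies on) and produces the hard 0/1 part of the single-channel mask described above, without the soft edge values a real segmenter would yield.

```python
def threshold_mask(gray_image, threshold=128):
    # 1 (opaque) where the pixel is taken to belong to the target object,
    # 0 (transparent) elsewhere; `threshold` and the brighter-than-threshold
    # assumption are illustrative choices, not from the patent.
    return [[1 if px >= threshold else 0 for px in row] for row in gray_image]
```

The resulting mask can serve directly as the `object_mask` consumed by the compositing sub-steps of step 103.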
Referring to fig. 3 in combination, the terminal marks a target object 311 in the background image 31.
Step 204: set display parameters of the target window.
The display parameters of the target window include at least one of: length, width, vertex coordinates. The display parameters of the target window may be determined according to the position of the target object in the background image and the size of the target object. The display parameters of the target window may be set by default by the terminal or may be set by the user in a user-defined manner, which is not limited in the embodiment of the present application.
Specifically, the length and width of the target window may be determined according to the size of the target object. In some embodiments, the length of the target window is greater than the length of the target object and less than or equal to the length of the background image, and the width of the target window is greater than the width of the target object and less than or equal to the width of the background image. The vertex coordinates of the target window may be determined according to the position of the target object in the background image.
The terminal can uniquely determine the target window according to the display parameters of the target window.
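One way to derive display parameters satisfying these constraints from the target object's bounding box is sketched below; the `margin` padding and all names are illustrative assumptions, not from the patent.

```python
def target_window_from_object(obj_box, image_size, margin=10):
    """Derive target-window parameters (vertex, width, height) from the
    object's bounding box: larger than the object, clamped so the window
    never exceeds the background image."""
    ox, oy, ow, oh = obj_box        # object: top-left x, y, width, height
    img_w, img_h = image_size
    left = max(0, ox - margin)      # vertex coordinates
    top = max(0, oy - margin)
    width = min(img_w - left, ow + 2 * margin)
    height = min(img_h - top, oh + 2 * margin)
    return (left, top, width, height)
```

Given these parameters (vertex coordinates plus length and width), the window is uniquely determined, as the text notes.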
Step 206: display the foreground image in the target window in the background image to obtain a composite image.
The target window includes a target object. Referring to fig. 3 in combination, the terminal displays the foreground image 32 and the target object 311 in the target window 312 in the background image 31, resulting in the composite image 33.
To sum up, in the technical solution provided by the embodiments of the present application, a display window is provided in the background image, and a foreground image obtained from the network or from local storage on the terminal is displayed in that window together with the recorded object to obtain a composite image; multiple frames of composite images form the recorded video.
In the following, embodiments of the apparatus of the present application are described, and for portions of the embodiments of the apparatus not described in detail, reference may be made to technical details disclosed in the above-mentioned method embodiments.
Referring to fig. 4, a block diagram of a video generation apparatus according to an exemplary embodiment of the present application is shown. The video generation apparatus may be implemented as all or a part of the terminal by software, hardware, or a combination of both. The device includes: an image acquisition module 401, an image acquisition module 402, an image composition module 403, and a video generation module 404.
And an image acquisition module 401, configured to acquire a background image.
An image obtaining module 402, configured to obtain a foreground image.
An image synthesizing module 403, configured to display the foreground image in a target window in the background image to obtain a synthesized image, where the target window includes the target object.
A video generating module 404, configured to generate a target video according to the composite image.
In summary, in the technical solution provided by the embodiments of the present application, a display window is provided in the background image, and a foreground image obtained from the network or from local storage on the terminal is displayed in that window together with the recorded object to obtain a composite image; multiple frames of composite images form the recorded video.
In an alternative embodiment provided based on the embodiment shown in fig. 4, the image synthesis module 403 is configured to replace the displayed content in the region except for the target object in the target window with the foreground image.
In an optional embodiment provided based on the embodiment shown in fig. 4, the image synthesis module 403 is configured to superimpose the foreground image on top of the display content in the target window, where the transparency of the region where the foreground image overlaps the target object is a preset value.
In an optional embodiment provided based on the embodiment shown in fig. 4, the apparatus further includes: an object tagging module (not shown in fig. 4).
And the object marking module is used for marking the target object in the background image.
Optionally, the object labeling module is configured to:
carrying out graying processing on the background image to obtain a grayed background image;
and performing image segmentation on the background image subjected to the graying processing to obtain the background image marked with the target object.
In an optional embodiment provided based on the embodiment shown in fig. 4, the apparatus further includes: a window determination module (not shown in fig. 4).
A window determination module to:
setting display parameters of the target window, wherein the display parameters of the target window comprise at least one of the following items: length, width, vertex coordinates;
and determining the target window according to the display parameters of the target window.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments, which are not described herein again.
Fig. 5 shows a block diagram of a terminal 500 according to an exemplary embodiment of the present application. The terminal 500 may be: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 501 may be implemented in at least one of the following hardware forms: Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 501 may also include a main processor and a coprocessor, where the main processor, also called a Central Processing Unit (CPU), processes data in the awake state, and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 501 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen.
In some embodiments, the terminal 500 may further optionally include a peripheral interface 503 and at least one peripheral. The processor 501, the memory 502, and the peripheral interface 503 may be connected by buses or signal lines, and each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripherals include at least one of: radio frequency circuitry 504, touch display screen 505, camera assembly 506, audio circuitry 507, positioning component 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency (RF) circuit 504 is used for receiving and transmitting RF signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 504 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wireless Fidelity (WiFi) networks. In some embodiments, the radio frequency circuit 504 may also include Near Field Communication (NFC) related circuitry, which is not limited in this application.
The display screen 505 is used to display a User Interface (UI), which may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, it can also capture touch signals on or over its surface; such a touch signal may be input to the processor 501 as a control signal for processing. In that case, the display screen 505 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there is one display screen 505, provided on the front panel of the terminal 500; in other embodiments, there are at least two display screens 505, disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display screen 505 is a flexible display screen disposed on a curved or folded surface of the terminal 500. The display screen 505 may even be arranged as a non-rectangular irregular figure, i.e. an irregularly shaped screen. The display screen 505 may be made using materials such as a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display.
The positioning component 508 is used to locate the current geographic location of the terminal 500 for navigation or Location Based Services (LBS). The positioning component 508 may be based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
In some embodiments, the terminal 500 also includes one or more sensors 510, including but not limited to: an acceleration sensor 511, a gyro sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 501 may control the touch screen 505 to display the user interface in a landscape view or a portrait view according to the acceleration signal of gravity collected by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side bezel of the terminal 500 and/or an underlying layer of the touch display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, a holding signal of the terminal 500 by the user can be detected, and the processor 501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the touch display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 514 is used for collecting a fingerprint of the user, and the processor 501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 514 may be provided on the front, rear, or side of the terminal 500. When a physical button or a vendor Logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor Logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 based on the ambient light intensity collected by the optical sensor 515: when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, it is decreased. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
The proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500 and is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that this distance gradually decreases, the processor 501 controls the touch display screen 505 to switch from the bright-screen state to the off-screen state; when the proximity sensor 516 detects that the distance gradually increases, the processor 501 controls the touch display screen 505 to switch from the off-screen state back to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored; the computer program is loaded and executed by a processor of a terminal to implement the video generation method in the above method embodiments.
Alternatively, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided which, when executed, implements the video generation method provided in the above method embodiments.
It should be understood that reference herein to "a plurality" means two or more. "And/or" describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following associated objects. As used herein, the terms "first," "second," and the like do not denote any order, quantity, or importance, but are merely used to distinguish one element from another.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
The above description covers only exemplary embodiments of the application and is not intended to limit it; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the application shall fall within its protection scope.
Claims (9)
1. A method of video generation, the method comprising:
acquiring a background image and acquiring a foreground image;
displaying the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object, and the display parameters of the target window comprise at least one of the following parameters: length, width and vertex coordinates, wherein the display parameters of the target window can be determined according to the position of the target object in the background image and the size of the target object;
generating a target video according to the composite image;
wherein the displaying the foreground image in a target window in the background image to obtain a composite image comprises: displaying the foreground image on the upper layer of the display content in the target window in an overlapping manner; and the transparency of the overlapping area of the foreground image and the target object is a preset value.
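The overlay step of claim 1 amounts to a per-pixel alpha blend of the foreground over the target-window region of the background. The patent fixes only that the overlap with the target object has a preset transparency; the blend formula, grayscale representation, and opacity value below are assumptions made for a minimal pure-Python sketch.

```python
def composite(background, foreground, window, fg_opacity=0.7):
    """Return a copy of `background` (2-D grayscale list) with
    `foreground` overlaid on the target window (x, y, w, h),
    alpha-blended per pixel with the preset foreground opacity."""
    x, y, w, h = window
    out = [row[:] for row in background]  # leave the original untouched
    for j in range(h):
        for i in range(w):
            bg = out[y + j][x + i]
            fg = foreground[j][i]
            out[y + j][x + i] = round(fg_opacity * fg + (1 - fg_opacity) * bg)
    return out

bg = [[0] * 4 for _ in range(4)]          # 4x4 black background
fg = [[100, 100], [100, 100]]             # 2x2 gray foreground
blended = composite(bg, fg, (1, 1, 2, 2), fg_opacity=0.5)
print(blended[1][1])  # blended pixel inside the window -> 50
print(blended[0][0])  # pixel outside the window is unchanged -> 0
```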
2. The method of claim 1, wherein the displaying the foreground image within a target window in the background image, resulting in a composite image, further comprises:
replacing display content in a region within the target window other than the target object with the foreground image.
3. The method of claim 1, wherein after the acquiring the background image, further comprising:
marking the target object in the background image.
4. The method of claim 3, wherein said marking the target object in the background image comprises:
carrying out graying processing on the background image to obtain a grayed background image;
and carrying out image segmentation on the background image subjected to the graying processing to obtain the background image marked with the target object.
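The two steps of claim 4 can be illustrated minimally as grayscale conversion followed by segmentation. The patent does not name a segmentation algorithm, so the standard luma weights and a global threshold below are stand-ins for whatever method an implementation would actually use.

```python
def to_gray(rgb_image):
    """Grayscale a 2-D list of (r, g, b) tuples via the standard luma weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def segment(gray_image, threshold=128):
    """Binary mask marking pixels brighter than `threshold` as the
    target object (1) and the rest as background (0)."""
    return [[1 if p > threshold else 0 for p in row] for row in gray_image]

gray = to_gray([[(255, 255, 255), (0, 0, 0)]])
print(gray)           # [[255, 0]]
print(segment(gray))  # [[1, 0]] -- bright pixel marked as the target object
```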
5. The method according to any one of claims 1 to 4, wherein before the displaying of the foreground image in a target window in the background image to obtain a composite image, the method further comprises:
setting display parameters of the target window;
and determining the target window according to the display parameters of the target window.
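Claims 1 and 5 state that the window's display parameters (vertex coordinates, length, width) can be determined from the target object's position and size. One hypothetical way to do this is to pad the object's bounding box; the margin value and dictionary keys below are illustrative, not specified by the patent.

```python
def window_from_object(obj_x, obj_y, obj_w, obj_h, margin=10):
    """Derive the target window's display parameters from the target
    object's position (top-left corner) and size, padded by a margin."""
    x = max(obj_x - margin, 0)  # keep the window inside the image
    y = max(obj_y - margin, 0)
    return {"vertex": (x, y),
            "length": obj_w + 2 * margin,
            "width": obj_h + 2 * margin}

print(window_from_object(50, 40, 100, 80))
# {'vertex': (40, 30), 'length': 120, 'width': 100}
```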
6. A video generation apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a background image;
the image acquisition module is used for acquiring a foreground image;
an image synthesis module, configured to display the foreground image in a target window in the background image to obtain a synthesized image, where the target window includes a target object, and a display parameter of the target window includes at least one of: length, width, vertex coordinates, and the display parameters of the target window can be determined according to the position of the target object in the background image and the size of the target object;
the video generation module is used for generating a target video according to the synthetic image;
the image synthesis module is used for displaying the foreground image on the upper layer of the display content in the target window in an overlapping mode; and the transparency of the overlapping area of the foreground image and the target object is a preset value.
7. The apparatus of claim 6, wherein the image synthesis module is further configured to replace display content in a region other than the target object within the target window with the foreground image.
8. A terminal, characterized in that it comprises a processor and a memory, said memory storing a computer program which is loaded and executed by said processor to implement the video generation method according to any one of claims 1 to 5.
9. A computer-readable storage medium, in which a computer program is stored which is loaded and executed by a processor to implement the video generation method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010324210.1A CN111447389B (en) | 2020-04-22 | 2020-04-22 | Video generation method, device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111447389A CN111447389A (en) | 2020-07-24 |
CN111447389B true CN111447389B (en) | 2022-11-04 |
Family
ID=71654249
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010324210.1A Active CN111447389B (en) | 2020-04-22 | 2020-04-22 | Video generation method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111447389B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112037227B (en) * | 2020-09-09 | 2024-02-20 | 脸萌有限公司 | Video shooting method, device, equipment and storage medium |
CN112860209A (en) * | 2021-02-03 | 2021-05-28 | 合肥宏晶微电子科技股份有限公司 | Video overlapping method and device, electronic equipment and computer readable storage medium |
CN113873272B (en) * | 2021-09-09 | 2023-12-15 | 北京都是科技有限公司 | Method, device and storage medium for controlling background image of live video |
CN113873273B (en) * | 2021-09-09 | 2023-12-26 | 北京都是科技有限公司 | Method, device and storage medium for generating live video |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101835023A (en) * | 2009-03-10 | 2010-09-15 | 英华达(西安)通信科技有限公司 | Method for changing background of video call |
CN104902189A (en) * | 2015-06-24 | 2015-09-09 | 小米科技有限责任公司 | Picture processing method and picture processing device |
CN107507216A (en) * | 2017-08-17 | 2017-12-22 | 北京觅己科技有限公司 | The replacement method of regional area, device and storage medium in image |
CN107707823A (en) * | 2017-10-18 | 2018-02-16 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN108174082A (en) * | 2017-11-30 | 2018-06-15 | 维沃移动通信有限公司 | The method and mobile terminal of a kind of image taking |
CN108737730A (en) * | 2018-05-25 | 2018-11-02 | 优酷网络技术(北京)有限公司 | Image combining method and device |
CN108965769A (en) * | 2018-08-28 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Image display method and device |
CN109872297A (en) * | 2019-03-15 | 2019-06-11 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110290425A (en) * | 2019-07-29 | 2019-09-27 | 腾讯科技(深圳)有限公司 | A kind of method for processing video frequency, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111447389A (en) | 2020-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110502954B (en) | Video analysis method and device | |
CN108401124B (en) | Video recording method and device | |
CN109191549B (en) | Method and device for displaying animation | |
CN111447389B (en) | Video generation method, device, terminal and storage medium | |
CN108449641B (en) | Method, device, computer equipment and storage medium for playing media stream | |
CN111464749B (en) | Method, device, equipment and storage medium for image synthesis | |
CN108965922B (en) | Video cover generation method and device and storage medium | |
CN108616776B (en) | Live broadcast analysis data acquisition method and device | |
CN111880888B (en) | Preview cover generation method and device, electronic equipment and storage medium | |
CN110225390B (en) | Video preview method, device, terminal and computer readable storage medium | |
CN110827195B (en) | Virtual article adding method and device, electronic equipment and storage medium | |
CN110839174A (en) | Image processing method and device, computer equipment and storage medium | |
CN111754386A (en) | Image area shielding method, device, equipment and storage medium | |
CN111083526B (en) | Video transition method and device, computer equipment and storage medium | |
CN110189348B (en) | Head portrait processing method and device, computer equipment and storage medium | |
CN110503159B (en) | Character recognition method, device, equipment and medium | |
CN109819314B (en) | Audio and video processing method and device, terminal and storage medium | |
CN110769120A (en) | Method, device, equipment and storage medium for message reminding | |
CN112396076A (en) | License plate image generation method and device and computer storage medium | |
CN111586279A (en) | Method, device and equipment for determining shooting state and storage medium | |
CN110992268B (en) | Background setting method, device, terminal and storage medium | |
CN112738606A (en) | Audio file processing method and device, terminal and storage medium | |
CN112235650A (en) | Video processing method, device, terminal and storage medium | |
CN112419143A (en) | Image processing method, special effect parameter setting method, device, equipment and medium | |
CN111327819A (en) | Method, device, electronic equipment and medium for selecting image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |