CN112218154A - Video acquisition method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN112218154A
Authority
CN
China
Prior art keywords
video
target
client
segment
replacement
Prior art date
Legal status
Granted
Application number
CN201910629661.3A
Other languages
Chinese (zh)
Other versions
CN112218154B (en)
Inventor
谭伟林
龙福康
谢昕虬
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910629661.3A priority Critical patent/CN112218154B/en
Publication of CN112218154A publication Critical patent/CN112218154A/en
Application granted granted Critical
Publication of CN112218154B publication Critical patent/CN112218154B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video acquisition method and device, a storage medium and an electronic device. The method comprises: acquiring an initial video through a client running on a target terminal, wherein the initial video comprises a plurality of temporally consecutive video segments, and the plurality of video segments include a target video segment to be replaced; playing the plurality of video segments in sequence through the client; when playback reaches the target video segment, calling a video shooting component of the target terminal through the client to shoot video, so as to obtain a first replacement video segment for the target video segment; and performing video composition using the first replacement video segment and the video segments other than the target video segment, so as to obtain a target video. The invention solves the technical problem in the related art that the video composition processing flow of the video split-mirror (shot) replacement approach is complex.

Description

Video acquisition method and device, storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to a video acquisition method and device, a storage medium and an electronic device.
Background
At present, a user can obtain a replacement video segment by shooting a scene corresponding to a split mirror (shot) of a target object in an original video, replace that split mirror with the replacement video segment, and composite it with the other shots of the predetermined video to obtain a secondarily created video. This split-mirror replacement approach can be applied, for example, to parody imitation in mobile-phone short videos.
For example, consider the video clips of Zhou Xingchi (Stephen Chow) and Zhang Baizhi (Cecilia Cheung) in the movie "King of Comedy": (Zhou) "Can we not go to work?", (Zhang) "If you don't work, will you support me?", (Zhou) "I'll support you". The user can synthesize a new video according to these split mirrors: (replacement) "Can we not go to work?", (Zhang) "If you don't work, will you support me?", (replacement) "I'll support you". The secondarily created short video is obtained by replacing Zhou Xingchi's split mirrors with the user's own footage and compositing them with the original Zhang Baizhi footage.
However, the above video processing approach requires certain shooting skills and post-production video processing for the composition, which is difficult for an ordinary user to complete; that is, the video split-mirror replacement approach in the related art has the problem of a complex video composition processing flow.
Disclosure of Invention
The embodiment of the invention provides a video acquisition method and device, a storage medium and an electronic device, which at least solve the technical problem of complex video synthesis processing flow in a video split-mirror replacement mode in the related technology.
According to an aspect of the embodiments of the present invention, there is provided a method for acquiring a video, including: the method comprises the steps that an initial video is obtained through a client running on a target terminal, wherein the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise target video segments to be replaced; playing a plurality of video segments in sequence through a client; under the condition that the target video segment is played, calling a video shooting component of the target terminal through the client to shoot the video, and obtaining a first replacement video segment of the target video segment; and performing video composition by using other video segments except the target video segment in the plurality of video segments and the first replacement video segment to obtain a target video.
According to another aspect of the embodiments of the present invention, there is also provided a video acquisition apparatus, including: the system comprises an acquisition unit, a storage unit and a processing unit, wherein the acquisition unit is used for acquiring an initial video through a client running on a target terminal, the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise target video segments to be replaced; the playing unit is used for sequentially playing a plurality of video segments through the client; the shooting unit is used for calling a video shooting component of the target terminal through the client to carry out video shooting under the condition that the target video segment is played, so as to obtain a first replacement video segment of the target video segment; and the synthesizing unit is used for carrying out video synthesis by using other video segments except the target video segment in the plurality of video segments and the first replacement video segment to obtain the target video.
According to a further aspect of the embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to perform the above method when executed.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the method by the computer program.
In the embodiment of the invention, the client acquires an initial video containing a plurality of temporally consecutive video segments (split mirrors) and shoots the replacement video segment when playback reaches the target video segment to be replaced. Specifically, the initial video is acquired through a client running on a target terminal, wherein the initial video comprises a plurality of temporally consecutive video segments that include a target video segment to be replaced; the plurality of video segments are played in sequence through the client; when playback reaches the target video segment, the client calls a video shooting component of the target terminal to shoot video, obtaining a first replacement video segment for the target video segment; and video composition is performed using the first replacement video segment and the video segments other than the target video segment to obtain a target video. Because the user does not need to perform the shot-splitting and video composition manually, the video composition processing flow is simplified, which further solves the technical problem in the related art that the video composition processing flow of the video split-mirror replacement approach is complex.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of an application environment of a video acquisition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an alternative video acquisition method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative video acquisition method according to an embodiment of the invention;
FIG. 4 is a schematic diagram of an alternative video acquisition method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of an alternative video capture method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an alternative video capture method according to an embodiment of the invention;
FIG. 7 is a schematic diagram of an alternative video capture method according to an embodiment of the invention;
FIG. 8 is a schematic diagram of an alternative video capture method according to an embodiment of the invention;
FIG. 9 is a schematic diagram of an alternative video capture method according to an embodiment of the invention;
FIG. 10 is a schematic diagram of yet another alternative video acquisition method according to an embodiment of the invention;
fig. 11 is a schematic flow chart of an alternative video acquisition method according to an embodiment of the invention;
fig. 12 is a schematic structural diagram of an alternative video acquisition apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, a method for acquiring a video is provided. Alternatively, the above video acquisition method can be applied, but not limited, to the application environment shown in fig. 1. As shown in fig. 1, a client may run on the terminal device 102, and an initial video is obtained by the client running on the terminal device 102, where the initial video includes a plurality of temporally consecutive video segments, and the plurality of video segments include a target video segment to be replaced; sequentially playing the plurality of video segments through the client; under the condition that the target video segment is played, calling a video shooting component of the terminal device 102 through the client to carry out video shooting to obtain a first replacement video segment of the target video segment; and performing video composition by using other video segments except the target video segment in the plurality of video segments and the first replacement video segment to obtain a target video.
After obtaining the target video, the terminal device 102 may upload the obtained target video to the server 106 through the network 104, and complete distribution of the target video.
Optionally, in this embodiment, the terminal device may include, but is not limited to, at least one of the following: mobile phones, tablet computers, PCs, and the like. Such networks may include, but are not limited to: a wired network, a wireless network, wherein the wired network comprises: a local area network, a metropolitan area network, and a wide area network, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communication. The server may include, but is not limited to, at least one of: PCs and other devices for providing model training functions. The above is only an example, and the present embodiment is not limited to this.
Optionally, in this embodiment, as an optional implementation manner, as shown in fig. 2, the method for acquiring a video may include:
step S202, an initial video is obtained through a client running on a target terminal, wherein the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise target video segments to be replaced;
step S204, a plurality of video segments are played in sequence through the client;
step S206, under the condition that the target video segment is played, calling a video shooting component of the target terminal through the client to shoot the video, and obtaining a first replacement video segment of the target video segment;
in step S208, video composition is performed using the first replacement video segment and other video segments of the plurality of video segments except the target video segment, so as to obtain a target video.
Optionally, the above video acquisition method may be applied to, but not limited to, a video synthesis process. For example, during shooting of short videos of mobile phones.
It should be noted that the initial video may correspond to a story template, and the plurality of video segments in the initial video correspond to the plurality of split mirrors in the story template. The video segments other than the target video segment may be played normally or skipped. The target video segment itself may not be played; instead, the client may jump to the video recording interface to record the replacement video segment, or the target video segment may be replayed in a floating-layer window on the recording interface (e.g., at the lower right corner, the lower left corner, the upper right corner, or the upper left corner) so that the user can follow it while recording the replacement video segment.
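As a small illustration of the floating-layer guide window, the Kotlin sketch below computes where such a window could be placed for each of the four corners mentioned above; the size fraction, margin, and 9:16 preview ratio are assumptions made for the example.

```kotlin
enum class Corner { TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT }
data class WindowRect(val left: Int, val top: Int, val width: Int, val height: Int)

// Places the floating guide window (which replays the target split mirror while the
// user records the replacement) in one of the four corners. Sizes are illustrative.
fun guideWindowRect(screenW: Int, screenH: Int, corner: Corner,
                    widthFraction: Float = 0.3f, margin: Int = 16): WindowRect {
    val w = (screenW * widthFraction).toInt()
    val h = w * 16 / 9                                 // assume a 9:16 portrait preview
    return when (corner) {
        Corner.TOP_LEFT -> WindowRect(margin, margin, w, h)
        Corner.TOP_RIGHT -> WindowRect(screenW - w - margin, margin, w, h)
        Corner.BOTTOM_LEFT -> WindowRect(margin, screenH - h - margin, w, h)
        Corner.BOTTOM_RIGHT -> WindowRect(screenW - w - margin, screenH - h - margin, w, h)
    }
}
```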
In addition to the story template, as long as the video can be cut into a plurality of video segments, the video can be synthesized by using the above video acquisition method.
This is explained below with reference to an alternative example. A client of the target application is installed on the mobile phone terminal, and the target application provides a video composition function. The user opens the client and turns on the camera to record; tabs for the various video playing modes are arranged at the bottom of the client's display interface, and the user can enter the shooting interface of a story template (as shown in fig. 3) by selecting the story-template tab. A favorite story template can then be selected for shooting by swiping horizontally.
The story template may be a short video (which may be a movie clip, an MV, an animation, a video obtained by cutting a video uploaded by a user, or the like) provided to the client by the client's background server; the short video is cut according to its split mirrors to form the story template (as shown in fig. 4, split mirror 2 and split mirror 4 are cut out, and split mirrors 1, 3, and 5 are kept as the story template provided to the user for shooting).
After selecting a particular story template, the user may perform a video shoot, the flow of which may include the following steps:
S1, the mobile phone plays the short video of <split mirror 1>;
S2, after <split mirror 1> has been played, the mobile phone enters shooting mode and guides the user to shoot a short video (split mirror 2, shot by the user);
S3, the mobile phone plays the short video of <split mirror 3>;
S4, after <split mirror 3> has been played, the mobile phone enters shooting mode and guides the user to shoot a short video (split mirror 4, shot by the user);
S5, the mobile phone plays the short video of <split mirror 5>.
After the video shooting process is finished, video synthesis can be performed, MediaCodec can be used for video synthesis, and the split mirror 1, the split mirror 2 shot by the user, the split mirror 3, the split mirror 4 shot by the user, and the split mirror 5 are subjected to video synthesis to obtain a short synthesized video (as shown in fig. 5).
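As one possible realization of this composition step, the Kotlin sketch below stream-copies the split-mirror files into a single MP4 with Android's MediaExtractor and MediaMuxer. It is a simplified sketch that assumes a non-empty clip list whose clips all share the same codec configuration; when the user-shot clips differ from the template clips, the MediaCodec-based decode/re-encode mentioned above would be needed instead.

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Sketch only: stream-copies the clips, so it assumes every clip was encoded with the
// same video/audio parameters; differing clips would need a MediaCodec decode/re-encode.
fun concatenateClips(clipPaths: List<String>, outputPath: String) {
    val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    var videoTrack = -1
    var audioTrack = -1
    var timeOffsetUs = 0L
    val buffer = ByteBuffer.allocate(2 shl 20)
    val info = MediaCodec.BufferInfo()

    clipPaths.forEachIndexed { clipIndex, path ->
        val extractor = MediaExtractor()
        extractor.setDataSource(path)
        for (i in 0 until extractor.trackCount) {
            val format = extractor.getTrackFormat(i)
            val mime = format.getString(MediaFormat.KEY_MIME) ?: continue
            if (!mime.startsWith("video/") && !mime.startsWith("audio/")) continue
            if (clipIndex == 0) {                      // register output tracks from the first clip
                if (mime.startsWith("video/")) videoTrack = muxer.addTrack(format)
                else audioTrack = muxer.addTrack(format)
            }
            extractor.selectTrack(i)
        }
        if (clipIndex == 0) muxer.start()

        var clipEndUs = 0L
        while (true) {
            val size = extractor.readSampleData(buffer, 0)
            if (size < 0) break
            val mime = extractor.getTrackFormat(extractor.sampleTrackIndex)
                .getString(MediaFormat.KEY_MIME) ?: ""
            info.offset = 0
            info.size = size
            info.presentationTimeUs = extractor.sampleTime + timeOffsetUs
            info.flags = extractor.sampleFlags         // sync-sample flag maps to key-frame flag
            muxer.writeSampleData(if (mime.startsWith("video/")) videoTrack else audioTrack,
                                  buffer, info)
            clipEndUs = maxOf(clipEndUs, extractor.sampleTime)
            extractor.advance()
        }
        timeOffsetUs += clipEndUs                      // place the next clip after this one
        extractor.release()
    }
    muxer.stop()
    muxer.release()
}
```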
According to the video composition method and device, the mode that the client side obtains the initial video containing the plurality of video segments (split mirrors) in continuous time and shoots the replacement video segment when the target video segment to be replaced is played is adopted, the technical problem that the video composition processing flow is complex in the video split mirror replacement mode in the related technology is solved, the video composition processing flow is simplified, and the user experience is improved.
Optionally, in step S202, the target terminal acquires an initial video through a client running on the target terminal, where the initial video includes a plurality of temporally consecutive video segments, and the plurality of video segments includes the target video segment to be replaced.
And a client runs on the target terminal and can be used for video synthesis. After detecting a click operation performed on a client icon on the desktop of the target terminal, the client is opened (as shown in fig. 6).
After entering the client, a tab of various processing manners of the video may be displayed at the bottom of the client (for example, as shown in fig. 3, the tab may include "story template", "text", "default", album, etc.). "story templates" are creations of existing propositions, each template may correspond to a video clip of a movie or music piece. Under the mode of a story template, video production can be carried out in a segmented synthesis mode, a user video and an original video correspond to different sub-mirrors and are not on the same screen, and the sense of incongruity of the synthesized video can be reduced.
After the touch operation performed on the button of the story template is detected, a selection interface of the story template can be entered, and the segmented video shooting and composition are triggered.
It should be noted that the "story template" just triggers an example of the segmented video shooting and composition, and may also trigger the segmented video shooting and composition in other manners, for example, directly triggering by clicking a button on a desktop, triggering by a client of another application, or triggering by another manner on the client. The manner of triggering the segmented video capture and composition is not particularly limited in this embodiment.
For the shooting and composition of the segmented video, an initial video may be first acquired, which may contain a plurality of video segments in temporal succession, including a target video segment to be replaced, where the temporal succession is a succession in a specific scene, for example, a succession in a movie work, a succession in a music work, and so on. The initial video may be obtained in a number of ways.
As an alternative embodiment, the obtaining of the initial video by the client running on the target terminal may include: detecting, by a client, a first operation performed on a first button corresponding to an initial video displayed on the client; and responding to the first operation, downloading the initial video from the first server, wherein the initial video is stored in the first server.
The first button may correspond to a particular initial video. After detecting a first operation performed by a first button corresponding to an initial video, a request message for requesting the initial video may be sent to the first server in response to the first operation, where the request message may carry identification information of the initial video, so that the first server may determine the requested initial video.
After receiving the request message, the first server may match the initial video from the stored plurality of videos using the identification information, and transmit the matched initial video to the client.
After the initial video is first acquired by the first server, the target terminal may save the initial video in a specific storage area in the target terminal. After detecting the first operation performed by the first button corresponding to the initial video, the target terminal may retrieve the initial video from the specific storage area.
Through this embodiment, the initial video (for example, an officially operated story template) is provided to the client by the first server; because the initial video is pre-cut and the video material is prepared in advance (both the split mirrors and the material are ready), the quality of the composite video can be ensured.
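To make the download-and-cache behaviour concrete, here is a minimal Kotlin sketch; the endpoint path and the template_id query parameter are assumptions for illustration, not a documented server API.

```kotlin
import java.io.File
import java.net.HttpURLConnection
import java.net.URL

// Sketch of the request/caching behaviour described above; the endpoint and the
// "template_id" parameter are illustrative assumptions.
fun fetchInitialVideo(serverBase: String, templateId: String, cacheDir: File): File {
    val cached = File(cacheDir, "template_$templateId.mp4")
    if (cached.exists()) return cached                 // already downloaded once: reuse the local copy

    val connection = URL("$serverBase/story-template?template_id=$templateId")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "GET"
    connection.inputStream.use { input ->
        cached.outputStream().use { output -> input.copyTo(output) }
    }
    connection.disconnect()
    return cached
}
```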
As another alternative embodiment, the obtaining of the initial video by the client running on the target terminal may include: detecting, by the client, a second operation performed on a second button displayed on the client; responding to the second operation, and acquiring a video to be cut stored in the target terminal through the client; the method comprises the steps of obtaining an initial video by using a video to be cut through a client, wherein the initial video is obtained by cutting the video to be cut.
The initial video can also be obtained after the video to be cut and uploaded by the user is cut. After detecting a second operation performed on a second button (e.g., "upload") displayed on the client, entering a selection interface of the video file saved in the target terminal, and selecting the video to be cropped under the target path. The initial video can be obtained by cutting the video to be cut.
Through this embodiment, obtaining the video to be cropped from the target terminal and deriving the initial video from it can satisfy the demands of different users, improve the flexibility of the video acquisition approach, and thereby improve user experience.
The video to be cut can be cut in various modes, the video to be cut can be cut by a user, and the video to be cut can also be cut by a server.
As an optional implementation manner, after the video to be cropped is selected, a cropping interface may be displayed on the client, and the video to be cropped is cropped according to the detected cropping mark to obtain an initial video (for example, the user uploads locally, provides a cropping tool, and generates a custom story template).
As another optional implementation, obtaining the initial video by the client using the video to be cropped may include: sending the video to be cut to a second server through the client, wherein the second server is used for identifying different objects contained in the video to be cut, and cutting the video to be cut according to the identified different objects to obtain an initial video; and receiving the initial video returned by the second server through the client.
The client may send the video to be cut to the second server. After receiving the video to be cropped uploaded by the client, the second server identifies different objects contained in each video picture in the video to be cropped, crops the video to be cropped according to the identified different objects to obtain an initial video, and sends the obtained initial video to the client.
For example, a 15s video to be cropped may contain an object A (the type of object to be identified may be predefined, e.g., a person-type object) in the video pictures of the first 2s, an object B in seconds 2 to 5, the object A again in seconds 5 to 8, an object C in seconds 8 to 12, and the object B in seconds 12 to 15; the video may accordingly be cropped into 5 video segments (split mirrors).
According to the embodiment, the server cuts the video to be cut to obtain the initial video, the user does not need to execute excessive operations, the user operation flow is simplified, and the user experience is improved.
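The cropping rule implied by the 15s example can be sketched as follows in Kotlin; the per-second object labels are assumed to come from whatever detector the second server runs, which is outside the scope of this sketch.

```kotlin
data class CropSegment(val startSec: Int, val endSec: Int, val objectLabel: String)

// Splits a per-second object-label timeline into split mirrors at every label change.
// Producing `labels` (object detection) is assumed to happen elsewhere on the server.
fun segmentByObject(labels: List<String>): List<CropSegment> {
    val segments = mutableListOf<CropSegment>()
    var start = 0
    for (t in 1..labels.size) {
        if (t == labels.size || labels[t] != labels[start]) {
            segments += CropSegment(start, t, labels[start])
            start = t
        }
    }
    return segments
}

// The 15s example above: A for 0-2s, B for 2-5s, A for 5-8s, C for 8-12s, B for 12-15s.
val example = List(2) { "A" } + List(3) { "B" } + List(3) { "A" } + List(4) { "C" } + List(3) { "B" }
// segmentByObject(example) yields five segments matching the five split mirrors above.
```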
Among the plurality of video segments in the initial video, there may be one or more target video segments. The target video segments may be set in advance (for example, every 2n-th video segment is a target video segment and every (2n+1)-th video segment is an original video segment), or may be set by a detected selection operation (for example, a selection mark may be displayed on the client, and the target video segments are set according to the user's selection).
For example, as shown in fig. 7, the client can provide a split mirror selection function, or a role selection function, to determine the target video segment.
It should be noted that the story template (initial video) may be configured by an administrator of the server, and a portal may also be provided for users to upload templates. The user can start shooting after clicking a template; usually the first segment of the template is an original video split mirror (so that the user is given context).
Alternatively, in step S204, a plurality of video segments are played in sequence by the client.
A plurality of video segments can be played in sequence by the client. The playing can be performed in a time sequence of the plurality of video segments. The temporal order may be determined by the identity of the video segment.
For the video segments other than the target video segment in the plurality of video segments, the video playing can be directly carried out on the client.
Optionally, in this embodiment, playing the plurality of video segments in sequence by the client may include: detecting, by the client, a third operation performed on a third button displayed on the client in the process of playing the other video segment by the client; and responding to the third operation, and jumping to the shooting interface of the first replacement video segment in the client.
While the other video segments (the original split mirrors) are playing, no excessive user operation is required; the client can provide three basic functions: pause, start, and skip (e.g., as shown in fig. 8). Skipping may skip only the other video segment currently being played, or skip all of the other video segments.
According to the embodiment, the client provides the skipping function of other video segments, so that the video segment skipping can be performed for the user when the user does not need to watch other video segments, the video acquisition time is saved, and the user experience is improved.
Alternatively, in step S206, in the case that the target video segment is played, the client calls the video shooting component of the target terminal to shoot the video, so as to obtain a first replacement video segment of the target video segment.
After an original video split mirror (one of the other video segments) has been played, the user enters the split-mirror shooting interface. That is, after the other video segments preceding the target video segment have been played or skipped, playback of the initial video reaches the position of the target video segment. A video shooting component of the target terminal (e.g., a front camera or a rear camera) is then called to shoot video, obtaining the first replacement video segment (the split mirror shot by the user) for the target video segment.
Optionally, in this embodiment, before the client calls the video shooting component of the target terminal to perform video shooting to obtain a first replacement video segment of the target video segment, a first animation may be played through the client, where the first animation is used to prompt to start shooting of the first replacement video segment.
Optionally, in this embodiment, in the process of calling the video shooting component of the target terminal by the client to perform video shooting to obtain the first replacement video segment of the target video segment, a second animation may be played by the client, where the second animation is used to prompt to pause shooting of the first replacement video segment.
To enhance the user's sense of presence and make the user feel as if actually filming a movie, a clapperboard ("action") animation (as shown in fig. 9) is played when the record button is clicked. When recording is paused, a "cut" animation is played. Compared with ordinary shooting interaction this is more playful and raises the user's interest in shooting. In addition, before the first replacement video segment is captured, the user can select a beautification function in the upper right corner of the interface shown in fig. 9.
Through the embodiment, before the first replacement video segment is shot, the start animation is played, and the pause animation is played when the recording is paused, so that the telepresence of a user can be enhanced, and the user experience is improved.
Because the lens effect of the original story video (initial video) and the shooting effect of the user terminal (for example, the user's mobile phone) often differ greatly, the difference between the two needs to be smoothed out, and some optimization needs to be performed. Optimization can be performed in several ways, such as video pendants and video filters.
A target video pendant can be added to the first replacement video segment while it is being captured. The video pendant can imitate an object in the original video (the target video segment) or form a contrast with it, making the video more interesting (for example, when shooting a spoof video).
Optionally, in this embodiment, in the process of calling a video shooting component of a target terminal through a client to perform video shooting to obtain a first replacement video segment of the target video segment, a target object and a target video pendant are displayed on a terminal screen of the target terminal, where the target object is an object shot through the video shooting component, the target video pendant is a preset video pendant corresponding to the target object, and the first replacement video segment includes a video picture shot by the target video pendant and the video shooting component which are superimposed; and controlling the target video pendant to move along with the target object on the terminal screen according to the preset relative position of the target object and the target video pendant.
According to different target objects, the target video pendant is different. The target video pendant may be: the pendant of face, video tracking pendant.
For the face pendant, an existing face pendant in the client may be used. However, to complete the "shooting a story" scene, pendants can also be applied to hands, backgrounds, and the like, to strengthen the sense of immersion.
Item pendants can be implemented in two ways: simple hand recognition, and free item pendants.
Hand recognition uses feature-point tracking similar to face recognition; the most common scene is "holding" a prop, so the item-pendant scene is mainly the hand pendant.
Free item pendants cover cases such as a pendant attached to clothes, to a shoulder, or to a table. This scene requires a video tracking technology; the video tracking algorithm may be selected from the CMT tracking algorithm, the Lucas-Kanade optical flow method, or other tracking algorithms.
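To illustrate how a pendant can follow a tracked object at the preset relative position described above, the Kotlin sketch below consumes per-frame bounding boxes from an abstract tracker (CMT, Lucas-Kanade optical flow, or any other); the class names and the centre-plus-offset rule are assumptions, not the client's actual implementation.

```kotlin
data class Box(val x: Float, val y: Float, val w: Float, val h: Float)
data class RelativeOffset(val dx: Float, val dy: Float)   // preset relative position of the pendant

// The tracker (CMT, Lucas-Kanade optical flow, ...) is assumed to be supplied from outside
// and to return the tracked object's bounding box per frame, or null if tracking is lost.
class PendantOverlay(private val relative: RelativeOffset,
                     private val track: (frameIndex: Int) -> Box?) {
    private var lastAnchor: Box? = null

    // Returns where the pendant should be drawn for this frame so that it moves together
    // with the tracked object; falls back to the last known position if tracking drops out.
    fun pendantPosition(frameIndex: Int): Pair<Float, Float>? {
        val box = track(frameIndex) ?: lastAnchor ?: return null
        lastAnchor = box
        // anchor the pendant relative to the box centre, per the preset offset
        return Pair(box.x + box.w / 2 + relative.dx, box.y + box.h / 2 + relative.dy)
    }
}
```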
For example, a video pendant function may be selected on the terminal interface shown in fig. 9; it is one of the components already available to the shooting component on the client. Unless the story template carries a special operation configuration, special video pendants and beautification effects can be selected to meet the shooting requirements.
The capturing of the first replacement video segment can be performed using the object filter while the first replacement video segment is captured.
Optionally, in this embodiment, in a process of calling the video shooting component of the target terminal through the client to perform video shooting to obtain the first replacement video segment of the target video segment, a target video picture is displayed on a terminal screen of the target terminal, where the target video picture is a video picture obtained by processing a video picture shot by the video shooting component using a target filter, the target filter is a filter corresponding to the initial video, and the first replacement video segment includes the target video picture.
The initial video may be configured with specific filters to suit its scenes. When the initial video is a story video, for example some movie fragments, footage from the user's mobile phone generally struggles to reproduce the same "feel". The client may therefore provide a "movie-feel" filter to smooth out the sense of incongruity between the two split mirrors. In addition, for an MV or an animation, a filter can be configured for the user according to the color saturation of its pictures.
Through this embodiment, processing the first replacement video segment with a video pendant or a video filter can reduce the difference between the lens effect of the initial video and the lens effect shot by the user terminal, improving the quality of the composite video and thereby improving user experience.
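A minimal sketch of how a default filter might be chosen from the template type and picture saturation, following the description above; the template kinds and filter identifiers are placeholders, not real client assets.

```kotlin
enum class TemplateKind { FILM_CLIP, MUSIC_VIDEO, ANIMATION, USER_UPLOAD }

// Hypothetical mapping from the template type to a default filter; the filter names
// are placeholders used only to illustrate the selection rule described above.
fun defaultFilterFor(kind: TemplateKind, averageSaturation: Float): String = when (kind) {
    TemplateKind.FILM_CLIP -> "film_grain"                  // smooth the gap between movie footage and phone footage
    TemplateKind.MUSIC_VIDEO, TemplateKind.ANIMATION ->
        if (averageSaturation > 0.6f) "vivid" else "soft"   // pick by colour saturation of the pictures
    TemplateKind.USER_UPLOAD -> "none"
}
```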
The client can also have a video re-recording function. In the process of shooting the first replacement video segment, if the user is not satisfied with the shot video, the unsatisfied whole first replacement video segment or a small sub-video segment in the first replacement video segment can be deleted and recorded again.
Optionally, in this embodiment, in the process of calling the video shooting component of the target terminal by the client to perform video shooting to obtain the first replacement video segment of the target video segment, a fourth operation performed on a fourth button displayed on the client may be detected by the client; responding to the fourth operation, deleting a sub-video segment shot within a preset time length before the current time, wherein the sub-video segment corresponds to a video from the first time to the second time in the target video segment; and returning to the shooting interface corresponding to the first moment in the target video segment.
In the first replacement video segment capturing (recording), after detecting a fourth operation performed on a fourth button (delete button) displayed on the client, a sub-video segment captured within a predetermined time period (for example, the first 2s, the first 5s) before the current time may be deleted, and the capturing interface corresponding to the first time in the target video segment may be returned for re-recording of the sub-video segment.
For example, as shown in fig. 9, a shooting-duration progress bar is displayed above the terminal display interface and is configured according to the user split mirrors defined by the story template: if a split mirror is long, the progress bar advances slowly, and vice versa. Each time the user pauses, a small section is marked on the progress bar, and the section just recorded can be deleted by clicking the delete button. This belongs to the segmented-shooting capability; providing these functions lets users make better use of the feature.
By the embodiment, in the recording process of the first replacement video segment, the efficiency of recording the first replacement video segment can be improved by providing the function of deleting and re-recording the sub-video segment.
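The segmented-recording bookkeeping behind the delete button can be sketched as follows; the camera start/stop calls and file handling are assumed to live elsewhere, and the class and method names are illustrative.

```kotlin
data class SubClip(val filePath: String, val startMs: Long, val durationMs: Long)

// Bookkeeping for the delete-and-re-record behaviour described above. The actual
// camera control is assumed to be handled by the shooting component.
class SegmentedRecorder {
    private val clips = ArrayDeque<SubClip>()

    val recordedMs: Long get() = clips.sumOf { it.durationMs }

    fun onPause(filePath: String, durationMs: Long) {
        // each pause closes one sub-clip, shown as one small section on the progress bar
        clips.addLast(SubClip(filePath, recordedMs, durationMs))
    }

    // The delete button removes the sub-clip recorded just before; recording then
    // resumes from that sub-clip's start time (the "first moment" in the text).
    fun deleteLast(): Long {
        val removed = clips.removeLastOrNull() ?: return 0L
        return removed.startMs            // seek the shooting interface back to here
    }
}
```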
Alternatively, in step S208, video composition is performed using video segments other than the target video segment in the plurality of video segments and the first replacement video segment, so as to obtain a target video.
After the recording of the first replacement video segment is completed, the other video segments and the first replacement video segment may be video-composited (e.g., video-composited in a video editing interface) to obtain the target video.
For example, after the timeline (corresponding to the target video segment) is finished, or the user directly clicks the "finish button", the user can jump to the video editing interface for video composition.
Because the lens effect of the original story video and the shooting effect of the user terminal often differ greatly, the difference between the two needs to be smoothed out, and some optimization needs to be performed. The optimization may include sound optimization.
Some story videos have background music, and the sound of the video shot by the user needs to bridge smoothly with that background sound, so background-sound composition is additionally provided. In addition, the sound recorded by the user terminal is generally not ideal, so a scheme of substituting the original sound of the video can be offered for the user to select, which makes the composite video sound better.
Optionally, in this embodiment, performing video composition using video segments other than the target video segment in the plurality of video segments and the first replacement video segment, and obtaining the target video may include: detecting, by the client, a fifth operation performed on a fifth button displayed on the client; and responding to the fifth operation, and performing video synthesis by using other video segments and a second replacement video segment to obtain a target video, wherein the video picture of the second replacement video segment is the video picture of the first replacement video segment, and the audio data of the second replacement video segment is the audio data of the target video segment.
After detecting a fifth operation performed on a fifth button (e.g., "use the original sound") displayed on the client, the original sound of the target video segment may be used to superimpose the video picture of the first replacement video segment, resulting in a second replacement video segment. And performing video synthesis on the other video segments and the second replacement video segment to obtain a target video.
Through the embodiment, the video original sound is used for replacing the voice data in the first replacement video segment to carry out video synthesis, so that the video synthesis cost can be reduced, and the video synthesis effect can be improved.
Video composition is handled in several aspects, including segment composition, background sound composition, transcoding, and video cropping. The segment composition can use MediaCodec video composition, the background sound composition can use MediaCodec audio composition, and transcoding can use ffmpeg to reduce the bit rate (this can be completed in the same conversion command as the background sound composition). Video cropping may or may not be performed, with the playback layer adapting the video to the size of the mobile phone screen.
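As an illustration of the transcoding step, the snippet below builds a single ffmpeg invocation that lowers the video bit rate and mixes the background music into the recorded audio in one command, matching the remark above; the file paths, the libx264/aac codecs, and the 2000k target bit rate are assumptions.

```kotlin
// Builds one ffmpeg command that reduces the video bit rate and mixes in the
// background music. Paths and the bit-rate value are placeholders.
fun buildTranscodeCommand(inputPath: String, bgmPath: String, outputPath: String): List<String> =
    listOf(
        "ffmpeg",
        "-i", inputPath,                 // composed video with the user's recorded audio
        "-i", bgmPath,                   // background music of the story template
        "-filter_complex", "[0:a][1:a]amix=inputs=2:duration=first[aout]",
        "-map", "0:v", "-map", "[aout]",
        "-c:v", "libx264", "-b:v", "2000k",   // re-encode video at a lower bit rate
        "-c:a", "aac",
        outputPath
    )
```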
The video compositing process is described below in conjunction with an alternative example. The video editing interface may be as shown in fig. 10. The video editing interface has two functions: a sound selection function, an editing function.
The sound selection function provides two sound schemes. Sound scheme 1 uses the original sound of the story-template split mirrors, which is low-cost and works well; sound scheme 2 uses the user's own voice, and the ratio of the background sound to the user's voice can be adjusted with the slider on the left: the higher it is adjusted, the louder the voice and the quieter the background sound.
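The slider of sound scheme 2 can be modelled as a weighted mix of two PCM buffers, as in the sketch below; treating the slider value directly as the voice weight is an assumption made for illustration.

```kotlin
// Mixes user voice and background sound according to the slider position described above:
// sliding up (voiceRatio toward 1.0) makes the voice louder and the background quieter.
fun mixVoiceAndBackground(voice: ShortArray, background: ShortArray, voiceRatio: Float): ShortArray {
    val r = voiceRatio.coerceIn(0f, 1f)
    val n = minOf(voice.size, background.size)
    return ShortArray(n) { i ->
        val sample = voice[i] * r + background[i] * (1f - r)
        sample.toInt().coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt()).toShort()
    }
}
```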
The editing function provides an editing scheme. After the video editing interface is entered, all the split mirrors are played in a loop from beginning to end so that the user can preview the final result; if some split mirror was not shot well, the user can click it directly to return to the split-mirror shooting interface and re-shoot it.
After video composition, the composed short video is output and uploaded to the server to complete publication. As shown in fig. 10, when the user clicks publish, the video composition process runs and the result is finally uploaded to the server to complete the publication. The published video can then be viewed by friends or shared with them directly.
It should be noted that the first to fifth buttons are virtual buttons, and may be directly displayed on the interface of the client, or may be hidden and displayed on the interface of the client, and are identified by a specific mark. The first to fifth operations may be touch operations, and may include, but are not limited to, one of the following: click, double click, slide, long press.
The following describes a video acquisition method in the embodiment of the present invention with reference to an alternative example. In this example, the initial video is a predetermined story template including 5 segments (video segments), where segment 1, segment 3, and segment 5 are original segments, and segment 2 and segment 4 are segments to be replaced (target video segments).
The client greatly reduces the user's shooting cost through a series of convenient tools and assists the user in secondary creation. At least one short video shot with the mobile phone is used as a split mirror and inserted into the story template, realizing secondary creation of the video. Video composition comprises MediaCodec video composition, MediaCodec audio composition, transcoding, and video cropping. By providing video pendants, props, and video filters, the shots taken by the user can be made closer to the original video or can achieve a spoof comedy effect, reducing the difference between the user's split mirrors and the template; by providing the video's original sound and background music, the shooting effect is improved and the secondarily created video becomes more harmonious and smooth.
As shown in fig. 11, in the video acquisition method in the present example, the mobile phone terminal acquires a video template by performing step S1102; playing the original split mirror and recording the user split mirror by alternately executing the step S1104 and the step S1106; video composition is performed by performing step S1108, and upload of the composed video is performed.
Through this example, based on an existing story template, the user's creation cost is lower: there is no need to design an idea from scratch, and the barrier that secondary creation of short videos originally required professional video editing skills is lowered.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiment of the invention, a video acquisition device for implementing the video acquisition method is also provided. As shown in fig. 12, the apparatus includes:
(1) an obtaining unit 1202, configured to obtain an initial video through a client running on a target terminal, where the initial video includes multiple video segments that are continuous in time, and the multiple video segments include a target video segment to be replaced;
(2) a playing unit 1204, configured to play a plurality of video segments in sequence through the client;
(3) the shooting unit 1206 is used for calling a video shooting component of the target terminal through the client to carry out video shooting under the condition that the target video segment is played, so as to obtain a first replacement video segment of the target video segment;
(4) a compositing unit 1208, configured to perform video compositing using video segments other than the target video segment and the first replacement video segment in the plurality of video segments to obtain a target video.
Optionally, the above-mentioned video acquisition device may be applied to, but not limited to, a video synthesis process. For example, during shooting of short videos of mobile phones.
Alternatively, the acquisition unit 1202 may be configured to execute the aforementioned step S202, the playback unit 1204 may be configured to execute the aforementioned step S204, the photographing unit 1206 may be configured to execute the aforementioned step S206, and the synthesis unit 1208 may be configured to execute the aforementioned step S208.
According to this apparatus, by having the client acquire an initial video containing a plurality of temporally consecutive video segments (split mirrors) and shoot the replacement video segment when playback reaches the target video segment to be replaced, the technical problem in the related art that the video composition processing flow of the video split-mirror replacement approach is complex is solved, the video composition processing flow is simplified, and user experience is improved.
As an alternative embodiment, the obtaining unit 1202 includes:
(1) the first detection module is used for detecting a first operation executed on a first button corresponding to an initial video displayed on a client through the client;
(2) and the downloading module is used for responding to the first operation and downloading the initial video from the first server, wherein the initial video is stored in the first server.
According to the embodiment, the initial video is provided for the client through the first server, the initial video is cut, the video subject matter can be preset, and the quality of the synthesized video can be guaranteed.
As an alternative embodiment, the obtaining unit 1202 includes:
(1) the second detection module is used for detecting a second operation executed on a second button displayed on the client through the client;
(2) the first acquisition module is used for responding to the second operation and acquiring the video to be cut stored in the target terminal through the client;
(3) and the second acquisition module is used for acquiring an initial video by using the video to be cut through the client, wherein the initial video is obtained by cutting the video to be cut.
Through this embodiment, obtaining the video to be cropped from the target terminal and deriving the initial video from it can satisfy the demands of different users, improve the flexibility of the video acquisition approach, and thereby improve user experience.
As an alternative embodiment, the second obtaining module includes:
(1) the sending submodule is used for sending the video to be cut to a second server through the client, wherein the second server is used for identifying different objects contained in the video to be cut and cutting the video to be cut according to the identified different objects to obtain an initial video;
(2) and the receiving submodule is used for receiving the initial video returned by the second server through the client.
According to the embodiment, the server cuts the video to be cut to obtain the initial video, the user does not need to execute excessive operations, the user operation flow is simplified, and the user experience is improved.
As an alternative embodiment, the playing unit 1204 includes:
(1) the third detection module is used for detecting a third operation executed on a third button displayed on the client through the client in the process of playing other video segments through the client;
(2) and the skipping module is used for responding to the third operation and skipping to the shooting interface of the first replacement video segment in the client.
According to the embodiment, the client provides the skipping function of other video segments, so that the video segment skipping can be performed for the user when the user does not need to watch other video segments, the video acquisition time is saved, and the user experience is improved.
As an alternative embodiment, the above-mentioned apparatus further includes:
(1) the display unit is used for displaying a target object and a target video pendant on a terminal screen of a target terminal in the process of video shooting through the video shooting component, wherein the target object is shot through the video shooting component, the target video pendant is a preset video pendant corresponding to the target object, and the first replacement video segment comprises the superposed target video pendant and a video picture shot by the video shooting component;
(2) and the control unit is used for controlling the target video pendant to move along with the target object on the terminal screen according to the preset relative position of the target object and the target video pendant.
Through this embodiment, processing the first replacement video segment with a video pendant can reduce the difference between the lens effect of the initial video and the lens effect shot by the user terminal, improve the quality of the composite video, and thereby improve user experience.
As an alternative embodiment, the above apparatus further comprises:
(1) the display unit is used for displaying a target video picture on a terminal screen of the target terminal in the process of calling a video shooting component of the target terminal through the client to carry out video shooting to obtain a first replacement video segment of the target video segment, wherein the target video picture is obtained by processing the video picture shot by the video shooting component through a target filter, the target filter is a filter corresponding to the initial video, and the first replacement video segment comprises the target video picture.
Through this embodiment, processing the first replacement video segment with the target filter can reduce the difference between the lens effect of the initial video and the lens effect shot by the user terminal, improve the quality of the composite video, and thereby improve user experience.
As an alternative embodiment, the playing unit 1204 is further configured to play a first animation through the client before the client calls the video shooting component of the target terminal to perform video shooting to obtain a first replacement video segment of the target video segment, where the first animation is used to prompt the start of shooting the first replacement video segment; and in the process of calling a video shooting component of the target terminal through the client to shoot the video and obtaining a first replacement video segment of the target video segment, playing a second animation through the client, wherein the second animation is used for prompting to pause the shooting of the first replacement video segment.
Through this embodiment, the start animation is played before the first replacement video segment is shot, and the pause animation is played when recording is paused, which enhances the user's sense of immersion and improves the user experience.
As an alternative embodiment, the synthesis unit 1208 includes:
(1) the detection module is used for detecting a fifth operation executed on a fifth button displayed on the client through the client;
(2) and the synthesizing module is used for responding to the fifth operation and performing video synthesis by using other video segments and the second replacement video segment to obtain the target video, wherein the video picture of the second replacement video segment is the video picture of the first replacement video segment, and the audio data of the second replacement video segment is the audio data of the target video segment.
Through this embodiment, the video picture of the first replacement video segment is combined with the audio data of the original target video segment, so that the composite target video retains the original audio track, which improves the quality of the target video and the user experience.
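As an illustrative aid (not from the original disclosure), the sketch below builds such a second replacement video segment by muxing the picture of the re-shot segment with the audio of the original target segment using the ffmpeg command-line tool; the file names are placeholders, and stream copy assumes the inputs use compatible codecs and containers.

```python
# Minimal sketch: keep the video track of the re-shot segment and the audio
# track of the original target segment, without re-encoding.
import subprocess


def mux_picture_with_original_audio(reshot_video: str,
                                    original_segment: str,
                                    output: str) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-i", reshot_video,      # first input: source of the video track
        "-i", original_segment,  # second input: source of the audio track
        "-map", "0:v:0",         # take video from input 0
        "-map", "1:a:0",         # take audio from input 1
        "-c", "copy",            # stream copy; assumes compatible codecs
        "-shortest",
        output,
    ], check=True)


mux_picture_with_original_audio("reshot.mp4", "target_segment.mp4",
                                "second_replacement.mp4")
```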
It should be noted that the above modules may be implemented by software or hardware. For the latter, the implementation may be, but is not limited to, the following: the modules are all located in the same processor; alternatively, the modules are located in different processors in any combination.
According to a further aspect of embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for executing the following steps:
S1, acquiring an initial video through a client running on a target terminal, wherein the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise a target video segment to be replaced;
S2, playing the plurality of video segments in sequence through the client;
S3, under the condition that the target video segment is played, calling a video shooting component of the target terminal through the client to perform video shooting to obtain a first replacement video segment of the target video segment;
and S4, performing video composition by using the other video segments, except the target video segment, in the plurality of video segments and the first replacement video segment to obtain a target video (see the sketch below).
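As an illustrative aid (not from the original disclosure), the sketch below performs the composition step by concatenating the untouched segments and the replacement segment in their original time order with ffmpeg's concat demuxer; the file names are placeholders, and stream copy assumes all segments share the same codec parameters.

```python
# Minimal sketch of step S4: splice the segments back together in time order.
import os
import subprocess
import tempfile
from typing import List


def compose_target_video(segments_in_order: List[str], output: str) -> None:
    # The concat demuxer reads a small text file listing the inputs in order.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in segments_in_order:
            f.write(f"file '{os.path.abspath(path)}'\n")
        list_path = f.name
    try:
        subprocess.run([
            "ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", output,
        ], check=True)
    finally:
        os.remove(list_path)


compose_target_video(["seg1.mp4", "second_replacement.mp4", "seg3.mp4"],
                     "target_video.mp4")
```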
Optionally, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-mentioned video acquisition method, as shown in fig. 13, the electronic device may include: processor 1302, memory 1304, transmission device 1306, display 1308, and the like. The memory has stored therein a computer program, and the processor is arranged to execute the steps of any of the above method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring an initial video through a client running on a target terminal, wherein the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise a target video segment to be replaced;
S2, playing the plurality of video segments in sequence through the client;
S3, under the condition that the target video segment is played, calling a video shooting component of the target terminal through the client to perform video shooting to obtain a first replacement video segment of the target video segment;
and S4, performing video composition by using the other video segments, except the target video segment, in the plurality of video segments and the first replacement video segment to obtain a target video.
Optionally, it can be understood by those skilled in the art that the structure shown in Fig. 13 is only illustrative, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 13 does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components (e.g., a network interface) than shown in Fig. 13, or have a configuration different from that shown in Fig. 13.
The memory 1304 may be used to store software programs and modules, such as program instructions/modules corresponding to the video acquisition method and apparatus in the embodiments of the present invention, and the processor 1302 executes various functional applications and data processing by running the software programs and modules stored in the memory 1304, that is, implements the video acquisition method. The memory 1304 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1304 can further include memory remotely located from the processor 1302, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1306 is used for receiving or transmitting data via a network. Specific examples of the network may include a wired network and a wireless network. In one example, the transmission device 1306 includes a network adapter (Network Interface Card, NIC), which can be connected to other network devices and a router via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 1306 is a Radio Frequency (RF) module, which is used to communicate with the Internet in a wireless manner. The display 1308 may be used to display the video segments during playback.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those of ordinary skill in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (15)

1. A method for obtaining video, comprising:
the method comprises the steps that an initial video is obtained through a client running on a target terminal, wherein the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise target video segments to be replaced;
sequentially playing the plurality of video segments through the client;
under the condition that the target video segment is played, calling a video shooting component of the target terminal through the client to carry out video shooting to obtain a first replacement video segment of the target video segment;
and performing video composition by using other video segments except the target video segment in the plurality of video segments and the first replacement video segment to obtain a target video.
2. The method of claim 1, wherein obtaining the initial video by the client running on the target terminal comprises:
detecting, by the client, a first operation performed on a first button displayed on the client corresponding to the initial video; downloading the initial video from a first server in response to the first operation, wherein the initial video is stored in the first server; or,
detecting, by the client, a second operation performed on a second button displayed on the client; responding to the second operation, and acquiring the video to be cut stored in the target terminal through the client; and obtaining the initial video by using the video to be cut through the client, wherein the initial video is obtained by cutting the video to be cut.
3. The method of claim 2, wherein obtaining, by the client, the initial video using the video to be cropped comprises:
sending the video to be cut to a second server through the client, wherein the second server is used for identifying different objects contained in the video to be cut, and cutting the video to be cut according to the identified different objects to obtain the initial video;
and receiving the initial video returned by the second server through the client.
4. The method of claim 1, wherein playing the plurality of video segments in sequence by the client comprises:
detecting, by the client, a third operation performed on a third button displayed on the client in the process of playing the other video segment by the client;
and responding to the third operation, and jumping to a shooting interface of the first replacement video segment in the client.
5. The method according to claim 1, wherein in the process of calling the video shooting component of the target terminal by the client terminal to shoot the video to obtain the first replacement video segment of the target video segment, the method further comprises:
in the process of video shooting through the video shooting component, displaying a target object and a target video pendant on a terminal screen of the target terminal, wherein the target object is shot through the video shooting component, the target video pendant is a preset video pendant corresponding to the target object, and the first replacement video segment comprises the target video pendant and a video picture shot by the video shooting component which are superposed;
and controlling the target video pendant to move along with the target object on the terminal screen according to the preset relative position of the target object and the target video pendant.
6. The method according to claim 1, wherein in the process of calling the video shooting component of the target terminal by the client terminal to shoot the video to obtain the first replacement video segment of the target video segment, the method further comprises:
and displaying a target video picture on a terminal screen of the target terminal, wherein the target video picture is obtained by processing the video picture shot by the video shooting component by using a target filter, the target filter is a filter corresponding to the initial video, and the first replacement video segment comprises the target video picture.
7. The method according to claim 1, wherein in the process of calling the video shooting component of the target terminal by the client terminal to shoot the video to obtain the first replacement video segment of the target video segment, the method further comprises:
detecting, by the client, a fourth operation performed on a fourth button displayed on the client;
in response to the fourth operation, deleting a sub-video segment shot within a preset time period before the current time, wherein the sub-video segment corresponds to videos from the first time to the second time in the target video segment;
and returning to the shooting interface corresponding to the first moment in the target video segment.
8. The method of claim 1,
before the client calls the video shooting component of the target terminal to shoot a video to obtain the first replacement video segment of the target video segment, the method further includes: playing a first animation through the client, wherein the first animation is used for prompting the start of shooting of the first replacement video segment;
in the process of calling the video shooting component of the target terminal to shoot a video through the client to obtain the first replacement video segment of the target video segment, the method further includes: playing a second animation through the client, wherein the second animation is used for prompting to pause shooting of the first replacement video segment.
9. The method according to any one of claims 1 to 8, wherein performing video compositing using the other video segment of the plurality of video segments other than the target video segment and the first replacement video segment, to obtain the target video comprises:
detecting, by the client, a fifth operation performed on a fifth button displayed on the client;
responding to the fifth operation, and performing video synthesis by using the other video segments and a second replacement video segment to obtain the target video, wherein a video picture of the second replacement video segment is a video picture of the first replacement video segment, and audio data of the second replacement video segment is audio data of the target video segment.
10. An apparatus for acquiring a video, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an initial video through a client running on a target terminal, the initial video comprises a plurality of video segments which are continuous in time, and the plurality of video segments comprise target video segments to be replaced;
the playing unit is used for sequentially playing the plurality of video segments through the client;
the shooting unit is used for calling a video shooting component of the target terminal through the client to shoot a video under the condition that the target video segment is played, so as to obtain a first replacement video segment of the target video segment;
and the synthesizing unit is used for performing video synthesis by using other video segments except the target video segment in the plurality of video segments and the first replacing video segment to obtain a target video.
11. The apparatus of claim 10, further comprising:
a display unit, configured to display a target video picture on a terminal screen of the target terminal in a process of calling the video shooting component of the target terminal through the client to perform video shooting to obtain the first replacement video segment of the target video segment, where the target video picture is a video picture obtained by processing a video picture shot by the video shooting component using a target filter, the target filter is a filter corresponding to the initial video, and the first replacement video segment includes the target video picture.
12. The apparatus according to claim 10, wherein the playing unit is further configured to play a first animation through the client before the client calls the video capturing component of the target terminal to capture video to obtain the first replacement video segment of the target video segment, wherein the first animation is used to prompt the start of capturing the first replacement video segment; and playing a second animation through the client in the process of calling the video shooting component of the target terminal to shoot the video through the client to obtain the first replacement video segment of the target video segment, wherein the second animation is used for prompting to pause the shooting of the first replacement video segment.
13. The apparatus according to any one of claims 10 to 12, wherein the synthesis unit comprises:
the detection module is used for detecting a fifth operation executed on a fifth button displayed on the client through the client;
and the synthesizing module is used for responding to the fifth operation and performing video synthesis by using the other video segments and a second replacement video segment to obtain the target video, wherein the video picture of the second replacement video segment is the video picture of the first replacement video segment, and the audio data of the second replacement video segment is the audio data of the target video segment.
14. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 10 when executed.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 10 by means of the computer program.
CN201910629661.3A 2019-07-12 2019-07-12 Video acquisition method and device, storage medium and electronic device Active CN112218154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910629661.3A CN112218154B (en) 2019-07-12 2019-07-12 Video acquisition method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910629661.3A CN112218154B (en) 2019-07-12 2019-07-12 Video acquisition method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN112218154A true CN112218154A (en) 2021-01-12
CN112218154B CN112218154B (en) 2023-02-10

Family

ID=74047742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910629661.3A Active CN112218154B (en) 2019-07-12 2019-07-12 Video acquisition method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112218154B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090163262A1 (en) * 2007-12-21 2009-06-25 Sony Computer Entertainment America Inc. Scheme for inserting a mimicked performance into a scene and providing an evaluation of same
US20150067514A1 (en) * 2013-08-30 2015-03-05 Google Inc. Modifying a segment of a media item on a mobile device
CN107454437A (en) * 2016-06-01 2017-12-08 深圳市维杰乐思科技有限公司 A kind of video labeling method and its device, server
CN109429098A (en) * 2017-08-24 2019-03-05 中兴通讯股份有限公司 Method for processing video frequency, device and terminal
US10348981B1 (en) * 2018-02-21 2019-07-09 International Business Machines Corporation Dynamic and contextual data replacement in video content
CN108566519A (en) * 2018-04-28 2018-09-21 腾讯科技(深圳)有限公司 Video creating method, device, terminal and storage medium
CN108600825A (en) * 2018-07-12 2018-09-28 北京微播视界科技有限公司 Select method, apparatus, terminal device and the medium of background music shooting video
CN109151356A (en) * 2018-09-05 2019-01-04 传线网络科技(上海)有限公司 video recording method and device
CN109361954A (en) * 2018-11-02 2019-02-19 腾讯科技(深圳)有限公司 Method for recording, device, storage medium and the electronic device of video resource
CN109379633A (en) * 2018-11-08 2019-02-22 北京微播视界科技有限公司 Video editing method, device, computer equipment and readable storage medium storing program for executing
CN109729429A (en) * 2019-01-31 2019-05-07 百度在线网络技术(北京)有限公司 Video broadcasting method, device, equipment and medium
CN109963087A (en) * 2019-04-02 2019-07-02 张鹏程 A kind of multiterminal interdynamic video processing method, apparatus and system
CN109905749A (en) * 2019-04-11 2019-06-18 腾讯科技(深圳)有限公司 Video broadcasting method and device, storage medium and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yao Zhen, "Research and Application of Face Tracking and Replacement for Film and Television Characters", China Master's Theses Full-text Database *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438428A (en) * 2021-06-23 2021-09-24 北京百度网讯科技有限公司 Method, apparatus, device and computer-readable storage medium for automated video generation
CN113438434A (en) * 2021-08-26 2021-09-24 视见科技(杭州)有限公司 Text-based audio/video re-recording method and system
CN113727140A (en) * 2021-08-31 2021-11-30 维沃移动通信(杭州)有限公司 Audio and video processing method and device and electronic equipment
WO2023178589A1 (en) * 2022-03-24 2023-09-28 深圳市大疆创新科技有限公司 Filming guiding method, electronic device, system and storage medium
CN114928753A (en) * 2022-04-12 2022-08-19 广州阿凡提电子科技有限公司 Video splitting processing method, system and device
WO2024002132A1 (en) * 2022-06-30 2024-01-04 北京字跳网络技术有限公司 Multimedia data processing method and apparatus, device, storage medium and program product

Also Published As

Publication number Publication date
CN112218154B (en) 2023-02-10

Similar Documents

Publication Publication Date Title
CN112218154B (en) Video acquisition method and device, storage medium and electronic device
CN108566519B (en) Video production method, device, terminal and storage medium
CN107613235B (en) Video recording method and device
CN109151537B (en) Video processing method and device, electronic equipment and storage medium
WO2020107297A1 (en) Video clipping control method, terminal device, system
TWI579838B (en) Automatic generation of compilation videos
CN109275028B (en) Video acquisition method, device, terminal and medium
US9117483B2 (en) Method and apparatus for dynamically recording, editing and combining multiple live video clips and still photographs into a finished composition
US8353406B2 (en) System, method, and computer readable medium for creating a video clip
JP6367334B2 (en) Video processing method, apparatus, and playback apparatus
KR101604250B1 (en) Method of Providing Service for Recommending Game Video
US20130047081A1 (en) Methods and systems for creating video content on mobile devices using storyboard templates
TW201545120A (en) Automatic generation of compilation videos
WO2016134415A1 (en) Generation of combined videos
JP5701017B2 (en) Movie playback apparatus, movie playback method, computer program, and storage medium
JP2016517195A (en) Method and apparatus for improving video and media time series editing utilizing a list driven selection process
CN109905749B (en) Video playing method and device, storage medium and electronic device
JP2023554470A (en) Video processing methods, devices, equipment, storage media, and computer program products
WO2024007290A1 (en) Video acquisition method, electronic device, storage medium, and program product
US11089213B2 (en) Information management apparatus and information management method, and video reproduction apparatus and video reproduction method
JP2023169373A (en) Information processing device, moving image synthesis method and moving image synthesis program
CN107018442B (en) A kind of video recording synchronized playback method and device
JP2019220207A (en) Method and apparatus for using gestures for shot effects
US12003882B2 (en) Information processing devices, methods, and computer-readable medium for performing information processing to output video content using video from multiple video sources including one or more pan-tilt-zoom (PTZ)-enabled network cameras
JP2009232114A (en) Image reproducing apparatus, its method and image reproducing program

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40037800

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant