CN112822419A - Method and equipment for generating video information - Google Patents

Method and equipment for generating video information

Info

Publication number
CN112822419A
CN112822419A (application CN202110119060.5A)
Authority
CN
China
Prior art keywords
video information
portrait
information
video
candidate
Prior art date
Legal status
Pending
Application number
CN202110119060.5A
Other languages
Chinese (zh)
Inventor
罗剑嵘
Current Assignee
Shanghai Shengfutong Electronic Payment Service Co ltd
Original Assignee
Shanghai Shengfutong Electronic Payment Service Co ltd
Priority date
Application filed by Shanghai Shengfutong Electronic Payment Service Co., Ltd.
Priority to CN202110119060.5A
Publication of CN112822419A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008: involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/44016: involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present application aims to provide a method and a device for generating video information. The method comprises: acquiring one or more pieces of recorded first video information, wherein each piece corresponds to a preset ordering; and generating target video information from the portrait video information corresponding to each piece of recorded first video information together with recorded second video information, wherein the video duration of each piece of first video information is consistent with that of the second video information. The method and device reduce the shooting cost for a user to produce a specific video effect.

Description

Method and equipment for generating video information
Technical Field
The present application relates to the field of communications, and more particularly, to a technique for generating video information.
Background
With the development of the internet and the popularization of mobile devices, video applications (especially short-video applications) have grown explosively, and people increasingly turn to video (e.g., vlogs) as a form of content creation, for example to record their work, study, and daily life. Such videos are cheap to produce, fragmented in production and distribution, fast to spread, and strongly social, and basic shooting is simple to operate. However, producing the various technical effects seen in videos generally requires professional techniques, and a photographer must spend a certain cost learning how to shoot them.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for generating video information.
According to an aspect of the present application, there is provided a method for generating video information, the method comprising:
acquiring, by a user equipment, one or more pieces of recorded first video information, wherein each piece of first video information corresponds to a preset ordering;
and generating target video information according to the portrait video information corresponding to each piece of recorded first video information and the recorded second video information, wherein the video duration of each piece of first video information is consistent with the video duration of the second video information.
According to an aspect of the present application, there is provided a user equipment for generating video information, the equipment comprising:
a one-one module, configured to acquire one or more pieces of recorded first video information, wherein each piece of first video information corresponds to a preset ordering;
and a second module, configured to generate target video information from the portrait video information corresponding to each piece of the one or more pieces of recorded first video information and from recorded second video information, wherein the video duration of each piece of first video information is consistent with that of the second video information.
According to an aspect of the present application, there is provided an apparatus for generating video information, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
According to another aspect of the application, there is provided a computer readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods described above.
According to an aspect of the application, a computer program product is provided, comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method as described above.
Compared with the prior art, the user equipment acquires one or more pieces of recorded first video information, wherein each piece corresponds to a preset ordering, and generates target video information from the portrait video information corresponding to each piece of the one or more pieces of recorded first video information and from recorded second video information, wherein the video duration of each piece of first video information is consistent with that of the second video information. In this way, the portrait video information in the one or more pieces of first video information (for example, the motion track of a portrait within the video picture) can be separated out in recording order and fused with the second video information, producing a target video in which the portrait video information plays within the picture. When the portraits in the portrait video information and in the second video information are the same person, a video in which one person plays multiple roles within the same picture can be generated efficiently, which reduces the user's shooting cost and improves the user's shooting efficiency.
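The layered fusion described here can be illustrated with simple alpha compositing: each separated portrait layer is drawn over a background frame in ranking order, so higher-ranked portraits end up on top. A minimal sketch, assuming frames are NumPy arrays; `composite_frame` and the RGBA layer convention are illustrative choices, not the patent's actual implementation:

```python
import numpy as np

def composite_frame(background, portrait_layers):
    """Alpha-composite portrait layers onto a background frame.

    `background` is an H x W x 3 uint8 RGB frame; each portrait layer is an
    H x W x 4 uint8 RGBA frame whose alpha channel is 0 outside the person.
    Layers are applied in list order, so later layers end up on top.
    """
    out = background.astype(np.float32)
    for layer in portrait_layers:
        rgb = layer[..., :3].astype(np.float32)
        alpha = layer[..., 3:4].astype(np.float32) / 255.0
        out = alpha * rgb + (1.0 - alpha) * out  # standard "over" operator
    return out.astype(np.uint8)
```

Applying this per frame over the second video's background, in the preset ordering, would yield the described one-person-multiple-roles effect.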
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 shows a schematic diagram of a typical scenario of the present application;
FIG. 2 illustrates a flow diagram of a method for generating video information for use with a user device according to one embodiment of the present application;
FIG. 3 shows a flow diagram for generating video information according to another embodiment of the present application;
FIG. 4 illustrates a device structure diagram of a user device for generating video information according to one embodiment of the present application;
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described in this disclosure.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, a user equipment, a network device, or a device formed by integrating a user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer; the mobile electronic product may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of network servers, or a cloud of servers; here, the cloud consists of a large number of computers or web servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer is formed from a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, on the network device, or on a device formed by integrating a user equipment with a network device, a touch terminal, or a network device with a touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 shows a typical scenario of the present application. A user holds a user equipment that includes a camera. The user records a first video through the user equipment; when recording is complete, the user equipment acquires the first video, which contains portrait A against a gray background picture. The user equipment separates the portrait video information of portrait A (i.e., with the background picture removed) from the background video information of the first video; the background video information is stored, and the portrait video information of portrait A is subsequently used when recording the second video. The recording duration of the second video follows that of the first video. When recording of the second video starts, the user equipment presents the portrait video information of portrait A picture-in-picture within the recording picture of the second video, as a reference while the user records (for example, so that the background and the motion track of the person being recorded match the portrait video information of portrait A). After recording is complete, the user equipment acquires the second video, which contains portrait B against a gray background picture, and applies portrait separation to the second video to obtain the portrait video information of portrait B (again with the background picture removed).
The recording duration of the third video follows that of the second video. When recording of the third video starts, the user equipment presents the portrait video information of portrait A and of portrait B picture-in-picture within the recording picture of the third video, for reference while the user records (for example, so that the background and the motion track of the person being recorded match both portrait videos and meet the user's shooting requirements). After recording is complete, the user equipment acquires the third video, which contains portrait C. The user equipment then synthesizes a target video from the portrait video information of portrait A in the first video, the portrait video information of portrait B in the second video, and the third video, in their recording order. Alternatively, after the user swaps the order of the portrait video information of portrait A and of portrait B, the user equipment synthesizes the target video in the adjusted order: portrait B from the second video, portrait A from the first video, then the third video (for example, the background video of the target video is the background video of the third video). When the target video is played in that order, portrait B is superposed on portrait A, and portrait A is superposed on portrait C. The user equipment includes, but is not limited to, computing devices such as a mobile phone, a tablet, and a computer.
Fig. 2 shows a method for generating video information according to an embodiment of the present application, applied to a user equipment, and the method includes steps S101 and S102.
Specifically, in step S101, the user equipment acquires one or more pieces of recorded first video information, where each piece corresponds to a preset ordering. Each piece of first video information contains portrait information; the portraits in the different pieces may be the same person (e.g., person A) or different persons (e.g., persons A, B, C, …), and each piece contains at least one picture of the portrait information between its start play point and its end play point. In some embodiments, at least one of the pieces contains multiple portraits (for example, two or more portraits appear in the video of first video information A). The one or more pieces of video information are recorded by the user equipment, which orders the pieces after all of them have been recorded. In some embodiments, the preset ordering includes at least any one of:
1) the order of the recording-completion times of the pieces of first video information;
2) an order manually adjusted by the user for the pieces of first video information;
for example, the user equipment receives a video production task, where the video recording task includes recording one or more first video information, for example, the user equipment is recording first video information 1, the user equipment starts recording first video information 2 after the recording of first video information 1 is completed, the user equipment starts recording first video information 3 after the recording of first video information 2 is completed, and the user equipment starts recording first video information 4 after the recording of first video information 3 is completed. That is, the one or more recorded first video information are first video information 1, first video information 2, first video information 3, and first video information 4, respectively, and the user equipment allocates a corresponding sorting order to each first video information according to the recording completion time of each first video information, as in the foregoing example, the sorting order of each first video information is represented according to a sorting number, and the sorting order is in direct proportion to the superposition relationship between each first video information, that is, the position of the subsequent first video information displayed in the picture is higher the sorting order is, for example, the sorting order of the first video information 1 is the highest, and the portrait video information extracted from the first video information 1 is not blocked by other video information. 
In some embodiments, each first video information is recorded with a predetermined time countdown (e.g., 5s), which may facilitate the preparation of the person to be photographed, and may also allow the user time to adjust the photographing content according to the related content extracted from the first video information 1 (e.g., the subsequent portrait video information) when the first video information 2, the first video information 3, and the like are subsequently photographed. In some embodiments, the video duration of each first video information is consistent, so that the target video can be synthesized with the second video information according to the related content of each video information.
For another example, after the user equipment orders a plurality of pieces of first video information by recording-completion time (e.g., first video information 1, 2, 3, 4), the user may manually adjust that order. In some embodiments, each time a piece of first video information is recorded, the user equipment captures a video frame from it (for example, the frame at its start play point, or at any other time point); the captured frame identifies the piece it came from and is used for video ordering. For example, the captured frames of first video information 1, 2, 3, and 4 are arranged in sequence, and the user can reorder the pieces by moving these captured frames. If the frame captured from first video information 3 is moved before the frame captured from first video information 1, then, based on that adjustment, the order of the first video information acquired by the user equipment becomes, from front to back: first video information 3, first video information 2, first video information 1, first video information 4.
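The two ordering rules above (recording-completion time, then optional manual adjustment via the captured thumbnail frames) can be sketched as follows; `Clip`, `finished_at`, and the function names are hypothetical, chosen only for illustration:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    finished_at: float  # recording-completion timestamp, in seconds

def default_order(clips):
    """Rule 1: order clips by recording-completion time, earliest first."""
    return sorted(clips, key=lambda c: c.finished_at)

def apply_manual_order(clips, names_in_order):
    """Rule 2: the user drags captured thumbnail frames to reorder clips."""
    by_name = {c.name: c for c in clips}
    return [by_name[n] for n in names_in_order]
```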
In some embodiments, the method further comprises step S104 (not shown), and in step S104, the user equipment generates one or more recorded first video information. For example, the generating process of the one or more first video information is a recording process of the one or more first video information. After the one or more first video information are recorded completely, the user equipment acquires the one or more first video information to provide a basis for subsequently completing a production task of the target video.
In some embodiments, the generating one or more recorded first video information comprises: recording first video information; extracting portrait video information in the first video information from the first video information, and recording the portrait video information in the first video information as recording reference information to obtain second first video information; extracting portrait video information in the second first video information from the second first video information, and recording third first video information by taking the portrait video information in the first video information and the portrait video information in the second first video information as recording reference information; by analogy, recording the (N + 1) th first video information by taking the portrait video information in the first video information to the portrait video information in the Nth first video information as recording reference information, wherein N is a positive integer; and generating one or more recorded first video information according to the first video information to the (N + 1) th first video information. 
For example, the user equipment records the first piece of first video information, which contains portrait information and background information and whose video duration is a preset length (e.g., 5 min). Then, in response to a recording operation by the user, the user equipment prepares to capture the second piece. In some embodiments, extracting the portrait video information from the first piece and recording the second piece with it as recording reference information includes: performing portrait segmentation on the first piece to extract its portrait video information, and storing the background video information apart from the portrait video information; then, when recording the second piece, displaying the portrait information from that portrait video information in the recording picture of the second piece, as recording reference information for the recording. For example, the user equipment separates the portrait video information from the first piece (i.e., removes the background video information and keeps only the portrait video information). The separation may proceed as follows: the user equipment converts the first piece into frame-by-frame images, performs portrait separation on each frame (for example, using an image-segmentation API such as Face++, remove.bg, or AIP, or a trained neural network), and recombines the processed frames to generate the portrait video information. At this point the portrait video information contains no background information (its background is transparent).
In some embodiments, the user equipment separates the portrait video information from the background video information in the first video information without deleting the background video information, and the user equipment retains the background video information so that the background video information of the first video information is used as the background video information of the generated target video when the ordering of the first video information is adjusted to the end.
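The per-frame separation, with the background retained rather than discarded, might look like the sketch below. `split_portrait` and `person_mask_fn` are illustrative names; a real system would supply the mask function from a segmentation API or a trained network:

```python
import numpy as np

def split_portrait(frames, person_mask_fn):
    """Split each RGB frame into a portrait layer (RGBA) and a background layer.

    `person_mask_fn(frame)` returns an H x W boolean array that is True on
    person pixels; it stands in for a real segmentation backend.
    """
    portrait_frames, background_frames = [], []
    for frame in frames:
        mask = person_mask_fn(frame)
        alpha = np.where(mask, 255, 0).astype(np.uint8)
        portrait_frames.append(np.dstack([frame, alpha]))  # transparent outside the person
        background = frame.copy()
        background[mask] = 0  # person removed; background kept for later reuse
        background_frames.append(background)
    return portrait_frames, background_frames
```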
When recording of the second first video information starts (for example, when the progress is 0:00), the user equipment presents the portrait video information in a picture-in-picture mode in the recording picture of the second first video information. In some embodiments, when recording of the second first video information starts, the user equipment presents a preset video frame of the portrait video information (for example, the starting portrait frame of the portrait video information, or the portrait frame corresponding to a time point preset by the user equipment) in the recording picture of the second first video information, to serve as recording reference information while the user records the second first video information; that is, the specific position on the screen of the person in the previous video can be seen in real time while the second first video information is recorded. In some embodiments, when recording of the second first video information starts, the user equipment presents the starting video picture of the portrait video information in the recording picture of the second first video information; as the progress bar of the second first video information advances, the picture presented in the recording picture of the second first video information changes to the portrait picture corresponding to that progress in the portrait video information (for example, when the progress bar of the second first video information is at 0:02, the portrait video information itself plays at a progress of 0:02). This serves as recording reference information while the user records the second first video information: the specific position on the screen of the person in the previous video can be seen in real time, so the user can conveniently arrange the performance of the character in the second first video information (e.g., the action, expression, language, etc. of the character) with reference to the previous character picture.
After the second first video information is generated in the above manner, the user equipment prepares to shoot the third first video information in response to the user's recording operation. The user equipment separates the portrait video information in the second first video information from the second first video information; in some embodiments, this separation is identical to the separation of the portrait video information in the first video information and is not repeated here. Since the user equipment has already obtained the portrait video information in the first video information, when recording of the third first video information starts, the user equipment uses the portrait video information in the second first video information and the portrait video information in the first video information as recording reference information for recording the third first video information. For example, the user equipment may present, in the recording picture of the third first video information, the portrait picture at a preset time point in the portrait video information of the second first video information and the portrait picture at a preset time point in the portrait video information of the first video information, wherein the portrait picture from the first video information is presented in a sequence before the portrait picture from the second first video information. In some embodiments, the portrait picture from the portrait video information of the first video information and the portrait picture from the portrait video information of the second first video information, both presented in the recording picture of the third first video information, change as the recording progress bar advances (for example, when the recording progress is 0:05, the portrait video information of the two first video information plays to the progress of 0:05). By analogy, when the (N+1)th first video information needs to be recorded, the portrait information corresponding to all first video information preceding the (N+1)th first video information is presented in the recording picture of the (N+1)th first video information, to serve as the recording reference information for that recording. Here, the first video information, the second first video information, ..., and the (N+1)th first video information generated in the foregoing are used as the one or more recorded first video information to be acquired by the user equipment.
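The iterative scheme above — each new take overlays the portrait tracks separated from all earlier takes — can be sketched as the following control flow. This is a minimal sketch; `record_clip` and `separate_portrait` are hypothetical stand-ins for the device's recording and segmentation steps, not a real API:

```python
def record_clip(reference_tracks):
    """Stand-in for recording a clip while the given portrait tracks
    are shown picture-in-picture as recording reference information."""
    return {"frames": f"clip with {len(reference_tracks)} reference overlays"}

def separate_portrait(clip):
    """Stand-in for running portrait segmentation on a recorded clip."""
    return {"portrait_of": clip["frames"]}

def record_series(n_clips):
    """Record n clips; the k-th take sees the k-1 earlier portrait tracks."""
    clips, portrait_tracks = [], []
    for _ in range(n_clips):
        clip = record_clip(portrait_tracks)   # overlay all earlier portraits
        clips.append(clip)
        portrait_tracks.append(separate_portrait(clip))
    return clips, portrait_tracks

clips, tracks = record_series(3)
# the third take was recorded with two reference overlays
```

The point of the sketch is the accumulation: the reference list grows by one separated portrait track per take, which is exactly the "(N+1)th recording references all N earlier portraits" behavior described above.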
In some embodiments, the presenting of the portrait information in the portrait video information in the recording picture of the second first video information as recording reference information for recording the second first video information includes: determining first position information of the portrait information in the portrait video information in the first video information; determining, according to the first position information, second position information of the portrait information in the portrait video information as presented in the recording picture of the second first video information; and recording the second first video information with the portrait information in the portrait video information presented at the second position information as recording reference information. For example, in order to make the portrait information serving as recording reference information more accurate, the user equipment needs to make the position of the portrait information in the portrait video information within the first video information correspond to the position of the portrait information serving as recording reference information within the second first video information.
In some embodiments, the user equipment uses static portrait information in the portrait video information as the recording reference information. The user equipment uses the starting portrait frame of the portrait video information, or the portrait frame corresponding to a predetermined time point, as the portrait information, and the portrait information is statically fixed at a position for the user's reference during recording. For example, the portrait information is the portrait frame at a progress of 0:10, and at that progress the portrait frame is located at first position information in the video picture of the first video information (for example, A1 cm from the left vertical screen edge and A2 cm from the upper horizontal screen edge). Then, when the portrait information is used as recording reference information and recording of the second first video information starts, the user equipment determines second position information of the portrait information in the video picture of the second first video information (namely, A1 cm from the left vertical screen edge and A2 cm from the upper horizontal screen edge) and performs recording based on the recording reference information.
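The edge-distance correspondence described above amounts to mapping the portrait's position in the first clip's frame onto the same relative position in the second clip's recording frame. A minimal sketch, assuming positions are given in pixels and the two frames may differ in resolution (the function name and pixel values are illustrative, not from the patent):

```python
def map_position(first_pos, first_size, second_size):
    """Map a portrait anchor (x, y) in the first clip's frame to the same
    relative position in the second clip's recording frame by normalizing
    against each frame's width and height."""
    x, y = first_pos
    w1, h1 = first_size
    w2, h2 = second_size
    return (x / w1 * w2, y / h1 * h2)

# portrait anchored 120 px from the left edge and 80 px from the top
# in a 1920x1080 frame, mapped into a 1280x720 recording frame
second_pos = map_position((120, 80), (1920, 1080), (1280, 720))
# → (80.0, ≈53.33)
```

When both frames share the same resolution, this reduces to the identical "A1 cm from the left, A2 cm from the top" placement given in the example.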
In some embodiments, the user equipment uses the entire dynamic portrait information in the portrait video information as the recording reference information; that is, the user equipment uses the entire moving picture performed by the portrait in the portrait video information as the portrait information, and the portrait information moves dynamically through different positions for the user's reference during recording. For example, from the start playing point to the end playing point, the portrait in the first video information runs from the left vertical screen edge to the right vertical screen edge, so that the motion track of the portrait changes with the different progress time points. Then, when the portrait information is used as recording reference information and recording of the second first video information starts, the user equipment determines the first position information of the portrait information at each time point in the first video information, and maps the first position information at each time point onto the video picture corresponding to the matching recording time point of the second first video information, so as to determine the second position information of the portrait information in the video picture corresponding to each recording time point of the second first video information. That is, when the user shoots the second first video information, the recording scheme can be adjusted according to the position of the portrait information in the picture at each recording time point, so that a better synthesis effect is produced when the second first video information subsequently participates in synthesizing the target video.
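The dynamic case extends the static mapping to every time point: the portrait's per-timestamp trajectory in the first clip is remapped into the second clip's recording frame. A toy sketch under the same assumptions (pixel positions, possibly differing resolutions; names are illustrative):

```python
def map_trajectory(trajectory, first_size, second_size):
    """Map a per-timestamp portrait trajectory from the first clip's frame
    into the second clip's recording frame.
    `trajectory` maps a time in seconds to an (x, y) pixel position."""
    w1, h1 = first_size
    w2, h2 = second_size
    return {t: (x / w1 * w2, y / h1 * h2)
            for t, (x, y) in trajectory.items()}

# portrait runs left → right across a 1000x1000 frame at mid-height,
# remapped into a 500x500 recording frame
traj = {0.0: (0, 500), 5.0: (500, 500), 10.0: (1000, 500)}
mapped = map_trajectory(traj, (1000, 1000), (500, 500))
# mapped[10.0] → (500.0, 250.0)
```

At recording time, the overlay for progress t would be drawn at `mapped[t]`, so the reference portrait traces the same motion track in the new frame.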
In step S102, the user equipment generates target video information according to the portrait video information corresponding to each of the one or more pieces of recorded first video information and the recorded second video information, where the video duration of each of the one or more pieces of recorded first video information is consistent with the video duration of the second video information. For example, the portrait picture in each recorded first video information keeps being presented from the video starting stage to the video ending stage. The portrait picture in each first video information includes one or more portraits; here, only one portrait in each first video information is taken as an example. On the premise that the portrait picture in each recorded first video information keeps being presented from the video starting stage to the video ending stage, the video duration of the portrait video information separated from each first video information is consistent with the duration of that first video information. In some embodiments, the video duration of the separated portrait video information is inconsistent with the duration of the first video information (that is, the video duration of the portrait video information is inconsistent with the duration of the second video information); in this case, the number of people in the video frame corresponding to some time point in the finally synthesized target video information may be less than the number of people in the video frame at the starting time point.
In some embodiments, the video duration of the portrait video information corresponding to each piece of first video information is consistent with the video duration of the second video information, and the occlusion order of the portrait video information corresponding to each piece of first video information in the target video information is consistent with the preset sorting order corresponding to each piece of first video information. The occlusion order is the superposition relationship of the portrait video information corresponding to each piece of first video information. For example, the order of the plurality of pieces of first video information is: first video information 1, first video information 2; then, in the subsequently generated target video information, the picture corresponding to the portrait video information in the first video information 1 is in front of the picture corresponding to the portrait video information in the first video information 2 (that is, there is a high probability that the picture corresponding to the portrait video information in the first video information 1 blocks the picture corresponding to the portrait video information in the first video information 2).
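The occlusion order described here is a standard painter's-algorithm layering: drawing layers back-to-front makes the first-sorted clip's portrait end up on top. A toy per-frame sketch (frames are modeled as flat pixel lists and `None` marks transparent, non-portrait pixels; this encoding is invented for illustration):

```python
def composite(background, portrait_layers):
    """Overlay portrait layers on a background frame using the painter's
    algorithm: layers are drawn in reverse sort order, so the layer at
    index 0 (the first-sorted clip) is drawn last and occludes the rest."""
    frame = list(background)
    for layer in reversed(portrait_layers):
        for i, pixel in enumerate(layer):
            if pixel is not None:        # None = transparent (no portrait)
                frame[i] = pixel
    return frame

bg = ["bg"] * 4                          # background from the second video
layer1 = [None, "p1", "p1", None]        # first video information 1 (frontmost)
layer2 = [None, None, "p2", "p2"]        # first video information 2
result = composite(bg, [layer1, layer2])
# where the two portraits overlap (index 2), p1 blocks p2
# → ['bg', 'p1', 'p1', 'p2']
```

Running this over every frame of equal-duration clips yields exactly the blocking relationship the sorting order prescribes.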
In some embodiments, the second video information is ordered after the one or more recorded first video information. For example, the recording completion time of the one or more recorded first video information is earlier than the recording completion time of the second video information, and the user does not adjust the sorting order of the second video information. On the premise that the second video information is ordered after the one or more recorded first video information, the second video information does not need to undergo portrait separation when the target video information is subsequently generated. Here, both the one or more first video information and the second video information are generated after the user records them through the user equipment; the only difference is that the one or more first video information need to have corresponding portrait video information separated out to be combined with the second video information to generate the target video information, whereas the second video information does not need portrait separation, and the background video information of the second video information serves as the background information of the subsequent target video information.
In some embodiments, the method further includes step S103 (not shown). In step S103, the user equipment plays the target video information from the play starting point of the target video information, where, in the video picture corresponding to the play starting point, the one or more pieces of portrait information corresponding to each first video information and the portrait information in the second video information are overlaid front-to-back in the sorting order. For example, after the target video information is generated, the user equipment plays the target video information; at the play starting point of the target video information, the portrait video information corresponding to each piece of first video information is presented in the target video information overlaid on the video picture presented by the second video information, where the overlay is determined by the sorting order of each piece of first video information (i.e., the sorting order of the portrait video information corresponding to each piece of first video information). For example, the sorting order from first to last is: first video information 1, first video information 2, first video information 3, first video information 4, and second video information 1; that is, in the presented video picture, the portrait video information of the first video information 1 blocks the portrait video information of the first video information 2, the portrait video information of the first video information 2 blocks the portrait video information of the first video information 3, and the portrait video information of the first video information 3 blocks the portrait video information of the first video information 4, all superposed on the second video information 1.
In the process of playing the target video information, the portrait video information extracted from the first video information 1, the first video information 2, the first video information 3, and the first video information 4 is played simultaneously along with the movement of the progress bar of the second video information 1. If the portraits in the plurality of videos are the same recorded person, the effect of different instances of the same person moving in the same video picture can be presented while the target video plays, making one-person multi-role video generation more efficient. In some embodiments, if the portraits in the plurality of videos are different people, the effect of different people moving in the same video picture can be presented while the target video plays, thereby improving the efficiency of the user in editing the video.
In some embodiments, the method further includes step S105 (not shown). In step S105, the user equipment acquires the recorded second video information. For example, the second video information may be uploaded after the user completes recording, or may be recorded by the user with reference to the portrait content of the one or more first video information. If the second video information is recorded with reference to the portrait content of the one or more first video information, the target video information subsequently generated from the portrait video information corresponding to each first video information and the second video information fits together more closely.
In some embodiments, the obtaining of the recorded second video information includes: acquiring the portrait information in the portrait video information in each piece of first video information; when the second video information is recorded, presenting the portrait information in the portrait video information in each piece of first video information in the recording picture of the second video information as recording reference information for recording the second video information; and acquiring the recorded second video information after recording completes. For example, the recording mode of the second video information may follow the aforementioned mode for recording the (N+1)th first video information; that is, the portrait information in the portrait video information in each first video information is presented in the recording picture of the second video information as a recording reference. In some embodiments, the user equipment determines the position information of the portrait information in each portrait video information within the corresponding first video information, and on that basis determines the position information of the portrait information in each portrait video information within the video picture of the second video information, so that the recording of the second video information has a reference basis and the subsequently generated target video information meets the user's requirements.
In some embodiments, in step S102, in response to a trigger event in the user equipment, the user equipment performs a preset operation corresponding to the trigger event on at least one of the one or more first video information and the second video information to generate a plurality of candidate video information; and generating target video information according to the portrait video information corresponding to each candidate video information in the first M candidate video information in the plurality of candidate video information and the last candidate video information, wherein the video time length of each candidate video information is consistent, the video time length of the portrait video information corresponding to each candidate video information is consistent with the video time length of the last candidate video information, M is a positive integer, and the plurality of candidate video information comprises the first M candidate video information and the last candidate video information. For example, after the user equipment acquires the one or more first video information and the second video information, a plurality of candidate video information are generated in response to a preset operation of the user on at least one video information of the plurality of video information. In some embodiments, the preset operation comprises at least any one of:
1) a deletion operation of the at least one video information;
2) a re-recording operation of the at least one video information;
3) and adjusting the sequencing order of the at least one piece of video information. In some embodiments, the user may perform at least one of a deletion operation, a re-recording operation, and an adjustment operation of the sorting order on the at least one video information simultaneously. Taking any one preset operation as an example here:
for example, the one or more first video information and the second video information are sorted in order (e.g., the first video information 1, the first video information 2, the first video information 3, the first video information 4, and the second video information 1), in response to a user's trigger operation on at least one of the plurality of video information, for example, the user deletes the first video information 2 manually, the plurality of candidate video information generated at this time include the first video information 1, the first video information 3, the first video information 4, and the second video information 1, and the user equipment generates the target video information according to the portrait video information extracted from the first video information 1, the first video information 3, and the first video information 4, and the second video information 1.
For example, the one or more first video information and the second video information are ordered in sequence (e.g., first video information 1, first video information 2, first video information 3, first video information 4, and second video information 1). In response to a trigger operation by the user on at least one of the video information, for example, the user re-records the first video information 2 and the first video information 3; the ordering of these two first video information among all the video information is unchanged, and only their video content changes. The plurality of candidate video information generated at this time include the first video information 1, the re-recorded first video information 2, the re-recorded first video information 3, the first video information 4, and the second video information 1, and the user equipment then generates the target video information according to the portrait video information extracted from the first video information 1, the re-recorded first video information 2, the re-recorded first video information 3, and the first video information 4, together with the second video information 1.
In some embodiments, the preset operation includes an operation of adjusting the sorting order of the at least one video information, and the generating of the target video information according to the corresponding portrait video information in each of the first M candidate video information of the plurality of candidate video information and the last candidate video information includes: generating the target video information according to the portrait video information corresponding to each of the first M candidate video information, the third position information of the corresponding portrait video information within each candidate video information, the adjusted sorting order of each candidate video information, and the last candidate video information. For example, the one or more first video information and the second video information are ordered in sequence. In response to a sorting adjustment operation by the user (for example, the first video information 3 is moved before the first video information 2, and the second video information 1 is moved before the first video information 1), the generated plurality of candidate video information include the second video information 1, the first video information 3, the first video information 2, and the first video information 4. The user equipment extracts the portrait video information corresponding to the second video information 1, the first video information 3, and the first video information 2 respectively; since the user equipment preserves the background video information when performing portrait separation on a video, the first video information 4 remains a complete video and is not segmented.
The user equipment then generates the target video information according to the portrait video information corresponding to the second video information 1, the first video information 3, and the first video information 2, together with the first video information 4. In some embodiments, the generating of the target video information according to the corresponding portrait video information in each of the first M candidate video information, the third position information of the corresponding portrait video information within each candidate video information, the adjusted sorting order of each candidate video information, and the last candidate video information includes: extracting the corresponding portrait video information from each of the first M candidate video information; determining, according to the third position information of the corresponding portrait video information within each candidate video information, fourth position information of each portrait video information in the picture of the last candidate video information; and presenting each portrait video information in the picture of the last candidate video information according to the fourth position information, and superposing the portrait video information according to the adjusted sorting order of each candidate video information, to generate the target video information, where the portrait video information in the last candidate video information is ordered after each of the other portrait video information.
For example, on the premise that the user equipment has separated the portrait video information from each of the first M candidate video information, the user equipment determines the third position information of each portrait video information within the corresponding candidate video information (for example, a dynamic position moving in real time, or the position information of the picture in the portrait video information corresponding to a certain progress time point in the candidate video information). Then, the user equipment determines, according to the third position information, the fourth position information of each portrait video information in the picture of the last candidate video information; the specific manner of determining the fourth position information may refer to the foregoing step of determining, according to the first position information, the second position information of the portrait information in the portrait video information presented in the recording picture of the second first video information.
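The three preset operations walked through above (deletion, re-recording, and sort-order adjustment) can be sketched as transformations on the ordered clip list. This is a hypothetical sketch; the operation encoding (a dict with a `type` field) is invented for illustration:

```python
def apply_operation(videos, op):
    """Apply one preset editing operation to the ordered clip list.
    `op` is a hypothetical descriptor:
      {'type': 'delete',   'name': ...}
      {'type': 'rerecord', 'name': ...}                  # same position, new content
      {'type': 'reorder',  'name': ..., 'before_index': ...}
    """
    videos = list(videos)                 # leave the input list untouched
    if op["type"] == "delete":
        videos.remove(op["name"])
    elif op["type"] == "rerecord":
        i = videos.index(op["name"])
        videos[i] = op["name"] + "'"      # marker for re-recorded content
    elif op["type"] == "reorder":
        videos.remove(op["name"])
        videos.insert(op["before_index"], op["name"])
    return videos

clips = ["first1", "first2", "first3", "first4", "second1"]
after_delete = apply_operation(clips, {"type": "delete", "name": "first2"})
# → ['first1', 'first3', 'first4', 'second1']
```

Whatever list results becomes the candidate sequence: the first M entries contribute separated portrait tracks, and the last entry supplies the background.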
Fig. 3 is a flowchart illustrating a process for generating video information according to another embodiment of the present application. The user clicks the video recording button to start recording; after a 5-second countdown, the user equipment records the first video, which contains portrait information. After recording completes, a video tag is generated at the lower left corner of the user equipment, the video is subjected to portrait separation, and the duration of subsequent videos is determined by the first video. When the user equipment records the Nth video, the previously separated portraits are presented together with the video in picture-in-picture mode. The user can long-press a video tag at the lower left corner to delete and re-shoot a recorded video, and can drag a video tag to change the front-to-back ordering of the people (the ordering determines the occlusion relationship). After the video creation is finished, the user clicks the "Done" button in the upper right corner, whereupon the N videos are combined into one video.
Fig. 4 shows a user equipment for generating video information according to an embodiment of the present application. The user equipment includes a first module 101 and a second module 102. Specifically, the first module 101 is configured to acquire one or more pieces of recorded first video information, where each piece of first video information corresponds to a preset sorting order. Each piece of first video information includes portrait information; the portrait information in each first video information may be recorded by the same person (e.g., person A) or by different people (e.g., persons A, B, C, ...), and each first video information includes at least one picture containing the portrait information from its start playing point to its end playing point. In some embodiments, there is at least one first video information among the one or more first video information in which the portrait information is plural (for example, two or more portraits appear in the video of a first video information a). The one or more video information are recorded by the user equipment, and the user equipment sorts each first video information after recording the one or more first video information.
A second module 102 is configured to generate target video information according to the portrait video information corresponding to each of the one or more pieces of recorded first video information and the recorded second video information, where the video duration of each first video information is consistent with the video duration of the second video information. For example, the portrait picture in each recorded first video information keeps being presented from the video start stage to the video end stage; the portrait picture in each first video information includes one or more portraits, and here only one portrait in each first video information is taken as an example. On the premise that the portrait picture in each recorded first video information keeps being presented from the video start stage to the video end stage, the video duration of the portrait video information separated from each first video information is consistent with the duration of that first video information; and, to keep the subsequent synthesis of the target video information efficient, the duration of each first video information is kept consistent with the duration of the second video information.
In some embodiments, the video duration of the portrait video information corresponding to each piece of first video information is consistent with the video duration of the second video information, and the blocking sequence of the portrait video information corresponding to each piece of first video information in the target video information is consistent with the preset sorting sequence corresponding to each piece of first video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the user equipment further includes a third module 103 (not shown), where the third module 103 is configured to play the target video information from a play start point of the target video information, where one or more corresponding portrait information in each first video information in a video frame corresponding to the play start point and the portrait information in the second video information are forward overlaid in a sorted order. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the second video information is ordered after the one or more recorded first video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the user equipment further includes a fourth module 104 (not shown), and the fourth module 104 is configured to generate one or more pieces of recorded first video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the generating one or more recorded first video information comprises:
recording first video information;
extracting portrait video information in the first video information from the first video information, and recording the portrait video information in the first video information as recording reference information to obtain second first video information;
extracting portrait video information in the second first video information from the second first video information, and recording third first video information by taking the portrait video information in the first video information and the portrait video information in the second first video information as recording reference information;
by analogy, recording the (N + 1) th first video information by taking the portrait video information in the first video information to the portrait video information in the Nth first video information as recording reference information, wherein N is a positive integer;
and generating one or more recorded first video information according to the first video information to the (N + 1) th first video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the extracting the portrait video information in the first video information from the first video information, and recording the portrait video information in the first video information as recording reference information to record second first video information includes:
performing portrait segmentation on the first video information to extract portrait video information in the first video information, and storing background video information except the portrait video information;
and when the second first video information is recorded, displaying the portrait information in the portrait video information in a recording picture of the second first video information to be used as recording reference information to record the second first video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
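The two steps above — segmenting out the portrait and retaining the background — can be illustrated per frame, given a segmentation mask. This is a toy sketch: in practice the mask would come from a portrait-segmentation model and frames would be pixel arrays, whereas here frames are short lists of labels and the mask is supplied directly:

```python
def split_portrait(frame, mask):
    """Split one video frame into a portrait layer and a background layer
    using a per-pixel segmentation mask (True = portrait pixel).
    `None` marks the absent pixels in each resulting layer."""
    portrait = [p if m else None for p, m in zip(frame, mask)]
    background = [None if m else p for p, m in zip(frame, mask)]
    return portrait, background

frame = ["sky", "face", "face", "tree"]
mask = [False, True, True, False]
portrait, background = split_portrait(frame, mask)
# portrait   → [None, 'face', 'face', None]
# background → ['sky', None, None, 'tree']
```

Applying this to every frame yields the separated portrait video information (used as the picture-in-picture recording reference) while the background video information is stored alongside it, as the two steps describe.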
In some embodiments, the presenting the portrait information in the portrait video information in the recording picture of the second first video information to record the second first video information as the recording reference information includes: determining first position information of portrait information in the portrait video information in the first video information;
determining second position information of the portrait information in the portrait video information presented in the recording picture of the second first video information according to the first position information;
and recording the portrait information in the portrait video information presented in the second position information as recording reference information to the second first video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the preset ordering order comprises at least any one of:
the recording completion time sequence of each first video information;
the user manually adjusts the sequence of each first video information; the related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the user equipment further includes a fifth module 105 (not shown), and the fifth module 105 is configured to obtain the recorded second video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
In some embodiments, the obtaining the recorded second video information includes:
acquiring portrait information in portrait video information in each piece of first video information;
when recording the second video information, presenting the portrait information from the portrait video information of each piece of first video information in the recording picture of the second video information as recording reference information;
and acquiring the recorded second video information after the second video information is recorded. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, and therefore are not described again, and are included herein by reference.
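To illustrate how the portraits of the earlier clips might be painted into the live recording picture as reference information, here is a hedged NumPy sketch (the mask-based representation of segmented portraits is an assumption, not the patent's data format):

```python
import numpy as np

def overlay_reference(frame, portraits):
    """Paint the portrait pixels of each earlier clip onto a copy of the
    live recording frame so they serve as alignment references. Each
    portrait is an (H, W, 3) image plus an (H, W) boolean mask from
    portrait segmentation; later entries draw over earlier ones."""
    out = frame.copy()
    for image, mask in portraits:
        out[mask] = image[mask]
    return out

frame = np.zeros((4, 4, 3), dtype=np.uint8)        # live camera frame
image = np.full((4, 4, 3), 255, dtype=np.uint8)    # earlier clip's pixels
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                              # segmented portrait region
ref = overlay_reference(frame, [(image, mask)])
```

Working on a copy leaves the camera frame itself untouched, so the reference portraits guide the performer without being recorded into the new clip.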
In some embodiments, the one-two module 102 is configured to, in response to a trigger event in the user equipment, perform a preset operation corresponding to the trigger event on at least one of the one or more pieces of first video information and the second video information to generate a plurality of candidate video information;
and to generate target video information according to the portrait video information corresponding to each of the first M candidate video information among the plurality of candidate video information and the last candidate video information, wherein the video durations of the candidate video information are consistent with one another, the video duration of the portrait video information corresponding to each candidate video information is consistent with that of the last candidate video information, M is a positive integer, and the plurality of candidate video information comprises the first M candidate video information and the last candidate video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, are therefore not described again, and are incorporated herein by reference.
In some embodiments, the preset operation comprises at least any one of the following:
a deletion operation on the at least one piece of video information;
a re-recording operation on the at least one piece of video information;
an operation of adjusting the sorting order of the at least one piece of video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, are therefore not described again, and are incorporated herein by reference.
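The three preset operations could be sketched over a working list of candidate clips as follows (the clip dictionaries and the function signature are illustrative assumptions, not the patent's API):

```python
def apply_operation(clips, op, target_id, new_clip=None, new_index=None):
    """Apply one preset operation to a copy of the clip list:
    'delete' removes the target, 're-record' swaps in a fresh recording,
    and 'reorder' moves the target to a new position."""
    clips = list(clips)  # leave the caller's list untouched
    i = next(k for k, c in enumerate(clips) if c["id"] == target_id)
    if op == "delete":
        clips.pop(i)
    elif op == "re-record":
        clips[i] = new_clip
    elif op == "reorder":
        clips.insert(new_index, clips.pop(i))
    return clips

clips = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
after_delete = apply_operation(clips, "delete", "b")
after_reorder = apply_operation(clips, "reorder", "c", new_index=0)
```

Each operation yields a new candidate list, from which the plurality of candidate video information described above can be assembled.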
In some embodiments, the preset operation includes an operation of adjusting the sorting order of the at least one piece of video information, and generating the target video information according to the portrait video information corresponding to each of the first M candidate video information among the plurality of candidate video information and the last candidate video information includes:
generating the target video information according to the portrait video information corresponding to each of the first M candidate video information, the third position information of that portrait video information within its candidate video information, the adjusted sorting order of each candidate video information, and the last candidate video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, are therefore not described again, and are incorporated herein by reference.
In some embodiments, generating the target video information according to the portrait video information corresponding to each of the first M candidate video information, the third position information of that portrait video information within its candidate video information, the adjusted sorting order of each candidate video information, and the last candidate video information includes:
extracting the corresponding portrait video information from each of the first M candidate video information;
determining, according to the third position information of the portrait video information within its candidate video information, fourth position information of each portrait video information in the picture of the last candidate video information;
and presenting each portrait video information in the picture of the last candidate video information according to the fourth position information, and superposing the portrait video information in the adjusted sorting order of the candidate video information to generate the target video information, wherein the portrait video information of the last candidate video information is layered behind each of the other portrait video information. The related operations are the same as or similar to those of the embodiment shown in FIG. 2, are therefore not described again, and are incorporated herein by reference.
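A hedged NumPy sketch of the superposition step above: the last candidate's frame forms the bottom layer, and the extracted portrait layers of the first M candidates are stacked over it in the adjusted order, so later layers occlude earlier ones (the layer representation and coordinates are assumptions for illustration):

```python
import numpy as np

def compose_target_frame(base_frame, portrait_layers, order):
    """Build one frame of the target video: start from the frame of the
    last candidate clip (bottom layer) and paint the portrait layers of
    the first M candidates over it in the adjusted sorting order.
    Each layer is (image, boolean mask, (x, y) fourth-position offset)."""
    out = base_frame.copy()
    for idx in order:
        image, mask, (x, y) = portrait_layers[idx]
        h, w = mask.shape
        region = out[y:y + h, x:x + w]   # view into the output frame
        region[mask] = image[mask]       # later layers occlude earlier ones
    return out

base = np.zeros((4, 4, 3), dtype=np.uint8)  # last candidate's frame
layer_a = (np.full((2, 2, 3), 100, np.uint8), np.ones((2, 2), bool), (0, 0))
layer_b = (np.full((2, 2, 3), 200, np.uint8), np.ones((2, 2), bool), (1, 1))
frame = compose_target_frame(base, [layer_a, layer_b], order=[0, 1])
```

Where the two portrait regions overlap, the layer painted later (per the adjusted sorting order) wins, which is exactly the occlusion behaviour the superposition step describes.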
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described herein;
in some embodiments, as shown in FIG. 5, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer-readable storage medium storing computer code that, when executed, performs the method described in any of the foregoing embodiments.
The present application also provides a computer program product which, when executed by a computer device, performs the method described in any of the foregoing embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the foregoing embodiments.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (18)

1. A method for generating video information, applied to a user equipment, wherein the method comprises:
acquiring one or more pieces of recorded first video information, wherein each piece of first video information corresponds to a preset sorting order;
and generating target video information according to the portrait video information corresponding to each piece of recorded first video information and the recorded second video information, wherein the video duration of each piece of first video information is consistent with the video duration of the second video information.
2. The method according to claim 1, wherein the video duration of the portrait video information corresponding to each first video information is consistent with the video duration of the second video information, and the occlusion order of the portrait video information corresponding to each first video information in the target video information is consistent with the preset sorting order corresponding to each first video information.
3. The method of claim 1, wherein the method further comprises:
and playing the target video information from a playing starting point of the target video information, wherein one or more pieces of portrait information corresponding to each piece of first video information in a video picture corresponding to the playing starting point and the portrait information in the second video information are superposed in a forward direction according to a sequencing order.
4. The method of claim 1, wherein the second video information is ordered after the one or more recorded first video information.
5. The method of any of claims 1-4, wherein the method further comprises:
and generating one or more recorded first video information.
6. The method of claim 5, wherein the generating one or more recorded first video information comprises:
recording first video information;
extracting portrait video information in the first video information from the first video information, and recording the portrait video information in the first video information as recording reference information to obtain second first video information;
extracting portrait video information in the second first video information from the second first video information, and recording third first video information by taking the portrait video information in the first video information and the portrait video information in the second first video information as recording reference information;
by analogy, recording the (N + 1) th first video information by taking the portrait video information in the first video information to the portrait video information in the Nth first video information as recording reference information, wherein N is a positive integer;
and generating one or more recorded first video information according to the first video information to the (N + 1) th first video information.
7. The method of claim 6, wherein said extracting the portrait video information in the first video information from the first video information, and recording the portrait video information in the first video information as the recording reference information to the second first video information comprises:
performing portrait segmentation on the first video information to extract portrait video information in the first video information, and storing background video information except the portrait video information;
and when the second first video information is recorded, displaying the portrait information in the portrait video information in a recording picture of the second first video information to be used as recording reference information to record the second first video information.
8. The method of claim 7, wherein said presenting the portrait information in the portrait video information in the recording of the second first video information as recording reference information comprises:
determining first position information of portrait information in the portrait video information in the first video information;
determining second position information of the portrait information in the portrait video information presented in the recording picture of the second first video information according to the first position information;
and recording the portrait information in the portrait video information presented in the second position information as recording reference information to the second first video information.
9. The method of any of claims 1 to 8, wherein the preset sorting order comprises at least any one of:
the order in which the recording of each piece of first video information was completed;
and an order of the pieces of first video information as manually adjusted by the user.
10. The method of any of claims 1 to 9, wherein the method further comprises:
and acquiring the recorded second video information.
11. The method of claim 10, wherein the obtaining the recorded second video information comprises:
acquiring portrait information in portrait video information in each piece of first video information;
when the second video information is recorded, displaying the portrait information in the portrait video information in each piece of first video information in a recording picture of the second video information to be used as recording reference information to record the second video information;
and acquiring the recorded second video information after the second video information is recorded.
12. The method of claim 11, wherein the generating target video information according to the portrait video information corresponding to each of the one or more recorded first video information and the recorded second video information comprises:
responding to a trigger event in the user equipment, and executing a preset operation corresponding to the trigger event on at least one of the one or more first video information and the second video information to generate a plurality of candidate video information;
and generating target video information according to the portrait video information corresponding to each candidate video information in the first M candidate video information in the plurality of candidate video information and the last candidate video information, wherein the video time length of each candidate video information is consistent, the video time length of the portrait video information corresponding to each candidate video information is consistent with the video time length of the last candidate video information, M is a positive integer, and the plurality of candidate video information comprises the first M candidate video information and the last candidate video information.
13. The method of claim 12, wherein the preset operation comprises at least any one of:
a deletion operation of the at least one video information;
a re-recording operation of the at least one video information;
and adjusting the sequencing order of the at least one piece of video information.
14. The method of claim 13, wherein the preset operation comprises an adjustment operation to a sorting order of the at least one video information,
the generating target video information according to the portrait video information corresponding to each candidate video information in the first M candidate video information in the plurality of candidate video information and the last candidate video information includes:
and generating target video information according to the portrait video information corresponding to each candidate video information in the first M candidate video information in the plurality of candidate video information, the third position information of the portrait video information corresponding to each candidate video information in the candidate video information, the adjusted ordering sequence of each candidate video information and the last candidate video information.
15. The method of claim 14, wherein the generating the target video information according to the corresponding portrait video information in each of the first M candidate video information in the plurality of candidate video information, the third position information of the corresponding portrait video information in each candidate video information in the candidate video information, the adjusted ordering order of each candidate video information, and the last candidate video information comprises:
extracting corresponding portrait video information in each candidate video information in the first M candidate video information;
determining fourth position information of each portrait video information in the picture of the last candidate video information according to the third position information of the corresponding portrait video information in each candidate video information in the candidate video information;
and presenting each portrait video information in the picture of the last candidate video information according to the fourth position information, and performing superposition sorting on the portrait video information according to the sorting sequence adjusted by each candidate video information to generate target video information, wherein the sequence of the portrait video information in the last candidate video information is positioned behind each portrait video information.
16. An apparatus for generating video information, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any of claims 1 to 15.
17. A computer-readable medium comprising instructions that, when executed by a computer, cause the computer to perform the operations of any of the methods of claims 1-15.
18. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method according to any one of claims 1 to 15 when executed by a processor.
CN202110119060.5A 2021-01-28 2021-01-28 Method and equipment for generating video information Pending CN112822419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110119060.5A CN112822419A (en) 2021-01-28 2021-01-28 Method and equipment for generating video information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110119060.5A CN112822419A (en) 2021-01-28 2021-01-28 Method and equipment for generating video information

Publications (1)

Publication Number Publication Date
CN112822419A true CN112822419A (en) 2021-05-18

Family

ID=75859880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110119060.5A Pending CN112822419A (en) 2021-01-28 2021-01-28 Method and equipment for generating video information

Country Status (1)

Country Link
CN (1) CN112822419A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727026A (en) * 2021-08-31 2021-11-30 维沃移动通信(杭州)有限公司 Video recording method, device and equipment
CN117014247A (en) * 2023-08-28 2023-11-07 广东金朋科技有限公司 Scene generation method, system and storage medium based on state learning


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000013769A (en) * 1998-05-22 2000-01-14 Samsung Electron Co Ltd Multipoint image conference system and its realizing method
US6195116B1 (en) * 1998-05-22 2001-02-27 Samsung Electronics Co., Ltd. Multi-point video conferencing system and method for implementing the same
CN105007395A (en) * 2015-07-22 2015-10-28 深圳市万姓宗祠网络科技股份有限公司 Privacy processing method for continuously recording video
CN106604144A (en) * 2015-10-16 2017-04-26 上海龙旗科技股份有限公司 Video processing method and device
CN106686440A (en) * 2016-12-28 2017-05-17 杭州趣维科技有限公司 Quick and highly efficient picture-in-picture video manufacturing method applied to mobile phone platform
CN106851162A (en) * 2017-02-17 2017-06-13 成都依能科技股份有限公司 video recording method and device
CN108259781A (en) * 2017-12-27 2018-07-06 努比亚技术有限公司 image synthesizing method, terminal and computer readable storage medium
CN110475086A (en) * 2019-07-23 2019-11-19 咪咕动漫有限公司 Video recording method and system, server and terminal
CN110557565A (en) * 2019-08-30 2019-12-10 维沃移动通信有限公司 Video processing method and mobile terminal
CN111832539A (en) * 2020-07-28 2020-10-27 北京小米松果电子有限公司 Video processing method and device and storage medium


Similar Documents

Publication Publication Date Title
WO2022001593A1 (en) Video generation method and apparatus, storage medium and computer device
CN108665742B (en) Method and device for reading through reading device
US20160321833A1 (en) Method and apparatus for generating moving photograph based on moving effect
US11445144B2 (en) Electronic device for linking music to photography, and control method therefor
KR102424296B1 (en) Method, storage medium and electronic device for providing a plurality of images
CN109656363B (en) Method and equipment for setting enhanced interactive content
US20140193138A1 (en) System and a method for constructing and for exchanging multimedia content
CN112822419A (en) Method and equipment for generating video information
CN114520877A (en) Video recording method and device and electronic equipment
WO2022100162A1 (en) Method and apparatus for producing dynamic shots in short video
CN113965665B (en) Method and equipment for determining virtual live image
CN114332417A (en) Method, device, storage medium and program product for multi-person scene interaction
CN110572717A (en) Video editing method and device
WO2024007290A1 (en) Video acquisition method, electronic device, storage medium, and program product
CN113490063B (en) Method, device, medium and program product for live interaction
CN114143568A (en) Method and equipment for determining augmented reality live image
WO2024153191A1 (en) Video generation method and apparatus, electronic device, and medium
CN108960130B (en) Intelligent video file processing method and device
CN112818719A (en) Method and device for identifying two-dimensional code
WO2023241377A1 (en) Video data processing method and device, equipment, system, and storage medium
CN114025237B (en) Video generation method and device and electronic equipment
CN114222069B (en) Shooting method, shooting device and electronic equipment
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN113657245B (en) Method, device, medium and program product for human face living body detection
KR101947553B1 (en) Apparatus and Method for video edit based on object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination