CN111314793A - Video processing method, apparatus and computer readable medium - Google Patents


Info

Publication number
CN111314793A
CN111314793A
Authority
CN
China
Prior art keywords
video
fragment
target
user equipment
fragments
Prior art date
Legal status
Granted
Application number
CN202010183770.XA
Other languages
Chinese (zh)
Other versions
CN111314793B (en)
Inventor
陈大年
高超
Current Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202010183770.XA
Publication of CN111314793A
Application granted
Publication of CN111314793B
Legal status: Active

Classifications

    • H04N21/8456 : Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain
    • H04N21/2743 : Video hosting of uploaded data from client (servers for content distribution)
    • H04N21/47205 : End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/4884 : Data services, e.g. news ticker, for displaying subtitles
    • H04N23/62 : Control of camera parameters via user interfaces

Abstract

The application provides a video processing scheme in which a first user device sequentially obtains a plurality of video fragments in a segmented shooting mode. Each video fragment forms part of a target video, and together the fragments constitute the complete target video. After obtaining any one video fragment, the first user device sends it to the target device, achieving segmented uploading: transmission of fragments already shot proceeds in parallel with the shooting of subsequent fragments, so there is no need to wait for the whole target video to be sent together after shooting completes, and the overlong wait caused by a long transmission time is avoided. Meanwhile, the first user device sends fragment pointers for the video fragments to the target device; the pointers identify the arrangement order of the fragments so that they can be combined into the complete target video.

Description

Video processing method, apparatus and computer readable medium
Technical Field
The present application relates to the field of information technology, and in particular, to a video processing method, device, and computer readable medium.
Background
With the continuous development of internet technology, users' demand for diversified content is increasingly strong, and video content such as short videos and live broadcasts has become a new industry hotspot. When a user wants to share and distribute such video content, user equipment is needed to complete the related operations of recording, editing, and transmission, so that the video can be provided to the required target device. In this process, recording, editing, and transmission of a short video are all performed in a single-take, linear-editing, synchronous-upload manner: the user shoots the whole video in one take, edits it, and then clicks a button such as "publish" or "share" to input a confirmation instruction, after which the video is transmitted. Although this approach is simple and clear, it lacks flexibility. Especially as recorded videos grow longer, the traditional linear record-edit-upload flow makes the user wait a long time after confirmation, resulting in a poor user experience.
Summary of the Application
An object of the present application is to provide a video processing method, apparatus, and computer readable medium.
To achieve the above object, some embodiments of the present application provide a video processing method, including:
the method comprises the steps that first user equipment obtains a target video comprising a plurality of video fragments, wherein the video fragments are sequentially obtained in a segmented shooting mode;
after the first user equipment acquires any one video fragment, it sends that video fragment to the target device;
the first user equipment sends a fragment pointer related to the video fragments to the target equipment, wherein the fragment pointer is used for identifying the arrangement sequence of the video fragments in the target video to which the video fragments belong so as to combine the video fragments into a complete target video according to the arrangement sequence.
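The three method steps above can be sketched in code. The following is a minimal illustration, not taken from the patent itself; all names (`FragmentClient`, `send_to_target`, and so on) are hypothetical. The client assigns each captured fragment a position number, transmits the fragment as soon as it is acquired, and transmits a fragment pointer identifying that fragment's position in the target video.

```python
class FragmentClient:
    """Hypothetical sketch of the claimed method: upload each video
    fragment immediately after capture, together with a fragment pointer
    recording its position in the target video."""

    def __init__(self, send_to_target):
        # send_to_target(kind, payload) stands in for the network transfer
        self.send_to_target = send_to_target
        self.next_position = 1

    def on_fragment_captured(self, data: bytes):
        position = self.next_position
        self.next_position += 1
        # Step 2: send the fragment as soon as it is acquired
        self.send_to_target("fragment", {"position": position, "data": data})
        # Step 3: send the fragment pointer identifying its order
        self.send_to_target("pointer", {"position": position})
```

On the receiving side, the pointers alone are enough to restore the arrangement order, whatever order the fragments actually arrive in.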
The embodiment of the application also provides another video processing method, which comprises the following steps:
the target device receives video fragments and fragment pointers from the first user equipment, where the video fragments are sequentially obtained by the first user equipment in a segmented shooting mode, each video fragment is sent to the target device after being obtained, and the fragment pointers identify the arrangement order of the video fragments in the target video to which they belong, so that the video fragments can be combined into the complete target video according to that order.
Furthermore, the present application also provides a video processing device, which includes a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to execute the video processing method.
Embodiments of the present application also provide a computer readable medium, on which computer program instructions are stored, the instructions being executable by a processor to implement the video processing method.
Some embodiments of the present application provide a video processing scheme in which a first user equipment may sequentially obtain a plurality of video segments in a segmented shooting manner. Each video segment serves as a part of a target video, and together the segments constitute the complete target video. After obtaining any one video segment, the first user equipment may send it to the target equipment, thereby implementing segmented uploading: transmission of segments that have already been shot proceeds simultaneously with the shooting of subsequent segments, so there is no need to wait for the whole target video to be sent together after shooting is completed, avoiding an excessively long wait due to a long transmission time. Meanwhile, the first user equipment may send segment pointers for the video segments to the target equipment, so that the video segments can be combined, according to their arrangement order, into the complete target video.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a schematic processing flow diagram of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an association relationship between a target video and video slices in an embodiment of the present application;
FIG. 3 is a schematic processing flow diagram of another video processing method according to an embodiment of the present application;
fig. 4 is a schematic processing flow diagram of a video processing method including a video editing process according to an embodiment of the present application;
FIG. 5 is a schematic processing flow diagram of another video processing method including a video editing process according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computing device implementing video processing according to an embodiment of the present application;
the same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In a typical configuration of the present application, the terminal and the devices serving the network each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, which include both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Some embodiments of the present application provide a video processing method that combines segmented shooting with segmented transmission. The first user equipment may obtain a plurality of video slices in sequence by means of segmented shooting; each video slice is a part of the target video, and together they compose the complete target video. After the first user equipment acquires any one video slice, it can send the acquired slice to the target equipment, realizing segmented uploading: transmission of slices that have already been shot proceeds simultaneously with the shooting of subsequent slices, so the whole target video does not need to be sent together after shooting is finished, and the overlong wait caused by a long transmission time is avoided. Meanwhile, the first user equipment sends slice pointers for the video slices to the target equipment, so that the video slices can be combined into the complete target video according to their arrangement order. The first user equipment implementing the method may be any terminal equipment with a shooting function, including but not limited to a computer with a shooting function, a mobile phone, a tablet computer, a smart watch, and the like.
The embodiment of the application provides a video processing method implemented at a first user equipment. The processing flow of the method is shown in fig. 1 and comprises two parts: shooting of the target video and sending of the target video.
With respect to the shooting portion of the target video, the first user device may capture and acquire the target video comprising a plurality of video slices. The first user equipment has a camera function: it may be connected to external camera equipment and call that equipment to shoot video, or it may shoot video through a built-in camera module. For example, when the first user equipment is a mobile phone, the built-in camera on the phone may be called to complete shooting of the target video.
In this embodiment, the video slices forming the target video are obtained sequentially by means of segmented shooting, so shooting can be interrupted between slices rather than having to be completed in a single take. Referring to the segmented shooting process in fig. 1, the first user equipment first starts shooting the first video slice according to the shooting start instruction c1. The shooting start instruction may be obtained based on user input, for example in response to the user clicking a shooting start button or inputting a preset gesture operation.
The subsequent breakpoint suspension instruction c2, breakpoint start instruction c3, and shooting termination instruction c4 are obtained in a manner similar to the shooting start instruction and may likewise be based on user input. In an actual scene, the gesture operation may include, but is not limited to, the following types: a hover gesture performed by the user above the shooting interface, a touch gesture performed by the user on the shooting interface, a movement of the user equipment driven by the user while the shooting interface is displayed, and the like.
The hover gesture above the shooting interface may refer to a hovering sliding track performed by the user above the shooting interface displayed by the user equipment, within the acquisition range of an image sensor of the user equipment. The image sensor may be a charge coupled device (CCD) sensor or a complementary metal-oxide semiconductor (CMOS) sensor, which is not particularly limited in this embodiment. The hovering sliding track may include, but is not limited to, a straight line or a curve of any shape composed of a plurality of dwell points corresponding to a plurality of consecutive sliding events; this embodiment places no particular limitation on it.
The touch gesture on the shooting interface may refer to a touch sliding track performed by the user on the shooting interface displayed by the user equipment. Generally, user devices can be classified into two types according to whether the display device supports touch input: touch devices and non-touch devices. Specifically, a touch gesture performed by the user on the shooting interface displayed on the touch screen of a touch device may be detected. The touch sliding track may include, but is not limited to, a straight line or a curve of any shape composed of a plurality of touch points corresponding to a plurality of consecutive touch events; this embodiment places no particular limitation on it. For example, the user may slide across the shooting interface of the host application in one direction.
The movement of the user equipment driven by the user while the shooting interface is displayed may refer to the user holding the equipment and moving it, for example shaking it, turning it over, or moving it along a specific trajectory, while the equipment shows the shooting interface.
After shooting starts, the first user equipment immediately begins recording the first video fragment, until it obtains the breakpoint suspension instruction c2, according to which it ends shooting of the current video fragment. One video fragment is thereby obtained, namely the first video fragment v1.
After obtaining one video segment, the first user equipment may start shooting of a next video segment according to the breakpoint starting instruction after obtaining the breakpoint starting instruction c 3. The interval between the breakpoint suspension command c2 and the breakpoint starting command c3 may be determined according to the shooting requirements of the user, for example, two video segments need to be completed in different shooting environments, at this time, the user inputs the breakpoint suspension command c2 to end the shooting of the previous video segment, then changes a shooting scene, and inputs the breakpoint starting command c3 after the scene change is completed to start the shooting of the next video segment.
In this embodiment, after acquiring the first video segment v1, the first user equipment may start shooting the second video segment v2 according to the breakpoint starting instruction c3. By inputting the breakpoint suspension instruction c2 and the breakpoint starting instruction c3, the user can complete shooting of the third, fourth, and further video slices in sequence, until shooting of the last video slice v_final is started.
After starting to capture the last video slice, the first user equipment may end its capture according to the shooting termination instruction c4, thereby obtaining the last video slice v_final. When all video slices of the target video have been shot, the first user equipment has obtained the target video; the relationship between the target video and the video slices it comprises may be as shown in fig. 2.
The sending portion of the target video may include sending of the video slices and sending of the slice pointers corresponding to them. A slice pointer identifies the arrangement order of a video slice in the target video to which it belongs. For example, in the initial case, the slice pointer of the first video slice v1 may be p1, indicating that it is arranged first in the target video; the slice pointer of the second video slice v2 may be p2, indicating that it is arranged second; and so on for the remaining slices. Based on the arrangement order identified by the slice pointers, any device may combine the video slices, for example according to the increasing pointer order p1, p2, p3, …, to obtain the complete target video. Therefore, even if, due to differing sending or receiving orders, the order in which the target device actually receives the video slices differs from their arrangement order in the target video, the target device can still combine the slices smoothly.
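The combination step described here can be illustrated with a short sketch (hypothetical names, not the patent's implementation): because every slice carries a pointer giving its position, the receiver simply sorts by pointer value and concatenates, regardless of arrival order.

```python
def combine_fragments(fragments, pointers):
    """Reassemble the target video from slices received in any order.

    fragments: fragment id -> fragment bytes
    pointers:  fragment id -> arrangement position (p1=1, p2=2, ...)
    """
    # Sort fragment ids by their pointer values, then concatenate
    ordered = sorted(pointers, key=pointers.get)
    return b"".join(fragments[fid] for fid in ordered)
```

Even if v2 arrives before v1, sorting by the pointers restores the intended order before joining.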
When sending the video fragments, the first user equipment can send each video fragment to the target device as soon as it is acquired, without waiting for the whole target video to finish shooting and then sending everything together. Thus, transmission of fragments that have finished shooting and shooting of subsequent fragments can proceed simultaneously. For example, after the first video fragment v1 is captured, the first user device has acquired v1 and may send it to the target device; while that fragment is being transmitted, the first user equipment can shoot subsequent fragments. By combining segmented shooting and segmented transmission, by the time shooting of the last fragment finishes, the first user equipment has already transmitted some of the other fragments; compared with the traditional mode of transmitting the whole target video after shooting, the transmission time can be greatly shortened.
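The overlap of transmission and shooting can be sketched with a background worker and a queue. This is an illustrative pattern only, not the patent's implementation: finished fragments are handed to an upload worker while capture continues on the main flow.

```python
import queue
import threading

def record_and_upload(capture, upload):
    """capture: iterable yielding finished fragments (stands in for the
    segmented shooting loop); upload: callable performing the transfer.
    Each finished fragment is uploaded in the background while the next
    one is still being shot."""
    pending = queue.Queue()
    SENTINEL = object()  # marks the end of shooting

    def worker():
        while True:
            frag = pending.get()
            if frag is SENTINEL:
                break
            upload(frag)  # transmission runs concurrently with capture

    t = threading.Thread(target=worker)
    t.start()
    for fragment in capture:    # shooting continues here...
        pending.put(fragment)   # ...while uploads drain in parallel
    pending.put(SENTINEL)
    t.join()
```

A single worker draining a FIFO queue also preserves the sending order, matching the fragment-by-fragment upload the text describes.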
As for sending the fragment pointer corresponding to a video fragment: because the data volume of a fragment pointer is much smaller than that of a video fragment, the pointer may be sent to the target device at any time after it is obtained. Once the target device has obtained the video fragments and their corresponding fragment pointers, it can combine them into the complete target video, completing practical applications of the target video such as sharing and publishing.
In some embodiments of the present application, the first user equipment may send the slice pointer for any one video slice of the target video at the same time as it sends that slice. After the first user equipment finishes shooting and acquires a video slice, it can determine the initial slice pointer of that slice; in an actual scene, initial slice pointers can be allocated to the video slices in the order in which their shooting finishes. For example, when the first video slice v1 is shot by the first user device, since v1 is the first acquired slice, it may be assigned the initial slice pointer p1, indicating that it is arranged in the first position. When the first user equipment sends the first video slice v1 of the target video, it can simultaneously send the slice pointer of v1. Fig. 1 shows the slice pointers being transmitted in this way.
In other embodiments of the present application, the first user equipment may also send a slice pointer for all video slices of the target video after determining the slice pointers of the video slices. For the first user equipment, at least after all video slices of the target video are acquired, slice pointers of all video slices can be determined.
Taking fig. 3 as an example, where the target video includes 4 video slices: after the first user equipment captures the last video slice v4, the slice pointer of the 4th video slice may be determined to be p4 according to the order in which the slices were acquired. At this point the first user equipment holds the slice pointers of all the video slices and can send them to the target device together at any time thereafter. In fig. 3, this happens when the first user equipment sends the last video slice of the target video to the target device: the slice pointers of all the video slices are sent along with that last slice.
On the target device side, the video slices and slice pointers sent by the first user device in the foregoing manner are received from the first user device. Depending on the actual application scenario, the target device may be a network device or a second user device, and after receiving the video slices and slice pointers from the first user device, it may process them in different ways.
The network device may include, but is not limited to, a network host, a single network server, multiple sets of network servers, or a collection of computers based on cloud computing. Here, the cloud is made up of a large number of hosts or network servers based on cloud computing, a type of distributed computing in which one virtual computer consists of a collection of loosely coupled computers. When the target device is a network device, the first user device uploads the shot target video to the network device, so that the network device can provide the target video to other users.
For the network device, after receiving the video fragments and fragment pointers, it may determine, according to the pointers, the arrangement order of the corresponding video fragments in the target video to which they belong, and combine the fragments into the complete target video according to that order. If the target video needs to be provided to other users, the video fragments can be combined in this way and the complete target video provided to them. For example, this may correspond to a scene of publishing and sharing a video on a UGC (User Generated Content) platform: the first user equipment is a client of the UGC platform and the network device is a server of the platform. The user publishes the video to the server through the client by means of segmented shooting and segmented uploading, and the server, after receiving the video fragments and fragment pointers, may combine them into the complete target video and provide it for other users to browse and play.
Alternatively, the solution of this embodiment may provide another mode: after receiving the video fragments and fragment pointers, the network device may merely store them, without combining them into the complete target video. The network device waits for a request from a second user device; upon receiving such a request, it sends the video fragments and fragment pointers to the second user device, which then determines, according to the fragment pointers, the arrangement order of the video fragments in the target video to which they belong and combines them into the complete target video according to that order.
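This second, store-and-forward mode can be sketched as follows (the class and method names are hypothetical): the network device keeps fragments and pointers exactly as received, and the combination is performed only by the requesting second user device.

```python
class FragmentRelay:
    """Network device in store-and-forward mode: fragments and pointers
    are stored as received, never combined on the server side."""

    def __init__(self):
        self.fragments = {}  # fragment id -> bytes
        self.pointers = {}   # fragment id -> arrangement position

    def receive(self, frag_id, data, position):
        self.fragments[frag_id] = data
        self.pointers[frag_id] = position

    def serve(self):
        # hand copies to a requesting second user device
        return dict(self.fragments), dict(self.pointers)

def assemble(fragments, pointers):
    # performed on the second user device: sort by pointer, then join
    return b"".join(fragments[f] for f in sorted(pointers, key=pointers.get))
```

The trade-off this mode illustrates: the server does less work and stores fragments verbatim, while each requesting client pays the (small) cost of sorting and joining.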
The second user device may include, but is not limited to, a computer, a mobile phone, a tablet computer, a smart watch, and other terminal devices. When the target device is a second user device, the first user device directly shares the target video to the second user device in a point-to-point manner, so that a user of the second user device can view the target video. For example, a scene may correspond to a scene in which videos are shared peer to peer between users, both a first user device and a second user device run application programs that can implement peer to peer video sharing, a user a sends a video to the second user device in a segmented shooting and segmented uploading manner through the application program on the first user device, and a user b receives video segments and segment pointers through the application program on the second user device and combines the video segments and the segment pointers into a complete target video.
In the scheme provided by this embodiment, segmented shooting and segmented uploading are adopted to shorten the waiting time, so some video fragments have already been transmitted before the user has finished shooting all the content of the target video. If the user abandons sending the target video for whatever reason, the fragments already transmitted could cause a privacy disclosure for the user. Therefore, in some embodiments of the present application, the first user equipment may further send a sending confirmation instruction or a sending cancellation instruction to the target device. After receiving the sending confirmation instruction, the target device stores the received video fragments and fragment pointers accordingly; if it receives the sending cancellation instruction, indicating that the user has abandoned sending the target video, the target device deletes the received video fragments and fragment pointers according to that instruction.
For example, in an actual scene, after the last video segment is shot, the first user equipment may prompt the user to confirm by clicking buttons such as "publish", "share", and "confirm" or performing other preset operations. If the user confirms, the first user equipment may acquire the transmission confirmation instruction, otherwise, if the user does not confirm, the first user equipment may acquire the transmission cancellation instruction.
In addition, the sending cancellation instruction and the sending confirmation instruction may also target a specified video fragment: the target device may store the received specified video fragment and its corresponding fragment pointer according to the sending confirmation instruction, or delete them according to the sending cancellation instruction. For example, both instructions may carry identification information of the video fragment; after receiving an instruction, the target device persistently stores or deletes the specified fragment and its corresponding pointer according to the identification information it carries.
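The confirm/cancel handling on the target device can be sketched as follows (hypothetical names; a simplified model, not the patent's implementation): received fragments are held as pending until a confirmation instruction persists them, while a cancellation instruction deletes the specified fragments and their pointers.

```python
class TargetStore:
    """Target-device storage with confirm/cancel semantics."""

    def __init__(self):
        self.pending = {}  # frag_id -> (data, pointer), not yet confirmed
        self.stored = {}   # persistently stored fragments

    def receive(self, frag_id, data, pointer):
        self.pending[frag_id] = (data, pointer)

    def confirm(self, frag_ids=None):
        # persist the specified fragments, or all pending if none given
        for fid in list(frag_ids if frag_ids is not None else self.pending):
            if fid in self.pending:
                self.stored[fid] = self.pending.pop(fid)

    def cancel(self, frag_ids=None):
        # delete the specified fragments and their pointers (privacy)
        for fid in list(frag_ids if frag_ids is not None else self.pending):
            self.pending.pop(fid, None)
```

Passing fragment identifiers to `confirm`/`cancel` corresponds to the instruction carrying identification information for a specified fragment; omitting them corresponds to confirming or abandoning the whole target video.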
The user may also edit the video fragments that the first user equipment has finished shooting. The editing includes editing the content within a video fragment and editing the arrangement order of the video fragments.
In some embodiments of the present application, the content of a video fragment may be edited before the fragment is transmitted, and the fragment is sent to the target device only after the content editing is completed. Thus, after acquiring any video fragment, the first user equipment may edit the fragment according to a first editing instruction and send the edited fragment to the target device. The first editing instruction instructs content editing of the video fragment; editing the content means modifying the video content presented by the fragment, which may include, but is not limited to, adding subtitles and special effects to the displayed content, clipping the fragment, and modifying the display effect.
In this scenario, after acquiring the breakpoint suspension instruction c2 and before transmitting the video fragment, the first user equipment may acquire the first editing instruction input by the user, and thus edit the content of the fragment before transmitting it. Fig. 4 shows the corresponding video processing flow. Taking the first video fragment v1 as an example, after acquiring the breakpoint suspension instruction c2 for v1, the first user equipment acquires the fragment v1; the user may then input a first editing instruction c5 in the interactive interface provided by the first user equipment, for example to add subtitles to the fragment or to cut out the content from the 2nd to the 3rd minute. After acquiring the first editing instruction c5, the first user equipment performs the content editing on v1 and, once it is complete, sends the edited fragment v1' to the target device. The first user equipment can then start shooting the second video fragment v2 according to the breakpoint starting instruction c3, and process all subsequent fragments in a similar manner. In an actual scene, if the user does not need to edit the content of a certain fragment, the user may simply not input a first editing instruction, and the subsequent steps are executed directly. For example, after acquiring a fragment, the first user equipment may ask the user whether the fragment needs editing; if the user declines, the subsequent steps are executed; otherwise an interactive interface is provided for the user to input the first editing instruction, so that the content editing is performed on the fragment.
In other embodiments of the present application, to make the entire video processing flow smoother, content editing of a video fragment may be kept relatively independent of its transmission; that is, the two need not follow a fixed order, and the user may edit the content after all fragments have been shot and uploaded. The first user equipment can perform content editing on a fragment according to the first editing instruction at any time after acquiring it, regardless of the fragment's current state (not yet transmitted, in transmission, or transmission completed).
To make the content editing of a video fragment perceivable by the target device, in the video processing method provided by this embodiment, after acquiring any video fragment the first user equipment may perform content editing on it according to a first editing instruction, acquire first editing information about the fragment, and then send the first editing information to the target device. Since the first editing information can be used to perform the same content editing on the corresponding fragment, after acquiring it the target device can process the unedited fragment according to the first editing information, thereby obtaining the edited fragment.
Fig. 5 shows the video processing flow for this scenario. Taking the first video fragment v1 as an example, after acquiring the breakpoint suspension instruction c2 for v1, the first user equipment acquires the fragment v1 and immediately sends it to the target device. After the user finishes shooting all the fragments and edits v1, the first user equipment performs content editing on the fragment according to the first editing instruction c5 input by the user, obtaining first editing information info about the fragment. For example, in this embodiment the first editing instruction c5 adds subtitles to v1, so the corresponding first editing information info consists of the added subtitle content and the timestamps aligning that content with the video. After the first user equipment sends info to the target device, the target device can perform content editing on the previously received, unedited fragment v1 according to info, thereby updating v1 into the edited fragment v1'.
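The application of first editing information on the target-device side can be sketched as follows. This is a minimal illustration under assumed representations: a fragment is modeled as plain metadata and the editing information as `(start_sec, end_sec, text)` subtitle entries; a real system would re-render or mux a subtitle track into the video stream:

```python
# Illustrative sketch: the target device applies "first editing
# information" (here, timestamped subtitles) to the unedited fragment
# it received earlier, yielding the edited fragment. The data shapes
# are assumptions made for this example.

def apply_edit_info(fragment, edit_info):
    """Return an updated copy of `fragment` with `edit_info` applied.

    edit_info is assumed to be a list of (start_sec, end_sec, text)
    subtitle entries, matching the subtitle-plus-timestamp editing
    information described for instruction c5.
    """
    updated = dict(fragment)                      # shallow copy; original untouched
    subtitles = list(updated.get("subtitles", []))
    subtitles.extend(edit_info)
    updated["subtitles"] = sorted(subtitles)      # keep entries in time order
    return updated

v1 = {"id": "v1", "duration": 180, "subtitles": []}
info = [(0, 5, "Hello"), (10, 15, "World")]
v1_edited = apply_edit_info(v1, info)
```

Because only the compact editing information travels over the network, the already-uploaded fragment does not need to be re-transmitted, which is the point of decoupling editing from transmission.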
Besides providing the target device with first editing information determined from the first editing instruction, the first user equipment can adjust the arrangement order of the video fragments according to a second editing instruction, obtain the adjusted fragment pointers, and send them to the target device. The second editing instruction instructs an adjustment of the arrangement order of the acquired fragments; after the first user equipment adjusts the order based on it, the fragment pointers corresponding to the fragments change, yielding the adjusted pointers. After the target device receives the adjusted pointers from the first user equipment, if it has previously received pointers for those fragments it may update them, so that the target video on the target device stays synchronized with the target video adjusted locally on the first user equipment.
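The second editing instruction can be sketched as a reordering that derives adjusted fragment pointers. In this illustration (names and the pointer-as-index representation are assumptions; the patent does not prescribe a pointer format), a pointer is simply a fragment's position in the new arrangement:

```python
# Minimal sketch of the second editing instruction: the user supplies a
# new arrangement of the captured fragments, and the adjusted fragment
# pointers (fragment_id -> position) are derived from it for sending to
# the target device.

def reorder_fragments(fragment_ids, new_order):
    """Return adjusted pointers for `fragment_ids` arranged as `new_order`.

    `new_order` must be a permutation of `fragment_ids`; otherwise the
    arrangement would drop or duplicate a fragment.
    """
    if sorted(new_order) != sorted(fragment_ids):
        raise ValueError("new order must be a permutation of the fragments")
    return {fid: pos for pos, fid in enumerate(new_order)}

pointers = reorder_fragments(["v1", "v2", "v3"], ["v2", "v1", "v3"])
# pointers: {"v2": 0, "v1": 1, "v3": 2}
```

Sending only this small mapping keeps the target device's copy of the target video synchronized with the locally adjusted order, without re-uploading any fragment.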
In addition to adjusting already-shot fragments through the first and second editing instructions, a fragment can also be re-shot. Thus, in the method provided by another embodiment of the present application, the first user equipment may acquire a new video fragment according to a re-shooting instruction and replace a previously acquired fragment with it. For example, after the first user equipment has shot four fragments v1 to v4, the user may find the third fragment v3 unsatisfactory; the user may then input a re-shooting instruction for v3, after which the first user equipment shoots a new fragment v3-1 and replaces the original v3 with it. The fragment pointer of the new fragment v3-1 can directly reuse the fragment pointer of the original v3.
In this scheme a fragment may be uploaded at any time after it is shot. Therefore, if the replaced fragment has not yet been sent to the target device at the time of re-shooting, the first user equipment can, after the local replacement, send the new fragment to the target device in the original manner. If the replaced fragment has already been sent, the first user equipment, in addition to performing the local replacement, sends the new fragment together with replacement information to the target device, so that the target device can substitute the new fragment for the corresponding one according to the replacement information.
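The two branches of the re-shoot flow can be sketched together. This is an illustrative model under assumed data shapes (dicts for fragments and pointers, a set of uploaded ids, and a `send` callable standing in for the network transfer); the message formats are hypothetical:

```python
# Illustrative sketch of re-shooting fragment v3 as v3-1: the new
# fragment replaces the original locally and inherits its pointer.
# If the original was already uploaded, replacement information is
# sent so the target device can swap in the new fragment too;
# otherwise the new fragment is simply uploaded in the usual way.

def reshoot(fragments, pointers, uploaded, old_id, new_id, new_data, send):
    """fragments: id -> data; pointers: id -> position; uploaded: set of ids."""
    position = pointers.pop(old_id)     # the new fragment reuses v3's pointer
    fragments.pop(old_id)
    fragments[new_id] = new_data
    pointers[new_id] = position
    if old_id in uploaded:
        # Replacement info tells the target device which fragment to swap.
        send({"replace": old_id, "with": new_id, "data": new_data})
        uploaded.discard(old_id)
        uploaded.add(new_id)
    else:
        # Not yet uploaded: send the new fragment in the original manner.
        send({"fragment": new_id, "data": new_data})
        uploaded.add(new_id)
```

Reusing the old pointer means the arrangement order is undisturbed by a re-shoot, so no second editing instruction is needed afterwards.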
Based on the same inventive concept, an embodiment of the present application further provides a video processing device; the method corresponding to the device is the video processing method of the foregoing embodiments, and the principle by which it solves the problem is similar. The device comprises a memory for storing computer program instructions and a processor for executing them, wherein the computer program instructions, when executed by the processor, trigger the device to perform the aforementioned video processing method implemented on the first-user-equipment side or on the target-device side.
Fig. 6 shows the structure of a device suitable for implementing the methods and/or technical solutions of the embodiments of the present application. The device 600 includes a Central Processing Unit (CPU) 601, which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage portion 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores the various programs and data necessary for system operation. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604; an Input/Output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, a touch screen, a microphone, an infrared sensor, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), Liquid Crystal Display (LCD), LED display, or OLED display, and a speaker; a storage portion 608 including one or more computer-readable media such as a hard disk, optical disk, magnetic disk, or semiconductor memory; and a communication portion 609 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 609 performs communication processing via a network such as the Internet.
In particular, the methods and/or embodiments of the present application may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. When executed by the Central Processing Unit (CPU) 601, the computer program performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer-readable medium carries one or more computer-readable instructions executable by a processor to implement the methods and/or aspects of the embodiments of the present application as described above.
To sum up, in the video processing scheme provided by the embodiments of the present application, the first user equipment sequentially acquires a plurality of video fragments by segmented shooting, where each fragment is a part of a target video and the fragments together form the complete target video. After acquiring any fragment, the first user equipment can send it to the target device, thereby implementing segmented uploading: transmission of already-shot fragments proceeds simultaneously with shooting of subsequent fragments, so there is no need to wait for the whole target video to be sent after shooting is completed, which avoids an excessively long wait caused by a long transmission time. Meanwhile, the first user equipment sends fragment pointers for the video fragments to the target device, so that the fragments can be combined into the complete target video according to their arrangement order.
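The final merge step of the summarized scheme can be sketched as follows. This is an illustrative reduction (fragments as byte strings, pointers as positions, both assumed representations): once every fragment and its pointer have arrived, regardless of arrival order, the receiver assembles the complete target video:

```python
# Minimal sketch: merge fragments that arrived out of order into the
# complete target video using their fragment pointers. A real receiver
# would concatenate media segments rather than raw bytes.

def merge_target_video(fragments, pointers):
    """fragments: id -> bytes; pointers: id -> position in the target video."""
    ordered = sorted(fragments, key=lambda fid: pointers[fid])
    return b"".join(fragments[fid] for fid in ordered)

video = merge_target_video(
    {"v2": b"BB", "v1": b"AA", "v3": b"CC"},   # arrival order differs
    {"v1": 0, "v2": 1, "v3": 2},               # pointers fix the arrangement
)
# video == b"AABBCC"
```

The same routine serves either variant of the scheme: the target device may merge the fragments itself, or a network device may forward fragments and pointers so that a second user equipment performs the merge.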
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In some embodiments, the software programs of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (16)

1. A video processing method, wherein the method comprises:
the method comprises the steps that first user equipment obtains a target video comprising a plurality of video fragments, wherein the video fragments are sequentially obtained in a segmented shooting mode;
after the first user equipment acquires any one video fragment, the video fragment is sent to target equipment;
the first user equipment sends a fragment pointer related to the video fragments to the target equipment, wherein the fragment pointer is used for identifying the arrangement sequence of the video fragments in the target video to which the video fragments belong so as to combine the video fragments into a complete target video according to the arrangement sequence.
2. The method of claim 1, wherein the first user equipment acquires a target video comprising a plurality of video slices, the video slices being sequentially acquired by means of segment shooting, comprising:
the first user equipment starts shooting of a first video fragment according to the shooting starting instruction;
the first user equipment finishes shooting of the current video fragment according to the breakpoint suspension instruction to obtain a video fragment;
the first user equipment starts shooting of the next video fragment according to the breakpoint starting instruction;
and the first user equipment finishes shooting of the last video fragment according to the shooting termination instruction, and acquires a target video comprising a plurality of video fragments.
3. The method of claim 1, wherein the first user equipment sends the video fragment to a target device after acquiring any one video fragment, and the method comprises:
after acquiring any one video fragment, the first user equipment edits the video fragment according to a first editing instruction and sends the edited video fragment to target equipment.
4. The method of claim 1, wherein the method further comprises:
after acquiring any one video fragment, the first user equipment carries out content editing processing on the video fragment according to a first editing instruction to acquire first editing information about the video fragment;
and the first user equipment sends the first editing information to target equipment, and the first editing information is used for carrying out content editing processing on the corresponding video fragment so as to obtain the edited video fragment.
5. The method of claim 1, wherein the method further comprises:
the first user equipment adjusts the arrangement sequence of the video fragments according to a second editing instruction to obtain an adjusted fragment pointer;
and the first user equipment sends the adjusted fragment pointer to the target equipment.
6. The method of claim 1, wherein the method further comprises:
the first user equipment acquires a new video fragment according to a re-shooting instruction, and replaces the acquired video fragment of the first user equipment with the new video fragment;
and if the replaced video fragment is sent to the target equipment, the first user equipment sends the new video fragment and the replacement information to the target equipment, so that the target equipment replaces the corresponding video fragment with the new video fragment according to the replacement information.
7. The method of claim 1, wherein the first user device sending a slice pointer for a video slice to the target device comprises:
when the first user equipment sends any video fragment of a target video to the target equipment, sending a fragment pointer related to the video fragment; or
The first user equipment sends the fragment pointers of the video fragments after determining the fragment pointers of all the video fragments of the target video.
8. The method of claim 1, wherein the method further comprises:
the first user equipment sends a sending confirmation instruction or a sending cancellation instruction to target equipment, so that the target equipment stores the received video fragments and the fragment pointers according to the sending confirmation instruction or deletes the received video fragments and the fragment pointers according to the sending cancellation instruction.
9. The method of claim 1, wherein the target device is a second user device or a network device.
10. A video processing method, wherein the method comprises:
the target device receives video fragments and a fragment pointer from first user equipment, the video fragments are sequentially obtained by the first user equipment in a segmented shooting mode, any one video fragment is sent to the target device after being obtained by the first user equipment, and the fragment pointer is used for identifying the arrangement sequence of the video fragments in the target video to which the video fragments belong so as to combine the video fragments into a complete target video according to the arrangement sequence.
11. The method of claim 10, wherein the method further comprises:
the target equipment determines the arrangement sequence of the corresponding video fragments in the target video to which the video fragments belong according to the fragment pointer;
and the target equipment combines the video fragments into a complete target video according to the arrangement sequence.
12. The method of claim 10, wherein the target device is a network device;
the network equipment sends the video fragments and the fragment pointers to second user equipment according to a request of the second user equipment, so that the second user equipment determines the arrangement sequence of the corresponding video fragments in the target video to which the video fragments belong according to the fragment pointers, and combines the video fragments into a complete target video according to the arrangement sequence.
13. The method of claim 10, wherein the method further comprises:
the target device receives first editing information from first user equipment;
and the target equipment carries out content editing processing on the corresponding video fragment according to the first editing information so as to obtain the edited video fragment.
14. The method of claim 10, wherein the method further comprises:
the target device receives a sending confirmation instruction or a sending cancellation instruction from first user equipment;
and the target equipment stores the received video fragments and the fragment pointers according to the sending confirmation instruction, or deletes the received video fragments and the fragment pointers according to the sending cancellation instruction.
15. A video processing apparatus comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the method of any of claims 1 to 14.
16. A computer readable medium having stored thereon computer program instructions executable by a processor to implement the method of any one of claims 1 to 14.
CN202010183770.XA 2020-03-16 2020-03-16 Video processing method, apparatus and computer readable medium Active CN111314793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183770.XA CN111314793B (en) 2020-03-16 2020-03-16 Video processing method, apparatus and computer readable medium


Publications (2)

Publication Number Publication Date
CN111314793A true CN111314793A (en) 2020-06-19
CN111314793B CN111314793B (en) 2022-03-18

Family

ID=71160666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183770.XA Active CN111314793B (en) 2020-03-16 2020-03-16 Video processing method, apparatus and computer readable medium

Country Status (1)

Country Link
CN (1) CN111314793B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030206190A1 (en) * 2000-03-31 2003-11-06 Matsushita Electric Industrial Co., Ltd. Data editing system and data editing-side unit
CN104823453A (en) * 2012-10-05 2015-08-05 谷歌公司 Stitching videos into aggregate video
CN105338368A (en) * 2015-11-02 2016-02-17 腾讯科技(北京)有限公司 Method, device and system for converting live stream of video into on-demand data
CN105611429A (en) * 2016-02-04 2016-05-25 北京金山安全软件有限公司 Video file backup method and device and electronic equipment
US20160212452A1 (en) * 2015-01-16 2016-07-21 Fujitsu Limited Video transmission method and video transmission apparatus
CN106254776A (en) * 2016-08-22 2016-12-21 北京金山安全软件有限公司 Video processing method and device and electronic equipment
CN107613235A (en) * 2017-09-25 2018-01-19 北京达佳互联信息技术有限公司 video recording method and device
CN109275028A (en) * 2018-09-30 2019-01-25 北京微播视界科技有限公司 Video acquiring method, device, terminal and medium
CN110383820A (en) * 2018-05-07 2019-10-25 深圳市大疆创新科技有限公司 Method for processing video frequency, system, the system of terminal device, movable fixture


Also Published As

Publication number Publication date
CN111314793B (en) 2022-03-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant