CN108200477B - Method, device and equipment for generating and playing video file - Google Patents


Info

Publication number
CN108200477B
CN108200477B (application CN201711232812.9A)
Authority
CN
China
Prior art keywords
video
shooting
mode information
shooting mode
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711232812.9A
Other languages
Chinese (zh)
Other versions
CN108200477A (en)
Inventor
范云飞
刘托
杨建军
叶明�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuandu Internet Technology Co ltd
Original Assignee
Beijing Yuandu Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuandu Internet Technology Co ltd filed Critical Beijing Yuandu Internet Technology Co ltd
Publication of CN108200477A publication Critical patent/CN108200477A/en
Application granted granted Critical
Publication of CN108200477B publication Critical patent/CN108200477B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the invention provide methods and devices for generating and playing a video file, applied to the field of video playback control. The video file playing method comprises the following steps: playing a video; when an operation request of a user on the played video is detected, obtaining video shooting mode information corresponding to the video from the video file to which the video belongs, where the operation request is either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information; obtaining, according to the video shooting mode information, a target frame in the video file corresponding to the operation request; and playing the obtained target frame. In this way the user obtains higher-definition video when zooming in or out, and can achieve a realistic playback experience of switching between close-up and distant views through a simple operation request to the video file playing device, which greatly improves usability.

Description

Method, device and equipment for generating and playing video file
Technical Field
The invention relates to the field of video processing, in particular to a method, a device and equipment for generating and playing a video file.
Background
In the prior art, when a user views a picture or a video preview screen on an image display device, the user may request a zoom-in or zoom-out operation on it. For example, for a displayed picture, when a zoom-in or zoom-out operation request from the user is received, the image display device enlarges or reduces the picture accordingly. Enlarging or reducing a picture operates on its pixels: enlargement is mainly realized by interpolation, and reduction mainly by down-sampling. Interpolating or down-sampling the pixels of a picture therefore degrades its quality. For a video preview screen, the preview can likewise be enlarged or reduced by a zoom operation. However, these image scaling methods apply only to a picture or a video preview screen; they cannot perform a zoom-in or zoom-out operation on a video being played.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a device for generating and playing a video file, which aim to address the above problem.
To achieve this object, the invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a video file generating apparatus, where the apparatus includes:
the video information acquisition module is used for acquiring a video shot by a video shooting device and video shooting mode information corresponding to the video, wherein the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
and the video file generating module is used for generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
In a second aspect, an embodiment of the present invention provides an unmanned aerial vehicle including a video shooting device, a memory, and a processor. The video shooting device is configured to shoot a video according to set video shooting mode information, where the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information; the memory is configured to store the video captured by the video shooting device and the video shooting mode information;
the processor is configured to obtain the video and the video shooting mode information, and generate an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
In a third aspect, a video file generation method provided in an embodiment of the present invention is applied to an unmanned aerial vehicle, and generates a video file for a video shot by a video shooting device on the unmanned aerial vehicle, where the method includes:
acquiring a video shot by the video shooting device and video shooting mode information corresponding to the video, wherein the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
and generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
In a fourth aspect, an embodiment of the present invention provides a video file playing apparatus, including:
the playing module is used for playing the video;
the video shooting mode information acquisition module is used for acquiring, when an operation request of a user on the played video is detected, video shooting mode information corresponding to the video from the video file to which the video belongs, wherein the operation request is either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
a target frame acquiring module, configured to acquire a target frame corresponding to the operation request from the video file according to the video shooting mode information;
the playing module is further configured to play the target frame acquired by the target frame acquiring module.
In a fifth aspect, a terminal device provided in an embodiment of the present invention includes the above video file playing apparatus.
In a sixth aspect, a method for playing a video file according to an embodiment of the present invention includes:
playing the video;
when an operation request of a user on the played video is detected, obtaining video shooting mode information corresponding to the video from the video file to which the video belongs, wherein the operation request is either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
acquiring a target frame corresponding to the operation request in the video file according to the video shooting mode information;
and playing the acquired target frame.
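The playing method above can be sketched in code. This is a hypothetical illustration only, not the patent's implementation: the frame representation, the function name `select_target_frame`, and the fixed span of one frame are all assumptions. The key idea it demonstrates is that, because each frame carries shooting mode information, a zoom request can be answered by jumping to a frame that was physically shot nearer to or farther from the subject instead of interpolating pixels.

```python
# Hypothetical sketch of target-frame selection for a zoom request.
# Each frame is a (frame_data, mode) tuple tagged with the shooting
# mode information of the segment it belongs to.

NEAR_TO_FAR = "near_to_far"
FAR_TO_NEAR = "far_to_near"

def select_target_frame(frames, current_index, request, span=1):
    """Pick the frame index to jump to for a zoom-in/zoom-out request.

    frames: list of (frame_data, mode) tuples.
    current_index: index of the frame currently displayed.
    request: "zoom_in" or "zoom_out".
    span: how many frames to move per request (illustrative span value).
    """
    _, mode = frames[current_index]
    # In a near-to-far segment, earlier frames were shot closer to the
    # subject, so zooming in moves backward; in a far-to-near segment,
    # the opposite holds.
    if mode == NEAR_TO_FAR:
        step = -span if request == "zoom_in" else span
    else:  # FAR_TO_NEAR
        step = span if request == "zoom_in" else -span
    # Clamp to the valid frame range.
    return max(0, min(len(frames) - 1, current_index + step))
```

Because the target frame is real footage rather than an interpolated enlargement, the played result keeps full capture definition, which is the advantage the embodiments claim over pixel-level scaling.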
In a seventh aspect, a video file generating apparatus provided in an embodiment of the present invention includes:
the video shooting method comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is used for obtaining an initial video file, the initial video file comprises a video and video shooting mode information corresponding to the video, and the video shooting mode information is at least one of far-to-near mode information and near-to-far mode information;
the second acquisition module is used for acquiring a selected video frame of the video;
the adding module is used for adding corresponding video shooting mode information for the selected video frame;
and the video file generation module is used for ordering the selected video frames with the added video shooting mode information to generate a secondary video file.
In an eighth aspect, an embodiment of the present invention provides a terminal device, which includes the video file generating apparatus described above.
In a ninth aspect, a video file generating method provided in an embodiment of the present invention is applied to a mobile terminal, and the method includes:
acquiring an initial video file, wherein the initial video file comprises a video and video shooting mode information corresponding to the video, and the video shooting mode information comprises at least one of far-to-near mode information and near-to-far mode information;
acquiring a selected video frame of the video;
adding corresponding video shooting mode information to the selected video frame;
and ordering the selected video frames with the added video shooting mode information to generate a secondary video file.
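The three steps of the ninth-aspect method can be sketched as follows. All names here are illustrative assumptions (the patent does not specify a file layout): the initial file is modeled as a dictionary of timestamped frames plus a list of (start, end, mode) segments, which is one plausible encoding of the mode information described above.

```python
# Hypothetical sketch of secondary video file generation: obtain selected
# frames, tag each with its segment's shooting mode information, and
# order them to form the secondary file.

def generate_secondary_file(initial_file, selected_timestamps):
    """initial_file: dict with "frames" (list of dicts holding "t" and
    "data") and "mode_info" (list of (start_t, end_t, mode) segments).
    selected_timestamps: set of frame timestamps chosen by the user."""
    def mode_at(t):
        # Look up which shooting-mode segment the timestamp falls in.
        for start, end, mode in initial_file["mode_info"]:
            if start <= t < end:
                return mode
        return None

    selected = [f for f in initial_file["frames"]
                if f["t"] in selected_timestamps]
    # Add the corresponding video shooting mode information to each
    # selected video frame.
    tagged = [dict(f, mode=mode_at(f["t"])) for f in selected]
    # Order the tagged frames to generate the secondary video file.
    tagged.sort(key=lambda f: f["t"])
    return {"frames": tagged}
```

The ordering step matters because the selected frames may be gathered out of sequence; sorting by capture time keeps the secondary file playable as a coherent video.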
The video file generation method, apparatus and device provided by the embodiments of the invention can generate a video file for a video according to the video shooting mode information, the video file including video shooting mode information that indicates whether the video was shot from near to far or from far to near. This makes it convenient, when the video file is played, to select and switch between distant-view and close-view video frames according to the user's needs, providing the user with a zoom-in and zoom-out experience while preserving playback definition.
With the video file playing method, apparatus and device provided by the embodiments of the invention, the target frame to be played can be obtained according to the operation trend corresponding to the user's operation request and the span value corresponding to that trend. The user therefore obtains higher-definition video when zooming in or out, and can achieve a realistic playback experience of switching between close-up and distant views through an operation request to the playing device, which greatly improves usability, makes video playback more engaging, and enhances user interaction.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of interaction between an unmanned aerial vehicle, a first mobile terminal and a second mobile terminal provided in an embodiment of the present invention;
FIG. 2 is a block diagram of a mobile device provided by an embodiment of the present invention;
fig. 3 is a block diagram of a drone provided by an embodiment of the present invention;
fig. 4 is a flowchart illustrating steps of a video file generation method according to a first embodiment of the present invention;
fig. 5 is a flowchart of a video file shooting method according to a first embodiment of the invention;
fig. 6 is a flowchart illustrating steps of a video file generation method according to a second embodiment of the present invention;
fig. 7 is a flowchart illustrating steps of a video file playing method according to a third embodiment of the present invention;
fig. 8 is a functional block diagram of a first video file generating apparatus according to a fourth embodiment of the present invention;
fig. 9 is a functional block diagram of a second video file generating apparatus according to a fifth embodiment of the present invention;
fig. 10 is a functional block diagram of a video file playback apparatus according to a sixth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention; the following detailed description is therefore not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The following embodiments of the present invention mainly include: a first video file generation method and apparatus, applicable to video shooting equipment with a movable video shooting function, such as an unmanned aerial vehicle; a second video file generation method and apparatus, applicable to equipment with video file processing capability, such as an intelligent mobile terminal; and a video file playing method and apparatus, applicable to equipment with a video file playing function, such as an intelligent mobile terminal.
Fig. 1 is a schematic diagram of an unmanned aerial vehicle 200, to which the first video file generation apparatus provided by the embodiment of the present invention is applied, interacting with a first mobile terminal 110 to which the second video file generation apparatus is applied and a second mobile terminal 120 to which the video file playing apparatus is applied. The drone 200 is communicatively connected with the first mobile terminal 110 and the second mobile terminal 120 through a network to perform data communication or interaction.
The first mobile terminal 110 and the second mobile terminal 120 may be different mobile devices, or may be the same mobile device including the second video file generating apparatus and the video file playing apparatus. The mobile device 100 may be a Personal Computer (PC), a tablet PC, a smart phone, a Personal Digital Assistant (PDA), or the like.
Fig. 2 is a block diagram of the mobile device. The mobile device 100 includes a second video file generating device/video file playing device, a touch display 101, a memory 102, a storage controller 103, a processor 104, a peripheral interface 105, an input/output unit 106, and the like.
The touch display 101, the memory 102, the storage controller 103, the processor 104, the peripheral interface 105, and the input/output unit 106 are electrically connected to each other, directly or indirectly, to implement data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The second video file generating apparatus/video file playing apparatus includes at least one software or firmware module that can be stored in the memory 102. The processor 104 is configured to execute executable modules stored in the memory 102, such as the software functional modules or computer programs included in the second video file generating apparatus or the video file playing apparatus.
The memory 102 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like. The memory 102 is used for storing a program, and the processor 104 executes the program after receiving an execution instruction. A method executed by the mobile device 100 as defined in any process disclosed in an embodiment of the present invention may be applied to, or implemented by, the processor 104.
The processor 104 may be an integrated circuit chip having signal processing capabilities. The processor 104 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which can implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The peripheral interface 105 couples the various input/output units 106 to the processor 104 and the memory 102. In some embodiments, the peripheral interface, the processor, and the storage controller may be implemented in a single chip. In other embodiments, they may be implemented as separate chips.
The input and output unit 106 is used for providing input data for a user to realize the interaction between the user and the unmanned aerial vehicle. The input/output unit may be, but is not limited to, a touch screen, a mouse, a keyboard, and the like, and is configured to output a corresponding signal in response to a user operation.
The touch display 101 provides an interactive interface (e.g., a user interface) for a user, or displays image data for the user's reference. In this embodiment, the touch display may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning that the touch display can sense touch operations generated simultaneously at one or more positions on it and pass the sensed touch operations to the processor for calculation and processing.
Referring to fig. 3, the drone 200 may include a video shooting device 201, a memory 202, and a processor 203, which are electrically connected to each other, directly or indirectly, to enable transmission or interaction of data. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The video shooting device 201 is used for performing video image acquisition during the drone's flight; it can shoot a video according to the set video shooting mode information, where the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information. The memory 202 is configured to store the video captured by the video shooting device and the video shooting mode information. The processor 203 is configured to obtain the video and the video shooting mode information, and to generate an initial video file containing the video shooting mode information for the video according to the video shooting mode information. When the video includes two video shooting modes, the video shooting mode information comprises the video shooting mode identifiers and the shooting duration and shooting order corresponding to each identifier; in this case, the processor 203 is configured to set segmentation identifiers for the video according to the shooting order and the shooting durations, and to add the corresponding video shooting mode identifier to each video segment after the segmentation identifiers are set.
The memory 202 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like. The memory 202 is configured to store a plurality of instructions suitable for being loaded and executed by the processor 203. The processor 203 executes the instructions after obtaining them; in this embodiment, the processor 203 may execute the instructions corresponding to each step of the methods described in the following embodiments to complete the functions of each step.
The processor 203 may be an integrated circuit chip having signal processing capabilities. The processor 203 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a voice processor, a video processor, and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which can implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor 203 may be any conventional processor.
It is understood that the drone 200 may also include a storage controller (not shown) between the memory 202 and the processor 203, and a peripheral interface (not shown), similar to the arrangement shown in fig. 2.
Referring to fig. 4, which is a flowchart of the steps of a video file generation method according to a first embodiment of the present invention, the method is applied to the drone 200 shown in fig. 1 to generate an initial video file for video shot by the video shooting device on the drone. The steps shown in fig. 4 are explained in detail below.
Step S401, acquiring a video shot by a video shooting device and video shooting mode information corresponding to the video.
A plurality of shooting modes are preset in the video shooting device of the unmanned aerial vehicle, and the video shooting operation is carried out according to the shooting mode indicated by the user. The video shooting mode includes at least one of near-to-far and far-to-near. In one embodiment, the video shooting mode may be any combination of the two shooting modes, such as first near-to-far and then far-to-near, or first far-to-near and then near-to-far.
The video shooting mode information may include a video shooting mode identifier, which may be a near-to-far identifier or a far-to-near identifier. When the video includes two video shooting modes, the video shooting mode information may further include the shooting duration and shooting order corresponding to each video shooting mode identifier. The shooting duration is the duration for which each video shooting mode is used, and the shooting order is the order in which the video shooting modes are switched, for example controlling the drone to shoot first from near to far and then from far to near, or first from far to near and then from near to far.
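One possible representation of this video shooting mode information is sketched below. The record layout and the function name are illustrative assumptions, not the patent's format; the sketch simply shows a mode identifier paired with a shooting duration and an order field for multi-mode videos.

```python
# Hypothetical encoding of video shooting mode information: each entry
# holds a mode identifier, the shooting duration of that mode, and its
# position in the shooting order.

NEAR_TO_FAR = "near_to_far"
FAR_TO_NEAR = "far_to_near"

def make_mode_info(segments):
    """segments: list of (mode_identifier, duration_seconds) tuples in
    shooting order; returns the shooting-mode information records."""
    return [{"mode": mode, "duration": duration, "order": i}
            for i, (mode, duration) in enumerate(segments)]
```

For a single-mode video the list has one entry and the order field is redundant; for a two-mode video, as described above, the order and durations determine how the modes are switched during shooting.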
Before the video shooting device of the unmanned aerial vehicle shoots a video, video shooting mode indication information indicating the relevant video shooting mode can be acquired. The video shooting mode indication information can be generated by the flight controller of the unmanned aerial vehicle according to a preset control program, or can be sent by a user to the video file generation device of the unmanned aerial vehicle through a drone control terminal.
Referring to fig. 5, a specific implementation process for controlling the video shooting device to shoot a video according to the video shooting mode indication information is described in detail below.
Step S501, acquiring the current shooting sequence.
The shooting order of the current shooting task to be executed by the video shooting device of the unmanned aerial vehicle may be shooting outward from a close view to a distant view, shooting inward from a distant view to a close view, or a combination of two or more of these shooting modes.
And step S502, acquiring the current video shooting mode identification according to the shooting sequence.
After the shooting order is obtained, the current shooting mode is determined from it. In one embodiment, if the current shooting order is detected to be from a close view to a distant view, the corresponding shooting mode is near-to-far, and the corresponding video shooting mode identifier is the near-to-far identifier. If the current shooting order is detected to be extending from a close view to a distant view and then returning to a close view, the corresponding shooting modes are near-to-far followed by far-to-near, and the corresponding video shooting mode identifiers are the near-to-far identifier and the far-to-near identifier.
Step S503, acquiring the shooting duration corresponding to the current video shooting mode identifier according to the current video shooting mode identifier.
The video file generation device is preset with shooting durations corresponding to the different video shooting mode identifiers; that is, a shooting duration matched with each shooting mode identifier can be set. After the video shooting mode identifier of the current shooting operation is obtained as described above, the shooting duration corresponding to that identifier is obtained.
And step S504, controlling the video shooting device to shoot the video according to the shooting mode corresponding to the current shooting mode identification.
After the relevant video shooting mode information corresponding to the shooting task is obtained, the video shooting device is controlled to perform the video shooting operation according to the shooting mode corresponding to the current shooting mode identifier, and video information is acquired.
And step S505, when the shooting duration is detected to reach the corresponding shooting duration, switching the shooting mode.
Each shooting mode corresponds to a different shooting duration. While the video shooting device is shooting according to the shooting mode corresponding to the current shooting mode identifier, the elapsed time of the video shooting operation in that mode is monitored. When the elapsed time is detected to have reached the corresponding shooting duration, the video shooting operation in the current shooting mode is stopped and a new shooting mode is switched to, in which the video shooting operation continues. If all preset shooting modes have been completed, the video shooting operation is terminated.
In one embodiment, the video shooting mode information includes a first video shooting mode of shooting from near to far with a shooting duration of 1 min, and a second video shooting mode of shooting from far to near with a shooting duration of 2 min. When the video shooting device is controlled to shoot according to this video shooting mode information, the drone is first controlled to shoot a 1 min video from near to far, and then controlled to shoot a 2 min video from far to near. The video acquired in this step thus includes the close-view and distant-view video frames shot in the first mode and the distant-view and close-view video frames shot in the second mode, together with the first video shooting mode and its corresponding shooting duration and the second video shooting mode and its corresponding shooting duration.
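The control flow of steps S501 through S505 can be sketched as follows. The camera callback and all names are hypothetical; the sketch models time in whole seconds for simplicity. It shows the essential loop: work through the shooting order, shoot in each mode until that mode's preset duration is reached, then switch, and terminate once every preset mode has been completed.

```python
# Simplified, hypothetical sketch of the shooting control flow of
# fig. 5 (steps S501-S505).

def run_shooting_task(mode_info, shoot_one_second):
    """mode_info: list of {"mode", "duration", "order"} records;
    shoot_one_second(mode): callback that captures one second of video
    in the given mode. Returns a log of (mode, seconds_shot) pairs."""
    log = []
    # S501/S502: obtain the shooting order and the mode identifiers.
    for entry in sorted(mode_info, key=lambda e: e.get("order", 0)):
        elapsed = 0
        # S503/S504: shoot in the current mode for its preset duration.
        while elapsed < entry["duration"]:
            shoot_one_second(entry["mode"])
            elapsed += 1
        # S505: duration reached, record it and switch to the next mode.
        log.append((entry["mode"], elapsed))
    # All preset modes completed: the shooting operation terminates.
    return log
```

With the 1 min / 2 min example above, the loop would shoot 60 seconds near-to-far and then 120 seconds far-to-near before terminating.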
The above process for controlling the video shooting device is provided as an example; the present application is not limited thereto. As can be seen from the above description, the video shot by the video shooting device has corresponding video shooting mode information, so step S401 can obtain the video shooting mode information corresponding to the video at the same time as the video itself.
Step S402, generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
Generating an initial video file containing the video shooting mode information for the video means adding information such as the shooting mode identifier and shooting duration corresponding to the mode in which the video was shot into the video, thereby generating the initial video file.
In another embodiment, i.e., in the case where a plurality of video shooting mode identifiers are included in the same video, generating the initial video file containing the video shooting mode information according to that information includes: setting segment identifiers for the video according to the shooting order and shooting durations; and adding the corresponding video shooting mode identifier to each video segment after the segment identifiers are set.
A single video shooting task may include at least two modes, for example near-to-far followed by far-to-near: the video shooting device of the unmanned aerial vehicle first shoots while flying away from the ground and then shoots while returning to it, so the corresponding shooting mode is first near-to-far and then far-to-near. In this case, the video file generation device in the unmanned aerial vehicle may store the captured video, add segment identifiers at each change of shooting mode, and add the corresponding video shooting mode identifier to each resulting video segment, so as to record the shooting mode of each segment.
In one embodiment, video captured while flying outward (near to far) is divided into a first video segment, and video captured while returning (far to near) is divided into a second video segment. A near-to-far shooting mode identifier is added to the first video segment, and a far-to-near shooting mode identifier is added to the second video segment. After the segment identifiers and shooting mode identifiers have been added, the initial video file is generated from the segmented, identifier-annotated video.
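The segmentation step can be sketched as splitting the frame sequence by shooting order and duration and attaching a segment identifier plus mode identifier to each piece. The field names and the fixed frame rate below are illustrative assumptions, not the patent's actual file format.

```python
# Minimal sketch of setting segment identifiers by shooting order/duration
# and tagging each segment with its shooting-mode identifier.
def segment_video(frames, mode_schedule, fps=30):
    """Split `frames` into segments following `mode_schedule`,
    a list of (mode_id, duration_seconds) in shooting order."""
    segments = []
    start = 0
    for seg_id, (mode_id, duration_s) in enumerate(mode_schedule):
        n = duration_s * fps
        segments.append({
            "segment_id": seg_id,      # segment identifier
            "mode_id": mode_id,        # video shooting mode identifier
            "frames": frames[start:start + n],
        })
        start += n
    return segments

frames = list(range(90 * 30))               # 90 s of video at 30 fps
schedule = [("near_to_far", 60), ("far_to_near", 30)]
segs = segment_video(frames, schedule)
```

The initial video file would then be generated from `segs`, with each segment carrying its own shooting-mode identifier.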
After the user obtains the initial video file through the mobile terminal, video playing can be controlled according to the video shooting mode information contained in the file. In one embodiment, after the video file is obtained and the video pictures it contains are decoded, the user inputs an operation request at the mobile terminal for the video picture to be played. The operation request may be a zoom-in operation request or a zoom-out operation request, i.e. a request to enlarge or reduce the scene in the currently played video frame. After acquiring the user's operation request, the mobile terminal obtains the target frame corresponding to the request from the video file according to the request and the video shooting mode information. For example, if the user inputs a zoom-in operation request, a close-view video frame whose scene is larger than that of the currently played video is searched for, and the found close-view video is played as the target video. Correspondingly, if the user inputs a zoom-out operation request, a long-view video frame whose scene is smaller than that of the currently played video is searched for, and the found long-view video is played as the target video.
According to the video file generation method provided by this embodiment of the invention, the video file generation device in the unmanned aerial vehicle can generate, from the acquired video shooting mode information, an initial video file containing that information, so that a user viewing the file generated by the unmanned aerial vehicle can select and switch between long-view and close-view picture frames. Videos of the shooting scene at different distances are thus provided to the user while video playing definition is maintained, which is convenient for the user.
Referring to fig. 6, a flowchart of a video file generation method applied to the second mobile terminal 110 according to a second embodiment of the present invention is shown. The second video file generation device to which this method applies may preferably be a video file generation module disposed in the mobile terminal; it is connected to the unmanned aerial vehicle to obtain the initial video file generated by the unmanned aerial vehicle and performs corresponding processing on that file. The process shown in fig. 6 is explained in detail below.
Step S601, an initial video file is acquired.
The initial video file is generated by the unmanned aerial vehicle for the acquired video according to the video shooting mode information, as in the above embodiment. The initial video file includes the video and its corresponding video shooting mode information, and the video shooting mode information includes at least one of far-to-near mode information and near-to-far mode information.
The second video file generation device may obtain the initial video file in a variety of ways: for example, directly from a cache module in the drone control device in which the second video file generation device resides, by establishing a communication connection with the drone, or from a transfer server on which the initial video file is stored.
Step S602, acquiring selected video frames of the video, that is, selected video frames of the video included in the video file.
After all video frames of the initial video file are obtained, the selected video frames are obtained according to a preset processing rule for subsequent processing. The preset processing rule may be single frame skipping, double frame skipping, and the like. The premise for obtaining the selected video frames according to the preset processing rule is that adjacent frames among the selected video frames still satisfy the continuity that the human eye expects of video, so that when too many frames are skipped, the user does not perceive discontinuity with the naked eye during subsequent playing, which would harm the user experience.
In one embodiment, the video frames of the initial video file are numbered 1, 2, 3, ..., 98, 99, 100. The preset processing rule is triple frame skipping, that is, three frames are skipped after each retained frame, and the resulting selected video frames are: 1, 5, 9, ..., 93, 97.
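The frame-selection rule above amounts to slicing the frame sequence with a fixed skip factor while keeping the original order. The skip factor is an assumption; the patent only requires that adjacent selected frames still look continuous to the eye.

```python
# Sketch of selecting video frames by a fixed skip factor while keeping
# their original order (the "preset processing rule" above).
def select_frames(frame_indices, skip):
    """Keep one frame, then skip `skip` frames, repeatedly."""
    return frame_indices[:: skip + 1]

frames = list(range(1, 101))         # frames numbered 1..100
selected = select_frames(frames, 3)  # triple skip: 1, 5, 9, ..., 97
```

Because slicing preserves order, the sorting in step S604 reduces to keeping the frames in their initial sequence from the initial video file.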
The video pictures are stored in the initial video file in a compressed format, so after the selected video frames are obtained, they must be decoded by a decoder to obtain the video picture contained in each selected frame. Therefore, this embodiment of the application further includes, after acquiring the selected video frames of the video: decoding the selected video frames to obtain the video pictures corresponding to them. Adding corresponding video shooting mode information to a selected video frame then means adding that information to the video picture decoded from the selected video frame.
Step S603, adding corresponding video shooting mode information to the selected video frames.
After all the selected video frames are acquired, corresponding video shooting mode information is added to the selected video frames of each video shooting mode, so that the selected frames of each mode correspond to the same video shooting mode identifier.
In one embodiment, an initial video file includes a plurality of video shooting modes, and the video it contains includes a plurality of video segments divided by segment identifiers, each segment corresponding to one video shooting mode. When video shooting mode information is added to a selected video frame, the segment identifier of the video segment corresponding to each shooting mode can also be added. That is, in the case where the video shooting mode information includes both the video shooting mode identifier and the segment identifier, adding corresponding video shooting mode information to the selected video frame comprises: adding the video shooting mode identifier and the segment identifier to the selected video frame.
Step S604, sorting the selected video frames to which the video shooting mode information has been added, and generating a secondary video file.
The selected video frames to which the video shooting mode information has been added are sorted according to their initial order in the initial video file. That is, the step of sorting the selected video frames to which the video shooting mode information has been added includes: acquiring the initial order of the selected video frames in the initial video file, and sorting the selected video frames according to that order.
In other embodiments, all video frames of the video in the initial video file may first be decoded to obtain all video pictures, and then a part of those video pictures may be selected as the selected video frames according to the preset processing rule, so as to generate the secondary video file.
On the basis of the above embodiment, the secondary video file produced by the second video file generation device is named and uploaded to a specific folder on the server through a specific uplink interface, and the folder name is reported to the server, so that the server maps a downlink interface address from the folder name and other terminal devices can obtain the video playing file through that address. The video playing files acquired by other terminals therefore also contain the video shooting mode information.
The video file generation method provided by this embodiment of the invention applies a secondary processing operation to the initial video file generated by the unmanned aerial vehicle. Selected video frames are obtained from the acquired initial video file, the corresponding video shooting mode information is added to them, they are sorted, and a secondary video file is generated. Selecting frames from the initial file reduces the number of video frames while preserving the user's viewing experience, avoiding the problems that a large number of decoded frames introduces data redundancy and degrades uploading to the server and playback on other terminals. Repeated decoding operations are also saved, which facilitates playing and storage on other terminals. Because the selected frames carry the corresponding video shooting mode information and are sorted in their original relative order, the video pictures retain their original storage state, which facilitates use and storage by other terminal devices. Meanwhile, since the initial video file contains both long-view and close-view video frames, the generated secondary video file allows long-view and close-view picture frames to be switched selectively according to the user's needs, providing videos of the shooting scene at different distances while maintaining playing definition, which is convenient for the user.
Referring to fig. 7, a flowchart of a video file playing method applied to the second terminal 120 shown in fig. 1 according to a third embodiment of the present invention is provided. The process shown in fig. 7 will be explained in detail below.
Step S701, a video file is played.
The video file playing device to which the video file playing method of this embodiment applies may be a video file playing module in a terminal, or an independent playing device. The video file obtained by the device may be the initial video file generated by the unmanned aerial vehicle or the secondary video file generated by the second video file generation device. Either video file contains the video shooting mode information of the video, which may be the same as described in the above embodiments, including the video shooting mode identifier, shooting duration, shooting order, and the like, and the video contains both close-view and long-view video frames.
If the video file is an initial video file, the video file playing device needs to decode it to obtain the video pictures it contains. If the video file is a secondary video file, no decoding operation is needed, and playing of the video frames in the secondary video file can be controlled directly.
Step S702, when an operation request of a user for the played video is detected, video shooting mode information corresponding to the video in a video file to which the video belongs is obtained;
The operation request is either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information.
When the video file is being played, a touch display provided in the second mobile terminal hosting the video file playing device collects the user's operation request acting on the second mobile terminal, and the video shooting mode information of the video file to which the video belongs is obtained. If the operation request is a zoom-in operation request, the user wants to play a video frame whose scene is larger than that of the currently played frame, that is, a close-view video frame relative to the current frame. If the operation request is a zoom-out operation request, the user wants to play a video frame whose scene is smaller than that of the currently played frame, that is, a long-view video frame relative to the current frame.
The operation request may be implemented in various ways, including but not limited to: an indication mark on the second mobile device for the zoom-in/zoom-out operation, a two-point touch on the touch screen, the sliding direction of a single-point touch, and the like. In one embodiment, a "+" icon indicating a zoom-in request or a "-" icon indicating a zoom-out request may be provided, or a slider in a sliding slot may be provided, where dragging it to the right indicates a zoom-in request and sliding it to the left indicates a zoom-out request. On the touch display, two fingers moving toward each other represent a zoom-out request, and two fingers moving apart represent a zoom-in request. Alternatively, a single finger at any position on the touch display sliding to the left represents a zoom-out request and sliding to the right represents a zoom-in request. In other embodiments, the user may also customize, through the second mobile terminal, the operation gestures corresponding to the zoom-in and zoom-out requests; this is not limited here.
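The gesture-to-request mapping above can be sketched as two small classifiers, one for the two-finger pinch and one for the single-finger slide. The function names and return labels are illustrative assumptions.

```python
# Illustrative sketch of mapping touch input to zoom requests.
def classify_pinch(d_start, d_end):
    """Two fingers moving apart -> zoom in; moving together -> zoom out.
    `d_start`/`d_end` are the finger distances at gesture start/end."""
    if d_end > d_start:
        return "zoom_in"
    if d_end < d_start:
        return "zoom_out"
    return "none"

def classify_slide(dx):
    """Single-finger horizontal slide: right -> zoom in, left -> zoom out."""
    return "zoom_in" if dx > 0 else "zoom_out"
```

A real implementation would receive these distances and displacements from the platform's touch-event callbacks.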
Step S703, obtaining a target frame corresponding to the operation request from the video file according to the video shooting mode information.
The obtained video shooting mode information of the video file mainly includes the video shooting mode identifier representing the shooting mode, the segment identifier, and the like. After the user's operation request is obtained, the target frame that the user wants displayed is searched for according to that request.
Step S704, the acquired target frame is played.
After the target frame the user wants displayed is found, it is controlled to play, so that the user controls the video file playing device to play the target video through the operation request.
When the user's operation request is a zoom-in operation request, acquiring the target frame corresponding to the request from the video file according to the video shooting mode information includes: acquiring the target frame, according to the video shooting mode information, from a video frame shot at a position closer to the subject than the shooting position of the currently played frame.
In addition, since the played video may include video segments captured in at least two video shooting modes, the video segment currently being played must be determined before the target frame is acquired. Specifically, the segment identifier of the currently played frame and the video shooting mode identifier of the corresponding video segment are obtained. The present application restricts the user's zoom-in and zoom-out operations on the currently played video to the same video segment, preventing a switch to video of another scene shot under different conditions, which would harm the user's viewing experience. That is, the segment identifier of the segment to which the currently played frame belongs and the video shooting mode information of that segment are obtained, and the target frame is acquired, according to the video shooting mode information, from a frame within the segment shot at a position closer than the shooting position of the currently played frame.
It is then judged whether the video shooting mode identifier corresponding to the segment is the far-to-near shooting mode identifier. If it is, the target frame is acquired from the video following the currently played video within the segment, according to the video storage order.
When a video file is stored, the video storage order generally stores the captured frames in the temporal order in which they were shot in the video shooting mode. If the video shooting mode is far-to-near, the displayed scene of the shot grows step by step from long-view picture frames to close-view picture frames as the storage order advances. Conversely, if the video shooting mode is near-to-far, the displayed scene shrinks step by step as close-view picture frames give way to long-view picture frames from front to back in the storage order.
After the video segment corresponding to the segment identifier is determined, the video shooting mode of that segment is obtained. If the mode is far-to-near, then for a user's zoom-in request, close-view video frames with a relatively larger displayed scene lie in the video following the currently played video within the segment, and the target frame is acquired from those subsequent close-view frames according to the video storage order.
If the video shooting mode identifier corresponding to the segment is not the far-to-near shooting mode identifier, the target frame is acquired from the video preceding the currently played video within the segment, according to the video storage order.
For a user's zoom-in request in this case, a close-view video frame with a relatively larger displayed scene is obtained, according to the video storage order, from the video preceding the currently played video within the segment, and the target frame is acquired from those preceding close-view frames.
In the case where the operation request is a zoom-out operation request, acquiring the target frame corresponding to the request according to the video shooting mode information includes: acquiring the target frame, according to the video shooting mode information, from a video frame shot at a position farther from the subject than the shooting position of the currently played frame.
Before the target frame is obtained, the video segment currently being played is determined: the segment identifier of the currently played frame and the video shooting mode identifier of the corresponding segment are obtained. As above, zoom-in and zoom-out operations are restricted to the same video segment to prevent switching to video of another scene, which would harm the viewing experience. That is, in this embodiment, once the segment identifier of the segment to which the currently played frame belongs and the corresponding video shooting mode information are obtained, the target frame is acquired, according to that information, from a frame within the segment shot at a position farther than the shooting position of the currently played frame.
It is then judged whether the video shooting mode identifier corresponding to the segment is the far-to-near shooting mode identifier.
If the video shooting mode identifier corresponding to the segment is the far-to-near shooting mode identifier, the target frame is acquired from the video preceding the currently played video within the segment, according to the video storage order.
After the video segment corresponding to the segment identifier is determined, the video shooting mode of that segment is obtained. If the mode is far-to-near, then for a user's zoom-out request, a long-view video frame with a relatively smaller displayed scene is obtained, according to the video storage order, from the video preceding the currently played video within the segment, and the target frame is acquired from those preceding long-view frames.
If the video shooting mode identifier corresponding to the segment is not the far-to-near shooting mode identifier, the target frame is acquired from the video following the currently played video within the segment, according to the video storage order.
For a user's zoom-out request in this case, a long-view video frame with a relatively smaller displayed scene is obtained, according to the video storage order, from the video following the currently played video within the segment, and the target frame is acquired from those subsequent long-view frames.
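The case analysis of steps S703/S704 reduces to a direction rule: within the current segment, a zoom-in request moves toward closer (larger-subject) frames and a zoom-out request toward more distant ones, with the search direction in storage order decided by the segment's shooting mode. Names and the fixed one-frame step below are assumptions.

```python
# Sketch of the target-frame lookup within a segment.
def target_frame_index(current, mode_id, request, n_frames, step=1):
    """Return the index of the target frame within the segment, or
    `current` unchanged if the request cannot be satisfied in-segment."""
    if mode_id == "far_to_near":
        # Stored far -> near: later frames are closer to the subject.
        direction = 1 if request == "zoom_in" else -1
    else:  # "near_to_far": later frames are farther from the subject.
        direction = -1 if request == "zoom_in" else 1
    target = current + direction * step
    if 0 <= target < n_frames:
        return target
    return current
```

Clamping to the segment bounds implements the restriction that zoom operations never cross into another segment.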
The video file playing method of the above embodiment switches a fixed number of video frames for each collected operation request. On this basis, the number of frames to switch can also be derived from the magnitude of the operation trend of the user's request. The step of acquiring the target frame corresponding to the zoom operation request according to the video shooting mode information then further includes:
identifying the operation trend value of the operation request and the interval to which it belongs; acquiring the span value corresponding to that interval; and acquiring the target frame according to the span value.
When a user's operation request is obtained, the operation trend value contained in the request and the interval to which it belongs are analyzed. The operation trend may be an enlarging trend, representing a zoom-in request, or a reducing trend, representing a zoom-out request. The operation trend value falls into one of several trend intervals, and each trend interval is matched with a span value indicating the number of video frames to switch for that interval.
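The interval-to-span mapping can be sketched as a lookup over ordered intervals: a larger trend magnitude falls in a higher interval and yields a larger frame jump. The interval bounds and span values here are illustrative assumptions.

```python
# Sketch of mapping an operation-trend value to a frame-switch span.
def span_for_trend(trend, intervals):
    """`intervals` is a list of (upper_bound, span) in ascending order;
    the first interval whose upper bound exceeds |trend| wins."""
    magnitude = abs(trend)
    for upper, span in intervals:
        if magnitude < upper:
            return span
    return intervals[-1][1]

# Hypothetical intervals: faster/larger gestures jump more frames.
intervals = [(10, 1), (30, 3), (60, 6), (float("inf"), 10)]
```

The sign of `trend` (zoom in vs. zoom out) still decides the direction; the span only decides how many frames to jump.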
On the basis of the above-described embodiment, the video frame switching speed can also be controlled according to the change speed of the user's operation request, with the refresh interval selected piecewise from the gesture trend:

t = t1 if ω0 ≤ ω < ω1; t = t2 if ω1 ≤ ω < ω2; t = t3 if ω2 ≤ ω < ω3; t = t4 if ω3 ≤ ω < ω4

where ω is the trend value of the gesture zoom, ω0, ω1, ω2, ω3, ω4 are preset gesture-trend thresholds, t1, t2, t3, t4 are the candidate time intervals for refreshing video frames, and t is the resulting refresh-interval variable.

Assuming that the fastest interval at which the mobile phone can refresh a video frame is τ, the refresh of video frames during the user's operation is subject to the constraint

T = max(t, τ)

where T is the inter-frame delay of the finally refreshed video frames, t is the refresh-interval variable above, and τ is the fastest refresh interval of the mobile phone.
Single and double frame skipping can be illustrated as follows. Assume the video frame numbers are 1, 2, 3, 4, 5, 6, 7, 8, 9. Without frame skipping, forward display gives the order 1, 2, 3, 4, 5, 6, 7, 8, 9; with single frame skipping, forward display gives 1, 3, 5, 7, 9; with double frame skipping, forward display gives 1, 4, 7.
The zoom-in and zoom-out trend can also be reflected in the user's left and right sliding on the screen. When the user clicks the zoom-in or zoom-out icon, video frames can be searched for and displayed at a fixed frame interval, for example skipping x frames per click for search and display.
Of course, the video playing may also be controlled according to other parameters of the operation request of the user, which is only a part of examples and is not limited herein.
On the basis of the above embodiments, in order to further reduce the memory pressure on the mobile terminal, a ping-pong decoding mode may be adopted so that video frames are decoded while being played. A prepared storage area is provided in the mobile terminal and may be divided into three parts: a preceding storage area, a current storage area, and a subsequent storage area. The prepared storage area stores a subset of the picture frames being played; preferably, the stored subset is the currently played picture frame together with the picture frames adjacent to it before and after in the playing order. After the playing order of the video picture frames is obtained, the preceding picture frames (before the current frame in order) and the subsequent picture frames (after it) are determined from the currently played frame. The preceding picture frames may be just the previous frame, or a certain number of frames before the current one; the number of subsequent picture frames is chosen to match the number of preceding frames.
After the current video frame, the preceding picture frames, and the subsequent picture frames are obtained, they are stored in the prepared storage area according to a preset storage rule: preferably, the currently played frame in the current storage area, the preceding frames in the preceding storage area, and the subsequent frames in the subsequent storage area. After the currently played frame and its adjacent frames are stored in the prepared storage area, playing of the picture frames is controlled, specifically according to the operation posture. The operation postures include a first posture indicating that picture frames are read forward and a second posture indicating that they are read backward, and controlling the playing of the frames stored in the prepared storage area according to the operation posture may include:
and if the operation posture is the first posture, taking the preorder picture frame as a new current picture frame, taking a picture frame before the preorder picture frame as a new preorder picture frame, and controlling the new current picture frame to be played.
if the operation posture is the second posture, taking the subsequent picture frame as the new current frame, taking the frame after it as the new subsequent frame, and controlling the new current frame to play.
In one embodiment, the stored sequence of video picture frames is 1, 2, 3, 4, 5, 6, 7, 8, and the currently played frame is picture frame No. 5. The prepared storage area comprises areas A, B, C and D. A and B form the preceding storage area: A stores picture frame No. 3 and B stores picture frame No. 4. C and D form the subsequent storage area: C stores picture frame No. 6 and D stores picture frame No. 7. If the operation posture is the first posture, indicating that video frames are to be read forward, picture frame No. 4 becomes the new current frame, picture frame No. 3 becomes a new preceding frame stored in B, picture frame No. 2 (before the new preceding frame) is stored in A, picture frame No. 5 becomes a new subsequent frame stored in C, and picture frame No. 6 is stored in D; and so on. Storing only the currently played portion of the video picture frames in the prepared storage area avoids exhausting the mobile terminal's memory by storing all frames at once. Decoding and storing the current frame and its adjacent frames in advance avoids stuttering during video download and further improves the user experience.
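The prepared-storage-area scheme above behaves like a sliding window of decoded frames around the current one (two preceding, the current, two subsequent, matching the A/B/C/D example). The window radius and names are assumptions.

```python
# Sketch of the ping-pong prepared storage area as a sliding window.
def slide_window(frames, current, step, radius=2):
    """step = -1 for the first posture (previous frame becomes current),
    +1 for the second posture (next frame becomes current).
    Returns (new_current_index, window_of_stored_frames)."""
    new_current = min(max(current + step, 0), len(frames) - 1)
    lo = max(new_current - radius, 0)
    hi = min(new_current + radius, len(frames) - 1)
    window = frames[lo:hi + 1]   # frames kept in the prepared storage area
    return new_current, window

frames = [1, 2, 3, 4, 5, 6, 7, 8]
cur = 4                               # picture frame No. 5 is playing
cur, window = slide_window(frames, cur, -1)  # first posture: read forward
```

After one first-posture step, frame No. 4 is current and the window holds frames 2–6, exactly the A/B/current/C/D layout of the example.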
The video file playing method provided by the embodiment of the invention derives the direction and the number of video frames to switch from the operation trend corresponding to the operation request applied by the user and the span value corresponding to that trend, so that the video frames the user wants to play are obtained and played, further facilitating use.
Referring to fig. 8, a functional block diagram of a first video file generating apparatus 800 according to a fourth embodiment of the present invention is shown. The first video file generation apparatus 800 includes: a video information acquisition module 801 and a video file generation module 802.
A video information obtaining module 801, configured to obtain a video captured by a video capturing device and video shooting mode information corresponding to the video; the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
a video file generating module 802, configured to generate an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
In the case that the video includes two video shooting modes, the video shooting mode information acquired by the video information acquiring module 801 includes: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
The video file generation module 802 is configured to:
setting segment identifiers for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier to each video segment after the segment identifiers are set.
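The segmentation step above can be sketched as follows. The data structures are illustrative assumptions, not the patent's actual file format: each shooting mode, taken in shooting order with its duration, yields one tagged segment.

```python
def build_initial_file(modes):
    """modes: list of (mode_id, duration_seconds) in shooting order,
    e.g. [("near_to_far", 10), ("far_to_near", 5)]."""
    segments = []
    start = 0.0
    for mode_id, duration in modes:
        segments.append({
            "segment_id": len(segments),   # segment identifier
            "mode_id": mode_id,            # video shooting mode identifier
            "start": start,
            "end": start + duration,
        })
        start += duration
    return {"segments": segments}

meta = build_initial_file([("near_to_far", 10), ("far_to_near", 5)])
# Two segments: [0, 10) tagged near_to_far, [10, 15) tagged far_to_near.
```

A player can later look up which shooting mode governs any playback position by finding the segment whose time range contains it.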
The first video file generation device provided by the embodiment of the invention can generate, for the video, an initial video file containing the video shooting mode information according to that information, so that a user who downloads the initial video file generated by the unmanned aerial vehicle can conveniently switch between distant-view and close-view picture frames. Videos of the shooting scene at different distances are thus provided to the user while the video playing definition is preserved, facilitating use. For the specific implementation process of the first video file generation apparatus provided in the embodiment of the present invention, please refer to the above method embodiment; it is not described in detail here.
Referring to fig. 9, a functional block diagram of a second video file generating apparatus 900 according to a fifth embodiment of the present invention is shown. The second video file generation device is applied to the mobile terminal. The device comprises: a first obtaining module 901, a second obtaining module 902, an adding module 903 and a video file generating module 904.
A first obtaining module 901, configured to obtain an initial video file, where the initial video file includes a video and video shooting mode information corresponding to the video, and the video shooting mode information is at least one of far-to-near mode information and near-to-far mode information;
a second obtaining module 902, configured to obtain a selected video frame of the video;
an adding module 903, configured to add corresponding video shooting mode information to the selected video frame;
a video file generating module 904, configured to sort the selected video frames to which the video shooting mode information is added, and generate a secondary video file.
The apparatus may further include:
a decoding module 905 (not shown in the figure) for decoding the selected video frame to obtain a video picture corresponding to the selected video frame;
the adding module 903 adds the corresponding video shooting mode information to the selected video frame as follows: the corresponding video shooting mode information is added to the video picture decoded from the selected video frame.
The video photographing mode information includes: a video capture mode identification and a segment identification, the adding module configured to: adding the video capture mode identification and segment identification to the selected video frame.
The video file generation module 904 is configured to:
acquiring an initial sequence of the selected video frames in the initial video file;
and sequencing the selected video frames according to the initial sequence.
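The secondary-file flow handled by modules 901 through 904 can be sketched end to end as follows. All structures here are illustrative assumptions: selected frames keep their segment's shooting mode information and are sorted back into their initial order.

```python
def generate_secondary_file(initial_file, selected_indices):
    """Keep only the selected frames, tag each with its segment's
    shooting mode information, and sort by initial order."""
    frames = initial_file["frames"]        # index -> frame payload
    seg_of = initial_file["segment_of"]    # index -> segment id
    mode_of = initial_file["mode_of"]      # segment id -> mode id
    selected = []
    for i in selected_indices:
        selected.append({
            "initial_index": i,
            "frame": frames[i],
            "segment_id": seg_of[i],
            "mode_id": mode_of[seg_of[i]],
        })
    # Preserve the frames' relative order from the initial file,
    # regardless of the order in which the user selected them.
    selected.sort(key=lambda f: f["initial_index"])
    return {"frames": selected}

initial = {
    "frames": {0: "F0", 1: "F1", 2: "F2", 3: "F3"},
    "segment_of": {0: 0, 1: 0, 2: 1, 3: 1},
    "mode_of": {0: "near_to_far", 1: "far_to_near"},
}
out = generate_secondary_file(initial, [3, 0])
# Frames F0 then F3, each carrying its segment and mode identifiers.
```

Because the output preserves the initial relative order and the per-frame mode information, another terminal can play or store the smaller file without re-deriving anything from the original.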
The second video file generation device provided by the embodiment of the invention performs a secondary processing operation on the initial video file generated by the unmanned aerial vehicle: selected video frames are acquired from the initial video file, the corresponding video shooting mode information is added to them, they are sorted, and a secondary video file is generated. Selecting frames from the initial file reduces the number of video frames while preserving viewing quality for users, saves repeated decoding operations, and facilitates playback and storage on other terminals. Because the selected frames carry the corresponding video shooting mode information and are sorted in their original relative order, the video pictures keep their original storage state, which facilitates use and storage by other terminal devices. For the specific implementation process of the second video file generation apparatus provided in the embodiment of the present invention, please refer to the above method embodiment; it is not described in detail here.
Fig. 10 is a functional block diagram of a video file playing apparatus 1000 according to a sixth embodiment of the present invention. The video file playing apparatus 1000 includes: a playing module 1001, a video shooting mode information acquisition module 1002, and a target frame acquisition module 1003.
A playing module 1001 for playing a video;
a video shooting mode information obtaining module 1002, configured to, when an operation request of a user for the played video is detected, obtain video shooting mode information corresponding to the video in the video file to which the video belongs; the operation request includes either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
a target frame obtaining module 1003, configured to obtain a target frame corresponding to the operation request in the video file according to the video shooting mode information;
the playing module 1001 is further configured to play the target frame acquired by the target frame acquiring module.
In the case where the operation request is a zoom-in operation request, the target frame acquisition module 1003 is configured to:
acquiring a target frame from a video frame shot at a shooting position closer to the shooting position of the currently played video frame according to the video shooting mode information;
in the case that the operation request is a zoom-out operation request, the target frame acquisition module 1003 is configured to:
and acquiring a target frame from the video frame shot at a shooting position far away from the shooting position of the currently played video frame according to the video shooting mode information.
The video shooting mode information includes a segment identifier, and in the case where the operation request is a zoom-in operation request, the target frame acquiring module 1003 is configured to:
acquiring a segment identifier of a segment to which a currently played video frame belongs and video shooting mode information corresponding to the segment to which the currently played video frame belongs, and acquiring a target frame from a video frame shot at a shooting position closer to the shooting position of the currently played video frame in the segment according to the video shooting mode information;
in a case where the operation request is a zoom-out operation request, the target frame acquiring module 1003 is configured to:
acquiring a segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquiring a target frame from the video frames in the segment shot at a shooting position farther from the shooting position of the currently played video frame, according to the video shooting mode information.
The target frame acquisition module 1003 is configured to:
identifying an operation trend value of the operation request and an interval to which the operation trend value belongs;
acquiring a span value corresponding to the interval;
and acquiring the target frame according to the span value.
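The trend-to-span mapping above can be sketched as follows. The interval boundaries, span values, and function names are illustrative assumptions: the magnitude of the user's gesture (the operation trend value) falls into an interval, the interval maps to a span value (how many frames to jump), and the zoom direction combined with the shooting mode determines the sign of the jump.

```python
INTERVALS = [        # (upper bound of the operation trend value, span value)
    (0.2, 1),
    (0.5, 3),
    (1.0, 6),
]

def span_for_trend(trend_value):
    """Map an operation trend value to its interval's span value."""
    for upper, span in INTERVALS:
        if trend_value <= upper:
            return span
    return INTERVALS[-1][1]

def target_frame_index(current, trend_value, zoom_in, mode):
    span = span_for_trend(trend_value)
    # In near-to-far footage, later frames were shot farther from the
    # target object, so zooming in means moving toward earlier frames.
    toward_near = -1 if mode == "near_to_far" else 1
    direction = toward_near if zoom_in else -toward_near
    return current + direction * span

# A large pinch-in on near-to-far footage jumps 6 frames toward the start.
print(target_frame_index(current=20, trend_value=0.8, zoom_in=True,
                         mode="near_to_far"))  # 14
```

A small gesture thus moves a single frame while a large one spans several, which is what lets the playback feel proportional to the user's pinch.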
The video file playing device provided by the embodiment of the invention derives the direction and the number of video frames to switch from the operation trend corresponding to the operation request applied by the user and the span value corresponding to that trend, so that the video frames the user wants to play are obtained and played, further facilitating use. For the specific implementation process of the video file playing apparatus provided in the embodiment of the present invention, please refer to the above method embodiment; it is not repeated here.
In summary, the embodiments of the present application provide:
a video file generation apparatus, the apparatus comprising:
the video information acquisition module is used for acquiring a video shot by a video shooting device and video shooting mode information corresponding to the video; the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
and the video file generating module is used for generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
Wherein, in the case that the video includes two video shooting modes, the video shooting mode information acquired by the video information acquisition module includes: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
Wherein the video file generation module is configured to:
setting a segmentation identifier for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier for each video segment after the segment identifier is set.
An unmanned aerial vehicle includes a video shooting device, a memory and a processor. The video shooting device is configured to shoot a video according to the set video shooting mode information, the video shooting mode information being at least one of near-to-far mode information and far-to-near mode information; the memory is configured to store the video shot by the video shooting device and the video shooting mode information,
the processor is configured to obtain the video and the video shooting mode information, and generate an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
Wherein, in the case that the video includes two video shooting modes, the video shooting mode information includes: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
Wherein the processor is configured to:
setting a segmentation identifier for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier for each video segment after the segment identifier is set.
A video file generation method is applied to an unmanned aerial vehicle and used for generating a video file for a video shot by a video shooting device on the unmanned aerial vehicle, and the method comprises the following steps:
acquiring a video shot by a video shooting device and video shooting mode information corresponding to the video; the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
and generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information.
Wherein, in the case that the video includes two video shooting modes, the video shooting mode information includes: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
Wherein the generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information comprises:
setting a segmentation identifier for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier for each video segment after the segment identifier is set.
A video file playback apparatus comprising:
the playing module is used for playing the video;
the video shooting mode information acquisition module is used for acquiring video shooting mode information corresponding to the video in a video file to which the video belongs when an operation request of a user on the played video is detected; the operation request includes either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
a target frame acquiring module, configured to acquire a target frame corresponding to the operation request from the video file according to the video shooting mode information;
the playing module is further configured to play the target frame acquired by the target frame acquiring module.
Wherein, in a case that the operation request is a zoom-in operation request, the target frame acquisition module is configured to:
acquiring a target frame from a video frame shot at a shooting position closer to the shooting position of the currently played video frame according to the video shooting mode information;
in a case where the operation request is a zoom-out operation request, the target frame acquisition module is configured to:
and acquiring a target frame from the video frame shot at a shooting position far away from the shooting position of the currently played video frame according to the video shooting mode information.
Wherein the video shooting mode information includes a segment identifier, and in the case where the operation request is a zoom-in operation request, the target frame acquisition module is configured to:
acquiring a segment identifier of a segment to which a currently played video frame belongs and video shooting mode information corresponding to the segment to which the currently played video frame belongs, and acquiring a target frame from a video frame shot at a shooting position closer to the shooting position of the currently played video frame in the segment according to the video shooting mode information;
in a case where the operation request is a zoom-out operation request, the target frame acquisition module is configured to:
acquiring a segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquiring a target frame from the video frames in the segment shot at a shooting position farther from the shooting position of the currently played video frame, according to the video shooting mode information.
Wherein the target frame acquisition module is configured to:
identifying an operation trend value of the operation request and an interval to which the operation trend value belongs;
acquiring a span value corresponding to the interval;
and acquiring the target frame according to the span value.
A terminal device, characterized by comprising the video file playing apparatus described above.
A method of video file playback, the method comprising:
playing the video;
when an operation request of a user for the played video is detected, video shooting mode information corresponding to the video in the video file to which the video belongs is obtained; the operation request includes either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
acquiring a target frame corresponding to the operation request in the video file according to the video shooting mode information;
and playing the acquired target frame.
Wherein, in a case that the operation request is a zoom-in operation request, the acquiring, according to the video shooting mode information, a target frame corresponding to the operation request in the video file includes:
acquiring a target frame from a video frame shot at a shooting position closer to the shooting position of the currently played video frame according to the video shooting mode information;
in a case that the operation request is a zoom-out operation request, the acquiring a target frame corresponding to the operation request according to the video shooting mode information includes:
and acquiring a target frame from the video frame shot at a shooting position far away from the shooting position of the currently played video frame according to the video shooting mode information.
Wherein the video shooting mode information includes a segment identifier, and in a case that the operation request is a zoom-in operation request, the acquiring, according to the video shooting mode information, a target frame corresponding to the operation request in the video file includes:
acquiring a segment identifier of a segment to which a currently played video frame belongs and video shooting mode information corresponding to the segment to which the currently played video frame belongs, and acquiring a target frame from a video frame shot at a shooting position closer to the shooting position of the currently played video frame in the segment according to the video shooting mode information;
in a case that the operation request is a zoom-out operation request, the acquiring a target frame corresponding to the operation request according to the video shooting mode information includes:
acquiring a segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquiring a target frame from the video frames in the segment shot at a shooting position farther from the shooting position of the currently played video frame, according to the video shooting mode information.
Wherein the acquiring, according to the video shooting mode information, a target frame corresponding to the operation request in the video file includes:
identifying an operation trend value of the operation request and an interval to which the operation trend value belongs;
acquiring a span value corresponding to the interval;
and acquiring the target frame according to the span value.
A video file generation apparatus comprising:
the video shooting method comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is used for obtaining an initial video file, the initial video file comprises a video and video shooting mode information corresponding to the video, and the video shooting mode information is at least one of far-to-near mode information and near-to-far mode information;
the second acquisition module is used for acquiring a selected video frame of the video;
the adding module is used for adding corresponding video shooting mode information for the selected video frame;
and the video file generation module is used for sequencing the selected video frames added with the video shooting mode information to generate a secondary video file.
Wherein the apparatus further comprises:
the decoding module is used for decoding the selected video frame to obtain a video picture corresponding to the selected video frame;
the adding module adds the corresponding video shooting mode information to the selected video frame as follows: the corresponding video shooting mode information is added to the video picture decoded from the selected video frame.
Wherein the video photographing mode information includes: a video capture mode identification and a segment identification, the adding module configured to: adding the video capture mode identification and segment identification to the selected video frame.
Wherein the video file generation module is configured to:
acquiring an initial sequence of the selected video frames in the initial video file;
and sequencing the selected video frames according to the initial sequence.
A terminal device characterized by comprising the video file generation apparatus described above.
A video file generation method is applied to a mobile terminal, and comprises the following steps:
acquiring an initial video file, wherein the initial video file comprises a video and video shooting mode information corresponding to the video, and the video shooting mode information comprises at least one of far-to-near mode information and near-to-far mode information;
acquiring a selected video frame of the video;
adding corresponding video shooting mode information to the selected video frame;
and sequencing the selected video frames added with the video shooting mode information to generate a secondary video file.
After acquiring the selected video frame of the video, the method further comprises:
decoding the selected video frame to obtain a video picture corresponding to the selected video frame;
the adding of the corresponding video shooting mode information to the selected video frame is performed as follows: the corresponding video shooting mode information is added to the video picture decoded from the selected video frame.
Wherein the video photographing mode information includes: the video shooting mode identification and the segmentation identification, wherein the adding of the corresponding video shooting mode information to the selected video frame includes: adding the video capture mode identification and segment identification to the selected video frame.
Wherein the step of ordering the selected video frames to which the video capture mode information is added comprises:
acquiring an initial sequence of the selected video frames in the initial video file;
and sequencing the selected video frames according to the initial sequence.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (25)

1. An apparatus for generating a video file, the apparatus comprising:
the video information acquisition module is used for acquiring a video shot by a video shooting device and video shooting mode information corresponding to the video; the video shooting mode information is: at least one of near-to-far mode information and far-to-near mode information, and the shooting duration of each video shooting mode;
the video file generation module is used for generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information;
the near-to-far and far-to-near shooting modes indicate the position of the video shooting device relative to a target shooting object.
2. The apparatus according to claim 1, wherein in a case where the video includes two video shooting modes, the video shooting mode information obtained by the video information obtaining module includes: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
3. The apparatus of claim 2, wherein the video file generation module is configured to:
setting a segmentation identifier for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier for each video segment after the segment identifier is set.
4. An unmanned aerial vehicle includes a video capture device, a memory and a processor. The video capture device is configured to shoot a video according to the set video capture mode information, the video capture mode information being: at least one of near-to-far mode information and far-to-near mode information, and the shooting duration of each video shooting mode; the memory is configured to store a video captured by the video capture device and the video capture mode information,
the processor is configured to acquire the video and the video shooting mode information, and generate an initial video file containing the video shooting mode information for the video according to the video shooting mode information;
the near-to-far and far-to-near shooting modes indicate the position of the video shooting device relative to a target shooting object.
5. The drone of claim 4, wherein in a case that the video includes two video capture modes, the video capture mode information includes: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
6. The drone of claim 5, wherein the processor is configured to:
setting a segmentation identifier for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier for each video segment after the segment identifier is set.
7. A video file generation method is applied to an unmanned aerial vehicle and used for generating a video file for a video shot by a video shooting device on the unmanned aerial vehicle, and is characterized by comprising the following steps:
acquiring a video shot by a video shooting device and video shooting mode information corresponding to the video; the video shooting mode information is: at least one of near-to-far mode information and far-to-near mode information, and the shooting duration of each video shooting mode;
generating an initial video file containing the video shooting mode information for the video according to the video shooting mode information;
the near-to-far and far-to-near shooting modes indicate the position of the video shooting device relative to a target shooting object.
8. The method according to claim 7, wherein in the case that the video includes two video shooting modes, the video shooting mode information comprises: the video shooting mode identification, and the shooting duration and shooting sequence corresponding to the video shooting mode identification.
9. The method of claim 8, wherein generating an initial video file containing the video capture mode information for the video according to the video capture mode information comprises:
setting a segmentation identifier for the video according to the shooting sequence and the shooting duration;
and adding a corresponding video shooting mode identifier for each video segment after the segment identifier is set.
10. A video file playback apparatus, comprising:
the playing module is used for playing the video;
the video shooting mode information acquisition module is used for acquiring, when an operation request of a user for the played video is detected, the video shooting mode information corresponding to the video in the video file to which the video belongs; the operation request is either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
a target frame acquisition module, configured to: in a case that the operation request is a zoom-in operation request, acquire, according to the video shooting mode information, a target frame from video frames shot at a shooting position closer to a target shooting object than the shooting position of the currently played video frame; and in a case that the operation request is a zoom-out operation request, acquire, according to the video shooting mode information, a target frame from video frames shot at a shooting position farther from the target shooting object than the shooting position of the currently played video frame;
the playing module is further used for playing the target frame acquired by the target frame acquisition module;
wherein the near-to-far and far-to-near shooting modes represent the positional state of the video shooting device relative to the target shooting object.
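The mapping claim 10 relies on can be made concrete: the shooting mode tells the player which direction along the frame sequence corresponds to "closer to the target shooting object", and the zoom request picks that direction or its opposite. A small Python sketch under that reading (function and parameter names are illustrative, not from the patent):

```python
def target_frame_index(current_idx, mode, request, total_frames, span=1):
    """Map a zoom request onto a target frame index using the shooting mode.

    In "near_to_far" mode, earlier frames were shot closer to the target
    shooting object; in "far_to_near" mode, later frames were. A zoom-in
    request moves toward frames shot closer to the object, a zoom-out
    request toward frames shot farther away; `span` is the number of
    frames to switch. The result is clamped to the valid frame range.
    """
    toward_near = -span if mode == "near_to_far" else span
    step = toward_near if request == "zoom_in" else -toward_near
    return max(0, min(total_frames - 1, current_idx + step))
```

For example, zooming in while playing frame 10 of a near-to-far clip steps backward toward the frames shot nearer the object, while the same gesture on a far-to-near clip steps forward, which is exactly why the claims record the mode in the video file.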
11. The apparatus of claim 10, wherein the video shooting mode information comprises a segment identifier, and, in a case that the operation request is a zoom-in operation request, the target frame acquisition module is configured to:
acquire the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquire, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position closer to the target shooting object than the shooting position of the currently played video frame;
in a case that the operation request is a zoom-out operation request, the target frame acquisition module is configured to:
acquire the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquire, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position farther from the target shooting object than the shooting position of the currently played video frame.
12. The apparatus of any one of claims 10 to 11, wherein the target frame acquisition module is configured to:
identify an operation trend value of the operation request and the trend interval to which the operation trend value belongs;
acquire the span value corresponding to that interval;
and acquire the target frame according to the span value;
wherein the operation trend value represents a zoom-in trend for a zoom-in operation request and a zoom-out trend for a zoom-out operation request; the operation trend value falls within one of a plurality of trend intervals, and each trend interval is matched with a span value indicating the number of video frames to switch for that interval.
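The interval-to-span lookup in claim 12 is a step function over the trend value: a small gesture switches few frames, a large one many. A sketch in Python, assuming the trend value is a gesture magnitude and using made-up interval bounds and span values (the patent specifies neither):

```python
import bisect

# Illustrative trend intervals: upper bounds of the operation trend value
# (e.g. pinch-gesture magnitude) paired with the span value each interval
# is matched with, i.e. the number of video frames to switch.
TREND_INTERVAL_BOUNDS = [50, 150, 300]
SPAN_VALUES = [1, 5, 15, 30]  # one more span than interval bounds

def span_for_trend(trend_value):
    """Identify the trend interval the operation trend value belongs to
    and return the span value matched with that interval."""
    return SPAN_VALUES[bisect.bisect_right(TREND_INTERVAL_BOUNDS, trend_value)]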
13. A terminal device, characterized by comprising the video file playback apparatus according to any one of claims 10 to 12.
14. A method for playing a video file, the method comprising:
playing the video;
when an operation request of a user for the played video is detected, acquiring video shooting mode information corresponding to the video in the video file to which the video belongs; the operation request is either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of near-to-far mode information and far-to-near mode information;
acquiring, according to the video shooting mode information, a target frame in the video file corresponding to the operation request; wherein, in a case that the operation request is a zoom-in operation request, the acquiring comprises: acquiring, according to the video shooting mode information, a target frame from video frames shot at a shooting position closer to a target shooting object than the shooting position of the currently played video frame; and in a case that the operation request is a zoom-out operation request, the acquiring comprises: acquiring, according to the video shooting mode information, a target frame from video frames shot at a shooting position farther from the target shooting object than the shooting position of the currently played video frame;
playing the acquired target frame;
wherein the near-to-far and far-to-near shooting modes represent the positional state of the video shooting device relative to the target shooting object.
15. The method according to claim 14, wherein the video shooting mode information includes a segment identifier, and, in a case that the operation request is a zoom-in operation request, the acquiring a target frame in the video file corresponding to the operation request according to the video shooting mode information includes:
acquiring the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquiring, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position closer to the target shooting object than the shooting position of the currently played video frame;
in a case that the operation request is a zoom-out operation request, the acquiring a target frame corresponding to the operation request according to the video shooting mode information includes:
acquiring the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and acquiring, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position farther from the target shooting object than the shooting position of the currently played video frame.
16. The method according to any one of claims 14 to 15, wherein the acquiring a target frame in the video file corresponding to the operation request according to the video shooting mode information comprises:
identifying an operation trend value of the operation request and the trend interval to which the operation trend value belongs;
acquiring the span value corresponding to that interval;
and acquiring the target frame according to the span value;
wherein the operation trend value represents a zoom-in trend for a zoom-in operation request and a zoom-out trend for a zoom-out operation request; the operation trend value falls within one of a plurality of trend intervals, and each trend interval is matched with a span value indicating the number of video frames to switch for that interval.
17. A video file generation apparatus, comprising:
a first acquisition module, used for acquiring an initial video file, wherein the initial video file comprises a video and video shooting mode information corresponding to the video, and the video shooting mode information is at least one of far-to-near mode information and near-to-far mode information, together with the shooting duration of each video shooting mode;
a second acquisition module, used for acquiring selected video frames of the video;
an adding module, used for adding corresponding video shooting mode information to the selected video frames;
a video file generation module, used for sorting the selected video frames to which the video shooting mode information has been added, to generate a secondary video file;
wherein the near-to-far and far-to-near shooting modes represent the positional state of the video shooting device relative to the target shooting object.
18. The apparatus of claim 17, further comprising:
a decoding module, used for decoding the selected video frame to obtain a video picture corresponding to the selected video frame;
wherein the adding module adds the corresponding video shooting mode information to the selected video frame by: adding the corresponding video shooting mode information to the video picture obtained by decoding the selected video frame.
19. The apparatus according to claim 17 or 18, wherein the video shooting mode information comprises: a video shooting mode identifier and a segment identifier, and the adding module is configured to: add the video shooting mode identifier and the segment identifier to the selected video frame.
20. The apparatus of claim 17, wherein the video file generation module is configured to:
acquiring an initial order of the selected video frames in the initial video file;
and sorting the selected video frames according to the initial order.
21. A terminal device, characterized by comprising the video file generation apparatus according to any one of claims 17 to 20.
22. A video file generation method, applied to a mobile terminal, characterized by comprising the following steps:
acquiring an initial video file, wherein the initial video file comprises a video and video shooting mode information corresponding to the video, and the video shooting mode information comprises at least one of far-to-near mode information and near-to-far mode information and the shooting duration of each video shooting mode;
acquiring a selected video frame of the video;
adding corresponding video shooting mode information to the selected video frame;
sorting the selected video frames to which the video shooting mode information has been added, to generate a secondary video file;
wherein the near-to-far and far-to-near shooting modes represent the positional state of the video shooting device relative to the target shooting object.
23. The method of claim 22, wherein after acquiring the selected video frame of the video, the method further comprises:
decoding the selected video frame to obtain a video picture corresponding to the selected video frame;
wherein the adding of the corresponding video shooting mode information to the selected video frame is: adding the corresponding video shooting mode information to the video picture obtained by decoding the selected video frame.
24. The method of claim 22, wherein the video shooting mode information comprises: a video shooting mode identifier and a segment identifier, and the adding of the corresponding video shooting mode information to the selected video frame includes: adding the video shooting mode identifier and the segment identifier to the selected video frame.
25. The method of claim 22, wherein the sorting of the selected video frames to which the video shooting mode information is added comprises:
acquiring an initial order of the selected video frames in the initial video file;
and sorting the selected video frames according to the initial order.
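Claims 20 and 25 order the selected frames of the secondary file by their position in the initial file, so the secondary video preserves the original near/far progression. A Python sketch, assuming frames are represented as dicts keyed by a hypothetical "frame_id" (the claims do not name a frame key):

```python
def build_secondary_sequence(selected_frames, initial_positions):
    """Sort selected frames (each already tagged with its shooting mode
    information) by their initial order in the initial video file, giving
    the frame sequence of the secondary video file.

    `selected_frames` is a list of dicts with at least "frame_id" and
    "mode"; `initial_positions` maps frame id -> index in the initial file.
    """
    return sorted(selected_frames,
                  key=lambda frame: initial_positions[frame["frame_id"]])

# Frames selected out of order are restored to their original sequence
selected = [{"frame_id": "f2", "mode": "far_to_near"},
            {"frame_id": "f0", "mode": "near_to_far"}]
secondary = build_secondary_sequence(selected, {"f0": 0, "f1": 1, "f2": 2})
```

Sorting by the initial index rather than selection order is what keeps the per-frame mode tags consistent with the playback logic of claims 10 and 14: within the secondary file, "earlier" still means the same thing it meant in the initial file.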
CN201711232812.9A 2016-12-09 2017-11-30 Method, device and equipment for generating and playing video file Active CN108200477B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016111355340 2016-12-09
CN201611135534 2016-12-09

Publications (2)

Publication Number Publication Date
CN108200477A CN108200477A (en) 2018-06-22
CN108200477B true CN108200477B (en) 2021-04-09

Family

ID=62573505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711232812.9A Active CN108200477B (en) 2016-12-09 2017-11-30 Method, device and equipment for generating and playing video file

Country Status (1)

Country Link
CN (1) CN108200477B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108900771B (en) 2018-07-19 2020-02-07 北京微播视界科技有限公司 Video processing method and device, terminal equipment and storage medium
CN108616696B (en) 2018-07-19 2020-04-14 北京微播视界科技有限公司 Video shooting method and device, terminal equipment and storage medium
CN109639970B (en) * 2018-12-17 2021-07-30 维沃移动通信有限公司 Shooting method and terminal equipment
CN111212225A (en) * 2020-01-10 2020-05-29 上海摩象网络科技有限公司 Method and device for automatically generating video data and electronic equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104333703A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method and terminal for photographing by virtue of two cameras
CN105187715A (en) * 2015-08-03 2015-12-23 杨珊珊 Method and device for sharing aerial photography content, and unmanned aerial vehicle
CN105283816A (en) * 2013-07-31 2016-01-27 深圳市大疆创新科技有限公司 Remote control method and terminal
CN106027896A (en) * 2016-06-20 2016-10-12 零度智控(北京)智能科技有限公司 Video photographing control device and method, and unmanned aerial vehicle

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20040120549A1 (en) * 2002-12-24 2004-06-24 Chung-Shan Institute Of Science And Technology Half-plane predictive cancellation method for laser radar distance image noise

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN105283816A (en) * 2013-07-31 2016-01-27 深圳市大疆创新科技有限公司 Remote control method and terminal
CN104333703A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method and terminal for photographing by virtue of two cameras
CN105187715A (en) * 2015-08-03 2015-12-23 杨珊珊 Method and device for sharing aerial photography content, and unmanned aerial vehicle
CN106027896A (en) * 2016-06-20 2016-10-12 零度智控(北京)智能科技有限公司 Video photographing control device and method, and unmanned aerial vehicle

Also Published As

Publication number Publication date
CN108200477A (en) 2018-06-22

Similar Documents

Publication Publication Date Title
EP3226537B1 (en) Mobile terminal and method for controlling the same
CN108200477B (en) Method, device and equipment for generating and playing video file
US11521654B2 (en) Recording and playing video using orientation of device
US9357117B2 (en) Photographing device for producing composite image and method using the same
KR20220130197A (en) Filming method, apparatus, electronic equipment and storage medium
JP7017175B2 (en) Information processing equipment, information processing method, program
US20130040700A1 (en) Image capture device and image capture method
CN103581544A (en) Dynamic region of interest adaptation and image capture device providing same
KR20130138123A (en) Continuous video capture during switch between video capture devices
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN108027936B (en) Methods, systems, and media for presenting interactive elements within video content
US20170034451A1 (en) Mobile terminal and control method thereof for displaying image cluster differently in an image gallery mode
CN112532808A (en) Image processing method and device and electronic equipment
CN112839163B (en) Shot image sharing method and device, mobile terminal and readable storage medium
CN108781254A (en) It takes pictures method for previewing, graphic user interface and terminal
CN110740261A (en) Video recording method, device, terminal and storage medium
CN111669495B (en) Photographing method, photographing device and electronic equipment
US9888206B2 (en) Image capturing control apparatus that enables easy recognition of changes in the length of shooting time and the length of playback time for respective settings, control method of the same, and storage medium
CN114422692A (en) Video recording method and device and electronic equipment
EP2793462B1 (en) Method and apparatus for video call in communication system
KR20160016574A (en) Method and device for providing image
CN107705275B (en) Photographing method and mobile terminal
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
GB2513865A (en) A method for interacting with an augmented reality scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190108

Address after: 300220 Hexi District, Tianjin Dongting Road 20, Chen Tang science and Technology Business District Service Center 309-9.

Applicant after: Tianjin Yuandu Technology Co.,Ltd.

Address before: 401121 No. 19 Yinglong Avenue, Longxing Street, Yubei District, Chongqing

Applicant before: CHONGQING ZERO INTELLIGENT CONTROL TECHNOLOGY CO.,LTD.

CB02 Change of applicant information

Address after: 0701600 A102, No. 67, tourism East Road, Anxin County, Baoding City, Hebei Province

Applicant after: Hebei xiong'an Yuandu Technology Co.,Ltd.

Address before: No. 309-9, Chen Tang Science and Technology Business District Service Center, Dongting Road, Hexi District, Tianjin

Applicant before: Tianjin Yuandu Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20200922

Address after: 102100 building 27, yard 8, Fenggu 4th Road, Yanqing garden, Zhongguancun, Yanqing District, Beijing 1916

Applicant after: Beijing Yuandu Internet Technology Co.,Ltd.

Address before: 0701600 A102, No. 67, tourism East Road, Anxin County, Baoding City, Hebei Province

Applicant before: Hebei xiong'an Yuandu Technology Co.,Ltd.

GR01 Patent grant