Detailed Description of the Embodiments
To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. The detailed description of the embodiments of the present invention provided below with reference to the accompanying drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The following embodiments of the present invention mainly include: a first video file generation method and apparatus, applicable to video capture devices with a movable shooting capability, such as unmanned aerial vehicles (UAVs); a second video file generation method and apparatus, applicable to devices with a video file processing capability, such as intelligent mobile terminals; and a video file playing method and apparatus, applicable to devices with a video file playing function, such as intelligent mobile terminals.
Figure 1 is a schematic diagram of the interaction between a UAV 200 to which the first video file generating apparatus provided in an embodiment of the present invention is applied, a first mobile terminal 110 to which the second video file generating apparatus is applied, and a second mobile terminal 120 to which the video file playing apparatus is applied. The UAV 200 is communicatively connected to the first mobile terminal 110 and the second mobile terminal 120 via a network for data communication or interaction.
The first mobile terminal 110 and the second mobile terminal 120 may be different mobile devices, or may be one and the same mobile device comprising both the second video file generating apparatus and the video file playing apparatus. The mobile device 100 may be a personal computer (PC), a tablet computer, a smartphone, a personal digital assistant (PDA), or the like.
Figure 2 is a block diagram of the mobile device. The mobile device 100 includes the second video file generating apparatus/video file playing apparatus, a touch display 101, a memory 102, a storage controller 103, a processor 104, a peripheral interface 105, an input/output unit 106, and so on.
The touch display 101, the memory 102, the storage controller 103, the processor 104, the peripheral interface 105 and the input/output unit 106 are electrically connected to one another, directly or indirectly, to enable the transmission or interaction of data. For example, these elements may be electrically connected to one another via one or more communication buses or signal lines. The second video file generating apparatus/video file playing apparatus includes at least one module that may be stored in the memory 102 in the form of software or firmware. The processor 104 is configured to execute the executable modules stored in the memory 102, such as the software function modules or computer programs included in the second video file generating apparatus or the video file playing apparatus.
The memory 102 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.
The memory 102 is configured to store a program, and the processor 104 executes the program after receiving an execution instruction. The method performed by the mobile device 100 as defined by the flow disclosed in any embodiment of the present invention may be applied to, or implemented by, the processor 104.
The processor 104 may be an integrated circuit chip with signal processing capability. The processor 104 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The peripheral interface 105 couples the various input/output units 106 to the processor 104 and the memory 102. In some embodiments, the peripheral interface, the processor and the storage controller may be implemented in a single chip. In other examples, they may each be implemented by an independent chip.
The input/output unit 106 is configured to provide user input data so as to enable interaction between the user and the UAV. The input/output unit may be, but is not limited to, a touch screen, a mouse, a keyboard, or the like, which outputs a corresponding signal in response to a user operation.
The touch display 101 provides an interactive interface (for example, a user interface) to the user, or is used to display image data for the user's reference. In this embodiment, the touch display may be a capacitive touch screen or a resistive touch screen that supports single-point and multi-point touch operations. Supporting single-point and multi-point touch operations means that the touch display can sense touch operations generated simultaneously at one or more positions on the touch display, and hand the sensed touch operations over to the processor for calculation and processing.
Referring to Figure 3, the UAV 200 may include a video capture device 201, a memory 202 and a processor 203, which are electrically connected to one another, directly or indirectly, to enable the transmission or interaction of data. For example, these elements may be electrically connected to one another via one or more communication buses or signal lines. The video capture device 201 is configured to perform the video image acquisition operation during the flight of the UAV, and can shoot video according to set video shooting mode information, the video shooting mode information being at least one of two kinds of mode information: near-to-far and far-to-near. The memory 202 is configured to store the video captured by the video capture device and the video shooting mode information. The processor 203 is configured to obtain the video and the video shooting mode information, and to generate, for the video according to the video shooting mode information, an initial video file containing the video shooting mode information. Where the video includes both video shooting modes, the video shooting mode information includes a video shooting mode identifier, the shooting duration corresponding to the video shooting mode identifier, and the shooting sequence; in this case, the processor 203 is configured to: set segment identifiers for the video according to the shooting sequence and the shooting duration; and add the corresponding video shooting mode identifier to each segment of video after the segment identifiers are set.
The memory 202 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.
The memory 202 is configured to store a plurality of instructions suitable for being loaded and executed by the processor 203. After obtaining an instruction to be executed, the processor 203 executes the instruction; the processor 203 in this embodiment can execute the instructions corresponding to each step of the methods described in the following embodiments, so as to complete the function of each step.
The processor 203 may be an integrated circuit chip with signal processing capability. The processor 203 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a speech processor, a video processor, or the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array, another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor 203 may be any conventional processor.
It can be understood that, as in Figure 2, a storage controller (not shown) and a peripheral interface (not shown) may also be provided between the memory 202 and the processor 203 in the UAV 200.
Referring to Figure 4, which is a flow chart of the steps of the video file generation method applied to the UAV 200 shown in Figure 1, provided by the first embodiment of the present invention, the method is used by the video capture device on the UAV to shoot video and generate an initial video file. The steps shown in Figure 4 are explained in detail below.
Step S401: obtain the video shot by the video capture device and the video shooting mode information corresponding to the video.
A plurality of shooting modes are preset in the video capture device of the UAV, and the video shooting operation is performed according to the shooting mode indicated by the user. The video shooting modes include at least one of near-to-far and far-to-near. In one embodiment, the video shooting mode may also be any combination of the two shooting modes, for example first near-to-far and then far-to-near, or first far-to-near and then near-to-far.
The video shooting mode information may include a video shooting mode identifier, and the video shooting mode identifier may include a near-to-far identifier and a far-to-near identifier. Where the video includes both video shooting modes, the video shooting mode information may also include the shooting duration corresponding to each video shooting mode identifier and the shooting sequence. The shooting duration may be the duration of shooting under each video shooting mode, and the shooting sequence may be the order in which the video shooting modes are switched, for example first controlling the UAV to shoot near-to-far and then far-to-near, or first controlling the UAV to shoot far-to-near and then near-to-far.
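As a concrete illustration of the structure just described, the video shooting mode information can be modeled as an ordered list of (mode identifier, duration) entries, with the order of the list encoding the shooting sequence. This is a minimal sketch in Python; the field names, the string identifiers and the use of seconds are illustrative assumptions, not part of the specification.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical identifiers; the embodiment only requires that the two
# modes (near-to-far, far-to-near) be distinguishable.
NEAR_TO_FAR = "near_to_far"
FAR_TO_NEAR = "far_to_near"

@dataclass
class ModeEntry:
    mode_id: str        # video shooting mode identifier
    duration_s: float   # shooting duration for this mode, in seconds

@dataclass
class ShootingModeInfo:
    # Entries listed in shooting order; the list order encodes the
    # shooting sequence described above.
    entries: List[ModeEntry]

# Example: shoot near-to-far first, then far-to-near.
info = ShootingModeInfo(entries=[
    ModeEntry(NEAR_TO_FAR, 60.0),
    ModeEntry(FAR_TO_NEAR, 120.0),
])
```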
Before performing video capture, the video capture device of the UAV may first obtain video shooting mode instruction information indicating the video shooting mode. The video shooting mode instruction information may be generated by the flight controller of the UAV according to a preset control program, or sent by the user to the video file generating apparatus of the UAV via a UAV control terminal.
The specific process by which the video capture device is controlled to shoot video according to the video shooting mode instruction information is shown in Figure 5, and the process shown in Figure 5 is described in detail below.
Step S501: obtain the current shooting sequence.

The shooting sequence of the current shooting task to be performed by the video capture device of the UAV is obtained; for example, near-to-far shooting (near view first, then far view), far-to-near shooting (far view first, then near view), a combination of the above two modes, and so on.
Step S502: obtain the current video shooting mode identifier according to the shooting sequence.

After the shooting sequence is obtained, the current shooting mode is obtained according to the acquired shooting sequence. In one embodiment, if it is detected that the current shooting sequence is near view first and then far view, the corresponding shooting mode is the near-to-far shooting mode, and the corresponding video shooting mode identifier is the near-to-far identifier. If it is detected that the current shooting sequence extends first from a near view to a far view and then returns to a near view, the corresponding shooting mode is near-to-far followed by far-to-near, and the corresponding video shooting mode identifiers are the near-to-far identifier and the far-to-near identifier.
Step S503: obtain, according to the current video shooting mode identifier, the shooting duration corresponding to the current video shooting mode identifier.

Preset shooting durations corresponding to the different video shooting mode identifiers are stored in the video file generating apparatus; a matched shooting duration can be set for each shooting mode identifier. After the video shooting mode identifier of the current shooting operation is obtained according to the above steps, the shooting duration corresponding to the current video shooting mode identifier is obtained.
Step S504: control the video capture device to shoot video according to the shooting mode corresponding to the current shooting mode identifier.

After the relevant video shooting mode information corresponding to the current shooting task is obtained as above, the video capture device is controlled to perform the shooting operation according to the shooting mode corresponding to the current shooting mode identifier, and video information is acquired.
Step S505: when it is detected that the shooting duration reaches the corresponding shooting duration, switch the shooting mode.

Each shooting mode corresponds to a different shooting duration. While the video capture device is controlled to shoot video according to the shooting mode corresponding to the current shooting mode identifier, the duration of the shooting operation under that shooting mode is monitored. When it is detected that the shooting duration reaches the corresponding shooting duration, the shooting operation under the current shooting mode is stopped, a new shooting mode is switched to, and the shooting operation under the new shooting mode is performed. If all preset shooting modes have been switched through, the current shooting operation is terminated.
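Steps S501–S505 together amount to a timed loop over the scheduled modes: shoot under one mode until its duration elapses, switch to the next, and stop after the last. The sketch below shows one way such a loop could look; the `shoot_frame` and `clock` hooks are hypothetical stand-ins for the camera and a monotonic timer (in practice something like `time.monotonic`), and this is an illustration rather than the claimed implementation.

```python
def run_shooting_task(mode_schedule, shoot_frame, clock):
    """Drive the capture device through the scheduled modes: for each
    (mode_id, duration) pair in shooting order, record frames under
    that mode until its duration elapses, then switch to the next mode
    (step S505); stop after the last mode.

    `shoot_frame(mode_id)` and `clock()` are hypothetical hooks."""
    recorded = []
    for mode_id, duration in mode_schedule:
        start = clock()
        while clock() - start < duration:
            recorded.append((mode_id, shoot_frame(mode_id)))
    return recorded

# Usage with a fake clock: each call to the lambda advances time by
# one "second", so the schedule below records under near_to_far until
# 3 s elapse, then under far_to_near until 2 more seconds elapse.
_ticks = iter(range(1000))
demo = run_shooting_task([("near_to_far", 3), ("far_to_near", 2)],
                         lambda mode: "frame", lambda: next(_ticks))
```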
In one embodiment, the video shooting mode information includes: a first video shooting mode of near-to-far shooting with a shooting duration of 1 min, and a second video shooting mode of far-to-near shooting with a shooting duration of 2 min. When the video capture device is controlled to shoot video according to this video shooting mode information, the UAV is first controlled to shoot a near-to-far video of 1 min, and then controlled to shoot a far-to-near video of 2 min. The video acquired in this step then includes the near-view and far-view video frames under the first shooting mode, the far-view and near-view video frames under the second shooting mode, and the shooting durations corresponding to the first video shooting mode and the second video shooting mode.
The above is the process of controlling the video capture device to shoot video provided by the embodiments of the present application; it can be understood that the present application is not limited thereto. As can be seen from the above description, the video captured by the video capture device has corresponding video shooting mode information; therefore, in step S401, the video shooting mode information corresponding to the video can be obtained at the same time as the video.
Step S402: generate, for the video according to the video shooting mode information, an initial video file containing the video shooting mode information.

The initial video file containing the video shooting mode information is generated for the video; that is, information such as the shooting mode identifier and the shooting duration corresponding to the shooting mode used in the above video capture is added to the video to generate the initial video file.
In other embodiments, that is, where the same video contains a plurality of video shooting mode identifiers at the same time, generating for the video, according to the video shooting mode information, the initial video file containing the video shooting mode information includes: setting segment identifiers for the video according to the shooting sequence and the shooting duration; and adding the corresponding video shooting mode identifier to each segment of video after the segment identifiers are set.
The same video capture task may include at least two modes, such as far-to-near and near-to-far, at the same time. For example, the video shot by the video capture device of the UAV may first move from the ground to a distance and then return from the distance to the ground, so that the corresponding shooting modes are first zooming out from near and then zooming in from far. In this case, the video file generating apparatus in the UAV can store the shot video in segments according to the changes of shooting mode and add segment identifiers; after the segment identifiers are added, the corresponding video shooting mode identifier is added to each segment of video to record the shooting mode of the current video segment.
In one embodiment, the video captured while zooming out from near is divided into a first video segment, and the video captured while zooming in from far is divided into a second video segment. The near-to-far shooting mode identifier is added to the first video segment, and the far-to-near shooting mode identifier is added to the second video segment. Segment identifiers are added to the segments, video shooting mode identifiers are added, and the initial video file is generated from the segmented video with the shooting mode identifiers added.
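The segmentation and labeling attributed to the processor 203 can be sketched as follows: the captured frames are split into segments according to the mode schedule (duration × frame rate per segment), and each segment receives a segment identifier and a shooting mode identifier. All names, the dictionary layout and the fixed frame rate are illustrative assumptions.

```python
def build_initial_metadata(frames, schedule, fps=30):
    """Split captured frames into segments per the mode schedule and
    attach a segment identifier plus a shooting mode identifier to
    each segment. Frame counts per segment are derived from
    duration_s * fps; this is an illustration, not the specified
    file format."""
    segments = []
    start = 0
    for seg_id, (mode_id, duration_s) in enumerate(schedule, start=1):
        n = int(duration_s * fps)
        segments.append({
            "segment_id": seg_id,   # segment identifier
            "mode_id": mode_id,     # video shooting mode identifier
            "frames": frames[start:start + n],
        })
        start += n
    return segments
```

For example, a 1 s near-to-far segment followed by a 2 s far-to-near segment at 30 fps yields two segments of 30 and 60 frames respectively.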
After the user obtains the initial video file via a mobile terminal, video playback can be controlled according to the video shooting mode information contained in the video file. In one embodiment, after the user obtains the video file and decodes it to obtain the video pictures contained in the video file, the user inputs, on the mobile terminal, an operation request corresponding to the video picture to be played. The operation request may be an enlarge operation request or a reduce operation request, i.e. a request to enlarge or reduce the scene in the currently playing video frame. After the mobile terminal obtains the user's operation request, a target frame corresponding to the operation request is obtained from the video file according to the user's operation request and the video shooting mode information. For example, if the operation request input by the user is an enlarge operation request, a near-view video whose scene is larger relative to the currently playing video is searched for, and the found near-view video is played as the target video. Correspondingly, if the operation request input by the user is a reduce operation request, a far-view video whose scene is smaller relative to the currently playing video is searched for, and the found far-view video is played as the target video.
With the video file generation method provided by the embodiments of the present invention, the video file generating apparatus in the UAV can generate, for the video according to the acquired video shooting mode information, an initial video file containing the video shooting mode information, so that the user can conveniently view the initial video file generated by the UAV and selectively switch between far-view picture frames and near-view picture frames. Videos of scenes shot at different distances are thus provided to the user while video playing clarity is ensured, which is convenient for the user.
Referring to Figure 6, which is a flow chart of the steps of the video file generation method applied to the first mobile terminal 110, provided by the second embodiment of the present invention: the second video file generating apparatus to which the video file generation method provided by this embodiment is applied may preferably be a video file generation module provided in a mobile terminal, which connects to the UAV, obtains the initial video file generated by the UAV, and processes the initial video file accordingly. The process shown in Figure 6 is explained in detail below.
Step S601: obtain the initial video file.

The initial video file is the initial video file generated by the UAV described in the above embodiment from the acquired video according to the video shooting mode information of the video. The initial video file contains the video and the video shooting mode information corresponding to the video, and the video shooting mode information includes at least one of the two kinds of mode information, far-to-near and near-to-far.
There are many ways for the second video file generating apparatus to obtain the original file: for example, directly obtaining the initial video file generated by the UAV from a cache module in the UAV controller to which the apparatus is applied; establishing a communication connection with the UAV and obtaining the initial video file generated by the UAV; or obtaining the initial video file from another transfer server on which the initial video file is stored.
Step S602: obtain selected video frames of the video, that is, obtain selected video frames of the video contained in the video file.
After all the video frames of the initial video file are obtained, video frames are selected for subsequent processing according to a preset processing rule. The preset processing rule may be single frame-skipping, double frame-skipping, and so on. The premise of obtaining the selected video frames according to the preset processing rule is that the adjacent frames among the obtained selected video frames satisfy the continuity rule of human observation of video, so as to avoid the user perceiving discontinuity in the video with the naked eye during subsequent playback because too many frames are skipped, which would affect the user experience.
In one embodiment, the video frames of the initial video file are arranged as: 1, 2, 3, ..., 98, 99, 100. The preset processing rule is triple frame-skipping, i.e. one video frame is retained for every three video frames skipped; the selected video frames obtained may then be: 1, 5, 9, ..., 93, 97.
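The triple frame-skip rule of this example can be expressed as a stride over the frame sequence. The sketch below assumes frames are addressed by number and reproduces the 1, 5, 9, ..., 97 selection; the function name and `skip` parameter are illustrative.

```python
def select_frames(frame_numbers, skip=3):
    """Apply the preset processing rule from the example above: keep
    one frame, skip the next `skip` frames, and repeat. With skip=3
    this reproduces the triple frame-skip selection 1, 5, 9, ..., 97
    over frames 1..100."""
    return frame_numbers[::skip + 1]

selected = select_frames(list(range(1, 101)), skip=3)
```

A smaller `skip` keeps more frames and thus smoother playback, at the cost of less data reduction.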
The video frames of the video pictures are stored in the initial video file in a compressed format. After the above selected video frames are obtained, the acquired selected video frames need to be decoded by a decoder to obtain the video picture contained in each selected video frame. Therefore, after the selected video frames of the video are obtained, the embodiment of the present application further includes: decoding the selected video frames to obtain the video pictures corresponding to the selected video frames. Adding the corresponding video shooting mode information to the selected video frames then means adding the corresponding video shooting mode information to the decoded video pictures of the selected video frames.
Step S603: add the corresponding video shooting mode information to the selected video frames.

After all the selected video frames are obtained, the corresponding video shooting mode information is added to the selected video frames under each video shooting mode, so that the selected video frames under each video shooting mode correspond to the same video shooting mode identifier.
In one embodiment, an initial video file contains a plurality of video shooting modes; the video contained in the initial video file then includes multiple video segments divided by segment identifiers, each video segment corresponding to one video shooting mode. When the video shooting mode information is added to the selected video frames, the segment identifiers of the video segments corresponding to the different video shooting modes may also be added. That is, where the video shooting mode information contains both a video shooting mode identifier and a segment identifier, adding the corresponding video shooting mode information to the selected video frames includes: adding the video shooting mode identifier and the segment identifier to the selected video frames.
Step S604: sort the selected video frames to which the video shooting mode information has been added, and generate a secondary video file.

When the selected video frames to which the video shooting mode information has been added are sorted, the basis of the sorting is the initial order of the selected video frames in the initial video file, and the selected video frames are sorted according to that initial order. That is, the step of sorting the selected video frames to which the video shooting mode information has been added includes: obtaining the initial order of the selected video frames in the initial video file; and sorting the selected video frames according to the initial order.
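The sorting described in step S604 can be sketched as a lookup of each selected frame's position in the initial file followed by a sort on that position. Names are illustrative and frames are represented by opaque values.

```python
def sort_selected(selected, initial_order):
    """Restore the selected frames to their original relative order
    (step S604): look up each frame's position in the initial video
    file and sort by that position."""
    pos = {frame: i for i, frame in enumerate(initial_order)}
    return sorted(selected, key=lambda f: pos[f])
```

This preserves the original storage order regardless of the order in which the frames were selected or processed.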
In other embodiments, all the video frames of the video in the initial video file may first be decoded to obtain all the video frames, and some of the video pictures among all the video pictures may then be chosen as the selected video frames according to the preset processing rule, so as to generate the secondary video file.
On the basis of the above embodiment, the secondary video file obtained after processing by the second video file generating apparatus is uploaded to a server under a specific file folder name via a specified upload interface, and the file folder name is notified to the server, so that the server maps a download interface address according to the file folder name; other terminal devices then obtain the video playing file via the download interface address. It can be seen that the video playing file obtained by the other terminals likewise contains similar video shooting mode information.
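The upload/download mapping just described can be illustrated minimally as follows: a folder name is uploaded together with the file, the server maps that name to a download address, and another terminal fetches the file through the address. The class, the in-memory store and the URL scheme are all illustrative assumptions; nothing here is the specified server protocol.

```python
class TransferServer:
    """Minimal sketch of the mapping described above: upload under a
    folder name, receive a mapped download interface address, and let
    another terminal fetch the file through that address."""
    def __init__(self):
        self._files = {}

    def upload(self, folder_name, data):
        # Store the file and map the folder name to a download address
        # (the "/download/<name>" scheme is purely illustrative).
        self._files[folder_name] = data
        return f"/download/{folder_name}"

    def download(self, address):
        # Resolve the address back to the folder name and return the file.
        folder_name = address.rsplit("/", 1)[-1]
        return self._files[folder_name]
```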
The video file generation method provided by this embodiment of the present invention is a secondary processing operation applied to the initial video file generated by the UAV. Selected video frames are obtained from the acquired initial video file, the corresponding video shooting mode information is added to the acquired selected video frames, and they are sorted to generate the secondary video file. Selecting video frames from the original file reduces the number of video frames while ensuring the user's viewing experience, which avoids the large volume and data redundancy that arise when all video frames are decoded and would otherwise impair subsequent uploading to the server and playback on other terminals. Repeated decoding operations are also eliminated, which facilitates playback and storage on other terminals. The corresponding video shooting mode information is added to the selected video frames, and they are sorted according to their original relative order, so that the video pictures retain their original stored state, which facilitates use and storage by other terminal devices. At the same time, since the initial video file contains both far-view and near-view video frames, the generated secondary video file supports selectively switching between far-view picture frames and near-view picture frames according to the user's needs, providing the user with videos of scenes shot at different distances while ensuring video playing clarity, which is convenient for the user.
Referring to Figure 7, which is a flow chart of the steps of the video file playing method applied to the second mobile terminal 120 shown in Figure 1, provided by the third embodiment of the present invention, the process shown in Figure 7 is explained in detail below.
Step S701: play a video file.

The video file playing apparatus to which the video file playing method provided by this embodiment is applied may be a video file playing module in a terminal, or an independent playback device. The video file obtained by the video file playing apparatus may be the initial video file generated by the UAV, or the secondary video file generated by the second video file generating apparatus. Either video file contains the video shooting mode information of the video; the video shooting mode information may be the video shooting mode information described in the above embodiments, including the video shooting mode identifier, the shooting duration, the shooting sequence, and so on, and the video contains near-view video frames and far-view video frames.

If the video file is the initial video file, the video file playing apparatus needs to decode the initial video file to obtain the video pictures contained in the initial video file. If the video file is the secondary video file, the video file playing apparatus does not need to perform the decoding operation again, and can directly control the playing of the video frames in the secondary video file.
Step S702: when an operation request of the user on the video being played is detected, obtain the video shooting mode information corresponding to the video from the video file to which the video belongs.

The operation request includes either an enlarge operation request or a reduce operation request, and the video shooting mode information is at least one of the two kinds of mode information, near-to-far and far-to-near.
When the video file is in the playing state, the touch display provided in the second mobile terminal of the video file playing apparatus obtains the operation request the user applies to the second mobile terminal, and obtains the video shooting mode information of the video file to which the video belongs. If the operation request is an enlarge operation request, it indicates that the user wishes to play the video frame corresponding to a scene larger than the scene in the currently playing video frame, i.e. a near-view video frame relative to the currently playing video frame. If the operation request is a reduce operation request, it indicates that the user wishes to play the video frame corresponding to a scene smaller than the scene in the currently playing video frame, i.e. a far-view video frame relative to the currently playing video frame.
The operation request can be realized in many ways, including but not limited to: an identifier provided on the second mobile terminal for indicating a zoom-in/zoom-out operation, a two-point touch gesture on the touch screen, the sliding direction of a single-point touch, and so on. In one embodiment, a "+" representing a zoom-in operation request or a "-" representing a zoom-out operation request may be provided, or a single-track slider and slide groove may be provided, in which dragging the slider to the right represents a zoom-in operation request and sliding it to the left represents a zoom-out operation request. Alternatively, two fingers of the user moving toward each other on the touch display screen represent a zoom-out operation request and the two fingers moving away from each other represent a zoom-in operation request; or a single finger of the user acting on any position of the touch display screen represents a zoom-out operation request when sliding to the left and a zoom-in operation request when sliding to the right. In other embodiments, the user may also customize, through the second mobile terminal, the operation gestures corresponding to the zoom-in operation request and the zoom-out operation request, which is not limited here.
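As an illustration of how such gestures might be classified, the following sketch maps raw touch start/end coordinates to a zoom request. The function name, data layout and decision rules are assumptions for illustration only and are not taken from the embodiment.

```python
def classify_gesture(touches):
    """Map raw touch data to a zoom request.

    `touches` is a list of (start_x, end_x) tuples, one per finger.
    Illustrative sketch; names and layout are assumptions.
    """
    if len(touches) == 2:
        (s0, e0), (s1, e1) = touches
        # Pinch: compare finger spacing before and after the gesture.
        if abs(e0 - e1) > abs(s0 - s1):
            return "zoom_in"   # fingers moved apart
        return "zoom_out"      # fingers moved together
    if len(touches) == 1:
        s, e = touches[0]
        # Single-finger slide: right is zoom in, left is zoom out.
        return "zoom_in" if e > s else "zoom_out"
    return None
```

A real implementation would receive these coordinates from the platform's touch event callbacks; the classification rule itself is what the embodiment leaves configurable.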
Step S703: obtain, according to the video shooting mode information in the video file, the target frame corresponding to the operation request.
The video shooting mode information of the obtained video file mainly includes the video shooting mode identifier characterizing the shooting mode, the segment identifier, and the like. After the operation request of the user is obtained, the target frame that the user requires to be displayed is found according to the operation request.
Step S704: play the obtained target frame.
After the target frame that the user needs to display is found, the target frame is controlled to play, so that the user controls, through the operation request, the video file playing device to play the target video.
When the operation request of the user is a zoom-in operation request, obtaining, according to the video shooting mode information, the target frame corresponding to the operation request in the video file includes: obtaining, according to the video shooting mode information, the target frame among the video frames shot at shooting positions nearer than the shooting position of the currently played video frame.
In addition, since the video being played may contain video segments shot under at least two video shooting modes, the video segment in which the currently played frame resides is first determined before the target frame is obtained. The segment identifier of the segment to which the currently played frame belongs, and the video shooting mode identifier of the video segment corresponding to that segment identifier, are obtained. The zoom-in and zoom-out operations that the user applies to the currently played video are restricted to the same video segment, which prevents switching to video of other scenes shot under other shooting scenarios and impairing the user's perception. That is, the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment are obtained, and, according to the video shooting mode information, the target frame is obtained within the segment among the video frames shot at shooting positions nearer than the shooting position of the currently played video frame.
It is judged whether the video shooting mode identifier corresponding to the segment to which the segment identifier belongs is the far-to-near shooting mode identifier. If so, the target video is obtained, according to the video storage order, from the succeeding video of the currently played video within the segment.
When a video file is stored, the video storage order generally follows the time sequence in which the video frames were captured under the video shooting mode. If the video shooting mode is far-to-near, then in the video storage order the frames switch gradually, from front to back, from distant-view picture frames to close-shot picture frames, and the displayed scene of the shot scenario gradually enlarges. Conversely, if the video shooting mode is near-to-far, then in the video storage order the frames switch gradually, from front to back, from close-shot picture frames to distant-view picture frames, and the displayed scene of the shot scenario gradually shrinks.
After the video segment corresponding to the segment identifier is determined, the video shooting mode of that video segment is obtained. If the video shooting mode is far-to-near, then for the user's zoom-in operation request, close-shot video frames displaying a relatively larger scene are obtained, according to the video storage order, from the succeeding video of the currently played video within the segment, and the target video is obtained from the close-shot video frames in that succeeding video.
If the video shooting mode identifier corresponding to the segment to which the segment identifier belongs is not the far-to-near shooting mode identifier, the target video is obtained, according to the video storage order, from the preceding video of the currently played video within the segment. That is, for the user's zoom-in operation request, close-shot video frames displaying a relatively larger scene are obtained, according to the video storage order, from the preceding video of the currently played video within the segment, and the target video is obtained from the close-shot video frames in that preceding video.
In the case where the operation request is a zoom-out operation request, obtaining, according to the video shooting mode information, the target frame corresponding to the operation request includes: obtaining, according to the video shooting mode information, the target frame among the video frames shot at shooting positions farther than the shooting position of the currently played video frame.
Before the target frame is obtained, the video segment in which the currently played frame resides is first determined. The segment identifier of the segment to which the currently played video frame belongs, and the video shooting mode identifier of the video segment corresponding to that segment identifier, are obtained. The zoom-in and zoom-out operations that the user applies to the currently played video are restricted to the same video segment, which prevents switching to video of other scenes shot under other shooting scenarios and impairing the user's perception. That is, in this embodiment the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment are obtained, and, according to the video shooting mode information, the target frame is obtained within the segment among the video frames shot at shooting positions farther than the shooting position of the currently played video frame.
It is judged whether the video shooting mode identifier corresponding to the segment to which the segment identifier belongs is the far-to-near shooting mode identifier. If so, the target frame is obtained, according to the video storage order, from the preceding video of the currently played video within the segment.
After the video segment corresponding to the segment identifier is determined, the video shooting mode of that video segment is obtained. If the video shooting mode is far-to-near, then for the user's zoom-out operation request, distant-view video frames displaying a relatively smaller scene are obtained, according to the video storage order, from the preceding video of the currently played video within the segment, and the target frame is obtained from the distant-view video frames in that preceding video.
If the video shooting mode identifier corresponding to the segment to which the segment identifier belongs is not the far-to-near shooting mode identifier, the target frame is obtained, according to the video storage order, from the succeeding video of the currently played video within the segment. That is, for the user's zoom-out operation request, distant-view video frames displaying a relatively smaller scene are obtained, according to the video storage order, from the succeeding video of the currently played video within the segment, and the target frame is obtained from the distant-view video frames in that succeeding video.
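The selection logic above reduces to a two-way decision on the segment's shooting mode and the request type: a zoom-in request always searches toward nearer frames, and where "nearer" lies in storage order depends on the mode. A minimal sketch of that decision, under assumed names not taken from the embodiment:

```python
def search_direction(shooting_mode, request):
    """Return which side of the current frame to search in storage order.

    shooting_mode: 'far_to_near' or 'near_to_far'
    request: 'zoom_in' (wants nearer frames) or 'zoom_out' (wants farther frames)
    Illustrative sketch; string names are assumptions.
    """
    if shooting_mode == "far_to_near":
        # Frames get nearer as storage order advances.
        return "succeeding" if request == "zoom_in" else "preceding"
    # near_to_far: frames get farther as storage order advances.
    return "preceding" if request == "zoom_in" else "succeeding"
```

Encoding the rule as a pure function makes the symmetry of the four cases in the text explicit: swapping either the mode or the request flips the direction.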
The video file playing method provided by the above embodiments is adapted to switch a fixed number of video frames per collected operation request. On the basis of the above embodiments, the number of video frames to be switched may also be obtained according to the magnitude of the operation trend of the operation request applied by the user. The step of obtaining, according to the video shooting mode information, the target video corresponding to the zoom-in operation request further includes:
identifying the operation trend value of the operation request and the interval to which the operation trend value belongs; obtaining the span value corresponding to that interval; and obtaining the target frame according to the span value.
When the operation request of the user is obtained, the operation trend value contained in the operation request and the interval to which the operation trend value belongs are parsed. The operation trend of the user's operation request may be a zoom-in trend characterizing a zoom-in operation request or a zoom-out trend characterizing a zoom-out operation request. The operation trend value falls into one of multiple trend intervals, and each trend interval is matched with a span value that indicates the number of video frames to be switched for that trend interval.
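A possible sketch of the interval-to-span lookup follows; the boundary values and span counts are illustrative assumptions, since the embodiment leaves them as configuration:

```python
# Illustrative (lower, upper, span) table; values are assumptions.
TREND_INTERVALS = [(0.0, 0.2, 1), (0.2, 0.5, 2), (0.5, 1.0, 4)]

def span_for_trend(trend):
    """Return the number of video frames to switch for a trend magnitude."""
    for lo, hi, span in TREND_INTERVALS:
        if lo <= abs(trend) < hi:
            return span
    return TREND_INTERVALS[-1][2]  # saturate at the largest span
```

Using the magnitude of the trend value lets the same table serve both the zoom-in and zoom-out trends, with the sign of the trend deciding the direction separately.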
On the basis of the above embodiments, the video frame switching speed may also be controlled according to the change speed of the user's operation request. In the formulas
t = t1 when ω0 ≤ ω < ω1; t = t2 when ω1 ≤ ω < ω2; t = t3 when ω2 ≤ ω < ω3; t = t4 when ω3 ≤ ω < ω4,
ω is the gesture zoom trend value, ω0, ω1, ω2, ω3 and ω4 are the set boundaries of the gesture trend intervals, t1, t2, t3 and t4 are the enumerated refresh time intervals of video frames, and t is the time interval variable for refreshing video frames.
Assume the time interval of the fastest video frame refresh of the mobile phone is τ. The refresh manner of the video frames under the user's operation is then subject to the following constraint:
T = max(t, τ)
where T is the final inter-frame delay time of refreshing video frames, t is the time interval variable resolved above, and τ is the time interval of the fastest video frame refresh of the mobile phone.
The single frame skipping and double frame skipping mentioned above can be explained as follows. Assume the video frame numbers are 1, 2, 3, 4, 5, 6, 7, 8, 9. Without frame skipping, the display order during forward playback is 1, 2, 3, 4, 5, 6, 7, 8, 9. With single frame skipping and forward playback, the display order is 1, 3, 5, 7, 9. With double frame skipping and forward playback, the display order is 1, 4, 7...
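The skip patterns above follow directly from slicing the frame sequence with a stride of skip + 1, as this sketch shows:

```python
def display_order(frames, skip):
    """Frame numbers shown during forward playback, jumping `skip`
    frames after each displayed frame (skip=0 means no skipping)."""
    return frames[::skip + 1]
```

So single skipping is a stride of 2 and double skipping a stride of 3 over the stored frame order.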
The zoom trend may also be embodied in the user sliding the screen left or right. When the user taps a zoom-in or zoom-out icon, video frames can be searched and displayed at equal frame intervals; for example, every tap jumps x frames before searching and displaying. These are, of course, only some examples; video playback may also be controlled according to other parameters of the user's operation request, which is not limited here.
On the basis of the above embodiments, in order to further relieve the memory pressure of the mobile terminal, a ping-pong decoding manner may be used to control the decoding and playback of video frames. A prepared storage region is provided in the mobile terminal, and the prepared storage region may be divided into three parts: a preceding storage region, a current storage region and a succeeding storage region. The prepared storage region is used to store a part of the picture frames of the played video; preferably, the stored picture frames are the currently played picture frame and the picture frames adjacent to it in storage or playback order. After the sequence in which the video picture frames are played is obtained, the preceding picture frames before the currently played picture frame and the succeeding picture frames after the currently played picture frame are obtained according to the currently played picture frame. The preceding picture frames may be the frame immediately before the current picture frame, or a certain number of picture frames before the current picture frame. The number of succeeding picture frames is selected in accordance with the number of preceding picture frames.
After the current video frame, the preceding picture frames and the succeeding picture frames are obtained, the preceding picture frames, the currently played picture frame and the succeeding picture frames are stored into the prepared storage region according to a preset storage rule. Preferably, the currently played picture frame is stored in the current storage region, the preceding picture frames are stored in the preceding storage region, and the succeeding picture frames are stored in the succeeding storage region. After the currently played video picture frame and its neighbouring picture frames are stored in the prepared storage region, the picture frames are controlled to play; specifically, the playback of the video picture frames is controlled according to the operating gesture. The operating gesture includes a first gesture for indicating reading picture frames forward and a second gesture for indicating reading picture frames backward, and controlling, according to the operating gesture, the playback of the picture frames stored in the prepared storage region may include:
if the operating gesture is the first gesture, taking the preceding picture frame as the new current picture frame, taking the picture frame before the preceding picture frame in the sequence as the new preceding picture frame, and controlling the new current picture frame to play;
if the operating gesture is the second gesture, taking the succeeding picture frame as the new current picture frame, taking the picture frame after the succeeding picture frame in the sequence as the new succeeding picture frame, and controlling the new current picture frame to play.
In one embodiment, the current storage sequence of the video picture frames is 1, 2, 3, 4, 5, 6, 7 and 8, and the currently played video frame is picture frame No. 5. The prepared storage region comprises A, B, C and D. A and B form the preceding storage region, A storing picture frame No. 3 and B storing picture frame No. 4; C and D form the succeeding storage region, C storing picture frame No. 6 and D storing picture frame No. 7. If the operating gesture is the first gesture, indicating that video frames need to be read forward, picture frame No. 4 is taken as the new current picture frame and picture frame No. 3 is stored into B as the new preceding picture frame. Picture frame No. 2, the frame before the new preceding picture frame, is stored into A; picture frame No. 5 is stored into C as the new succeeding picture frame; and picture frame No. 6 is stored into D, and so on. By storing only a part of the picture frames of the played video in the prepared storage region, the situation of insufficient memory on the mobile terminal when all video picture frames are stored at once is avoided. Decoding and storing the currently played picture frame and its adjacent picture frames in advance also avoids stutter during video downloading, further improving the user experience.
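The A/B/C/D example above amounts to keeping a sliding window of decoded frames centred on the current one. A minimal sketch under assumed names (the class, its fields and the direction convention are illustrative, not from the embodiment):

```python
class PingPongBuffer:
    """Sliding window of frames around the current one, mirroring the
    preceding (A, B), current, and succeeding (C, D) regions in the text.
    Illustrative sketch; real code would hold decoded frame buffers."""

    def __init__(self, frames, current, radius=2):
        self.frames = frames    # full stored frame sequence
        self.current = current  # index of the currently played frame
        self.radius = radius    # frames kept on each side

    def window(self):
        """Frames currently resident in the prepared storage region."""
        lo = max(0, self.current - self.radius)
        hi = min(len(self.frames), self.current + self.radius + 1)
        return self.frames[lo:hi]

    def step(self, direction):
        """direction=-1 reads forward (first gesture, earlier frames),
        +1 reads backward (second gesture); returns the new current frame."""
        nxt = self.current + direction
        if 0 <= nxt < len(self.frames):
            self.current = nxt
        return self.frames[self.current]
```

Each step shifts the whole window by one, which is exactly the A/B/C/D reshuffle walked through in the example.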
According to the video file playing method provided by the embodiments of the present invention, the direction and number of video frames to switch are obtained through the operation trend corresponding to the operation request applied by the user and the span value corresponding to that operation trend, so that the video frames the user wishes to play are obtained and played, further facilitating use by the user.
Referring to Fig. 8, it is a functional block diagram of a first video file generating apparatus 800 provided by the fourth embodiment of the present invention. The first video file generating apparatus 800 includes: a video information obtaining module 801 and a video file generating module 802.
The video information obtaining module 801 is configured to obtain the video shot by the video shooting device and the video shooting mode information corresponding to the video; the video shooting mode information is at least one of the two mode information, near-to-far and far-to-near.
The video file generating module 802 is configured to generate, according to the video shooting mode information, an initial video file containing the video shooting mode information for the video.
The video contains two video shooting modes, and the video shooting mode information obtained by the video information obtaining module 801 includes: the video shooting mode identifiers and the shooting duration and shooting order corresponding to each video shooting mode identifier.
The video file generating module 802 is configured to: set segment identifiers for the video according to the shooting order and the shooting duration; and add the corresponding video shooting mode identifier to each segment of video after the segment identifiers are set.
The first video file generating apparatus provided by the embodiments of the present invention can generate, according to the video shooting mode information, an initial video file containing the video shooting mode information for the video, so that the user can download the initial video file generated by the unmanned aerial vehicle and perform switching between distant-view picture frames and close-shot picture frames, providing the user with video of shooting scenes at different distances on the premise of ensuring playback clarity, which is convenient for the user. For the specific implementation process of the first video file generating apparatus provided by the embodiments of the present invention, reference is made to the above method embodiments, and details are not repeated here.
Referring to Fig. 9, it is a functional block diagram of a second video file generating apparatus 900 provided by the fifth embodiment of the present invention. The second video file generating apparatus is applied to a mobile terminal. The apparatus includes: a first obtaining module 901, a second obtaining module 902, an adding module 903 and a video file generating module 904.
The first obtaining module 901 is configured to obtain an initial video file, the initial video file containing a video and the video shooting mode information corresponding to the video, the video shooting mode information being at least one of the two mode information, far-to-near and near-to-far.
The second obtaining module 902 is configured to obtain selected video frames of the video.
The adding module 903 is configured to add the corresponding video shooting mode information to the selected video frames.
The video file generating module 904 is configured to sort the selected video frames after the video shooting mode information is added, and generate a secondary video file.
The apparatus may also include a decoding module 905 (not shown), configured to decode the selected video frames to obtain the video pictures corresponding to the selected video frames.
The adding module 903 adding the corresponding video shooting mode information to the selected video frames comprises: adding the corresponding video shooting mode information to the decoded video pictures of the selected video frames.
The video shooting mode information includes a video shooting mode identifier and a segment identifier, and the adding module is configured to add the video shooting mode identifier and the segment identifier to the selected video frames.
The video file generating module 904 is configured to: obtain the initial order of the selected video frames in the initial video file; and sort the selected video frames according to the initial order.
The second video file generating apparatus provided by the embodiments of the present invention performs a post-processing operation on the initial video file generated by the unmanned aerial vehicle. Selected video frames are obtained from the obtained initial video file, the corresponding video shooting mode information is added to the obtained selected video frames, and the frames are sorted to generate a secondary video file. Selecting among the video frames of the original file reduces the number of video frames while ensuring the user's viewing experience, and eliminates repeated decoding operations, facilitating playback and storage by other terminals. The selected video frames are given the corresponding video shooting mode information and sorted according to their original relative order, so that the video pictures retain their original storage state, facilitating use and storage by other terminal devices. For the specific implementation process of the second video file generating apparatus provided by the embodiments of the present invention, reference is made to the above method embodiments, and details are not repeated here.
Referring to Fig. 10, it is a functional block diagram of a video file playing apparatus 1000 provided by the sixth embodiment of the present invention. The video file playing apparatus 1000 includes: a playing module 1001, a video shooting mode information obtaining module 1002 and a target frame obtaining module 1003.
The playing module 1001 is configured to play a video.
The video shooting mode information obtaining module 1002 is configured to, when an operation request by the user on the video being played is detected, obtain the video shooting mode information corresponding to the video in the video file to which the video belongs; wherein the operation request includes either a zoom-in operation request or a zoom-out operation request, and the video shooting mode information is at least one of the two mode information, near-to-far and far-to-near.
The target frame obtaining module 1003 is configured to obtain, according to the video shooting mode information, the target frame corresponding to the operation request in the video file.
The playing module 1001 is further configured to play the target frame obtained by the target frame obtaining module.
In the case where the operation request is a zoom-in operation request, the target frame obtaining module 1003 is configured to: obtain, according to the video shooting mode information, the target frame among the video frames shot at shooting positions nearer than the shooting position of the currently played video frame.
In the case where the operation request is a zoom-out operation request, the target frame obtaining module 1003 is configured to: obtain, according to the video shooting mode information, the target frame among the video frames shot at shooting positions farther than the shooting position of the currently played video frame.
The video shooting mode information includes a segment identifier. In the case where the operation request is a zoom-in operation request, the target frame obtaining module 1003 is configured to: obtain the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and, according to the video shooting mode information, obtain the target frame within the segment among the video frames shot at shooting positions nearer than the shooting position of the currently played video frame.
In the case where the operation request is a zoom-out operation request, the target frame obtaining module 1003 is configured to: obtain the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and, according to the video shooting mode information, obtain the target frame within the segment among the video frames shot at shooting positions farther than the shooting position of the currently played video frame.
The target frame obtaining module 1003 is configured to: identify the operation trend value of the operation request and the interval to which the operation trend value belongs; obtain the span value corresponding to that interval; and obtain the target frame according to the span value.
According to the video file playing apparatus provided by the embodiments of the present invention, the direction and number of video frames to switch are obtained through the operation trend corresponding to the operation request applied by the user and the span value corresponding to that operation trend, so that the video frames the user wishes to play are obtained and played, further facilitating use by the user. For the specific implementation process of the video file playing apparatus provided by the embodiments of the present invention, reference is made to the above method embodiments, and details are not repeated here.
In conclusion the embodiment of the present application provides:
A kind of video file generating means, described device include:
Acquiring video information module, for obtaining the video of video capture device shooting and video corresponding with the video
Screening-mode information;The video shooting mode information is:From the near to the distant and from the distant to the near at least one in both of which information
Kind;
Video file generation module, for being regarded according to the video shooting mode information for video generation comprising described
The initial video file of frequency screening-mode information.
Wherein, two kinds of video shooting modes are included in the video, acquired in the acquiring video information module
Video shooting mode information include:Video shooting mode identifies and video shooting mode identifies corresponding shooting duration of video and bat
Take the photograph sequencing.
Wherein, the video file generation module is configured as:
Segment identification is set for the video according to the shooting sequencing and the shooting duration of video;
Corresponding video shooting mode mark is added for every section of video after setting segment identification.
An unmanned aerial vehicle, comprising a video shooting device, a memory and a processor. The video shooting device is configured to shoot video according to the set video shooting mode information, the video shooting mode information being at least one of the two mode information, near-to-far and far-to-near. The memory is configured to store the video shot by the video shooting device and the video shooting mode information.
The processor is configured to obtain the video and the video shooting mode information, and generate, according to the video shooting mode information, an initial video file containing the video shooting mode information for the video.
Wherein the video contains two video shooting modes, and the video shooting mode information includes: the video shooting mode identifiers and the shooting duration and shooting order corresponding to each video shooting mode identifier.
Wherein the processor is configured to: set segment identifiers for the video according to the shooting order and the shooting duration; and add the corresponding video shooting mode identifier to each segment of video after the segment identifiers are set.
A video file generation method, applied to an unmanned aerial vehicle to generate a video file for the video shot by the video shooting device on the unmanned aerial vehicle, the method comprising:
obtaining the video shot by the video shooting device and the video shooting mode information corresponding to the video; the video shooting mode information being at least one of the two mode information, near-to-far and far-to-near;
generating, according to the video shooting mode information, an initial video file containing the video shooting mode information for the video.
Wherein the video contains two video shooting modes, and the video shooting mode information includes: the video shooting mode identifiers and the shooting duration and shooting order corresponding to each video shooting mode identifier.
Wherein generating, according to the video shooting mode information, the initial video file containing the video shooting mode information for the video comprises: setting segment identifiers for the video according to the shooting order and the shooting duration; and adding the corresponding video shooting mode identifier to each segment of video after the segment identifiers are set.
A kind of video file playing device, including:
Playing module, for playing video;
Video shooting mode data obtaining module, for detecting operation requests of the user to the video of broadcasting
When, obtain video shooting mode information corresponding with the video in the affiliated video file of the video;Wherein, the operation please
Ask including amplifieroperation request and any one of zoom-out request, the video shooting mode information for from the near to the distant and by remote and
At least one of nearly both of which information;
Target frame acquisition module, for obtained in the video file according to the video shooting mode information with it is described
The corresponding target frame of operation requests;
The playing module is additionally operable to play the target frame acquired in the target frame acquisition module.
Wherein, in the case where the operation requests is amplifieroperation requests, the target frame acquisition module is configured as:
According to the video shooting mode information, compared with the camera site of currently playing video frame close-perspective recording act as regent
It puts in the video frame of shooting and obtains target frame;
It is asked in the operation requests for reduction operation, the target frame acquisition module is configured as:
According to the video shooting mode information, position is being shot farther out compared with the camera site of currently playing video frame
It puts in the video frame of shooting and obtains target frame.
Wherein, the video shooting mode information includes a segment identifier; in the case where the operation request is a zoom-in request, the target frame acquisition module is configured to: obtain the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and obtain, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position nearer than the shooting position of the currently played video frame;
in the case where the operation request is a zoom-out request, the target frame acquisition module is configured to: obtain the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and obtain, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position farther than the shooting position of the currently played video frame.
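The segment-aware variant can be sketched as follows; each frame is assumed to carry its segment identifier and the shooting mode of its segment, so that the search never leaves the segment the current frame belongs to. The field names are illustrative assumptions.

```python
def select_in_segment(frames, current_idx, request):
    """Segment-aware target frame selection.

    frames: list of dicts with hypothetical keys 'segment' (segment
    identifier) and 'mode' ('near_to_far' / 'far_to_near').
    """
    seg = frames[current_idx]["segment"]
    mode = frames[current_idx]["mode"]   # shooting mode of this segment
    step = 1 if mode == "near_to_far" else -1
    candidate = current_idx - step if request == "zoom_in" else current_idx + step
    # Only accept the candidate if it stays inside the current segment.
    if 0 <= candidate < len(frames) and frames[candidate]["segment"] == seg:
        return candidate
    return current_idx
```

Because each segment carries its own mode, a file that alternates between near-to-far and far-to-near passes still zooms correctly: the direction of the step is re-derived from the current segment on every request.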
Wherein, the target frame acquisition module is configured to:
identify an operation trend value of the operation request and the interval to which the operation trend value belongs;
obtain a span value corresponding to the interval; and
obtain the target frame according to the span value.
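The trend-value mechanism above can be sketched as an interval-to-span lookup: a larger gesture (e.g. a wider pinch) falls into a higher interval and therefore jumps more frames. The interval boundaries, span values, and function names below are illustrative assumptions.

```python
# Interval -> span table: small gestures jump one frame, larger ones more.
SPAN_TABLE = [
    (0.0, 0.3, 1),   # trend value in [0.0, 0.3) -> span of 1 frame
    (0.3, 0.7, 3),   # trend value in [0.3, 0.7) -> span of 3 frames
    (0.7, 1.01, 5),  # trend value in [0.7, 1.0] -> span of 5 frames
]

def span_for_trend(trend_value):
    """Map an operation trend value (e.g. a normalized pinch magnitude)
    to the span value of the interval it falls in."""
    for low, high, span in SPAN_TABLE:
        if low <= trend_value < high:
            return span
    raise ValueError("trend value outside all intervals")

def target_index(current_idx, trend_value, direction):
    """direction: -1 steps toward nearer frames, +1 toward farther ones."""
    return current_idx + direction * span_for_trend(trend_value)
```
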
A terminal device, characterized by comprising the video file playing apparatus described above.
A video file playing method, the method comprising:
playing a video;
when an operation request of the user on the played video is detected, obtaining video shooting mode information corresponding to the video from the video file to which the video belongs; wherein the operation request includes any one of a zoom-in request and a zoom-out request, and the video shooting mode information is at least one of the two modes near-to-far and far-to-near;
obtaining, from the video file according to the video shooting mode information, a target frame corresponding to the operation request; and
playing the obtained target frame.
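The four steps of the playing method can be tied together in one handler; the `player` interface (`current_index`, `show`) and the video-file dictionary layout are hypothetical, used only to make the control flow concrete.

```python
def handle_operation(player, video_file, request):
    """End-to-end flow for one zoom request on a playing video:
    read the shooting mode, compute the target frame, and play it."""
    mode = video_file["shooting_mode"]     # 'near_to_far' / 'far_to_near'
    frames = video_file["frames"]
    idx = player.current_index()
    step = 1 if mode == "near_to_far" else -1
    idx = idx - step if request == "zoom_in" else idx + step
    idx = max(0, min(idx, len(frames) - 1))  # clamp to the file bounds
    player.show(frames[idx])                 # play the target frame
    return idx
```
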
Wherein, in the case where the operation request is a zoom-in request, the obtaining, from the video file according to the video shooting mode information, a target frame corresponding to the operation request comprises:
obtaining, according to the video shooting mode information, a target frame from video frames shot at a shooting position nearer than the shooting position of the currently played video frame;
in the case where the operation request is a zoom-out request, the obtaining, according to the video shooting mode information, a target frame corresponding to the operation request comprises:
obtaining, according to the video shooting mode information, a target frame from video frames shot at a shooting position farther than the shooting position of the currently played video frame.
Wherein, the video shooting mode information includes a segment identifier; in the case where the operation request is a zoom-in request, the obtaining, from the video file according to the video shooting mode information, a target frame corresponding to the operation request comprises:
obtaining the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and obtaining, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position nearer than the shooting position of the currently played video frame;
in the case where the operation request is a zoom-out request, the obtaining, according to the video shooting mode information, a target frame corresponding to the operation request comprises:
obtaining the segment identifier of the segment to which the currently played video frame belongs and the video shooting mode information corresponding to that segment, and obtaining, according to the video shooting mode information, a target frame from video frames in the segment shot at a shooting position farther than the shooting position of the currently played video frame.
Wherein, the obtaining, from the video file according to the video shooting mode information, a target frame corresponding to the operation request comprises:
identifying an operation trend value of the operation request and the interval to which the operation trend value belongs;
obtaining a span value corresponding to the interval; and
obtaining the target frame according to the span value.
A video file generating apparatus, comprising:
a first acquisition module, configured to obtain an initial video file, the initial video file including a video and video shooting mode information corresponding to the video, the video shooting mode information being at least one of the two modes far-to-near and near-to-far;
a second acquisition module, configured to obtain selected video frames of the video;
an adding module, configured to add corresponding video shooting mode information to the selected video frames; and
a video file generation module, configured to sort the selected video frames to which the video shooting mode information has been added, so as to generate a secondary video file.
Wherein, the apparatus further comprises:
a decoding module, configured to decode the selected video frames to obtain video pictures corresponding to the selected video frames;
the adding module adds the corresponding video shooting mode information to the selected video frames by: adding the corresponding video shooting mode information to the video pictures obtained by decoding the selected video frames.
Wherein, the video shooting mode information includes a video shooting mode identifier and a segment identifier, and the adding module is configured to: add the video shooting mode identifier and the segment identifier to the selected video frames.
Wherein, the video file generation module is configured to:
obtain the initial ordering of the selected video frames in the initial video file; and
sort the selected video frames according to the initial ordering.
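The generation step described above, tagging each selected frame with the shooting mode information and then restoring the frames' original order from the initial file, can be sketched as follows; the representation of frames as (identifier, mode-info) tuples is an illustrative assumption.

```python
def generate_secondary_file(initial_frames, selected_ids, mode_info):
    """Build a secondary video file from user-selected frames.

    initial_frames: frame identifiers in their order in the initial file.
    selected_ids:   the frames the user picked, in any order.
    mode_info:      shooting mode information to attach to each frame.
    """
    # Add the shooting mode information to every selected frame ...
    tagged = [(fid, mode_info) for fid in selected_ids]
    # ... then sort by each frame's initial ordering in the initial file,
    # so the secondary file preserves the original shooting sequence.
    order = {fid: i for i, fid in enumerate(initial_frames)}
    tagged.sort(key=lambda item: order[item[0]])
    return tagged
```

Sorting by the initial ordering (rather than by selection order) is what keeps the near-to-far or far-to-near progression intact in the secondary file, which the playing method relies on when answering zoom requests.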
A terminal device, characterized by comprising the video file generating apparatus described above.
A video file generation method, applied to a mobile terminal, the method comprising:
obtaining an initial video file, the initial video file including a video and video shooting mode information corresponding to the video, the video shooting mode information including at least one of the two modes far-to-near and near-to-far;
obtaining selected video frames of the video;
adding corresponding video shooting mode information to the selected video frames; and
sorting the selected video frames to which the video shooting mode information has been added, so as to generate a secondary video file.
After the obtaining of the selected video frames of the video, the method further comprises:
decoding the selected video frames to obtain video pictures corresponding to the selected video frames;
the adding of the corresponding video shooting mode information to the selected video frames is: adding the corresponding video shooting mode information to the video pictures obtained by decoding the selected video frames.
Wherein, the video shooting mode information includes a video shooting mode identifier and a segment identifier, and the adding of the corresponding video shooting mode information to the selected video frames comprises: adding the video shooting mode identifier and the segment identifier to the selected video frames.
Wherein, the step of sorting the selected video frames to which the video shooting mode information has been added comprises:
obtaining the initial ordering of the selected video frames in the initial video file; and
sorting the selected video frames according to the initial ordering.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of apparatuses, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The above is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of changes or replacements within the technical scope disclosed by the present invention, and such changes or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.