CN108063894A - Video processing method and mobile terminal - Google Patents
Video processing method and mobile terminal
- Publication number
- CN108063894A CN108063894A CN201711405235.9A CN201711405235A CN108063894A CN 108063894 A CN108063894 A CN 108063894A CN 201711405235 A CN201711405235 A CN 201711405235A CN 108063894 A CN108063894 A CN 108063894A
- Authority
- CN
- China
- Prior art keywords
- video
- sub
- camera
- picture frame
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The present invention provides a video processing method and a mobile terminal. The method includes: if a video layering instruction is received, obtaining the depth-of-field information corresponding to each pixel in each image frame of a video; layering the video according to the depth-of-field information corresponding to each pixel in each image frame of the video, to obtain at least two layers of sub-videos; and, if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, processing the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos. With the video processing method provided by the present invention, a video is layered according to depth-of-field information, and the sub-video of any layer obtained after layering can be controlled individually, which solves the prior-art problem that objects at different depth levels in a video picture cannot be controlled individually.
Description
Technical field
The present invention relates to the field of communication technologies, and in particular, to a video processing method and a mobile terminal.
Background technology
Video recording refers to continuously capturing image frames, recording them, and arranging them in chronological order to obtain a video. At present, video recording has become an important way for people to record and share their lives; compared with photography, video recording can reflect the activity in a scene more vividly. However, in each frame captured during video recording, the subject and the background are currently merged into a single whole, so control is only possible over the entire video picture, for example playing the video frame by frame. Objects at different depth levels in the video picture cannot be controlled individually, and the video processing modes are rather limited.
Summary of the invention
Embodiments of the present invention provide a video processing method and a mobile terminal, to solve the prior-art problem that objects at different depth levels in a video picture cannot be controlled individually, which makes the video processing modes rather limited.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video processing method. The method includes:
if a video layering instruction is received, obtaining the depth-of-field information corresponding to each pixel in each image frame of a video;
layering the video according to the depth-of-field information corresponding to each pixel in each image frame of the video, to obtain at least two layers of sub-videos;
if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, processing the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
In a second aspect, an embodiment of the present invention further provides a mobile terminal. The mobile terminal includes:
an obtaining module, configured to obtain, if a video layering instruction is received, the depth-of-field information corresponding to each pixel in each image frame of a video;
a layering module, configured to layer the video according to the depth-of-field information corresponding to each pixel in each image frame of the video, to obtain at least two layers of sub-videos;
a processing module, configured to process, if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above video processing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, storing a computer program, where the computer program, when executed by a processor, implements the steps of the above video processing method.
In the embodiments of the present invention, if a video layering instruction is received, the depth-of-field information corresponding to each pixel in each image frame of a video is obtained; the video is layered according to that depth-of-field information, to obtain at least two layers of sub-videos; and if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, the sub-video of the target layer is processed according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos. The video is layered according to depth-of-field information, and the sub-video of any layer obtained after layering can be controlled individually, which enriches the video control modes and solves the prior-art problem that objects at different depth levels in a video picture cannot be controlled individually, making video processing rather limited.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a video processing method according to another embodiment of the present invention;
Fig. 3 is a schematic diagram of a first camera and a second camera arranged in parallel along the width direction of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a first camera and a second camera arranged in parallel along the length direction of a mobile terminal according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the distance interval between a first camera and a second camera arranged in parallel along the width direction of a mobile terminal according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of image frames captured by a first camera and a second camera according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a first image frame captured by a first camera according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a second image frame captured by a second camera according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of computing the depth-of-field information of an object P according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of layering an image frame of a video according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of a video playback interface after video layering according to an embodiment of the present invention;
Fig. 12 is a structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 13 is a structural diagram of a mobile terminal according to another embodiment of the present invention;
Fig. 14 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a video processing method. Referring to Fig. 1, Fig. 1 is a flowchart of the video processing method according to this embodiment. As shown in Fig. 1, the method includes the following steps:
Step 101: if a video layering instruction is received, obtain the depth-of-field information corresponding to each pixel in each image frame of a video.
For example, a video layering button may be preset in the playback interface of the video; if a touch operation on the video layering button is received, it is determined that a video layering instruction has been received.
Optionally, when the video layering instruction is received, the depth-of-field information corresponding to each pixel in each image frame of the video may be computed, for example by image-shift estimation or by prediction from consecutive images; alternatively, pre-stored depth-of-field information corresponding to each pixel in each image frame of the video may be obtained. The depth-of-field information may include a depth value, and the depth value may represent the distance of each object in the image frame from the camera.
Step 102: layer the video according to the depth-of-field information corresponding to each pixel in each image frame of the video, to obtain at least two layers of sub-videos.
Specifically, the depth-of-field information corresponding to each pixel in each image frame of the video may be compared with thresholds to determine the layer to which each pixel of each image frame belongs. For example, when the video needs to be divided into three layers, the depth-of-field information corresponding to each pixel of each image frame may be compared with a first threshold and a second threshold, where the first threshold is less than the second threshold: pixels whose depth-of-field information is less than or equal to the first threshold are assigned to the first layer, pixels whose depth-of-field information is greater than the first threshold and less than the second threshold are assigned to the second layer, and pixels whose depth-of-field information is greater than or equal to the second threshold are assigned to the third layer.
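The two-threshold rule above can be sketched as follows; this is a minimal illustration with hypothetical threshold values and a tiny depth grid, not the patent's implementation:

```python
# Two-threshold layering: layer 1 if D <= t1, layer 2 if t1 < D < t2,
# layer 3 if D >= t2 (thresholds t1 < t2 are hypothetical values).

def layer_of(depth, t1, t2):
    """Return the layer index (1-3) for a single pixel depth value."""
    if depth <= t1:
        return 1
    if depth < t2:
        return 2
    return 3

def layer_frame(depth_map, t1, t2):
    """Map a 2-D grid of per-pixel depth values to per-pixel layer indices."""
    return [[layer_of(d, t1, t2) for d in row] for row in depth_map]

depth_map = [[0.5, 3.0], [12.0, 40.0]]
print(layer_frame(depth_map, t1=1.0, t2=20.0))  # [[1, 2], [2, 3]]
```

A real implementation would run this comparison over every frame of the video, but the per-pixel decision is exactly this three-way branch.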
Step 103: if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, process the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
In this embodiment of the present invention, the target operation instruction may be set reasonably according to actual conditions; for example, it may be a pause instruction, a play instruction, a blurring instruction, or the like. It can be understood that a button corresponding to each target operation instruction may be preset in the video playback interface; when the user touches such a button, the corresponding target operation instruction is generated.
Specifically, after the video is layered, each sub-video obtained by layering can be controlled individually. For example, after the video is divided into three layers of sub-videos, the sub-video of the first layer may be played while the other two layers are paused; or blurring may be applied only to the sub-video of the second layer; or a filter may be applied only to the sub-video of the first layer.
In this embodiment of the present invention, the mobile terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a wearable device, or the like.
According to the video processing method of this embodiment of the present invention, if a video layering instruction is received, the depth-of-field information corresponding to each pixel in each image frame of a video is obtained; the video is layered according to that depth-of-field information, to obtain at least two layers of sub-videos; and if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, the sub-video of the target layer is processed according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos. The video is layered according to depth-of-field information, and the sub-video of any layer can be controlled individually, which enriches the video control modes and solves the prior-art problem that objects at different depth levels in a video picture cannot be controlled individually.
Referring to Fig. 2, Fig. 2 is a flowchart of a video processing method according to another embodiment of the present invention. The difference between this embodiment and the previous one mainly lies in further specifying that the depth-of-field information corresponding to each pixel in each image frame is computed during video recording. In this embodiment, before the above step 101, the method further includes: during video recording through the cameras of the mobile terminal, computing the depth-of-field information corresponding to each pixel in each image frame captured by the cameras; and storing the depth-of-field information corresponding to each pixel in each image frame in association with the corresponding pixel.
As shown in Fig. 2, the video processing method provided by this embodiment includes the following steps:
Step 201: during video recording through the cameras of the mobile terminal, compute the depth-of-field information corresponding to each pixel in each image frame captured by the cameras.
In this embodiment of the present invention, the depth-of-field information corresponding to each pixel in each captured image frame may be computed during recording.
Optionally, to improve the accuracy of the computed depth-of-field information, the mobile terminal includes at least a first camera and a second camera. The first camera and the second camera are arranged in parallel along a target direction on the same side of the mobile terminal, the focal length of the first camera is the same as that of the second camera, and the target direction is the width direction or the length direction of the mobile terminal. In this case, the above step 201, namely computing, during video recording through the cameras of the mobile terminal, the depth-of-field information corresponding to each pixel in each captured image frame, includes:
during video recording through the first camera and the second camera, obtaining the image frames of a same object captured by the first camera and the second camera at the same moment, to obtain a first image frame and a second image frame;
determining the target coordinate Xa of the pixel corresponding to the object in the first image frame and the target coordinate Xb of the pixel corresponding to the object in the second image frame, where the target coordinates are coordinates along the target direction;
computing the depth value corresponding to the pixel of the object as

D = fT/(Xa − Xb) − f,

where D represents the depth value corresponding to the pixel of the object, f represents the focal length of the first camera (equal to the focal length of the second camera), and T represents the distance interval between the first camera and the second camera.
In this embodiment of the present invention, the mobile terminal includes at least a first camera and a second camera arranged in parallel along the target direction on the same side of the mobile terminal. For example, referring to Fig. 3, a mobile terminal 1 includes a first camera 11 and a second camera 12 arranged in parallel along the width direction of the mobile terminal 1 on its front or back; referring to Fig. 4, the first camera 11 and the second camera 12 are arranged in parallel along the length direction of the mobile terminal 1 on its front or back. The focal lengths of the first camera and the second camera are the same, and, as shown in Fig. 5, the distance interval between the first camera 11 and the second camera 12 is T.
The following takes the first camera and the second camera arranged in parallel along the width direction of the mobile terminal as an example. When a video recording instruction is received, the first camera and the second camera may be started to capture image frames simultaneously. As shown in Fig. 6, a first image frame 111 and a second image frame 121, shown in Fig. 7 and Fig. 8 respectively, are obtained, and pixels corresponding to a same object P exist in both the first image frame 111 and the second image frame 121. Fig. 9 is a schematic diagram of computing the depth-of-field information of the object P. Referring to Fig. 9, T is the distance interval between the first camera and the second camera, f is the focal length of the first camera or the second camera (the two focal lengths being equal), Xa is the abscissa of the object P in the first image frame, and Xb is the abscissa of the object P in the second image frame. The triangles PPaPb and POaOb shown in Fig. 9 are two similar triangles, and by the similar-triangle theorem:

(T − (Xa − Xb)) / T = (Z − f) / Z,

which gives:

Z = fT / (Xa − Xb).

That is, the distance from the object P to the camera is D = Z − f, where D is the depth value corresponding to the pixels of the object P.
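The derivation above can be checked numerically with a small sketch; the function name and the sample values below are illustrative assumptions, not taken from the patent:

```python
# Depth from the dual-camera relation derived above:
# Z = f*T/(Xa - Xb), and the object-to-camera distance is D = Z - f.

def depth_from_disparity(xa, xb, f, t):
    """Object-to-camera distance D, in the same length units as f and t."""
    disparity = xa - xb
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    z = f * t / disparity  # distance from the object to the lens plane
    return z - f           # D = Z - f, as in the text

# Illustrative numbers: f = 0.01, baseline T = 0.05, disparity Xa - Xb = 0.0005
# give Z = 1.0 and D = 0.99 (same units throughout).
print(depth_from_disparity(0.0012, 0.0007, f=0.01, t=0.05))
```

Note that a smaller disparity Xa − Xb yields a larger D, matching the intuition that distant objects shift less between the two cameras.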
This embodiment of the present invention may use the above approach to compute the depth value corresponding to each pixel in an image frame. It can be understood that, after the depth value corresponding to each pixel in an image frame has been computed, only the image frames captured by the first camera, or only those captured by the second camera, need to be stored, to reduce the occupied storage space.
In this embodiment, image frames of the same object are captured by two cameras at the same moment to compute the depth-of-field information corresponding to the pixels of each object in the image frame, which can improve the accuracy of the computed depth-of-field information for each pixel in each image frame.
Step 202: store each pixel in each image frame, the depth-of-field information corresponding to each pixel in each image frame, and the association between each pixel and its corresponding depth-of-field information.
Specifically, each image frame captured by the camera and the depth-of-field information corresponding to its pixels may be packaged as one frame of data, where the depth-of-field information corresponding to each pixel may be expressed as D = D(x, y), with (x, y) representing the position of an object in the image frame, namely the position of the object's corresponding pixel, and D representing the depth-of-field information corresponding to that object, namely to that pixel.
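One way to realize the frame-plus-depth packaging of step 202 is sketched below; the `DepthFrame` type and its field names are illustrative assumptions, not from the patent:

```python
# Bundle each captured frame with a same-sized depth map so that D(x, y)
# can be looked up later when the video is layered.
from dataclasses import dataclass

@dataclass
class DepthFrame:
    pixels: list  # 2-D grid of pixel values
    depth: list   # 2-D grid of depth values, same shape as pixels

    def d(self, x, y):
        """Return the stored depth D(x, y) for the pixel at column x, row y."""
        return self.depth[y][x]

frame = DepthFrame(pixels=[[10, 20], [30, 40]],
                   depth=[[1.0, 2.5], [8.0, 30.0]])
print(frame.d(1, 0))  # 2.5
```

Storing the depth map alongside the frame is what lets step 203 skip recomputation and read D(x, y) directly.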
Step 203: if a video layering instruction is received, obtain the depth-of-field information corresponding to each pixel in each image frame of the video.
In this embodiment of the present invention, when the video layering instruction is received, the stored depth-of-field information corresponding to each pixel in each image frame can be obtained directly, which improves the efficiency of obtaining the depth-of-field information and thus further improves the layering efficiency.
Step 204: layer the video according to the depth-of-field information corresponding to each pixel in each image frame of the video, to obtain at least two layers of sub-videos.
In this embodiment of the present invention, multiple thresholds for video layering may be preset, and the layer to which each pixel in each image frame belongs is determined by comparing the thresholds with the depth-of-field information corresponding to each pixel.
Optionally, to improve the flexibility of video layering, the above step 204, namely layering the video according to the depth-of-field information corresponding to each pixel in each image frame to obtain at least two layers of sub-videos, includes:
obtaining a layer quantity and layering distances;
layering the video according to the layer quantity, the layering distances, and the depth-of-field information corresponding to each pixel in each image frame of the video, to obtain sub-videos of the layer quantity.
In this embodiment, the layer quantity and the layering distances may be set reasonably by the user. Specifically, when the video layering instruction is received, a settings interface may pop up for the user to set the layer quantity and the layering distances. It can be understood that the layer quantity and the layering distances may also be preset.
The following takes the layer quantity n and the layering distances d1, d2, d3, …, dn as an example. The depth-of-field information (namely the depth value) corresponding to the i-th layer then falls in the interval

(d_{i−1} + d_i)/2 ≤ D < (d_i + d_{i+1})/2, i = 1, 2, 3, …, n,

with the conventions that the lower bound of the first layer is 0 and the upper bound of the n-th layer is ∞. According to the above D = D(x, y), all image regions Σ(x, y) belonging to the i-th layer can thereby be obtained.
For example, when n = 3, d1 = 1 m, d2 = 10 m, and d3 = 20 m, the depth intervals of the layers are as follows:
the depth interval of the first layer is: 0 ≤ D < 5.5;
the depth interval of the second layer is: 5.5 ≤ D < 15;
the depth interval of the third layer is: 15 ≤ D < ∞.
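The interval rule above (a reconstruction: the patent's formula image is not reproduced in this text, and midpoints between successive layering distances are inferred from the n = 3 example) can be checked with a short sketch; the function name is an assumption:

```python
# Layer boundaries taken midway between successive layering distances,
# with 0 at the near end and infinity at the far end.
import math

def layer_intervals(distances):
    """Given layering distances d1..dn, return n (low, high) depth intervals."""
    bounds = [0.0]
    for i in range(len(distances) - 1):
        bounds.append((distances[i] + distances[i + 1]) / 2)
    bounds.append(math.inf)
    return list(zip(bounds[:-1], bounds[1:]))

print(layer_intervals([1, 10, 20]))  # [(0.0, 5.5), (5.5, 15.0), (15.0, inf)]
```

The computed boundaries 5.5 and 15 match the three depth intervals listed above for d1 = 1 m, d2 = 10 m, d3 = 20 m.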
Specifically, each image frame of the video can be separated into 3 layers of sub-images by the above 3 depth intervals. For example, as shown in Fig. 10, the first image frame 111 is divided into 3 layers of sub-images: a first-layer sub-image 1111, a second-layer sub-image 1112, and a third-layer sub-image 1113. It can be understood that the first-layer, second-layer, and third-layer sub-images are only used to denote different image layers and do not imply an order. Specifically, the video playback interface after layering may display an operation interface for receiving operation instructions, and this operation interface may be as shown in Fig. 11.
This embodiment of the present invention layers the video according to the layer quantity, the layering distances, and the depth-of-field information corresponding to each pixel in each image frame, so that video layering can be controlled flexibly according to actual needs.
Optionally, to improve the layering effect, the above step 204, namely layering the video according to the depth-of-field information corresponding to each pixel in each image frame to obtain at least two layers of sub-videos, includes:
dividing each image frame of the video into image regions, to obtain at least two image regions corresponding to each image frame of the video;
determining, according to the depth-of-field information corresponding to the pixels of each image region in the at least two image regions corresponding to each image frame of the video, the layer to which each image region belongs;
layering the video according to the layer to which each image region in the at least two image regions corresponding to each image frame belongs, to obtain at least two layers of sub-videos.
In practice, when a video is layered according to depth-of-field information, if an object is large, pixels belonging to that object may fall into different layers, which affects the display effect of the sub-videos obtained by layering. For example, after layering, one part of a bus in the video may belong to the sub-video of the first layer while another part belongs to the sub-video of the second layer, so that when the sub-video of the first layer or of the second layer is played alone, the picture of the bus is incomplete, which affects the display effect.
In this embodiment of the present invention, each image frame may be divided into image regions; for example, the contour of each object in the image frame may be detected, and the image frame may be divided into different image regions according to these contours. After an image frame has been divided into at least two image regions, the layer to which each image region belongs can be determined. For example, the number of pixels belonging to each layer within an image region may be counted, and the layer containing the most pixels of that region may be determined as the layer to which the region belongs. Specifically, after the layer to which each image region of each image frame belongs has been obtained, each image frame of the video can be layered to obtain sub-image frames, and the sub-image frames belonging to the same layer across all image frames of the video then form the sub-video of that layer.
This embodiment of the present invention divides each image frame of the video into image regions and layers the video according to the layer to which each image region in the at least two image regions of each image frame belongs, which ensures that the image regions of a same object are in the same layer and can improve the display effect of the sub-videos obtained after layering.
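The majority-vote rule described above can be sketched as follows; assuming contour detection has already produced the per-pixel layer indices of one region, the function name is illustrative:

```python
# Assign a whole image region to the layer that contains the most of its
# pixels, so that a large object (e.g. a bus) is kept in a single layer.
from collections import Counter

def region_layer(pixel_layers):
    """Given the per-pixel layer indices of one region, return the majority layer."""
    return Counter(pixel_layers).most_common(1)[0][0]

# A bus-like region whose pixels straddle layers 1 and 2 is kept whole in layer 1.
print(region_layer([1, 1, 1, 2, 2]))  # 1
```

Per-pixel thresholding alone would split this region 3-to-2 across two layers; the vote keeps it intact.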
Step 205: if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, process the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
In this embodiment of the present invention, the target operation instruction may be set reasonably according to actual conditions; for example, it may be a pause instruction, a play instruction, a blurring instruction, or the like. It can be understood that a button corresponding to each target operation instruction may be preset in the video playback interface; when the user touches such a button, the corresponding target operation instruction is generated.
Optionally, step 205 above, namely, if a target operation instruction is received for the sub-video of the target layer among the at least two layers of sub-videos, processing the sub-video of the target layer according to the target operation instruction, includes:
when the target operation instruction includes a play control instruction, playing the sub-video of the target layer;
when the target operation instruction includes a pause control instruction, pausing playback of the sub-video of the target layer;
when the target operation instruction includes a blur operation instruction, performing blur processing on the sub-video of the target layer;
when the target operation instruction includes a filter operation instruction, performing filter processing on the sub-video of the target layer.
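The four instruction branches above amount to a dispatch from instruction type to a per-layer handler. The following is a minimal sketch of that dispatch; the `SubVideo` fields and instruction strings are illustrative assumptions, not part of the original disclosure:

```python
# Sketch: route a target operation instruction to the target layer's sub-video.
# Field and instruction names are hypothetical.

class SubVideo:
    def __init__(self, layer_index):
        self.layer_index = layer_index
        self.playing = True      # whether this layer is currently playing
        self.effects = []        # effects applied to this layer

def handle_instruction(sub_video, instruction):
    """Apply one target operation instruction to one layer's sub-video."""
    if instruction == "play":
        sub_video.playing = True
    elif instruction == "pause":
        sub_video.playing = False
    elif instruction == "blur":
        sub_video.effects.append("blur")
    elif instruction == "filter":
        sub_video.effects.append("oil_painting_filter")
    else:
        raise ValueError(f"unknown instruction: {instruction}")
    return sub_video
```

With this shape, pausing every layer except the one the user wants to watch is a loop over the other layers' sub-videos, each receiving a "pause" instruction.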
In practical applications, after the video is divided into at least two layers of sub-videos, the sub-video of any one of the layers can be played and controlled individually while the sub-videos of the other layers are paused. For example, if the user wants to observe the activity of an object in the first layer of the video picture (for example, the near layer in which a person is located), the sub-videos of the second layer and the third layer can be paused, so that the user can focus on observing the person's activity in the first layer without interference from objects in the other layers. For example, as shown in Figure 11, playback of the automobile in the second layer and the mountain peak in the third layer is paused, and only the moving picture of the person in the first layer is played.
Optionally, in this embodiment of the present invention, blur processing may be applied to sub-videos. For example, when the user touches the blur button shown in Figure 11, one or more layers of sub-videos can be blurred while the other layers are left unblurred, thereby highlighting the picture of a particular layer. If blur processing is applied to the automobile in the second layer and the mountain peak in the third layer while the person in the first layer is left unprocessed, then when the video is played, the automobile in the second layer and the mountain peak in the third layer both appear blurred, and only the picture of the person in the first layer is sharp.
Optionally, in this embodiment of the present invention, filter processing may likewise be applied to one or more layers of sub-videos while the other layers are left unfiltered. This also highlights the picture of a particular layer and can enhance the display effect. For example, an oil-painting filter is applied to the automobile in the second layer and the mountain peak in the third layer as shown in Figure 11, while the person in the first layer is left unprocessed; when the video is played, the automobile in the second layer and the mountain peak in the third layer both show an oil-painting effect, and only the picture of the person in the first layer is unaltered.
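The selective blur described above can be modeled as masking: the frame is blurred everywhere, then pixels whose layer should stay sharp are copied back from the original. A small sketch under the assumption that a per-pixel layer map is available; the 3x3 box blur here stands in for whatever blur the terminal actually applies:

```python
# Blur only the pixels belonging to the chosen layers; keep the rest sharp.
# `frame` is a 2D grayscale image as nested lists, `layer_map` gives each
# pixel's layer index, and `blur_layers` is the set of layers to blur.

def box_blur(frame):
    """3x3 mean blur with edge clamping (stand-in for the real blur)."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += frame[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def blur_selected_layers(frame, layer_map, blur_layers):
    """Composite: blurred pixels where the layer is selected, original elsewhere."""
    blurred = box_blur(frame)
    h, w = len(frame), len(frame[0])
    return [[blurred[y][x] if layer_map[y][x] in blur_layers else frame[y][x]
             for x in range(w)] for y in range(h)]
```

A per-layer filter (such as the oil-painting filter) follows the same pattern, with the filter output composited in place of the blur output.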
In this embodiment of the present invention, images are collected by multiple cameras so that depth of field information can be computed for the image, and the image is then divided into multiple layers according to the depth of field information. Specifically, during video recording, each frame of data includes the image collected by the camera (namely the original image) and the corresponding depth of field information; during video playback, the original image can be divided into multiple layers according to the depth of field information, and different processing can be applied to the images of different layers. For example, the layer the user mainly wants to observe is left unprocessed while playback of the other layers is paused, or the other layers are blurred or filtered, so as to highlight the activity of the objects in the main layer's picture. This adds interest and operability to video playback.
Referring to Figure 12, Figure 12 is a structural diagram of a mobile terminal provided by an embodiment of the present invention. As shown in Figure 12, the mobile terminal 1200 includes an acquisition module 1201, a layering module 1202 and a processing module 1203, where:
the acquisition module 1201 is configured to, if a video segmentation instruction is received, obtain the depth of field information corresponding to each pixel in each image frame of a video;
the layering module 1202 is configured to layer the video according to the depth of field information corresponding to each pixel in each image frame of the video, obtaining at least two layers of sub-videos;
the processing module 1203 is configured to, if a target operation instruction is received for the sub-video of a target layer among the at least two layers of sub-videos, process the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
Optionally, referring to Figure 13, the mobile terminal 1200 further includes:
a computing module 1204, configured to, before the depth of field information corresponding to each pixel in each image frame of the video is obtained upon receiving the layering instruction, compute, during video recording by the camera of the mobile terminal, the depth of field information corresponding to each pixel in each image frame collected by the camera of the mobile terminal;
a storage module 1205, configured to store, for each image frame, each pixel in the image frame, the depth of field information corresponding to each pixel, and the association between each pixel's depth of field information and the corresponding pixel.
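The storage scheme described for module 1205 pairs each frame's pixels with their depth values. One plausible minimal representation keeps the depth map as a second array with the same shape as the image, so the pixel-to-depth association is positional; the class and field names here are assumptions for illustration:

```python
# Sketch: each recorded frame stores its raw image plus a same-shaped depth
# map, making the pixel -> depth-of-field association implicit in position.
from dataclasses import dataclass, field

@dataclass
class DepthFrame:
    image: list   # 2D list of pixel values (the original image)
    depth: list   # 2D list of depth-of-field values, same shape as image

    def depth_at(self, y, x):
        """Return the depth value associated with pixel (y, x)."""
        return self.depth[y][x]

@dataclass
class RecordedVideo:
    frames: list = field(default_factory=list)

    def add_frame(self, image, depth):
        # Enforce the association: one depth value per pixel.
        assert len(image) == len(depth) and len(image[0]) == len(depth[0])
        self.frames.append(DepthFrame(image, depth))
```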
Optionally, the mobile terminal includes at least a first camera and a second camera; the first camera and the second camera are arranged in parallel along a target direction on the same side of the mobile terminal, the focal lengths of the first camera and the second camera are identical, and the target direction is the width direction or the length direction of the mobile terminal. The computing module 1204 is specifically configured to:
during video recording by the first camera and the second camera, obtain the image frames of the same object collected by the first camera and the second camera at the same moment, obtaining a first image frame and a second image frame;
determine the target coordinate Xa of the pixel corresponding to the object in the first image frame and the target coordinate Xb in the second image frame, where a target coordinate is a coordinate along the target direction;
compute the depth of field value of the pixel corresponding to the object using
D = (f × T) / (Xa − Xb)
where D denotes the depth of field value of the pixel corresponding to the object, f denotes the focal length of the first camera or the focal length of the second camera, and T denotes the distance between the first camera and the second camera.
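With both cameras sharing focal length f and separated by baseline T, the formula above is the standard stereo relation: depth is inversely proportional to the disparity Xa − Xb. A direct sketch (units are the caller's responsibility; with f in pixels and T in meters, D comes out in meters):

```python
def depth_from_disparity(x_a, x_b, f, T):
    """Depth D = f * T / (x_a - x_b) for a point seen at target-direction
    coordinate x_a in the first camera's frame and x_b in the second's.

    f: common focal length of the two cameras (in pixels)
    T: distance between the two cameras (baseline)
    """
    disparity = x_a - x_b
    if disparity == 0:
        # Zero disparity means the point is effectively at infinity.
        raise ValueError("zero disparity: depth is unbounded")
    return f * T / disparity
```

In practice the per-pixel disparity must first be found by matching each pixel of the first frame to its counterpart in the second frame (stereo correspondence), which the patent leaves to the implementation.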
Optionally, the processing module 1203 is specifically configured to:
when the target operation instruction includes a play control instruction, play the sub-video of the target layer;
when the target operation instruction includes a pause control instruction, pause playback of the sub-video of the target layer;
when the target operation instruction includes a blur operation instruction, perform blur processing on the sub-video of the target layer;
when the target operation instruction includes a filter operation instruction, perform filter processing on the sub-video of the target layer.
Optionally, the layering module 1202 is specifically configured to:
obtain a layer quantity and a layer distance;
layer the video according to the layer quantity, the layer distance and the depth of field information corresponding to each pixel in each image frame of the video, obtaining sub-videos of the layer quantity.
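Given a layer quantity N and a layer distance d, one natural reading is that pixels with depth in [k·d, (k+1)·d) go to layer k, with anything beyond the last boundary clipped into the final layer. A sketch of that assignment (this binning rule is an assumption; the text does not spell it out):

```python
def layer_of(depth_value, layer_count, layer_distance):
    """Map one depth-of-field value to a layer index in [0, layer_count - 1].
    Layer k covers depths [k * layer_distance, (k + 1) * layer_distance);
    depths past the last boundary are clipped into the final layer."""
    index = int(depth_value // layer_distance)
    return min(index, layer_count - 1)

def layer_frame(depth_map, layer_count, layer_distance):
    """Turn a per-pixel depth map into a per-pixel layer map."""
    return [[layer_of(d, layer_count, layer_distance) for d in row]
            for row in depth_map]
```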
Optionally, the layering module 1202 is specifically configured to:
divide each image frame of the video into image regions, obtaining at least two image regions corresponding to each image frame of the video;
determine, according to the depth of field information corresponding to the pixels of each image region among the at least two image regions corresponding to each image frame of the video, the layer to which each image region belongs;
layer the video according to the layer to which each image region among the at least two image regions corresponding to each image frame of the video belongs, obtaining at least two layers of sub-videos.
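Assigning a whole region to one layer can be done by, for example, reducing the region's pixel depths to a single representative value and binning that value; this is what keeps an object's pixels together in one layer even when their depths vary slightly. A sketch under that assumption, using the median so a few outlier pixels do not split an object:

```python
def region_layer(depths, layer_count, layer_distance):
    """Assign one layer to an entire image region from its pixels' depths.
    Uses the median depth as the region's representative value."""
    ordered = sorted(depths)
    median = ordered[len(ordered) // 2]
    return min(int(median // layer_distance), layer_count - 1)

def layer_regions(regions, layer_count, layer_distance):
    """regions: list of per-region lists of pixel depth values.
    Returns the layer index chosen for each region."""
    return [region_layer(r, layer_count, layer_distance) for r in regions]
```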
The mobile terminal 1200 provided by this embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Figures 1 to 2; to avoid repetition, details are not described again here.
In the mobile terminal 1200 of this embodiment of the present invention, the acquisition module 1201 obtains, if a video segmentation instruction is received, the depth of field information corresponding to each pixel in each image frame of the video; the layering module 1202 layers the video according to the depth of field information corresponding to each pixel in each image frame of the video, obtaining at least two layers of sub-videos; the processing module 1203 processes, if a target operation instruction is received for the sub-video of a target layer among the at least two layers of sub-videos, the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos. The video is layered according to depth of field information, and the sub-video of any layer among the at least two layers of sub-videos obtained after the video is divided can be controlled individually, which enriches video control modes and solves the prior-art problems that objects at different levels of the video picture cannot be controlled individually and that video processing modes are relatively limited.
Figure 14 is a schematic hardware structural diagram of a mobile terminal implementing the embodiments of the present invention. Referring to Figure 14, the mobile terminal 1400 includes, but is not limited to, a radio frequency unit 1401, a network module 1402, an audio output unit 1403, an input unit 1404, a sensor 1405, a display unit 1406, a user input unit 1407, an interface unit 1408, a memory 1409, a processor 1410 and a power supply 1411. Those skilled in the art will understand that the mobile terminal structure shown in Figure 14 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. In the embodiments of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer and the like.
The processor 1410 is configured to: if a video segmentation instruction is received, obtain the depth of field information corresponding to each pixel in each image frame of a video; layer the video according to the depth of field information corresponding to each pixel in each image frame of the video, obtaining at least two layers of sub-videos; and, if a target operation instruction is received for the sub-video of a target layer among the at least two layers of sub-videos, process the sub-video of the target layer according to the target operation instruction, where the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
In this embodiment of the present invention, the video is layered according to depth of field information, and the sub-video of any layer among the at least two layers of sub-videos obtained after the video is divided can be controlled individually, which enriches video control modes and solves the prior-art problems that objects at different levels of the video picture cannot be controlled individually and that video processing modes are relatively limited.
Optionally, before the depth of field information corresponding to each pixel in each image frame of the video is obtained upon receiving the layering instruction, the method further includes:
during video recording by the camera of the mobile terminal, computing the depth of field information corresponding to each pixel in each image frame collected by the camera of the mobile terminal;
storing, for each image frame, each pixel in the image frame, the depth of field information corresponding to each pixel, and the association between each pixel's depth of field information and the corresponding pixel.
Optionally, the mobile terminal includes at least a first camera and a second camera; the first camera and the second camera are arranged in parallel along a target direction on the same side of the mobile terminal, the focal lengths of the first camera and the second camera are identical, and the target direction is the width direction or the length direction of the mobile terminal. Computing, during video recording by the camera of the mobile terminal, the depth of field information corresponding to each pixel in each image frame collected by the camera of the mobile terminal includes:
during video recording by the first camera and the second camera, obtaining the image frames of the same object collected by the first camera and the second camera at the same moment, obtaining a first image frame and a second image frame;
determining the target coordinate Xa of the pixel corresponding to the object in the first image frame and the target coordinate Xb in the second image frame, where a target coordinate is a coordinate along the target direction;
computing the depth of field value of the pixel corresponding to the object using
D = (f × T) / (Xa − Xb)
where D denotes the depth of field value of the pixel corresponding to the object, f denotes the focal length of the first camera or the focal length of the second camera, and T denotes the distance between the first camera and the second camera.
Optionally, if a target operation instruction is received for the sub-video of the target layer among the at least two layers of sub-videos, processing the sub-video of the target layer according to the target operation instruction includes:
when the target operation instruction includes a play control instruction, playing the sub-video of the target layer;
when the target operation instruction includes a pause control instruction, pausing playback of the sub-video of the target layer;
when the target operation instruction includes a blur operation instruction, performing blur processing on the sub-video of the target layer;
when the target operation instruction includes a filter operation instruction, performing filter processing on the sub-video of the target layer.
Optionally, layering the video according to the depth of field information corresponding to each pixel in each image frame of the video, obtaining at least two layers of sub-videos, includes:
obtaining a layer quantity and a layer distance;
layering the video according to the layer quantity, the layer distance and the depth of field information corresponding to each pixel in each image frame of the video, obtaining sub-videos of the layer quantity.
Optionally, layering the video according to the depth of field information corresponding to each pixel in each image frame of the video, obtaining at least two layers of sub-videos, includes:
dividing each image frame of the video into image regions, obtaining at least two image regions corresponding to each image frame of the video;
determining, according to the depth of field information corresponding to the pixels of each image region among the at least two image regions corresponding to each image frame of the video, the layer to which each image region belongs;
layering the video according to the layer to which each image region among the at least two image regions corresponding to each image frame of the video belongs, obtaining at least two layers of sub-videos.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 1401 may be used to send and receive signals during a message transceiving or call process; specifically, after downlink data from a base station is received, it is handed to the processor 1410 for processing, and uplink data is sent to the base station. In general, the radio frequency unit 1401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 1401 may also communicate with a network and other devices via a wireless communication system.
The mobile terminal provides the user with wireless broadband Internet access through the network module 1402, for example, helping the user send and receive e-mail, browse web pages and access streaming media.
The audio output unit 1403 may convert audio data received by the radio frequency unit 1401 or the network module 1402, or stored in the memory 1409, into an audio signal and output it as sound. Moreover, the audio output unit 1403 may also provide audio output related to a specific function performed by the mobile terminal 1400 (for example, a call signal reception sound or a message reception sound). The audio output unit 1403 includes a loudspeaker, a buzzer, a receiver and the like.
The input unit 1404 is configured to receive an audio or video signal. The input unit 1404 may include a graphics processing unit (GPU) 14041 and a microphone 14042. The graphics processor 14041 processes image data of a static picture or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frame may be displayed on the display unit 1406. The image frame processed by the graphics processor 14041 may be stored in the memory 1409 (or another storage medium) or sent via the radio frequency unit 1401 or the network module 1402. The microphone 14042 may receive sound and process it into audio data; in telephone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1401 and output.
The mobile terminal 1400 further includes at least one sensor 1405, for example an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 14061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 14061 and/or the backlight when the mobile terminal 1400 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when static; it can be used to identify the posture of the mobile terminal (such as landscape/portrait switching, related games and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 1405 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like, which are not described in detail here.
The display unit 1406 is configured to display information input by the user or information provided to the user. The display unit 1406 may include a display panel 14061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display or the like.
The user input unit 1407 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile terminal. Specifically, the user input unit 1407 includes a touch panel 14071 and other input devices 14072. The touch panel 14071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, the user's operation on or near the touch panel 14071 with a finger, a stylus or any other suitable object or attachment). The touch panel 14071 may include a touch detection apparatus and a touch controller: the touch detection apparatus detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1410, and receives and executes the commands sent by the processor 1410. In addition, the touch panel 14071 may be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 14071, the user input unit 1407 may also include other input devices 14072. Specifically, the other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse and a joystick, which are not described in detail here.
Further, the touch panel 14071 may cover the display panel 14061. After detecting a touch operation on or near it, the touch panel 14071 transmits the operation to the processor 1410 to determine the type of the touch event, and the processor 1410 then provides a corresponding visual output on the display panel 14061 according to the type of the touch event. Although in Figure 14 the touch panel 14071 and the display panel 14061 implement the input and output functions of the mobile terminal as two independent components, in some embodiments the touch panel 14071 and the display panel 14061 may be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 1408 is an interface through which an external apparatus is connected to the mobile terminal 1400. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 1408 may be used to receive input from an external apparatus (for example, data information or electric power) and transmit the received input to one or more elements in the mobile terminal 1400, or may be used to transmit data between the mobile terminal 1400 and an external apparatus.
The memory 1409 may be used to store software programs and various data. The memory 1409 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for at least one function (such as a sound-playing function or an image-playing function) and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book) and the like. In addition, the memory 1409 may include a high-speed random access memory and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device or another solid-state storage device.
The processor 1410 is the control center of the mobile terminal; it connects all parts of the entire mobile terminal by using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 1409 and calling the data stored in the memory 1409, thereby performing overall monitoring of the mobile terminal. The processor 1410 may include one or more processing units; preferably, the processor 1410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1410.
The mobile terminal 1400 may further include a power supply 1411 (such as a battery) supplying power to each component. Preferably, the power supply 1411 may be logically connected to the processor 1410 through a power management system, thereby implementing functions such as charging management, discharging management and power consumption management through the power management system.
In addition, the mobile terminal 1400 includes some function modules not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 1410, a memory 1409 and a computer program stored in the memory 1409 and executable on the processor 1410, where the computer program, when executed by the processor 1410, implements each process of the foregoing video processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not described again here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements each process of the foregoing video processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not described again here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus including that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general hardware platform, or of course by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present invention, essentially or in the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are only illustrative rather than restrictive. Under the enlightenment of the present invention, those of ordinary skill in the art may also make many other forms without departing from the concept of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.
Claims (13)
1. A video processing method, applied to a mobile terminal, comprising:
if a video segmentation instruction is received, obtaining depth of field information corresponding to each pixel in each image frame of a video;
layering the video according to the depth of field information corresponding to each pixel in each image frame of the video, obtaining at least two layers of sub-videos;
if a target operation instruction is received for the sub-video of a target layer among the at least two layers of sub-videos, processing the sub-video of the target layer according to the target operation instruction, wherein the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
2. The method according to claim 1, wherein before the obtaining, upon receiving the layering instruction, of the depth of field information corresponding to each pixel in each image frame of the video, the method further comprises:
during video recording by the camera of the mobile terminal, computing the depth of field information corresponding to each pixel in each image frame collected by the camera of the mobile terminal;
storing, for each image frame, each pixel in the image frame, the depth of field information corresponding to each pixel, and the association between each pixel's depth of field information and the corresponding pixel.
3. The method according to claim 2, wherein the mobile terminal includes at least a first camera and a second camera, the first camera and the second camera are arranged in parallel along a target direction on the same side of the mobile terminal, the focal lengths of the first camera and the second camera are identical, and the target direction is the width direction or the length direction of the mobile terminal; and the computing, during video recording by the camera of the mobile terminal, of the depth of field information corresponding to each pixel in each image frame collected by the camera of the mobile terminal comprises:
during video recording by the first camera and the second camera, obtaining the image frames of the same object collected by the first camera and the second camera at the same moment, obtaining a first image frame and a second image frame;
determining the target coordinate Xa of the pixel corresponding to the object in the first image frame and the target coordinate Xb in the second image frame, wherein a target coordinate is a coordinate along the target direction;
computing the depth of field value of the pixel corresponding to the object using
D = (f × T) / (Xa − Xb)
wherein D denotes the depth of field value of the pixel corresponding to the object, f denotes the focal length of the first camera or the focal length of the second camera, and T denotes the distance between the first camera and the second camera.
4. The method according to any one of claims 1 to 3, wherein the processing, according to the target operation instruction received for the sub-video of the target layer among the at least two layers of sub-videos, of the sub-video of the target layer comprises:
when the target operation instruction includes a play control instruction, playing the sub-video of the target layer;
when the target operation instruction includes a pause control instruction, pausing playback of the sub-video of the target layer;
when the target operation instruction includes a blur operation instruction, performing blur processing on the sub-video of the target layer;
when the target operation instruction includes a filter operation instruction, performing filter processing on the sub-video of the target layer.
5. The method according to any one of claims 1 to 3, wherein layering the video according to the depth information corresponding to each pixel in each image frame of the video, so as to obtain at least two layers of sub-videos, comprises:
obtaining a layer quantity and a layer distance;
layering the video according to the layer quantity, the layer distance and the depth information corresponding to each pixel in each image frame of the video, so as to obtain sub-videos of the layer quantity.
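Layering by a layer quantity and a layer distance can be read as bucketing each pixel's depth into fixed-width depth intervals. A Python sketch under that assumption (the claim itself does not fix the exact bucketing rule):

```python
def layer_of_pixel(depth: float, num_layers: int, layer_distance: float) -> int:
    """Assign a pixel to a layer: depths in [k*d, (k+1)*d) go to layer k,
    and anything beyond the last boundary falls into the final layer."""
    k = int(depth // layer_distance)
    return min(k, num_layers - 1)


def layer_frame(depth_map, num_layers, layer_distance):
    """Layer index for every pixel of one image frame (2D list of depths)."""
    return [[layer_of_pixel(d, num_layers, layer_distance) for d in row]
            for row in depth_map]
```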
6. The method according to any one of claims 1 to 3, wherein layering the video according to the depth information corresponding to each pixel in each image frame of the video, so as to obtain at least two layers of sub-videos, comprises:
dividing each image frame of the video into image regions, so as to obtain at least two image regions corresponding to each image frame of the video;
determining, for each of the at least two image regions corresponding to each image frame of the video, the layer to which that image region belongs, according to the depth information corresponding to the pixels of that image region;
layering the video according to the layer to which each of the at least two image regions corresponding to each image frame of the video belongs, so as to obtain at least two layers of sub-videos.
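In the region-based variant, a whole image region is assigned to one layer from the depth information of its pixels. A sketch assuming the region's mean depth decides its layer (the claim leaves the aggregation rule open):

```python
def layer_of_region(region_depths, num_layers, layer_distance):
    """Assign an image region to a layer from its pixels' depths,
    here (by assumption) via the mean depth of the region."""
    mean_depth = sum(region_depths) / len(region_depths)
    return min(int(mean_depth // layer_distance), num_layers - 1)


def layer_regions(regions, num_layers, layer_distance):
    """regions: list of per-region pixel-depth lists for one frame;
    returns the layer index of each region."""
    return [layer_of_region(r, num_layers, layer_distance) for r in regions]
```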
7. A mobile terminal, comprising:
an obtaining module, configured to obtain, if a video layering instruction is received, the depth information corresponding to each pixel in each image frame of a video;
a layering module, configured to layer the video according to the depth information corresponding to each pixel in each image frame of the video, so as to obtain at least two layers of sub-videos;
a processing module, configured to process, if a target operation instruction directed at the sub-video of a target layer among the at least two layers of sub-videos is received, the sub-video of the target layer among the at least two layers of sub-videos according to the target operation instruction, wherein the sub-video of the target layer is the sub-video of any layer among the at least two layers of sub-videos.
8. The mobile terminal according to claim 7, further comprising:
a calculating module, configured to calculate, during recording of the video by a camera of the mobile terminal and before the depth information corresponding to each pixel in each image frame of the video is obtained upon receipt of the layering instruction, the depth information corresponding to each pixel in each image frame captured by the camera of the mobile terminal;
a storage module, configured to store each pixel in each image frame, the depth information corresponding to each pixel in each image frame, and the association between each pixel in each image frame and its corresponding depth information.
9. The mobile terminal according to claim 8, wherein the mobile terminal comprises at least a first camera and a second camera, the first camera and the second camera are arranged side by side along a target direction on the same face of the mobile terminal, the focal length of the first camera is identical to that of the second camera, and the target direction is the width direction or the length direction of the mobile terminal, the calculating module being specifically configured to:
during recording of the video by the first camera and the second camera, obtain, respectively, the image frames of the same object captured by the first camera and the second camera at the same moment, so as to obtain a first image frame and a second image frame;
determine, respectively, the target coordinate X_a, in the first image frame, of the pixel corresponding to the object, and its target coordinate X_b in the second image frame, wherein a target coordinate is the coordinate along the target direction;
calculate the depth value of the pixel corresponding to the object as D = f·T/(X_a − X_b);
wherein D denotes the depth value of the pixel corresponding to the object, f denotes the focal length of the first camera or of the second camera (the two focal lengths being equal), and T denotes the distance between the first camera and the second camera.
10. The mobile terminal according to any one of claims 7 to 9, wherein the processing module is specifically configured to:
when the target operation instruction comprises a play control instruction, play the sub-video of the target layer;
when the target operation instruction comprises a pause control instruction, pause playback of the sub-video of the target layer;
when the target operation instruction comprises a blurring instruction, perform blurring processing on the sub-video of the target layer;
when the target operation instruction comprises a filter instruction, perform filter processing on the sub-video of the target layer.
11. The mobile terminal according to any one of claims 7 to 9, wherein the layering module is specifically configured to:
obtain a layer quantity and a layer distance;
layer the video according to the layer quantity, the layer distance and the depth information corresponding to each pixel in each image frame of the video, so as to obtain sub-videos of the layer quantity.
12. The mobile terminal according to any one of claims 7 to 9, wherein the layering module is specifically configured to:
divide each image frame of the video into image regions, so as to obtain at least two image regions corresponding to each image frame of the video;
determine, for each of the at least two image regions corresponding to each image frame of the video, the layer to which that image region belongs, according to the depth information corresponding to the pixels of that image region;
layer the video according to the layer to which each of the at least two image regions corresponding to each image frame of the video belongs, so as to obtain at least two layers of sub-videos.
13. A mobile terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711405235.9A CN108063894B (en) | 2017-12-22 | 2017-12-22 | Video processing method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711405235.9A CN108063894B (en) | 2017-12-22 | 2017-12-22 | Video processing method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108063894A true CN108063894A (en) | 2018-05-22 |
CN108063894B CN108063894B (en) | 2020-05-12 |
Family
ID=62140011
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711405235.9A Active CN108063894B (en) | 2017-12-22 | 2017-12-22 | Video processing method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108063894B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6937244B2 (en) * | 2003-09-23 | 2005-08-30 | Zhou (Mike) Hong | Apparatus and method for reducing the memory traffic of a graphics rendering system |
US20060245653A1 (en) * | 2005-03-14 | 2006-11-02 | Theodore Camus | Method and apparatus for detecting edges of an object |
CN101925929A (en) * | 2008-01-22 | 2010-12-22 | 杰森·博 | Methods and apparatus for displaying image with enhanced depth effect |
CN102457756A (en) * | 2011-12-29 | 2012-05-16 | 广西大学 | Structure and method for 3D (Three Dimensional) video monitoring system capable of watching video in naked eye manner |
CN102509343A (en) * | 2011-09-30 | 2012-06-20 | 北京航空航天大学 | Binocular image and object contour-based virtual and actual sheltering treatment method |
US20130201184A1 (en) * | 2012-02-07 | 2013-08-08 | National Chung Cheng University | View synthesis method capable of depth mismatching checking and depth error compensation |
CN104333748A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Method, device and terminal for obtaining image main object |
US20160125611A1 (en) * | 2014-10-31 | 2016-05-05 | Canon Kabushiki Kaisha | Depth measurement apparatus, imaging apparatus and depth measurement method |
EP3021206A1 (en) * | 2013-07-10 | 2016-05-18 | Huawei Technologies Co., Ltd. | Method and device for refocusing multiple depth intervals, and electronic device |
CN105791803A (en) * | 2016-03-16 | 2016-07-20 | 深圳创维-Rgb电子有限公司 | Display method and system capable of converting two-dimensional image into multi-viewpoint image |
CN106540447A (en) * | 2016-11-04 | 2017-03-29 | 宇龙计算机通信科技(深圳)有限公司 | VR scenario building method and system, VR game building methods and system, VR equipment |
CN106681512A (en) * | 2016-12-30 | 2017-05-17 | 宇龙计算机通信科技(深圳)有限公司 | Virtual reality device and corresponding display method |
2017-12-22: CN CN201711405235.9A patent/CN108063894B/en active Active
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113066001A (en) * | 2021-02-26 | 2021-07-02 | 华为技术有限公司 | Image processing method and related equipment |
WO2022179581A1 (en) * | 2021-02-26 | 2022-09-01 | 华为技术有限公司 | Image processing method and related device |
CN114422713A (en) * | 2022-03-29 | 2022-04-29 | 湖南航天捷诚电子装备有限责任公司 | Image acquisition and intelligent interpretation processing device and method |
Also Published As
Publication number | Publication date |
---|---|
CN108063894B (en) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107872623B (en) | Image pickup method, mobile terminal and computer-readable storage medium | |
CN107592471A (en) | High-dynamic-range image pickup method and mobile terminal | |
CN107613131A (en) | Application program do-not-disturb method and mobile terminal | |
CN108108114A (en) | Thumbnail display control method and mobile terminal | |
CN108471498A (en) | Shooting preview method and terminal | |
CN107817939A (en) | Image processing method and mobile terminal | |
CN108495029A (en) | Photographing method and mobile terminal | |
CN109151546A (en) | Video processing method, terminal and computer-readable storage medium | |
CN107832110A (en) | Information processing method and mobile terminal | |
CN110351593A (en) | Information processing method, device, terminal device and computer-readable storage medium | |
CN107908705A (en) | Information pushing method, information pushing device and mobile terminal | |
CN108200269A (en) | Display screen control management method, terminal and computer-readable storage medium | |
CN107943390A (en) | Word cloning method and mobile terminal | |
CN108682040A (en) | Sketch image generation method, terminal and computer-readable storage medium | |
CN107749046A (en) | Image processing method and mobile terminal | |
CN109710165A (en) | Drawing processing method and mobile terminal | |
CN107886321A (en) | Payment method and mobile terminal | |
CN110213485A (en) | Image processing method and terminal | |
CN108462826A (en) | Auxiliary photographing method and mobile terminal | |
CN108257104A (en) | Image processing method and mobile terminal | |
CN109639987A (en) | Bracelet image pickup method, device and computer-readable storage medium | |
CN108600089A (en) | Facial expression image display method and terminal device | |
CN110442279A (en) | Message sending method and mobile terminal | |
CN110099218A (en) | Interaction control method and device in a shooting process, and computer-readable storage medium | |
CN109618218A (en) | Video processing method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||