CN109889893A - Video processing method, apparatus and device - Google Patents
Video processing method, apparatus and device
- Publication number
- CN109889893A CN109889893A CN201910304462.5A CN201910304462A CN109889893A CN 109889893 A CN109889893 A CN 109889893A CN 201910304462 A CN201910304462 A CN 201910304462A CN 109889893 A CN109889893 A CN 109889893A
- Authority
- CN
- China
- Prior art keywords
- image
- frame image
- video
- posture
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
Abstract
Embodiments of the present disclosure provide a video processing method, apparatus and device. The method includes: obtaining N consecutive frame images from a video, each frame image including a first object, N being an integer greater than 1; determining a posture type of the first object in each frame image, and determining a posture distribution of the first object according to the posture type of the first object in each frame image, the posture distribution indicating the changing rule of the posture of the first object; and adding a special effect to the video according to the posture distribution of the first object and the N frame images. This improves the accuracy of special effects added to the video.
Description
Technical field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a video processing method, apparatus and device.
Background technique
To improve the display effect of a video, special effects can be added to it; for example, a special effect may be a light flash or a preset sound added to the video.
Conventionally, special effects are added by having a person watch the video: when a preset action appears in the video, its playing time is associated with a special effect, and during playback the effect is displayed in the video when that time is reached. For example, if an observer finds an applause action at the 10th second of the video, a special effect is associated with the 10th second, and an applause-related effect is displayed when playback reaches the 10th second. However, because the effect is added according to the playing time of the video, there may be a deviation between the moment the preset action occurs in the video and the moment the corresponding effect is displayed, so the accuracy of the special effects added to the video is poor.
Summary of the invention
Embodiments of the present disclosure provide a video processing method, apparatus and device that improve the accuracy of special effects added to a video.
In a first aspect, an embodiment of the present disclosure provides a video processing method, comprising:
obtaining N consecutive frame images from a video, each frame image including a first object, N being an integer greater than 1;
determining a posture type of the first object in each frame image, and determining a posture distribution of the first object according to the posture type of the first object in each frame image, the posture distribution indicating the changing rule of the posture of the first object; and
adding a special effect to the video according to the posture distribution of the first object and the N frame images.
In a possible embodiment, adding a special effect to the video according to the posture distribution of the first object and the N frame images comprises:
judging whether the posture distribution of the first object meets a preset posture distribution; and
when the posture distribution of the first object meets the preset posture distribution, obtaining a target special effect corresponding to the preset posture distribution, and adding the target special effect to the video according to the N frame images.
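This matching step — checking the detected posture distribution against a preset distribution and looking up the corresponding target effect — can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the effect table and the posture names in it are assumptions.

```python
# Hypothetical table mapping a preset posture distribution to its target
# special effect; the distributions and effect names are illustrative only.
POSTURE_EFFECTS = {
    ("palms_open", "palms_together"): "applause_light",
}

def match_target_effect(posture_distribution, effect_table=POSTURE_EFFECTS):
    """Return the target special effect when the posture distribution
    meets a preset distribution, otherwise None."""
    return effect_table.get(tuple(posture_distribution))
```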
In a possible embodiment, determining the posture distribution of the first object according to the posture type of the first object in each frame image comprises:
grouping the N frame images according to their order in the video to obtain at least two groups of images, each group including M consecutive frame images, M being an integer greater than 1;
determining a posture type corresponding to each group of images according to the posture types of the first object in the images of the group; and
determining the posture distribution of the first object according to the posture types corresponding to the groups of images.
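The grouping above can be sketched as follows, assuming, as an illustration only (the disclosure does not fix the rule), that each group's posture type is the most common posture type among its M frames:

```python
from collections import Counter

def posture_distribution(frame_postures, m):
    """Split the per-frame posture types into consecutive groups of M
    frames, take the most common posture type in each group as that
    group's type, and return the sequence of group types as the posture
    distribution. Majority voting per group is an assumed rule."""
    groups = [frame_postures[i:i + m] for i in range(0, len(frame_postures), m)]
    return [Counter(g).most_common(1)[0][0] for g in groups]
```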
In a possible embodiment, for any first image in the N frame images, determining the posture type of the first object in the first image comprises:
detecting an object region in the first image, the object region including the part of the first image corresponding to the first object; and
processing the object region to determine the posture type of the first object in the first image.
In a possible embodiment, detecting the object region in the first image comprises:
inputting data representing the first image into a first recognition model to obtain the object region, wherein the first recognition model is obtained by learning from multiple groups of first samples, each group of first samples including a sample image and a sample object region in the sample image, the sample image including an image corresponding to the first object.
In a possible embodiment, processing the object region to determine the posture type of the first object in the first image comprises:
inputting data representing the object region into a second recognition model to obtain the posture type of the first object in the first image, wherein the second recognition model is obtained by learning from multiple groups of second samples, each group of second samples including a sample object region and a sample posture type identified in the sample object region, the sample object region including an image corresponding to the first object.
In a possible embodiment, the video is a video being shot; obtaining N consecutive frame images from the video comprises:
obtaining N to-be-processed frame images from the video, the N to-be-processed frame images being the last N frame images shot in the video; and
judging whether each of the N to-be-processed frame images includes the first object, and if so, determining the N to-be-processed frame images as the N frame images.
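For the video-being-shot case, the last-N-frames check can be sketched with a bounded buffer; `has_object` stands in for a per-frame detection of the first object and is an assumption of this sketch:

```python
from collections import deque

class LiveFrameBuffer:
    """Keep the last N shot frames and report the N consecutive frame
    ids once every buffered frame contains the first object."""
    def __init__(self, n):
        self.n = n
        self.buf = deque(maxlen=n)  # oldest frames drop out automatically

    def push(self, frame_id, has_object):
        """Record a newly shot frame; return the N frame ids when all of
        the last N frames include the first object, else None."""
        self.buf.append((frame_id, has_object))
        if len(self.buf) == self.n and all(flag for _, flag in self.buf):
            return [fid for fid, _ in self.buf]
        return None
```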
In a possible embodiment, adding the target special effect to the video according to the N frame images comprises:
adding the special effect in the N-th frame image of the N frame images.
In a possible embodiment, the video is a video whose shooting has been completed; obtaining N consecutive frame images from the video comprises:
performing a to-be-processed image selecting operation, the operation including: obtaining N consecutive to-be-processed frame images from the video, starting from a preset image in the video;
performing an N-frame-image determining operation, the operation including: judging whether each of the N to-be-processed frame images includes an image corresponding to the first object; if so, determining the N to-be-processed frame images as the N frame images, and if not, updating the preset image to a frame image after the preset image in the video; and
repeating the to-be-processed image selecting operation and the N-frame-image determining operation until the N frame images are determined.
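For the completed-video case, the select-and-check loop above can be sketched as a scan over per-frame detection flags (`has_object` is an assumed precomputed list, and frame indices are 0-based in this sketch):

```python
def find_n_consecutive(has_object, n, start=0):
    """Scan a completed video for the first run of N consecutive frames
    that all include the first object, starting from a preset frame."""
    i = start
    while i + n <= len(has_object):
        window = has_object[i:i + n]
        if all(window):
            return list(range(i, i + n))
        i += 1  # update the preset image to the next frame
    return None
```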
In a possible embodiment, adding the target special effect to the video according to the N frame images comprises:
adding the special effect in at least one of the N frame images.
In a possible embodiment, obtaining N consecutive frame images from the video comprises:
determining the special effect to be added to the video;
determining the first object corresponding to the special effect to be added to the video; and
determining the N frame images in the video according to the first object.
In a possible embodiment, before obtaining the N consecutive frame images from the video, the method further comprises:
determining that the target special effect has not been added in the N frame images.
In a second aspect, an embodiment of the present disclosure provides a video processing apparatus, including an obtaining module, a first determining module, a second determining module and an adding module, wherein:
the obtaining module is configured to obtain N consecutive frame images from a video, each frame image including a first object, N being an integer greater than 1;
the first determining module is configured to determine a posture type of the first object in each frame image;
the second determining module is configured to determine a posture distribution of the first object according to the posture type of the first object in each frame image, the posture distribution indicating the changing rule of the posture of the first object; and
the adding module is configured to add a special effect to the video according to the posture distribution of the first object and the N frame images.
In a possible embodiment, the adding module is specifically configured to:
judge whether the posture distribution of the first object meets a preset posture distribution; and
when the posture distribution of the first object meets the preset posture distribution, obtain a target special effect corresponding to the preset posture distribution, and add the target special effect to the video according to the N frame images.
In a possible embodiment, the second determining module is specifically configured to:
group the N frame images according to their order in the video to obtain at least two groups of images, each group including M consecutive frame images, M being an integer greater than 1;
determine a posture type corresponding to each group of images according to the posture types of the first object in the images of the group; and
obtain the posture distribution of the first object according to the posture types corresponding to the groups of images.
In a possible embodiment, for any first image in the N frame images, the first determining module is specifically configured to:
detect an object region in the first image, the object region including the part of the first image corresponding to the first object; and
process the object region to obtain the posture type of the first object in the first image.
In a possible embodiment, the first determining module is specifically configured to:
input data representing the first image into a first recognition model to obtain the object region, wherein the first recognition model is obtained by learning from multiple groups of first samples, each group of first samples including a sample image and a sample object region in the sample image, the sample image including an image corresponding to the first object.
In a possible embodiment, the first determining module is specifically configured to:
input data representing the object region into a second recognition model to obtain the posture type of the first object in the first image, wherein the second recognition model is obtained by learning from multiple groups of second samples, each group of second samples including a sample object region and a sample posture type identified in the sample object region, the sample object region including an image corresponding to the first object.
In a kind of possible embodiment, the video is the video shot;The acquisition module is specifically used for:
N frame image to be processed is obtained in the video, includes having shot in the video in the N frame image to be processed
Last N frame image;
Judge in the N frame image to be processed whether to include first object in every frame image to be processed, if so,
The N frame image to be processed is determined as the N frame image.
In a possible embodiment, the adding module is specifically configured to:
add the special effect in the N-th frame image of the N frame images.
In a possible embodiment, the video is a video whose shooting has been completed; the obtaining module is specifically configured to:
perform a to-be-processed image selecting operation, the operation including: obtaining N consecutive to-be-processed frame images from the video, starting from a preset image in the video;
perform an N-frame-image determining operation, the operation including: judging whether each of the N to-be-processed frame images includes an image corresponding to the first object; if so, determining the N to-be-processed frame images as the N frame images, and if not, updating the preset image to a frame image after the preset image in the video; and
repeat the to-be-processed image selecting operation and the N-frame-image determining operation until the N frame images are determined.
In a possible embodiment, the adding module is specifically configured to:
add the special effect in at least one of the N frame images.
In a possible embodiment, the obtaining module is specifically configured to:
determine the special effect to be added to the video;
determine the first object corresponding to the special effect to be added to the video; and
determine the N frame images in the video according to the first object.
In a possible embodiment, the apparatus further includes a third determining module, wherein the third determining module is configured to determine, before the obtaining module obtains the N consecutive frame images from the video, that the target special effect has not been added in the N frame images.
In a third aspect, an embodiment of the present disclosure provides an electronic device, comprising a processor coupled with a memory, wherein:
the memory is configured to store a computer program; and
the processor is configured to execute the computer program stored in the memory, so that the electronic device performs the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a readable storage medium, including a program or instructions which, when run on a computer, cause the method of any one of the first aspect to be performed.
In the video processing method, apparatus and device provided by the embodiments of the present disclosure, when a special effect corresponding to a first object needs to be added to a video, N consecutive frame images including the first object are determined in the video, the posture type of the first object in each frame image is obtained, the posture distribution of the first object is obtained according to those posture types, and a special effect is added to the video according to the posture distribution of the first object and the N frame images. Because the posture distribution of the first object is determined with the video frame as the unit, whether a preset action occurs in the video, and hence whether a special effect should be added, can be determined accurately from the posture distribution. When it is determined that a special effect should be added, the effect is added according to the N consecutive frame images, that is, with the video frame as the granularity, which improves the accuracy of adding special effects.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is an architecture diagram of video processing provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure;
Fig. 3A is a schematic diagram of video frames provided by an embodiment of the present disclosure;
Fig. 3B is a schematic diagram of other video frames provided by an embodiment of the present disclosure;
Fig. 4A is a schematic diagram of other video frames provided by an embodiment of the present disclosure;
Fig. 4B is a schematic diagram of other video frames provided by an embodiment of the present disclosure;
Fig. 5 is a schematic flowchart of another video processing method provided by an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of a video processing procedure provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of another video processing apparatus provided by an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Specific embodiment
To make the purposes, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the described embodiments without creative work shall fall within the protection scope of the present disclosure.
Fig. 1 is an architecture diagram of video processing provided by an embodiment of the present disclosure. When adding a special effect to a video, it is usually judged whether a preset action (for example, applauding or shaking the head) occurs in the video; after it is determined that the preset action occurs, the special effect corresponding to the preset action is added to the video. Referring to Fig. 1, when the special effect corresponding to a preset action needs to be added to the video (the preset action corresponds to a first object, that is, the preset action is performed by the first object, which may be a hand, a leg, a head, a vehicle, etc.), image extraction can be performed on the video to obtain N consecutive images (image 1, image 2, ..., image N). Recognition processing can be performed on each extracted image to obtain the posture type of the first object in that image, and the posture distribution of the first object is obtained according to the posture types of the first object in the frame images. When the posture distribution of the first object meets a preset distribution, it can be determined that the preset action occurs in the video, and the special effect corresponding to the preset action is then added to the video.
In the above process, the posture distribution of the first object in the video is determined with the video frame as the unit; according to this posture distribution, whether the preset action occurs in the video, and hence whether a special effect should be added, can be determined accurately. When it is determined that a special effect should be added, the effect is added according to the N consecutive frame images, that is, with the video frame as the granularity, which improves the accuracy of adding special effects.
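The Fig. 1 pipeline can be condensed into the following sketch, where `classify_posture` stands in for the recognition processing and the per-frame comparison with the preset distribution is a simplification of the distribution matching (both are assumptions of this sketch):

```python
def add_effects(frames, classify_posture, preset_distribution, effect):
    """Classify the first object's posture in each extracted frame, form
    the posture distribution, and report the (frame, effect) pairs where
    the effect should be added when the distribution meets the preset one."""
    distribution = [classify_posture(f) for f in frames]
    if distribution == preset_distribution:
        return [(f, effect) for f in frames]
    return []
```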
The technical solutions of the present application are described in detail below through specific embodiments. It should be noted that the following specific embodiments can be combined with each other, and the same or similar content is not repeated in different embodiments.
Fig. 2 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure. Referring to Fig. 2, the method may include:
S201: obtain N consecutive frame images from a video.
The executing subject of the embodiment of the present disclosure may be an electronic device, or a video processing apparatus arranged in the electronic device. Optionally, the video processing apparatus may be implemented by software, or by a combination of software and hardware. Optionally, the electronic device may be a device with a processing function, such as a mobile phone, a computer or a video camera.
Each frame image includes the first object, and N is an integer greater than 1. Each frame image contains complete video content; for example, when the video has been compressed, the N frame images are key frames of the video.
Optionally, the first object may be a hand, a leg, a head, a vehicle, an aircraft, etc.
Optionally, the special effect to be added to the video can be determined first, then the first object corresponding to that special effect is determined, and the N frame images are determined in the video according to the first object. For example, when determining the first object corresponding to the special effect to be added, the preset action corresponding to the special effect can be determined first, and the object performing the preset action is determined as the first object.
For example, suppose the special effect to be added to the video is a light effect whose corresponding preset action is an applause action; the object performing the applause action is a hand, so it can be determined that the first object is a hand, and correspondingly, each of the N consecutive images determined in the video includes a hand.
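The chain just described — from the special effect to be added, to its preset action, to the first object performing that action — can be sketched with two lookup tables; the table contents here are illustrative assumptions:

```python
# Hypothetical mappings: effect -> preset action -> performing object.
EFFECT_TO_ACTION = {"light_effect": "applause"}
ACTION_TO_OBJECT = {"applause": "hand"}

def first_object_for_effect(effect):
    """Resolve the first object for a special effect to be added."""
    return ACTION_TO_OBJECT[EFFECT_TO_ACTION[effect]]
```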
The process of determining the N consecutive frame images differs with the application scenario of the video processing. For example, at least the following two application scenarios are possible:
A first possible application scenario: the video is a video being shot, that is, special effects are added to the video while it is being shot.
In this scenario, the N consecutive frame images can be obtained through the following feasible implementation: obtain N to-be-processed frame images from the video, the N to-be-processed frame images being the last N frame images shot in the video, and judge whether each of the N to-be-processed frame images includes the first object. If so, the N to-be-processed frame images are determined as the N frame images. If not, the N to-be-processed frame images are not determined as the N frame images; after a new image is obtained during shooting, the N to-be-processed frame images are updated and the above process is repeated until the N frame images are determined.
The process of determining the N consecutive frame images in this application scenario is described in detail below with reference to Fig. 3A and Fig. 3B.
Fig. 3 A is a kind of schematic diagram for video frame that the embodiment of the present disclosure provides.Assuming that the first object is hand, N 6.It please join
See Fig. 3 A, it is assumed that the last frame image of current shooting is the 80th frame image, and the 75th frame image includes into the 80th frame image
Hand includes hand in obtained last 6 frame image (the 75th frame image to the 80th frame image) due to shooting, then can be by the 75th frame
Image to the 80th frame image is determined as continuous 6 frame image.
Fig. 3 B is the schematic diagram for another video frame that the embodiment of the present disclosure provides.Assuming that the first object is hand, N 6.Please
Referring to Fig. 3 B, at the T1 moment, the last frame image of shooting is the 80th frame image, wherein is wrapped in 75-76,78-80 frame image
Include hand, do not include hand in the 77th frame image, due to exist not in the 6 frame images that finally shoot include hand image, then after
Continuous to be shot, until at the T2 moment, the last frame image shot is the 83rd frame image, and the 78th frame image is to the
It include hand in 83 frame images, then by the 78th frame image, into the 83rd frame, image is determined as continuous 6 frame image.
A second possible application scenario: the video is a video whose shooting has been completed, that is, special effects are added to a video that has already been shot.
In this scenario, the N consecutive frame images can be obtained through the following feasible implementation: perform a to-be-processed image selecting operation, which includes obtaining N consecutive to-be-processed frame images from the video, starting from a preset image in the video; perform an N-frame-image determining operation, which includes judging whether each of the N to-be-processed frame images includes an image corresponding to the first object, and if so, determining the N to-be-processed frame images as the N frame images, or if not, updating the preset image to a frame image after the preset image in the video; and repeat the to-be-processed image selecting operation and the N-frame-image determining operation until the N frame images are determined.
Optionally, the preset image may be updated to the frame image immediately after the preset image in the video. Alternatively, the preset image may be updated to the frame image immediately after a second image, where the second image is the last image in the N to-be-processed frame images that does not include the first object.
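The second update rule — jumping the preset image to just past the last to-be-processed frame that lacks the first object, rather than advancing one frame at a time — can be sketched as follows (`has_object` is an assumed list of per-frame detection flags, 0-based):

```python
def find_n_consecutive_skip(has_object, n, start=0):
    """Scan for the first run of N consecutive frames that all include
    the first object; on failure, jump the preset image to just after
    the last frame in the window without the object (the second image)."""
    i = start
    while i + n <= len(has_object):
        window = has_object[i:i + n]
        if all(window):
            return list(range(i, i + n))
        # position (within the window) of the last frame missing the object
        last_missing = max(k for k, flag in enumerate(window) if not flag)
        i += last_missing + 1
    return None
```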
The process of determining the N consecutive frame images in this application scenario is described in detail below with reference to Fig. 4A and Fig. 4B.
Fig. 4 A is the schematic diagram for another video frame that the embodiment of the present disclosure provides.Assuming that the first object be hand, N 6, in advance
If image is first frame image.Fig. 4 A is referred to, when initial, pre-set image is first frame image, accordingly, it is determined that N frame is to be processed
Image is the 1st frame image to the 6th frame image.Since 3rd frame image of the 1st frame image into the 6th frame image does not include hand, then will
Pre-set image is updated to the second frame image, correspondingly, N frame image update to be processed is the 2nd frame image to the 7th frame image.Due to
2nd frame image does not include hand in the 3rd frame image into the 7th frame image, then pre-set image is updated to third frame image, accordingly
, N frame image update to be processed is image of the 3rd frame image into the 8th frame image.Since the 3rd frame image is into the 8th frame image
The 3rd frame image in do not include hand, then pre-set image is updated to the 4th frame image, correspondingly, N frame image update to be processed is
4th frame image to the 9th frame image, since the 4th frame image includes hand into the 9th frame image, then by the 4th frame image to the 9th frame
Image is determined as continuous 6 frame image.
Fig. 4B is a schematic diagram of another video frame provided by an embodiment of the present disclosure. Assume that the first object is a hand, N is 6, and the preset image is the first frame image. Referring to Fig. 4B, initially the preset image is the first frame image, so the N to-be-processed frame images are determined to be the 1st to 6th frame images. Since the 3rd frame image among the 1st to 6th frame images does not include a hand, the second image is determined among the 1st to 6th frame images: because the 3rd frame image does not include a hand, the 3rd frame image is determined to be the second image. The preset image is therefore updated to the 4th frame image (the frame image following the second image), and the N to-be-processed frame images are accordingly updated to the 4th to 9th frame images. Since every image among the 4th to 9th frame images includes a hand, the 4th to 9th frame images are determined to be the 6 consecutive frame images.
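The selection procedure of Fig. 4B may be expressed as the following non-limiting sketch, in which the frame representation and the `contains_object` detector are illustrative placeholders: when the window of N frames fails, the search resumes just after the last frame that lacks the first object (the "second image" above).

```python
def find_n_consecutive(frames, n, contains_object):
    """Return the start index of the first run of n consecutive frames
    that all contain the first object, or None if no such run exists.
    On failure, skip past the last offending frame (Fig. 4B strategy)."""
    start = 0
    while start + n <= len(frames):
        window = frames[start:start + n]
        # Indices (within the window) of frames that do NOT contain the object.
        missing = [i for i, f in enumerate(window) if not contains_object(f)]
        if not missing:
            return start  # all n frames contain the object
        start += missing[-1] + 1  # resume just after the "second image"
    return None
```

With frames 1-9 of Fig. 4B encoded as booleans (the 3rd frame lacking the hand), the function returns index 3, i.e. the 4th to 9th frame images.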
Optionally, in order to avoid adding duplicate special effects to the same video frames, the N frame images obtained by the determination are images to which the target special effect (the special effect to be added to the video) has not yet been added.
S202, determining the posture type of the first object in each frame image.
Optionally, multiple posture types of the first object may be preset. For example, when the first object is a hand, the posture types of the hand may include: palms open and facing each other, palms together, fist clenched. When the first object is a head, the posture types of the head may include: head raised, head bowed, head tilted to the left, head tilted to the right, and so on.
The process of obtaining the posture type of the first object is the same for each frame image; the following therefore describes the process of obtaining the posture type of the first object in a first image.
For an arbitrary first image in the N frame images, an object region may be detected in the first image, where the object region includes the part of the first image corresponding to the first object, and the object region is processed to obtain the posture type of the first object in the first image.
Optionally, the object region may be detected in the first image through the following feasible implementation: data representing the first image is input to a first identification model to obtain the object region; the first identification model is obtained by learning from multiple groups of first samples, each group of first samples includes a sample image and a sample object region in the sample image, and the sample image includes an image corresponding to the first object.
The data representing the first image may be the first image itself, a grayscale image of the first image, etc. The object region may be a rectangular region in the first image that includes the first object.
Since the first identification model is obtained by learning from a large number of first samples, the object region can be accurately detected in the first image through the first identification model.
The object region may be determined according to the output of the first identification model. The output of the first identification model may be the image corresponding to the object region in the first image, or the positions (such as coordinates) of at least two vertices of the object region in the first image. When the output of the first identification model is two vertices of the object region, the two vertices are two diagonally opposite vertices.
Optionally, the posture type of the first object in the first image may be obtained through the following feasible implementation: data representing the object region is input to a second identification model to obtain the posture type of the first object in the first image; the second identification model is obtained by learning from multiple groups of second samples, each group of second samples includes a sample object region and a sample posture type identified in the sample object region, and the sample object region includes an image corresponding to the first object.
The data representing the object region may be the image corresponding to the object region, or the positions (such as coordinates) of at least two vertices of the object region in the first image. When the data representing the object region is two vertices of the object region, the two vertices are two diagonally opposite vertices.
The posture type of the first object in the first image may be determined according to the output of the second identification model. The output of the second identification model may be a character (for example, a digit, a letter, etc.) representing the posture type.
Since the second identification model is obtained by learning from a large number of second samples, the posture type of the first object in the object region can be accurately determined through the second identification model.
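The two-stage per-frame inference described above (first identification model for the object region, second identification model for the posture type) can be sketched as follows; both models are injected as callables and are stubs here, since the source does not fix their architectures:

```python
def classify_postures(frames, detect_region, classify_region):
    """Run the two-stage inference on every frame: detect_region stands
    in for the first identification model (returns the object region),
    and classify_region stands in for the second identification model
    (maps an object region to a posture-type label)."""
    return [classify_region(detect_region(frame)) for frame in frames]
```

The per-frame labels produced here feed directly into step S203, which turns them into the posture distribution.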
S203, determining the posture distribution of the first object according to the posture type of the first object in each frame image.
The posture distribution of the first object is used to indicate the changing rule of the posture of the first object.
For example, assume that the first object is a hand, N is 6, and the posture types of the first object in the 6 frame images are, in order: palms open, palms open, palms open, palms together, palms together, palms together. The posture distribution of the first object can thus be obtained as: palms open to palms together.
Optionally, in order to improve the accuracy of the obtained posture distribution of the first object, the posture distribution of the first object may be obtained through the following feasible implementation: the N frame images are grouped according to their order in the video to obtain at least two groups of images, where each group of images includes M consecutive frame images and M is an integer greater than 1; the posture type corresponding to each group of images is determined according to the posture types of the first object in the images of that group; and the posture distribution of the first object is obtained according to the posture types corresponding to the groups of images.
Optionally, for any one group of images, if the number of images in the group whose posture type is a first posture type is greater than or equal to a first threshold, the posture type corresponding to the group is determined to be the first posture type.
For example, assume that M is 3 and the first threshold is 2. When the posture type corresponding to 2 or 3 images in a group is the palms-together type, the posture type corresponding to the group is determined to be the palms-together type.
For example, assume that N is 9, the 9 frame images are denoted as image 1, image 2, ..., image 9, and M is 3. The grouping of the 9 frame images and the posture type determined for each image group can be as shown in Table 1:
Table 1
It should be noted that Table 1 merely illustrates, in an exemplary manner, the grouping of the images and the posture type corresponding to each group.
In the above process, even if the posture type of the first object in an individual image is identified incorrectly, the correct posture distribution of the first object can still be obtained, so the error resilience of the video processing is higher.
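The grouping-and-threshold procedure above can be sketched as follows (a minimal illustration, assuming the per-frame labels from S202; posture-type names are placeholders): each run of M frames is assigned the type held by at least `threshold` of its frames, and repeated group labels are merged into the posture distribution.

```python
from collections import Counter

def posture_distribution(postures, m, threshold):
    """Group per-frame posture types into runs of m consecutive frames,
    assign each group the type shared by at least `threshold` of its
    frames (None when no type reaches the threshold), and merge
    repeated group labels into the posture distribution."""
    groups = [postures[i:i + m] for i in range(0, len(postures), m)]
    labels = []
    for group in groups:
        label, count = Counter(group).most_common(1)[0]
        labels.append(label if count >= threshold else None)
    distribution = []
    for label in labels:
        if not distribution or distribution[-1] != label:
            distribution.append(label)
    return distribution
```

Note how a single misclassified frame inside a group is outvoted by its neighbors, which is exactly the error-resilience property claimed above.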
S204, adding a special effect to the video according to the posture distribution of the first object and the N frame images.
Optionally, it may be judged whether the posture distribution of the first object matches a preset posture distribution. When the posture distribution of the first object matches the preset posture distribution, the target special effect corresponding to the preset posture distribution is obtained, and the target special effect is added to the video according to the N frame images.
Optionally, when the application scenarios of the video processing differ, the process of adding the target special effect to the video according to the N frame images also differs.
A possible application scenario: the video is a video being shot, that is, the special effect is added to the video while the video is being captured.
In this application scenario, the special effect may be added to the Nth frame image of the N frame images. Alternatively, the special effect is added to the video at the playing time corresponding to the Nth frame image, and the display duration of the special effect may be a preset duration.
Another possible application scenario: the video is a video whose shooting has been completed, that is, the special effect is added to a video that has already been shot.
In this application scenario, the special effect may be added to at least one frame image of the N frame images. For example, the special effect may be added to all of the N frame images, that is, the special effect is added to the video between the playing times corresponding to the N frame images. Alternatively, the special effect is added to some of the N frame images, that is, the special effect is added to the video between the playing times corresponding to those images.
In the video processing method provided by the embodiments of the present disclosure, when a special effect corresponding to a first object needs to be added to a video, N consecutive frame images including the first object are determined in the video, the posture type of the first object in each frame image is obtained, the posture distribution of the first object is obtained according to the posture types of the first object in the frame images, and the special effect is added to the video according to the posture distribution of the first object and the N frame images. In the above process, the posture distribution of the first object in the video is determined on a per-video-frame basis; according to the posture distribution of the first object in the video, whether a preset action occurs in the video can be accurately determined, and thus whether to add the special effect to the video can be accurately determined. When it is determined that the special effect should be added to the video, the special effect is added according to the N consecutive frame images, that is, the special effect can be added to the video at video-frame granularity, which improves the accuracy of adding the special effect.
On the basis of any one of the above embodiments, the video processing method is described in detail below through the embodiment shown in Fig. 5.
Fig. 5 is a flow diagram of another video processing method provided by an embodiment of the present disclosure. Referring to Fig. 5, the method may include:
S501, obtaining N consecutive frame images in a video.
It should be noted that the implementation process of S501 may refer to the implementation process of S201 and is not repeated here.
S502, grouping the N frame images according to their order in the video to obtain at least two groups of images.
Each group of images includes M consecutive frame images, where M is an integer greater than 1.
Starting from the first frame image of the N frame images, every M consecutive frame images are successively divided into one group, obtaining at least two groups of images. For example, the 1st to Mth frame images of the N frame images are divided into one group, the (M+1)th to 2Mth frame images are divided into another group, and so on, until all N frame images are grouped.
Optionally, N is an integer multiple of M.
S503, determining the posture type of the first object in each image of each group.
It should be noted that the implementation process of S503 may refer to the implementation process of S202 and is not repeated here.
S504, determining the posture type corresponding to each group of images according to the posture types of the first object in the images of that group.
For any one group of images, if the number of images in the group whose posture type is a first posture type is greater than or equal to a first threshold, the posture type corresponding to the group is determined to be the first posture type.
For example, assume that M is 3 and the first threshold is 2. When the posture type corresponding to 2 or 3 images in a group is the palms-together type, the posture type corresponding to the group is determined to be the palms-together type.
S505, determining the posture distribution of the first object according to the posture type corresponding to each group of images.
For example, assume that the first object is a hand and 2 groups of images are obtained in S502. If the posture type corresponding to the first group of images is palms open and the posture type corresponding to the second group of images is palms together, the posture distribution of the first object is: palms open to palms together.
S506, judging whether the posture distribution of the first object matches a preset posture distribution.
If so, S507-S508 are executed.
If not, S501 is executed.
Optionally, if the changing rule of the posture of the first object indicated by the posture distribution of the first object is the same as the changing rule of the posture indicated by the preset posture distribution, it is determined that the posture distribution of the first object matches the preset posture distribution.
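The match-and-lookup of S506-S507 can be sketched as a table of preset posture distributions keyed to effects; the distribution keys and effect name below are illustrative placeholders, not values fixed by the source:

```python
# Hypothetical preset table: the distribution "palms open -> palms
# together" triggers a flower-scattering effect.
PRESET_EFFECTS = {
    ("palms_open", "palms_together"): "scatter_flowers",
}

def match_effect(distribution):
    """Return the target special effect whose preset posture
    distribution equals the observed one (S506-S507), or None,
    in which case the method returns to S501."""
    return PRESET_EFFECTS.get(tuple(distribution))
```

Comparing whole distributions, rather than individual frames, is what lets a single misclassified frame leave the lookup result unchanged.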
S507, obtaining the target special effect corresponding to the preset posture distribution.
Optionally, correspondences between posture distributions and special effects may be preset; accordingly, the target special effect may be determined according to the preset posture distribution and the correspondences.
S508, adding the target special effect to the video according to the N frame images.
It should be noted that the implementation process of S508 may refer to the implementation process of S204 and is not repeated here.
In the embodiment shown in Fig. 5, the posture distribution of the first object in the video is determined on a per-video-frame basis. According to the posture distribution of the first object in the video, whether a preset action occurs in the video can be accurately determined, and thus whether to add the special effect to the video can be accurately determined. When it is determined that the special effect should be added to the video, the special effect is added according to the N consecutive frame images, that is, the special effect can be added to the video at video-frame granularity, which improves the accuracy of adding the special effect. Further, even if the posture type of the first object in an individual image is identified incorrectly, the correct posture distribution of the first object can still be obtained, so the error resilience of the video processing is higher.
On the basis of any one of the above embodiments, the video processing method shown in the above method embodiments is described in detail below through a specific example in conjunction with Fig. 6.
Fig. 6 is a schematic diagram of a video processing procedure provided by an embodiment of the present application. Assume that the first object is a hand, N is 6, and the special effect to be added is a flower-scattering effect. Referring to Fig. 6, assume that the 6 determined images are P1, P2, ..., P6.
Referring to Fig. 6, P1, P2 and P3 are divided into one group of images, and P4, P5 and P6 are divided into another group of images. The data representing each of the 6 images is input to the first identification model to obtain the object region in each image, where the object region includes the hand. The object regions of the 6 images are each input to the second identification model to obtain the posture type of the hand. For example, the determined posture types of the hand are, in order: palms open, palms open, palms together, palms together, palms together, palms together. It can thus be determined that the posture type corresponding to the first group of images is palms open and the posture type corresponding to the second group of images is palms together. Therefore, the posture distribution of the first object (the hand) can be determined as: palms open to palms together. It is determined that this posture distribution matches the preset posture distribution, so the flower-scattering effect is added to the 6 images. Of course, the flower-scattering effect may also be added to only some of the 6 images.
In the embodiment shown in Fig. 6, the posture distribution of the first object in the video is determined on a per-video-frame basis. According to the posture distribution of the first object in the video, whether a preset action occurs in the video can be accurately determined, and thus whether to add the special effect to the video can be accurately determined. When it is determined that the special effect should be added to the video, the special effect is added according to the N consecutive frame images, that is, the special effect can be added to the video at video-frame granularity, which improves the accuracy of adding the special effect. Further, even if the posture type of the first object in an individual image is identified incorrectly, the correct posture distribution of the first object can still be obtained, so the error resilience of the video processing is higher.
Fig. 7 is a structural schematic diagram of a video processing apparatus provided by an embodiment of the present disclosure. Referring to Fig. 7, the video processing apparatus 10 may include an acquisition module 11, a first determining module 12, a second determining module 13 and an adding module 14, wherein:
the acquisition module 11 is configured to obtain N consecutive frame images in a video, where each frame image includes a first object and N is an integer greater than 1;
the first determining module 12 is configured to determine the posture type of the first object in each frame image;
the second determining module 13 is configured to determine the posture distribution of the first object according to the posture type of the first object in each frame image, where the posture distribution is used to indicate the changing rule of the posture of the first object;
the adding module 14 is configured to add a special effect to the video according to the posture distribution of the first object and the N frame images.
The video processing apparatus provided by the embodiments of the present disclosure can execute the technical solutions shown in the above method embodiments; its implementation principles and beneficial effects are similar and are not repeated here.
In a possible embodiment, the adding module 14 is specifically configured to:
judge whether the posture distribution of the first object matches a preset posture distribution; and
when the posture distribution of the first object matches the preset posture distribution, obtain the target special effect corresponding to the preset posture distribution, and add the target special effect to the video according to the N frame images.
In a possible embodiment, the second determining module 13 is specifically configured to:
group the N frame images according to their order in the video to obtain at least two groups of images, where each group of images includes M consecutive frame images and M is an integer greater than 1;
determine the posture type corresponding to each group of images according to the posture types of the first object in the images of that group; and
obtain the posture distribution of the first object according to the posture types corresponding to the groups of images.
In a possible embodiment, for an arbitrary first image in the N frame images, the first determining module 12 is specifically configured to:
detect an object region in the first image, where the object region includes the part of the first image corresponding to the first object; and
process the object region to obtain the posture type of the first object in the first image.
In a possible embodiment, the first determining module 12 is specifically configured to:
input data representing the first image to a first identification model to obtain the object region, where the first identification model is obtained by learning from multiple groups of first samples, each group of first samples includes a sample image and a sample object region in the sample image, and the sample image includes an image corresponding to the first object.
In a possible embodiment, the first determining module 12 is specifically configured to:
input data representing the object region to a second identification model to obtain the posture type of the first object in the first image, where the second identification model is obtained by learning from multiple groups of second samples, each group of second samples includes a sample object region and a sample posture type identified in the sample object region, and the sample object region includes an image corresponding to the first object.
In a possible embodiment, the video is a video being shot; the acquisition module 11 is specifically configured to:
obtain N to-be-processed frame images in the video, where the N to-be-processed frame images include the last N frame images that have been shot in the video; and
judge whether each of the N to-be-processed frame images includes the first object, and if so, determine the N to-be-processed frame images to be the N frame images.
In a possible embodiment, the adding module 14 is specifically configured to:
add the special effect to the Nth frame image of the N frame images.
In a possible embodiment, the video is a video whose shooting has been completed; the acquisition module 11 is specifically configured to:
execute a to-be-processed image selection operation, where the to-be-processed image selection operation includes: obtaining N consecutive to-be-processed frame images in the video starting from a preset image of the video;
execute an N-frame-image determination operation, where the N-frame-image determination operation includes: judging whether each of the N to-be-processed frame images includes an image corresponding to the first object, and if so, determining the N to-be-processed frame images to be the N frame images, and if not, updating the preset image to a frame image after the preset image in the video; and
repeat the to-be-processed image selection operation and the N-frame-image determination operation until the N frame images are determined.
In a possible embodiment, the adding module 14 is specifically configured to:
add the special effect to at least one frame image of the N frame images.
In a possible embodiment, the acquisition module 11 is specifically configured to:
determine the special effect to be added to the video;
determine the first object corresponding to the special effect to be added to the video; and
determine the N frame images in the video according to the first object.
Fig. 8 is a structural schematic diagram of another video processing apparatus provided by an embodiment of the present disclosure. On the basis of the embodiment shown in Fig. 7, referring to Fig. 8, the video processing apparatus 10 further includes a third determining module 15, wherein
the third determining module 15 is configured to determine, before the acquisition module 11 obtains the N consecutive frame images in the video, that the target special effect has not been added to the N frame images.
The video processing apparatus provided by the embodiments of the present disclosure can execute the technical solutions shown in the above method embodiments; its implementation principles and beneficial effects are similar and are not repeated here.
Fig. 9 is a structural schematic diagram of an electronic device provided by an embodiment of the present disclosure. The electronic device 20 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable media players (PMPs) and vehicle-mounted terminals (for example, vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 9 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
Referring to Fig. 9, the electronic device 20 may include a processing device (such as a central processing unit, a graphics processor, etc.) 21, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 22 or a program loaded from a storage device 28 into a random access memory (RAM) 23. Various programs and data required for the operation of the electronic device 20 are also stored in the RAM 23. The processing device 21, the ROM 22 and the RAM 23 are connected to each other through a bus 24. An input/output (I/O) interface 25 is also connected to the bus 24.
In general, the following devices may be connected to the I/O interface 25: input devices 26 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 27 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 28 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 29. The communication device 29 may allow the electronic device 20 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 9 shows the electronic device 20 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, where the computer program includes program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 29, installed from the storage device 28, or installed from the ROM 22. When the computer program is executed by the processing device 21, the above-described functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device is caused to execute the methods shown in the above embodiments.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof; the above programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing a specified logic function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be realized by a dedicated hardware-based system that executes the specified functions or operations, or may be realized by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the embodiments of the present disclosure. Although the embodiments of the present disclosure have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the solutions of the embodiments of the present disclosure.
Claims (15)
1. A video processing method, comprising:
acquiring N consecutive frame images from a video, each of the frame images including a first object, N being an integer greater than 1;
determining a posture type of the first object in each frame image, and determining a posture distribution of the first object according to the posture type of the first object in each frame image, the posture distribution indicating a pattern of change of the posture of the first object;
adding a special effect to the video according to the posture distribution of the first object and the N frame images.
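The steps of claim 1 can be sketched as a minimal pipeline. This is an illustrative outline only, not the claimed implementation: frames are opaque objects, the posture classifier and effect renderer are passed in as callables, and collapsing consecutive repeated posture types into a "distribution" is an assumption about what the pattern of change looks like.

```python
from typing import Callable, List


def process_video(
    frames: List[object],
    classify_posture: Callable[[object], str],
    preset_distribution: List[str],
    add_effect: Callable[[object], object],
) -> List[object]:
    """Classify the first object's posture in each of N consecutive frames,
    derive the posture distribution (the ordered pattern of posture change),
    and add a special effect when it matches a preset pattern."""
    # Per-frame posture type for each of the N frames.
    postures = [classify_posture(f) for f in frames]
    # Collapse consecutive repeats: the distribution describes how the
    # posture changes over time, not every individual frame.
    distribution = [
        p for i, p in enumerate(postures) if i == 0 or p != postures[i - 1]
    ]
    if distribution == preset_distribution:
        return [add_effect(f) for f in frames]  # add the special effect
    return frames  # leave the video unchanged otherwise
```

With toy string "frames", `process_video(["down", "down", "up", "up"], lambda f: f, ["down", "up"], lambda f: f + "*")` tags every frame once the down-then-up pattern is seen.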
2. the method according to claim 1, wherein according to the distribution of the posture of first object and the N frame
Image increases special efficacy in the video, comprising:
Judge whether the posture distribution of first object meets default posture distribution;
When the posture distribution of first object meets default posture distribution, obtains the default posture and be distributed corresponding target
Special efficacy, and the target special efficacy is increased in the video according to the N frame image.
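Claim 2 pairs each preset posture distribution with its own target effect. A plain lookup table is one way to sketch that correspondence; the registry, the distribution names, and the effect names below are all hypothetical.

```python
# Hypothetical registry mapping a preset posture distribution (as a tuple
# of posture types in order) to its target special effect.
EFFECT_REGISTRY = {
    ("squat", "stand"): "fireworks",
    ("hands_down", "hands_up"): "confetti",
}


def target_effect(posture_distribution):
    """Return the effect registered for this posture distribution, or None
    when the observed distribution matches no preset distribution."""
    return EFFECT_REGISTRY.get(tuple(posture_distribution))
```

A `None` result corresponds to the no-match branch of claim 2, in which no effect is added.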
3. The method according to claim 1 or 2, wherein determining the posture distribution of the first object according to the posture type of the first object in each frame image comprises:
grouping the N frame images according to their order in the video to obtain at least two groups of images, each group including M consecutive frame images, M being an integer greater than 1;
determining a posture type corresponding to each group of images according to the posture types of the first object in the images of that group;
determining the posture distribution of the first object according to the posture type corresponding to each group of images.
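The grouping of claim 3 can be sketched as follows. Majority voting inside each group is an assumption for illustration; the claim only requires that a per-group posture type be derived from the per-frame types.

```python
from collections import Counter
from typing import List


def group_posture_distribution(postures: List[str], m: int) -> List[str]:
    """Split the per-frame posture types into consecutive groups of M
    frames, assign each group its most frequent posture type, and return
    the per-group sequence as the posture distribution."""
    assert m > 1 and len(postures) % m == 0, "N must split into groups of M"
    groups = [postures[i:i + m] for i in range(0, len(postures), m)]
    # Majority vote within each group smooths out single misclassified frames.
    return [Counter(g).most_common(1)[0][0] for g in groups]
```

Grouping makes the distribution robust to an occasional misclassified frame inside an otherwise consistent group.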
4. The method according to any one of claims 1-3, wherein, for any first image among the N frame images, determining the posture type of the first object in the first image comprises:
detecting an object region in the first image, the object region including the part of the first image corresponding to the first object;
processing the object region to determine the posture type of the first object in the first image.
5. The method according to claim 4, wherein detecting the object region in the first image comprises:
inputting data representing the first image into a first recognition model to obtain the object region; wherein the first recognition model is obtained by learning from multiple groups of first samples, each group of first samples including a sample image and a sample object region in the sample image, the sample image containing an image corresponding to the first object.
6. The method according to claim 4 or 5, wherein processing the object region to determine the posture type of the first object in the first image comprises:
inputting data representing the object region into a second recognition model to obtain the posture type of the first object in the first image; wherein the second recognition model is obtained by learning from multiple groups of second samples, each group of second samples including a sample object region and a sample posture type identified in the sample object region, the sample object region containing an image corresponding to the first object.
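Claims 4-6 describe a two-stage recognition: a first model locates the object region, a second maps that region to a posture type. The sketch below stands in for both learned models with plain callables; the frame representation and all names are hypothetical.

```python
from typing import Callable, Optional


def posture_type(
    frame: dict,
    detect_region: Callable[[dict], Optional[str]],
    classify_region: Callable[[str], str],
) -> Optional[str]:
    """Two-stage posture recognition: the first recognition model returns
    the region of the frame containing the first object (or None), and the
    second recognition model maps that region to a posture type."""
    region = detect_region(frame)      # stage 1: object region detection
    if region is None:
        return None                    # no first object in this frame
    return classify_region(region)     # stage 2: posture classification
```

In the patent both stages are models trained on labelled samples; here they are stubbed so the control flow (detect, bail out if absent, then classify) is visible.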
7. The method according to claim 2, wherein the video is a video being shot, and acquiring N consecutive frame images from the video comprises:
acquiring N to-be-processed frame images from the video, the N to-be-processed frame images including the last N frame images shot in the video;
determining whether each of the N to-be-processed frame images includes the first object, and if so, determining the N to-be-processed frame images as the N frame images.
8. The method according to claim 7, wherein adding the target special effect to the video according to the N frame images comprises:
adding the special effect to the Nth frame image among the N frame images.
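For a video being shot (claims 7-8), one natural sketch keeps a rolling buffer of the N most recent frames and applies the effect only to the latest frame, so it appears in real time. The class and its interface are illustrative assumptions.

```python
from collections import deque
from typing import Callable


class LiveEffectAdder:
    """Keep the N most recently captured frames; once all N contain the
    first object, add the effect to the Nth (latest) frame only."""

    def __init__(self, n: int,
                 contains_object: Callable[[object], bool],
                 add_effect: Callable[[object], object]):
        self.n = n
        self.buffer = deque(maxlen=n)   # rolling window of the last N frames
        self.contains_object = contains_object
        self.add_effect = add_effect

    def push(self, frame):
        """Process one newly shot frame; return it with or without the effect."""
        self.buffer.append(frame)
        if len(self.buffer) == self.n and all(
            self.contains_object(f) for f in self.buffer
        ):
            return self.add_effect(frame)  # effect on the latest frame only
        return frame
```

Applying the effect to the latest frame only keeps the already-displayed frames untouched, which is why claim 8 targets the Nth frame.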
9. The method according to claim 2, wherein the video is a video whose shooting has been completed, and acquiring N consecutive frame images from the video comprises:
performing a to-be-processed image selection operation, which comprises: acquiring N consecutive to-be-processed frame images from the video, starting from a preset image of the video;
performing an N-frame-image determination operation, which comprises: determining whether each of the N to-be-processed frame images includes an image corresponding to the first object; if so, determining the N to-be-processed frame images as the N frame images; if not, updating the preset image to the frame image following the preset image in the video;
repeating the to-be-processed image selection operation and the N-frame-image determination operation until the N frame images are obtained.
10. The method according to claim 9, wherein adding the target special effect to the video according to the N frame images comprises:
adding the special effect to at least one of the N frame images.
11. The method according to any one of claims 1-10, wherein acquiring N consecutive frame images from the video comprises:
determining a special effect to be added to the video;
determining the first object corresponding to the special effect to be added to the video;
determining the N frame images in the video according to the first object.
12. The method according to claim 2, further comprising, before acquiring N consecutive frame images from the video:
determining that the target special effect has not been added to the N frame images.
13. A video processing apparatus, comprising an acquisition module, a first determination module, a second determination module, and an addition module, wherein:
the acquisition module is configured to acquire N consecutive frame images from a video, each of the frame images including a first object, N being an integer greater than 1;
the first determination module is configured to determine a posture type of the first object in each frame image;
the second determination module is configured to determine a posture distribution of the first object according to the posture type of the first object in each frame image, the posture distribution indicating a pattern of change of the posture of the first object;
the addition module is configured to add a special effect to the video according to the posture distribution of the first object and the N frame images.
14. An electronic device, comprising: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video processing method according to any one of claims 1-12.
15. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the video processing method according to any one of claims 1-12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910304462.5A CN109889893A (en) | 2019-04-16 | 2019-04-16 | Method for processing video frequency, device and equipment |
PCT/CN2019/126757 WO2020211422A1 (en) | 2019-04-16 | 2019-12-19 | Video processing method and apparatus, and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910304462.5A CN109889893A (en) | 2019-04-16 | 2019-04-16 | Method for processing video frequency, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109889893A true CN109889893A (en) | 2019-06-14 |
Family
ID=66937553
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910304462.5A Pending CN109889893A (en) | 2019-04-16 | 2019-04-16 | Method for processing video frequency, device and equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109889893A (en) |
WO (1) | WO2020211422A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223325A (en) * | 2019-06-18 | 2019-09-10 | 北京字节跳动网络技术有限公司 | Method for tracing object, device and equipment |
CN111416991A (en) * | 2020-04-28 | 2020-07-14 | Oppo(重庆)智能科技有限公司 | Special effect processing method and apparatus, and storage medium |
WO2020211422A1 (en) * | 2019-04-16 | 2020-10-22 | 北京字节跳动网络技术有限公司 | Video processing method and apparatus, and device |
CN112199016A (en) * | 2020-09-30 | 2021-01-08 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN112929743A (en) * | 2021-01-22 | 2021-06-08 | 广州光锥元信息科技有限公司 | Method and device for adding video special effect to specified object in video and mobile terminal |
EP4016993A4 (en) * | 2019-08-16 | 2022-08-31 | Beijing Bytedance Network Technology Co., Ltd. | Image processing method and apparatus, electronic device and computer-readable storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104902212A (en) * | 2015-04-30 | 2015-09-09 | 努比亚技术有限公司 | Video communication method and apparatus |
US20160364103A1 (en) * | 2015-06-11 | 2016-12-15 | Yaron Galant | Method and apparatus for using gestures during video playback |
CN106385591A (en) * | 2016-10-17 | 2017-02-08 | 腾讯科技(上海)有限公司 | Video processing method and video processing device |
CN108289180A (en) * | 2018-01-30 | 2018-07-17 | 广州市百果园信息技术有限公司 | Method, medium and the terminal installation of video are handled according to limb action |
CN108712661A (en) * | 2018-05-28 | 2018-10-26 | 广州虎牙信息科技有限公司 | A kind of live video processing method, device, equipment and storage medium |
CN108833818A (en) * | 2018-06-28 | 2018-11-16 | 腾讯科技(深圳)有限公司 | video recording method, device, terminal and storage medium |
CN109391792A (en) * | 2017-08-03 | 2019-02-26 | 腾讯科技(深圳)有限公司 | Method, apparatus, terminal and the computer readable storage medium of video communication |
CN109462776A (en) * | 2018-11-29 | 2019-03-12 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
CN109618183A (en) * | 2018-11-29 | 2019-04-12 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004112112A (en) * | 2002-09-13 | 2004-04-08 | Sony Corp | Information processing apparatus |
US20130201328A1 (en) * | 2012-02-08 | 2013-08-08 | Hing Ping Michael CHUNG | Multimedia processing as a service |
CN107481327B (en) * | 2017-09-08 | 2019-03-15 | 腾讯科技(深圳)有限公司 | About the processing method of augmented reality scene, device, terminal device and system |
CN109089058B (en) * | 2018-07-06 | 2021-05-11 | 广州方硅信息技术有限公司 | Video picture processing method, electronic terminal and device |
CN109889893A (en) * | 2019-04-16 | 2019-06-14 | 北京字节跳动网络技术有限公司 | Method for processing video frequency, device and equipment |
-
2019
- 2019-04-16 CN CN201910304462.5A patent/CN109889893A/en active Pending
- 2019-12-19 WO PCT/CN2019/126757 patent/WO2020211422A1/en active Application Filing
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020211422A1 (en) * | 2019-04-16 | 2020-10-22 | 北京字节跳动网络技术有限公司 | Video processing method and apparatus, and device |
CN110223325A (en) * | 2019-06-18 | 2019-09-10 | 北京字节跳动网络技术有限公司 | Method for tracing object, device and equipment |
EP4016993A4 (en) * | 2019-08-16 | 2022-08-31 | Beijing Bytedance Network Technology Co., Ltd. | Image processing method and apparatus, electronic device and computer-readable storage medium |
US11516411B2 (en) | 2019-08-16 | 2022-11-29 | Beijing Bytedance Network Technology Co., Ltd. | Image processing method and apparatus, electronic device and computer-readable storage medium |
CN111416991A (en) * | 2020-04-28 | 2020-07-14 | Oppo(重庆)智能科技有限公司 | Special effect processing method and apparatus, and storage medium |
CN112199016A (en) * | 2020-09-30 | 2021-01-08 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN112929743A (en) * | 2021-01-22 | 2021-06-08 | 广州光锥元信息科技有限公司 | Method and device for adding video special effect to specified object in video and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
WO2020211422A1 (en) | 2020-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109889893A (en) | Method for processing video frequency, device and equipment | |
CN110288547A (en) | Method and apparatus for generating image denoising model | |
CN108345387A (en) | Method and apparatus for output information | |
CN109829432A (en) | Method and apparatus for generating information | |
CN109600559B (en) | Video special effect adding method and device, terminal equipment and storage medium | |
CN109618236A (en) | Video comments treating method and apparatus | |
CN110413812A (en) | Training method, device, electronic equipment and the storage medium of neural network model | |
CN110213614A (en) | The method and apparatus of key frame are extracted from video file | |
CN110069191B (en) | Terminal-based image dragging deformation implementation method and device | |
CN110059624B (en) | Method and apparatus for detecting living body | |
CN110070063A (en) | Action identification method, device and the electronic equipment of target object | |
CN109683710B (en) | A kind of palm normal vector determines method, apparatus, equipment and storage medium | |
CN109977839A (en) | Information processing method and device | |
CN107704388A (en) | For the method and apparatus for the startup time for determining application | |
CN109688428A (en) | Video comments generation method and device | |
CN108882025A (en) | Video frame treating method and apparatus | |
CN110288532B (en) | Method, apparatus, device and computer readable storage medium for generating whole body image | |
CN109117758A (en) | Method and apparatus for generating information | |
CN110188782A (en) | Image similarity determines method, apparatus, electronic equipment and readable storage medium storing program for executing | |
CN110334650A (en) | Object detecting method, device, electronic equipment and storage medium | |
CN109981989A (en) | Render method, apparatus, electronic equipment and the computer readable storage medium of image | |
CN110198473A (en) | Method for processing video frequency, device, electronic equipment and computer readable storage medium | |
CN108537149A (en) | Image processing method, device, storage medium and electronic equipment | |
CN109242892B (en) | Method and apparatus for determining the geometric transform relation between image | |
CN110287161A (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190614 |