CN105631803B - The method and apparatus of filter processing - Google Patents
- Publication number
- CN105631803B CN201510951946A
- Authority
- CN
- China
- Prior art keywords
- video image
- target area
- image
- video
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000012545 processing Methods 0.000 title claims abstract description 129
- 238000000034 method Methods 0.000 title claims abstract description 35
- 238000001514 detection method Methods 0.000 claims description 16
- 238000004891 communication Methods 0.000 description 10
- 238000010586 diagram Methods 0.000 description 7
- 238000005516 engineering process Methods 0.000 description 6
- 230000003287 optical effect Effects 0.000 description 4
- 230000005236 sound signal Effects 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000001133 acceleration Effects 0.000 description 2
- 230000000712 assembly Effects 0.000 description 2
- 238000000429 assembly Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000007726 management method Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000002542 deteriorative effect Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 239000012092 media component Substances 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000004064 recycling Methods 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure relates to a method and apparatus for filter processing, belonging to the technical field of image processing. The method includes: obtaining an image of a target area in a first video image of a video, the target area in the first video image being the region where a tracked target is located in the first video image; determining, according to the image of the target area in the first video image, a target area in a second video image of the video, the second video image being a subsequent frame of the first video image and the target area in the second video image being the region where the tracked target is located in the second video image; and carrying out filter processing on the second video image according to the target area in the second video image. The disclosure thereby achieves filter processing of video images, extends the range of application of filter processing, and improves the user experience.
Description
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a method and apparatus for filter processing.
Background art
Filter processing can achieve various special effects on an image, such as blurring a target or blurring the background, and has become a common means of image processing. When carrying out filter processing, a user first delineates or sets a processing region in a photo and then selects a specific operation mode (such as mosaic or background blurring) from the filter functions provided by an image-processing application; the application then applies the filter to the processing region automatically. However, current filter processing requires the user to delineate a processing region for each photo separately, or to set the same processing region for all photos; it can therefore only be applied to the processing of photos, and its range of application is rather limited.
Summary of the invention
To overcome the limited range of application of filter processing in the related art, the present disclosure provides a method and apparatus for filter processing.
According to a first aspect of the embodiments of the present disclosure, a method of filter processing is provided, comprising:

obtaining an image of a target area in a first video image of a video, the target area in the first video image being the region where a tracked target is located in the first video image;

determining, according to the image of the target area in the first video image, a target area in a second video image of the video, the second video image being a subsequent frame of the first video image and the target area in the second video image being the region where the tracked target is located in the second video image;

carrying out filter processing on the second video image according to the target area in the second video image.

By determining the target area in one frame according to the image of the target area in another frame, and carrying out filter processing on that frame according to the determined target area, filter processing of video images is achieved, the range of application of filter processing is extended, and the user experience is improved.
In one possible implementation of the disclosure, when the first video image is the first frame of the video on which filter processing is carried out, obtaining the image of the target area in the first video image of the video comprises:

outputting the first video image to a user;

receiving the target area in the first video image input by the user;

obtaining the image of the target area in the first video image from the first video image;

or,

identifying the image of the target area in the first video image from the first video image using a target detection algorithm.

Through user selection or automatic identification, the image of the target area in the first frame of the video on which filter processing is carried out is obtained.
In another possible implementation of the disclosure, when the first video image is not the first frame of the video on which filter processing is carried out, obtaining the image of the target area in the first video image of the video comprises:

determining the target area in the first video image according to the image of the target area in a third video image of the video, the third video image being a frame preceding the first video image and the target area in the third video image being the region where the tracked target is located in the third video image;

obtaining the image of the target area in the first video image from the first video image.

Starting from the first frame of the video on which filter processing is carried out, the target area in one frame is determined from a frame whose target area has already been determined, and the image of the newly determined target area is in turn used to determine the target area in the next frame; by repeating this cycle, the target areas in all video images of the video can be determined.
Optionally, the first video image is the frame immediately preceding the second video image.

Provided the image of the target area in the frame preceding the second video image is available, the fact that the images of the target area in two adjacent frames are the most similar can be exploited: determining the target area in the second video image from the image of the target area in its preceding frame gives the highest accuracy.
In yet another possible implementation of the disclosure, determining the target area in the second video image according to the image of the target area in the first video image comprises:

obtaining a feature value of the image of the target area in the first video image;

determining the target area in the second video image according to the feature value of the image of the target area in the first video image.

The feature value of the image of the target area in the first video image is thus used to determine the target area in a subsequent frame of the first video image.
Optionally, determining the target area in the second video image according to the feature value of the image of the target area in the first video image comprises:

estimating the target area in the second video image according to the target area in the first video image;

scanning the second video image according to the image of the target area in the first video image, and detecting possible target areas in the second video image;

determining the target area in the second video image according to the estimated target area in the second video image and the detected possible target areas in the second video image.

Tracking the target with the TLD algorithm copes with problems such as deformation and partial occlusion of the tracked target during tracking, and the tracking effect is stable, robust and reliable.
Optionally, determining the target area in the second video image according to the feature value of the image of the target area in the first video image comprises:

choosing a plurality of candidate regions in the second video image centred on a determined region, the determined region being the region in the second video image corresponding to the target area in the first video image;

separately calculating the similarity between the feature-value histogram of the image of each candidate region and that of the image of the target area in the first video image;

choosing, from the plurality of candidate regions, the candidate region whose feature-value histogram is most similar to that of the image of the target area in the first video image, and updating the determined region with the chosen candidate region;

when the number of times the distance between the updated determined region and the determined region before the update is less than a set distance has not reached a set number, choosing a plurality of candidate regions in the second video image centred on the updated determined region and updating the determined region again;

when the number of times the distance between the updated determined region and the determined region before the update is less than the set distance reaches the set number, taking the updated determined region as the target area in the second video image.

Tracking the target with the Mean Shift algorithm locates the target quickly, with a short search time and good real-time performance; and because statistical features are used, it is highly robust to noise.
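The patent describes the Mean Shift tracking step only in prose. As a rough illustration (not the patent's implementation), the following sketch picks, among candidate regions around the current position, the one whose grey-level histogram best matches the target's; the function names, the Bhattacharyya coefficient as the histogram similarity, and the offset grid are all illustrative choices, not taken from the patent:

```python
import numpy as np

def histogram(patch, bins=8):
    """Normalised grey-level histogram of an image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / hist.sum()

def similarity(h1, h2):
    # Bhattacharyya coefficient, a common histogram similarity in Mean Shift
    return float(np.sqrt(h1 * h2).sum())

def best_candidate(frame, target_hist, center, size, offsets):
    """Among square candidate regions of side `size` at `center` + each
    offset, return the top-left corner whose histogram is most similar
    to `target_hist` (the procedure's single update step)."""
    best, best_sim = center, -1.0
    for dx, dy in offsets:
        cx, cy = center[0] + dx, center[1] + dy
        patch = frame[cy:cy + size, cx:cx + size]
        s = similarity(histogram(patch), target_hist)
        if s > best_sim:
            best, best_sim = (cx, cy), s
    return best
```

A full tracker would repeat this update until the region moves less than a set distance a set number of times, as the procedure above specifies.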
In yet another possible implementation of the disclosure, carrying out filter processing on the second video image according to the target area in the second video image comprises:

carrying out filter processing on the image of the target area in the second video image, according to the target area in the second video image;

or,

carrying out filter processing on the image in the second video image other than the image of the target area in the second video image, according to the target area in the second video image.

According to the user's needs, either the image of the target area or the image outside the target area can be chosen for filter processing, giving good applicability.
According to a second aspect of the embodiments of the present disclosure, an apparatus for filter processing is provided, comprising:

an obtaining module, configured to obtain an image of a target area in a first video image of a video, the target area in the first video image being the region where a tracked target is located in the first video image;

a determining module, configured to determine, according to the image of the target area in the first video image, a target area in a second video image of the video, the second video image being a subsequent frame of the first video image and the target area in the second video image being the region where the tracked target is located in the second video image;

a processing module, configured to carry out filter processing on the second video image according to the target area in the second video image.
In one possible implementation of the disclosure, the obtaining module comprises:

an output submodule, configured to output the first video image to a user when the first video image is the first frame of the video on which filter processing is carried out;

a receiving submodule, configured to receive the target area in the first video image input by the user;

a first obtaining submodule, configured to obtain the image of the target area in the first video image from the first video image;

or,

the obtaining module is configured to identify, when the first video image is the first frame of the video on which filter processing is carried out, the image of the target area in the first video image from the first video image using a target detection algorithm.
In another possible implementation of the disclosure, the obtaining module comprises:

a first determining submodule, configured to determine, when the first video image is not the first frame of the video on which filter processing is carried out, the target area in the first video image according to the image of the target area in a third video image of the video, the third video image being a frame preceding the first video image and the target area in the third video image being the region where the tracked target is located in the third video image;

a second obtaining submodule, configured to obtain the image of the target area in the first video image from the first video image.

Optionally, the first video image is the frame immediately preceding the second video image.
In yet another possible implementation of the disclosure, the determining module comprises:

a third obtaining submodule, configured to obtain a feature value of the image of the target area in the first video image;

a second determining submodule, configured to determine the target area in the second video image according to the feature value of the image of the target area in the first video image.
Optionally, the second determining submodule comprises:

an estimating submodule, configured to estimate the target area in the second video image according to the target area in the first video image;

a detecting submodule, configured to scan the second video image according to the image of the target area in the first video image and detect possible target areas in the second video image;

a third determining submodule, configured to determine the target area in the second video image according to the estimated target area in the second video image and the detected possible target areas in the second video image.
Optionally, the second determining submodule comprises:

a choosing submodule, configured to choose a plurality of candidate regions in the second video image centred on a determined region, the determined region being the region in the second video image corresponding to the target area in the first video image;

a calculating submodule, configured to separately calculate the similarity between the feature-value histogram of the image of each candidate region and that of the image of the target area in the first video image;

an updating submodule, configured to choose, from the plurality of candidate regions, the candidate region whose feature-value histogram is most similar to that of the image of the target area in the first video image, and to update the determined region with the chosen candidate region;

a judging submodule, configured to: when the number of times the distance between the updated determined region and the determined region before the update is less than a set distance has not reached a set number, choose a plurality of candidate regions in the second video image centred on the updated determined region and update the determined region again; and when that number reaches the set number, take the updated determined region as the target area in the second video image.
In yet another possible implementation of the disclosure, the processing module is configured to:

carry out filter processing on the image of the target area in the second video image, according to the target area in the second video image;

or,

carry out filter processing on the image in the second video image other than the image of the target area in the second video image, according to the target area in the second video image.
According to a third aspect of the embodiments of the present disclosure, an apparatus for filter processing is provided, comprising:

a processor;

a memory for storing instructions executable by the processor;

wherein the processor is configured to:

obtain an image of a target area in a first video image of a video, the target area in the first video image being the region where a tracked target is located in the first video image;

determine, according to the image of the target area in the first video image, a target area in a second video image of the video, the second video image being a subsequent frame of the first video image and the target area in the second video image being the region where the tracked target is located in the second video image;

carry out filter processing on the second video image according to the target area in the second video image.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects: by determining the target area in one frame according to the image of the target area in another frame, and carrying out filter processing on that frame according to the determined target area, filter processing of video images is achieved, the range of application of filter processing is extended, and the user experience is improved.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and form part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a diagram of an application scenario of a method of filter processing according to an exemplary embodiment;
Fig. 2 is a flow chart of a method of filter processing according to an exemplary embodiment;
Fig. 3 is a flow chart of a method of filter processing according to an exemplary embodiment;
Figs. 4a-4d are diagrams of the terminal interface in the course of carrying out a method of filter processing according to an exemplary embodiment;
Fig. 5 is a block diagram of an apparatus for filter processing according to an exemplary embodiment;
Fig. 6 is a block diagram of an apparatus for filter processing according to an exemplary embodiment;
Fig. 7 is a block diagram of an apparatus for filter processing according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. In the following description, when reference is made to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods, consistent with some aspects of the invention as detailed in the appended claims.
The application scenario of the method of filter processing provided by the embodiments of the present disclosure is first briefly introduced with reference to Fig. 1. Referring to Fig. 1, every frame of video 3 contains the same person, and user 1 carries out filter processing on video 3 through terminal 2: for example, mosaic processing yields video 4a, in which the person is unrecognisable, while blurring the background yields video 4b, in which the person stands out.

Specifically, terminal 2 may be a smart phone, a tablet computer, a smart television, a multimedia player, a laptop computer, a desktop computer, or the like.

It should be noted that the above application scenario, video content, filter processing mode and terminal type are merely examples, and the disclosure is not restricted thereto.
Fig. 2 is a flow chart of a method of filter processing according to an exemplary embodiment. As shown in Fig. 2, the method is used in a terminal and includes the following steps.

In step S101, an image of a target area in a first video image of a video is obtained.

In the present embodiment, the target area in the first video image is the region where the tracked target is located in the first video image.

In step S102, a target area in a second video image of the video is determined according to the image of the target area in the first video image.

In the present embodiment, the second video image is a subsequent frame of the first video image, and the target area in the second video image is the region where the tracked target is located in the second video image. The precedence between the first video image and the second video image is determined by the shooting order or playing order of the video.

In step S103, filter processing is carried out on the second video image according to the target area in the second video image.

In the embodiment of the present disclosure, by determining the target area in one frame according to the image of the target area in another frame, and carrying out filter processing on that frame according to the determined target area, filter processing of video images is achieved, the range of application of filter processing is extended, and the user experience is improved.
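Steps S101-S103 amount to a per-frame loop: track the target area forward from the previous frame, then filter the frame accordingly. The following is an illustrative outline only; `track` and `apply_filter` stand in for any concrete tracking and filtering routines and are not names from the patent:

```python
def process_video(frames, first_box, track, apply_filter):
    """frames: sequence of images; first_box: (x, y, w, h) target area in
    frame 0. `track(prev_frame, prev_box, cur_frame)` returns the target
    area in cur_frame; `apply_filter(frame, box)` returns the filtered frame."""
    box = first_box
    out = []
    for i, frame in enumerate(frames):
        if i > 0:
            # S102: locate the target area in this frame from the previous one
            box = track(frames[i - 1], box, frame)
        # S103: filter the frame according to the located target area
        out.append(apply_filter(frame, box))
    return out
```

The same loop covers both filtering variants (filter the target area, or filter everything outside it); only `apply_filter` changes.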
Fig. 3 is a flow chart of a method of filter processing according to an exemplary embodiment. As shown in Fig. 3, the method is used in a terminal and includes the following steps.

In step S201, an initial frame video image of the video is obtained.

In the present embodiment, the frames of the video are arranged according to shooting order or playing order, and the initial frame video image is the first frame of the video on which filter processing is carried out.

In step S202, the target area in the initial frame video image is determined.

In the present embodiment, the target area in the initial frame video image is the region where the tracked target is located in the initial frame video image. In practical applications, the tracked target may be an object, such as a person or an animal, that appears in at least two frames of the video.
In one implementation of the present embodiment, step S202 may include:

outputting the initial frame video image to a user;

receiving the target area in the initial frame video image input by the user.

For example, Fig. 4a shows the initial frame video image output by the terminal, and the box added in Fig. 4b relative to Fig. 4a shows the target area input by the user.

In practical applications, an interface for inputting the target area may be provided on the initial frame video image. For example, when the initial frame video image is output on a touch screen, the user may slide a finger from one point to another on the image; the terminal forms a rectangular frame with the line between the two points as its diagonal, and the region enclosed by the rectangular frame is the target area. As another example, when the initial frame video image is output on a non-touch display, the user may drag from one point to another on the image with an input device such as a mouse while holding the selection; similarly, the terminal forms a rectangular frame with the line between the two points as its diagonal, and the region enclosed by the rectangular frame is the target area.
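The rectangle-from-diagonal construction above can be sketched in a few lines, assuming image coordinates with the origin at the top left (the patent describes the construction but gives no formula; the function name is illustrative):

```python
def rect_from_diagonal(p1, p2):
    """Return (x, y, w, h) for the axis-aligned rectangle whose diagonal
    is the segment between the two input points p1 and p2."""
    x = min(p1[0], p2[0])
    y = min(p1[1], p2[1])
    w = abs(p1[0] - p2[0])
    h = abs(p1[1] - p2[1])
    return x, y, w, h
```

Taking the min/max of each coordinate makes the result independent of the drag direction.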
In another implementation of the present embodiment, step S202 may include:

identifying the image of the target area from the initial frame video image using a target detection algorithm, and thereby determining the target area in the initial frame video image.

In a concrete implementation, a large set of redundant features may first be obtained; machine learning is then used to find, in the feature set, the features that best reflect the characteristics of the target object, and a classifier is constructed to detect the target object and thereby obtain the target area.
In step S203, filter processing is carried out on the initial frame video image according to the target area in the initial frame video image.

In one implementation of the present embodiment, step S203 may include:

carrying out filter processing on the image of the target area in the initial frame video image, according to the target area in the initial frame video image.

For example, Fig. 4c shows the image obtained by applying mosaic processing to the image of the target area in the initial frame video image: the tonal detail of the image of the target area is degraded and disrupted into colour blocks, rendering the target area unrecognisable and thereby protecting it.
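A mosaic filter of the kind described can be sketched with NumPy: each tile of the target area is replaced by its mean value, destroying the tonal detail. This is an illustrative sketch, not the patent's implementation; the function name and default block size are our own, and a single-channel image is assumed (the tile averaging also broadcasts over a colour axis):

```python
import numpy as np

def mosaic(image, box, block=8):
    """Pixelate the (x, y, w, h) region of `image` by replacing each
    block x block tile with its mean value."""
    x, y, w, h = box
    region = image[y:y + h, x:x + w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten the tile to one value
    out = image.copy()
    out[y:y + h, x:x + w] = region.astype(image.dtype)
    return out
```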
In another implementation of the present embodiment, step S203 may include:

carrying out filter processing on the image in the initial frame video image other than the image of the target area, according to the target area in the initial frame video image.

For example, Fig. 4d shows the image obtained by blurring the background, i.e. the image in the initial frame video image other than the image of the target area: the filtering makes the depth of field shallow, so that the image of the target area stands out.
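The background-blur variant can likewise be sketched: blur the whole frame, then paste the sharp target area back. A simple box blur on a grayscale image is assumed here for illustration; a real implementation would typically use a stronger, depth-aware blur:

```python
import numpy as np

def blur_background(image, box, k=3):
    """Box-blur `image` with a k x k kernel, then restore the (x, y, w, h)
    target area from the original so only the background is blurred."""
    x, y, w, h = box
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    for dy in range(k):          # accumulate the k*k shifted copies
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    out = blurred.astype(image.dtype)
    out[y:y + h, x:x + w] = image[y:y + h, x:x + w]  # keep the target sharp
    return out
```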
In yet another implementation of the present embodiment, the method may further include:

receiving a filter operation instruction.

Correspondingly, step S203 may include:

carrying out filter processing on the image of the target area in the initial frame video image according to the target area in the initial frame video image and the filter operation instruction.

In the present embodiment, the filter operation instruction may be an instruction for mosaic processing, background blurring, or the like. By receiving the filter operation instruction, the user can choose to apply different filter processing to the video as required.
In step S204, the image of the target area in the initial frame video image is obtained.

It will be readily appreciated that, in step S204, the image of the target area may be obtained from the initial frame video image after receiving the target area input by the user, or the image of the target area may be identified from the initial frame video image using a target detection algorithm.
In step S205, the target area in the second frame video image of the video is determined according to the image of the target area in the initial frame video image.

In the present embodiment, the second frame video image is the frame immediately following the initial frame video image.

In one implementation of the present embodiment, step S205 may include:

obtaining a feature value of the image of the target area in the initial frame video image;

determining the target area in the second frame video image according to the feature value of the image of the target area in the initial frame video image.

In practical applications, the Tracking-Learning-Detection (TLD) algorithm or the Mean Shift algorithm may be used to determine the target area in the second frame video image.
Optionally, when the TLD algorithm is used, determining the target area in the second frame video image according to the feature value of the image of the target area in the initial frame video image may include:

estimating the target area in the second frame video image according to the target area in the initial frame video image;

scanning the second frame video image according to the image of the target area in the initial frame video image, and detecting possible target areas in the second frame video image;

determining the target area in the second frame video image according to the estimated target area in the second frame video image and the detected possible target areas in the second frame video image.
Preferably, estimating the target area in the second frame video image according to the target area in the initial frame video image may include:

dividing the image of the target area in the initial frame video image into a plurality of image blocks;

scanning for each image block, within a limited region of the second frame video image centred on the position corresponding to the target area in the initial frame video image, to obtain the position of each image block in the second frame video image;

calculating the average of the distances by which the image blocks move from their positions in the initial frame video image to their positions in the second frame video image;

adding the calculated average moving distance to the position in the second frame video image corresponding to the target area in the initial frame video image, to obtain the estimated target area in the second frame video image.
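The averaging step above can be sketched as follows, assuming the per-block positions have already been found by scanning. The pair-list representation of the block tracks is our own, introduced only for illustration:

```python
def estimate_box(box, block_tracks):
    """box: (x, y, w, h) target area in the previous frame.
    block_tracks: list of (old_pos, new_pos) pairs for the image blocks,
    each pos an (x, y) point. The box is shifted by the mean displacement
    of the blocks, as in the estimation step described above."""
    dxs = [new[0] - old[0] for old, new in block_tracks]
    dys = [new[1] - old[1] for old, new in block_tracks]
    mean_dx = sum(dxs) / len(dxs)
    mean_dy = sum(dys) / len(dys)
    x, y, w, h = box
    return x + mean_dx, y + mean_dy, w, h
```

Using the mean (or, in some trackers, the median) displacement makes the estimate robust to a few badly tracked blocks.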
Preferably, scanning the second frame video image according to the image of the target area in the initial frame video image and detecting possible target areas in the second frame video image may include:

successively choosing partial images of the second frame video image with a scanning window of the same size as the target area in the initial frame video image;

comparing the similarity between each chosen partial image and the image of the target area in the initial frame video image;

taking all partial images whose similarity to the image of the target area in the initial frame video image exceeds a set threshold as possible images of the target area in the second frame video image, and taking the regions where those images are located as possible target areas in the second frame video image.
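The scanning-window detection above can be sketched as follows. The patent does not fix a particular similarity measure; a normalised inverse mean absolute difference is used here purely for illustration, and a grayscale frame is assumed:

```python
import numpy as np

def detect_candidates(frame, template, threshold=0.9):
    """Scan `frame` with a window the size of `template` and return the
    (x, y) top-left corners of windows whose similarity to the template
    exceeds `threshold` (the possible target areas)."""
    th, tw = template.shape
    fh, fw = frame.shape
    hits = []
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw].astype(float)
            # 1.0 for an exact match, lower as the patch differs more
            sim = 1.0 - np.abs(patch - template).mean() / 255.0
            if sim > threshold:
                hits.append((x, y))
    return hits
```

In practice the scan is restricted to a limited region and run at several window scales; the exhaustive single-scale loop here only shows the thresholding logic.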
Preferably, determining the target area in the second frame video image according to the estimated target area in the second frame video image and the detected possible target areas in the second frame video image comprises:
when the estimated target area in the second frame video image is identical to one of the possible target areas obtained in the second frame video image, taking the estimated target area as the target area in the second frame video image.
In practical applications, the method may also include:
when the estimated target area in the second frame video image differs from all the possible target areas obtained in the second frame video image, determining that no target area exists in the second frame video image.
Furthermore, whether the detected possible target areas in the second frame video image are correct can be assessed against the estimated target area in the second frame video image, thereby providing samples for scanning subsequent video images and gradually improving the scanning accuracy.
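The fusion step above hinges on deciding whether the estimated area and a detected candidate are "identical". One way to sketch this is an intersection-over-union test; the IoU criterion and the overlap threshold are illustrative assumptions, not taken from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def fuse(estimated, candidates, same_thresh=0.8):
    """Keep the estimated box only if some detected candidate agrees
    with it; otherwise report that the target area does not exist."""
    for c in candidates:
        if iou(estimated, c) >= same_thresh:
            return estimated
    return None  # no target area in this frame
```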
Optionally, when the Mean Shift algorithm is used, determining the target area in the second frame video image according to the characteristic value of the image of the target area in the initial frame video image may include:
selecting multiple candidate regions in the second video image centered on a determined region, the determined region being the region in the second video image corresponding to the target area in the first video image;
separately calculating the similarity between the feature value histograms of the images of the multiple candidate regions and the feature value histogram of the image of the target area in the first video image;
selecting, from the multiple candidate regions, the candidate region whose feature value histogram has the greatest similarity with that of the image of the target area in the first video image, and updating the determined region with the selected candidate region;
when the number of times that the distance between the updated determined region and the determined region before the update is less than a set distance has not reached a set number, selecting multiple candidate regions in the second video image centered on the updated determined region and updating the determined region again;
when that number reaches the set number, taking the updated determined region as the target area in the second video image.
The feature value histogram is usually a color histogram.
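The Mean Shift variant above can be sketched as follows, using gray-level histograms in place of color histograms for brevity. The candidate offsets, the Bhattacharyya-style histogram similarity, and the stopping parameters are all illustrative assumptions.

```python
import numpy as np

def hist_sim(a, b, bins=16):
    """Bhattacharyya coefficient between gray-level histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.sum(np.sqrt(ha * hb)))

def mean_shift_track(target_patch, frame, box,
                     eps=1.0, stable_needed=2, max_iter=50):
    """Repeatedly move the box to the neighboring candidate whose
    histogram best matches the target; stop once the shift stays
    below `eps` for `stable_needed` consecutive iterations."""
    x, y, w, h = box
    stable = 0
    for _ in range(max_iter):
        best, best_s = (x, y), -1.0
        for dy in (-2, 0, 2):          # candidate regions around the
            for dx in (-2, 0, 2):      # current determined region
                nx, ny = x + dx, y + dy
                patch = frame[ny:ny + h, nx:nx + w]
                if patch.shape != (h, w):
                    continue
                s = hist_sim(target_patch, patch)
                if s > best_s:
                    best_s, best = s, (nx, ny)
        shift = ((best[0] - x) ** 2 + (best[1] - y) ** 2) ** 0.5
        x, y = best
        stable = stable + 1 if shift < eps else 0
        if stable >= stable_needed:
            break
    return (x, y, w, h)
```

On a synthetic frame with one bright target, the box converges onto the target from a nearby starting position.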
In step S206, filter processing is performed on the second frame video image according to the target area in the second frame video image.
Optionally, the filter processing in step S206 may be essentially identical to that in step S203, the only difference being that the processing object changes from the initial frame video image in step S203 to the second frame video image in step S206; details are not repeated here.
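Applying a filter only inside (or only outside) the determined target area might look like the following sketch. The box blur here stands in for whatever filter effect is actually used; it is an assumption, not the patent's filter.

```python
import numpy as np

def blur(img, k=3):
    """Simple box blur (odd `k`), edge-padded."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def filter_frame(frame, box, inside=True):
    """Apply the filter either to the target region only, or to
    everything in the frame except the target region."""
    x, y, w, h = box
    out = frame.astype(float).copy()
    smoothed = blur(frame)
    if inside:
        out[y:y + h, x:x + w] = smoothed[y:y + h, x:x + w]
    else:
        mask = np.ones(frame.shape, bool)
        mask[y:y + h, x:x + w] = False
        out[mask] = smoothed[mask]
    return out
```

Pixels outside the box are untouched in the `inside=True` case, and vice versa.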
In step S207, the image of the target area in the second frame video image is obtained.
It is readily apparent that step S207 may first determine the target area in the second frame video image through step S205, and then obtain the image of that target area from the second frame video image.
In step S208, the target area in the third frame video image of the video is determined according to the image of the target area in the second frame video image.
In the present embodiment, the third frame video image is the frame video image following the second frame video image.
Optionally, the processing in step S208 may be essentially identical to that in step S205, the only differences being that the image whose target area is determined changes from the second frame video image in step S205 to the third frame video image in step S208, and the reference image changes from the initial frame video image in step S205 to the second frame video image in step S208; details are not repeated here.
It should be noted that if the target cannot be tracked in the second frame video image due to occlusion, disappearance, or the like, then obtaining the image of the target area in the second frame video image in step S207 becomes obtaining the image of the target area in the initial frame video image, and determining the target area in the third frame video image according to the image of the target area in the second frame video image in step S208 accordingly becomes determining the target area in the third frame video image according to the image of the target area in the initial frame video image. Extending to the fourth frame video image, the fifth frame video image, and so on: what is obtained in step S207 is usually the image of the target area in the video image immediately preceding the current frame video image, but when no image of the target area exists in that preceding frame video image, what is obtained in step S207 becomes the image of the target area in the video image that is closest to the current frame video image among the preceding video images in which an image of the target area exists.
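The fallback rule above — use the most recent earlier frame in which a target area image still exists — reduces to a short helper. A minimal sketch; the list-of-patches representation is an assumption for illustration.

```python
def reference_patch(history):
    """`history` holds one target-area patch per processed frame,
    with None for frames where tracking failed (occlusion, target
    leaving the frame). Return the most recent valid patch to use
    as the reference when determining the next frame's target area."""
    for patch in reversed(history):
        if patch is not None:
            return patch
    return None
```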
In step S209, filter processing is performed on the third frame video image according to the target area in the third frame video image.
Optionally, the filter processing in step S209 may also be essentially identical to that in step S203, the only difference being that the processing object changes from the initial frame video image in step S203 to the third frame video image in step S209; details are not repeated here.
It should be noted that filter processing methods for the fourth frame video image, the fifth frame video image, and so on can be derived in turn according to the above steps S201 to S209, and are not enumerated here.
The embodiments of the present disclosure determine the target area in another frame video image according to the image of the target area in one frame video image, and perform filter processing on the other frame video image according to the target area in it, thereby implementing filter processing of video images, expanding the application range of filter processing, and improving user experience.
Fig. 5 is a block diagram of a filter processing device according to an exemplary embodiment. Referring to Fig. 5, the device includes an acquisition module 301, a determining module 302 and a processing module 303.
The acquisition module 301 is configured to obtain the image of the target area in a first video image of a video, the target area in the first video image being the region where a tracking target is located in the first video image.
The determining module 302 is configured to determine, according to the image of the target area in the first video image, the target area in a second video image of the video, the second video image being a subsequent frame video image of the first video image, and the target area in the second video image being the region where the tracking target is located in the second video image.
The processing module 303 is configured to perform filter processing on the second video image according to the target area in the second video image.
The embodiments of the present disclosure determine the target area in another frame video image according to the image of the target area in one frame video image, and perform filter processing on the other frame video image according to the target area in it, thereby implementing filter processing of video images, expanding the application range of filter processing, and improving user experience.
Fig. 6 is a block diagram of a filter processing device according to an exemplary embodiment. Referring to Fig. 6, the device includes an acquisition module 401, a determining module 402 and a processing module 403.
The acquisition module 401 is configured to obtain the image of the target area in a first video image of a video, the target area in the first video image being the region where a tracking target is located in the first video image.
The determining module 402 is configured to determine, according to the image of the target area in the first video image, the target area in a second video image of the video, the second video image being a subsequent frame video image of the first video image, and the target area in the second video image being the region where the tracking target is located in the second video image.
The processing module 403 is configured to perform filter processing on the second video image according to the target area in the second video image.
In one implementation of the present embodiment, the acquisition module 401 may include an output submodule 401a, a receiving submodule 401b and a first acquisition submodule 401c.
The output submodule 401a is configured to output the first video image to the user when the first video image is the first frame video image of the video on which filter processing is performed.
The receiving submodule 401b is configured to receive the target area in the first video image input by the user.
The first acquisition submodule 401c is configured to obtain the image of the target area in the first video image from the first video image.
In another implementation of the present embodiment, the acquisition module 401 may be configured to identify the image of the target area in the first video image from the first video image using a target detection algorithm, when the first video image is the first frame video image of the video on which filter processing is performed.
In another implementation of the present embodiment, the acquisition module 401 may include a first determining submodule 401d and a second acquisition submodule 401e.
The first determining submodule 401d is configured to determine the target area in the first video image according to the image of the target area in a third video image of the video when the first video image is not the first frame video image of the video on which filter processing is performed, the third video image being a frame video image before the first video image, and the target area in the third video image being the region where the tracking target is located in the third video image.
The second acquisition submodule 401e is configured to obtain the image of the target area in the first video image from the first video image.
Optionally, the first video image may be the frame video image immediately preceding the second video image.
In another implementation of the present embodiment, the determining module 402 may include a third acquisition submodule 402a and a second determining submodule 402b.
The third acquisition submodule 402a is configured to obtain the characteristic value of the image of the target area in the first video image.
The second determining submodule 402b is configured to determine the target area in the second video image according to the characteristic value of the image of the target area in the first video image.
Optionally, the second determining submodule 402b may include an estimation submodule 402ba, a detection submodule 402bb and a third determining submodule 402bc.
The estimation submodule 402ba is configured to estimate the target area in the second video image according to the target area in the first video image.
The detection submodule 402bb is configured to scan the second video image according to the image of the target area in the first video image, and detect possible target areas in the second video image.
The third determining submodule 402bc is configured to determine the target area in the second video image according to the estimated target area in the second video image and the detected possible target areas in the second video image.
Optionally, the second determining submodule 402b may include a selection submodule 402bd, a calculation submodule 402be, an update submodule 402bf and a judging submodule 402bg.
The selection submodule 402bd is configured to select multiple candidate regions in the second video image centered on a determined region, the determined region being the region in the second video image corresponding to the target area in the first video image.
The calculation submodule 402be is configured to separately calculate the similarity between the feature value histograms of the images of the multiple candidate regions and the feature value histogram of the image of the target area in the first video image.
The update submodule 402bf is configured to select, from the multiple candidate regions, the candidate region whose feature value histogram has the greatest similarity with that of the image of the target area in the first video image, and to update the determined region with the selected candidate region.
The judging submodule 402bg is configured to: when the number of times that the distance between the updated determined region and the determined region before the update is less than a set distance has not reached a set number, select multiple candidate regions in the second video image centered on the updated determined region and update the determined region again; and when that number reaches the set number, take the updated determined region as the target area in the second video image.
In another implementation of the present embodiment, the processing module 403 may be configured to perform filter processing on the image of the target area in the second video image according to the target area in the second video image; or to perform filter processing, according to the target area in the second video image, on the image in the second video image other than the image of the target area in the second video image.
The embodiments of the present disclosure determine the target area in another frame video image according to the image of the target area in one frame video image, and perform filter processing on the other frame video image according to the target area in it, thereby implementing filter processing of video images, expanding the application range of filter processing, and improving user experience.
Regarding the devices in the above embodiments, the specific manners in which the modules perform operations have been described in detail in the embodiments of the related methods, and will not be elaborated here.
Fig. 7 is a block diagram of a filter processing device 800 according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 7, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the above methods. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors to provide status assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor component 814 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, is also provided; the instructions are executable by the processor 820 of the device 800 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a filter processing method, the method comprising:
obtaining the image of the target area in a first video image of a video, the target area in the first video image being the region where a tracking target is located in the first video image;
determining, according to the image of the target area in the first video image, the target area in a second video image of the video, the second video image being a subsequent frame video image of the first video image, and the target area in the second video image being the region where the tracking target is located in the second video image;
performing filter processing on the second video image according to the target area in the second video image.
In one implementation of the present embodiment, when the first video image is the first frame video image of the video on which filter processing is performed, obtaining the image of the target area in the first video image of the video comprises:
outputting the first video image to a user;
receiving the target area in the first video image input by the user;
obtaining the image of the target area in the first video image from the first video image;
alternatively,
identifying the image of the target area in the first video image from the first video image using a target detection algorithm.
In another possible implementation of the present disclosure, when the first video image is not the first frame video image of the video on which filter processing is performed, obtaining the image of the target area in the first video image of the video comprises:
determining the target area in the first video image according to the image of the target area in a third video image of the video, the third video image being a frame video image before the first video image, and the target area in the third video image being the region where the tracking target is located in the third video image;
obtaining the image of the target area in the first video image from the first video image.
Optionally, the first video image is the frame video image immediately preceding the second video image.
When the image of the target area in the frame video image immediately preceding the second video image is available, taking advantage of the fact that the images of the target area in two adjacent frame video images are the most similar, determining the target area in the second video image according to the image of the target area in the preceding frame video image yields the highest accuracy.
In another implementation of the present embodiment, determining the target area in the second video image according to the image of the target area in the first video image comprises:
obtaining the characteristic value of the image of the target area in the first video image;
determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image.
Optionally, determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image comprises:
estimating the target area in the second video image according to the target area in the first video image;
scanning the second video image according to the image of the target area in the first video image, and detecting possible target areas in the second video image;
determining the target area in the second video image according to the estimated target area in the second video image and the detected possible target areas in the second video image.
Optionally, determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image comprises:
selecting multiple candidate regions in the second video image centered on a determined region, the determined region being the region in the second video image corresponding to the target area in the first video image;
separately calculating the similarity between the feature value histograms of the images of the multiple candidate regions and the feature value histogram of the image of the target area in the first video image;
selecting, from the multiple candidate regions, the candidate region whose feature value histogram has the greatest similarity with that of the image of the target area in the first video image, and updating the determined region with the selected candidate region;
when the number of times that the distance between the updated determined region and the determined region before the update is less than a set distance has not reached a set number, selecting multiple candidate regions in the second video image centered on the updated determined region and updating the determined region again;
when that number reaches the set number, taking the updated determined region as the target area in the second video image.
In another possible implementation of the present disclosure, performing filter processing on the second video image according to the target area in the second video image comprises:
performing filter processing on the image of the target area in the second video image according to the target area in the second video image;
alternatively,
performing filter processing, according to the target area in the second video image, on the image in the second video image other than the image of the target area in the second video image.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily conceive of other embodiments of the invention. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be considered exemplary only, with the true scope and spirit of the invention indicated by the following claims.
It should be understood that the invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.
Claims (11)
1. A filter processing method, characterized by comprising:
obtaining the image of the target area in a first video image of a video, the target area in the first video image being the region where a tracking target is located in the first video image;
determining, according to the image of the target area in the first video image, the target area in a second video image of the video, the second video image being a subsequent frame video image of the first video image, and the target area in the second video image being the region where the tracking target is located in the second video image;
performing filter processing on the second video image according to the target area in the second video image;
wherein determining the target area in the second video image according to the image of the target area in the first video image comprises:
obtaining the characteristic value of the image of the target area in the first video image;
determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image;
wherein determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image comprises:
estimating the target area in the second video image according to the target area in the first video image;
scanning the second video image according to the image of the target area in the first video image, and detecting possible target areas in the second video image;
determining the target area in the second video image according to the estimated target area in the second video image and the detected possible target areas in the second video image;
wherein estimating the target area in the second video image according to the target area in the first video image comprises:
dividing the image of the target area in the first video image into multiple image blocks;
scanning for each image block within a limited region in the second video image centered on the position corresponding to the target area in the first video image, to obtain the position of each image block in the second video image;
calculating the average moving distance of the multiple image blocks from their positions in the first video image to their positions in the second video image;
adding the calculated average moving distance to the position in the second video image corresponding to the target area in the first video image, to obtain the estimated target area in the second video image.
2. The method according to claim 1, characterized in that, when the first video image is the first frame video image of the video on which filter processing is performed, obtaining the image of the target area in the first video image of the video comprises:
outputting the first video image to a user;
receiving the target area in the first video image input by the user;
obtaining the image of the target area in the first video image from the first video image;
alternatively,
identifying the image of the target area in the first video image from the first video image using a target detection algorithm.
3. The method according to claim 1, characterized in that, when the first video image is not the first frame video image of the video on which filter processing is performed, obtaining the image of the target area in the first video image of the video comprises:
determining the target area in the first video image according to the image of the target area in a third video image of the video, the third video image being a frame video image before the first video image, and the target area in the third video image being the region where the tracking target is located in the third video image;
obtaining the image of the target area in the first video image from the first video image.
4. The method according to claim 3, characterized in that the first video image is the frame video image immediately preceding the second video image.
5. The method according to any one of claims 1 to 4, characterized in that performing filter processing on the second video image according to the target area in the second video image comprises:
performing filter processing on the image of the target area in the second video image according to the target area in the second video image;
alternatively,
performing filter processing, according to the target area in the second video image, on the image in the second video image other than the image of the target area in the second video image.
6. A filter processing device, comprising:
an obtaining module, configured to obtain the image of a target area in a first video image of a video, the target area in the first video image being the region where a tracking target is located in the first video image;
a determining module, configured to determine, according to the image of the target area in the first video image, the target area in a second video image of the video, the second video image being a video frame subsequent to the first video image, and the target area in the second video image being the region where the tracking target is located in the second video image;
a processing module, configured to perform filter processing on the second video image according to the target area in the second video image;
wherein the determining module comprises:
a third acquisition submodule, configured to obtain feature values of the image of the target area in the first video image;
a second determining submodule, configured to determine the target area in the second video image according to the feature values of the image of the target area in the first video image;
wherein the second determining submodule comprises:
an estimation submodule, configured to estimate the target area in the second video image according to the target area in the first video image;
a detection submodule, configured to scan the second video image according to the image of the target area in the first video image and detect possible target areas in the second video image;
a third determining submodule, configured to determine the target area in the second video image according to the estimated target area in the second video image and the detected possible target areas in the second video image;
wherein the estimation submodule is configured to: divide the image of the target area in the first video image into multiple image blocks; scan each image block within a limited region of the second video image centered on the position of the target area in the first video image, to obtain the position of each image block in the second video image; calculate the average of the moving distances by which the multiple image blocks move from their positions in the first video image to their positions in the second video image; and add the calculated average moving distance to the position in the second video image corresponding to the target area of the first video image, to obtain the estimated target area in the second video image.
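The estimation submodule of claim 6 describes block-based motion estimation: split the target-area image into blocks, find each block's position in the next frame within a limited search region, and shift the area by the average block displacement. A minimal sketch under the assumption of grayscale frames, square blocks, and sum-of-absolute-differences (SAD) matching (SAD is a common choice, but the patent does not specify the matching criterion; all names are illustrative):

```python
import numpy as np

def best_match(block, search):
    """Return (dy, dx) of the block's best SAD match inside the search window."""
    bh, bw = block.shape
    best, best_pos = None, (0, 0)
    for dy in range(search.shape[0] - bh + 1):
        for dx in range(search.shape[1] - bw + 1):
            sad = np.abs(search[dy:dy+bh, dx:dx+bw].astype(int)
                         - block.astype(int)).sum()
            if best is None or sad < best:
                best, best_pos = sad, (dy, dx)
    return best_pos

def estimate_target_area(prev_frame, next_frame, area, block=8, margin=8):
    """Estimate the target area in next_frame by averaging block displacements.

    area: (x, y, w, h) in prev_frame. Each block of the target-area image is
    searched in a window of +/- margin around its old position (the "limited
    region"); the mean displacement is added to (x, y).
    """
    x, y, w, h = area
    target = prev_frame[y:y+h, x:x+w]
    shifts = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = target[by:by+block, bx:bx+block]
            # limited search region in next_frame around the block's old position
            sy0 = max(y + by - margin, 0)
            sx0 = max(x + bx - margin, 0)
            sy1 = min(y + by + block + margin, next_frame.shape[0])
            sx1 = min(x + bx + block + margin, next_frame.shape[1])
            dy, dx = best_match(blk, next_frame[sy0:sy1, sx0:sx1])
            shifts.append((sy0 + dy - (y + by), sx0 + dx - (x + bx)))
    mean_dy = int(round(np.mean([s[0] for s in shifts])))
    mean_dx = int(round(np.mean([s[1] for s in shifts])))
    return (x + mean_dx, y + mean_dy, w, h)
```

Averaging over several blocks makes the estimate robust to individual mismatches; the claimed device then reconciles this estimate with the candidates found by the detection submodule.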
7. The device according to claim 6, wherein the obtaining module comprises:
an output submodule, configured to output the first video image to a user when the first video image is the first frame video image of the video to undergo filter processing;
a receiving submodule, configured to receive the target area in the first video image input by the user;
a first acquisition submodule, configured to obtain the image of the target area in the first video image from the first video image;
or,
the obtaining module is configured to, when the first video image is the first frame video image of the video to undergo filter processing, identify the image of the target area in the first video image from the first video image using a target detection algorithm.
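For the first frame, claim 7 allows the target area to come either from user input or from a target detection algorithm. The patent does not name a detector (in practice a trained detector such as a face detector would be used); purely as a testable stand-in, the sketch below "detects" a bright object by thresholding and returns its bounding box, falling back to user input when nothing is found:

```python
import numpy as np

def detect_target_area(frame, thresh=128):
    """Toy stand-in for a target detector: bounding box of bright pixels.

    frame: H x W grayscale array. Returns (x, y, w, h), or None if no pixel
    exceeds the threshold (caller then falls back to user-supplied input).
    """
    ys, xs = np.nonzero(frame > thresh)
    if len(ys) == 0:
        return None
    x, y = xs.min(), ys.min()
    return (int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1))
```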
8. The device according to claim 6, wherein the obtaining module comprises:
a first determining submodule, configured to, when the first video image is not the first frame video image of the video to undergo filter processing, determine the target area in the first video image according to the image of the target area in a third video image of the video, the third video image being a video frame preceding the first video image, and the target area in the third video image being the region where the tracking target is located in the third video image;
a second acquisition submodule, configured to obtain the image of the target area in the first video image from the first video image.
9. The device according to claim 8, wherein the first video image is the video frame immediately preceding the second video image.
10. The device according to any one of claims 6-9, wherein the processing module is configured to:
perform filter processing on the image of the target area in the second video image according to the target area in the second video image;
or,
perform filter processing, according to the target area in the second video image, on the image in the second video image other than the image of the target area in the second video image.
11. A computer-readable storage medium, wherein the computer-readable storage medium comprises at least one instruction which, when executed by a processor, performs the filter processing method according to any one of claims 1-5.
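Taken together, the claims describe a per-frame pipeline: obtain the target area in the first frame (user input or detection), propagate it frame to frame, and filter each frame's area or its complement. A minimal sketch of that loop, with the tracker and filter passed in as callables since the patent leaves their internals to the submodules above (all names are illustrative):

```python
import numpy as np

def process_video(frames, initial_area, update_area, apply_filter, inside=True):
    """Per-frame pipeline of the claimed method.

    frames: iterable of H x W x 3 arrays.
    initial_area: (x, y, w, h) for the first frame.
    update_area(prev_frame, next_frame, area) -> new (x, y, w, h).
    apply_filter(image) -> filtered image of the same shape.
    inside: filter the target area (True) or its complement (False).
    """
    area, prev = initial_area, None
    out = []
    for frame in frames:
        if prev is not None:
            # track: the previous frame's area determines the current one
            area = update_area(prev, frame, area)
        x, y, w, h = area
        result = frame.copy()
        if inside:
            result[y:y+h, x:x+w] = apply_filter(frame[y:y+h, x:x+w])
        else:
            result = apply_filter(frame)
            result[y:y+h, x:x+w] = frame[y:y+h, x:x+w]
        out.append(result)
        prev = frame
    return out
```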
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510951946.0A CN105631803B (en) | 2015-12-17 | 2015-12-17 | The method and apparatus of filter processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105631803A CN105631803A (en) | 2016-06-01 |
CN105631803B true CN105631803B (en) | 2019-05-28 |
Family
ID=56046692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510951946.0A Active CN105631803B (en) | 2015-12-17 | 2015-12-17 | The method and apparatus of filter processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105631803B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106327448A (en) * | 2016-08-31 | 2017-01-11 | 上海交通大学 | Picture stylization processing method based on deep learning |
CN108960213A (en) * | 2018-08-16 | 2018-12-07 | Oppo广东移动通信有限公司 | Target tracking method, device, storage medium and terminal |
CN109714623B (en) * | 2019-03-12 | 2021-11-16 | 北京旷视科技有限公司 | Image display method and device, electronic equipment and computer readable storage medium |
CN109993711A (en) * | 2019-03-25 | 2019-07-09 | 维沃移动通信有限公司 | Image processing method and terminal device |
CN110312164A (en) * | 2019-07-24 | 2019-10-08 | Oppo(重庆)智能科技有限公司 | Video processing method and device, computer storage medium and terminal device |
CN110796012B (en) * | 2019-09-29 | 2022-12-27 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and readable storage medium |
CN110930436B (en) * | 2019-11-27 | 2023-04-14 | 深圳市捷顺科技实业股份有限公司 | Target tracking method and device |
CN112055247B (en) * | 2020-09-11 | 2022-07-08 | 北京爱奇艺科技有限公司 | Video playing method, device, system and storage medium |
CN112258556A (en) * | 2020-10-22 | 2021-01-22 | 北京字跳网络技术有限公司 | Method and device for tracking a designated area in a video, readable medium and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101739712A (en) * | 2010-01-25 | 2010-06-16 | 四川大学 | Video-based 3D human face expression cartoon driving method |
CN101742228A (en) * | 2008-11-19 | 2010-06-16 | 新奥特硅谷视频技术有限责任公司 | Preprocessing method and system applied to digital court |
KR20110032347A (en) * | 2009-09-22 | 2011-03-30 | 삼성전자주식회사 | Apparatus and method for extracting character information in a motion picture |
CN101996312A (en) * | 2009-08-18 | 2011-03-30 | 索尼株式会社 | Method and device for tracking targets |
CN103400393A (en) * | 2013-08-21 | 2013-11-20 | 中科创达软件股份有限公司 | Image matching method and system |
CN103985142A (en) * | 2014-05-30 | 2014-08-13 | 上海交通大学 | Federated data association Mean Shift multi-target tracking method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105631803B (en) | Method and apparatus for filter processing | |
CN105528606B (en) | Region recognition method and device | |
CN109446994B (en) | Gesture key point detection method and device, electronic equipment and storage medium | |
CN105491642B (en) | Method and apparatus for network connection | |
CN108010060B (en) | Target detection method and device | |
CN105491289B (en) | Method and device for preventing occlusion when taking pictures | |
CN105631797B (en) | Watermark adding method and device | |
KR102446687B1 (en) | Image processing method and apparatus, electronic device and storage medium | |
CN106228168B (en) | Reflection detection method and device for card images | |
CN104918107B (en) | Recognition processing method and device for video files | |
CN106228556B (en) | Image quality analysis method and device | |
EP2998960A1 (en) | Method and device for video browsing | |
CN107688781A (en) | Face recognition method and device | |
CN105335714B (en) | Photo processing method, device and equipment | |
CN104933700B (en) | Method and apparatus for image content recognition | |
CN109034150A (en) | Image processing method and device | |
CN107911576A (en) | Image processing method, device and storage medium | |
CN108154466A (en) | Image processing method and device | |
CN109034106B (en) | Face data cleaning method and device | |
CN110717399A (en) | Face recognition method and electronic terminal device | |
CN112927122A (en) | Watermark removal method, device and storage medium | |
CN105183755B (en) | Method and device for displaying pictures | |
CN105205093B (en) | Method and device for processing pictures in a gallery | |
CN108717542A (en) | Method, apparatus and computer-readable storage medium for recognizing character areas | |
CN110059548B (en) | Target detection method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||