CN105631803A - Method and device for filter processing

Method and device for filter processing

Info

Publication number: CN105631803A
Authority: CN (China)
Prior art keywords: video image, image, target area, video, target
Legal status: Granted
Application number: CN201510951946.0A
Other languages: Chinese (zh)
Other versions: CN105631803B (en)
Inventors: 陈志军, 李明浩, 侯文迪
Current Assignee: Xiaomi Inc
Original Assignee: Xiaomi Inc
Application filed by Xiaomi Inc
Priority to CN201510951946.0A
Publication of CN105631803A
Application granted
Publication of CN105631803B
Current status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map


Abstract

The invention discloses a method and a device for filter processing, belonging to the technical field of image processing. The method comprises the following steps: acquiring an image of a target area in a first video image of a video, wherein the target area in the first video image is the area where a tracking target is located in the first video image; determining a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image after the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image; and performing filter processing on the second video image according to the target area in the second video image. Filter processing can thus be carried out on video images, which expands the application range of filter processing and improves the user experience.

Description

Method and apparatus for filter processing
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for filter processing.
Background
Filter processing can apply various special effects to an image, such as target blurring and background blurring, and has become a common means of image processing. When performing filter processing, a user first delimits or sets a processing area in a photo, and then selects a specific operation (such as mosaic or background blurring) from the filter functions provided by an image processing application, so that the application can automatically perform filter processing on the processing area. However, current filter processing requires the user to delimit a processing area for each picture, or to set the same processing area for all pictures, so it can only be applied to pictures and its application range is relatively limited.
Disclosure of Invention
To overcome the problem in the related art that the application range of filter processing is limited, the present disclosure provides a method and an apparatus for filter processing.
According to a first aspect of embodiments of the present disclosure, there is provided a method of filter processing, comprising:
acquiring an image of a target area in a first video image of a video, wherein the target area in the first video image is an area where a tracking target is located in the first video image;
determining a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image after the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image;
and carrying out filter processing on the second video image according to the target area in the second video image.
The target area in one frame of video image is determined according to the image of the target area in a preceding frame of video image, and filter processing is performed on that frame according to the determined target area. Filter processing of video images is thereby realized, the application range of filter processing is expanded, and the user experience is improved.
In a possible implementation manner of the present disclosure, when the first video image is a first frame video image of the video subjected to filter processing, the acquiring an image of a target area in the first video image of the video includes:
outputting the first video image to a user;
receiving a target area in the first video image input by the user;
acquiring an image of a target area in the first video image from the first video image;
or,
and identifying an image of a target area in the first video image from the first video image by adopting a target detection algorithm.
In this way, the image of the target area in the first frame of video image of the video to be subjected to filter processing is acquired either through selection by the user or through automatic identification.
In another possible implementation manner of the present disclosure, when the first video image is not a first frame video image of the video subjected to filter processing, the acquiring an image of a target area in the first video image of the video includes:
determining a target area in the first video image according to an image of the target area in a third video image of the video, wherein the third video image is a frame of video image before the first video image, and the target area in the third video image is the area where the tracking target is located in the third video image;
and acquiring an image of a target area in the first video image from the first video image.
Starting from the first frame of video image subjected to filter processing in the video, the target area in each subsequent frame of video image is determined in turn from the image of the target area already determined in a preceding frame, and the image of that target area is acquired in turn for determining the target area in the next frame of video image.
Optionally, the first video image is a video image of a frame preceding the second video image.
Provided that the image of the target area in the frame of video image immediately preceding the second video image can be acquired, determining the target area in the second video image from that image gives the highest accuracy, because the images of the target area in two adjacent frames of video images are the most similar.
In yet another possible implementation manner of the present disclosure, the determining a target region in the second video image according to an image of the target region in the first video image includes:
acquiring a characteristic value of an image of a target area in the first video image;
and determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image.
A target region in a video image subsequent to the first video image is determined using the feature values of the image of the target region in the first video image.
Optionally, the determining a target region in the second video image according to a feature value of an image of the target region in the first video image includes:
estimating a target region in the second video image from a target region in the first video image;
scanning the second video image according to the image of the target area in the first video image, and detecting a possible target area in the second video image;
and determining a target area in the second video image according to the estimated target area in the second video image and the detected possible target area in the second video image.
Tracking the target with the TLD (Tracking-Learning-Detection) algorithm copes with deformation, partial occlusion, and similar changes of the tracked target during tracking, and the tracking effect is stable, robust, and reliable.
Optionally, the determining a target region in the second video image according to a feature value of an image of the target region in the first video image includes:
selecting a plurality of candidate regions in the second video image by taking a determined region as a center, wherein the determined region is a region in the second video image corresponding to a target region in the first video image;
respectively calculating the similarity of characteristic value histograms between the images of the candidate areas and the image of the target area in the first video image;
selecting a candidate region with the largest similarity of feature value histograms with the image of the target region in the first video image from the plurality of candidate regions, and updating the determined region by adopting the selected candidate region;
when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance does not reach the set number of times, selecting a plurality of candidate areas in the second video image by taking the updated determined area as the center, and updating the determined area again;
and when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance reaches the set number of times, taking the updated determined area as the target area in the second video image.
Tracking the target with the MeanShift algorithm locates the target with a short search time and good real-time performance. Moreover, because it relies on statistical features, it is highly robust to noise.
In another possible implementation manner of the present disclosure, the performing filter processing on the second video image according to a target area in the second video image includes:
according to the target area in the second video image, carrying out filter processing on the image of the target area in the second video image;
or,
and according to the target area in the second video image, carrying out filter processing on the images in the second video image except the image of the target area in the second video image.
According to the needs of the user, either the image of the target area or the images other than the image of the target area can be selected for filter processing, so the applicability is good.
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for filter processing, comprising:
an acquisition module, configured to acquire an image of a target area in a first video image of a video, wherein the target area in the first video image is the area where a tracking target is located in the first video image;
a determining module, configured to determine a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image after the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image;
and the processing module is used for carrying out filter processing on the second video image according to the target area in the second video image.
In one possible implementation manner of the present disclosure, the obtaining module includes:
the output sub-module is used for outputting the first video image to a user when the first video image is a first frame video image subjected to filter processing in the video;
the receiving submodule is used for receiving a target area in the first video image input by the user;
the first obtaining submodule is used for obtaining an image of a target area in the first video image from the first video image;
or,
the acquisition module is configured to identify an image of a target area in the first video image from the first video image by using a target detection algorithm when the first video image is a first frame video image in the video subjected to filter processing.
In another possible implementation manner of the present disclosure, the obtaining module includes:
a first determining submodule, configured to determine a target area in a first video image according to an image of a target area in a third video image of the video when the first video image is not a first frame of video image subjected to filter processing in the video, where the third video image is a frame of video image before the first video image, and the target area in the third video image is an area where the tracking target is located in the third video image;
and the second acquisition sub-module is used for acquiring the image of the target area in the first video image from the first video image.
Optionally, the first video image is a video image of a frame preceding the second video image.
In yet another possible implementation manner of the present disclosure, the determining module includes:
a third obtaining submodule, configured to obtain a feature value of an image of a target area in the first video image;
and the second determining submodule is used for determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image.
Optionally, the second determining sub-module includes:
an estimation sub-module for estimating a target region in the second video image from a target region in the first video image;
the detection submodule is used for scanning the second video image according to the image of the target area in the first video image and detecting a possible target area in the second video image;
and the third determining submodule is used for determining the target area in the second video image according to the estimated target area in the second video image and the detected possible target area in the second video image.
Optionally, the second determining sub-module includes:
a selection submodule, configured to select a plurality of candidate regions in the second video image with a determination region as a center, where the determination region is a region in the second video image that corresponds to a target region in the first video image;
a calculation sub-module, configured to calculate similarities of feature value histograms between the images of the plurality of candidate regions and the image of the target region in the first video image, respectively;
an updating sub-module, configured to select, from the plurality of candidate regions, the candidate region whose feature value histogram is most similar to that of the image of the target region in the first video image, and to update the determined region with the selected candidate region;
the judgment sub-module is used for selecting a plurality of candidate areas in the second video image by taking the updated determined area as the center and updating the determined area again when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance does not reach the set number of times; and when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance reaches the set number of times, taking the updated determined area as the target area in the second video image.
In yet another possible implementation manner of the present disclosure, the processing module is configured to,
according to the target area in the second video image, carrying out filter processing on the image of the target area in the second video image;
or,
and according to the target area in the second video image, carrying out filter processing on the images in the second video image except the image of the target area in the second video image.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for filter processing, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring an image of a target area in a first video image of a video, wherein the target area in the first video image is an area where a tracking target is located in the first video image;
determining a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image after the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image;
and carrying out filter processing on the second video image according to the target area in the second video image.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the method and the device have the advantages that the target area in the other frame of video image is determined according to the image of the target area in the one frame of video image, and the filter processing is carried out on the other frame of video image according to the target area in the other frame of video image, so that the filter processing of the video image is realized, the application range of the filter processing is expanded, and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a diagram illustrating an application scenario of a method of filter processing according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of filter processing in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of filter processing in accordance with an exemplary embodiment;
FIGS. 4a-4d are terminal interface diagrams illustrating an implementation of a method of filter processing according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating a filter processing apparatus according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating a filter processing apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating an apparatus for filter processing according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
An application scenario of the method of filter processing according to the embodiments of the present disclosure is briefly described below with reference to fig. 1. Referring to fig. 1, each frame of video image in video 3 contains the same person. Through terminal 2, user 1 performs filter processing on video 3: for example, mosaic processing yields video 4a, in which the person cannot be recognized, while background blurring yields video 4b, in which the person stands out.
In particular, the terminal 2 may be a smartphone, a tablet computer, a smart TV, a multimedia player, a laptop computer, a desktop computer, or the like.
It should be noted that the above application scenario, video content, filter processing manner, and terminal type are merely examples, and the disclosure is not limited thereto.
Fig. 2 is a flowchart illustrating a method of filter processing, as shown in fig. 2, for use in a terminal, according to an exemplary embodiment, including the following steps.
In step S101, an image of a target area in a first video image of a video is acquired.
In this embodiment, the target area in the first video image is an area where the tracking target is located in the first video image.
In step S102, a target area in a second video image of the video is determined from an image of the target area in the first video image.
In this embodiment, the second video image is a frame of video image after the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image. The precedence relationship between the first video image and the second video image is determined by the shooting order or playing order of the video.
In step S103, filter processing is performed on the second video image according to the target region in the second video image.
According to the embodiment of the disclosure, the target area in the other frame of video image is determined according to the image of the target area in the one frame of video image, and the filter processing is performed on the other frame of video image according to the target area in the other frame of video image, so that the filter processing on the video image is realized, the application range of the filter processing is expanded, and the user experience is improved.
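To make the three-step flow concrete, the following is a minimal sketch in Python with OpenCV. It is an illustration only, not the patented implementation: the `track` and `apply_filter` callables stand in for the tracking algorithms (such as TLD or MeanShift) and filter operations (such as mosaic or background blurring) described below, and all names are hypothetical.

```python
import cv2

def process_video(path, initial_region, track, apply_filter):
    """Run the three steps of Fig. 2 over a video file.

    initial_region: (x, y, w, h) of the tracking target in the first frame.
    track(prev_frame, prev_region, frame): returns the target region in
        `frame`, or None if the tracking target is absent (occluded, gone).
    apply_filter(frame, region): returns the filter-processed frame.
    """
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()                      # first video image (S101)
    if not ok:
        return []
    region = initial_region
    out = [apply_filter(prev, region)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found = track(prev, region, frame)     # determine target area (S102)
        if found is not None:
            out.append(apply_filter(frame, found))   # filter the frame (S103)
            # a frame with a known target becomes the reference for the next
            prev, region = frame, found
        else:
            out.append(frame)   # no target: keep the last good reference frame
    cap.release()
    return out
```

Keeping the last frame in which the target was found as the reference when tracking fails matches the fallback behavior described later for occluded or vanished targets.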
Fig. 3 is a flowchart illustrating a method of filter processing, as shown in fig. 3, for use in a terminal, according to an exemplary embodiment, including the following steps.
In step S201, an initial frame video image of a video is acquired.
In this embodiment, the frames of video images in the video are arranged in shooting order or playing order. The initial frame video image is the first frame of video image in the video to be subjected to filter processing.
In step S202, a target region in the initial frame video image is determined.
In this embodiment, the target area in the initial frame video image is an area where the tracking target is located in the initial frame video image. In practical applications, the tracking target may be an object, such as a human being, an animal, etc., included in at least two video images of the video.
In an implementation manner of this embodiment, the step S202 may include:
outputting the initial frame video image to a user;
a target area in an initial frame video image input by a user is received.
For example, fig. 4a shows the terminal outputting the initial frame video image, and fig. 4b shows the target area input by the user as a box added relative to fig. 4a.
In practical applications, an interface for inputting the target area may be provided on the initial frame video image. For example, if a touch screen is used to output the initial frame video image, the user can slide a finger from one point to another on the image; the terminal forms a rectangular frame with the line between the two points as its diagonal, and the area corresponding to this rectangular frame is the target area. As another example, if a non-touch display screen is used to output the initial frame video image, the user can drag from one point to another on the image with an input device such as a mouse while holding the selection; similarly, the terminal forms a rectangular frame with the line between the two points as its diagonal, and the area corresponding to this rectangular frame is the target area.
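As a small illustration of the rectangle construction just described (a hypothetical helper; the patent does not prescribe an API), the two endpoints of the slide or drag define the diagonal:

```python
def rect_from_drag(p1, p2):
    """Build the (x, y, w, h) rectangle whose diagonal joins p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    return min(x1, x2), min(y1, y2), abs(x2 - x1), abs(y2 - y1)
```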
In another implementation manner of this embodiment, step S202 may include:
identifying an image of the target area from the initial frame video image by using a target detection algorithm, thereby determining the target area in the initial frame video image.
In a specific implementation, a large set containing redundant features can be collected; a machine learning method is then used to find, within this feature set, the features that best characterize the target object, and a classifier is constructed from them to detect the target object and thereby obtain the target area.
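The patent does not name a specific detector. As one plausible stand-in, OpenCV's bundled Haar-cascade classifier follows exactly this pattern (features selected by machine learning and assembled into a classifier); a sketch, assuming the tracking target is a face:

```python
import cv2

def detect_target(frame):
    """Detect a face as the target area; returns (x, y, w, h) or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```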
In step S203, filter processing is performed on the initial frame video image according to the target area in the initial frame video image.
In an implementation manner of this embodiment, step S203 may include:
and performing filter processing on the image of the target area in the initial frame video image according to the target area in the initial frame video image.
For example, fig. 4c shows the initial frame video image after mosaic processing of the image of the target area: by degrading the tonal detail of the image of the target area and scrambling its color blocks, the image of the target area is made unrecognizable and is thereby protected.
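A common way to obtain this mosaic effect (the patent does not fix the method) is to downsample the target region and scale it back up, so each coarse block collapses to a single color; a sketch with a hypothetical `mosaic_region` helper:

```python
import cv2

def mosaic_region(frame, region, block=16):
    """Pixelate the target region in place: shrink it, then enlarge it with
    nearest-neighbour interpolation so each block becomes one flat colour."""
    x, y, w, h = region
    roi = frame[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    frame[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return frame
```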
In another implementation manner of this embodiment, step S203 may include:
and according to the target area in the initial frame video image, performing filter processing on the images in the initial frame video image except for the image of the target area in the initial frame video image.
For example, fig. 4d shows the initial frame video image after background blurring of the images other than the image of the target area: the reduced depth of field serves to highlight the image of the target area.
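Background blurring can be sketched the same way: blur the whole frame and paste the sharp target region back. This is an illustrative assumption, not the patent's prescribed filter:

```python
import cv2

def blur_background(frame, region, ksize=31):
    """Blur everything except the target region, mimicking a shallow
    depth of field so the target stands out."""
    x, y, w, h = region
    sharp = frame[y:y + h, x:x + w].copy()
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    blurred[y:y + h, x:x + w] = sharp
    return blurred
```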
In another implementation manner of this embodiment, the method may further include:
and receiving a filter operation instruction.
Accordingly, the step S203 may include:
and according to the target area in the initial frame video image, carrying out filter processing on the image of the target area in the initial frame video image according to the filter operation instruction.
In this embodiment, the filter operation instruction may be an instruction for mosaic processing, background blurring, or the like. By receiving the filter operation instruction, the user can select different filter processing for the video as required.
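The instruction can then simply select among the available filters. A sketch, reusing the hypothetical `mosaic_region` and `blur_background` helpers above; the instruction strings are likewise illustrative assumptions:

```python
def apply_instruction(frame, region, instruction):
    """Dispatch a filter operation instruction to the matching filter."""
    filters = {
        "mosaic": mosaic_region,             # filter the target region itself
        "background_blur": blur_background,  # filter everything else
    }
    return filters[instruction](frame, region)
```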
In step S204, an image of a target region in the initial frame video image is acquired.
It can be seen that in step S204, the image of the target area in the initial frame video image can be acquired from the initial frame video image after the target area input by the user is received; alternatively, the image of the target area in the initial frame video image can be identified from the initial frame video image by using a target detection algorithm.
In step S205, a target region in a second frame video image of the video is determined from an image of the target region in the initial frame video image.
In this embodiment, the second frame video image is a frame video image subsequent to the initial frame video image.
In an implementation manner of this embodiment, the step S205 may include:
acquiring a characteristic value of an image of a target area in an initial frame video image;
and determining the target area in the second frame video image according to the characteristic value of the image of the target area in the initial frame video image.
In practical applications, a Tracking-Learning-Detection (TLD) algorithm or a mean shift (mean shift) algorithm may be used to determine the target region in the second frame of video image.
Optionally, when the TLD algorithm is adopted, determining the target region in the second frame video image according to the feature value of the image of the target region in the initial frame video image may include:
estimating a target area in the second frame video image according to the target area in the initial frame video image;
scanning a second frame video image according to the image of the target area in the initial frame video image, and detecting a possible target area in the second frame video image;
and determining the target area in the second frame video image according to the estimated target area in the second frame video image and the detected possible target area in the second frame video image.
Preferably, estimating the target region in the second frame video image from the target region in the initial frame video image may include:
dividing an image of a target area in an initial frame video image into a plurality of image blocks;
scanning for each image block within a limited area of the second frame video image centered on that image block's position in the initial frame video image, to obtain the position of each image block in the second frame video image;
calculating a moving distance average value of a plurality of image blocks moving from a position in an initial frame video image to a position in a second frame video image;
and offsetting the position in the second frame video image that corresponds to the target area in the initial frame video image by the calculated average moving distance, to obtain the estimated target area in the second frame video image.
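A sketch of this block-wise estimation, using template matching to locate each block within its limited search area; the patent does not specify the block-matching method, so `cv2.matchTemplate` and the grid and margin sizes here are assumptions:

```python
import cv2
import numpy as np

def estimate_region(prev, region, frame, grid=4, margin=20):
    """Estimate the target region in `frame`: track a grid of image blocks
    from `prev` and shift the region by their mean displacement."""
    x, y, w, h = region
    bw, bh = w // grid, h // grid
    if bw == 0 or bh == 0:
        return None
    shifts = []
    for gy in range(grid):
        for gx in range(grid):
            bx, by = x + gx * bw, y + gy * bh
            block = prev[by:by + bh, bx:bx + bw]
            # limited search area centred on the block's old position
            sx, sy = max(0, bx - margin), max(0, by - margin)
            search = frame[sy:by + bh + margin, sx:bx + bw + margin]
            if search.shape[0] < bh or search.shape[1] < bw:
                continue
            res = cv2.matchTemplate(search, block, cv2.TM_CCOEFF_NORMED)
            _, _, _, (mx, my) = cv2.minMaxLoc(res)   # best match position
            shifts.append((sx + mx - bx, sy + my - by))
    if not shifts:
        return None
    dx, dy = np.mean(shifts, axis=0)    # average moving distance of the blocks
    return int(x + dx), int(y + dy), w, h
```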
Preferably, scanning the second frame video image according to the image of the target region in the initial frame video image, and detecting a possible target region in the second frame video image may include:
selecting partial images in the second frame video image in sequence by adopting a scanning window with the same size as the target area in the initial frame video image;
comparing the similarity of the selected partial image with the image of the target area in the initial frame video image;
and taking every partial image whose similarity to the image of the target area in the initial frame video image exceeds a set threshold as a possible image of the target area in the second frame video image, and taking the areas where these possible images are located as the possible target areas in the second frame video image.
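Normalized cross-correlation over the whole frame implements exactly this scan: the correlation surface scores every window position at once, and thresholding it yields the possible target areas. A sketch, where the similarity measure and the 0.8 threshold are assumptions:

```python
import cv2
import numpy as np

def detect_candidates(frame, target_image, threshold=0.8):
    """Scan `frame` with a window the size of `target_image` and keep every
    position whose similarity exceeds the set threshold."""
    th, tw = target_image.shape[:2]
    res = cv2.matchTemplate(frame, target_image, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(res >= threshold)
    return [(int(px), int(py), tw, th) for px, py in zip(xs, ys)]
```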
Preferably, determining the target area in the second frame video image according to the estimated target area in the second frame video image and the detected possible target area in the second frame video image comprises:
and when the estimated target area in the second frame video image coincides with one of the obtained possible target areas in the second frame video image, taking the estimated target area in the second frame video image as the target area in the second frame video image.
In practical application, the method may further include:
and when the estimated target area in the second frame video image coincides with none of the obtained possible target areas in the second frame video image, judging that no target area exists in the second frame video image.
In addition, the estimated target area in the second frame video image can be used to evaluate whether the detected possible target areas in the second frame video image are correct, which provides samples for scanning subsequent video images and gradually improves the scanning accuracy; this is the learning component of TLD.
Alternatively, when the MeanShift algorithm is adopted, determining the target area in the second frame video image according to the feature value of the image of the target area in the initial frame video image may include:
selecting a plurality of candidate areas in the second frame video image centered on the determined area, wherein the determined area is the area in the second frame video image corresponding to the target area in the initial frame video image;
respectively calculating the similarity of feature value histograms between the images of the candidate areas and the image of the target area in the initial frame video image;
selecting, from the plurality of candidate areas, the candidate area whose feature value histogram is most similar to that of the image of the target area in the initial frame video image, and updating the determined area with the selected candidate area;
when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance has not reached the set number of times, selecting a plurality of candidate areas in the second frame video image centered on the updated determined area, and updating the determined area again;
and when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance reaches the set number of times, taking the updated determined area as the target area in the second frame video image.
The histogram of feature values is typically a color histogram.
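A literal sketch of the iteration described above, using a hue histogram as the feature value (color) histogram and histogram correlation as the similarity; the candidate layout, step size, set distance, and set number of times are all illustrative assumptions, and the counting of small moves is read here as consecutive:

```python
import cv2
import numpy as np

def color_hist(image):
    """Feature-value histogram: a normalised hue histogram."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def mean_shift_track(frame, region, target_hist,
                     step=8, set_distance=2.0, set_times=3, max_iters=50):
    """Recentre the determined region on the best-matching candidate until
    `set_times` updates in a row move less than `set_distance`."""
    x, y, w, h = region                       # the determined region
    small_moves = 0
    for _ in range(max_iters):
        best, best_sim = (x, y), -1.0
        for dy in (-step, 0, step):           # candidates around the centre
            for dx in (-step, 0, step):
                cx, cy = x + dx, y + dy
                if cx < 0 or cy < 0 or cx + w > frame.shape[1] or cy + h > frame.shape[0]:
                    continue
                sim = cv2.compareHist(color_hist(frame[cy:cy + h, cx:cx + w]),
                                      target_hist, cv2.HISTCMP_CORREL)
                if sim > best_sim:
                    best_sim, best = sim, (cx, cy)
        dist = np.hypot(best[0] - x, best[1] - y)
        x, y = best                            # update the determined region
        small_moves = small_moves + 1 if dist < set_distance else 0
        if small_moves >= set_times:           # converged
            break
    return x, y, w, h
```

Here `target_hist` would be `color_hist` applied to the image of the target area in the initial frame video image.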
In step S206, filter processing is performed on the second frame video image according to the target region in the second frame video image.
Alternatively, the filter processing performed in step S206 may be substantially the same as that performed in step S203, except that the processing target is changed from the initial frame video image in step S203 to the second frame video image in step S206, and will not be described in detail here.
In step S207, an image of the target area in the second frame video image is acquired.
It can be seen that in step S207, once the target area in the second frame video image has been determined in step S205, the image of the target area in the second frame video image can be acquired from the second frame video image.
In step S208, a target area in a third frame video image of the video is determined from an image of the target area in the second frame video image.
In this embodiment, the third frame of video image is a frame of video image subsequent to the second frame of video image.
Step S208 may be substantially the same as step S205, except that the video image whose target area is determined changes from the second frame video image in step S205 to the third frame video image in step S208, and the reference image of the target area changes from the initial frame video image in step S205 to the second frame video image in step S208; details are not repeated here.
It should be noted that if the tracking target is absent from the second frame video image due to occlusion, disappearance, or the like, the image of the target area obtained in step S207 falls back to the image of the target area in the initial frame video image, and step S208 then determines the target area in the third frame video image from that image. Extending to the fourth frame video image, the fifth frame video image, and so on: the image acquired in step S207 is normally the image of the target area in the frame immediately preceding the current frame video image, but when no image of the target area exists in that preceding frame, it is the image of the target area in the video image closest to the current frame video image among the preceding video images in which an image of the target area exists.
In step S209, filter processing is performed on the third frame video image in accordance with the target area in the third frame video image.
Alternatively, the filter processing performed in step S209 may be substantially the same as that performed in step S203, except that the processing target changes from the initial frame video image in step S203 to the third frame video image in step S209; details are not repeated here.
It should be noted that, following the above steps S201 to S209, the filter processing of the fourth frame video image, the fifth frame video image, and so on can be deduced in turn, and is not listed here.
According to the embodiment of the disclosure, the target area in the other frame of video image is determined according to the image of the target area in the one frame of video image, and the filter processing is performed on the other frame of video image according to the target area in the other frame of video image, so that the filter processing on the video image is realized, the application range of the filter processing is expanded, and the user experience is improved.
Fig. 5 is a block diagram illustrating an apparatus for filter processing according to an exemplary embodiment, and referring to fig. 5, the apparatus includes an acquisition module 301, a determination module 302, and a processing module 303.
The obtaining module 301 is configured to obtain an image of a target region in a first video image of a video, where the target region in the first video image is a region where a tracking target is located in the first video image.
The determining module 302 is configured to determine a target region in a second video image of the video according to an image of the target region in the first video image, where the second video image is a frame of video image after the first video image, and the target region in the second video image is a region where the tracking target is located in the second video image.
The processing module 303 is configured to filter the second video image according to the target area in the second video image.
According to the embodiment of the disclosure, the target area in the other frame of video image is determined according to the image of the target area in the one frame of video image, and the filter processing is performed on the other frame of video image according to the target area in the other frame of video image, so that the filter processing on the video image is realized, the application range of the filter processing is expanded, and the user experience is improved.
Fig. 6 is a block diagram illustrating an apparatus for filter processing according to an exemplary embodiment, and referring to fig. 6, the apparatus includes an acquisition module 401, a determination module 402, and a processing module 403.
The acquiring module 401 is configured to acquire an image of a target region in a first video image of a video, where the target region in the first video image is a region where a tracking target is located in the first video image.
The determining module 402 is configured to determine a target region in a second video image of the video according to an image of the target region in the first video image, where the second video image is a frame of video image after the first video image, and the target region in the second video image is a region where the tracking target is located in the second video image.
The processing module 403 is configured to perform filter processing on the second video image according to the target area in the second video image.
In an implementation manner of this embodiment, the obtaining module 401 may include an output submodule 401a, a receiving submodule 401b, and a first obtaining submodule 401c.
The output sub-module 401a is configured to output the first video image to the user when the first video image is a first frame video image of the video subjected to filter processing.
The receiving sub-module 401b is configured to receive a user input of a target area in the first video image.
The first acquisition sub-module 401c is configured to acquire an image of a target region in a first video image from the first video image.
In another implementation manner of this embodiment, the obtaining module 401 may be configured to, when the first video image is a first frame video image of a video that is subjected to filter processing, identify an image of a target area in the first video image from the first video image by using a target detection algorithm.
In yet another implementation manner of this embodiment, the obtaining module 401 may include a first determining submodule 401d and a second obtaining submodule 401e.
The first determining sub-module 401d is configured to determine a target area in a first video image according to an image of the target area in a third video image of the video when the first video image is not a first frame video image subjected to filter processing in the video, where the third video image is a frame video image before the first video image, and the target area in the third video image is an area where a tracking target is located in the third video image.
The second acquisition submodule 401e is configured to acquire an image of a target region in the first video image from the first video image.
Alternatively, the first video image may be a video image of a frame preceding the second video image.
In yet another implementation manner of this embodiment, the determining module 402 may include a third obtaining sub-module 402a and a second determining sub-module 402b.
The third obtaining sub-module 402a is configured to obtain feature values of an image of a target area in the first video image.
The second determining submodule 402b is configured to determine the target area in the second video image based on the feature value of the image of the target area in the first video image.
Optionally, the second determining submodule 402b may include an estimation submodule 402ba, a detection submodule 402bb, and a third determining submodule 402bc.
The estimation submodule 402ba is configured to estimate a target area in the second video image from a target area in the first video image.
The detection sub-module 402bb is configured to scan the second video image according to the image of the target region in the first video image, and detect a possible target region in the second video image.
The third determining submodule 402bc is configured to determine the target region in the second video image based on the estimated target region in the second video image and the detected possible target region in the second video image.
Optionally, the second determining sub-module 402b may include a selecting sub-module 402bd, a calculating sub-module 402be, an updating sub-module 402bf, and a determining sub-module 402bg.
The selecting sub-module 402bd is configured to select a plurality of candidate regions in the second video image with a determined region as a center, the determined region being a region in the second video image corresponding to the target region in the first video image.
The calculation sub-module 402be is configured to calculate similarities of feature value histograms between the images of the plurality of candidate regions and the image of the target region in the first video image, respectively.
The updating sub-module 402bf is configured to select, from the plurality of candidate regions, the candidate region whose feature value histogram is most similar to that of the image of the target region in the first video image, and to update the determined region with the selected candidate region.
The determining submodule 402bg is configured to select a plurality of candidate areas in the second video image with the updated determination area as a center and update the determination area again when the number of times that the distance between the updated determination area and the determination area before updating is smaller than the set distance does not reach the set number of times; and when the number of times that the distance between the updated determined area and the determined area before updating is smaller than the set distance reaches the set number of times, taking the updated determined area as the target area in the second video image.
In another implementation manner of this embodiment, the processing module 403 may be configured to perform filter processing on an image of a target area in the second video image according to the target area in the second video image; or, according to the target area in the second video image, performing filter processing on the images in the second video image except for the image of the target area in the second video image.
According to the embodiment of the disclosure, the target area in the other frame of video image is determined according to the image of the target area in the one frame of video image, and the filter processing is performed on the other frame of video image according to the target area in the other frame of video image, so that the filter processing on the video image is realized, the application range of the filter processing is expanded, and the user experience is improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an apparatus 800 for filter processing according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 806 provides power to the various components of device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or may have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the apparatus 800. For example, the sensor assembly 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor assembly 814 may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a method of filter processing, the method comprising:
acquiring an image of a target area in a first video image of a video, wherein the target area in the first video image is an area where a tracking target is located in the first video image;
determining a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image after the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image;
and carrying out filter processing on the second video image according to the target area in the second video image.
In an implementation manner of this embodiment, when the first video image is a first frame video image of the video subjected to filter processing, the acquiring an image of a target area in the first video image of the video includes:
outputting the first video image to a user;
receiving a target area in the first video image input by the user;
acquiring an image of a target area in the first video image from the first video image;
or,
and identifying an image of a target area in the first video image from the first video image by adopting a target detection algorithm.
In another implementation manner of this embodiment, when the first video image is not a first frame video image of the video subjected to filter processing, the acquiring an image of a target area in the first video image of the video includes:
determining a target area in the first video image according to an image of the target area in a third video image of the video, wherein the third video image is a frame of video image before the first video image, and the target area in the third video image is the area where the tracking target is located in the third video image;
and acquiring an image of a target area in the first video image from the first video image.
Optionally, the first video image is a video image of a frame preceding the second video image.
Provided that the image of the target area in the frame of video image immediately preceding the second video image can be acquired, determining the target area in the second video image from that image gives the highest accuracy, because the images of the target area in two adjacent frames of video images are the most similar.
In another implementation manner of this embodiment, the determining the target area in the second video image according to the image of the target area in the first video image includes:
acquiring a characteristic value of an image of a target area in the first video image;
and determining the target area in the second video image according to the characteristic value of the image of the target area in the first video image.
Optionally, the determining a target area in the second video image according to the feature value of the image of the target area in the first video image includes:
estimating a target area in the second video image from the target area in the first video image;
scanning the second video image according to the image of the target area in the first video image, and detecting a possible target area in the second video image; and
determining the target area in the second video image according to the estimated target area and the detected possible target area, as in the estimate-and-detect sketch below.
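The disclosure fixes neither the estimator nor the detector, so the following is only one plausible reading: the estimate reuses the previous target area (adjacent frames change little), the detection scans the second video image by template matching against the target-area image, and the two are fused by weighting the detection with its matching score. All three choices are assumptions:

```python
import cv2
import numpy as np

def track_estimate_detect(prev_frame, prev_box, frame):
    """Combine an estimated target area with a detected possible target
    area to determine the target area in the new frame."""
    x, y, w, h = prev_box
    est = np.array([x, y], dtype=np.float32)     # estimate: previous position
    # detect: scan the new frame for the previous target-area image
    template = prev_frame[y:y + h, x:x + w]
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    det = np.array(best_loc, dtype=np.float32)
    # fuse: trust the detection in proportion to its matching score
    alpha = max(0.0, float(best_score))
    fx, fy = (1.0 - alpha) * est + alpha * det
    return int(round(fx)), int(round(fy)), w, h
```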
Optionally, the determining a target area in the second video image according to the feature value of the image of the target area in the first video image includes:
selecting a plurality of candidate areas in the second video image centered on a determined area, wherein the determined area is the area in the second video image that corresponds to the target area in the first video image;
calculating, for each candidate area, the similarity between the feature value histogram of the candidate area's image and that of the image of the target area in the first video image;
selecting, from the plurality of candidate areas, the candidate area whose feature value histogram is most similar to that of the image of the target area in the first video image, and updating the determined area using the selected candidate area;
when the number of times that the distance between the updated determined area and the determined area before the update is smaller than a set distance has not reached a set number of times, selecting a plurality of candidate areas in the second video image centered on the updated determined area and updating the determined area again; and
when that number of times reaches the set number of times, taking the updated determined area as the target area in the second video image; a mean-shift-style sketch of this iteration follows.
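This candidate-area iteration resembles a mean-shift tracker over feature-value histograms. A sketch under stated assumptions: hue histograms as the feature value, correlation via cv2.compareHist as the similarity, and a fixed 3x3 offset grid of candidate areas; none of these specifics are dictated by the text:

```python
import cv2
import numpy as np

def hue_histogram(image):
    """Feature-value histogram of a region's image (hue channel, 16 bins)."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def track_by_histogram(target_image, frame, box, step=4,
                       set_distance=2.0, set_times=3, max_iter=50):
    """Re-center the determined area on whichever nearby candidate area's
    histogram best matches the first frame's target-area image."""
    ref = hue_histogram(target_image)
    x, y, w, h = box
    times = 0
    for _ in range(max_iter):
        best_xy, best_sim = (x, y), -np.inf
        # candidate areas centered on (and offset around) the determined area
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                cx, cy = x + dx, y + dy
                if cx < 0 or cy < 0 or cx + w > frame.shape[1] or cy + h > frame.shape[0]:
                    continue
                sim = cv2.compareHist(ref,
                                      hue_histogram(frame[cy:cy + h, cx:cx + w]),
                                      cv2.HISTCMP_CORREL)
                if sim > best_sim:
                    best_xy, best_sim = (cx, cy), sim
        moved = np.hypot(best_xy[0] - x, best_xy[1] - y)
        x, y = best_xy
        if moved < set_distance:      # update moved less than the set distance
            times += 1
            if times >= set_times:    # set number of times reached: converged
                break
    return x, y, w, h
```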
In another possible implementation of the present disclosure, the performing filter processing on the second video image according to the target area in the second video image includes:
performing filter processing on the image of the target area in the second video image according to the target area in the second video image;
or,
performing filter processing on the part of the second video image other than the image of the target area, according to the target area in the second video image; both branches are sketched below.
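Both branches reduce to masking one effect by the target area; a minimal sketch, assuming a Gaussian blur as the filter and a rectangular target area (the claims cover any filter operation):

```python
import cv2

def apply_filter(frame, box, blur_target=True):
    """Filter the target-area image (blur_target=True) or everything in
    the frame except the target-area image (blur_target=False)."""
    x, y, w, h = box
    out = frame.copy()
    if blur_target:
        out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (31, 31), 0)
    else:
        out = cv2.GaussianBlur(frame, (31, 31), 0)
        out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]   # keep the target sharp
    return out
```

Swapping the GaussianBlur call for a pixelation step would give a mosaic-style effect instead; the masking logic is unchanged.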
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (17)

1. A method of filter processing, comprising:
acquiring an image of a target area in a first video image of a video, wherein the target area in the first video image is the area where a tracking target is located in the first video image;
determining a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image following the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image; and
performing filter processing on the second video image according to the target area in the second video image.
2. The method according to claim 1, wherein when the first video image is the first video frame of the video to be subjected to filter processing, the acquiring an image of a target area in the first video image of the video comprises:
outputting the first video image to a user;
receiving the target area in the first video image as input by the user; and
acquiring the image of the target area in the first video image from the first video image;
or,
identifying the image of the target area in the first video image from the first video image using a target detection algorithm.
3. The method of claim 1, wherein when the first video image is not the first video frame of the video to be subjected to filter processing, the acquiring an image of a target area in the first video image of the video comprises:
determining the target area in the first video image according to an image of a target area in a third video image of the video, wherein the third video image is a frame of video image preceding the first video image, and the target area in the third video image is the area where the tracking target is located in the third video image; and
acquiring the image of the target area in the first video image from the first video image.
4. The method of claim 3, wherein the first video image is the video image of the frame immediately preceding the second video image.
5. The method according to any one of claims 1-4, wherein the determining a target area in the second video image according to the image of the target area in the first video image comprises:
acquiring a feature value of the image of the target area in the first video image; and
determining the target area in the second video image according to the feature value of the image of the target area in the first video image.
6. The method of claim 5, wherein the determining the target area in the second video image according to the feature value of the image of the target area in the first video image comprises:
estimating a target area in the second video image from the target area in the first video image;
scanning the second video image according to the image of the target area in the first video image, and detecting a possible target area in the second video image; and
determining the target area in the second video image according to the estimated target area in the second video image and the detected possible target area in the second video image.
7. The method of claim 5, wherein the determining the target area in the second video image according to the feature value of the image of the target area in the first video image comprises:
selecting a plurality of candidate areas in the second video image centered on a determined area, wherein the determined area is the area in the second video image that corresponds to the target area in the first video image;
calculating, for each candidate area, the similarity between the feature value histogram of the candidate area's image and that of the image of the target area in the first video image;
selecting, from the plurality of candidate areas, the candidate area whose feature value histogram is most similar to that of the image of the target area in the first video image, and updating the determined area using the selected candidate area;
when the number of times that the distance between the updated determined area and the determined area before the update is smaller than a set distance has not reached a set number of times, selecting a plurality of candidate areas in the second video image centered on the updated determined area and updating the determined area again; and
when that number of times reaches the set number of times, taking the updated determined area as the target area in the second video image.
8. The method according to any one of claims 1-4, wherein the performing filter processing on the second video image according to the target area in the second video image comprises:
performing filter processing on the image of the target area in the second video image according to the target area in the second video image;
or,
performing filter processing on the part of the second video image other than the image of the target area, according to the target area in the second video image.
9. An apparatus for filter processing, comprising:
an acquisition module configured to acquire an image of a target area in a first video image of a video, wherein the target area in the first video image is the area where a tracking target is located in the first video image;
a determining module configured to determine a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image following the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image; and
a processing module configured to perform filter processing on the second video image according to the target area in the second video image.
10. The apparatus of claim 9, wherein the acquisition module comprises:
an output sub-module configured to output the first video image to a user when the first video image is the first video frame of the video to be subjected to filter processing;
a receiving sub-module configured to receive the target area in the first video image as input by the user; and
a first acquisition sub-module configured to acquire the image of the target area in the first video image from the first video image;
or,
the acquisition module is configured to identify the image of the target area in the first video image from the first video image using a target detection algorithm when the first video image is the first video frame of the video to be subjected to filter processing.
11. The apparatus of claim 9, wherein the acquisition module comprises:
a first determining sub-module configured to determine, when the first video image is not the first video frame of the video to be subjected to filter processing, the target area in the first video image according to an image of a target area in a third video image of the video, wherein the third video image is a frame of video image preceding the first video image, and the target area in the third video image is the area where the tracking target is located in the third video image; and
a second acquisition sub-module configured to acquire the image of the target area in the first video image from the first video image.
12. The apparatus of claim 11, wherein the first video image is the video image of the frame immediately preceding the second video image.
13. The apparatus of any one of claims 9-12, wherein the determining module comprises:
a third acquisition sub-module configured to acquire a feature value of the image of the target area in the first video image; and
a second determining sub-module configured to determine the target area in the second video image according to the feature value of the image of the target area in the first video image.
14. The apparatus of claim 13, wherein the second determining sub-module comprises:
an estimation sub-module configured to estimate a target area in the second video image from the target area in the first video image;
a detection sub-module configured to scan the second video image according to the image of the target area in the first video image and to detect a possible target area in the second video image; and
a third determining sub-module configured to determine the target area in the second video image according to the estimated target area in the second video image and the detected possible target area in the second video image.
15. The apparatus of claim 13, wherein the second determining sub-module comprises:
a selection sub-module configured to select a plurality of candidate areas in the second video image centered on a determined area, wherein the determined area is the area in the second video image that corresponds to the target area in the first video image;
a calculation sub-module configured to calculate, for each candidate area, the similarity between the feature value histogram of the candidate area's image and that of the image of the target area in the first video image;
an updating sub-module configured to select, from the plurality of candidate areas, the candidate area whose feature value histogram is most similar to that of the image of the target area in the first video image, and to update the determined area using the selected candidate area; and
a judgment sub-module configured to: when the number of times that the distance between the updated determined area and the determined area before the update is smaller than a set distance has not reached a set number of times, select a plurality of candidate areas in the second video image centered on the updated determined area and update the determined area again; and when that number of times reaches the set number of times, take the updated determined area as the target area in the second video image.
16. The apparatus according to any one of claims 9-12, wherein the processing module is configured to:
perform filter processing on the image of the target area in the second video image according to the target area in the second video image;
or,
perform filter processing on the part of the second video image other than the image of the target area, according to the target area in the second video image.
17. An apparatus for filter processing, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire an image of a target area in a first video image of a video, wherein the target area in the first video image is the area where a tracking target is located in the first video image;
determine a target area in a second video image of the video according to the image of the target area in the first video image, wherein the second video image is a frame of video image following the first video image, and the target area in the second video image is the area where the tracking target is located in the second video image; and
perform filter processing on the second video image according to the target area in the second video image.
CN201510951946.0A 2015-12-17 2015-12-17 The method and apparatus of filter processing Active CN105631803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510951946.0A CN105631803B (en) 2015-12-17 2015-12-17 The method and apparatus of filter processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510951946.0A CN105631803B (en) 2015-12-17 2015-12-17 The method and apparatus of filter processing

Publications (2)

Publication Number Publication Date
CN105631803A 2016-06-01
CN105631803B CN105631803B (en) 2019-05-28

Family

ID=56046692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510951946.0A Active CN105631803B (en) 2015-12-17 2015-12-17 The method and apparatus of filter processing

Country Status (1)

Country Link
CN (1) CN105631803B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742228A (en) * 2008-11-19 2010-06-16 新奥特硅谷视频技术有限责任公司 Preprocessing method and system applied to digital court
CN101996312A (en) * 2009-08-18 2011-03-30 索尼株式会社 Method and device for tracking targets
KR20110032347A (en) * 2009-09-22 2011-03-30 삼성전자주식회사 Apparatus and method for extracting character information in a motion picture
CN101739712A (en) * 2010-01-25 2010-06-16 四川大学 Video-based 3D human face expression cartoon driving method
CN103400393A (en) * 2013-08-21 2013-11-20 中科创达软件股份有限公司 Image matching method and system
CN103985142A (en) * 2014-05-30 2014-08-13 上海交通大学 Federated data association Mean Shift multi-target tracking method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327448A (en) * 2016-08-31 2017-01-11 上海交通大学 Picture stylization processing method based on deep learning
CN108960213A (en) * 2018-08-16 2018-12-07 Oppo广东移动通信有限公司 Method for tracking target, device, storage medium and terminal
CN109714623A (en) * 2019-03-12 2019-05-03 北京旷视科技有限公司 Image presentation method, device, electronic equipment and computer readable storage medium
WO2020192298A1 (en) * 2019-03-25 2020-10-01 维沃移动通信有限公司 Image processing method and terminal device
CN110312164A (en) * 2019-07-24 2019-10-08 Oppo(重庆)智能科技有限公司 Method for processing video frequency, device and computer storage medium and terminal device
CN110796012A (en) * 2019-09-29 2020-02-14 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and readable storage medium
CN110796012B (en) * 2019-09-29 2022-12-27 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and readable storage medium
CN110930436A (en) * 2019-11-27 2020-03-27 深圳市捷顺科技实业股份有限公司 Target tracking method and device
CN110930436B (en) * 2019-11-27 2023-04-14 深圳市捷顺科技实业股份有限公司 Target tracking method and device
CN112055247A (en) * 2020-09-11 2020-12-08 北京爱奇艺科技有限公司 Video playing method, device, system and storage medium
CN112258556A (en) * 2020-10-22 2021-01-22 北京字跳网络技术有限公司 Method and device for tracking designated area in video, readable medium and electronic equipment

Also Published As

Publication number Publication date
CN105631803B (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN105631803B (en) The method and apparatus of filter processing
US9674395B2 (en) Methods and apparatuses for generating photograph
US10452890B2 (en) Fingerprint template input method, device and medium
CN105095881B (en) Face recognition method, face recognition device and terminal
EP3125135A1 (en) Picture processing method and device
EP3173970A1 (en) Image processing method and apparatus
JP6335289B2 (en) Method and apparatus for generating an image filter
CN107480665B (en) Character detection method and device and computer readable storage medium
CN107944367B (en) Face key point detection method and device
CN108154466B (en) Image processing method and device
CN106557759B (en) Signpost information acquisition method and device
CN106534951B (en) Video segmentation method and device
CN106778773A (en) The localization method and device of object in picture
US20220222831A1 (en) Method for processing images and electronic device therefor
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN112927122A (en) Watermark removing method, device and storage medium
CN107480785A (en) The training method and device of convolutional neural networks
CN110286813B (en) Icon position determining method and device
CN108596957B (en) Object tracking method and device
CN108154090B (en) Face recognition method and device
CN107730443B (en) Image processing method and device and user equipment
CN106469446B (en) Depth image segmentation method and segmentation device
CN105653623B (en) Picture collection method and device
CN107122356B (en) Method and device for displaying face value and electronic equipment
CN113761275A (en) Video preview moving picture generation method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant