CN114240804A - Matting data generation method and device, computer equipment and storage medium - Google Patents

Matting data generation method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN114240804A
Authority
CN
China
Prior art keywords
image
target
matting
mask image
original mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111613442.XA
Other languages
Chinese (zh)
Inventor
陈信宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wondershare Software Co Ltd
Original Assignee
Shenzhen Wondershare Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wondershare Software Co Ltd filed Critical Shenzhen Wondershare Software Co Ltd
Priority to CN202111613442.XA priority Critical patent/CN114240804A/en
Publication of CN114240804A publication Critical patent/CN114240804A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses a matting data generation method and apparatus, a computer device and a storage medium, wherein the method comprises the following steps: respectively obtaining a foreground image and a background image, and extracting an original mask image in the foreground image; fusing the original mask image and the background image into a target image; inputting the target image into a matting model, and outputting a target mask image of the target image by the matting model; calculating the overlapping degree of the original mask image and the target mask image based on the intersection ratio; and taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model. By fusing the original mask image of the foreground image with the background image into the target image, predicting the target image with the matting model, and judging from the prediction result whether the target image is beneficial to training the matting model, the method generates matting data that can effectively improve the training effect of the matting model.

Description

Matting data generation method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating matting data, a computer device, and a storage medium.
Background
At present, when training a network model (e.g., a matting model), once the training data grows to a certain scale, the trained model may develop blind spots if the distribution of data scenes is not sufficiently uniform. Moreover, if the training data are collected from the Internet, their distribution tends to be biased toward certain scenes; for example, if office scenes are rare in the training data, the matting model performs poorly in office environments. When the original data volume is large, the cost of manually searching for data of a specific scene increases greatly. Therefore, how to solve the problem of poor matting performance caused by unevenly distributed training data is a problem to be overcome by those skilled in the art.
Disclosure of Invention
The embodiments of the invention provide a matting data generation method and apparatus, a computer device and a storage medium, which aim to generate matting data capable of effectively improving the training effect of a matting model.
In a first aspect, an embodiment of the present invention provides a method for generating matting data, including:
respectively obtaining a foreground image and a background image, and extracting an original mask image in the foreground image;
fusing the original mask image and the background image into a target image;
inputting the target image into a matting model, and outputting a target mask image of the target image by the matting model;
calculating the overlapping degree of the original mask image and the target mask image based on the intersection ratio;
and taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model.
In a second aspect, an embodiment of the present invention provides a matting data generating apparatus, including:
the image acquisition unit is used for respectively acquiring a foreground image and a background image and extracting an original mask image in the foreground image;
the first fusion unit is used for fusing the original mask image and the background image into a target image;
the model output unit is used for inputting the target image into a matting model and outputting a target mask image of the target image by the matting model;
a first calculation unit for calculating a degree of overlap of the original mask image and the target mask image based on an intersection ratio;
and the data generation unit is used for taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the matting data generation method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the matting data generating method according to the first aspect.
The embodiments of the invention provide a matting data generation method and apparatus, a computer device and a storage medium, wherein the method comprises the following steps: respectively obtaining a foreground image and a background image, and extracting an original mask image in the foreground image; fusing the original mask image and the background image into a target image; inputting the target image into a matting model, and outputting a target mask image of the target image by the matting model; calculating the overlapping degree of the original mask image and the target mask image based on the intersection ratio; and taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model. By fusing the original mask image of the foreground image with the background image to generate a new image, namely the target image, predicting the target image with the matting model, and judging from the prediction result whether the target image is beneficial to training the matting model, the embodiments generate matting data that can effectively improve the training effect of the matting model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a matting data generation method according to an embodiment of the present invention;
fig. 2 is a sub-flow diagram of a matting data generation method according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a matting data generating apparatus according to an embodiment of the present invention;
fig. 4 is a sub-schematic block diagram of a matting data generating apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart of a matting data generation method according to an embodiment of the present invention, which specifically includes: steps S101 to S105.
S101, respectively obtaining a foreground image and a background image, and extracting an original mask image in the foreground image;
s102, fusing the original mask image and the background image into a target image;
s103, inputting the target image into a matting model, and outputting a target mask image of the target image by the matting model;
s104, calculating the overlapping degree of the original mask image and the target mask image based on the intersection ratio;
and S105, taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model.
In this embodiment, the original mask image (Mask) is first extracted from the acquired foreground image, and the original mask image and the acquired background image are fused into a target image. The target image is then predicted by the matting model, and the prediction result (i.e., the target mask image) is compared with the original mask image to determine the degree of overlap between them; based on this overlap degree, it is determined whether to keep the corresponding target image as final matting data.
The matting data generation method can expose the weaknesses of the currently adopted matting model and then generate matting data that trains the model to eliminate those weaknesses. It can be understood that, once the training data reaches a certain scale, it is difficult to manually collect data on which the model predicts poorly, and even if such data can be collected, the cost is high. In this embodiment, the original mask image of the foreground image is fused with the background image to generate a new image, namely the target image; the matting model is then used to predict this target image, and whether the target image is beneficial to training the matting model is judged from the prediction result, thereby generating matting data that can effectively improve the training effect of the matting model.
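For illustration only, the following Python sketch outlines this generate-predict-filter idea for a single candidate image. The helper names extract_mask, mask_iou and matting_model.predict are hypothetical placeholders (rough versions of the first two are sketched further below) and are not defined by the disclosure.

```python
import numpy as np

def evaluate_candidate(foreground, background, matting_model):
    """Compose one candidate target image and measure how well the model handles it.

    `extract_mask`, `mask_iou` and `matting_model.predict` are hypothetical
    helper names used only for illustration.
    """
    alpha = extract_mask(foreground)                  # original mask image, values in [0, 1]
    alpha3 = alpha[..., None]                         # broadcast over the color channels
    target = foreground * alpha3 + background * (1.0 - alpha3)  # M = I*alpha + B*(1 - alpha)
    predicted = matting_model.predict(target)         # target mask image
    return target, alpha, mask_iou(alpha, predicted)  # overlap degree via intersection over union
```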
In one embodiment, the step S101 includes:
converting the foreground image into a gray image, and isolating edge pixels of the gray image;
inverting the isolated gray level image and creating a mask;
and extracting boundary information of the gray level image through bit operation, and superposing the gray level image based on the boundary information so as to extract the original mask image.
In this embodiment, the foreground image and its corresponding original mask image are separated through an image mask operation, so that the original mask image and the background image can then be fused into the target image. In digital image processing, image masks are mainly used for: (1) extracting a region of interest — a pre-made region-of-interest mask is multiplied with the image to be processed to obtain the region-of-interest image, in which the pixel values inside the region remain unchanged and those outside the region become 0; (2) shielding — certain areas of the image are masked so that they do not take part in processing or in parameter calculation, or processing and statistics are applied only to the masked areas; (3) extracting structural features — structural features similar to the mask are detected and extracted from the image using similarity variables or image-matching methods; and (4) producing images with special shapes.
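A minimal OpenCV sketch of the grayscale / inversion / bitwise steps listed above is given below. The concrete operators and thresholds (Canny edge detection, Otsu binarization) are assumptions made for illustration, since the description does not fix them.

```python
import cv2
import numpy as np

def extract_mask(foreground_bgr):
    """Rough sketch of the grayscale / invert / bitwise mask extraction.

    The specific operators and threshold values are assumptions; the patent
    text does not specify them.
    """
    gray = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2GRAY)
    # Isolate edge pixels of the grayscale image.
    edges = cv2.Canny(gray, 50, 150)
    # Invert the grayscale image and binarize it to create a mask.
    _, mask = cv2.threshold(cv2.bitwise_not(gray), 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Bit operation: superimpose the boundary information onto the mask.
    mask = cv2.bitwise_or(mask, edges)
    return mask.astype(np.float32) / 255.0
```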
In one embodiment, the step S102 includes:
fusing the original mask image and the background image according to the following formula:
M = I*α + B*(1-α)
in the formula, M is the target image, I is the foreground image, α is the original mask image, and B is the background image.
In this embodiment, the original mask image and the background image are fused into the target image according to the above fusion formula. The values of the original mask image α lie in the range [0, 1], where 0 denotes background and 1 denotes foreground.
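The fusion formula itself is straightforward to apply; a short NumPy sketch, assuming float images with values in [0, 1], is:

```python
import numpy as np

def compose(foreground, background, alpha):
    """Fuse foreground and background with the mask: M = I*alpha + B*(1 - alpha).

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    alpha: float array of shape (H, W), 0 = background, 1 = foreground
    """
    alpha = alpha[..., None]          # broadcast the mask over the color channels
    return foreground * alpha + background * (1.0 - alpha)
```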
In one embodiment, as shown in fig. 2, the step S103 includes: steps S201 to S204.
S201, calculating a feature vector of a pixel i in the target mask image according to the following formula:
X(i)=(cos(h),sin(h),s,v,x,y)
wherein h, s, v are coordinate values of HSV color space, respectively, and (x, y) are spatial coordinates of a pixel i;
s202, setting a kernel function according to the following formula:
k(i, j) = 1 - ||X(i) - X(j)||/C
in the formula, C is a weight-adjustment coefficient that ensures the kernel function k(i, j) ∈ [0, 1], and ||·|| denotes the 1-norm;
s203, calculating a Laplace matrix L based on the kernel function:
L=D-A
in the formula, the similarity matrix A_ij = k(i, j), and the diagonal matrix D_ii = Σ_j A_ij.
S204, constructing an equation of a closed form solution according to the following formula, and taking the closed form solution as the target mask image:
(closed-form solution equation, reproduced as an image in the original publication)
in the formula, λ is a constraint coefficient.
In this embodiment, when the target image is predicted by the matting model, the feature vectors of the target image are first computed; a corresponding kernel function is then set according to the feature vectors, the Laplacian matrix is calculated from the kernel function, and an equation with a closed-form solution (i.e., one whose solution can be computed directly for any given input) is constructed; solving this equation yields the target mask image. It should be noted that the matting model of this embodiment is not limited to these prediction steps; for other matting models, the target image may be input into the other matting model, which outputs the corresponding prediction result.
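As an illustration of steps S201 to S203, the following Python sketch assembles the per-pixel feature vectors, the kernel values k(i, j) and the Laplacian matrix L = D - A. Restricting k(i, j) to a fixed number of nearest neighbours and the default choice of C are assumptions made here to keep the matrices sparse, and the closed-form solve of step S204 is intentionally omitted because its exact form is given only as an image in the original publication.

```python
import cv2
import numpy as np
from scipy.sparse import coo_matrix, diags
from sklearn.neighbors import NearestNeighbors

def build_matting_laplacian(image_bgr, n_neighbors=10, C=None):
    """Sketch of steps S201-S203: feature vectors, kernel k(i, j), and L = D - A.

    The nearest-neighbour restriction and the default value of C are
    assumptions, not taken from the patent text.
    """
    h, w = image_bgr.shape[:2]
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    hue = hsv[..., 0] * (np.pi / 90.0)            # OpenCV hue is in [0, 180)
    ys, xs = np.mgrid[0:h, 0:w]
    # X(i) = (cos(h), sin(h), s, v, x, y), with normalized spatial coordinates
    feats = np.stack([np.cos(hue), np.sin(hue),
                      hsv[..., 1] / 255.0, hsv[..., 2] / 255.0,
                      xs / w, ys / h], axis=-1).reshape(-1, 6)

    nn = NearestNeighbors(n_neighbors=n_neighbors, metric="manhattan").fit(feats)
    dist, idx = nn.kneighbors(feats)              # 1-norm distances to the neighbours
    if C is None:
        C = dist.max() + 1e-8                     # keeps k(i, j) inside [0, 1]
    rows = np.repeat(np.arange(feats.shape[0]), n_neighbors)
    vals = 1.0 - dist.ravel() / C                 # kernel k(i, j) = 1 - ||X(i) - X(j)||/C
    A = coo_matrix((vals, (rows, idx.ravel())), shape=(feats.shape[0],) * 2)
    A = (A + A.T) / 2.0                           # symmetrize the similarity matrix
    D = diags(np.asarray(A.sum(axis=1)).ravel())  # D_ii = sum_j A_ij
    return D - A                                  # Laplacian L = D - A
```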
In one embodiment, the step S104 includes:
calculating the intersection ratio of the original mask image and the target mask image according to the following formula, and taking the calculation result as the overlapping degree:
IoU(A, B) = |A ∩ B| / |A ∪ B|
wherein A is the original mask image and B is the target mask image.
The Intersection over Union (IoU) used in object detection equals the overlapping part of two regions divided by the union of the two regions, so in this embodiment the IoU value is used as the overlap degree of the original mask image and the target mask image. For example, IoU = 0.5 indicates that the original mask image and the target mask image overlap by half; IoU = 0 means the two masks do not overlap at all; IoU = 1 means the two masks overlap completely.
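Computed on binary masks, the overlap degree is a one-line ratio; a small NumPy sketch (the 0.5 binarization threshold is an assumed detail) is:

```python
import numpy as np

def mask_iou(original_mask, target_mask, threshold=0.5):
    """Intersection over Union of two masks, used as the overlap degree.

    Masks may be soft (values in [0, 1]); they are binarized at `threshold`,
    which is an assumed detail not fixed by the patent text.
    """
    a = np.asarray(original_mask) > threshold
    b = np.asarray(target_mask) > threshold
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0                    # both masks empty: treat as identical
    return np.logical_and(a, b).sum() / union
```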
Further, in an embodiment, the step S105 includes:
when the overlapping degree is within the preset overlapping range, judging that the overlapping degree of the target mask image and the original mask image is low;
when the overlapping degree is not within the preset overlapping range, judging that the overlapping degree of the target mask image and the original mask image is high;
and setting the target image corresponding to the target mask image with low overlapping degree as the matting data.
In this embodiment, if the degree of overlap between the target mask image and the original mask image is low, the prediction result of the matting model differs greatly from the original mask image, which indicates that the target image is data on which the corresponding matting model predicts poorly; such data are exactly what is needed to further train and improve the model, so the target image can be used as training image data for improving the accuracy of the matting model. Conversely, if the degree of overlap between the target mask image and the original mask image is high, the prediction result of the matting model differs little from the original mask image, which indicates that the target image is data on which the corresponding matting model already predicts well; such data are not needed to further train and improve the model, so the target image need not be used as training image data for improving the accuracy of the matting model.
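A possible selection routine following this rule is sketched below; the (0.0, 0.5) overlap range is an illustrative assumption, since the preset overlap range is left to the implementation.

```python
def select_hard_samples(candidates, overlap_range=(0.0, 0.5)):
    """Keep target images whose overlap degree falls inside the preset range.

    `candidates` yields (target_image, overlap_degree) pairs; the (0.0, 0.5)
    range is an illustrative assumption, not a value fixed by the patent.
    """
    matting_data = []
    for target_image, overlap in candidates:
        # Low overlap = poor prediction = useful data for further training.
        if overlap_range[0] <= overlap <= overlap_range[1]:
            matting_data.append(target_image)
    return matting_data
```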
In an embodiment, before the step S102, the method includes:
preprocessing the foreground image and the original mask image based on a preset data generation strategy, and fusing the preprocessed original mask image and the background image; the preset data generation strategy is obtained by randomly combining random foreground blurring, random foreground rotation angles and color conversion.
In this embodiment, the foreground image and the original mask image are preprocessed through certain data generation strategies, for example random foreground blurring or random foreground rotation, so as to increase the diversity of the generated data. Of course, in other embodiments the background image may also be preprocessed correspondingly, for example by background blurring or angle rotation. Similarly, the foreground image, the original mask image and the background image may be processed with other image preprocessing methods, such as color conversion, to achieve the same diversity effect.
Further, after the foreground image and the original mask image are preprocessed, they change correspondingly. For example, before preprocessing, the foreground image is I and the original mask image is α. After preprocessing them with the image random-rotation function f_R and the image random-blur function f_B, the foreground image I and the original mask image α correspondingly become:
I″ = f_B(f_R(I)), α″ = f_B(f_R(α))
therefore, when the preprocessed original mask image α ″ is fused with the background image B, the corresponding fusion needs to be performed according to the following formula:
M = I″*α″ + B*(1-α″).
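A minimal sketch of this preprocessing, assuming OpenCV rotation and Gaussian blur as the random rotation function f_R and random blur function f_B (the parameter ranges are assumptions), is:

```python
import random
import cv2
import numpy as np

def preprocess(foreground, alpha, max_angle=30, max_blur=5):
    """Apply the same random rotation and random blur to I and its mask alpha.

    The parameter ranges are illustrative assumptions; the patent only states
    that random blur, random rotation and color conversion may be combined.
    """
    h, w = alpha.shape[:2]
    # Random rotation f_R, applied identically to foreground and mask.
    angle = random.uniform(-max_angle, max_angle)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    foreground = cv2.warpAffine(foreground, rot, (w, h))
    alpha = cv2.warpAffine(alpha, rot, (w, h))
    # Random blur f_B with an odd Gaussian kernel size.
    k = random.choice(range(1, max_blur + 1, 2))
    foreground = cv2.GaussianBlur(foreground, (k, k), 0)
    alpha = cv2.GaussianBlur(alpha, (k, k), 0)
    return foreground, alpha
```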
fig. 3 is a schematic block diagram of a matting data generating apparatus 300 according to an embodiment of the present invention, where the apparatus 300 includes:
an image obtaining unit 301, configured to obtain a foreground image and a background image, respectively, and extract an original mask image in the foreground image;
a first fusion unit 302, configured to fuse the original mask image and the background image into a target image;
a model output unit 303, configured to input the target image into a matting model, and output a target mask image of the target image from the matting model;
a first calculation unit 304 for calculating the degree of overlap of the original mask image and the target mask image based on the cross-over ratio;
the data generating unit 305 is configured to use a target image corresponding to a target mask image with an overlap degree within a preset overlap range as matting data for training a matting model.
In one embodiment, the image acquisition unit 301 includes:
the image conversion unit is used for converting the foreground image into a gray image and isolating edge pixels of the gray image;
the image reversing unit is used for reversing the isolated gray level image and creating a mask;
and the information superposition unit is used for extracting the boundary information of the gray level image through bit operation and superposing the gray level image based on the boundary information so as to extract the original mask image.
In one embodiment, the first fusion unit 302 includes:
a second fusing unit, configured to fuse the original mask image and the background image according to the following formula:
M = I*α + B*(1-α)
in the formula, M is the target image, I is the foreground image, α is the original mask image, and B is the background image.
In one embodiment, as shown in fig. 4, the model output unit 303 includes:
a vector calculating unit 401, configured to calculate a feature vector of a pixel i in the target mask image according to the following formula:
X(i)=(cos(h),sin(h),s,v,x,y)
wherein h, s, v are coordinate values of HSV color space, respectively, and (x, y) are spatial coordinates of a pixel i;
a function setting unit 402, configured to set a kernel function according to the following equation:
k(i, j) = 1 - ||X(i) - X(j)||/C
in the formula, C is a weight-adjustment coefficient that ensures the kernel function k(i, j) ∈ [0, 1], and ||·|| denotes the 1-norm;
a matrix calculation unit 403, configured to calculate a laplacian matrix L based on the kernel function:
L=D-A
in the formula, the similarity matrix A_ij = k(i, j), and the diagonal matrix D_ii = Σ_j A_ij.
An equation construction unit 404, configured to construct an equation of a closed form solution according to the following formula, and use the closed form solution as the target mask image:
(closed-form solution equation, reproduced as an image in the original publication)
in the formula, λ is a constraint coefficient.
In one embodiment, the first computing unit 304 includes:
a second calculating unit, configured to calculate an intersection ratio of the original mask image and the target mask image according to the following formula, and take a calculation result as the overlap degree:
IoU(A, B) = |A ∩ B| / |A ∪ B|
wherein A is the original mask image and B is the target mask image.
In one embodiment, the data generating unit 305 includes:
a first determination unit, configured to determine that the degree of overlap between the target mask image and the original mask image is low when the degree of overlap is within the preset overlap range;
a second determining unit, configured to determine that the degree of overlap between the target mask image and the original mask image is high when the degree of overlap is not within the preset overlap range;
and the data setting unit is used for setting the target image corresponding to the target mask image with low overlapping degree as the matting data.
In an embodiment, the first fusing unit 302 is preceded by:
the image preprocessing unit is used for preprocessing the foreground image and the original mask image based on a preset data generation strategy and fusing the preprocessed original mask image and the background image; the preset data generation strategy is obtained by randomly combining random foreground blurring, random foreground rotation angles and color conversion.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the steps provided by the above embodiments can be implemented. The storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present invention further provides a computer device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided in the above embodiments when calling the computer program in the memory. Of course, the computer device may also include various network interfaces, power supplies, and the like.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A matting data generation method, characterized by comprising:
respectively obtaining a foreground image and a background image, and extracting an original mask image in the foreground image;
fusing the original mask image and the background image into a target image;
inputting the target image into a matting model, and outputting a target mask image of the target image by the matting model;
calculating the overlapping degree of the original mask image and the target mask image based on the intersection ratio;
and taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model.
2. The matting data generating method according to claim 1, wherein the obtaining of a foreground image and a background image and extracting an original mask image in the foreground image respectively comprises:
converting the foreground image into a gray image, and isolating edge pixels of the gray image;
inverting the isolated gray level image and creating a mask;
and extracting boundary information of the gray level image through bit operation, and superposing the gray level image based on the boundary information so as to extract the original mask image.
3. The matting data generating method according to claim 1, wherein the fusing the original mask image and the background image into an object image comprises:
fusing the original mask image and the background image according to the following formula:
M = I*α + B*(1-α)
in the formula, M is the target image, I is the foreground image, α is the original mask image, and B is the background image.
4. The matting data generating method according to claim 1, wherein the inputting the object image into a matting model and outputting an object mask image of the object image by the matting model comprises:
calculating a feature vector of a pixel i in the target mask image according to the following formula:
X(i)=(cos(h),sin(h),s,v,x,y)
wherein h, s, v are coordinate values of HSV color space, respectively, and (x, y) are spatial coordinates of a pixel i;
the kernel function is set as follows:
k(i, j) = 1 - ||X(i) - X(j)||/C
in the formula, C is a weight-adjustment coefficient that ensures the kernel function k(i, j) ∈ [0, 1], and ||·|| denotes the 1-norm;
calculating a Laplace matrix L based on the kernel function:
L=D-A
in the formula, the similarity matrix A_ij = k(i, j), and the diagonal matrix D_ii = Σ_j A_ij.
Constructing an equation of a closed form solution according to the following formula, and taking the closed form solution as the target mask image:
(closed-form solution equation, reproduced as an image in the original publication)
in the formula, λ is a constraint coefficient.
5. The matting data generating method according to claim 1, wherein the calculating of the degree of overlapping of the original mask image and the target mask image based on the cross-over ratio includes:
calculating the intersection ratio of the original mask image and the target mask image according to the following formula, and taking the calculation result as the overlapping degree:
IoU(A, B) = |A ∩ B| / |A ∪ B|
wherein A is the original mask image and B is the target mask image.
6. The matting data generating method according to claim 1, wherein the step of using the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as matting data for training the matting model comprises:
when the overlapping degree is within the preset overlapping range, judging that the overlapping degree of the target mask image and the original mask image is low;
when the overlapping degree is not within the preset overlapping range, judging that the overlapping degree of the target mask image and the original mask image is high;
and setting the target image corresponding to the target mask image with low overlapping degree as the matting data.
7. The matting data generating method according to claim 1, wherein the fusing the original mask image and the background image into an object image comprises:
preprocessing the foreground image and the original mask image based on a preset data generation strategy, and fusing the preprocessed original mask image and the background image; the preset data generation strategy is obtained by randomly combining random foreground blurring, random foreground rotation angles and color conversion.
8. A matting data generating apparatus characterized by comprising:
the image acquisition unit is used for respectively acquiring a foreground image and a background image and extracting an original mask image in the foreground image;
the first fusion unit is used for fusing the original mask image and the background image into a target image;
the model output unit is used for inputting the target image into a matting model and outputting a target mask image of the target image by the matting model;
a first calculation unit for calculating a degree of overlap of the original mask image and the target mask image based on an intersection ratio;
and the data generation unit is used for taking the target image corresponding to the target mask image with the overlapping degree within the preset overlapping range as the matting data for training the matting model.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the matting data generating method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, implements the matting data generating method according to any one of claims 1 to 7.
CN202111613442.XA 2021-12-27 2021-12-27 Matting data generation method and device, computer equipment and storage medium Pending CN114240804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111613442.XA CN114240804A (en) 2021-12-27 2021-12-27 Matting data generation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111613442.XA CN114240804A (en) 2021-12-27 2021-12-27 Matting data generation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114240804A true CN114240804A (en) 2022-03-25

Family

ID=80763431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111613442.XA Pending CN114240804A (en) 2021-12-27 2021-12-27 Matting data generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114240804A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511853A (en) * 2022-04-21 2022-05-17 华南理工大学 Character image writing track recovery effect discrimination method
CN114511853B (en) * 2022-04-21 2022-07-12 华南理工大学 Character image writing track recovery effect discrimination method

Similar Documents

Publication Publication Date Title
CN111047551B (en) Remote sensing image change detection method and system based on U-net improved algorithm
CN108389224B (en) Image processing method and device, electronic equipment and storage medium
CN111723732A (en) Optical remote sensing image change detection method, storage medium and computing device
CN109711416B (en) Target identification method and device, computer equipment and storage medium
CN111274892A (en) Robust remote sensing image change detection method and system
Zeng et al. LEARD-Net: Semantic segmentation for large-scale point cloud scene
CN107025660A (en) A kind of method and apparatus for determining binocular dynamic visual sensor image parallactic
CN113689445B (en) High-resolution remote sensing building extraction method combining semantic segmentation and edge detection
CN115345866B (en) Building extraction method in remote sensing image, electronic equipment and storage medium
CN112561881A (en) Infrared image self-adaptive data enhancement method based on evaluation model
CN110827320A (en) Target tracking method and device based on time sequence prediction
CN114240804A (en) Matting data generation method and device, computer equipment and storage medium
Zhang et al. A GPU-accelerated real-time single image de-hazing method using pixel-level optimal de-hazing criterion
CN114926734A (en) Solid waste detection device and method based on feature aggregation and attention fusion
Fang et al. Learning explicit smoothing kernels for joint image filtering
CN116310832A (en) Remote sensing image processing method, device, equipment, medium and product
Yin et al. Multiscale depth fusion with contextual hybrid enhancement network for image dehazing
CN113222843B (en) Image restoration method and related equipment thereof
CN113658180B (en) Surface defect region segmentation method and device based on spatial context guidance
Tsuji et al. Non-guided depth completion with adversarial networks
Su et al. Attention-adaptive multi-scale feature aggregation dehazing network
CN110675311A (en) Sketch generation method and device under sketch order constraint and storage medium
CN114820755A (en) Depth map estimation method and system
Zhou et al. ASFusion: Adaptive visual enhancement and structural patch decomposition for infrared and visible image fusion
CN111461139B (en) Multi-target visual saliency layered detection method in complex scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination