CN112862851B - Automatic image matting method and system based on image recognition technology - Google Patents

Automatic image matting method and system based on image recognition technology

Info

Publication number
CN112862851B
CN112862851B
Authority
CN
China
Prior art keywords
target object
picture
edge
semantic segmentation
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110062144.XA
Other languages
Chinese (zh)
Other versions
CN112862851A (en)
Inventor
张莹 (Zhang Ying)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Entertainment Interactive Technology Beijing Co ltd
Original Assignee
Entertainment Interactive Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Entertainment Interactive Technology Beijing Co ltd filed Critical Entertainment Interactive Technology Beijing Co ltd
Priority to CN202110062144.XA priority Critical patent/CN112862851B/en
Publication of CN112862851A publication Critical patent/CN112862851A/en
Application granted granted Critical
Publication of CN112862851B publication Critical patent/CN112862851B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an automatic matting system and method based on an image recognition technology. The method comprises the following steps: uploading a picture to be matted; performing target object recognition on the picture to be matted through a target object recognition model to obtain a target object recognition picture; performing semantic segmentation on the target object recognition picture to obtain a semantic segmentation picture; smoothing the edge of the target object in the semantic segmentation picture to obtain a semantic segmentation picture with a smooth edge; and matting out the target object from the edge-smoothed semantic segmentation picture to complete the matting. The system and method recognize the target object and mat it out through the image recognition technology, effectively solving the problems of complex operation and slow processing speed in the prior art at low cost.

Description

Automatic image matting method and system based on image recognition technology
Technical Field
The invention relates to the technical field of image matting, in particular to an automatic matting method and an automatic matting system based on an image recognition technology.
Background
Matting extracts the foreground objects a user is interested in from an image. It is a basic technology in the field of image and video processing, and its results are mainly applied in image processing and in the production of the video and film industries, where it has become a crucial technical link.
High-precision matting in the prior art relies on special shooting techniques at the early stage and on manually selecting the target object to be matted, which is not only complex to operate but also slow in processing; the invention therefore provides an automatic matting method and system based on an image recognition technology.
Disclosure of Invention
The invention provides an automatic matting method based on an image recognition technology, which recognizes the target object and mats it out through the image recognition technology, effectively solves the problems of complex operation and slow processing speed in the prior art, and has low cost.
The invention further provides an automatic image matting system based on an image recognition technology, which comprises: a picture input module, a target recognition module, a semantic segmentation module, an edge processing module and an automatic matting module;
the picture input module is used for uploading pictures to be matted;
the target recognition module is used for performing target object recognition on the picture to be matted through the target object recognition model to obtain a target object recognition picture;
the semantic segmentation module is used for performing semantic segmentation on the target object recognition picture to obtain a semantic segmentation picture;
the edge processing module is used for smoothing the edge of the target object to obtain a semantic segmentation picture with a smooth edge;
and the automatic matting module is used for matting out the target object from the edge-smoothed semantic segmentation picture.
Further, the system also comprises a model training module and a missing filling module;
the model training module is used for training the target object recognition model before the target recognition module performs target object recognition on the picture to be matted through the target object recognition model;
and the missing filling module is used for filling the missing part left in the semantic segmentation picture after the target object is matted out of the edge-smoothed picture.
Further, the model training module comprises: a target feature determination unit, a target object labeling unit and a model learning-training unit;
the target feature determination unit is used for determining the features of the target object;
the target object labeling unit is used for labeling the target object in pictures containing the target object to obtain pictures with the labeled target object;
and the model learning-training unit is used for learning and training the target object recognition model according to the pictures with the labeled target object.
Further, the missing filling module comprises: an environment perception unit and a missing supplement unit;
the environment perception unit is used for performing environment-perception learning on the background around the missing part in the semantic segmentation picture lacking the target object;
and the missing supplement unit is used for supplementing the perceived background around the missing part into the missing part.
Further, the edge processing module comprises: an edge acquisition unit, a feature fitting unit and a smoothing unit;
the edge acquisition unit is used for acquiring information on the edge feature points of the target object in the semantic segmentation picture;
the feature fitting unit is used for performing curve fitting on the edge feature points acquired by the edge acquisition unit to obtain a fitted linear relation among the feature points;
and the smoothing unit is used for smoothing the edge of the target object in the semantic segmentation picture according to the fitted linear relation among the feature points to obtain a semantic segmentation picture with a smooth edge.
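The curve-fitting step performed by the feature fitting unit can be sketched as follows. This is an illustrative sketch, not the patented implementation: the polynomial fit via `numpy.polyfit`, the fitting degree, and the assumption that the edge segment can be written as y = f(x) are all simplifications of my own.

```python
import numpy as np

def smooth_edge_points(xs, ys, degree=1):
    """Fit a polynomial through the edge feature points (the "fitted
    linear relation among the feature points") and resample each point's
    y-coordinate on the fitted curve, pulling jagged outliers back
    toward the overall edge shape."""
    coeffs = np.polyfit(xs, ys, degree)
    fitted = np.poly1d(coeffs)
    return fitted(xs)

# Edge feature points with one jagged outlier at x = 2
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.0, 1.0, 5.0, 3.0, 4.0])
smoothed = smooth_edge_points(xs, ys, degree=1)
# the outlier is pulled onto the fitted line: smoothed[2] is 2.6, not 5.0
```

Real object edges are rarely expressible as a single function y = f(x); in practice the fit would be done per edge segment, or parametrically against arc length, with the polynomial degree tuned to the edge complexity.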
An automatic image matting method based on an image recognition technology comprises the following steps:
uploading a picture to be matted;
performing target object recognition on the picture to be matted through a target object recognition model to obtain a target object recognition picture;
performing semantic segmentation on the target object recognition picture to obtain a semantic segmentation picture;
smoothing the edge of the target object in the semantic segmentation picture to obtain a semantic segmentation picture with a smooth edge;
and matting out the target object from the edge-smoothed semantic segmentation picture to complete the matting.
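The method steps above can be outlined in code. This is a minimal sketch under stated assumptions: the function names and the threshold-based stand-ins for the recognition model, the semantic segmentation and the edge smoothing are hypothetical placeholders, not the patent's actual algorithms.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MattingResult:
    foreground: np.ndarray  # extracted target object pixels
    mask: np.ndarray        # smoothed binary mask of the target


def recognize_target(image: np.ndarray) -> np.ndarray:
    """Placeholder for the trained target object recognition model."""
    return image  # a real model would localize the target here

def semantic_segment(image: np.ndarray) -> np.ndarray:
    """Placeholder segmentation: a binary mask separating the target
    from the background (here, a simple brightness threshold)."""
    return (image.mean(axis=-1) > 127).astype(np.uint8)

def smooth_edges(mask: np.ndarray) -> np.ndarray:
    """Placeholder for edge smoothing via fitting over edge points."""
    return mask

def matting_pipeline(image: np.ndarray) -> MattingResult:
    recognized = recognize_target(image)   # target object recognition
    mask = semantic_segment(recognized)    # semantic segmentation
    mask = smooth_edges(mask)              # edge smoothing
    foreground = image * mask[..., None]   # matte out the target
    return MattingResult(foreground=foreground, mask=mask)
```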
Further, the method also performs learning training on the target object recognition model before the model performs target object recognition on the picture to be matted;
and fills the missing part of the semantic segmentation picture lacking the target object after the target object is matted out of the edge-smoothed picture.
Further, the process of learning and training the target object recognition model includes:
determining characteristics of a target object;
labeling the target object in the picture containing the target object to obtain a picture of the labeled target object;
performing learning training on the target object recognition model according to the pictures with the labeled target object, and obtaining an optimized target object recognition model.
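The three training steps (determine the features, label the pictures, train the model) can be sketched with a deliberately simple stand-in classifier. The nearest-centroid model and the mean-colour feature below are illustrative assumptions of my own; the patent does not specify the recognition model's architecture.

```python
import numpy as np

def extract_features(picture: np.ndarray) -> np.ndarray:
    """Stand-in for "determining the features of the target object":
    reduce a picture to its mean colour vector."""
    return picture.reshape(-1, picture.shape[-1]).mean(axis=0)

class TargetRecognitionModel:
    """Minimal nearest-centroid stand-in for the recognition model,
    trained on labeled pictures (1 = contains target, 0 = does not)."""

    def fit(self, pictures, labels):
        feats = np.array([extract_features(p) for p in pictures])
        labels = np.array(labels)
        # one centroid per label in feature space
        self.centroids = {c: feats[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        return self

    def predict(self, picture):
        f = extract_features(picture)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(f - self.centroids[c]))
```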
Further, the process of filling missing parts in the semantically segmented picture lacking the target object includes:
in the semantic segmentation picture lacking the target object, performing environment perception learning on the background around the missing part;
the background around the perceived missing part is supplemented to the missing part.
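The two filling steps above can be sketched with a crude border-average fill. This stands in for the environment-perception learning, which the patent does not specify; in practice a proper inpainting algorithm would be used, and the mean-of-neighbouring-background fill here is purely illustrative.

```python
import numpy as np

def fill_missing(picture: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the region marked by `mask` (1 = missing) with the average
    of the background pixels bordering it, a crude stand-in for
    environment-perception learning around the missing part."""
    filled = picture.copy()
    missing = mask.astype(bool)
    # "perceive" the surrounding background: pixels 4-adjacent to the hole
    border = np.zeros_like(missing)
    border[:-1] |= missing[1:]
    border[1:] |= missing[:-1]
    border[:, :-1] |= missing[:, 1:]
    border[:, 1:] |= missing[:, :-1]
    border &= ~missing  # keep only background neighbours
    # "supplement" the perceived surroundings into the missing part
    filled[missing] = picture[border].mean(axis=0)
    return filled
```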
Further, the smoothing of the edge of the target object in the semantic segmentation picture to obtain the semantic segmentation picture with smooth edge includes:
acquiring information of edge feature points of a target object from a semantic segmentation picture;
performing curve fitting on the edge characteristic points according to the information of the edge characteristic points of the target object to obtain a fitting linear relation between the characteristic points;
smoothing the edge of the target object in the semantic segmentation picture according to the fitted linear relation between the feature points to obtain the semantic segmentation picture with smooth edge;
after the edge of the target object is smoothed, sharpening processing is performed on the smoothed edge, comprising the following steps:
first, acquiring the pixel values at corresponding positions on the two sides of the smoothed edge of the target object;
then, judging whether sharpening is needed according to the following formula:
F = abs{[(∑a_i²)² − (∑A_i²)²] ÷ (∑a_i² + ∑A_i²)}
(the formula for the judgment result G is given only as an image in the original document)
in the above formulas, F represents the sharpening determination value, abs represents the absolute-value function, a_i represents the pixel value of the i-th point outside the smoothed edge of the target object, A_i represents the pixel value of the i-th point inside the smoothed edge, G represents the sharpening judgment result, Δ represents a difference value, and n represents a parameter for which a larger value is better (5 is taken here); G = 1 indicates that sharpening is needed and G = 0 indicates that it is not;
and finally, sharpening the edge of the target object according to the judgment result.
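The sharpening decision can be implemented directly from the formula for F. Since the formula for G appears only as an image in the source, the thresholding of F against the difference value Δ (`delta` below) is an assumption on my part, and the role of the parameter n is left out as unspecified.

```python
import numpy as np

def sharpening_decision(outside, inside, delta):
    """Compute the sharpening determination value F from pixel values
    sampled just outside (a_i) and just inside (A_i) the smoothed edge,
    and derive the judgment G (1 = sharpen, 0 = do not).
    Comparing F with `delta` is an assumed form of the G formula."""
    a2 = float(np.sum(np.asarray(outside, dtype=float) ** 2))  # sum of a_i^2
    A2 = float(np.sum(np.asarray(inside, dtype=float) ** 2))   # sum of A_i^2
    F = abs((a2 ** 2 - A2 ** 2) / (a2 + A2))
    G = 1 if F > delta else 0
    return F, G
```

Note that algebraically (a2² − A2²) ÷ (a2 + A2) = a2 − A2, so F reduces to |∑a_i² − ∑A_i²|: the difference in squared-pixel energy across the edge.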
The invention has at least the following beneficial effects:
the automatic cutout system based on the image recognition technology provided by the invention has the advantages that in the process of realizing automatic cutout, selecting the target object in the picture by using the target object recognition model through the target recognition module, avoids manual target object selection, not only saves the target object selection time, improves the target object selection speed and the matting speed, and the manual selection of the target object is not needed, so that the labor and material resources are saved, the cost of the cutout is saved, and the recognition efficiency of the target object recognition model is higher, the target object recognition can be carried out on a large number of pictures in a short time, the matting efficiency is effectively improved, in addition, in the process of matting, no complicated operation is needed to be carried out manually, so that the matting process is simplified, moreover, the matting error is not easy to occur, the quality of the matting is influenced, and the precision of the matting is improved.
According to the automatic matting system based on the image recognition technology provided by the invention, through learning and training of the target object recognition model, the optimized model can recognize the target object as soon as it appears when recognizing the picture to be matted, which improves both the accuracy and the efficiency of target object recognition.
According to the automatic matting system based on the image recognition technology provided by the invention, the missing filling module fills the missing part of the semantic segmentation picture lacking the target object after the target object is matted out, so that the picture still looks complete. This avoids the blank left on the original picture after the target object is extracted from appearing abrupt in the picture lacking the target object; filling the blank caused by matting out the target object improves the appearance of the picture.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic structural diagram of an automatic matting system based on image recognition technology according to the present invention;
FIG. 2 is a schematic diagram of an automatic matting system based on image recognition technology according to the present invention;
FIG. 3 is a schematic diagram of a model training module in the automatic matting system based on image recognition technology according to the present invention;
FIG. 4 is a schematic diagram of a missing fill module in the automatic matting system based on the image recognition technology according to the present invention;
FIG. 5 is a schematic flow chart of an automatic matting method based on image recognition technology according to the present invention;
FIG. 6 is a schematic flow chart of an automatic matting method based on image recognition technology according to the present invention;
FIG. 7 is a detailed flowchart of S20 in the automatic matting method based on the image recognition technology according to the present invention;
FIG. 8 is a detailed flowchart of S6 in the automatic matting method based on the image recognition technology according to the present invention;
FIG. 9 is an overall flow diagram of an automatic matting method based on an image recognition technology according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
As shown in fig. 1, an embodiment of the present invention provides an automatic matting system based on an image recognition technology, comprising: a picture input module, a target recognition module, a semantic segmentation module, an edge processing module and an automatic matting module;
the picture input module is used for uploading pictures to be matted;
the target recognition module is used for performing target object recognition on the picture to be matted through the target object recognition model to obtain a target object recognition picture;
the semantic segmentation module is used for performing semantic segmentation on the target object recognition picture to obtain a semantic segmentation picture;
the edge processing module is used for smoothing the edge of the target object to obtain a semantic segmentation picture with a smooth edge;
and the automatic matting module is used for matting out the target object from the edge-smoothed semantic segmentation picture.
In the above technical solution, the automatic matting system based on the image recognition technology comprises a picture input module, a target recognition module, a semantic segmentation module, an edge processing module and an automatic matting module. A picture to be matted is uploaded through the picture input module; target object recognition is performed on it by the target recognition module using the target object recognition model to obtain a target object recognition picture; semantic segmentation is performed on the target object recognition picture by the semantic segmentation module to obtain a semantic segmentation picture; the edge of the target object is smoothed by the edge processing module to obtain a semantic segmentation picture with a smooth edge; and the target object is matted out of the edge-smoothed semantic segmentation picture by the automatic matting module, completing the matting. With this scheme, the target object in the picture is selected by the target recognition module through the target object recognition model rather than by hand. This saves target-object selection time and increases both the selection speed and the matting speed; because no manual selection is needed, manpower, material resources and therefore matting cost are saved. The recognition efficiency of the target object recognition model is high, so target object recognition can be performed on a large number of pictures in a short time, effectively improving matting efficiency. In addition, no complicated manual operation is needed during matting, which simplifies the matting process, makes matting errors that would degrade matting quality less likely, and improves matting precision.
Semantic segmentation by the semantic segmentation module distinguishes the target object in the picture from the surrounding background, increasing the contrast between the target object and its surroundings, so that the edge processing module can quickly detect the edge of the target object when smoothing it. The resulting edge is soft and smooth, the target object is selected accurately, and matting accuracy is improved; moreover, the smoothed edge blends better, so the target object does not appear abrupt wherever it is placed, improving the applicability of the matted object as well as the matting accuracy.
As shown in fig. 2, in an embodiment provided by the present invention, the system further comprises a model training module and a missing filling module;
the model training module is used for training the target object recognition model before the target recognition module performs target object recognition on the picture to be matted through the target object recognition model;
and the missing filling module is used for filling the missing part left in the semantic segmentation picture after the target object is matted out of the edge-smoothed picture.
In this technical scheme, the model training module trains the target object recognition model before the target recognition module uses it to recognize the picture to be matted, and the missing filling module fills the missing part left after the target object is matted out of the edge-smoothed semantic segmentation picture. Through learning and training, the optimized target object recognition model can recognize the target object as soon as it appears in the picture to be matted, improving both the accuracy and the efficiency of target object recognition. Filling the missing part of the semantic segmentation picture lacking the target object keeps the picture looking complete, avoiding the blank left after the target object is extracted from appearing abrupt; the blank caused by matting out the target object is filled, improving the appearance of the picture.
As shown in fig. 3, in an embodiment provided by the present invention, the model training module comprises: a target feature determination unit, a target object labeling unit and a model learning-training unit;
the target feature determination unit is used for determining the features of the target object;
the target object labeling unit is used for labeling the target object in pictures containing the target object to obtain pictures with the labeled target object;
and the model learning-training unit is used for learning and training the target object recognition model according to the pictures with the labeled target object.
In this technical scheme, the features of the target object are determined by the target feature determination unit, the target object in pictures containing it is labeled by the target object labeling unit to obtain labeled pictures, and the model learning-training unit trains the target object recognition model on these labeled pictures. In this way the model training module produces an optimized target object recognition model that can automatically recognize the target object in a picture, improving recognition accuracy. Moreover, during training the target object labeling unit can label the target object in a large number of pictures according to the features determined by the target feature determination unit; training on a large number of pictures makes the model able to recognize the target object whenever it is encountered, improving the sensitivity and accuracy of recognition, so that the optimized model recognizes the target object accurately when processing pictures to be matted.
As shown in fig. 4, in an embodiment provided by the present invention, the missing filling module comprises: an environment perception unit and a missing supplement unit;
the environment perception unit is used for performing environment-perception learning on the background around the missing part in the semantic segmentation picture lacking the target object;
and the missing supplement unit is used for supplementing the perceived background around the missing part into the missing part.
In this technical scheme, the environment perception unit perceives and learns the background around the missing part in the semantic segmentation picture lacking the target object, and the missing supplement unit then supplements the perceived surrounding background into the missing part. The missing filling module thus keeps the picture lacking the target object complete, avoiding a large missing or blank region that would make the whole picture abrupt, be visually jarring, and spoil its appearance. Because the fill colour for the missing part is learned by the environment perception unit from the surroundings of the missing part, the filled region blends in with the surrounding background, improving the overall appearance of the picture after matting.
In an embodiment of the present invention, the edge processing module comprises: an edge acquisition unit, a feature fitting unit and a smoothing unit;
the edge acquisition unit is used for acquiring information on the edge feature points of the target object in the semantic segmentation picture;
the feature fitting unit is used for performing curve fitting on the edge feature points acquired by the edge acquisition unit to obtain a fitted linear relation among the feature points;
and the smoothing unit is used for smoothing the edge of the target object in the semantic segmentation picture according to the fitted linear relation among the feature points to obtain a semantic segmentation picture with a smooth edge.
In this technical scheme, when smoothing the edge of the target object to obtain an edge-smoothed semantic segmentation picture, the edge processing module first acquires information on the feature points along the edge of the target object in the semantic segmentation picture through the edge acquisition unit, then performs curve fitting on these feature points through the feature fitting unit, and finally smooths the edge according to the fitted curve through the smoothing unit. This leaves the edge of the target object in the semantic segmentation picture smooth, avoids protruding corners along the edge, and keeps the edge line of the matted object smooth.
As shown in fig. 5, an embodiment of the present invention provides an automatic matting method based on an image recognition technology, including:
s1, uploading a picture needing to be scratched;
s2, carrying out target object recognition on the picture needing to be scratched through a target object recognition model to obtain a target object recognition picture;
s3, performing semantic segmentation on the target object identification picture to obtain a semantic segmentation picture;
s4, smoothing the edge of the target object in the semantic segmentation picture to obtain a semantic segmentation picture with smooth edge;
and S5, scratching out the target object in the semantic segmentation picture with smooth edge to finish the scratching.
In the technical scheme, when automatic cutout is carried out, firstly, a picture needing cutout is uploaded to a target object identification model; then, carrying out target object identification on the picture needing to be scratched through a target object identification model, and identifying a target object in the picture needing to be scratched to obtain a target object identification picture; secondly, performing semantic segmentation on the target object identification picture to obtain a semantic segmentation picture; then smoothing the edge of the target object in the semantic segmentation picture to obtain a semantic segmentation picture with smooth edge; and finally, the target object is extracted from the semantic segmentation picture with smooth edge to finish the extraction. Through the technical scheme, when the picture is scratched, the target object in the picture is selected through the target object identification model, so that the aim of manually selecting the target object is avoided, the target object is automatically identified and selected through the target object identification model, the time for selecting the target object is saved, the speed for selecting the target object is increased, the scratching speed is increased, the manual selection of the target object is not needed, the labor and material resources are saved, the scratching cost is saved, the target object identification model identification efficiency is high, the target object identification can be carried out on a large number of pictures in a short time, the scratching efficiency is effectively increased, in addition, in the scratching process, the manual complicated operation is not needed, the scratching process is simplified, and the scratching error is not easy to occur, the quality of the sectional drawing is influenced, and the accuracy of the sectional drawing is improved. 
Semantic segmentation distinguishes the target object in the picture from the surrounding background, increasing the contrast between the target and its environment. As a result, the edge of the target object can be detected quickly during smoothing, yielding a soft, smooth edge and an accurate selection, which improves matting accuracy. A smoothed edge also blends better: wherever the extracted target is placed, it does not look abrupt against its new background, so the applicability of the extracted target improves along with the matting accuracy.
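The five-step flow described above can be sketched end to end. Everything below is a stand-in chosen only to make the data flow concrete: a brightness threshold plays the role of the trained recognition model, and a 3x3 majority vote plays the role of the real edge-smoothing step; neither is the patent's actual method.

```python
import numpy as np

def identify_target(img):
    # Stand-in for the trained recognition model (S2): a simple
    # brightness threshold marks candidate target pixels.
    return img > 128

def semantic_segment(mask):
    # Stand-in for semantic segmentation (S3): keep the mask as a
    # binary foreground/background labeling.
    return mask.astype(np.uint8)

def smooth_edges(seg):
    # Stand-in for edge smoothing (S4): a 3x3 majority vote removes
    # jagged single-pixel protrusions on the boundary.
    padded = np.pad(seg, 1)
    acc = sum(padded[i:i + seg.shape[0], j:j + seg.shape[1]]
              for i in range(3) for j in range(3))
    return (acc >= 5).astype(np.uint8)

def matte(img, seg):
    # S5: extract the target by zeroing background pixels.
    out = img.copy()
    out[seg == 0] = 0
    return out

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200                # synthetic "target object"
seg = smooth_edges(semantic_segment(identify_target(img)))
cut = matte(img, seg)
```

In a real system each function would be replaced by the corresponding module (recognition model, segmentation network, smoothing unit), but the interfaces between steps stay the same.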
As shown in fig. 6 and fig. 9, in an embodiment provided by the present invention, the method further includes S20, performing learning training on the target object recognition model before it performs target object recognition on the picture to be matted;
and S6, after the target object is extracted from the edge-smoothed semantic segmentation picture, filling in the missing part of the semantic segmentation picture that now lacks the target object.
In the above technical scheme, the target object recognition model undergoes learning training before it performs target object recognition on the picture to be matted, so that it can recognize the target object as soon as it appears; this improves both the accuracy and the efficiency of target object recognition. Filling in the missing part after the target object is extracted from the edge-smoothed semantic segmentation picture keeps the remaining picture looking complete, avoiding the conspicuous blank that would otherwise be left where the target was cut out; the blank created by the extraction is filled in, improving the appearance of the picture that lacks the target object.
As shown in fig. 7, in an embodiment provided by the present invention, the process of performing learning training on the target object recognition model at S20 includes:
S201, determining the characteristics of a target object;
S202, labeling the target object in the picture containing the target object to obtain a picture of the labeled target object;
S203, performing learning training on the target object recognition model according to the picture of the labeled target object, and obtaining an optimized target object recognition model.
In this technical scheme, the target object recognition model is optimized through learning training before it performs target object recognition on the picture to be matted. First, the characteristics of the target object are determined; the target object is then labeled in pictures according to those characteristics, producing pictures with labeled target objects; finally, the recognition model is trained on the labeled pictures, yielding an optimized target object recognition model. The optimized model can automatically recognize the target object in a picture containing it, improving recognition accuracy. Training on a large number of pictures makes the model able to recognize the target object as soon as it is encountered, improving both the sensitivity and the accuracy of recognition, so that the optimized model recognizes the target object reliably when the picture to be matted is processed.
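The patent does not name a model architecture for S201 to S203, so the following is only a hedged sketch of the train-then-recognize cycle: a nearest-centroid classifier over hand-picked feature vectors stands in for the recognition model, and the labeled feature vectors stand in for the annotated pictures.

```python
import numpy as np

# S201/S202: hypothetical labeled feature vectors (e.g. mean colour,
# relative size) for patches annotated as target (1) or background (0).
features = np.array([[0.9, 0.8], [0.85, 0.9], [0.1, 0.2], [0.2, 0.1]])
labels = np.array([1, 1, 0, 0])

# S203: "learning training" reduces here to computing one centroid
# per class from the labeled examples.
centroids = {c: features[labels == c].mean(axis=0) for c in (0, 1)}

def recognize(feature):
    # The "optimized model" assigns a patch to the nearest class centroid.
    dists = {c: np.linalg.norm(feature - mu) for c, mu in centroids.items()}
    return min(dists, key=dists.get)
```

A production system would replace the centroid step with training a detection or segmentation network on the labeled pictures, but the shape of the procedure (determine features, label, train, then recognize) is the same.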
As shown in fig. 8, in an embodiment provided by the present invention, the step S6 of performing missing part filling on the semantic segmentation picture lacking the target object includes:
S61, in the semantic segmentation picture lacking the target object, performing environment perception learning on the background around the missing part;
S62, supplementing the perceived background around the missing part into the missing part.
In the above technical solution, after the target object is extracted from the edge-smoothed semantic segmentation picture, the picture lacking the target object is processed as follows: first, environment perception learning is performed on the background around the missing part, obtaining the image information surrounding the hole; the perceived background is then supplemented into the missing part, so that the filled region stays consistent with, and blends into, its surroundings. With this scheme the picture remains complete after automatic matting, avoiding a blank patch where the target used to be, which would be visually jarring and spoil both the picture's presentation and the viewer's impression. Because the fill colors are learned by perceiving the environment around the missing part, the filled region merges with the surrounding background, and the appearance of the picture lacking the target object is improved.
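A minimal sketch of S61 and S62, under the assumption that "environment perception" can be approximated by reading the hole's known neighbours: real systems use learned inpainting, but here each pass replaces hole pixels with the mean of their already-known 4-neighbours, eroding the hole inward until it is filled.

```python
import numpy as np

def fill_missing(img, hole):
    # Iteratively propagate surrounding background values into the hole.
    img = img.astype(float).copy()
    hole = hole.copy()
    while hole.any():
        ys, xs = np.nonzero(hole)
        for y, x in zip(ys, xs):
            vals = [img[j, i]
                    for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= j < img.shape[0] and 0 <= i < img.shape[1]
                    and not hole[j, i]]
            if vals:                     # at least one known neighbour:
                img[y, x] = np.mean(vals)  # fill and shrink the hole
                hole[y, x] = False
    return img

bg = np.full((6, 6), 50.0)          # uniform background
bg[2:4, 2:4] = 0.0                  # blank left after the target was cut out
hole = np.zeros((6, 6), dtype=bool)
hole[2:4, 2:4] = True
filled = fill_missing(bg, hole)
```

On this uniform background the hole is restored exactly; on textured backgrounds a diffusion fill like this blurs detail, which is why learned, context-aware inpainting is preferred in practice.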
In an embodiment provided by the present invention, the smoothing of the edge of the target object in the semantic segmentation picture to obtain the semantic segmentation picture with a smooth edge includes:
acquiring information of edge feature points of a target object from a semantic segmentation picture;
performing curve fitting on the edge characteristic points according to the information of the edge characteristic points of the target object to obtain a fitting linear relation between the characteristic points;
smoothing the edge of the target object in the semantic segmentation picture according to the fitted linear relation between the feature points to obtain the semantic segmentation picture with smooth edge;
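As a rough sketch of the fitting step, assuming nothing about the patent's actual fitting method, a least-squares polynomial (`np.polyfit`) can play the role of the "fitted relation between the feature points": the jagged edge ordinates are replaced by values sampled from the fitted curve, which smooths out pixel-level jitter.

```python
import numpy as np

# Hypothetical edge feature points (x, y) extracted from the
# segmentation: a roughly straight boundary with one-pixel jitter.
x = np.arange(10, dtype=float)
y = 0.5 * x + 0.4 * np.array([0, 1, -1, 0, 1, -1, 0, 1, -1, 0])

# Fit a low-degree curve through the feature points and resample
# the boundary along it.
coeffs = np.polyfit(x, y, deg=2)
smooth_y = np.polyval(coeffs, x)

# Second differences measure local roughness: the smoothed edge
# should bend far less from point to point than the raw one.
raw_rough = np.abs(np.diff(y, 2)).max()
fit_rough = np.abs(np.diff(smooth_y, 2)).max()
```

A spline through ordered boundary points would serve equally well; the point is only that connecting the feature points along a fitted curve, rather than pixel to pixel, is what makes the edge smooth and rounded.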
after the target object edge is subjected to smoothing processing, sharpening processing is performed on the smooth target object edge, and the method comprises the following steps:
firstly, acquiring pixel values of corresponding positions at two sides of a smooth edge of a target object with a smooth edge;
then, judging whether sharpening processing is needed or not according to the following formula;
F = abs{[(∑aᵢ²)² − (∑Aᵢ²)²] ÷ (∑aᵢ² + ∑Aᵢ²)}

[second formula, defining the sharpening judgment result G from F, δ, and n, given only as an image in the original]

in the above formulas, F represents a sharpening judgment value, abs represents a function taking the positive value, aᵢ represents the pixel value of the ith point outside the smooth edge of the target object, Aᵢ represents the pixel value of the ith point inside the smooth edge of the target object, G represents the sharpening judgment result, δ represents a difference value, and n represents a parameter for which larger values are better, taken as 5; G = 1 indicates that sharpening is required, and G = 0 indicates that it is not;
and finally, sharpening the edge of the target object according to the judgment result.
In this technical scheme, smoothing the edge of the target object in the semantic segmentation picture proceeds as follows: first, the information of the target object's edge feature points is obtained from the semantic segmentation picture; curve fitting is then applied to those feature points according to their information, giving a fitted relation between them; finally, the edge is smoothed according to that fitted relation, yielding the edge-smoothed semantic segmentation picture. Curve fitting the more prominent corner points on the target's edge produces a curve relation between the edge feature points, and connecting the feature points along that curve makes the edge of the target object smooth and rounded, so the extracted outline is cleaner. In addition, after smoothing, the smooth edge is sharpened: a sharpening judgment value is calculated and a sharpening judgment result obtained, so that sharpening is applied only where the edge shows an obvious pixel difference. This saves time and improves efficiency while keeping the edge of the extracted target clear, eliminating blurred edges in the result.
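The sharpening judgment value F is given in the text, so it can be computed directly; the second formula, which maps F to the binary result G using the difference value δ and n = 5, appears only as an image in the source, so the threshold rule `F > n * delta` below is a hypothetical stand-in for it, not the patent's actual rule.

```python
import numpy as np

def sharpening_judgment(outside, inside, delta, n=5):
    # a_i: pixel values just outside the smoothed edge;
    # A_i: pixel values just inside it.
    a2 = np.sum(np.square(outside, dtype=float))   # sum of a_i^2
    A2 = np.sum(np.square(inside, dtype=float))    # sum of A_i^2
    # F = abs{[(sum a_i^2)^2 - (sum A_i^2)^2] / (sum a_i^2 + sum A_i^2)},
    # which algebraically reduces to |sum a_i^2 - sum A_i^2|.
    F = abs((a2 ** 2 - A2 ** 2) / (a2 + A2))
    G = 1 if F > n * delta else 0   # hypothetical decision rule for G
    return F, G

outside = np.array([200, 210, 205])   # bright background side of the edge
inside = np.array([40, 50, 45])       # dark target side of the edge
F, G = sharpening_judgment(outside, inside, delta=1000.0)
```

Note that because (x² − y²) ÷ (x + y) = x − y, F is simply the magnitude of the difference between the two sums of squared pixel values, so G fires only where the inside and outside of the edge differ markedly, which matches the stated intent of sharpening only at obvious pixel differences.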
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. An automatic matting system based on image recognition technology, characterized in that the system comprises: the system comprises an image input module, a target identification module, a semantic segmentation module, an edge processing module and an automatic matting module;
the picture input module is used for uploading pictures needing to be scratched;
the target identification module is used for carrying out target object identification on the picture needing to be scratched through the target object identification model to obtain a target object identification picture;
the semantic segmentation module is used for performing semantic segmentation on the target object identification picture to obtain a semantic segmentation picture;
the edge processing module is used for smoothing the edge of the target object to obtain a semantic segmentation picture with smooth edge; the edge processing module comprises: the device comprises an edge obtaining unit, a feature fitting unit and a smoothing unit;
the edge acquisition unit is used for acquiring the information of the edge feature points of the target object in the semantic segmentation picture;
the characteristic fitting unit is used for performing curve fitting on the edge characteristic points acquired by the edge acquisition unit to acquire a fitting linear relation among the characteristic points;
the smoothing unit is used for smoothing the edge of the target object in the semantic segmentation picture according to the fitted linear relation between the feature points to obtain the semantic segmentation picture with smooth edge;
after the target object edge is subjected to smoothing processing, sharpening processing is performed on the smooth target object edge, and the method comprises the following steps:
firstly, acquiring pixel values of corresponding positions at two sides of a smooth edge of a target object with a smooth edge;
then, judging whether sharpening processing is needed or not according to the following formula;
F = abs{[(∑aᵢ²)² − (∑Aᵢ²)²] ÷ (∑aᵢ² + ∑Aᵢ²)}

[second formula, defining the sharpening judgment result G from F, δ, and n, given only as an image in the original]

in the above formulas, F represents a sharpening judgment value, abs represents a function taking the positive value, aᵢ represents the pixel value of the ith point outside the smooth edge of the target object, Aᵢ represents the pixel value of the ith point inside the smooth edge of the target object, G represents the sharpening judgment result, δ represents a difference value, n represents a parameter, n = 5; G = 1 indicates that the sharpening process is required, and G = 0 indicates that it is not;
finally, sharpening the edge of the target object according to the judgment result;
and the automatic matting module is used for matting out the target object in the semantic segmentation picture with smooth edge.
2. The system of claim 1, further comprising a model training module and a miss-padding module;
the model training module is used for training the target object recognition model before the target recognition module carries out target object recognition on the picture needing to be scratched through the target object recognition model;
and the missing filling module is used for filling the missing part of the missing target object after the target object is scratched out from the semantic segmentation picture with smooth edge.
3. The system of claim 2, wherein the model training module comprises: the system comprises a target characteristic determining unit, a target object labeling unit and a model learning training unit;
the target characteristic determination unit is used for determining the characteristics of the target object;
the target object labeling unit is used for labeling the target object in the picture containing the target object to obtain the picture of the labeled target object;
and the model learning and training unit is used for learning and training the target object recognition model according to the image marked with the target object.
4. The system of claim 2, wherein the miss-padding module comprises: a context sensing unit and a missing supplement unit;
the environment perception unit is used for carrying out environment perception learning on the background around the missing part in the semantic segmentation picture lacking the target object;
the missing supplement unit is used for supplementing the background around the perceived missing part into the missing part.
5. An automatic cutout method based on an image recognition technology is characterized by comprising the following steps:
uploading a picture to be scratched;
carrying out target object identification on the picture needing to be scratched through a target object identification model to obtain a target object identification picture;
performing semantic segmentation on the target object identification picture to obtain a semantic segmentation picture;
smoothing the edge of the target object in the semantic segmentation picture to obtain a semantic segmentation picture with smooth edge; the method comprises the following steps:
acquiring information of edge feature points of a target object from a semantic segmentation picture;
performing curve fitting on the edge characteristic points according to the information of the edge characteristic points of the target object to obtain a fitting linear relation between the characteristic points;
smoothing the edge of the target object in the semantic segmentation picture according to the fitted linear relation between the feature points to obtain the semantic segmentation picture with smooth edge;
after the target object edge is subjected to smoothing processing, sharpening processing is performed on the smooth target object edge, and the method comprises the following steps:
firstly, acquiring pixel values of corresponding positions at two sides of a smooth edge of a target object with a smooth edge;
then, judging whether sharpening processing is needed or not according to the following formula;
F = abs{[(∑aᵢ²)² − (∑Aᵢ²)²] ÷ (∑aᵢ² + ∑Aᵢ²)}

[second formula, defining the sharpening judgment result G from F, δ, and n, given only as an image in the original]

in the above formulas, F represents a sharpening judgment value, abs represents a function taking the positive value, aᵢ represents the pixel value of the ith point outside the smooth edge of the target object, Aᵢ represents the pixel value of the ith point inside the smooth edge of the target object, G represents the sharpening judgment result, δ represents a difference value, n represents a parameter, n = 5; G = 1 indicates that the sharpening process is required, and G = 0 indicates that it is not;
finally, sharpening the edge of the target object according to the judgment result;
and extracting the target object from the edge-smoothed semantic segmentation picture to complete the matting.
6. The method according to claim 5, wherein learning training is further performed on the target object recognition model before it performs target object recognition on the picture to be matted;
and carrying out missing part filling on the semantic segmentation pictures lacking the target object after the target object is extracted from the semantic segmentation pictures with smooth edges.
7. The method of claim 6, wherein the learning training of the target object recognition model comprises:
determining characteristics of a target object;
labeling the target object in the picture containing the target object to obtain a picture of the labeled target object;
performing learning training on the target object recognition model according to the picture of the labeled target object, and obtaining an optimized target object recognition model.
8. The method according to claim 7, wherein the process of filling missing parts in the semantically segmented picture lacking the target object comprises:
in the semantic segmentation picture lacking the target object, performing environment perception learning on the background around the missing part;
the background around the perceived missing part is supplemented to the missing part.
CN202110062144.XA 2021-01-18 2021-01-18 Automatic image matting method and system based on image recognition technology Active CN112862851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110062144.XA CN112862851B (en) 2021-01-18 2021-01-18 Automatic image matting method and system based on image recognition technology

Publications (2)

Publication Number Publication Date
CN112862851A CN112862851A (en) 2021-05-28
CN112862851B true CN112862851B (en) 2021-10-15

Family

ID=76006416

Country Status (1)

Country Link
CN (1) CN112862851B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1892696A (en) * 2005-07-08 2007-01-10 深圳迈瑞生物医疗电子股份有限公司 Supersonic image edge-sharpening and speck-inhibiting method
CN101212563A (en) * 2006-12-29 2008-07-02 安凯(广州)软件技术有限公司 Noise estimation based partial image filtering method
CN102750681A (en) * 2012-07-04 2012-10-24 青岛海信信芯科技有限公司 Processing device and method for sharpening edge of image
CN103763460A (en) * 2014-01-20 2014-04-30 深圳市爱协生科技有限公司 Image sharpening method
CN105138984A (en) * 2015-08-24 2015-12-09 西安电子科技大学 Sharpened image identification method based on multi-resolution overshoot effect measurement
CN106603903A (en) * 2015-10-15 2017-04-26 中兴通讯股份有限公司 Photo processing method and apparatus
CN108491750A (en) * 2017-09-11 2018-09-04 上海南洋万邦软件技术有限公司 Face identification method
CN108876711A (en) * 2018-06-20 2018-11-23 山东师范大学 A kind of sketch generation method, server and system based on image characteristic point
CN109145922A (en) * 2018-09-10 2019-01-04 成都品果科技有限公司 A kind of automatically stingy drawing system
CN111402158A (en) * 2020-03-13 2020-07-10 西安科技大学 Method for clearing low-illumination fog dust image of fully mechanized coal mining face

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8600188B2 (en) * 2010-09-15 2013-12-03 Sharp Laboratories Of America, Inc. Methods and systems for noise reduction and image enhancement
US9361672B2 (en) * 2012-03-26 2016-06-07 Google Technology Holdings LLC Image blur detection

Also Published As

Publication number Publication date
CN112862851A (en) 2021-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant