CN112330634B - Edge fine matting method and system for clothing

Info

Publication number: CN112330634B (application CN202011224064.1A; also published as CN112330634A)
Authority: CN (China)
Prior art keywords: clothing, image, boundary, module, pixel point
Legal status: Active (granted)
Inventors: 李小波, 石矫龙, 李昆仑
Assignee: Hengxin Shambala Culture Co ltd
Other languages: Chinese (zh)
Priority/filing date: 2020-11-05
Publication of CN112330634A: 2021-02-05
Grant and publication of CN112330634B: 2024-07-19

Classifications

    • G06T 7/0004 Image analysis - inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/24 Pattern recognition - classification techniques
    • G06T 7/12 Segmentation; edge detection - edge-based segmentation
    • G06T 7/136 Segmentation; edge detection - involving thresholding
    • G06T 7/62 Analysis of geometric attributes - of area, perimeter, diameter or volume
    • G06T 7/90 Image analysis - determination of colour characteristics
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/20081 Training; learning
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30124 Fabrics; textile; paper


Abstract

The application relates to the technical field of image processing, and in particular to a method and a system for fine edge matting of clothing. The method comprises the following steps: step S110, performing feature recognition on a captured clothing picture or clothing video so as to classify the clothing in the picture or video into a predetermined category; step S120, correcting the preliminary clothing boundary extracted from the clothing picture or video according to the boundary parameters of the category into which the clothing was classified, to obtain an accurate clothing boundary; and step S130, segmenting the clothing image out of the clothing picture or video according to the accurate clothing boundary. The application ensures both that the outline of the extracted subject image is complete and that the subject image is extracted thoroughly.

Description

Edge fine matting method and system for clothing
Technical Field
The application relates to the technical field of image processing, in particular to a method and a system for fine edge matting of clothing.
Background
When capturing images, the subject's color is often close to the background color. For example, when photographing clothing, the color of the garment may be close to the color of its surroundings. In this case it is difficult to extract the subject image from the captured picture or video; even when a subject image can be extracted, the closeness of subject and background colors leaves large defects in the extracted outline, and the extracted subject image may still contain background or other regions, so the extraction is incomplete.
Therefore, how to ensure both that the outline of the extracted subject image is complete and that the subject image is extracted thoroughly is a technical problem that currently needs to be solved by those skilled in the art.
Disclosure of Invention
The application provides a method and a system for finely matting edges of clothing, which are used for guaranteeing that the outline of an extracted main body image is complete and guaranteeing that the main body image is thoroughly extracted.
In order to solve the technical problems, the application provides the following technical scheme:
A method for edge fine matting of garments, comprising the following steps: step S110, performing feature recognition on a captured clothing picture or clothing video so as to classify the clothing in the picture or video into a predetermined category; step S120, correcting the preliminary clothing boundary extracted from the clothing picture or video according to the boundary parameters of the category into which the clothing was classified, to obtain an accurate clothing boundary; and step S130, segmenting the clothing image out of the clothing picture or video according to the accurate clothing boundary.
The edge fine matting method for clothing as described above, wherein preferably, further comprises the following steps: step S140, calculating the area of each area surrounded by the boundary of the separated clothing image, so as to carry out smoothing treatment and filling treatment on the boundary of the separated clothing image; and step S150, identifying the mannequin part except the clothing in the clothing image, and removing the area occupied by the mannequin part to form an accurate clothing image.
The edge fine matting method for the clothing as described above, wherein it is preferable to perform feature recognition on the shot clothing picture or clothing video to classify the clothing in the clothing picture or clothing video into a predetermined category, includes the following sub-steps: step S111, converting each frame of image in the shot clothing picture or clothing video into a gray level image; step S112, expanding a low gray value part in the gray image, and compressing a high gray value part in the gray image to stretch the gray image; step S113, extracting the boundary of the garment from the stretched image; step S114, obtaining the characteristics of the clothes from the extracted clothes boundary, and inputting the characteristics of the clothes into a clothes classification model so as to classify the clothes into a predetermined category.
The edge fine matting method for clothing as described above, wherein the training to form the clothing classification model preferably comprises the following sub-steps: s101, collecting parameters of different types of clothes in advance to form a feature vector set; step S102, inputting the formed feature vector set into a DBN classification model, and training the DBN classification model to obtain different sub-classification models; and step S103, fusing different sub-classification models to obtain the clothing classification model.
The edge fine matting method for clothing as described above, wherein it is preferable to calculate the area of each region surrounded by the boundary of the clothing image, includes the following sub-steps: step S141, determining a certain number of folding points according to the curvature of each section of the boundary, and forming a polygon image through the folding points; and S142, performing area calculation on the polygon image to obtain the area of each area surrounded by the boundary of the clothing image.
An edge fine matting system for garments, comprising: an identification and classification module, a correction module and a segmentation module; the identification and classification module performs feature recognition on the captured clothing picture or clothing video so as to classify the clothing in the picture or video into a predetermined category; the correction module corrects the preliminary clothing boundary extracted from the clothing picture or video according to the boundary parameters of the category into which the clothing was classified, so as to obtain an accurate clothing boundary; and the segmentation module segments the clothing image out of the clothing picture or video according to the accurate clothing boundary.
An edge matting system for garments as described above, wherein preferably further comprising: a processing module and a mannequin removing module; the processing module calculates the area of each area surrounded by the boundary of the separated clothing image so as to carry out smoothing processing and filling processing on the boundary of the separated clothing image; the mannequin removal module identifies mannequin parts except for the clothing in the clothing image and removes the area occupied by the mannequin parts to form an accurate clothing image.
The edge fine matting system for clothing as described above, wherein preferably the identifying and classifying module comprises: the device comprises a gray level conversion module, a stretching module, an extraction module and a characteristic identification input module; the gray conversion module converts each frame of image in the shot clothing picture or clothing video into a gray image; the stretching module expands the low gray value part in the gray image, and compresses the high gray value part in the gray image so as to stretch the gray image; the extraction module extracts the boundary of the garment from the stretched image; the feature recognition input module obtains features of the garment from the extracted garment boundaries and inputs the features of the garment into the garment classification model to classify the garment into a predetermined category.
An edge matting system for garments as described above, wherein preferably the training module comprises: the system comprises a feature vector set forming module, a DBN classification model and a fusion module; the characteristic vector set forming module collects parameters of different types of clothes in advance to form a characteristic vector set; training the formed feature vector set by the DBN classification model to obtain different sub-classification models; and the fusion module fuses the different sub-classification models to obtain the clothing classification model.
An edge matting system for garments as described above, wherein preferably the processing module comprises: a polygon forming module and an area calculating module; the polygon forming module determines a certain number of folding points according to the curvature of each section of the boundary, and forms a polygon image through the folding points; the area calculation module calculates the area of the polygon image to obtain the area of each area surrounded by the boundary of the clothing image.
Compared with the background art, the edge fine matting method and system for the clothing can ensure that the outline of the extracted main body image is complete, and can also ensure that the main body image is extracted thoroughly.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from these drawings.
Fig. 1 is a flowchart of a method for edge fine matting for garments provided by an embodiment of the present application;
FIG. 2 is a flow chart of training to form a garment classification model provided by an embodiment of the present application;
FIG. 3 is a flow chart of a garment categorization provided by an embodiment of the present application;
FIG. 4 is a flow chart of area calculation provided by an embodiment of the present application;
fig. 5 is a schematic diagram of an edge fine matting system for garments according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of a method for edge fine matting for clothing according to an embodiment of the present application;
The application provides a method for finely matting edges of clothing, which comprises the following steps:
step S110, performing feature recognition on the shot clothing pictures or clothing videos so as to classify the clothing in the clothing pictures or the clothing videos into preset categories;
Before the method is used, parameters of different kinds of garments are acquired in advance, so that a garment classification model can be formed by training on these parameters. Specifically, as shown in fig. 2, this comprises the following sub-steps:
S101, collecting parameters of different types of clothes in advance to form a feature vector set;
A feature vector set P = {z_1, z_2, ..., z_i, ..., z_n} is formed by collecting parameters of different types of garments in advance, where z_i = {a_1, a_2, a_3, a_4, a_5, ..., a_g} is the parameter set of one garment, n is the number of parameter sets, a_1, ..., a_g are the parameters of one garment, and g is the number of parameters of one garment.
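As a minimal sketch of how such a feature vector set might be assembled (the patent does not fix g or the parameter list; the six measurements and their units below are assumptions for illustration), in Python:

    import numpy as np

    def make_feature_vector(length, width, sleeve_len, sleeve_width,
                            trouser_len, trouser_width):
        """Pack one garment's g = 6 assumed parameters into a vector z_i."""
        return np.array([length, width, sleeve_len, sleeve_width,
                         trouser_len, trouser_width], dtype=np.float32)

    # n garments -> feature vector set P with shape (n, g)
    P = np.stack([
        make_feature_vector(70, 50, 20, 18, 0, 0),   # a half-sleeve shirt
        make_feature_vector(72, 52, 60, 16, 0, 0),   # a long-sleeve shirt
        make_feature_vector(0, 0, 0, 0, 100, 25),    # a pair of long trousers
    ])
    labels = np.array([0, 1, 2])  # category index per garment, used later for training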
For example: the feature vector set P has 1000 half sleeves, 1000 long sleeves, and 1000 long trousers. This is, of course, an example, and in practice, will divide garments into finer categories, such as: the half sleeves are further divided into men's half sleeves, women's half sleeves, seven-half sleeves, nine-half sleeves and the like.
Step S102, inputting the formed feature vector set into a DBN classification model, and training the DBN classification model to obtain different sub-classification models;
The DBN (Deep Belief Network) model is a deep learning algorithm with good time evolution capability. The feature vector set P = {z_1, z_2, ..., z_n} is input into the DBN classification model, the DBN classification model is trained using the feature vector set P, and different sub-classification models D_t are obtained, where t = 1, 2, 3, ..., T, i.e. T sub-classification models.
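The patent does not specify the DBN architecture or training hyper-parameters. As a rough stand-in under those assumptions, a stacked-RBM feature extractor feeding a logistic-regression head (the usual DBN-like pipeline available in scikit-learn) can illustrate how T sub-classification models D_t might be trained; the layer size, learning rate and random stand-in data are all assumptions:

    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    def train_sub_classifier(P, labels, seed):
        """Train one sub-classification model D_t on the feature vector set P."""
        model = Pipeline([
            ("scale", MinMaxScaler()),           # RBMs expect inputs in [0, 1]
            ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                                 n_iter=20, random_state=seed)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        return model.fit(P, labels)

    # random stand-in data (60 garments, g = 6 parameters, 3 categories) so the
    # pipeline has something to fit; real training would use the collected set P
    rng = np.random.default_rng(0)
    P = rng.random((60, 6))
    labels = rng.integers(0, 3, 60)

    # T sub-classification models D_1 ... D_T, differing here only by random seed
    T = 3
    sub_models = [train_sub_classifier(P, labels, seed=t) for t in range(T)]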
Step S103, fusing different sub-classification models to obtain a clothing classification model;
Specifically, the different sub-classification models D_t are given corresponding weights β_t, and the classification model Y is obtained by the following formula (a weighted combination of the sub-classification models):

Y = Σ_{t=1}^{T} β_t · D_t
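A minimal sketch of this fusion, assuming each D_t outputs class probabilities and that the weights are normalized (both assumptions; it continues the stand-in models from the previous sketch):

    import numpy as np

    def fuse_predictions(sub_models, betas, X):
        """Weighted fusion Y = sum_t beta_t * D_t over the class-probability
        outputs of the T sub-classification models; argmax gives the category."""
        betas = np.asarray(betas, dtype=np.float64)
        betas = betas / betas.sum()              # normalize the weights
        proba = sum(b * m.predict_proba(X) for b, m in zip(betas, sub_models))
        return proba.argmax(axis=1)

    # e.g. equal weights beta_t = 1/T over the sub-models trained above
    pred = fuse_predictions(sub_models, [1.0] * len(sub_models), P)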
When the method is used, the garment is put on a mannequin; the garment and the mannequin are then placed in the shooting device together, and the garment is photographed from different angles to capture clothing pictures or clothing videos.
Feature recognition is then performed on the captured clothing picture or clothing video, and the clothing is classified into a predetermined category according to the recognized features.
Specifically, referring to fig. 3, feature recognition is performed on a taken clothing picture or clothing video to classify the clothing in the clothing picture or the clothing video into a predetermined category, including the following sub-steps:
step S111, converting each frame of image in the shot clothing picture or clothing video into a gray level image;
The values of the three color channels R, G, B of each pixel in each frame of the captured clothing picture or clothing video are set to the same value, so as to gray each frame. Specifically, the graying is performed by the formula R = G = B = w_r·R + w_g·G + w_b·B, where w_r, w_g and w_b are the weights of R, G and B respectively.
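A minimal sketch of this weighted graying (the patent leaves w_r, w_g, w_b unspecified; the ITU-R BT.601 defaults and the RGB channel order below are assumptions):

    import numpy as np

    def to_gray(frame, wr=0.299, wg=0.587, wb=0.114):
        """Set R = G = B = wr*R + wg*G + wb*B for every pixel and return the
        shared channel value, normalized to [0, 1] for the stretching step."""
        r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]  # assumes RGB order
        gray = wr * r + wg * g + wb * b
        return gray.astype(np.float64) / 255.0

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
    gray = to_gray(frame)

Normalizing to [0, 1] matches the preferred threshold m = 0.5 in the next step, which only makes sense for normalized gray values.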
Step S112, expanding a low gray value part in the gray image, and compressing a high gray value part in the gray image to stretch the gray image;
Specifically, each pixel in the gray image is transformed by the formula

s = 1 / (1 + (m/r)^E)

where s is the value of the transformed pixel, r is the original value of the pixel in the gray image, m is a threshold (preferably, m is 0.5) and E is a stretching parameter (preferably, E is 5); the gray image is stretched through this formula.
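A minimal sketch of this transformation (the closed form above is the classic contrast-stretching transform implied by the variables s, r, m and E; the published text omits the formula image):

    import numpy as np

    def stretch(gray, m=0.5, E=5):
        """Sigmoid-like stretch centred at m: values below m are pushed towards
        0 and values above m towards 1, raising contrast around the threshold.
        Assumes gray values in [0, 1]."""
        r = np.clip(gray, 1e-6, 1.0)   # avoid division by zero at r = 0
        return 1.0 / (1.0 + (m / r) ** E)

    gray = np.linspace(0.01, 1.0, 256).reshape(16, 16)  # stand-in gray image
    stretched = stretch(gray)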
Step S113, extracting the boundary of the garment from the stretched image;
Specifically, an absolute gradient value M is calculated for each pixel in the stretched image:

M(x, y) = |∂f/∂x| + |∂f/∂y|

where f(x, y) is the two-dimensional function corresponding to each pixel in the stretched image and x, y are the coordinates of the pixel. If the absolute gradient value M(x, y) of a pixel is greater than a preset threshold, the pixel is a boundary point of the garment. All boundary points of the garment in the stretched image are calculated in this way, and the set of all boundary points is extracted as the boundary of the garment.
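A minimal sketch, using finite differences for the partial derivatives (the absolute-gradient form is reconstructed from the variable definitions, and the threshold value is an assumption):

    import numpy as np

    def boundary_points(img, threshold=0.2):
        """M(x, y) = |df/dx| + |df/dy| via finite differences; pixels whose
        absolute gradient exceeds the threshold are garment boundary points."""
        gy, gx = np.gradient(img)            # numpy returns d/drow, d/dcol
        M = np.abs(gx) + np.abs(gy)
        ys, xs = np.nonzero(M > threshold)
        return list(zip(xs.tolist(), ys.tolist()))  # (x, y) boundary coordinates

    rng = np.random.default_rng(0)
    stretched = rng.random((64, 64))         # stand-in for the stretched image
    points = boundary_points(stretched)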
Step S114, obtaining the characteristics of the clothes from the extracted clothes boundary, and inputting the characteristics of the clothes into a clothes classification model so as to classify the clothes into a preset category;
Specifically, characteristic parameters such as the length, width, sleeve length, sleeve width, trousers length and trousers width are obtained according to the extracted clothing boundary, the obtained parameters are input into a clothing classification model Y, and clothing is classified into one of preset types such as long sleeves, short sleeves, long skirts, short skirts, trousers and shorts through the clothing classification model Y.
Referring to fig. 1, step S120, correcting the preliminary clothing boundary extracted from the clothing picture or the clothing video according to the classified clothing boundary parameters to obtain an accurate clothing boundary;
For example, after the garment is classified as a men's long-sleeve shirt by the garment classification model Y, the preset boundary parameters of men's long-sleeve shirts (for example, the proportional relations among parameters such as garment length, garment width, sleeve length and sleeve width) are obtained. The preliminary clothing boundary extracted from the clothing picture or clothing video in steps S111, S112 and S113 is then corrected according to these preset boundary parameters to obtain an accurate clothing boundary.
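The patent does not spell out the correction rule itself. One hedged reading is to rescale the preliminary boundary so that its bounding-box proportions match the category's preset length-to-width relation; the sketch below implements that assumed rule only for illustration:

    import numpy as np

    def correct_boundary(points, expected_len_to_width):
        """Rescale a preliminary boundary about its centroid so that the
        height/width ratio of its bounding box matches the preset proportion.
        This particular rescaling rule is an assumption, not the patent's text."""
        pts = np.asarray(points, dtype=np.float64)
        w = np.ptp(pts[:, 0])                # bounding-box width
        h = np.ptp(pts[:, 1])                # bounding-box height (assumed nonzero)
        cy = pts[:, 1].mean()
        scale_y = expected_len_to_width * w / h
        pts[:, 1] = cy + (pts[:, 1] - cy) * scale_y
        return pts

    corrected = correct_boundary([(0, 0), (10, 0), (10, 20), (0, 20)],
                                 expected_len_to_width=1.4)  # ratio value assumed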
Step S130, cutting out a clothing image from a clothing picture or a clothing video according to the accurate clothing boundary;
And carrying out image segmentation in each frame of image in the clothing picture or the clothing video according to the obtained accurate clothing boundary to obtain a clothing image.
Step S140, calculating the area of each area surrounded by the boundary of the separated clothing image, so as to carry out smoothing treatment and filling treatment on the boundary of the separated clothing image;
specifically, referring to fig. 4, calculating the area of each region surrounded by the boundary of the clothing image includes the following sub-steps:
step S141, determining a certain number of folding points according to the curvature of each section of the boundary, and forming a polygon image through the folding points;
The boundary of the clothing image encloses closed regions of different sizes, and the area of each closed region is calculated. Since the boundary enclosing each region is usually a curve, a certain number of fold points are determined according to the curvature of each segment of the curve: a vertex whose curvature is greater than a threshold is taken as a fold point, and adjacent fold points are then connected so that each region is converted into a polygon image. For example, w fold points are formed, whose coordinates are, in order, (c_1, d_1), (c_2, d_2), ..., (c_w, d_w).
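A minimal sketch of fold-point selection, using the turning angle between neighbouring boundary segments as a discrete stand-in for curvature (both the curvature estimate and the threshold are assumptions):

    import numpy as np

    def fold_points(curve, angle_threshold=0.3):
        """Keep the vertices of a closed boundary curve whose turning angle
        (a discrete curvature proxy) exceeds the threshold."""
        pts = np.asarray(curve, dtype=np.float64)
        folds = []
        n = len(pts)
        for k in range(n):
            a, b, c = pts[k - 1], pts[k], pts[(k + 1) % n]  # wrap: closed curve
            v1, v2 = b - a, c - b
            denom = np.linalg.norm(v1) * np.linalg.norm(v2)
            if denom == 0:
                continue
            cos_angle = np.clip(np.dot(v1, v2) / denom, -1.0, 1.0)
            if np.arccos(cos_angle) > angle_threshold:      # sharp enough turn
                folds.append((float(b[0]), float(b[1])))
        return folds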
Step S142, carrying out area calculation on the polygon image to obtain the area of each area surrounded by the boundary of the clothing image;
Specifically, the area H of the polygon image is:

H = (1/2) · | Σ_{k=1}^{w-1} (c_k·d_{k+1} - c_{k+1}·d_k) + (c_w·d_1 - c_1·d_w) |

where w is the number of fold points of the polygon image, (c_k, d_k) and (c_{k+1}, d_{k+1}) are the coordinates of adjacent fold points, and the subscripts k and k+1, with k running from 1 to w-1, indicate that c and d are the coordinates of different fold points; the final term closes the polygon from the w-th fold point back to the first.
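A minimal sketch of this shoelace computation over the fold points:

    def polygon_area(folds):
        """Shoelace formula H = 1/2 * |sum_k (c_k*d_{k+1} - c_{k+1}*d_k)|
        over the closed polygon of fold points (c_k, d_k)."""
        w = len(folds)
        acc = 0.0
        for k in range(w):
            c_k, d_k = folds[k]
            c_next, d_next = folds[(k + 1) % w]  # wrap from last fold point to first
            acc += c_k * d_next - c_next * d_k
        return abs(acc) / 2.0

    print(polygon_area([(0, 0), (10, 0), (10, 20), (0, 20)]))  # rectangle: 200.0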
Specifically, after the area of each region enclosed by the boundary of the clothing image is calculated, the boundaries corresponding to regions whose area is smaller than a threshold are removed from the boundary of the clothing image. This makes the boundary of the clothing image smoother and removes stray points formed along it.
On this basis, after a boundary whose enclosed area is smaller than the threshold has been removed, the region it enclosed is filled: pixels near the boundary are selected, and the region is filled with the features of those pixels.
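A minimal sketch of this cleanup, using OpenCV connected components for the small regions and inpainting as an assumed stand-in for "filling with the features of nearby pixels" (the area threshold and inpaint radius are also assumptions):

    import cv2
    import numpy as np

    def remove_small_regions(mask, area_threshold=50):
        """Erase connected components of the binary garment mask whose area is
        below the threshold; return the cleaned mask and the erased pixels."""
        n, labels_img, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        hole_mask = np.zeros_like(mask)
        for i in range(1, n):                     # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] < area_threshold:
                hole_mask[labels_img == i] = 255
        cleaned = cv2.bitwise_and(mask, cv2.bitwise_not(hole_mask))
        return cleaned, hole_mask

    def fill_from_neighbours(image, hole_mask):
        """Fill the removed regions of the colour image from surrounding pixels."""
        return cv2.inpaint(image, hole_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)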
Step S150, identifying a mannequin part except the clothing in the clothing image, and removing the area occupied by the mannequin part to form an accurate clothing image;
Specifically, the mannequin parts in the clothing image, for example the mannequin's hands, are identified by convex hull technology; the image inside each convex hull is removed, and the removed area is filled according to the background color to form the final clothing image.
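A minimal sketch of the convex-hull removal, assuming the mannequin points (e.g. hand pixels) have already been detected by some means and that the background color is uniform (both assumptions):

    import cv2
    import numpy as np

    def remove_mannequin(image, mannequin_points, background_color=(255, 255, 255)):
        """Compute the convex hull of the detected mannequin points, erase the
        image inside the hull, and paint it with the background colour."""
        pts = np.asarray(mannequin_points, dtype=np.int32).reshape(-1, 1, 2)
        hull = cv2.convexHull(pts)               # hull enclosing the mannequin part
        out = image.copy()
        cv2.fillConvexPoly(out, hull, background_color)
        return out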
Example two
Referring to fig. 5, fig. 5 is a schematic diagram of an edge fine matting system for clothing according to an embodiment of the present application;
The application also provides a system for edge fine matting of clothing, which comprises: an identification and classification module 510, a correction module 520, a segmentation module 530, a processing module 540 and a mannequin removal module 550.
The identification and classification module 510 performs feature recognition on the captured clothing pictures or clothing videos to classify the clothing in the pictures or videos into predetermined categories.
On the basis of the above, the edge fine matting system for clothing further comprises a training module 560, and before the method is used, the training module 560 trains parameters of different types of clothing acquired in advance to form a clothing classification model. Specifically, training module 560 includes: a feature vector set formation module 561, a DBN classification model 562, and a fusion module 563.
The feature vector set forming module 561 forms a feature vector set by the parameters of different kinds of clothing collected in advance;
Specifically, a feature vector set P = {z_1, z_2, ..., z_i, ..., z_n} is formed by collecting parameters of different kinds of clothing in advance, where z_i = {a_1, a_2, a_3, a_4, a_5, ..., a_g} is the parameter set of one garment, n is the number of parameter sets, a_1, ..., a_g are the parameters of one garment, and g is the number of parameters of one garment.
For example: the feature vector set P has 1000 half sleeves, 1000 long sleeves, and 1000 long trousers. This is, of course, an example, and in practice, will divide garments into finer categories, such as: the half sleeves are further divided into men's half sleeves, women's half sleeves, seven-half sleeves, nine-half sleeves and the like.
The DBN classification model 562 trains the formed feature vector set to obtain different sub-classification models;
The DBN (Deep Belief Network) model is a deep learning algorithm with good time evolution capability. The feature vector set P = {z_1, z_2, ..., z_n} is input into the DBN classification model, the DBN classification model is trained using the feature vector set P, and different sub-classification models D_t are obtained, where t = 1, 2, 3, ..., T, i.e. T sub-classification models.
The fusion module 563 fuses the different sub-classification models to obtain a clothing classification model;
Specifically, the different sub-classification models D_t are given corresponding weights β_t, and the classification model Y is obtained by the following formula (a weighted combination of the sub-classification models):

Y = Σ_{t=1}^{T} β_t · D_t
When the method is used, the garment is put on a mannequin; the garment and the mannequin are then placed in the shooting device together, and the garment is photographed from different angles to capture clothing pictures or clothing videos.
Feature recognition is then performed on the captured clothing picture or clothing video, and the clothing is classified into a predetermined category according to the recognized features.
Specifically, the identifying and classifying module 510 includes: a gray conversion module 511, a stretching module 512, an extraction module 513, and a feature recognition input module 514.
The gray conversion module 511 converts each frame image in the photographed clothing picture or clothing video into a gray image;
The values of the three color channels R, G, B of each pixel in each frame of the captured clothing picture or clothing video are set to the same value, so as to gray each frame. Specifically, the graying is performed by the formula R = G = B = w_r·R + w_g·G + w_b·B, where w_r, w_g and w_b are the weights of R, G and B respectively.
The stretching module 512 expands the low gray value part in the gray image and compresses the high gray value part in the gray image to stretch the gray image;
Specifically, each pixel in the gray image is transformed by the formula

s = 1 / (1 + (m/r)^E)

where s is the value of the transformed pixel, r is the original value of the pixel in the gray image, m is a threshold (preferably, m is 0.5) and E is a stretching parameter (preferably, E is 5); the gray image is stretched through this formula.
The extraction module 513 extracts the boundary of the garment from the stretched image;
Specifically, an absolute gradient value M is calculated for each pixel in the stretched image:

M(x, y) = |∂f/∂x| + |∂f/∂y|

where f(x, y) is the two-dimensional function corresponding to each pixel in the stretched image and x, y are the coordinates of the pixel. If the absolute gradient value M(x, y) of a pixel is greater than a preset threshold, the pixel is a boundary point of the garment. All boundary points of the garment in the stretched image are calculated in this way, and the set of all boundary points is extracted as the boundary of the garment.
The feature recognition input module 514 obtains features of the garment from the extracted garment boundaries and inputs the features of the garment into a garment classification model to classify the garment into a predetermined category;
Specifically, characteristic parameters such as the length, width, sleeve length, sleeve width, trousers length and trousers width are obtained according to the extracted clothing boundary, the obtained parameters are input into a clothing classification model Y, and clothing is classified into one of preset types such as long sleeves, short sleeves, long skirts, short skirts, trousers and shorts through the clothing classification model Y.
The correction module 520 corrects the preliminary clothing boundary extracted from the clothing picture or clothing video according to the boundary parameters of the category into which the clothing was classified, to obtain an accurate clothing boundary;
For example, after the garment is classified as a men's long-sleeve shirt by the garment classification model Y, the preset boundary parameters of men's long-sleeve shirts (for example, the proportional relations among parameters such as garment length, garment width, sleeve length and sleeve width) are obtained. The preliminary clothing boundary extracted from the clothing picture or clothing video is corrected according to these preset boundary parameters to obtain an accurate clothing boundary.
The segmentation module 530 segments the garment image from the garment picture or the garment video according to the exact garment boundary;
And carrying out image segmentation in each frame of image in the clothing picture or the clothing video according to the obtained accurate clothing boundary to obtain a clothing image.
The processing module 540 calculates the area of each area surrounded by the boundary of the segmented clothing image, so as to perform smoothing and filling processing on the boundary of the segmented clothing image;
specifically, the processing module 540 includes: a polygon forming module 541 and an area calculating module 542.
The polygon forming module 541 determines a certain number of folding points according to the curvature of each segment of the boundary, and forms a polygon image through the folding points;
The boundary of the clothing image encloses closed regions of different sizes, and the area of each closed region is calculated. Since the boundary enclosing each region is usually a curve, a certain number of fold points are determined according to the curvature of each segment of the curve: a vertex whose curvature is greater than a threshold is taken as a fold point, and adjacent fold points are then connected so that each region is converted into a polygon image. For example, w fold points are formed, whose coordinates are, in order, (c_1, d_1), (c_2, d_2), ..., (c_w, d_w).
The area calculation module 542 calculates the area of the polygon image to obtain the area of each region surrounded by the boundary of the clothing image;
Specifically, the area H of the polygon image is:

H = (1/2) · | Σ_{k=1}^{w-1} (c_k·d_{k+1} - c_{k+1}·d_k) + (c_w·d_1 - c_1·d_w) |

where w is the number of fold points of the polygon image, (c_k, d_k) and (c_{k+1}, d_{k+1}) are the coordinates of adjacent fold points, and the subscripts k and k+1, with k running from 1 to w-1, indicate that c and d are the coordinates of different fold points; the final term closes the polygon from the w-th fold point back to the first.
Specifically, after the area of each region enclosed by the boundary of the clothing image is calculated, the boundaries corresponding to regions whose area is smaller than a threshold are removed from the boundary of the clothing image. This makes the boundary of the clothing image smoother and removes stray points formed along it.
On this basis, after a boundary whose enclosed area is smaller than the threshold has been removed, the region it enclosed is filled: pixels near the boundary are selected, and the region is filled with the features of those pixels.
The mannequin removal module 550 identifies mannequin portions other than the garment in the garment image and removes the area occupied by the mannequin portions to form an accurate garment image;
Specifically, the mannequin parts in the clothing image, for example the mannequin's hands, are identified by convex hull technology; the image inside each convex hull is removed, and the removed area is filled according to the background color to form the final clothing image.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted for clarity only; the specification should be taken as a whole, and the embodiments may be combined as appropriate to form other implementations that will be apparent to those skilled in the art.

Claims (8)

1. A method for edge fine matting of clothing, comprising the steps of:
step S110, performing feature recognition on the shot clothing pictures or clothing videos so as to classify the clothing in the clothing pictures or the clothing videos into preset categories;
step S110 includes the following sub-steps:
step S111, converting each frame of image in the shot clothing picture or clothing video into a gray level image;
step S112, expanding a low gray value part in the gray image, and compressing a high gray value part in the gray image to stretch the gray image;
transforming each pixel point in the gray image by the formula s = 1 / (1 + (m/r)^E) so as to stretch it, wherein s is the value of the pixel point after transformation, r is the original value of the pixel point in the gray image, m is a threshold value, m is 0.5, E is a stretching parameter, and E is 5;
step S113, extracting the boundary of the garment from the stretched image;
calculating an absolute gradient value M of each pixel point in the stretched image: M(x, y) = |∂f/∂x| + |∂f/∂y|;
Wherein f (x, y) is a two-dimensional function corresponding to each pixel point in the stretched image, x, y are coordinate values of the pixel point, and if the absolute gradient value M (x, y) of the pixel point is greater than a preset threshold value, the pixel point belongs to a boundary point of the garment;
According to the relation between the absolute gradient value M (x, y) of each pixel point and a preset threshold value, calculating all boundary points of the garment in the stretched image, and taking a set formed by all boundary points of the garment as the boundary of the garment to extract;
Step S114, obtaining the characteristics of the clothes from the extracted clothes boundary, and inputting the characteristics of the clothes into a clothes classification model so as to classify the clothes into a preset category;
step S120, correcting the preliminary clothing boundary extracted from the clothing picture or clothing video according to the boundary parameters of the category into which the clothing was classified, to obtain an accurate clothing boundary;
Wherein, the boundary parameters of the clothing are the proportional relation of the clothing length, the clothing width, the sleeve length and the sleeve width;
and step S130, cutting out a clothing image from the clothing picture or the clothing video according to the accurate clothing boundary.
2. A method of edge fine matting for clothing as claimed in claim 1, characterised by the further step of:
step S140, calculating the area of each area surrounded by the boundary of the separated clothing image, so as to carry out smoothing treatment and filling treatment on the boundary of the separated clothing image;
And step S150, identifying the mannequin part except the clothing in the clothing image, and removing the area occupied by the mannequin part to form an accurate clothing image.
3. A method of edge fine matting for clothing according to claim 1 characterised in that training forms a clothing classification model comprising the sub-steps of:
S101, collecting parameters of different types of clothes in advance to form a feature vector set;
step S102, inputting the formed feature vector set into a DBN classification model, and training the DBN classification model to obtain different sub-classification models;
and step S103, fusing different sub-classification models to obtain the clothing classification model.
4. A method of edge matting for garments according to claim 2 characterised in that the area of each region bounded by the boundaries of the garment image is calculated comprising the sub-steps of:
Step S141, determining a certain number of folding points according to the curvature of each section of the boundary, and forming a polygon image through the folding points;
and S142, performing area calculation on the polygon image to obtain the area of each area surrounded by the boundary of the clothing image.
5. An edge fine matting system for garments, comprising: the device comprises an identification and classification module, a correction module and a segmentation module;
the identification and classification module performs feature identification on the shot clothing pictures or the shot clothing videos so as to classify the clothing in the clothing pictures or the clothing videos into preset categories;
The identification and classification module comprises: the device comprises a gray level conversion module, a stretching module, an extraction module and a characteristic identification input module;
the gray conversion module converts each frame of image in the shot clothing picture or clothing video into a gray image;
the stretching module expands the low gray value part in the gray image, and compresses the high gray value part in the gray image so as to stretch the gray image;
transforming each pixel point in the gray image by the formula s = 1 / (1 + (m/r)^E) so as to stretch it, wherein s is the value of the pixel point after transformation, r is the original value of the pixel point in the gray image, m is a threshold value, m is 0.5, E is a stretching parameter, and E is 5;
the extraction module extracts the boundary of the garment from the stretched image;
calculating an absolute gradient value M of each pixel point in the stretched image: M(x, y) = |∂f/∂x| + |∂f/∂y|;
Wherein f (x, y) is a two-dimensional function corresponding to each pixel point in the stretched image, x, y are coordinate values of the pixel point, and if the absolute gradient value M (x, y) of the pixel point is greater than a preset threshold value, the pixel point belongs to a boundary point of the garment;
According to the relation between the absolute gradient value M (x, y) of each pixel point and a preset threshold value, calculating all boundary points of the garment in the stretched image, and taking a set formed by all boundary points of the garment as the boundary of the garment to extract;
The characteristic recognition input module obtains the characteristics of the clothes from the extracted clothes boundary and inputs the characteristics of the clothes into the clothes classification model so as to classify the clothes into a preset category;
the correction module corrects the preliminary clothing boundary extracted from the clothing picture or clothing video according to the boundary parameters of the category into which the clothing was classified, so as to obtain an accurate clothing boundary;
Wherein, the boundary parameters of the clothing are the proportional relation of the clothing length, the clothing width, the sleeve length and the sleeve width;
the segmentation module segments the clothing image from the clothing picture or the clothing video according to the accurate clothing boundary.
6. An edge finishing system for clothing as claimed in claim 5, further comprising: a processing module and a mannequin removing module;
The processing module calculates the area of each area surrounded by the boundary of the separated clothing image so as to carry out smoothing processing and filling processing on the boundary of the separated clothing image;
the mannequin removal module identifies mannequin parts except for the clothing in the clothing image and removes the area occupied by the mannequin parts to form an accurate clothing image.
7. An edge fine matting system for clothing according to claim 5, characterised in that the training module comprises: the system comprises a feature vector set forming module, a DBN classification model and a fusion module;
The characteristic vector set forming module collects parameters of different types of clothes in advance to form a characteristic vector set;
training the formed feature vector set by the DBN classification model to obtain different sub-classification models;
and the fusion module fuses the different sub-classification models to obtain the clothing classification model.
8. An edge fine matting system for clothing as claimed in claim 6, characterised in that the processing module comprises: a polygon forming module and an area calculating module;
The polygon forming module determines a certain number of folding points according to the curvature of each section of the boundary, and forms a polygon image through the folding points;
The area calculation module calculates the area of the polygon image to obtain the area of each area surrounded by the boundary of the clothing image.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant