CN111191659A - Multi-shape clothes hanger identification method for garment production system - Google Patents
- Publication number
- CN111191659A (application CN201911367810.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- clothes hanger
- target
- color
- clothes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a multi-shape clothes hanger identification method for a garment production system. A camera first collects hanger images, which are preprocessed and edge-extracted to obtain the target hanger; the edge information of the target hanger is completed with a closing filter; the target hanger is segmented with the K-means method; the edge map of the target hanger and the K-means-clustered image are extracted separately; the outer contour of the hanger edge map is extracted with a boundary tracking method; and, taking the outer contour as the filling boundary, hangers of various shapes are filled in with a flood filling method. The extracted hanger shapes of different types are stored in a database and matched by Euclidean distance, so that hangers of different shapes are identified autonomously. The invention solves the low efficiency caused by manual identification during the conveyance of differently shaped hangers against the complex background of existing garment production systems.
Description
Technical Field
The invention belongs to the technical field of clothes hanger detection and machine vision, and particularly relates to a multi-shape clothes hanger identification method for a garment production system.
Background
With the garment processing industry's strict demands on production efficiency, identifying clothes hangers against a complex background has become an important step on a garment production line. In particular, manually detecting and classifying differently shaped hangers while they are in motion seriously reduces processing efficiency. Research on automatically identifying hangers of different shapes can therefore raise the productivity of the garment processing line and indirectly advance automation.
Improving the identification and classification of different types of moving hangers in a complex manufacturing environment is an effective way to increase garment production efficiency. In recent years, image processing technology and intelligent terminal devices have been applied in many fields, so identifying hangers with a method of strong discrimination capability and high accuracy is the trend. A target hanger identification method based on image processing can be conveniently deployed on a microprocessor terminal with relatively low power consumption and cost; identifying hangers by a vision method is therefore of real significance.
Disclosure of Invention
The invention aims to provide a multi-shape clothes hanger identification method for a garment production system, solving the low efficiency caused by manual identification during the conveyance of differently shaped hangers against the complex background of existing garment production systems.
The invention adopts the technical scheme that the method for identifying the multi-shape clothes hanger of the clothing production system is implemented according to the following steps:
step 1, collecting a clothes hanger image through a camera, and preprocessing the image to enhance the effective characteristics of the image;
step 2, carrying out color channel ratio image transformation on the preprocessed clothes hanger image, realizing edge extraction on the target clothes hanger by selecting two colors, and obtaining an edge image of the target clothes hanger;
step 3, connecting the missing edge information in the extracted clothes hanger edge image through a closed filter to form a complete clothes hanger edge image;
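The "closed filter" of step 3 reads as a morphological closing (dilation followed by erosion), which bridges small gaps in the extracted edge map. Below is a minimal numpy sketch of binary closing with a 3×3 structuring element; the function names are ours, not the patent's, and the zero-padded border behavior is a simplifying assumption:

```python
import numpy as np

def dilate(b):
    """Binary dilation with a 3x3 square structuring element (zero-padded border)."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def erode(b):
    """Binary erosion with the same 3x3 element."""
    p = np.pad(b, 1)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def close_edges(edge):
    """Morphological closing = dilation then erosion; bridges
    one-pixel gaps in the extracted hanger edge map."""
    return erode(dilate(edge))

# a horizontal edge with a one-pixel gap at column 2
edge = np.array([[0, 0, 0, 0, 0],
                 [1, 1, 0, 1, 1],
                 [0, 0, 0, 0, 0]], dtype=bool)
closed = close_edges(edge)  # the gap at [1, 2] is now bridged
```

A production system would normally use a library routine (e.g. an OpenCV or scikit-image closing) with a structuring element sized to the expected gap width.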
step 4, applying a K-means clustering method to the hanger image obtained in step 2: divide the image into a region whose color is close to the hanger and a region whose color deviates strongly from it; select several initial cluster centers in each of the two regions; compute the distance from every pixel in a region to that region's initial cluster centers; according to the varying distance-range parameter of the clustering method, select the class of the cluster center closest to the hanger to be identified and take it as the region close to the hanger color, i.e. the target hanger region; then segment the target hanger to form the target hanger region image;
step 5, fusing the images obtained in the step 3 and the step 4 to obtain the edge and the area of the target clothes hanger in the image;
step 6, extracting the outer contour of the edge image of the target clothes hanger by using a boundary tracking method;
step 7, taking the image obtained by K-means clustering in step 4 as the central region of the target hanger and the outer contour obtained in step 6 as the filling boundary, and filling in hangers of various shapes with a flood filling method;
step 8, storing the hanger shapes of different types extracted as above in a database as templates, matching these templates against images acquired in real time on the production floor and processed by the same method, and realizing automatic identification of differently shaped hangers according to the Euclidean distance.
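The step-8 matching can be sketched as a nearest-template lookup under Euclidean distance. The feature vectors and hanger labels below are hypothetical placeholders (the patent does not specify the descriptor), so this is only an illustration of the matching rule:

```python
import numpy as np

def match_template(feature, templates):
    """Step-8-style matching: compare a hanger feature vector
    against stored templates and return the label of the
    nearest one under Euclidean distance."""
    labels = list(templates)
    dists = [np.linalg.norm(feature - templates[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# toy shape descriptors for two hypothetical hanger types
templates = {"hanger_A": np.array([1.0, 0.2, 0.7]),
             "hanger_B": np.array([0.1, 0.9, 0.3])}
observed = np.array([0.95, 0.25, 0.65])
best = match_template(observed, templates)  # → "hanger_A"
```

In practice the database would hold one descriptor per hanger shape, and a distance threshold would reject hangers that match no stored template.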
The present invention is also characterized in that,
In step 1, when the image is preprocessed, it is first judged from the input image whether it is a color cast image; if so, color cast correction is applied to obtain a corrected image. The correction proceeds as follows:
Step 1.1. Let the image be $f(x) = [f_r(x), f_g(x), f_b(x)]^T$, where $x$ denotes the pixel coordinate with dynamic range $[0, L]$, $L$ is the maximum pixel value of the image, and $f_r(x)$, $f_g(x)$, $f_b(x)$ are the three color channels. For each channel $f_c(x)$, $c \in \{r, g, b\}$, represent the image histogram as a $2 \times N$ matrix:
$$H_c = \begin{bmatrix} h_{c1} & \cdots & h_{cN} \\ p_{c1} & \cdots & p_{cN} \end{bmatrix} \tag{1.1}$$
where $N$ is the number of columns of the matrix, $H_c$ is the histogram matrix, $(h_{c1}, \dots, h_{cN})$ is the vector of the $N$ gray levels and $(p_{c1}, \dots, p_{cN})$ the vector of their corresponding probabilities. The histogram matrix of the original image is computed by formula (1.1), together with its maximum pixel value $L$.
Step 1.2. To drive the histogram distribution toward a uniform distribution, compute the distance between two adjacent gray levels of the step-1.1 histogram:
$$s_{cn} = h_{cn} - h_{c,n-1}$$
where $s_{cn}$ is the distance between two adjacent gray levels, $h_{cn}$ is the current gray level and $h_{c,n-1}$ the previous one. The algorithm finally enhances the effective feature information of the image and prepares it for the subsequent edge extraction.
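The 2×N histogram matrix of step 1.1 and the adjacent-gray-level distance of step 1.2 can be sketched in numpy. This is a minimal illustration under our own reading of the garbled formulas (gray levels in row 0, probabilities in row 1); the function names are ours:

```python
import numpy as np

def channel_histogram(channel, levels=256):
    """Build the 2 x N histogram matrix H_c of eq. (1.1):
    row 0 holds the N gray levels, row 1 their probabilities."""
    counts = np.bincount(channel.ravel(), minlength=levels)
    probs = counts / counts.sum()
    grays = np.arange(levels)
    return np.vstack([grays, probs])  # shape (2, N)

def adjacent_gray_distances(H):
    """s_cn = h_cn - h_{c,n-1}: spacing between adjacent gray
    levels; for an unremapped histogram this is uniformly 1."""
    return np.diff(H[0])

# tiny 4-level channel: gray 2 occupies half the pixels
channel = np.array([[0, 1, 1], [2, 2, 2]], dtype=np.uint8)
H = channel_histogram(channel, levels=4)  # H[1] = [1/6, 1/3, 1/2, 0]
```

The correction itself would then remap the gray levels so that the spacings `s_cn` push the distribution toward uniform, which the patent leaves unspecified.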
The color channel ratio image transformation of step 2 is as follows:
The ratio image is chosen according to prior color information of the target: in the ratio image, the pixel values of the region corresponding to the target are large and thus highlighted, while the pixel values of the other, non-target regions are small and thus suppressed. Accordingly, the two color channels in which the target color is pronounced are selected and recorded as the foreground channels, and the remaining color channel is recorded as the background channel. Image regions are then classified: where the foreground channels exceed the background channel, the region belongs to the target or to colors close to the target; where the foreground and background channels are approximately equal, the region has no pronounced color; where the foreground channels fall below the background channel, the region's color clearly differs from the target.
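One plausible reading of this transform, sketched below, is the sum of the two foreground channels divided by the background channel; the patent does not give the exact formula, so the ratio used here is an assumption for illustration:

```python
import numpy as np

def ratio_image(img, fg=('r', 'g'), bg='b'):
    """Color-channel-ratio transform (assumed form): sum of the two
    foreground channels over the background channel, so pixels whose
    color matches the target prior stand out with large values."""
    idx = {'r': 0, 'g': 1, 'b': 2}
    f = img.astype(np.float64) + 1e-6           # avoid division by zero
    fore = f[..., idx[fg[0]]] + f[..., idx[fg[1]]]
    back = f[..., idx[bg]]
    return fore / back

# a red/green-dominant target pixel vs. a blue-dominant background pixel
img = np.array([[[200, 80, 20], [20, 30, 200]]], dtype=np.uint8)
r = ratio_image(img)  # target pixel ratio >> background pixel ratio
```

Thresholding `r` around 1 then yields the three cases the text describes (foreground above, approximately equal to, or below the background channel).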
The hanger image color clustering of step 4 proceeds as follows:
Step 4.1. Input the data set $M$ containing $n$ elements, the number of class centers $k$ and the threshold $\xi$; set the initial value $I = 1$; with K-means clustering, select $k$ initial cluster centers $C_j(I)$, $j = 1, 2, \dots, k$, from $M$, the initial centers being points within the target hanger region.
Step 4.2. Compute the distance $D(x_i, Z_j(I))$, $i = 1, 2, \dots, n$, $j = 1, 2, \dots, k$, from each sample point in $M$ to the cluster centers. If
$$D(x_i, Z_m(I)) = \min_{j} D(x_i, Z_j(I)),$$
then $x_i$ belongs to the $m$-th class. This calculation also yields the maximum and minimum distances from the sample points of the hanger region to the hanger's initial cluster centers.
Step 4.3. Evaluate the sum-of-squared-error criterion function
$$J_c(I) = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - Z_j(I) \rVert^2$$
to obtain the distance from each pixel to its region's initial cluster center, in preparation for determining the class of the target hanger's cluster center.
Step 4.4. If $|J_c(I) - J_c(I-1)| < \xi$, the algorithm ends; otherwise set $I = I + 1$, compute $k$ new cluster centers and repeat steps 4.2–4.4. Finally a set partitioned into $k$ classes is output; comparing it with the data sets of differently shaped hangers stored in the database identifies one matching class, which is the target hanger region image. Here $C_j$ is an initial cluster center, $D(x_i, Z_j(I))$ is the distance from each sample point in the data set to a cluster center, $m$ is the index of the nearest class center, and $J_c$ is the output value of the clustering criterion function.
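Steps 4.1–4.4 describe standard K-means with a convergence test on the change in $J_c$. A minimal self-contained sketch (random initialization instead of the patent's hanger-region seeding, which is an assumption for brevity):

```python
import numpy as np

def kmeans(X, k, xi=1e-4, max_iter=100, seed=0):
    """Plain k-means on pixel feature vectors X (n x d):
    assign each sample to the nearest center (step 4.2),
    recompute centers, and stop when the change in the
    sum-of-squared-error criterion J_c falls below xi (step 4.4)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    J_prev = np.inf
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = d.argmin(axis=1)                       # step 4.2
        J = sum(((X[labels == j] - C[j]) ** 2).sum() for j in range(k))
        if abs(J_prev - J) < xi:                        # step 4.4
            break
        J_prev = J
        C = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                      else C[j] for j in range(k)])     # new centers
    return labels, C

# two well-separated "color" clusters (hanger-like vs. background-like)
X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
labels, C = kmeans(X, 2)
```

In the patent's setting, `X` would hold per-pixel color features, and the cluster whose center lies closest to the hanger's prior color would be kept as the target hanger region.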
the method has the advantages that the method for identifying the multi-shape clothes hangers in the clothing production system is used for identifying the clothes hangers under the complex background by fusing the characteristics of the shape outline and the image sample point clustering, and compared with a single background, the identification result is more effective; the problem of work efficiency low that exists when having improved traditional different clothes hanger discernment and categorised, improved the intellectuality of clothing manufacturing.
Drawings
FIG. 1 is a color space of hanger colors in a vision-based garment hanger identification method of the present invention;
fig. 2 is a schematic view of flood filling of an optimized target area in a vision-based garment hanger identification method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a method for identifying a multi-shape clothes hanger of a clothing production system, which is implemented according to the following steps:
step 1, collecting a clothes hanger image through a camera, and preprocessing the image to enhance the effective characteristics of the image;
As FIG. 1 shows, the colors in the hanger image collected by the camera are not sufficiently distinguishable for accurate identification of the various hangers, so a hanger color space is formed by extracting the hanger colors and fusing them with the RGB color standard.
Step 2, carrying out color channel ratio image transformation on the preprocessed clothes hanger image, realizing edge extraction on the target clothes hanger by selecting two colors, and obtaining an edge image of the target clothes hanger;
step 3, connecting the missing edge information in the extracted clothes hanger edge image through a closed filter to form a complete clothes hanger edge image;
step 4, applying a K-means clustering method to the hanger image obtained in step 2: divide the image into a region whose color is close to the hanger and a region whose color deviates strongly from it; select several initial cluster centers in each of the two regions; compute the distance from every pixel in a region to that region's initial cluster centers; according to the varying distance-range parameter of the clustering method, select the class of the cluster center closest to the hanger to be identified and take it as the region close to the hanger color, i.e. the target hanger region; then segment the target hanger to form the target hanger region image;
step 5, fusing the images obtained in the step 3 and the step 4 to obtain the edge and the area of the target clothes hanger in the image;
step 6, extracting the outer contour of the edge image of the target clothes hanger by using a boundary tracking method;
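The patent does not detail its boundary tracking algorithm; in its simplest form, the outer contour of a binary region consists of the foreground pixels that touch the background. A minimal numpy sketch of that reading (a stand-in for true contour *tracing*, which would additionally order the pixels along the boundary):

```python
import numpy as np

def outer_boundary(mask):
    """Mark foreground pixels that touch the background in
    4-connectivity; these are the outer contour pixels of the
    hanger region (zero-padded border counts as background)."""
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] &
                p[1:-1, :-2] & p[1:-1, 2:])
    return mask & ~interior

mask = np.ones((5, 5), dtype=bool)      # a solid 5x5 square
contour = outer_boundary(mask)          # its 16-pixel boundary ring
```

A full implementation would follow the contour pixel by pixel (e.g. Moore-neighbor tracing) to obtain an ordered boundary usable as the fill border in step 7.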
step 7, taking the image obtained by K-means clustering in step 4 as the central region of the target hanger and the outer contour obtained in step 6 as the filling boundary, and filling in hangers of various shapes with a flood filling method;
FIG. 2 shows the complete hanger shape obtained by flood filling. Because uniformly fine internal features cannot be extracted along the hanger edges, a complete image may fail to form; the unrecognized parts inside the hanger contour region are therefore filled in by flood filling.
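Flood filling from a seed inside the closed contour recovers the full hanger shape. A minimal breadth-first sketch (the seed choice and 4-connectivity are assumptions; the patent does not specify them):

```python
from collections import deque
import numpy as np

def flood_fill(boundary, seed):
    """Flood fill: grow from a seed inside the hanger contour,
    stopping at boundary pixels, so the full shape is recovered
    even where internal edge detail was lost. Returns the filled
    interior region (boundary excluded)."""
    h, w = boundary.shape
    filled = boundary.copy()
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if not (0 <= y < h and 0 <= x < w) or filled[y, x]:
            continue
        filled[y, x] = True
        q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return filled & ~boundary

# a closed 5x5 ring standing in for the hanger outer contour
ring = np.zeros((5, 5), dtype=bool)
ring[0, :] = ring[-1, :] = ring[:, 0] = ring[:, -1] = True
inside = flood_fill(ring, (2, 2))   # the 3x3 interior is filled
```

In step 7 the seed would come from the K-means target region of step 4, which guarantees it lies inside the contour.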
step 8, storing the hanger shapes of different types extracted as above in a database as templates, matching these templates against images acquired in real time on the production floor and processed by the same method, and realizing automatic identification of differently shaped hangers according to the Euclidean distance.
When the image is preprocessed in step 1, it is judged from the input image whether it is a color cast image; if so, color cast correction is applied to obtain a corrected image. The correction proceeds as follows:
Step 1.1. Let the image be $f(x) = [f_r(x), f_g(x), f_b(x)]^T$, where $x$ denotes the pixel coordinate with dynamic range $[0, L]$, $L$ is the maximum pixel value of the image, and $f_r(x)$, $f_g(x)$, $f_b(x)$ are the three color channels. For each channel $f_c(x)$, $c \in \{r, g, b\}$, represent the image histogram as a $2 \times N$ matrix:
$$H_c = \begin{bmatrix} h_{c1} & \cdots & h_{cN} \\ p_{c1} & \cdots & p_{cN} \end{bmatrix} \tag{1.1}$$
where $N$ is the number of columns of the matrix, $H_c$ is the histogram matrix, $(h_{c1}, \dots, h_{cN})$ is the vector of the $N$ gray levels and $(p_{c1}, \dots, p_{cN})$ the vector of their corresponding probabilities. The histogram matrix of the original image is computed by formula (1.1), together with its maximum pixel value $L$.
Step 1.2. To drive the histogram distribution toward a uniform distribution, compute the distance between two adjacent gray levels of the step-1.1 histogram:
$$s_{cn} = h_{cn} - h_{c,n-1}$$
where $s_{cn}$ is the distance between two adjacent gray levels, $h_{cn}$ is the current gray level and $h_{c,n-1}$ the previous one. The algorithm finally enhances the effective feature information of the image and prepares it for the subsequent edge extraction.
The color channel ratio image transformation of step 2 is as follows:
The ratio image is chosen according to prior color information of the target: in the ratio image, the pixel values of the region corresponding to the target are large and thus highlighted, while the pixel values of the other, non-target regions are small and thus suppressed. Accordingly, the two color channels in which the target color is pronounced are selected and recorded as the foreground channels, and the remaining color channel is recorded as the background channel. Image regions are then classified: where the foreground channels exceed the background channel, the region belongs to the target or to colors close to the target; where the foreground and background channels are approximately equal, the region has no pronounced color; where the foreground channels fall below the background channel, the region's color clearly differs from the target.
The hanger image color clustering of step 4 proceeds as follows:
Step 4.1. Input the data set $M$ containing $n$ elements, the number of class centers $k$ and the threshold $\xi$; set the initial value $I = 1$; with K-means clustering, select $k$ initial cluster centers $C_j(I)$, $j = 1, 2, \dots, k$, from $M$, the initial centers being points within the target hanger region.
Step 4.2. Compute the distance $D(x_i, Z_j(I))$, $i = 1, 2, \dots, n$, $j = 1, 2, \dots, k$, from each sample point in $M$ to the cluster centers. If
$$D(x_i, Z_m(I)) = \min_{j} D(x_i, Z_j(I)),$$
then $x_i$ belongs to the $m$-th class. This calculation also yields the maximum and minimum distances from the sample points of the hanger region to the hanger's initial cluster centers.
Step 4.3. Evaluate the sum-of-squared-error criterion function
$$J_c(I) = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - Z_j(I) \rVert^2$$
to obtain the distance from each pixel to its region's initial cluster center, in preparation for determining the class of the target hanger's cluster center.
Step 4.4. If $|J_c(I) - J_c(I-1)| < \xi$, the algorithm ends; otherwise set $I = I + 1$, compute $k$ new cluster centers and repeat steps 4.2–4.4. Finally a set partitioned into $k$ classes is output; comparing it with the data sets of differently shaped hangers stored in the database identifies one matching class, which is the target hanger region image. Here $C_j$ is an initial cluster center, $D(x_i, Z_j(I))$ is the distance from each sample point in the data set to a cluster center, $m$ is the index of the nearest class center, and $J_c$ is the output value of the clustering criterion function.
the vision-based clothes hanger identification method is characterized by improving the informatization level on a production line, effectively, intelligently, stably and durably realizing the classification and identification of clothes hangers, and simultaneously being applied to a mobile terminal to meet the portable requirement.
Claims (4)
1. The method for identifying the multi-shape clothes hanger of the clothing production system is characterized by comprising the following steps:
step 1, collecting a clothes hanger image through a camera, and preprocessing the image to enhance the effective characteristics of the image;
step 2, carrying out color channel ratio image transformation on the preprocessed clothes hanger image, realizing edge extraction on the target clothes hanger by selecting two colors, and obtaining an edge image of the target clothes hanger;
step 3, connecting the missing edge information in the extracted clothes hanger edge image through a closed filter to form a complete clothes hanger edge image;
step 4, applying a K-means clustering method to the hanger image obtained in step 2: dividing the image into a region whose color is close to the hanger and a region whose color deviates strongly from it; selecting several initial cluster centers in each of the two regions; computing the distance from every pixel in a region to that region's initial cluster centers; according to the varying distance-range parameter of the clustering method, selecting the class of the cluster center closest to the hanger to be identified and taking it as the region close to the hanger color, i.e. the target hanger region; and segmenting the target hanger to form the target hanger region image;
step 5, fusing the images obtained in the step 3 and the step 4 to obtain the edge and the area of the target clothes hanger in the image;
step 6, extracting the outer contour of the edge image of the target clothes hanger by using a boundary tracking method;
step 7, taking the image obtained by K-means clustering in step 4 as the central region of the target hanger and the outer contour obtained in step 6 as the filling boundary, and filling in hangers of various shapes with a flood filling method;
step 8, storing the hanger shapes of different types extracted as above in a database as templates, matching these templates against images acquired in real time on the production floor and processed by the same method, and realizing automatic identification of differently shaped hangers according to the Euclidean distance.
2. The multi-shape clothes hanger identification method for a garment production system according to claim 1, wherein in step 1, when the image is preprocessed, it is judged from the input image whether it is a color cast image; if so, color cast correction is applied to obtain a corrected image, the correction comprising:
step 1.1, letting the image be $f(x) = [f_r(x), f_g(x), f_b(x)]^T$, where $x$ denotes the pixel coordinate with dynamic range $[0, L]$, $L$ is the maximum pixel value of the image, and $f_r(x)$, $f_g(x)$, $f_b(x)$ are the three color channels; for each channel $f_c(x)$, $c \in \{r, g, b\}$, representing the image histogram as a $2 \times N$ matrix:
$$H_c = \begin{bmatrix} h_{c1} & \cdots & h_{cN} \\ p_{c1} & \cdots & p_{cN} \end{bmatrix} \tag{1.1}$$
where $N$ is the number of columns of the matrix, $H_c$ is the histogram matrix, $(h_{c1}, \dots, h_{cN})$ is the vector of the $N$ gray levels and $(p_{c1}, \dots, p_{cN})$ the vector of their corresponding probabilities, the histogram matrix of the original image and its maximum pixel value $L$ being computed by formula (1.1);
step 1.2, to drive the histogram distribution toward a uniform distribution, computing the distance between two adjacent gray levels of the step-1.1 histogram:
$$s_{cn} = h_{cn} - h_{c,n-1}$$
where $s_{cn}$ is the distance between two adjacent gray levels, $h_{cn}$ is the current gray level and $h_{c,n-1}$ the previous one, the algorithm finally enhancing the effective feature information of the image and preparing it for the subsequent edge extraction.
3. The multi-shape clothes hanger identification method for a garment production system according to claim 1, wherein the color channel ratio image transformation of step 2 is as follows:
the ratio image is chosen according to prior color information of the target: in the ratio image, the pixel values of the region corresponding to the target are large and thus highlighted, while the pixel values of the other, non-target regions are small and thus suppressed; accordingly, the two color channels in which the target color is pronounced are selected and recorded as the foreground channels, and the remaining color channel is recorded as the background channel; image regions are then classified: where the foreground channels exceed the background channel, the region belongs to the target or to colors close to the target; where the foreground and background channels are approximately equal, the region has no pronounced color; where the foreground channels fall below the background channel, the region's color clearly differs from the target.
4. The multi-shape clothes hanger identification method for a garment production system according to claim 1, wherein the hanger image color clustering of step 4 comprises:
step 4.1, inputting the data set $M$ containing $n$ elements, the number of class centers $k$ and the threshold $\xi$; setting the initial value $I = 1$; with K-means clustering, selecting $k$ initial cluster centers $C_j(I)$, $j = 1, 2, \dots, k$, from $M$, the initial centers being points within the target hanger region;
step 4.2, computing the distance $D(x_i, Z_j(I))$, $i = 1, 2, \dots, n$, $j = 1, 2, \dots, k$, from each sample point in $M$ to the cluster centers; if
$$D(x_i, Z_m(I)) = \min_{j} D(x_i, Z_j(I)),$$
then $x_i$ belongs to the $m$-th class, this calculation also yielding the maximum and minimum distances from the sample points of the hanger region to the hanger's initial cluster centers;
step 4.3, evaluating the sum-of-squared-error criterion function $J_c$:
$$J_c(I) = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - Z_j(I) \rVert^2$$
to obtain the distance from each pixel to its region's initial cluster center, in preparation for determining the class of the target hanger's cluster center;
step 4.4, if $|J_c(I) - J_c(I-1)| < \xi$, ending the algorithm; otherwise setting $I = I + 1$, computing $k$ new cluster centers and repeating steps 4.2–4.4; finally outputting a set partitioned into $k$ classes and comparing it with the data sets of differently shaped hangers stored in the database to obtain one matching class, which is the target hanger region image; wherein $C_j$ is an initial cluster center, $D(x_i, Z_j(I))$ is the distance from each sample point in the data set to a cluster center, $m$ is the index of the nearest class center, and $J_c$ is the output value of the clustering criterion function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911367810.XA CN111191659B (en) | 2019-12-26 | 2019-12-26 | Multi-shape clothes hanger identification method for clothing production system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911367810.XA CN111191659B (en) | 2019-12-26 | 2019-12-26 | Multi-shape clothes hanger identification method for clothing production system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111191659A true CN111191659A (en) | 2020-05-22 |
CN111191659B CN111191659B (en) | 2023-04-28 |
Family
ID=70708017
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911367810.XA Active CN111191659B (en) | 2019-12-26 | 2019-12-26 | Multi-shape clothes hanger identification method for clothing production system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111191659B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814877A (en) * | 2020-07-09 | 2020-10-23 | 北京服装学院 | Large-scale group performance activity clothing live-action exhibition system and method |
CN114186299A (en) * | 2021-12-09 | 2022-03-15 | 上海百琪迈科技(集团)有限公司 | Method for generating and rendering three-dimensional clothing seam effect |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130064460A1 (en) * | 2010-06-01 | 2013-03-14 | Hewlett-Packard Development Company, L.P. | Image Clustering a Personal Clothing Model |
US20140310304A1 (en) * | 2013-04-12 | 2014-10-16 | Ebay Inc. | System and method for providing fashion recommendations |
CN108154087A (en) * | 2017-12-08 | 2018-06-12 | 北京航天计量测试技术研究所 | A kind of matched infrared human body target detection tracking method of feature based |
CN108764062A (en) * | 2018-05-07 | 2018-11-06 | 西安工程大学 | A kind of clothing cutting plate recognition methods of view-based access control model |
- 2019-12-26 CN CN201911367810.XA patent/CN111191659B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130064460A1 (en) * | 2010-06-01 | 2013-03-14 | Hewlett-Packard Development Company, L.P. | Image Clustering a Personal Clothing Model |
US20140310304A1 (en) * | 2013-04-12 | 2014-10-16 | Ebay Inc. | System and method for providing fashion recommendations |
CN108154087A (en) * | 2017-12-08 | 2018-06-12 | Beijing Aerospace Institute for Metrology and Measurement Technology | Feature-matching-based infrared human body target detection and tracking method |
CN108764062A (en) * | 2018-05-07 | 2018-11-06 | Xi'an Polytechnic University | Vision-based garment cut-piece recognition method |
Non-Patent Citations (1)
Title |
---|
Yong Qiwei et al., "Ground oil and gas pipeline identification method based on UAV line-inspection images", Journal of Ordnance Equipment Engineering * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814877A (en) * | 2020-07-09 | 2020-10-23 | Beijing Institute of Fashion Technology | Large-scale group performance activity clothing live-action exhibition system and method |
CN111814877B (en) * | 2020-07-09 | 2023-11-24 | Beijing Institute of Fashion Technology | System and method for live-action exhibition of large-scale group performance activity clothing |
CN114186299A (en) * | 2021-12-09 | 2022-03-15 | Shanghai Baiqimai Technology (Group) Co., Ltd. | Method for generating and rendering three-dimensional clothing seam effect |
CN114186299B (en) * | 2021-12-09 | 2023-12-15 | Shanghai Baiqimai Technology (Group) Co., Ltd. | Method for generating and rendering three-dimensional clothing seam effect |
Also Published As
Publication number | Publication date |
---|---|
CN111191659B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111368683B (en) | Face image feature extraction method and face recognition method based on modular constraint CenterFace | |
WO2022099598A1 (en) | Video dynamic target detection method based on relative statistical features of image pixels | |
CN108537239B (en) | Method for detecting image saliency target | |
CN107230188B (en) | Method for eliminating video motion shadow | |
CN109325507B (en) | Image classification method and system combining super-pixel saliency features and HOG features | |
CN112508090A (en) | External package defect detection method | |
CN111583279A (en) | Super-pixel image segmentation method based on PCBA | |
CN109145964B (en) | Method and system for realizing image color clustering | |
CN106157323A | Insulator segmentation and extraction method combining dynamic segmentation threshold and block search | |
CN103295013A | Paired-region-based single-image shadow detection method | |
CN115331119B (en) | Solid waste identification method | |
CN103456013A | Method for representing superpixels and measuring similarity between superpixels | |
CN105069816B | Method and system for entrance and exit pedestrian flow statistics | |
CN112132153B (en) | Tomato fruit identification method and system based on clustering and morphological processing | |
CN111191659A (en) | Multi-shape clothes hanger identification method for garment production system | |
CN105825233A (en) | Pedestrian detection method based on random fern classifier of online learning | |
CN109086772A | Recognition method and system for distorted and touching character image verification codes | |
CN107657276B | Weakly supervised semantic segmentation method based on searching semantic class clusters | |
CN106295649A (en) | Target identification method based on contour features | |
CN107886493A | Loose-strand defect detection method for power transmission line conductors | |
CN113344047A (en) | Platen state identification method based on improved K-means algorithm | |
CN102968618A (en) | Static hand gesture recognition method fused with BoF model and spectral clustering algorithm | |
CN111127407B (en) | Fourier transform-based style migration forged image detection device and method | |
CN107256545B | Broken-hole defect detection method for large circular knitting machines | |
Lu et al. | Clustering based road detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||