CN112784854A - Method, device and equipment for segmenting and extracting clothing color based on mathematical statistics - Google Patents

Method, device and equipment for segmenting and extracting clothing color based on mathematical statistics Download PDF

Info

Publication number
CN112784854A
CN112784854A (application CN202011620586.3A); granted as CN112784854B
Authority
CN
China
Prior art keywords
image
human body
classes
extraction
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011620586.3A
Other languages
Chinese (zh)
Other versions
CN112784854B (en)
Inventor
杨淼
谢宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Yunstare Technology Co ltd
Original Assignee
Chengdu Yunstare Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Yunstare Technology Co ltd filed Critical Chengdu Yunstare Technology Co ltd
Priority to CN202011620586.3A priority Critical patent/CN112784854B/en
Publication of CN112784854A publication Critical patent/CN112784854A/en
Application granted granted Critical
Publication of CN112784854B publication Critical patent/CN112784854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, a device and equipment for segmenting and extracting clothing color based on mathematical statistics. The method comprises: acquiring an image that contains a human body wearing the garment whose color is to be segmented and extracted; performing human body detection on the image to locate the human body and segmenting it to obtain a human body image slice; segmenting and extracting the main clothing colors from the human body image slice based on a DBSCAN clustering algorithm to obtain a color segmentation and extraction result; and outputting the result. Compared with traditional image processing methods such as image matching and principal component analysis, the method is simple in design and highly general; and because the DBSCAN clustering algorithm is used, the number of cluster centers does not need to be preset as it does with the K-means algorithm, so the result is more intuitive, more stable and closer to expectation. The solution of the present application is therefore highly practical.

Description

Method, device and equipment for segmenting and extracting clothing color based on mathematical statistics
Technical Field
The application relates to the technical field of computer image processing, in particular to a method, a device and equipment for segmenting and extracting clothing color based on mathematical statistics.
Background
In everyday life and work, people often need to retrieve clothing images; for example, after seeing a garment worn by a person in a video or a picture, they may want to look up information about that garment. Performing such retrieval requires extracting clothing features, most commonly color features. The traditional approach extracts and filters features of the image through image processing methods such as image matching and principal component analysis, and then performs feature matching to obtain the final result; the feature-extraction design is complex and not very general. There is also a method that classifies images with the K-means algorithm to extract color features, but it requires the number of cluster centers to be preset, and the final result is strongly affected by this preset parameter, so the result is unstable.
Disclosure of Invention
The application provides a method, a device and equipment for clothing color segmentation and extraction based on mathematical statistics, aiming to solve the problems that existing clothing color feature extraction methods are either not very general or produce unstable results.
The above object of the present application is achieved by the following technical solutions:
in a first aspect, an embodiment of the present application provides a clothing color segmentation and extraction method based on mathematical statistics, including:
acquiring an image; the image comprises a human body image, and the human body wears a garment to be subjected to color segmentation and extraction;
carrying out human body detection on the image to detect the position of a human body in the image, and segmenting the human body image to obtain a human body image slice;
based on a DBSCAN clustering algorithm, segmenting and extracting main colors of the clothes from the human body image slices to obtain color segmentation and extraction results;
and outputting the color segmentation and extraction result.
Optionally, the segmenting and extracting of the main colors of the clothing from the human body image slices based on the DBSCAN clustering algorithm to obtain color segmenting and extracting results includes:
preprocessing the human body image slice;
performing HLS color space transformation on the image obtained after preprocessing, splitting the channels to obtain separate H, L and S data, and splicing the H and S data into an N × 2 vector according to pixel coordinates to serve as the clustering data, where N = w_b × h_b and w_b, h_b are respectively the width and the height of the human body image slice;
generating M color classes based on a DBSCAN clustering algorithm;
counting the generated characteristic vector of each color class to obtain the size of each class;
if the number of clusters is greater than 3, namely M > 3, sorting the class sizes from large to small, calculating the proportion of each class relative to the size of the human body image slice, and taking the classes whose proportion is greater than or equal to a preset threshold as candidate classes; if the number of clusters is less than or equal to 3, namely M ≤ 3, taking all classes as candidate classes;
using the Euclidean distance formula, taking the image center point as a fixed end point and each image point p(i, j) as the other end point, connecting them into a line segment, and generating a distance mask of the same size as the human body image slice, in which the central pixel value is 0 and the pixel value grows with the distance from the center;
accumulating the values of the distance mask at the coordinates of the pixels of each candidate class, and taking the accumulated result as the error of that class;
and selecting the n classes with the minimum error from the candidate classes as the color segmentation and extraction result.
Optionally, the preprocessing comprises gaussian blurring and down-sampling.
Optionally, the acquiring the image includes:
a video is acquired and images are extracted from the video frame by frame.
Optionally, the performing human body detection on the image to detect a human body position in the image includes:
acquiring a set detection area;
generating a detection area image based on the single frame video image;
detecting the human body of the detection area image by using a pre-trained detection model; wherein the detection model is obtained based on deep learning model training.
Optionally, the deep learning model comprises a YOLOv3, YOLOv4, YOLOv5, Faster R-CNN, SSD or MTCNN model.
In a second aspect, an embodiment of the present application further provides a device for segmenting and extracting a color of a garment based on mathematical statistics, including:
the acquisition module is used for acquiring an image; the image comprises a human body image, and the human body wears a garment to be subjected to color segmentation and extraction;
the human body image slice generating module is used for carrying out human body detection on the image so as to detect the position of a human body in the image and segmenting the human body image to obtain a human body image slice;
the segmentation extraction module is used for segmenting and extracting main clothing colors of the human body image slices based on a DBSCAN clustering algorithm to obtain color segmentation extraction results;
and the output module is used for outputting the color segmentation and extraction result.
Optionally, the segmentation and extraction module is specifically configured to:
preprocessing the human body image slice;
performing HLS color space transformation on the image obtained after preprocessing, splitting the channels to obtain separate H, L and S data, and splicing the H and S data into an N × 2 vector according to pixel coordinates to serve as the clustering data, where N = w_b × h_b and w_b, h_b are respectively the width and the height of the human body image slice;
generating M color classes based on a DBSCAN clustering algorithm;
counting the generated characteristic vector of each color class to obtain the size of each class;
if the number of clusters is greater than 3, namely M > 3, sorting the class sizes from large to small, calculating the proportion of each class relative to the size of the human body image slice, and taking the classes whose proportion is greater than or equal to a preset threshold as candidate classes; if the number of clusters is less than or equal to 3, namely M ≤ 3, taking all classes as candidate classes;
using the Euclidean distance formula, taking the image center point as a fixed end point and each image point p(i, j) as the other end point, connecting them into a line segment, and generating a distance mask of the same size as the human body image slice, in which the central pixel value is 0 and the pixel value grows with the distance from the center;
accumulating the values of the distance mask at the coordinates of the pixels of each candidate class, and taking the accumulated result as the error of that class;
and selecting the n classes with the minimum error from the candidate classes as the color segmentation and extraction result.
Optionally, the obtaining module is specifically configured to:
a video is acquired and images are extracted from the video frame by frame.
In a third aspect, an embodiment of the present application further provides a clothing color segmentation and extraction device based on mathematical statistics, which includes:
a memory and a processor coupled to the memory;
the memory is used for storing a program, and the program is at least used for realizing the clothing color segmentation and extraction method based on mathematical statistics in any one of the first aspect;
the processor is used for calling and executing the program stored in the memory.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the technical scheme provided by the embodiment of the application, the image containing the human body is firstly obtained, the human body is detected, the human body image slice is obtained after the human body is detected, then the segmentation and extraction of the main color of the human body image slice are realized through the DBSCAN clustering algorithm, the design is simple, the universality is high, compared with the traditional image processing methods such as image matching and principal component analysis, the DBSCAN clustering algorithm is adopted, compared with the Kmeans algorithm, the number of clustering centers is not required to be preset, and therefore the effect is more visual and stable, and the expectation is better met. The solution of the present application is therefore highly practical.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow chart of a clothing color segmentation and extraction method based on mathematical statistics according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a device for segmenting and extracting a color of a garment based on mathematical statistics according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a device for segmenting and extracting a garment color based on mathematical statistics according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In order to solve the problems mentioned in the background, namely that existing clothing color feature extraction methods are either not very general or produce unstable results, the present application provides a clothing color segmentation and extraction method based on mathematical statistics, which is described in detail in the following embodiments.
Examples
Referring to fig. 1, fig. 1 is a schematic flowchart of a clothing color segmentation and extraction method based on mathematical statistics according to an embodiment of the present application. As shown in fig. 1, the method comprises at least the following steps:
s101: acquiring an image; the image comprises a human body image, and the human body wears a garment to be subjected to color segmentation and extraction;
specifically, there are various methods for acquiring the clothing image, for example, acquiring a video and extracting images from the video frame by frame, or directly acquiring a single picture image. The obtained video can be obtained from various ways, including downloading from a network, shooting by the user, and the like, and a single picture image is directly obtained in the same way as long as the finally obtained image comprises a human body and the human body wears clothes which the user wants to retrieve.
S102: carrying out human body detection on the image to detect the position of a human body in the image, and segmenting the human body image to obtain a human body image slice;
Specifically, the human body detection process can be set according to actual needs and is not limited here; for example, it can be implemented with a detection model trained in advance on the basis of a deep learning model. The process of segmenting the human body image to obtain a human body image slice refers to cutting out the human body part of the image (in effect the clothing part) to obtain one or more sub-images; this can be realized with existing methods and is not described in detail. It should be noted that the human body image and its slice mentioned in this embodiment refer to an image, and a slice, containing only a single human body.
Further, when step S101 acquires a video and extracts images from it frame by frame, step S102 of performing human body detection on the image to detect the position of the human body may comprise: acquiring a set detection area; generating a detection area image from the single video frame; and performing human body detection on the detection area image with a pre-trained detection model. The detection model may be obtained by training a deep learning model, and the deep learning models that may be adopted include the YOLOv3, YOLOv4, YOLOv5, Faster R-CNN, SSD and MTCNN models, with the YOLOv3 model preferred.
Regarding the set detection area: since the acquired image may contain several human bodies wearing different garments, when the color features of one particular garment are to be extracted the user needs to set a corresponding detection area; in effect, the user selects the target garment.
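A minimal sketch of step S102 is shown below. The detector interface (a detect method returning bounding boxes) is an assumption made for illustration, since the embodiment only states that a pre-trained deep learning detection model such as YOLOv3 is used; the region-of-interest format (x, y, w, h) is likewise illustrative:

```python
def person_slices(frame, roi, detector):
    """Crop the user-set detection area and cut one image slice per detected person (step S102).

    frame    -- a single video frame (BGR image as a NumPy array)
    roi      -- (x, y, w, h) detection area set by the user in advance
    detector -- any pre-trained person detector; assumed here to expose
                detect(image) -> list of (x, y, w, h) person boxes
    """
    x, y, w, h = roi
    region = frame[y:y + h, x:x + w]                     # detection area image
    slices = []
    for bx, by, bw, bh in detector.detect(region):       # hypothetical interface
        slices.append(region[by:by + bh, bx:bx + bw])    # human body image slice
    return slices
```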
In addition, before the method is applied, besides the detection area, the algorithm parameters need to be preset and stored, for example: the clustering radius alpha and the area proportion coefficient beta of the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm used in the subsequent steps; the preset threshold Thresh used to determine the candidate classes; the post-processing parameters of the deep learning model, such as the NMS (Non-Maximum Suppression) parameter rho, the confidence parameter sigma and the Top-K parameter tau; and so on.
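The parameters listed above could, for instance, be grouped in a single configuration object; the sketch below only illustrates this bookkeeping, and all default values are assumptions, not values taken from the application:

```python
from dataclasses import dataclass


@dataclass
class Params:
    """Pre-set algorithm parameters named in the description (all defaults are illustrative)."""
    alpha: float = 5.0    # DBSCAN clustering radius (eps)
    beta: float = 0.01    # area proportion coefficient; min_Pts = N * beta
    thresh: float = 0.10  # proportion threshold Thresh for candidate classes
    rho: float = 0.45     # NMS parameter of the detector post-processing
    sigma: float = 0.50   # detector confidence parameter
    tau: int = 10         # detector Top-K parameter
```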
S103: based on a DBSCAN clustering algorithm, segmenting and extracting main colors of the clothes from the human body image slices to obtain color segmentation and extraction results; the method specifically comprises the following steps:
preprocessing the human body image slice; preprocessing comprises Gaussian blurring and downsampling; Gaussian blur, also called Gaussian smoothing, is used to reduce image noise and the level of detail; in addition, for an image I of size M × N, downsampling it by a factor of s yields an image of resolution (M/s) × (N/s), i.e. each s × s window of the original image becomes one pixel whose value is the average of all pixels in the window; the preprocessing further optimizes the detection image, speeds up subsequent processing and further avoids unnecessary interference;
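A sketch of this preprocessing is given below, assuming OpenCV; the blur kernel size and the downsampling factor s are illustrative, as the application does not fix their values:

```python
import cv2


def preprocess(body_slice, s=2, ksize=5):
    """Gaussian blur followed by s-fold downsampling (s and ksize are illustrative values)."""
    blurred = cv2.GaussianBlur(body_slice, (ksize, ksize), 0)
    h, w = blurred.shape[:2]
    # Each s x s window of the original collapses into one pixel holding the window
    # average, giving an (M/s) x (N/s) image; cv2.INTER_AREA performs this box averaging.
    return cv2.resize(blurred, (w // s, h // s), interpolation=cv2.INTER_AREA)
```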
obtained after the pretreatmentThe image of (2) is subjected to HLS (Hue, brightness and Saturation) color space conversion, channel splitting is carried out to obtain single H, L, S data, H, S data are spliced into N multiplied by 2 vectors according to pixel coordinates, and the vectors are used as clustering data ClusterData, wherein N is wb×hb,wb,hbRespectively the width and the height of the human body image slice;
generating M color classes, namely class (class) based on DBSCAN clustering algorithm0,class1,class2,…classm-1) (ii) a Before the algorithm is applied, parameters such as an area proportion parameter beta, a clustering radius alpha and the like need to be set, during clustering, firstly, a DBSCAN parameter min _ P ts is calculated to be Nxbeta, then, clustering based on density is carried out on clustering data ClusterData according to the clustering radius alpha and the calculated min _ P ts, and M color classes can be obtained; compared with the Kmeans algorithm, the DBSCAN clustering algorithm does not need to preset the clustering number, has more visual and stable effect and better accords with the expectation;
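A sketch of this clustering step follows, assuming OpenCV for the HLS conversion and scikit-learn's DBSCAN implementation (the application does not mandate a specific implementation); in this sketch N is the pixel count of the preprocessed slice:

```python
import numpy as np
import cv2
from sklearn.cluster import DBSCAN  # assumption: scikit-learn's DBSCAN is used


def cluster_colors(pre_img, alpha, beta):
    """Build the N x 2 (H, S) clustering data and run DBSCAN with eps = alpha, min_samples = N * beta."""
    hls = cv2.cvtColor(pre_img, cv2.COLOR_BGR2HLS)
    H, L, S = cv2.split(hls)                            # single H, L, S channel data
    cluster_data = np.stack([H.ravel(), S.ravel()], axis=1).astype(np.float32)  # N x 2 vectors
    n = cluster_data.shape[0]
    min_pts = max(1, int(n * beta))                     # min_Pts = N * beta
    labels = DBSCAN(eps=alpha, min_samples=min_pts).fit_predict(cluster_data)
    return labels.reshape(pre_img.shape[:2])            # per-pixel class id; -1 marks DBSCAN noise
```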
counting the generated characteristic vector of each color class to obtain the size of each class;
if the number of clusters is greater than 3, namely M > 3, sorting the class sizes from large to small, calculating the proportion of each class relative to the size of the human body image slice, and taking the classes whose proportion is greater than or equal to the preset threshold Thresh as candidate classes; if the number of clusters is less than or equal to 3, namely M ≤ 3, taking all classes as candidate classes;
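A sketch of the class statistics and candidate selection is given below; the labels array is the per-pixel class map from the clustering sketch above, and ignoring the noise pixels labelled -1 is an assumption the application does not spell out:

```python
import numpy as np


def candidate_classes(labels, thresh):
    """Count each colour class and keep candidates: ratio >= thresh when M > 3, all classes when M <= 3."""
    ids, sizes = np.unique(labels[labels >= 0], return_counts=True)  # ignore DBSCAN noise (-1)
    order = np.argsort(sizes)[::-1]                                  # sort classes from large to small
    ids, sizes = ids[order], sizes[order]
    if len(ids) > 3:
        keep = sizes / labels.size >= thresh                         # proportion w.r.t. the slice size
        ids, sizes = ids[keep], sizes[keep]
    return ids, sizes
```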
using the Euclidean distance formula, taking the image center point (w_b/2, h_b/2) as a fixed end point and each image point p(i, j) as the other end point (where i, j are the horizontal and vertical coordinates of p respectively), connecting them into a line segment, and generating a distance mask Mask of the same size as the human body image slice; the central pixel value is 0, and the farther a pixel is from the center, the larger its value, i.e.:

Mask(i, j) = sqrt((i − w_b/2)^2 + (j − h_b/2)^2)
accumulating the values of the distance mask Mask at the coordinates of the pixels of each candidate class, and taking the accumulated result as the error er_n of that class:

er_n = Σ Mask(i, j), summed over all pixels (i, j) belonging to candidate class n
selecting the n classes with the minimum error from the candidate classes as the color segmentation and extraction result Choose, namely Choose = [Class_0, Class_1, …].
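The last three steps (distance mask, per-class error and final selection) could be sketched as follows; the mask formula matches the description above (distance of each pixel from the image centre), and the default n = 1 is purely illustrative:

```python
import numpy as np


def choose_classes(labels, candidate_ids, n=1):
    """Pick the n candidate classes whose pixels lie closest to the image centre."""
    h, w = labels.shape
    ii, jj = np.indices((h, w))
    # Mask(i, j): Euclidean distance from the image centre; 0 at the centre, growing outwards.
    mask = np.sqrt((jj - w / 2.0) ** 2 + (ii - h / 2.0) ** 2)
    # er_n: sum of mask values over the pixels of each candidate class.
    errors = [mask[labels == cid].sum() for cid in candidate_ids]
    order = np.argsort(errors)
    return [candidate_ids[k] for k in order[:n]]   # Choose = [Class_0, Class_1, ...]
```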
S104: and outputting the color segmentation and extraction result.
The output segmentation and extraction result is one or more color features, which can then be used for retrieval; the retrieval process itself is not limited by this application and is therefore not described in detail.
According to the above technical solution, an image containing a human body is first acquired, human body detection is performed to obtain a human body image slice, and the main clothing colors are then segmented and extracted from the slice with a DBSCAN clustering algorithm. Compared with traditional image processing methods such as image matching and principal component analysis, the design is simple and highly general; and because the DBSCAN clustering algorithm is used instead of the K-means algorithm, the number of cluster centers does not need to be preset, so the result is more intuitive, more stable and closer to expectation. The solution of the present application is therefore highly practical.
In addition, corresponding to the clothing color segmentation and extraction method based on mathematical statistics in the above embodiment, the embodiment of the present application further provides a clothing color segmentation and extraction device based on mathematical statistics. The apparatus is a functional aggregate based on software, hardware or a combination thereof in the corresponding device.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a clothing color segmentation and extraction device based on mathematical statistics according to an embodiment of the present application. As shown in fig. 2, the apparatus mainly includes the following structure:
an obtaining module 21, configured to obtain an image; the image comprises a human body image, and the human body wears a garment to be subjected to color segmentation and extraction;
a human body image slice generating module 22, configured to perform human body detection on the image to detect a human body position in the image, and segment the human body image to obtain a human body image slice;
the segmentation extraction module 23 is configured to perform segmentation extraction of main clothing colors on the human body image slices based on a DBSCAN clustering algorithm to obtain color segmentation extraction results;
and the output module 24 is configured to output the color segmentation and extraction result.
Optionally, the segmentation and extraction module 23 is specifically configured to:
preprocessing the human body image slice;
performing HLS color space transformation on the image obtained after preprocessing, splitting the channels to obtain separate H, L and S data, and splicing the H and S data into an N × 2 vector according to pixel coordinates to serve as the clustering data, where N = w_b × h_b and w_b, h_b are respectively the width and the height of the human body image slice;
generating M color classes based on a DBSCAN clustering algorithm;
counting the generated characteristic vector of each color class to obtain the size of each class;
if the number of clusters is greater than 3, namely M > 3, sorting the class sizes from large to small, calculating the proportion of each class relative to the size of the human body image slice, and taking the classes whose proportion is greater than or equal to a preset threshold as candidate classes; if the number of clusters is less than or equal to 3, namely M ≤ 3, taking all classes as candidate classes;
using the Euclidean distance formula, taking the image center point as a fixed end point and each image point p(i, j) as the other end point, connecting them into a line segment, and generating a distance mask of the same size as the human body image slice, in which the central pixel value is 0 and the pixel value grows with the distance from the center;
accumulating the values of the distance mask at the coordinates of the pixels of each candidate class, and taking the accumulated result as the error of that class;
and selecting the n classes with the minimum error from the candidate classes as the color segmentation and extraction result.
Optionally, the obtaining module 21 is specifically configured to:
a video is acquired and images are extracted from the video frame by frame.
The implementation method of the specific method steps executed by the functional modules may refer to the corresponding contents in the foregoing method embodiments, and will not be described in detail here.
In addition, corresponding to the clothing color segmentation and extraction method based on mathematical statistics in the embodiment, the embodiment of the application further provides a clothing color segmentation and extraction device based on mathematical statistics.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a clothing color segmentation and extraction device based on mathematical statistics according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
a memory 31 and a processor 32 connected to the memory 31;
the memory 31 is used for storing a program, and the program is at least used for realizing the clothing color segmentation and extraction method based on mathematical statistics;
the processor 32 is used to call and execute the program stored in the memory 31.
Wherein the device may be a PC, a mobile terminal or the like. In addition, the specific steps of the method implemented by the program can refer to the corresponding contents in the foregoing method embodiments, and are not described in detail here.
With this solution, after inputting an image the user can quickly obtain the segmentation and extraction result for the color of the target garment, i.e. its color features, which can in turn be used to retrieve the garment.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A clothing color segmentation and extraction method based on mathematical statistics is characterized by comprising the following steps:
acquiring an image; the image comprises a human body image, and the human body wears a garment to be subjected to color segmentation and extraction;
carrying out human body detection on the image to detect the position of a human body in the image, and segmenting the human body image to obtain a human body image slice;
based on a DBSCAN clustering algorithm, segmenting and extracting main colors of the clothes from the human body image slices to obtain color segmentation and extraction results;
and outputting the color segmentation and extraction result.
2. The method according to claim 1, wherein the segmenting and extracting of the main colors of the clothing from the human body image slices based on the DBSCAN clustering algorithm to obtain color segmenting and extracting results comprises:
preprocessing the human body image slice;
performing HLS color space transformation on the image obtained after preprocessing, splitting the channels to obtain separate H, L and S data, and splicing the H and S data into an N × 2 vector according to pixel coordinates to serve as the clustering data, where N = w_b × h_b and w_b, h_b are respectively the width and the height of the human body image slice;
generating M color classes based on a DBSCAN clustering algorithm;
counting the generated characteristic vector of each color class to obtain the size of each class;
if the number of clusters is greater than 3, namely M > 3, sorting the class sizes from large to small, calculating the proportion of each class relative to the size of the human body image slice, and taking the classes whose proportion is greater than or equal to a preset threshold as candidate classes; if the number of clusters is less than or equal to 3, namely M ≤ 3, taking all classes as candidate classes;
using the Euclidean distance formula, taking the image center point as a fixed end point and each image point p(i, j) as the other end point, connecting them into a line segment, and generating a distance mask of the same size as the human body image slice, in which the central pixel value is 0 and the pixel value grows with the distance from the center;
accumulating the values of the distance mask at the coordinates of the pixels of each candidate class, and taking the accumulated result as the error of that class;
and selecting the n classes with the minimum error from the candidate classes as the color segmentation and extraction result.
3. The method of claim 2, wherein the pre-processing comprises gaussian blurring and down-sampling.
4. The method of claim 1, wherein the acquiring the image comprises:
a video is acquired and images are extracted from the video frame by frame.
5. The method of claim 4, wherein the human detection of the image to detect the human location in the image comprises:
acquiring a set detection area;
generating a detection area image based on the single frame video image;
detecting the human body of the detection area image by using a pre-trained detection model; wherein the detection model is obtained based on deep learning model training.
6. The method of claim 5, wherein the deep learning model comprises a YOLOv3, YOLOv4, YOLOv5, Faster R-CNN, SSD or MTCNN model.
7. The utility model provides a clothing colour cuts apart extraction element based on mathematical statistics which characterized in that includes:
the acquisition module is used for acquiring an image; the image comprises a human body image, and the human body wears a garment to be subjected to color segmentation and extraction;
the human body image slice generating module is used for carrying out human body detection on the image so as to detect the position of a human body in the image and segmenting the human body image to obtain a human body image slice;
the segmentation extraction module is used for segmenting and extracting main clothing colors of the human body image slices based on a DBSCAN clustering algorithm to obtain color segmentation extraction results;
and the output module is used for outputting the color segmentation and extraction result.
8. The apparatus of claim 7, wherein the segmentation extraction module is specifically configured to:
preprocessing the human body image slice;
performing HLS color space transformation on the image obtained after preprocessing, splitting the channels to obtain separate H, L and S data, and splicing the H and S data into an N × 2 vector according to pixel coordinates to serve as the clustering data, where N = w_b × h_b and w_b, h_b are respectively the width and the height of the human body image slice;
generating M color classes based on a DBSCAN clustering algorithm;
counting the generated characteristic vector of each color class to obtain the size of each class;
if the number of clusters is greater than 3, namely M > 3, sorting the class sizes from large to small, calculating the proportion of each class relative to the size of the human body image slice, and taking the classes whose proportion is greater than or equal to a preset threshold as candidate classes; if the number of clusters is less than or equal to 3, namely M ≤ 3, taking all classes as candidate classes;
using the Euclidean distance formula, taking the image center point as a fixed end point and each image point p(i, j) as the other end point, connecting them into a line segment, and generating a distance mask of the same size as the human body image slice, in which the central pixel value is 0 and the pixel value grows with the distance from the center;
accumulating the values of the distance mask at the coordinates of the pixels of each candidate class, and taking the accumulated result as the error of that class;
and selecting the n classes with the minimum error from the candidate classes as the color segmentation and extraction result.
9. The apparatus of claim 7, wherein the obtaining module is specifically configured to:
a video is acquired and images are extracted from the video frame by frame.
10. A clothing color segmentation and extraction device based on mathematical statistics is characterized by comprising:
a memory and a processor coupled to the memory;
the memory is used for storing a program, and the program is at least used for realizing the clothing color segmentation and extraction method based on mathematical statistics as claimed in any one of claims 1 to 6;
the processor is used for calling and executing the program stored in the memory.
CN202011620586.3A 2020-12-30 2020-12-30 Clothing color segmentation extraction method, device and equipment based on mathematical statistics Active CN112784854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011620586.3A CN112784854B (en) 2020-12-30 2020-12-30 Clothing color segmentation extraction method, device and equipment based on mathematical statistics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011620586.3A CN112784854B (en) 2020-12-30 2020-12-30 Clothing color segmentation extraction method, device and equipment based on mathematical statistics

Publications (2)

Publication Number Publication Date
CN112784854A true CN112784854A (en) 2021-05-11
CN112784854B CN112784854B (en) 2023-07-14

Family

ID=75754302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011620586.3A Active CN112784854B (en) 2020-12-30 2020-12-30 Clothing color segmentation extraction method, device and equipment based on mathematical statistics

Country Status (1)

Country Link
CN (1) CN112784854B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106611431A (en) * 2015-10-22 2017-05-03 阿里巴巴集团控股有限公司 An image detection method and apparatus
CN107563396A (en) * 2017-08-10 2018-01-09 南京大学 The construction method of protection screen intelligent identifying system in a kind of electric inspection process
WO2020018086A1 (en) * 2018-07-18 2020-01-23 Hewlett-Packard Development Company, L.P. Clustering colors for halftoning
CN110020627A (en) * 2019-04-10 2019-07-16 浙江工业大学 A kind of pedestrian detection method based on depth map and Fusion Features
CN110188803A (en) * 2019-05-16 2019-08-30 南京图申图信息科技有限公司 The recognition methods of trip spatiotemporal mode and system based on taxi track data
CN110473333A (en) * 2019-07-11 2019-11-19 深圳怡化电脑股份有限公司 Detection method, detection device and the terminal of note number
CN110569859A (en) * 2019-08-29 2019-12-13 杭州光云科技股份有限公司 Color feature extraction method for clothing image
CN111060014A (en) * 2019-10-16 2020-04-24 杭州安脉盛智能技术有限公司 Online self-adaptive tobacco shred width measuring method based on machine vision
CN111444806A (en) * 2020-03-19 2020-07-24 成都云盯科技有限公司 Commodity touch information clustering method, device and equipment based on monitoring video
CN111563536A (en) * 2020-04-17 2020-08-21 福建帝视信息科技有限公司 Bamboo strip color self-adaptive classification method based on machine learning
CN111862116A (en) * 2020-07-15 2020-10-30 完美世界(北京)软件科技发展有限公司 Animation portrait generation method and device, storage medium and computer equipment
CN112115898A (en) * 2020-09-24 2020-12-22 深圳市赛为智能股份有限公司 Multi-pointer instrument detection method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO KUN; ZHANG YUJUN; ZHANG JIANLONG; WANG YONG: "Tiny-target detection method for UAV images based on SLIC hierarchical segmentation" (基于SLIC分层分割的无人机图像极小目标检测方法), Journal of Data Acquisition and Processing (数据采集与处理), no. 04, pages 93-101 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222971A (en) * 2021-05-31 2021-08-06 深圳市蝶讯网科技股份有限公司 Method for browsing styles by colors and collocation, computer equipment and storage medium
CN113902938A (en) * 2021-10-26 2022-01-07 稿定(厦门)科技有限公司 Image clustering method, device and equipment
CN113902938B (en) * 2021-10-26 2022-08-30 稿定(厦门)科技有限公司 Image clustering method, device and equipment
CN118648556A (en) * 2024-08-16 2024-09-17 中国水产科学研究院南海水产研究所 Dongxing spot body color improvement method and system based on illumination adjustment

Also Published As

Publication number Publication date
CN112784854B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN109325954B (en) Image segmentation method and device and electronic equipment
CN108921782B (en) Image processing method, device and storage medium
CN108717524B (en) Gesture recognition system based on double-camera mobile phone and artificial intelligence system
KR100860988B1 (en) Method and apparatus for object detection in sequences
KR20210149848A (en) Skin quality detection method, skin quality classification method, skin quality detection device, electronic device and storage medium
KR20160143494A (en) Saliency information acquisition apparatus and saliency information acquisition method
CN108154086B (en) Image extraction method and device and electronic equipment
CN110399842B (en) Video processing method and device, electronic equipment and computer readable storage medium
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN112329851B (en) Icon detection method and device and computer readable storage medium
CN111183630B (en) Photo processing method and processing device of intelligent terminal
CN112861661B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112749645B (en) Clothing color detection method, device and equipment based on monitoring video
CN109859236B (en) Moving object detection method, system, computing device and storage medium
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN112883940A (en) Silent in-vivo detection method, silent in-vivo detection device, computer equipment and storage medium
CN110958469A (en) Video processing method and device, electronic equipment and storage medium
CN111125390A (en) Database updating method and device, electronic equipment and computer storage medium
CN112784854A (en) Method, device and equipment for segmenting and extracting clothing color based on mathematical statistics
JP2004310475A (en) Image processor, cellular phone for performing image processing, and image processing program
CN112839167A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN117496019B (en) Image animation processing method and system for driving static image
KR20080079443A (en) Method and apparatus for extracting object from image
JP2018049559A (en) Image processor, image processing method, and program
Teixeira et al. Object segmentation using background modelling and cascaded change detection

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant