CN112116000B - Image recognition method for clothing types - Google Patents

Image recognition method for clothing types

Info

Publication number
CN112116000B
CN112116000B (application CN202010971943.4A)
Authority
CN
China
Prior art keywords
image
neural network
clothing
network model
edge information
Prior art date
Legal status
Active
Application number
CN202010971943.4A
Other languages
Chinese (zh)
Other versions
CN112116000A (en)
Inventor
雷李义
Current Assignee
Shenzhen Image Data Technology Co ltd
Original Assignee
Shenzhen Image Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Image Data Technology Co ltd filed Critical Shenzhen Image Data Technology Co ltd
Priority to CN202010971943.4A priority Critical patent/CN112116000B/en
Publication of CN112116000A publication Critical patent/CN112116000A/en
Application granted granted Critical
Publication of CN112116000B publication Critical patent/CN112116000B/en


Classifications

    • G06F18/24 Pattern recognition: classification techniques
    • G06N3/045 Neural networks: combinations of networks
    • G06T7/13 Image analysis: edge detection
    • G06V10/25 Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44 Feature extraction: local features (edges, contours, corners, strokes); connectivity analysis
    • G06V10/56 Feature extraction: colour features
    • G06V2201/07 Indexing scheme: target detection


Abstract

The image recognition method for clothing types comprises the following steps: cropping the clothing region from the picture; image preprocessing, in which image edges are detected and the edge information and color pixels are normalized; artificial neural network computation, in which the image information and the edge information are fused and classified at different stages of the neural network model; semi-automatic iterative optimization of the model until an optimal network model is obtained; and applying the optimal model to real images to obtain the clothing type. By cropping the clothing region and using picture edge information as an aid to the color information, the method exploits the edge information at several stages of the artificial neural network model and continuously refines the model through a semi-automatic iteration process, thereby improving recognition accuracy.

Description

Image recognition method for clothing types
Technical Field
The invention relates to the technical field of image recognition, and in particular to an image recognition method for clothing types.
Background
With the development of artificial intelligence technology, shopping centers have begun their digital transformation. Shopping malls now hope to understand their customers better in order to provide more accurate services.
At present, image processing is performed on general data sets to infer some consumption trends, but there is no image recognition technique that computes visual features specifically for the clothing styles and types that people wear.
In addition, image recognition of clothing types suffers from problems such as low accuracy and slow model convergence.
Disclosure of Invention
The invention aims to provide an image recognition method for clothing types, addressing two problems in the prior art: shopping centers wish to better understand the clothing preferences of their customers, while existing image recognition of clothing types has low accuracy and slow model convergence.
To this end, the invention discloses the following image recognition method for clothing types.
The image recognition method for clothing types comprises the following steps:
Step S1: crop the clothing region from the picture;
Step S2: image preprocessing: detect the image edges, and normalize the image edge information and the image color pixels;
Step S3: artificial neural network computation: fuse and classify the image information and the edge information at different stages of the neural network model;
Step S4: perform semi-automatic iterative optimization on the artificial neural network model to obtain the optimal network model;
Step S5: apply the optimal neural network model to real images to obtain the clothing type.
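As an orientation aid, the five steps can be sketched as a minimal pipeline. Every function body below is an illustrative placeholder and not the patented implementation: the crop keeps a fixed fraction of the frame, a crude gradient stands in for Scharr edge detection, and the classifier returns a constant label.

```python
import numpy as np

def crop_clothing_region(image):
    # Step S1 placeholder: keep the lower 60% of the frame as the clothing area.
    h = image.shape[0]
    return image[int(h * 0.4):, :, :]

def preprocess(image):
    # Step S2 placeholder: a crude horizontal gradient stands in for Scharr edge
    # detection; edge map and color pixels are both normalized to [0, 1].
    gray = image.mean(axis=2)
    edges = np.abs(np.gradient(gray, axis=1))
    color = image.astype(np.float32) / 255.0
    edge = edges / (edges.max() + 1e-8)
    return color, edge

def classify(color, edge):
    # Steps S3-S5 placeholder: a trained fusion network would be applied here.
    return "t-shirt"

frame = np.random.randint(0, 256, (200, 100, 3), dtype=np.uint8)
color, edge = preprocess(crop_clothing_region(frame))
label = classify(color, edge)
```

Each placeholder corresponds to one stage described in the detailed steps of the method.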
The invention provides a complete solution for clothing type recognition. The clothing region is cropped efficiently using different strategies for different scenes; edge information extracted for the garment is added as an aid to the color information and is fully exploited at several stages of the artificial neural network model; and the model is continuously optimized through a semi-automatic iteration process, improving the accuracy of clothing style recognition.
The invention thus satisfies a shopping center's need to better understand its customers' clothing.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a flowchart of step S1;
FIG. 3 is a flowchart of step S2;
FIG. 4 is a flowchart of step S3;
FIG. 5 is a flowchart of step S4.
Detailed Description
The invention is further illustrated and described below with reference to specific embodiments and the accompanying drawings.
Referring to fig. 1, the image recognition method for clothing types disclosed by the invention comprises the following steps:
Step S1: crop the clothing region from the picture. The clothing region can be obtained in two ways: by computing the clothing position from the position of the face, or by running target detection on the image to locate the clothing.
The cropping flow is shown in fig. 2. When a user logs in, face recognition is performed; in this scenario the system knows the position of the user's face and computes the clothing position from it. When the pictures come from Internet-collected data or other scenes in which the system has no face position information, a target detection model trained for clothing is used to obtain the clothing region.
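The patent derives the clothing position from the face position but does not publish the geometry, so the following helper is a hedged sketch: the multipliers (torso spanning about 3 face widths and 4 face heights, starting just below the chin) are illustrative guesses, not values from the patent.

```python
def clothing_box_from_face(face_box, img_w, img_h):
    """Estimate a clothing bounding box from a detected face box (x, y, w, h).

    The geometry here is an assumption for illustration: the torso is taken to
    start just below the chin, span about 3 face widths, and extend about
    4 face heights downward.
    """
    x, y, w, h = face_box
    cx = x + w / 2            # horizontal center of the face
    cw = 3 * w                # assumed torso width
    top = y + h               # torso starts below the chin
    bottom = top + 4 * h      # assumed torso height
    left = max(0, cx - cw / 2)
    # Clamp to the image bounds so the resulting crop is always valid.
    right = min(img_w, cx + cw / 2)
    bottom = min(img_h, bottom)
    return int(left), int(top), int(right), int(bottom)
```

For example, a 40x40 face at (100, 50) in a 640x480 frame yields a clothing box of (60, 90, 180, 250).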
Step S2: image preprocessing: detect the image edges and normalize the image edge information and the image color pixels. Edge detection consists of extracting the edge information of the image with the Scharr operator and then normalizing the edge information and the color information separately.
Specifically, the method adds an edge detection step to image preprocessing: the edge map of the picture becomes a second input alongside the color picture, which helps the model extract the pattern-related features contained in a clothing image. The preprocessing flow is shown in fig. 3: the edge information of the image is extracted with the Scharr operator, the edge information and the color information are normalized separately, and the normalized images are fed into the clothing style recognition model. Edge information extracted with the Scharr operator works better than that from other edge operators such as Canny, giving the model higher accuracy and faster convergence.
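The Scharr step can be sketched as follows. The kernels are the standard Scharr coefficients; the hand-rolled convolution is only for self-containment, and the final division normalizes the edge magnitude to [0, 1] as the preprocessing step requires.

```python
import numpy as np

# Standard Scharr kernels (3/10/3 weights give better rotational symmetry
# than Sobel's 1/2/1 weights).
SCHARR_X = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], dtype=np.float32)
SCHARR_Y = SCHARR_X.T

def conv2d_valid(img, k):
    # Naive 'valid' 2D correlation, adequate for a small demo image.
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

def scharr_edges_normalized(gray):
    gx = conv2d_valid(gray, SCHARR_X)
    gy = conv2d_valid(gray, SCHARR_Y)
    mag = np.hypot(gx, gy)              # gradient magnitude
    return mag / (mag.max() + 1e-8)     # normalize the edge map to [0, 1]
```

In a production pipeline one would normally call OpenCV's `cv2.Scharr` for each axis instead of this hand-rolled convolution.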
Referring to fig. 4, step S3: artificial neural network computation: the image information and the edge information are fused and classified at different stages of the neural network model.
Step S3 includes:
Step S31: extract features from the color information of the image using a convolutional network;
Step S32: fuse the edge information and the image color information;
Step S33: extract bottom-layer features from the edge information and the image color information through a bottom-layer convolutional network, and fuse the bottom-layer features;
Step S34: extract features from the color features and the edge features through an inverted residual bottleneck convolution and a plain convolutional network respectively, obtaining high-level features;
Step S35: fuse the high-level features, perform final feature extraction through a top-layer convolution, apply global average pooling to reduce the dimensionality of the final features, and classify through a fully connected layer to obtain the clothing class.
The edge information is fused with the color information at three stages. At the input stage, the edge information is concatenated directly onto the color information to form 4-channel input data. Adding edge information at the input stage makes little difference to the model's performance on the training set, but brings higher accuracy on the validation set.
The color image and the edge information are each passed through bottom convolution layers so that the resulting feature maps have the same size; the bottom-layer features are then fused by addition, which works better than concatenation-based channel fusion.
After the bottom-layer features are added, the color features and the edge features are further processed by an inverted residual bottleneck convolution module and a plain convolution module respectively, yielding more abstract higher-layer features of the same size. The module used for the edge features deliberately differs from the inverted residual bottleneck module used for the color features: experiments show that edge information no longer needs a complex convolution module for feature extraction, and a relatively simple module is better at exerting the attention-like effect of the edge information.
The high-layer color and edge features undergo a final fusion by addition, then final feature extraction through a top convolution layer, dimensionality reduction through a global average pooling layer, and classification through a fully connected layer.
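The three fusion stages can be sketched in terms of tensor shapes. In this NumPy snippet every convolution block is replaced by a random 1x1 projection, which is an illustrative stub, not the patented network; the image size, channel counts, and the 10 clothing classes are likewise assumptions. The snippet demonstrates only the fusion topology: channel concatenation at the input, then two additive fusions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stub(x, out_ch):
    # Stand-in for a convolution block: a random 1x1 channel projection.
    w = rng.standard_normal((x.shape[0], out_ch))
    return np.einsum('chw,co->ohw', x, w)

color = rng.standard_normal((3, 32, 32))   # normalized RGB channels
edge = rng.standard_normal((1, 32, 32))    # normalized Scharr edge map

# Input-stage fusion: concatenate the edge map onto the color channels,
# giving 4-channel input data.
x = np.concatenate([color, edge], axis=0)

# Bottom-layer features from the fused input and from the edge map alone,
# projected to the same channel count so they can be fused by addition.
f_main = conv_stub(x, 16)
f_edge = conv_stub(edge, 16)
f_bottom = f_main + f_edge                 # additive fusion, not concatenation

# Higher-level features: a heavier block for the color path, a lighter one
# for the edge path (stand-ins for the inverted residual bottleneck module
# and the plain convolution module).
h_main = conv_stub(f_bottom, 32)
h_edge = conv_stub(f_edge, 32)
h = h_main + h_edge                        # second additive fusion

# Top convolution, global average pooling, and a fully connected classifier.
top = conv_stub(h, 64)
pooled = top.mean(axis=(1, 2))             # global average pooling -> (64,)
logits = pooled @ rng.standard_normal((64, 10))  # 10 hypothetical classes
pred = int(np.argmax(logits))
```

Additive fusion requires the two feature maps to share both spatial size and channel count, which is why both paths are projected to the same shape before each sum.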
Step S4: and performing semi-automatic iterative optimization on the artificial neural network model, and finally optimizing the network model.
Step S41: calculating a new image through the neural network model, obtaining the confidence coefficient of the clothing class, comparing the obtained confidence coefficient with a set threshold value, judging the reliability of the judgment result of the neural network model, and adding the new image into the data according to the corresponding class;
step S42: manually checking the calculated image data with high confidence according to proportion, and adjusting a set confidence threshold according to the checking result;
Step S43: when the confidence coefficient of the type of the newly input image calculated by the neural network model is smaller than a preset confidence coefficient threshold value, the judgment result of the neural network model is not trusted, the correct type corresponding to the image is manually determined, the manually determined image is stored as newly added data, the neural network model is updated, and iteration is continuously carried out until the accuracy of the type of clothing in the neural network model identification image is continuously improved.
The number and variety of pictures in the data set largely determine the accuracy and generalization ability of the artificial neural network model, so the model is continuously optimized with a semi-automatic iteration process, shown in fig. 5. A new image is fed into the model, which computes the confidence of the clothing class. When the confidence exceeds the set threshold, the model's judgment is considered reliable and the image is added to the data of the corresponding category. To guarantee the quality of the newly added data, high-confidence errors must be kept to a minimum; therefore a proportion of the high-confidence images are spot-checked to ensure that the rate of high-confidence misclassification stays within an acceptable range, and the confidence threshold is raised otherwise. When the confidence given by the model falls below the preset threshold, the model's judgment is not trusted; the correct category is determined by manual verification, and the manually verified image is used as new data to update the model. Through continuous iteration, the accuracy and generalization ability of the clothing style recognition model steadily improve.
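The routing logic of this semi-automatic loop can be sketched in a few lines. The threshold of 0.9, the 5% acceptable error rate, and the 0.01 adjustment step are illustrative starting values, not numbers from the patent.

```python
def route_prediction(label, confidence, threshold=0.9):
    """Route one model prediction in the semi-automatic data loop.

    High-confidence predictions are auto-added to the dataset (a fraction is
    later spot-checked by hand); low-confidence ones go to manual labeling.
    """
    if confidence >= threshold:
        return ("auto_accept", label)    # added to the dataset under `label`
    return ("manual_review", label)      # a human assigns the correct class

def adjust_threshold(threshold, spot_check_error_rate,
                     max_error=0.05, step=0.01):
    # If spot checks find too many wrong high-confidence predictions,
    # raise the bar for automatic acceptance.
    if spot_check_error_rate > max_error:
        return min(0.99, threshold + step)
    return threshold
```

Each update of the model then retrains on the enlarged dataset, closing the iteration described above.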
Finally, it should be noted that the above embodiments only illustrate the technical solution of the invention and do not limit its scope. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope.

Claims (2)

1. An image recognition method for a garment type, comprising the steps of:
Step S1: cropping the clothing region of the image, specifically: computing clothing position information from the face position information obtained by face recognition, or obtaining the clothing region using a target detection model for clothing;
Step S2: image preprocessing: detecting the edges of the image and normalizing the image edge information and the image color pixels, wherein the edge detection comprises: extracting the edge information of the image with the Scharr operator, and normalizing the edge information and the color pixels of the image;
Step S3: artificial neural network computation: fusing and classifying the image color pixels and the edge information at different stages of the neural network model, wherein step S3 comprises:
Step S31: after normalizing the image color pixels, fusing the image edge information and the image color pixels by concatenation;
Step S32: extracting a first bottom-layer feature from the fused image edge information and image color pixels through a bottom-layer convolutional network, separately extracting a second bottom-layer feature from the image edge information through a bottom-layer convolutional network, and adding the first and second bottom-layer features to obtain a third bottom-layer feature;
Step S33: extracting features from the third bottom-layer feature through an inverted residual bottleneck convolution module to obtain a first high-level feature, and extracting features from the second bottom-layer feature through a convolution module to obtain a second high-level feature;
Step S34: adding the first and second high-level features, extracting final features from the summed high-level features through a top convolution layer, applying global average pooling to reduce the dimensionality of the final features, and classifying through a fully connected layer to obtain the clothing type;
Step S4: performing semi-automatic iterative optimization on the artificial neural network model to finally obtain an optimal neural network model;
Step S5: applying the optimal neural network model to real images to obtain the clothing type.
2. The image recognition method for a clothing type according to claim 1, wherein step S4 comprises:
Step S41: running a newly recorded image through the neural network model to obtain the confidence of the clothing type, comparing the confidence with a set threshold to judge the reliability of the model's result, and adding the newly recorded image to the data under the corresponding category;
Step S42: manually spot-checking a proportion of the high-confidence image data, and adjusting the set confidence threshold according to the check results;
Step S43: when the confidence of a newly input image's type, as computed by the neural network model, is below the preset confidence threshold, regarding the model's judgment as unreliable, manually determining the correct type of the image, storing the manually labeled image as newly added data, updating the neural network model, and iterating continuously so that the accuracy with which the model identifies clothing types keeps improving.
CN202010971943.4A 2020-09-16 2020-09-16 Image recognition method for clothing types Active CN112116000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010971943.4A CN112116000B (en) 2020-09-16 2020-09-16 Image recognition method for clothing types


Publications (2)

Publication Number Publication Date
CN112116000A CN112116000A (en) 2020-12-22
CN112116000B (en) 2024-07-16

Family

ID=73803489


Country Status (1)

Country Link
CN (1) CN112116000B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521565A (en) * 2011-11-23 2012-06-27 浙江晨鹰科技有限公司 Garment identification method and system for low-resolution video
CN107220949A (en) * 2017-05-27 2017-09-29 安徽大学 The self adaptive elimination method of moving vehicle shade in highway monitoring video

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8787663B2 (en) * 2010-03-01 2014-07-22 Primesense Ltd. Tracking body parts by combined color image and depth processing
JP5833689B2 (en) * 2014-02-07 2015-12-16 伊藤 庸一郎 Authentication system and authentication method
CN107330451B (en) * 2017-06-16 2020-06-26 西交利物浦大学 Clothing attribute retrieval method based on deep convolutional neural network
CN108764062B (en) * 2018-05-07 2022-02-25 西安工程大学 Visual sense-based clothing piece identification method
CN110414411B (en) * 2019-07-24 2021-06-08 中国人民解放军战略支援部队航天工程大学 Sea surface ship candidate area detection method based on visual saliency
CN110825899B (en) * 2019-09-18 2023-06-20 武汉纺织大学 Clothing image retrieval method integrating color features and residual network depth features
CN110674884A (en) * 2019-09-30 2020-01-10 山东浪潮人工智能研究院有限公司 Image identification method based on feature fusion
CN110880165A (en) * 2019-10-15 2020-03-13 杭州电子科技大学 Image defogging method based on contour and color feature fusion coding
CN111199248A (en) * 2019-12-26 2020-05-26 东北林业大学 Clothing attribute detection method based on deep learning target detection algorithm




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant