CN110070147A - Neural-network-based garment pattern-texture recognition method and system - Google Patents
Neural-network-based garment pattern-texture recognition method and system
- Publication number
- CN110070147A CN110070147A CN201910375032.2A CN201910375032A CN110070147A CN 110070147 A CN110070147 A CN 110070147A CN 201910375032 A CN201910375032 A CN 201910375032A CN 110070147 A CN110070147 A CN 110070147A
- Authority
- CN
- China
- Prior art keywords
- texture
- feature
- network
- clothes
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/49—Analysis of texture based on structural texture description, e.g. using primitives or placement rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The present invention relates to a neural-network-based garment pattern-texture recognition method and system, comprising an image feature extraction network, a garment position feature extraction network, a texture feature extraction module, a garment position regression module and a pattern-texture prediction module. Its advantages are: the specific location of the garment in an image is determined, and feeding that region into the neural network as an ROI largely excludes background interference and improves recognition accuracy; the system can analyze the fashion elements of the current fashion garment sector and thus the prevailing fashion trend, and can also provide design inspiration for fashion designers, helping them design garment products that appeal to consumer psychology and improving consumer recognition and satisfaction; the system can annotate 700-800 image frames per minute, a great improvement in efficiency; and its recognition accuracy is about 20 percentage points higher than that of a general-purpose recognition network.
Description
Technical field
The present invention relates to the field of fashion garments, in particular to the automatic recognition and analysis of the pattern-texture attributes of garments against complex backgrounds; specifically, it is a neural-network-based garment pattern-texture recognition method and system.
Background technique
Fashion garments have many attributes, and pattern texture is among the most important of them. Garment pattern-texture recognition is the technique of automatically analyzing this attribute by computer and producing an accurate recognition result. In the fashion garment field, designers and analysts find it difficult to judge the overall fashion trend or the prevailing fashion elements quickly from a small number of garments; an accurate judgement requires analyzing a massive number of garments of different styles, which is prohibitively expensive in manpower and time. Only by accurately analyzing the current trend and its fashion elements can fashion designers be helped to design garments that fit the times and raise users' purchase intention and satisfaction.
At present, however, most recognition work in the fashion garment field addresses garment style, and even there only a few styles are covered; recognition of garment pattern-texture attributes is almost non-existent. There is therefore a pressing need for a mature method and system that can automatically analyze the pattern-texture attributes of garments by computer, as a foundational component of garment analysis and design systems.
The existing technical difficulties are: 1. for a current general-purpose recognition system, how to adapt it to the field of garment pattern-texture recognition; 2. how to improve the recognition accuracy of the pattern texture when the garment is occluded or viewed at an angle.
Chinese patent document CN201810542313.8 (filed 2018.05.30) discloses a garment recognition system with high recognition accuracy, comprising an image acquisition module, a feature extraction module and a feature matching module. The image acquisition module collects non-flat and flat garment images, the feature extraction module extracts the features of both, and the feature matching module matches the non-flat garment image against the flat garment image features, completing recognition of non-flat garments.
Chinese patent document CN201010228931.9 (filed 2010.07.09) discloses a method for recognizing a pattern in an image. The method comprises a memory for storing region-specific reference values, each computed from the image information of an image region containing part of the pattern to be recognized, and a processor configured to divide a received image into regions, compute a reference value for each region, compare the computed values with the reference values stored in memory, and, if a part constituted by adjacent regions is found in the received image, indicate that the pattern has been recognized, wherein the region reference values correspond with sufficient precision to the values stored in memory.
Of the above documents, CN201810542313.8 introduces a flat image of the same garment and matches it against the garment image in a non-flat state, thereby converting recognition of non-flat garment images into an image-matching problem. CN201010228931.9 makes pattern recognition simpler and faster than before, so that the computing capacity of the device need not be as large as in prior solutions. However, no report has yet appeared of an automatic recognition method and system for garment pattern texture under complex scenes such as multiple backgrounds and multiple viewing angles, in which multi-dimensional multimedia data containing garments is input to a dedicated recognition model and the local and global features extracted from the garment data are analyzed.
Summary of the invention
The purpose of the present invention is to address the shortcomings of the prior art by providing an automatic recognition method and system for garment pattern texture under complex scenes such as multiple backgrounds and multiple viewing angles, in which multi-dimensional multimedia data containing garments is input to a dedicated recognition model and the local and global features extracted from the garment data are analyzed.
To achieve the above object, the technical solution adopted by the present invention is that:
A neural-network-based garment pattern-texture recognition method and system, characterized in that the realization process mainly comprises the following steps:
S1, construction of the pattern-texture recognition network model;
S2, design of the back-propagation loss function;
S3, model training strategy.
As a preferred technical scheme, step S1 specifically comprises the following steps:
S11, the pattern-texture recognition network model mainly comprises an image feature extraction network, a garment position feature extraction network, a texture feature extraction module, a garment position regression module and a pattern-texture prediction module. The image feature extraction network performs image feature extraction on the input multi-dimensional multimedia data, mainly image data, through batch normalization, pooling and convolution, obtaining richer feature detail; its convolutional layers use a combination of dilated convolution and regular convolution to obtain the high-level semantic features of the image. The garment position feature extraction network further processes the high-level semantic features obtained by the image feature extraction network to obtain the high-level semantic features of the garment position in the image. The garment position regression module performs regression on the high-level semantic features obtained by the garment position feature extraction network to obtain the specific location of the garment in the image. The texture feature extraction module performs further feature extraction on the high-level semantic features extracted by the image feature extraction network to obtain the high-level semantic features of the texture. The pattern-texture prediction module produces the prediction output from the texture high-level semantic features obtained by the texture feature extraction module;
S12, the image feature extraction network is composed of batch normalization layers, dilated convolutional layers, regular convolutional layers and activation layers, the dilated convolutional layer following the batch normalization layer, the regular convolutional layer following the dilated convolutional layer, and the activation layer following the regular convolutional layer. One batch normalization layer, dilated convolutional layer, regular convolutional layer and activation layer together form a sub-extraction module. A max-pooling layer is inserted between sub-extraction modules to perform efficient dimensionality reduction on the information and speed up computation;
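By way of illustration only (the patent gives no layer code), the two operations named in S12 can be sketched in a few lines of NumPy; the 1-D setting, kernel and input values below are toy choices, not the patent's configuration:

```python
import numpy as np

def conv1d(x, k, dilation=1):
    """Valid 1-D convolution; dilation > 1 inserts gaps between kernel taps,
    enlarging the receptive field without adding weights."""
    span = (len(k) - 1) * dilation
    return np.array([sum(k[j] * x[i + j * dilation] for j in range(len(k)))
                     for i in range(len(x) - span)])

def max_pool1d(x, size=2):
    """Non-overlapping max pooling: the dimensionality reduction between
    sub-extraction modules (halves the length when size=2)."""
    return np.array([x[i:i + size].max()
                     for i in range(0, len(x) - size + 1, size)])

x = np.arange(8, dtype=float)          # toy feature row
k = np.array([1.0, 1.0, 1.0])
regular = conv1d(x, k, dilation=1)     # receptive field of 3
dilated = conv1d(x, k, dilation=2)     # receptive field of 5, same 3 weights
pooled = max_pool1d(regular)           # efficient dimensionality reduction
```

The dilated pass covers a wider input span per output with the same parameter count, which is why the patent pairs dilated and regular convolution to enrich feature detail.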
S13, the image feature extraction network applies a cross-layer shortcut connection after every three sub-extraction modules and superimposes the feature outputs, retaining more detailed information and improving recognition precision; after the superposition, a dropout-plus-scaling layer prevents the model from overfitting when the network is very deep and accelerates the convergence of gradient descent during optimization;
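A minimal NumPy sketch of the shortcut-plus-dropout pattern described in S13, under the assumption (not stated in the patent) that "superposition" adds the block input back to the block output and that the scaling layer multiplies by a small factor:

```python
import numpy as np

rng = np.random.default_rng(0)

def sub_block(x, w):
    """Stand-in for one sub-extraction module (BN + convs + activation)."""
    return np.maximum(w * x, 0.0)           # toy ReLU block

def shortcut_stack(x, weights, drop_rate=0.5, scale=0.05, train=False):
    """Every three sub-blocks, superimpose the stack input onto the output
    (cross-layer shortcut), then apply dropout + scaling."""
    y = x
    for w in weights:                        # three sub-extraction modules
        y = sub_block(y, w)
    y = y + x                                # superposition, not replacement
    if train:                                # dropout active only in training
        mask = rng.random(y.shape) >= drop_rate
        y = y * mask / (1.0 - drop_rate)
    return y * (1.0 + scale)                 # illustrative scaling layer

x = np.array([1.0, 2.0, 3.0])
out = shortcut_stack(x, weights=[1.0, 1.0, 1.0], scale=0.05)
```

With `train=False` the dropout branch is skipped, matching the usual inference-time behaviour; the 0.5 and 0.05 values sit inside the 0.3-0.7 and 0.01-0.08 intervals the patent recommends.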
S14, the garment position feature extraction network first performs feature extraction on the high-level semantic features output by the image feature extraction network, obtaining the high-level semantic features of the garment position, and outputs these features for use by the texture feature extraction module and the garment position regression module;
S15, the garment position regression module performs regression on the garment-position high-level semantic features output by the garment position feature extraction network and produces the specific location of the garment; this location information is returned to the image feature extraction network so that the garment region in the image can be located precisely, largely excluding the interference of a complex background, while also reducing the computation of the texture feature extraction module and calibrating the pattern-texture prediction output, thereby improving recognition precision;
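The ROI step above amounts to cropping the predicted garment box out of the frame before further analysis; a sketch with an assumed `(x0, y0, x1, y1)` box convention (the patent does not fix one):

```python
import numpy as np

def crop_roi(image, box):
    """Crop the predicted garment region (x0, y0, x1, y1) from an H x W x 3
    image so that only garment content, not background, is analysed further."""
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

img = np.zeros((64, 64, 3), dtype=np.uint8)
img[16:48, 8:40] = 255                     # pretend the garment fills this area
roi = crop_roi(img, (8, 16, 40, 48))       # box as the regressor would predict it
```

Everything outside the box, i.e. the complex background, never reaches the texture feature extraction module, which is the accuracy and computation benefit S15 claims.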
S16, the texture feature extraction module is composed of a global texture feature extraction network, a local texture feature extraction network and a texture feature fusion module;
S17, the global texture feature extraction network of S16 receives garment position feature information as auxiliary input and extracts global texture features from the high-level semantic features output by the image feature extraction network. The global texture feature extraction network has two inputs, one the garment position features and the other the image features; a cross-layer shortcut connection is applied after every two sub-extraction blocks, and after the superposition there is only a scaling layer, with no dropout layer;
S18, the local texture feature extraction network of S16 extracts local-detail texture features from the high-level semantic features output by the image feature extraction network; its only input is the image features, a cross-layer shortcut connection is applied after every two sub-extraction blocks, and after the superposition there is only a scaling layer, with no dropout layer;
S19, the texture feature fusion module fuses and arranges the outputs of the global texture feature extraction network and the local texture feature extraction network, using the local texture feature extraction results to enrich the global texture features so that the global texture features have higher confidence; this effectively resolves the missed and erroneous recognition of pattern-texture attributes caused by occlusion of the garment in the image or by viewing angle;
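The patent does not specify the fusion operator, so the following is only an illustrative sketch of two common choices, concatenation and a weighted blend, either of which lets local detail reinforce the global texture estimate as S19 describes:

```python
import numpy as np

def fuse_textures(global_feat, local_feat, alpha=0.7):
    """Hypothetical fusion: concatenate local features onto the global ones,
    and also form a weighted blend (alpha is an assumed mixing weight)."""
    concat = np.concatenate([global_feat, local_feat])
    blended = alpha * global_feat + (1.0 - alpha) * local_feat
    return concat, blended

g = np.array([0.2, 0.8])                   # toy global texture descriptor
l = np.array([0.4, 0.6])                   # toy local texture descriptor
concat, blended = fuse_textures(g, l)
```

When occlusion weakens the global descriptor, the local term pulls the blended estimate toward detail that is still visible, which is the confidence gain the fusion module is credited with.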
S110, the pattern-texture prediction module performs classification on the texture features output by the texture feature fusion module; this module is implemented as a fully convolutional network, with global pooling applied to obtain the classification result.
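The global-pooling classification head of S110 can be sketched as follows, under the common assumption (one response map per texture class) that the fully convolutional network outputs a C x H x W tensor:

```python
import numpy as np

def predict(feature_maps):
    """Global-average-pool a C x H x W feature map, one map per texture class
    as in a fully convolutional head, and return scores plus the argmax."""
    scores = feature_maps.mean(axis=(1, 2))      # global pooling
    return scores, int(scores.argmax())

fmap = np.zeros((3, 4, 4))                       # 3 texture classes, 4x4 maps
fmap[1] = 2.0                                    # class 1 responds strongest
scores, label = predict(fmap)
```

Because pooling is global, the head accepts any spatial size the ROI crop produces, which is one practical reason to end a fully convolutional network this way.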
As a preferred technical scheme, step S2 specifically comprises the following steps:
The loss functions comprise a regression loss function for the garment position and a cross-entropy loss function for pattern-texture attribute classification;
S21, the garment position regression loss L_loc is defined as follows:

L_loc = (1/m) · Σ_{i=1}^{m} || loc*_i − loc_i ||^2

where m is the sample size of the training set, loc*_i is the ground-truth position vector in the i-th image, and loc_i is the garment position vector predicted in the i-th image;
S22, the pattern-texture attribute classification loss L_attr is defined as follows:

L_attr = −(1/m) · Σ_{i=1}^{m} c_i · log f(x_i)

where m is the number of training-set samples, x_i denotes the i-th garment image, f(x_i) the attribute probability vector predicted for it, and c_i the pattern-texture attribute label vector of the i-th garment image;
S23, from S21 and S22, the total loss L_total is defined as follows:

L_total = w1·L_loc + w2·L_attr

where w1 is the control parameter for L_loc during training and w2 the control parameter for L_attr.
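The weighted total loss can be computed as below; the squared-error form of L_loc and the probability output `probs` are assumptions (the patent names the losses but the formula images are not reproduced here):

```python
import numpy as np

def loc_loss(true_boxes, pred_boxes):
    """Squared-error position regression loss (assumed form of L_loc)."""
    d = np.asarray(true_boxes) - np.asarray(pred_boxes)
    return float((d ** 2).sum(axis=1).mean())

def attr_loss(labels, probs, eps=1e-12):
    """Cross-entropy over pattern-texture attribute label vectors (L_attr)."""
    labels, probs = np.asarray(labels), np.asarray(probs)
    return float(-(labels * np.log(probs + eps)).sum(axis=1).mean())

def total_loss(lloc, lattr, w1=1.0, w2=1.0):
    """L_total = w1 * L_loc + w2 * L_attr, as defined in S23."""
    return w1 * lloc + w2 * lattr

lloc = loc_loss([[0, 0, 2, 2]], [[0, 0, 2, 1]])   # one box, one wrong edge
lattr = attr_loss([[1, 0]], [[1.0, 0.0]])         # perfect attribute prediction
```

Setting w1 or w2 to zero silences one branch entirely, which is exactly the mechanism the alternating training strategy of S3 relies on.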
As a preferred technical scheme, step S3 specifically comprises the following steps:
A joint alternating training strategy is adopted, learning shared features through alternating optimization; the main process is as follows:
S31, train on the garment position loss: the w2 parameter in the total loss is used to ignore the influence of L_attr, yielding the garment location information;
S32, use the garment region generated in S31 to train an isolated pattern-texture attribute recognition network; the garment position training network and the pattern-texture attribute recognition network do not share convolutional layers;
S33, use the pattern-texture attribute recognition network obtained in S32 to initialize the garment position regression network for training, keeping the shared convolutional layers fixed and fine-tuning only the layers specific to the garment position regression network;
S34, keeping the shared convolutional layers fixed, fine-tune the layers specific to the pattern-texture attribute recognition network.
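The four alternating stages can be written down as a schedule; the weight values and stage descriptions below are a hypothetical encoding (the patent states which loss each stage uses, not the exact weights):

```python
# Hypothetical encoding of the four-step alternating schedule: each stage
# records the loss weights (w1 for L_loc, w2 for L_attr) and what is trained.
STAGES = [
    {"name": "S31", "w1": 1.0, "w2": 0.0, "train": "position network"},
    {"name": "S32", "w1": 0.0, "w2": 1.0, "train": "attribute network"},
    {"name": "S33", "w1": 1.0, "w2": 0.0, "train": "position-specific layers"},
    {"name": "S34", "w1": 0.0, "w2": 1.0, "train": "attribute-specific layers"},
]

def active_loss(stage, lloc, lattr):
    """Total loss seen at a given stage; the zero weight 'ignores' the other
    branch exactly as S31 describes for L_attr."""
    return stage["w1"] * lloc + stage["w2"] * lattr

losses = [active_loss(s, lloc=2.0, lattr=5.0) for s in STAGES]
```

Each stage optimizes one branch while the other's weight is zero, so the shared features are learned alternately rather than end to end, which is the running-time advantage claimed for this strategy.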
As a preferred technical scheme, the number of sub-extraction modules in step S12 can be increased or decreased according to the hardware requirements and the prediction results.
As a preferred technical scheme, in the implementation details of the image feature extraction network in step S13, the dropout ratio is set in the interval 0.3-0.7 and the scaling ratio in the interval 0.01-0.08.
As a preferred technical scheme, in step S16 the texture feature extraction module adopts a strategy of joint global and local texture extraction, using local texture to assist the extraction of global texture and fusing the two so that the resulting global texture has higher confidence, effectively resolving the missed and erroneous recognition of pattern-texture attributes caused by occlusion of the garment in the image or by viewing angle.
As a preferred technical scheme, in step S3 the four-step training strategy of S31, S32, S33 and S34 is adopted to increase training speed.
The invention has the following advantages:
1. The specific location of the garment in the image is determined, and feeding that region into the neural network as an ROI largely excludes background interference and improves recognition accuracy.
2. The method can be applied to fashion garment design and analysis systems: the system can analyze the fashion elements of the current fashion garment sector and thus the prevailing fashion trend, and can at the same time provide design inspiration for fashion designers, helping them design garment products that appeal to consumer psychology and improving consumer recognition and satisfaction.
3. The system can also serve as a general data-annotation method for the fashion garment field. Traditional annotation in the fashion world relies on manual labeling by people with professional background knowledge and is time-consuming and laborious; this method can annotate 700-800 image frames per minute, greatly improving efficiency.
4. By designing a dedicated neural network for recognizing garment pattern-texture attributes, the method achieves an accuracy about 20 percentage points higher than that obtained by applying a general-purpose recognition network to this field.
Description of the drawings
Figure 1 is the specific implementation flow chart of the neural-network-based garment pattern-texture recognition method and system of the present invention.
Figure 2 is the general flow chart of the texture feature extraction module of the method and system.
Figure 3 is the flow chart of the image feature extraction network of the method and system.
Figure 4 is the implementation-detail flow chart of the image feature extraction network of the method and system.
Specific embodiment
The specific embodiments provided by the invention are elaborated below with reference to the accompanying drawings.
Embodiment 1
To achieve automatic recognition of garment pattern texture under complex scenes such as multiple backgrounds and multiple viewing angles, the present invention designs, on the basis of conventional general-purpose deep neural network models, a dedicated deep neural network to complete the recognition task. Multi-dimensional multimedia data containing garments, such as pictures and videos, is input to the dedicated recognition model; the model extracts the local and global features of the garment data and produces the recognition result by analyzing these features. The specific process is as follows:
Step s1: obtain a large amount of multimedia data of the fashion garment class, including video and images, as the system input. To ensure that the background does not interfere with recognition of the actual garment pattern texture in a frame, the specific location of the garment in the image is determined and fed to the model as an ROI region for training;
Step s2: pre-process the data, extracting three-dimensional feature data in the RGB color space as the actual input of the neural network model;
Step s3: feed the three-dimensional feature data into the dedicated deep neural network model for forward-propagation computation;
Step s4: design the loss function according to the actual business scenario and the network structure, perform back-propagation computation, and obtain the weight parameters of the network model by optimizing the loss function;
Step s5: using the network weight parameters obtained in step s4, input data that did not participate in training and compute the garment pattern-texture recognition result by forward propagation through the deep neural network model.
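Step s2 is a standard preprocessing step; a sketch under an assumed normalization scheme (the patent specifies RGB three-dimensional feature data but not the scaling or layout):

```python
import numpy as np

def preprocess(frame):
    """Convert an H x W x 3 uint8 RGB frame to a float32 C x H x W tensor
    scaled to [0, 1], a usual input layout for a convolutional network."""
    x = frame.astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))        # HWC -> CHW

frame = np.full((32, 32, 3), 255, dtype=np.uint8)   # toy all-white frame
tensor = preprocess(frame)
```

The same function applies unchanged to frames decoded from video, so both image and video inputs from step s1 reach the network in one format.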
Please refer to Figure 1, the specific implementation flow chart of the neural-network-based garment pattern-texture recognition method and system of the present invention. The method and system comprise an image feature extraction network, a garment position feature extraction network, a texture feature extraction module, a garment position regression module and a pattern-texture prediction module.
Please refer to Figure 2, the general flow chart of the texture feature extraction module. The texture feature extraction module comprises a global texture feature extraction network, a local texture feature extraction network and a texture feature fusion module.
Please refer to Figure 3, the flow chart of the image feature extraction network. The image feature extraction network comprises batch normalization layers, dilated convolutional layers, regular convolutional layers, activation layers and max-pooling layers.
Please refer to Figure 4, the implementation-detail flow chart of the image feature extraction network, including three sub-extraction blocks, superposition, and a dropout-plus-scaling layer.
It should be understood that the embodiments described herein are only some of the embodiments of the method of the present invention. The realization process of the neural-network-based garment pattern-texture recognition method and system provided by the present invention mainly comprises the following steps:
S1, construction of the pattern-texture recognition network model:
S11, the pattern-texture recognition network model mainly comprises an image feature extraction network, a garment position feature extraction network, a texture feature extraction module, a garment position regression module and a pattern-texture prediction module. The image feature extraction network performs image feature extraction on the input multi-dimensional multimedia data, mainly image data, through batch normalization, pooling and convolution; to obtain richer feature detail, the convolutional layers here use a combination of dilated convolution and regular convolution, which yields the high-level semantic features of the image. The garment position feature extraction network further processes the high-level semantic features obtained by the image feature extraction network to obtain the high-level semantic features of the garment position in the image. The garment position regression module performs regression on the high-level semantic features obtained by the garment position feature extraction network to obtain the specific location of the garment in the image. The texture feature extraction module performs further feature extraction on the high-level semantic features extracted by the image feature extraction network to obtain the high-level semantic features of the texture. The pattern-texture prediction module produces the prediction output from the texture high-level semantic features obtained by the texture feature extraction module;
S12, the image feature extraction network is composed of batch normalization layers, dilated convolutional layers, regular convolutional layers and activation layers, the dilated convolutional layer following the batch normalization layer, the regular convolutional layer following the dilated convolutional layer, and the activation layer following the regular convolutional layer. One batch normalization layer, dilated convolutional layer, regular convolutional layer and activation layer together form a sub-extraction module; the number of sub-extraction modules in the image feature extraction network can be increased or decreased according to one's hardware requirements and the prediction results. A max-pooling layer is inserted between sub-extraction modules to perform efficient dimensionality reduction on the information and speed up computation;
S13, to prevent the back-propagated gradient from vanishing when the network is too deep, the image feature extraction network applies a cross-layer shortcut connection after every three sub-extraction blocks; the feature outputs are superimposed rather than simply added, the reason being to retain more detailed information and improve recognition precision. After the superposition, a dropout-plus-scaling layer prevents the model from overfitting in a too-deep network; the dropout ratio can generally be set in the interval 0.3-0.7 and the scaling ratio in the interval 0.01-0.08;
S14, the garment position feature extraction network first performs feature extraction on the high-level semantic features output by the image feature extraction network, obtaining the high-level semantic features of the garment position, and outputs these features for use by the texture feature extraction module and the garment position regression module;
S15, the garment position regression module performs regression on the garment-position high-level semantic features output by the garment position feature extraction network and produces the specific location of the garment, and this location information is returned to the image feature extraction network; the garment region in the image can thus be located precisely, largely excluding the interference of a complex background, while the computation of the texture feature extraction module is reduced and the pattern-texture prediction output is calibrated, improving recognition precision;
S16, the texture feature extraction module is composed of a global texture feature extraction network, a local texture feature extraction network and a texture feature fusion module;
S17, the global texture feature extraction network of S16 receives garment position feature information as auxiliary input and extracts global texture features from the high-level semantic features output by the image feature extraction network. Its structure is similar to that of the image feature extraction network, but differs in that it has two inputs, one the garment position features and the other the image features; a cross-layer shortcut connection is applied after every two sub-extraction blocks; and after the superposition there is only a scaling layer, with no dropout layer;
S18, the local texture feature extraction network of S16 extracts local-detail texture features from the high-level semantic features output by the image feature extraction network. Its structure is similar to that of the global texture feature extraction network, the difference being that it has only one input, with no garment position feature information as auxiliary input;
S19, the texture feature fusion module fuses and arranges the outputs of the global texture feature extraction network and the local texture feature extraction network, using the local texture feature extraction results to enrich the global texture features so that the global texture features have higher confidence; this effectively resolves the missed and erroneous recognition of pattern-texture attributes caused by occlusion of the garment in the image or by viewing angle;
S110, the pattern-texture prediction module performs classification on the texture features output by the texture feature fusion module; this module is implemented as a fully convolutional network, with global pooling applied to obtain the classification result;
S2, design of the back-propagation loss function:
To obtain an optimized result, a reasonable loss function must be designed for back-propagation. The loss functions in the present invention comprise a regression loss function for the garment position and a cross-entropy loss function for pattern-texture attribute classification;

S21, the garment position regression loss L_loc is defined as follows:

L_loc = (1/m) · Σ_{i=1}^{m} || loc*_i − loc_i ||^2

where m is the sample size of the training set, loc*_i is the ground-truth position vector in the i-th image, and loc_i is the garment position vector predicted in the i-th image;
S22, the pattern-texture attribute classification loss L_attr is defined as follows:

L_attr = −(1/m) · Σ_{i=1}^{m} c_i · log f(x_i)

where m is the number of training-set samples, x_i denotes the i-th garment image, f(x_i) the attribute probability vector predicted for it, and c_i the pattern-texture attribute label vector of the i-th garment image;
S23, from S21 and S22, the total loss L_total is defined as follows:

L_total = w1·L_loc + w2·L_attr

where w1 is the control parameter for L_loc during training and w2 the control parameter for L_attr;
S3, model training strategy:
Since the garment pattern-texture recognition network is an end-to-end network, an end-to-end training strategy could be adopted, but end-to-end training has the problem of long running time, which greatly burdens the time cost of research, development and production. The method of the present invention therefore proposes a joint alternating training strategy, learning shared features through alternating optimization; the main process is as follows:
S31: train with the garment-position loss; the influence of L_attr is removed by setting the w2 parameter in the total loss
to zero, yielding the garment position information;
S32: use the garment region generated in S31 to train an isolated pattern-texture attribute recognition network; at this
stage the garment-position training network and the pattern-texture attribute recognition network do not share convolutional layers;
S33: initialize the garment-position regression network with the pattern-texture attribute recognition network obtained in S32
and train it, keeping the shared convolutional layers fixed and fine-tuning only the layers specific to the garment-position regression network;
S34: keeping the shared convolutional layers fixed, fine-tune the layers specific to the pattern-texture attribute recognition network.
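The four-step alternating schedule of S31-S34 can be sketched as a driver loop. The callables and their parameter names (`w2`, `share_conv`, `freeze_shared`) are illustrative placeholders for framework-specific training routines, not names from the patent; the sketch only fixes the order of the steps and the parameter choices the description prescribes:

```python
def alternating_training(steps):
    """Run the joint alternating training schedule of S31-S34.

    `steps` maps step names to caller-supplied training callables, so
    the schedule can be shown without committing to a specific
    deep-learning framework.
    """
    log = []
    # S31: train the garment-position loss only (w2 = 0 drops L_attr).
    log.append(steps["train_position"](w2=0.0))
    # S32: train an isolated attribute network on the S31 garment
    # regions; no convolutional layers are shared at this stage.
    log.append(steps["train_attributes"](share_conv=False))
    # S33: re-initialize the position regression network from the S32
    # network, freeze the shared convolutional layers, and fine-tune
    # only the position-specific layers.
    log.append(steps["finetune_position"](freeze_shared=True))
    # S34: keep the shared layers frozen and fine-tune only the
    # attribute-specific layers.
    log.append(steps["finetune_attributes"](freeze_shared=True))
    return log
```

This mirrors the alternating optimization used to learn shared features: each network takes a turn while the parts it shares with the other are held fixed.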
The present invention determines the specific location of the garment in the image, and by feeding the garment region into the
neural network as the ROI, background interference is largely excluded and recognition accuracy is improved; the method can be
applied to fashion garment design and analysis systems: the system can analyze the popular elements of the current fashion
garment sector to identify current fashion trends, and may also provide design inspiration for fashion designers, helping them
design garments that appeal to consumer psychology and improving consumer recognition and satisfaction; the system can also
serve as a general method for data annotation in the fashion garment field, where annotation traditionally relies on manual
labeling by personnel with professional backgrounds and is time-consuming and laborious; this method can annotate 700-800 image
frames per minute, greatly improving efficiency; with the specially designed garment pattern texture attribute recognition
neural network, recognition accuracy is improved by about 20 percentage points over general-purpose recognition networks
applied to this field.
The above is only a preferred embodiment of the present invention; it should be noted that those of ordinary skill in the art
may make further improvements and supplements without departing from the method of the present invention, and such improvements
and supplements shall also fall within the protection scope of the present invention.
Claims (8)
1. A neural-network-based garment pattern texture recognition method and system, characterized in that the implementation
process mainly comprises the following steps:
S1: construction of the pattern texture recognition network model;
S2: loss function design for back-propagation;
S3: model training strategy.
2. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that step S1 specifically comprises the following steps:
S11: the pattern texture recognition network model mainly comprises an image feature extraction network, a garment-position
feature extraction network, a texture feature extraction module, a garment-position regression module, and a pattern texture
prediction module; the image feature extraction network performs image feature extraction on the input multidimensional
multimedia data, mainly image data, through batch normalization, pooling, and convolution, obtaining richer feature details;
the convolutional layers use a combination of dilated convolution and regular convolution to obtain the high-level semantic
features of the image; the garment-position feature extraction network further processes the high-level semantic features
from the image feature extraction network, obtaining the high-level semantic features of the garment position in the image;
the garment-position regression module performs regression on the high-level semantic features obtained by the garment-position
feature extraction network, obtaining the specific location of the garment in the image; the texture feature extraction module
performs further feature extraction on the high-level semantic features extracted by the image feature extraction network,
obtaining the high-level semantic features of the texture; the pattern texture prediction module produces the prediction
output from the texture high-level semantic features obtained by the texture feature extraction module;
S12: the image feature extraction network consists of a batch normalization layer, a dilated convolutional layer, a regular
convolutional layer, and an activation layer; the dilated convolutional layer follows the batch normalization layer, the
regular convolutional layer follows the dilated convolutional layer, and the activation layer follows the regular
convolutional layer; together, the batch normalization layer, dilated convolutional layer, regular convolutional layer, and
activation layer form one sub-extraction module; a max-pooling layer is inserted between sub-extraction modules to perform
efficient dimensionality reduction on the information and speed up computation;
S13: the image feature extraction network applies a cross-layer shortcut connection across every three sub-extraction modules
and superimposes the feature outputs, retaining more detailed information and improving recognition precision; after the
superposition, a dropout layer followed by a scaling layer is applied, preventing the model from overfitting when the network
is very deep and accelerating the convergence of the gradient descent algorithm during optimization;
S14: the garment-position feature extraction network first performs feature extraction on the high-level semantic features
output by the image feature extraction network, obtaining the high-level semantic features of the garment position, and
outputs these features to the texture feature extraction module and the garment-position regression module;
S15: the garment-position regression module performs regression on the garment-position high-level semantic features output
by the garment-position feature extraction network to provide the specific location of the garment, and feeds the location
information back to the image feature extraction network to precisely locate the garment region in the image; this largely
excludes the interference of complex backgrounds, reduces the computation of the texture feature extraction module, calibrates
the pattern texture prediction output, and improves recognition precision;
S16: the texture feature extraction module consists of a global texture feature extraction network, a local texture feature
extraction network, and a texture feature fusion module;
S17: the global texture feature extraction network of S16, assisted by the garment-position feature information, extracts
global texture features from the high-level semantic features output by the image feature extraction network; this network
has two inputs, one being the garment-position features and the other the image features; it applies a cross-layer shortcut
connection across every two sub-extraction modules, and after the superposition operation uses only a scaling layer, with no
dropout layer;
S18: the local texture feature extraction network of S16 extracts local detail texture features from the high-level semantic
features output by the image feature extraction network; its input is the image features; it applies a cross-layer shortcut
connection across every two sub-extraction modules, and after the superposition operation uses only a scaling layer, with no
dropout layer;
S19: the texture feature fusion module of S16 fuses and arranges the outputs of the global texture feature extraction network
and the local texture feature extraction network, enriching the global texture features with the local texture extraction
results so that the global texture features have higher confidence; this effectively resolves cases where occlusion or the
viewing angle of the garment in the image would otherwise cause pattern texture attributes to be missed or misidentified;
S110: the image texture prediction module classifies the texture features output by the texture feature fusion module; this
module is implemented as a fully convolutional network, and global pooling is applied to obtain the classification result.
3. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that step S2 specifically comprises the following steps:
the loss function comprises a garment-position regression loss function and a pattern-texture attribute-classification
cross-entropy loss function;
S21: the garment-position regression loss L_loc is defined as follows:
where m denotes the number of training samples, loc_i* denotes the ground-truth position vector in the i-th image, and loc_i
denotes the garment position vector predicted for the i-th image;
S22: the pattern-texture attribute-classification loss L_attr is defined as follows:
where m denotes the number of training samples, x_i denotes the i-th garment image, and c_i denotes the pattern-texture
attribute label vector of the i-th garment image;
S23: based on S21 and S22, the total loss L_total is defined as follows:
L_total = w1 · L_loc + w2 · L_attr
where w1 is the parameter weighting L_loc during training and w2 is the parameter weighting L_attr.
4. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that step S3 specifically comprises the following steps:
a joint alternating training strategy is used, learning the shared features by alternating optimization, with the main
process as follows:
S31: train with the garment-position loss; the influence of L_attr is removed by setting the w2 parameter in the total loss
to zero, yielding the garment position information;
S32: use the garment region generated in S31 to train an isolated pattern-texture attribute recognition network; at this
stage the garment-position training network and the pattern-texture attribute recognition network do not share convolutional layers;
S33: initialize the garment-position regression network with the pattern-texture attribute recognition network obtained in S32
and train it, keeping the shared convolutional layers fixed and fine-tuning only the layers specific to the garment-position
regression network;
S34: keeping the shared convolutional layers fixed, fine-tune the layers specific to the pattern-texture attribute
recognition network.
5. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that the number of sub-extraction modules in step S12 can be increased or decreased according to hardware requirements and
prediction results.
6. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that, in step S13, in the specific implementation details of the image feature extraction network, the dropout ratio is set
within the interval 0.3-0.7 and the scaling ratio within the interval 0.01-0.08.
7. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that, in step S16, the texture feature extraction module uses a strategy of joint global-texture and local-texture extraction,
using local texture to assist the extraction of global texture and fusing local and global texture so that the resulting
global texture has higher confidence; this effectively resolves cases where occlusion or the viewing angle of the garment in
the image would otherwise cause pattern texture attributes to be missed or misidentified.
8. The neural-network-based garment pattern texture recognition method and system according to claim 1, characterized in
that, in step S3, the four-step training strategy of S31, S32, S33, and S34 is used, improving training speed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910375032.2A CN110070147B (en) | 2019-05-07 | 2019-05-07 | Garment pattern texture recognition method and system based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110070147A true CN110070147A (en) | 2019-07-30 |
CN110070147B CN110070147B (en) | 2023-10-17 |
Family
ID=67370044
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910375032.2A Active CN110070147B (en) | 2019-05-07 | 2019-05-07 | Garment pattern texture recognition method and system based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110070147B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090116698A1 (en) * | 2007-11-07 | 2009-05-07 | Palo Alto Research Center Incorporated | Intelligent fashion exploration based on clothes recognition |
CN107358243A (en) * | 2017-07-14 | 2017-11-17 | 深圳码隆科技有限公司 | A kind of method and apparatus of cloth identification |
CN109272011A (en) * | 2018-07-31 | 2019-01-25 | 东华大学 | Multitask depth representing learning method towards image of clothing classification |
CN109325952A (en) * | 2018-09-17 | 2019-02-12 | 上海宝尊电子商务有限公司 | Fashion clothing image partition method based on deep learning |
CN109344872A (en) * | 2018-08-31 | 2019-02-15 | 昆明理工大学 | A kind of recognition methods of national costume image |
Non-Patent Citations (1)
Title |
---|
廖文军等: "Sobel 算子在衣物纹理类型检测中的应用研究", 《计算机技术与发展》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112182336A (en) * | 2020-10-13 | 2021-01-05 | 广西机电职业技术学院 | Jing nationality pattern pedigree arrangement classification system |
CN112182336B (en) * | 2020-10-13 | 2023-05-30 | 广西机电职业技术学院 | Beijing pattern pedigree sorting and classifying system |
US20220237879A1 (en) * | 2021-01-27 | 2022-07-28 | Facebook Technologies, Llc | Direct clothing modeling for a drivable full-body avatar |
CN113343891A (en) * | 2021-06-24 | 2021-09-03 | 深圳市起点人工智能科技有限公司 | Detection device and detection method for child kicking quilt |
Also Published As
Publication number | Publication date |
---|---|
CN110070147B (en) | 2023-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | A deep network solution for attention and aesthetics aware photo cropping | |
CN107273876B (en) | A kind of micro- expression automatic identifying method of ' the macro micro- transformation model of to ' based on deep learning | |
CN110070147A (en) | A kind of clothing popularity Texture Recognition neural network based and system | |
CN109543602A (en) | A kind of recognition methods again of the pedestrian based on multi-view image feature decomposition | |
CN109272011B (en) | Multi-task depth representation learning method for clothing image classification | |
Sokolova et al. | Gait recognition based on convolutional neural networks | |
CN108961675A (en) | Fall detection method based on convolutional neural networks | |
CN108363997A (en) | It is a kind of in video to the method for real time tracking of particular person | |
CN109886153B (en) | Real-time face detection method based on deep convolutional neural network | |
Cheng et al. | Semi-supervised learning for rgb-d object recognition | |
Lin et al. | Large-scale isolated gesture recognition using a refined fused model based on masked res-c3d network and skeleton lstm | |
CN109190643A (en) | Based on the recognition methods of convolutional neural networks Chinese medicine and electronic equipment | |
CN110110650A (en) | Face identification method in pedestrian | |
Zhang et al. | Zoom transformer for skeleton-based group activity recognition | |
CN109598186A (en) | A kind of pedestrian's attribute recognition approach based on multitask deep learning | |
Shang et al. | Using lightweight deep learning algorithm for real-time detection of apple flowers in natural environments | |
CN109325408A (en) | A kind of gesture judging method and storage medium | |
CN110096991A (en) | A kind of sign Language Recognition Method based on convolutional neural networks | |
CN107944366A (en) | A kind of finger vein identification method and device based on attribute study | |
CN114239754B (en) | Pedestrian attribute identification method and system based on attribute feature learning decoupling | |
Park et al. | Insect classification using Squeeze-and-Excitation and attention modules-a benchmark study | |
Chen et al. | An ensemble deep neural network for footprint image retrieval based on transfer learning | |
CN111191531A (en) | Rapid pedestrian detection method and system | |
Wang et al. | Finger multimodal features fusion and recognition based on CNN | |
Lin et al. | Identification method of citrus aurantium diseases and pests based on deep convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||