CN107818489A - Multi-person clothing retrieval method based on clothing parsing and human detection - Google Patents
- Publication number
- CN107818489A (application number CN201710806740.8A)
- Authority
- CN
- China
- Prior art keywords
- people
- image
- retrieval
- clothes
- dressing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Library & Information Science (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention discloses a multi-person clothing retrieval method based on clothing parsing and human detection. The method includes: performing multi-person clothing parsing on a street photograph; recognizing the faces in the image and computing the face location data; converting that data, together with the multi-person clothing parsing result, into per-person body distribution heat maps via a dual-channel human detection neural network; discretizing the heat maps; and, when the user selects a garment in the front-end display, performing data retrieval in the back end to obtain the retrieval result. Implementing the embodiment of the invention satisfies the user's need to conveniently use a street photograph to retrieve similar clothes from an online shopping mall.
Description
Technical field
The present invention relates to the fields of machine vision and graphics, and in particular to a multi-person clothing retrieval method based on clothing parsing and human detection.
Background art
With the boom of the Internet and the growth of the logistics industry, online shopping has attracted more and more consumers. Clothing is one of the most commonly purchased goods, and its online sales volume is enormous. To find the clothes they need on the Internet, people rely on clothing retrieval algorithms. Early clothing retrieval algorithms were text-based and of low accuracy; image-based ("search by image") clothing retrieval naturally followed. Clothing retrieval originates from image retrieval, so the two share similar algorithms. Clothing, however, carries rich prior information that can be exploited, such as fabric, collar type, and sleeve length. Many researchers have therefore used such priors to assist clothing retrieval, achieving a certain degree of success.
Clothing parsing can segment the garments in a picture with pixel-level precision, so it serves as a foundational, upstream line of research that effectively improves the precision and convenience of clothing recommendation and retrieval; it also helps segment different kinds of objects, such as animals, people, and cars. Gradually, attention has turned to clothing parsing techniques and their application in e-commerce tasks such as clothing recommendation and clothing recognition.
At present, common clothing parsing methods can be divided as follows. Graph-cut-based methods were originally used for foreground segmentation and have recently been applied to clothing segmentation. Their characteristic is that some background information must be given first: a probabilistic model is built to fit the color and position distributions of foreground and background, and an energy function is minimized during training, thereby completing the segmentation of foreground and background regions. The drawback of these methods is limited robustness: they cannot handle pictures whose foreground and background have similar color distributions. Early graph-cut clothing segmentation therefore dealt with relatively simple pictures whose foreground and background colors differ strongly. Because background information must be given, graph-cut clothing parsing first performs face recognition to locate the upper-body region, treats the pixels outside that region as background, trains a Gaussian mixture model on them, and finally obtains the segmentation result.
With the rise of deep learning, deep-network-based methods have also appeared in the clothing parsing field. These generally first use a CNN to extract image features, then use a conditional random field or another network to classify and recognize, obtaining the clothing segmentation result. Other methods use a shape dictionary to complete clothing parsing: the dictionary contains shapes for each part, such as tops, trousers, glasses, and arms; by rotating, scaling, and combining these shapes, the shapes in the target picture can be produced, and the neural network learns the rotation, scaling, and combination parameters.
Because the method for early stage extracts characteristics of image using manual method mostly, the feature of extraction can not represent to scheme well
The information of picture, without good robustness.The method of deep learning is currently based on primarily directed to the single mesh under single scene
Mark is handled, to not having good solution across the image of scene and multiple target.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art. The invention provides a multi-person clothing retrieval method based on clothing parsing and human detection that uses a deep-learning clothing parsing method to solve the problem of parsing the clothes of multiple targets across scenes. Combining the human detection information with the parsing result, it satisfies the user's need to take a street photograph, segment out each garment each person wears, and retrieve each segmented garment against similar clothes in an online shopping mall.
To solve the above problems, the invention proposes a multi-person clothing retrieval method based on clothing parsing and human detection, the method comprising:
Performing multi-person clothing parsing on a street photograph, segmenting the garments in the image, and obtaining a multi-person clothing parsing result;
Recognizing the faces in the image and computing the face data;
Feeding the face data and the street photograph, one face at a time, into a dual-channel human detection neural network to obtain a body distribution heat map for each face;
Applying a threshold to the body distribution heat maps to discretize them, and combining the discretized body distribution maps with the multi-person clothing parsing data to obtain all clothing regions;
Showing the clothing regions to the user in the front end, where the user selects the garments to retrieve;
Receiving in the back end the data of the garments to be retrieved, and feeding them one by one into the retrieval system to obtain the retrieved garments.
Preferably, the method of performing multi-person clothing parsing on the street photograph includes: feeding the street photograph into a combined conditional parsing network, which outputs a corresponding two-dimensional matrix of the same size as the photograph. Each element of the matrix is the label value of the corresponding pixel; there are 18 labels in total: background, hair, hat, glasses, accessory, skin, shirt, coat, dress, handbag, suit, jacket, vest, underwear, skirt, trousers, socks, shoes.
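The parsing output described above, a per-pixel label matrix over 18 classes, can be sketched as follows. This is a minimal illustration, not the patent's implementation; the English class names and their order are assumptions taken from the list above.

```python
import numpy as np

# Hypothetical 18-class label set, in the order listed above; the
# English class names are an assumption of this translation.
LABELS = [
    "background", "hair", "hat", "glasses", "accessory", "skin",
    "shirt", "coat", "dress", "handbag", "suit", "jacket",
    "vest", "underwear", "skirt", "trousers", "socks", "shoes",
]

def class_regions(label_map):
    """Split an H x W per-pixel label matrix (the parsing network's
    stated output format) into one boolean mask per class present."""
    label_map = np.asarray(label_map)
    return {LABELS[c]: label_map == c for c in np.unique(label_map)}

# Toy 2 x 3 "photograph": background on the left, a shirt above trousers.
toy = np.array([[0, 6, 6],
                [0, 15, 15]])
masks = class_regions(toy)
```

Each mask can then be used directly to crop the corresponding garment region out of the photograph.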
Preferably, the step of obtaining the body distribution heat map for each face includes:
Obtaining the recognized and computed face data, and selecting one face as the target of this detection pass;
Constructing a two-dimensional matrix of the same height and width as the street photograph as the face heat map, with default value 0 and the face region set to 1;
Feeding the face heat map and the street photograph into the dual-channel human detection neural network, which outputs a body distribution heat map corresponding to the face data; the heat map is a two-dimensional matrix of the same size as the photograph, each element giving the probability that the corresponding position belongs to that person's body;
Recording the body distribution heat map with its corresponding face data, and checking whether unused face data remains; if so, returning to the recognition step, otherwise ending this step.
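The face-heat-map construction in the second step above can be sketched as follows. This is a minimal illustration under stated assumptions: the (x, y, w, h) box layout is not fixed by the patent and is assumed here.

```python
import numpy as np

def face_heatmap(image_shape, face_box):
    """Build the single-face input channel described above: an H x W
    matrix of zeros with the face rectangle set to 1.

    `face_box` is assumed to be (x, y, w, h) in pixels; the patent
    does not fix a box convention.
    """
    h, w = image_shape
    heat = np.zeros((h, w), dtype=np.float32)
    x, y, bw, bh = face_box
    heat[y:y + bh, x:x + bw] = 1.0  # mark the face region
    return heat

# 8 x 8 image, face box of width 3 and height 2 at (x=2, y=1).
hm = face_heatmap((8, 8), (2, 1, 3, 2))
```

This matrix and the photograph would then form the two input channels of the detection network.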
Preferably, the step of applying a threshold to the body distribution heat maps and discretizing them includes:
Setting a threshold a and checking whether each element of the body distribution heat map exceeds it; elements greater than a are reassigned the value 1, the rest 0.
All discretized body distribution maps are then applied to the clothing parsing result, with the expression:
C_ij = H_i · 1(I = c_j)
where C_ij is the distribution map of the j-th garment of the i-th person, H_i is the discretized body distribution map of the i-th person, and I is the street photograph; 1(I = c_j) is an indicator function that sets to 1 the regions of image I belonging to garment c_j, and is 0 elsewhere.
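The discretization and the expression C_ij = H_i · 1(I = c_j) amount to a threshold followed by an element-wise mask intersection. A minimal NumPy sketch, with the threshold value chosen arbitrarily since the patent does not specify one:

```python
import numpy as np

def clothing_region(body_heat, label_map, cls, a=0.5):
    """Compute C_ij = H_i * 1(label == c_j): binarize person i's body
    heat map at threshold `a`, then intersect it with the pixels the
    parser labeled as clothing class `cls`. The value a=0.5 is an
    assumption; the patent only says "a certain threshold"."""
    h_i = (np.asarray(body_heat) > a).astype(np.uint8)   # discretized H_i
    indicator = (np.asarray(label_map) == cls).astype(np.uint8)  # 1(I = c_j)
    return h_i * indicator

# Toy 2 x 2 example: class 6 ("shirt"-like label) for one person.
heat = np.array([[0.9, 0.2],
                 [0.8, 0.7]])
labels = np.array([[6, 6],
                   [6, 15]])
region = clothing_region(heat, labels, cls=6)
```

The result is the pixel region of that person's garment of class c_j, ready to be shown to the user.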
Preferably, the specific steps of obtaining a retrieved garment include:
Selecting one garment among those the user selected as the target of this retrieval;
Feeding the chosen garment into the retrieval feature extraction network to generate its retrieval feature;
Comparing the garment's retrieval feature with the other garment features in the database, computing their similarity, and taking the top-n commodities by similarity as the result of this retrieval;
Recording the target garment and the retrieval result of this pass, and checking whether any garments remain un-retrieved; if so, returning to the retrieval step, otherwise ending the retrieval.
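The similarity ranking step above can be sketched as follows. The patent only says "compute similarity"; cosine similarity is an assumed choice here, as are the feature dimensions.

```python
import numpy as np

def top_n(query_feat, db_feats, n=5):
    """Rank database garment features by cosine similarity to the
    query feature and return the indices of the n most similar items.
    Cosine similarity is an assumption; the patent does not name a
    metric."""
    q = query_feat / np.linalg.norm(query_feat)
    d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = d @ q                       # one similarity per database item
    return np.argsort(-sims)[:n]       # indices, most similar first

# Toy database of three 2-D garment features.
db = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.7, 0.7]])
idx = top_n(np.array([1.0, 0.1]), db, n=2)
```

In the described system, the returned indices would map to commodity records shown back to the user.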
In the embodiments of the invention, multi-person clothing parsing combined with human detection segments each garment in a multi-person street photograph with pixel-level precision, thereby achieving clothing retrieval on multi-person street scenes. At the same time it satisfies the user's need to input a complex picture with an arbitrary background and retrieve similar commodities from an online store.
Brief description of the drawings
In order to explain the embodiments of the invention or the prior-art technical schemes more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the multi-person clothing retrieval method based on clothing parsing and human detection of an embodiment of the invention;
Fig. 2 is a schematic flowchart of the dual-channel human detection neural network in an embodiment of the invention.
Detailed description of the embodiments
The technical schemes in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
Fig. 1 is a schematic flowchart of the multi-person clothing retrieval method based on clothing parsing and human detection of an embodiment of the invention. As shown in Fig. 1, the method includes:
S1: Perform multi-person clothing parsing on the street photograph, segment the garments in the image, and obtain the multi-person clothing parsing result;
S2: Recognize the faces in the image and compute the face data;
S3: Feed the face data and the street photograph, one face at a time, into the dual-channel human detection neural network to obtain a body distribution heat map for each face;
S4: Apply a threshold to the body distribution heat maps to discretize them, and combine the discretized body distribution maps with the multi-person clothing parsing data to obtain all clothing regions;
S5: Show the clothing regions to the user in the front end, where the user selects the garments to retrieve;
S6: Receive in the back end the data of the garments to be retrieved, and feed them one by one into the retrieval system to obtain the retrieved garments.
In S1, the specific method of performing multi-person clothing parsing on the street photograph includes: feeding the street photograph into the combined conditional parsing network, which outputs a corresponding two-dimensional matrix of the same size as the photograph. Each element of the matrix is the label value of the corresponding pixel; there are 18 labels in total: background, hair, hat, glasses, accessory, skin, shirt, coat, dress, handbag, suit, jacket, vest, underwear, skirt, trousers, socks, shoes.
Specifically, in S2 face recognition is performed on the multi-person scene using MTCNN. The network receives a multi-person picture as input and outputs N vectors of length 5, where N is the number of heads in the picture; the values in each vector represent the X coordinate, the Y coordinate, and the height and width of the head box.
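The detection output described above can be sketched as follows. Note the text names only four quantities for a length-5 vector; the fifth value is assumed here to be a detection confidence score, which is how MTCNN-style detectors are commonly reported.

```python
def unpack_detections(vectors):
    """Unpack the N length-5 vectors attributed above to the MTCNN
    stage into labeled head boxes. The patent names x, y, width, and
    height; treating the fifth value as a confidence score is an
    assumption of this sketch."""
    boxes = []
    for v in vectors:
        x, y, w, h, score = v
        boxes.append({"x": x, "y": y, "w": w, "h": h, "score": score})
    return boxes

# One detected head at (10, 20), 32 x 32 pixels.
faces = unpack_detections([(10, 20, 32, 32, 0.98)])
```

Each unpacked box would then seed one face heat map for the dual-channel detection network.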
In a particular embodiment, the step in S3 of feeding the faces one by one into the dual-channel human detection neural network, i.e. of obtaining the body distribution heat map for each face, includes:
S31: Obtain the recognized and computed face data, and select one face as the target of this detection pass;
S32: Construct a two-dimensional matrix of the same height and width as the street photograph as the face heat map, with default value 0 and the face region set to 1;
S33: As shown in Fig. 2, feed the face heat map and the street photograph into the dual-channel human detection neural network, which outputs a body distribution heat map corresponding to the face data; the heat map is a two-dimensional matrix of the same size as the photograph, each element giving the probability that the corresponding position belongs to that person's body;
S34: Record the body distribution heat map with its corresponding face data, and check whether unused face data remains; if so, return to the recognition step, otherwise end this step.
To improve the accuracy of the dual-channel human detection neural network mentioned in S33, it must be trained continuously until no unused data remains. The specific training method is:
Divide the multi-person clothing parsing dataset into a single-person dataset and a multi-person dataset, the latter containing both real multi-person data and synthesized multi-person data;
For the single-person dataset, change every label greater than 0 to 1;
For the real multi-person dataset, whose data volume is small, annotate the body distribution maps manually, one person at a time;
For the synthesized multi-person dataset, use the composition records to find the corresponding single-person pictures; then, according to the composition method, find the clothing labels in the synthesized picture person by person and change them all to 1.
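The single-person label-preparation step above ("change every label greater than 0 to 1") is a simple binarization that turns a clothing parse into a whole-person mask. A minimal sketch:

```python
import numpy as np

def binarize_parse(label_map):
    """Training-data step described above: in a single-person parsing
    label map, replace every non-zero clothing/part label with 1,
    leaving background (label 0) untouched -- turning a detailed
    parse into a binary person-presence mask."""
    return (np.asarray(label_map) > 0).astype(np.uint8)

# Labels 6 ("shirt"), 15 ("trousers"), 3 ("glasses") all collapse to 1.
mask = binarize_parse([[0, 6, 15],
                       [0, 0, 3]])
```

Such masks would serve as the supervision targets for the body distribution heat maps.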
S4 further comprises:
Set a threshold a and check whether each element of the body distribution heat map exceeds it; elements greater than a are reassigned the value 1, the rest 0.
All discretized body distribution maps are then applied to the clothing parsing result, with the expression:
C_ij = H_i · 1(I = c_j)
where C_ij is the distribution map of the j-th garment of the i-th person, H_i is the discretized body distribution map of the i-th person, and I is the street photograph; 1(I = c_j) is an indicator function that sets to 1 the regions of image I belonging to garment c_j, and is 0 elsewhere.
In a specific implementation, S6 further comprises:
S61: Select one garment among those the user selected as the target of this retrieval;
S62: Feed the chosen garment into the retrieval feature extraction network to generate the garment's retrieval feature;
S63: Compare the garment's retrieval feature with the other garment features in the database, compute their similarity, and take the top-n commodities by similarity as the result of this clothing retrieval;
S64: Record the target garment and the retrieval result of this pass, and check whether any garments remain un-retrieved; if so, return to the retrieval step, otherwise end the retrieval.
Specifically, the retrieval feature network used in S62 is a modification of a triplet-based garment feature extraction network; the modification mainly reduces the scale of the network and speeds up its computation.
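Feature networks of the triplet-based kind mentioned above are commonly trained with the standard triplet objective: pull the anchor toward a same-garment positive and push it away from a different-garment negative. A minimal sketch; the margin value and squared-Euclidean distance are assumptions, since the patent gives neither.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet objective sketch: penalize the anchor being
    closer (in squared Euclidean distance) to the negative than to
    the positive by less than `margin`. Margin 0.2 is an assumption."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Well-separated triplet: positive near the anchor, negative far away.
loss = triplet_loss(np.array([0.0, 0.0]),
                    np.array([0.1, 0.0]),
                    np.array([1.0, 0.0]))
```

A loss of zero here indicates the triplet already satisfies the margin; violating triplets yield a positive penalty that drives the feature network's training.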
The invention discloses a multi-person clothing retrieval method based on clothing parsing and human detection. It uses clothing parsing combined with human detection to segment each garment in a multi-person street photograph with pixel-level precision, thereby achieving clothing retrieval on multi-person street scenes, and satisfies the user's need to conveniently input a complex picture with an arbitrary background and get back commodities from an online store similar to those in the picture.
Those of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which can include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
In addition, the multi-person clothing retrieval method based on clothing parsing and human detection provided by the embodiments of the invention has been described in detail above. Specific cases have been used herein to explain the principle and embodiments of the invention; the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the invention, make changes to the specific embodiments and the application scope. In summary, the contents of this description should not be construed as limiting the invention.
Claims (5)
- 1. A multi-person clothing retrieval method based on clothing parsing and human detection, characterized in that the method comprises: performing multi-person clothing parsing on a street photograph, segmenting the garments in the image, and obtaining a multi-person clothing parsing result; recognizing the faces in the image and computing the face data; feeding the face data and the street photograph, one face at a time, into a dual-channel human detection neural network to obtain a body distribution heat map for each face; applying a threshold to the body distribution heat maps to discretize them, and combining the discretized body distribution maps with the multi-person clothing parsing data to obtain all clothing regions; showing the clothing regions to the user in the front end, where the user selects the garments to retrieve; and receiving in the back end the data of the garments to be retrieved and feeding them one by one into the retrieval system to obtain the retrieved garments.
- 2. The multi-person clothing retrieval method based on clothing parsing and human detection as claimed in claim 1, characterized in that the method of performing multi-person clothing parsing on the street photograph includes: feeding the street photograph into a combined conditional parsing network, which outputs a corresponding two-dimensional matrix of the same size as the photograph; each element of the matrix is the label value of the corresponding pixel, with 18 labels in total: background, hair, hat, glasses, accessory, skin, shirt, coat, dress, handbag, suit, jacket, vest, underwear, skirt, trousers, socks, shoes.
- 3. The multi-person clothing retrieval method based on clothing parsing and human detection as claimed in claim 1, characterized in that the specific steps of obtaining a body distribution heat map for each face include: obtaining the recognized and computed face data and selecting one face as the target of this detection pass; constructing a two-dimensional matrix of the same height and width as the street photograph as the face heat map, with default value 0 and the face region set to 1; feeding the face heat map and the street photograph into the dual-channel human detection neural network, which outputs a body distribution heat map corresponding to the face data, the heat map being a two-dimensional matrix of the same size as the photograph whose elements give the probability that the corresponding position belongs to that person's body; and recording the body distribution heat map with its corresponding face data and checking whether unused face data remains, returning to the recognition step if so and ending this step otherwise.
- 4. The multi-person clothing retrieval method based on clothing parsing and human detection as claimed in claim 1, characterized in that the step of applying a threshold to the body distribution heat maps and discretizing them includes: setting a threshold a and checking whether each element of the body distribution heat map exceeds it, reassigning elements greater than a the value 1 and the rest 0; and applying all discretized body distribution maps to the clothing parsing result, with the expression C_ij = H_i · 1(I = c_j), where C_ij is the distribution map of the j-th garment of the i-th person, H_i is the discretized body distribution map of the i-th person, and I is the street photograph; 1(I = c_j) is an indicator function that sets to 1 the regions of image I belonging to garment c_j and is 0 elsewhere.
- 5. The multi-person clothing retrieval method based on clothing parsing and human detection as claimed in claim 1, characterized in that the specific steps of obtaining a retrieved garment include: selecting one garment among those the user selected as the target of this retrieval; feeding the chosen garment into the retrieval feature extraction network to generate the garment's retrieval feature; comparing the garment's retrieval feature with the other garment features in the database, computing their similarity, and taking the top-n commodities by similarity as the result of this clothing retrieval; and recording the target garment and the retrieval result of this pass, checking whether any garments remain un-retrieved, returning to the retrieval step if so and ending the retrieval otherwise.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710806740.8A CN107818489B (en) | 2017-09-08 | 2017-09-08 | Multi-person clothing retrieval method based on dressing analysis and human body detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710806740.8A CN107818489B (en) | 2017-09-08 | 2017-09-08 | Multi-person clothing retrieval method based on dressing analysis and human body detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107818489A true CN107818489A (en) | 2018-03-20 |
CN107818489B CN107818489B (en) | 2021-09-17 |
Family
ID=61601585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710806740.8A Active CN107818489B (en) | 2017-09-08 | 2017-09-08 | Multi-person clothing retrieval method based on dressing analysis and human body detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107818489B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779270A (en) * | 2012-06-21 | 2012-11-14 | 西南交通大学 | Target clothing image extraction method aiming at shopping image search |
TW201443807A (en) * | 2013-04-17 | 2014-11-16 | Yahoo Inc | Visual clothing retrieval |
CN106250423A (en) * | 2016-07-25 | 2016-12-21 | 上海交通大学 | The degree of depth convolutional neural networks cross-domain costume retrieval method shared based on partial parameters |
- 2017-09-08: CN application CN201710806740.8A filed; granted as CN107818489B (status: Active)
Non-Patent Citations (3)
Title |
---|
LIU SI et al.: "Street-to-Shop: Cross-Scenario Clothing Retrieval via Parts Alignment and Auxiliary Set", 2012 IEEE Conference on Computer Vision and Pattern Recognition |
LI ZONGMIN et al.: "Clothing retrieval combining hierarchical segmentation and cross-domain dictionary learning", Journal of Image and Graphics |
AI HAIZHOU et al.: "Who Blocks Who: Simultaneous Segmentation of Occluded Objects", Journal of Computer Science & Technology |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503058A (en) * | 2019-08-27 | 2019-11-26 | 上海交通大学苏州人工智能研究院 | A kind of dressing compliance detection method |
CN113076775A (en) * | 2020-01-03 | 2021-07-06 | 上海依图网络科技有限公司 | Preset clothing detection method, device, chip and computer readable storage medium |
CN111639641A (en) * | 2020-04-30 | 2020-09-08 | 中国海洋大学 | Clothing area acquisition method and device |
CN111639641B (en) * | 2020-04-30 | 2022-05-03 | 中国海洋大学 | Method and device for acquiring clothing region not worn on human body |
CN113837138A (en) * | 2021-09-30 | 2021-12-24 | 重庆紫光华山智安科技有限公司 | Dressing monitoring method, system, medium and electronic terminal |
CN113837138B (en) * | 2021-09-30 | 2023-08-29 | 重庆紫光华山智安科技有限公司 | Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107818489B (en) | 2021-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107818489A (en) | A kind of more people's costume retrieval methods based on dressing parsing and human testing | |
US9075825B2 (en) | System and methods of integrating visual features with textual features for image searching | |
Bossard et al. | Apparel classification with style | |
US9460518B2 (en) | Visual clothing retrieval | |
CN109658455A (en) | Image processing method and processing equipment | |
CN107679960B (en) | Personalized clothing recommendation method based on clothing image and label text bimodal content analysis | |
CN109614925A (en) | Dress ornament attribute recognition approach and device, electronic equipment, storage medium | |
CN106021603A (en) | Garment image retrieval method based on segmentation and feature matching | |
JP2001167110A (en) | Picture retrieving method and its device | |
CN108109055B (en) | Cross-scene clothing retrieval method based on image rendering | |
US11475500B2 (en) | Device and method for item recommendation based on visual elements | |
KR101639657B1 (en) | Method and server for searching similar goods | |
CN107301644B (en) | Natural image non-formaldehyde finishing method based on average drifting and fuzzy clustering | |
CN110647906A (en) | Clothing target detection method based on fast R-CNN method | |
CN108197180A (en) | A kind of method of the editable image of clothing retrieval of clothes attribute | |
CN116343267B (en) | Human body advanced semantic clothing changing pedestrian re-identification method and device of clothing shielding network | |
Song et al. | When multimedia meets fashion | |
CN103049512B (en) | Blocking, weighting and matching retrieval method based on commodity image saliency map | |
CN115909407A (en) | Cross-modal pedestrian re-identification method based on character attribute assistance | |
Rubio et al. | Multi-modal joint embedding for fashion product retrieval | |
Usmani et al. | Enhanced deep learning framework for fine-grained segmentation of fashion and apparel | |
Chun et al. | A novel clothing attribute representation network-based self-attention mechanism | |
Zhang et al. | Warpclothingout: A stepwise framework for clothes translation from the human body to tiled images | |
Liu et al. | FRSFN: A semantic fusion network for practical fashion retrieval | |
Wang et al. | A two-branch hand gesture recognition approach combining atrous convolution and attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||