CN108985298A - A kind of human body clothing dividing method based on semantic consistency - Google Patents
- Publication number
- CN108985298A CN108985298A CN201810631795.4A CN201810631795A CN108985298A CN 108985298 A CN108985298 A CN 108985298A CN 201810631795 A CN201810631795 A CN 201810631795A CN 108985298 A CN108985298 A CN 108985298A
- Authority
- CN
- China
- Prior art keywords
- train
- picture
- semantic
- clothing
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a human body clothing segmentation method based on semantic consistency, for parsing the semantics of the clothing regions in a given single-frame, single-garment picture. The method specifically comprises the following steps: obtaining an image data set for training human clothing segmentation, and defining the algorithm target; for each single-frame image in the data set, finding its neighbouring picture in the semantic space and forming a picture pair; jointly modelling the neighbouring relation of each picture pair in the manifold space; establishing a prediction model for clothing segmentation; and using the prediction model to parse the semantic information of the clothing in a picture. The invention is suitable for clothing segmentation and analysis in real images, and shows good accuracy and robustness in various complex situations.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a human body clothing segmentation method based on semantic consistency.
Background technique
Clothing semantic segmentation is a low-level vision technique that frequently serves as auxiliary information for high-level visual tasks such as clothing retrieval and clothing attribute analysis. The goal of clothing segmentation is, given an image, to predict the classification label of each pixel. The key difficulties of clothing segmentation are the huge appearance variation within clothing categories, the non-rigidity of clothing, and the extreme deformability of clothing. Conventional methods generally treat clothing segmentation as a generic semantic segmentation problem; although some methods have achieved breakthroughs in classification accuracy, they do not make full use of the information in existing data.
Owing to the effectiveness of statistical modelling, learning-based methods are gradually being applied to semantic segmentation tasks. Existing learning-based methods mainly use an end-to-end deep learning framework: the input is an original three-channel colour picture, and the output is the predicted semantic segmentation map. Deep learning can effectively solve the feature representation problem, but clothing segmentation lacks sufficiently large, accurately annotated data sets, and this shortage of data limits the effect of deep learning; at the same time, the deformability of clothing means that ordinary convolutions cannot extract suitable features.
Summary of the invention
In view of the above problems, the present invention provides a human body clothing segmentation method based on semantic consistency. The specific technical solution adopted by the present invention is as follows:
A human body clothing segmentation method based on semantic consistency comprises the following steps:
S1, obtaining an image data set for training human clothing segmentation, and defining the algorithm target;
S2, for each single-frame image in the data set, finding its neighbouring picture in the semantic space and forming a picture pair;
S3, jointly modelling the neighbouring relation of each picture pair in the manifold space (i.e. the semantic space);
S4, establishing a prediction model for clothing segmentation;
S5, using the prediction model to parse the semantic information of the clothing in a picture.
Preferably, the image data set in S1 comprises single-frame images I_train and manually annotated semantic segmentation maps P_train; the algorithm target is to predict the clothing semantic segmentation map in a single-frame image.
Preferably, S2 comprises the following sub-steps:
S21, for each single-frame image I_train, extracting the human body pose feature C_pose and the image appearance feature C_appearance from the pre-trained human pose estimation model Openpose, and concatenating C_pose and C_appearance to obtain the picture feature G_I corresponding to the single-frame image I_train;
S22, computing pairwise similarities between the pictures in the image data set, wherein the similarity α of any two pictures I_train and I'_train is calculated as follows:
wherein C_empty is the picture feature of an all-zero image of the same size as G_I; C'_I is the picture feature obtained for the single-frame image I'_train by the same method as for I_train; and Euclidean(·) denotes the Euclidean distance;
S23, through similarity computation and comparison, retrieving for each single-frame image I_train its most similar image I'_train, and obtaining the picture pair (I_train, I'_train) and the corresponding similarity value α.
Preferably, S3 comprises the following sub-steps:
S31, extracting features S_I and S'_I from I_train and I'_train of the picture pair respectively, using four layers of convolution and pooling operations, namely:
S_I = f_single(I_train; θ)
S'_I = f_single(I'_train; θ)
wherein f_single(·) is the function built from four layers of convolution and pooling operations, and θ denotes the convolution parameters;
S32, fusing the features S_I and S'_I obtained in S31 to obtain the fused feature S_interaction:
S_interaction = (1 - α) * S_I + α * S'_I
S33, reconstructing the picture semantic information from the fused feature S_interaction using three convolution layers to obtain a semantic segmentation map whose size is one eighth of the image I_train, and simultaneously reconstructing semantic information from the single-image feature S_I using four layers of convolution and up-sampling operations to obtain a semantic segmentation map of the same size as the image I_train;
S34, performing the operations of S31-S33 on all picture pairs.
Preferably, S4 comprises the following sub-steps:
S41, establishing a deep convolutional neural network, wherein the input of the network is a picture pair (I_train, I'_train) and the output is the semantic segmentation image predicted for the picture I_train; the network is represented as a mapping, expressed by the formula:
wherein θ1 denotes the convolution parameters used by the prediction model when predicting the segmentation result, and f(·) is the prediction function of the deep convolutional neural network;
S42, the loss function of the neural network is:
wherein P and P_small respectively denote the ground-truth semantic segmentation maps corresponding to the original-scale and small-scale predictions; the first term is the loss error between the original-scale predicted segmentation map and its ground truth; the second term is the loss error between the small-scale predicted segmentation map and its ground truth, the small scale being one eighth of the original scale; and λ is a weight parameter;
S43, training the entire neural network on the loss function L using the Adam optimization method and the back-propagation algorithm until the network converges.
Based on a deep neural network, this method uses the neighbouring relations of similar pictures' semantic information in the manifold space, and models the deformation characteristics of clothing with deformable convolutions, so it can better adapt to clothing semantic segmentation in different scenes.
Compared with traditional clothing semantic segmentation methods, the present invention has the following benefits:
First, the clothing semantic segmentation method of the invention identifies three important problems in clothing semantic segmentation: the extreme deformability of clothing, the semantic consistency relation between similar pictures, and the accuracy of modelling and computing that relation. By solving these problems, the method can effectively address clothing semantic segmentation when the amount of data is insufficient.
Second, the clothing semantic segmentation method of the invention establishes a semantic consistency model based on a deep convolutional neural network, while retaining computational accuracy. Deep convolutional neural networks express visual features well; moreover, the extraction of visual features and the learning of the corresponding structural model are unified in a single framework, which improves the final effect of the method.
Finally, the clothing semantic segmentation method of the invention models the semantic consistency relation of similar picture pairs with a convolutional neural network and predicts clothing semantic segmentation on that basis, while using deformable convolutions to extract clothing feature information in view of the deformability of clothing. The method can effectively mine the semantic consistency relation of picture pairs with similar content, and keeps this semantic consistency structure constrained in the semantic space.
The method can effectively improve the accuracy and efficiency of retrieval and analysis in clothing retrieval and clothing attribute analysis, and has good application value. For example, in the application scenario of online clothing retail, the method can quickly and correctly analyse the clothing regions and categories worn by a model, so that same-style clothing can be identified rapidly, providing a basis for same-style retrieval in online clothing retail.
Description of the drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 shows experimental results of the present invention.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
On the contrary, the present invention covers any substitution, modification, equivalent method, and scheme made within the spirit and scope of the present invention as defined by the claims. Further, in order to give the public a better understanding of the present invention, some specific details are described below. A person skilled in the art can still fully understand the present invention without these details.
With reference to Fig. 1, a human body clothing segmentation method based on semantic consistency comprises the following steps:
S1, obtaining an image data set for training human clothing segmentation, and defining the algorithm target.
The image data set in this step comprises single-frame images I_train and manually annotated semantic segmentation maps P_train;
the algorithm target is to predict the clothing semantic segmentation map in a single-frame image.
S2, for each single-frame image in the data set, finding its neighbouring picture in the semantic space and forming a picture pair.
This step comprises the following sub-steps:
S21, for each single-frame image I_train, extracting the human body pose feature C_pose and the image appearance feature C_appearance from the pre-trained human pose estimation model Openpose, and concatenating (i.e. directly splicing) C_pose and C_appearance to obtain the picture feature G_I corresponding to the single-frame image I_train;
S22, computing pairwise similarities between the pictures in the image data set, obtaining the similarity of every two pictures. The similarity α of any two pictures I_train and I'_train is calculated as follows:
wherein C_empty is the picture feature of an all-zero image of the same size as G_I; C'_I is the picture feature obtained for the single-frame image I'_train by the same method as for I_train; and Euclidean(·) denotes the Euclidean distance;
S23, through similarity computation and comparison, retrieving for each single-frame image I_train its most similar image I'_train, and obtaining the picture pair (I_train, I'_train) and the corresponding similarity value α.
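The pairing step S21-S23 can be sketched as follows. Note that the similarity formula itself is not reproduced in this text, so the normalisation by the distance to the all-zero feature C_empty below is only an assumption consistent with the symbols mentioned (G_I, C'_I, C_empty, Euclidean distance); the helper names `picture_feature`, `similarity`, and `nearest_neighbour` are hypothetical, and plain vectors stand in for the Openpose pose and appearance features.

```python
import numpy as np

def picture_feature(pose_feat, appearance_feat):
    """S21: concatenate pose and appearance features into one picture feature G_I."""
    return np.concatenate([pose_feat, appearance_feat])

def similarity(g_i, c_i_prime):
    """Hypothetical similarity alpha between two picture features.

    Assumption: the Euclidean distance between the pair is normalised by the
    distance to the all-zero feature C_empty, so alpha lies in [0, 1] and
    identical features give alpha = 1.
    """
    c_empty = np.zeros_like(g_i)
    d_pair = np.linalg.norm(g_i - c_i_prime)       # Euclidean(G_I, C'_I)
    d_ref = np.linalg.norm(g_i - c_empty) + 1e-8   # Euclidean(G_I, C_empty)
    return 1.0 - d_pair / (d_pair + d_ref)

def nearest_neighbour(features):
    """S23: for each picture, find the index of its most similar other
    picture and the corresponding similarity value alpha."""
    pairs = []
    for i, f_i in enumerate(features):
        best_j, best_a = None, -1.0
        for j, f_j in enumerate(features):
            if j == i:
                continue
            a = similarity(f_i, f_j)
            if a > best_a:
                best_j, best_a = j, a
        pairs.append((best_j, best_a))
    return pairs
```

Retrieval is then a pairwise scan over the data set: each image I_train is paired with the image whose feature yields the largest alpha.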
S3, jointly modelling the neighbouring relation of each picture pair in the manifold space (i.e. the semantic space).
This step comprises the following sub-steps:
S31, extracting features S_I and S'_I from I_train and I'_train of the picture pair (I_train, I'_train) respectively, using four layers of convolution and pooling operations, namely:
S_I = f_single(I_train; θ)
S'_I = f_single(I'_train; θ)
wherein f_single(·) is the function built from four layers of convolution and pooling operations, and θ denotes the convolution parameters;
S32, fusing the features S_I and S'_I obtained in S31 to obtain the fused feature S_interaction:
S_interaction = (1 - α) * S_I + α * S'_I
wherein α is the similarity value of this picture pair;
S33, reconstructing the picture semantic information from the fused feature S_interaction using three convolution layers to obtain a semantic segmentation map whose size is one eighth of the image I_train, and simultaneously reconstructing semantic information from the single-image feature S_I using four layers of convolution and up-sampling operations to obtain a semantic segmentation map of the same size as the image I_train;
S34, performing the operations of S31-S33 on all picture pairs.
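The fusion of S32 is a convex combination weighted by the similarity α: the more similar the retrieved neighbour, the more its features contribute. A minimal sketch, with NumPy arrays standing in for the convolutional feature maps S_I and S'_I (the extractor f_single itself is not reproduced here; `fuse_features` is a hypothetical name):

```python
import numpy as np

def fuse_features(s_i, s_i_prime, alpha):
    """Step S32: S_interaction = (1 - alpha) * S_I + alpha * S'_I.

    s_i, s_i_prime -- feature maps of identical shape produced by f_single
    alpha          -- similarity of the picture pair, assumed in [0, 1]
    """
    assert s_i.shape == s_i_prime.shape, "both features must come from the same extractor"
    return (1.0 - alpha) * s_i + alpha * s_i_prime
```

With alpha = 0 the fused feature reduces to S_I alone, so a picture with no close neighbour degrades gracefully to single-image segmentation.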
S4, establishing the prediction model for clothing segmentation.
This step comprises the following sub-steps:
S41, establishing a deep convolutional neural network, wherein the input of the network is a picture pair (I_train, I'_train) and the output is the semantic segmentation image predicted for the picture I_train; the network is represented as a mapping, expressed by the formula:
wherein θ1 denotes the convolution parameters used by the prediction model when predicting the segmentation result, and f(·) is the prediction function of the deep convolutional neural network;
S42, the loss function of the neural network is:
wherein P denotes the ground-truth semantic segmentation map corresponding to the original-scale prediction, i.e. the true semantic segmentation map of the image I_train, and P_small denotes the ground-truth map corresponding to the small-scale prediction, i.e. the true semantic segmentation map at one eighth the size of the image I_train; the first term is the loss error between the original-scale predicted segmentation map and its ground truth (i.e. the segmentation map annotated in S1); the second term is the loss error between the small-scale predicted segmentation map and its ground truth, the small scale being one eighth of the original scale; λ is a weight parameter, here set to 0.125;
S43, training the entire neural network on the loss function L using the Adam optimization method and the back-propagation algorithm until the network converges.
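The two-scale loss of S42 is not given as an explicit formula in this text; the sketch below assumes a per-pixel cross-entropy term at each scale, combined as L = l(original) + λ · l(small) with λ = 0.125 as stated above. The helper names `pixel_ce` and `total_loss` are hypothetical, and NumPy stands in for an autodiff framework.

```python
import numpy as np

def pixel_ce(pred_probs, target):
    """Per-pixel cross-entropy between predicted class probabilities of
    shape (H, W, C) and an integer label map of shape (H, W)."""
    h, w = target.shape
    # probability assigned to the correct class at each pixel
    p = pred_probs[np.arange(h)[:, None], np.arange(w)[None, :], target]
    return -np.mean(np.log(np.clip(p, 1e-12, None)))

def total_loss(pred, target, pred_small, target_small, lam=0.125):
    """Assumed form of the S42 loss: original-scale term plus a
    lambda-weighted term on the 1/8-scale prediction."""
    return pixel_ce(pred, target) + lam * pixel_ce(pred_small, target_small)
```

The small-scale term acts as an auxiliary supervision signal on the fused features, while λ keeps it from dominating the full-resolution objective.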
S5, using the prediction model to parse the semantic information of the clothing in a picture.
The above method is applied in a specific embodiment below, so that those skilled in the art can better understand the effect of the present invention.
Embodiment
The implementation of this embodiment follows the method described above; the specific steps are not repeated here, and only its effect on case data is shown. The invention is implemented on three data sets with ground-truth annotations, namely:
Fashionista v0.2 data set: this data set contains 685 images with 56 classes of semantic labels.
Refined Fashionista data set: this data set contains 685 images with 25 classes of semantic labels.
CFPD data set: this data set contains 2682 images with 23 classes of semantic labels.
In this example, one picture is chosen from each data set for testing. The most similar picture is first found by computing similarities; then the features of the two pictures are extracted separately, and the neighbouring relation of this picture pair in the manifold space is jointly modelled to obtain the final semantic segmentation map, as shown in Fig. 2. In the figure, groundtruth denotes the true semantic segmentation map; the predicted semantic segmentation map obtained by our method is almost identical to the true one.
The detection accuracy of the test results of this embodiment is shown in the table below. Two indices, average Acc and IoU, are mainly used to compare the detection accuracy of the various methods. The average Acc index is the per-pixel classification accuracy and reflects prediction quality well; IoU is the intersection-over-union ratio between the predicted semantic region and the ground-truth region. As the table shows, our method has a clear advantage in average Acc and IoU over other conventional methods.
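The two evaluation indices can be computed as below; this is a generic sketch assuming integer label maps (the helper names `average_acc` and `mean_iou` are hypothetical), not code taken from the patent.

```python
import numpy as np

def average_acc(pred, gt):
    """Average Acc: fraction of pixels whose predicted class matches
    the ground-truth class."""
    return float(np.mean(pred == gt))

def mean_iou(pred, gt, n_classes):
    """Mean IoU: per-class intersection-over-union between predicted and
    ground-truth regions, averaged over the classes present."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```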
In the above embodiment, the clothing semantic segmentation method of the invention first jointly models the neighbouring relations of each group of content-similar pictures in the manifold space. On this basis, the original problem is converted into an end-to-end structured learning problem, and a clothing semantic segmentation model is established based on a deep neural network. Finally, the trained clothing semantic segmentation model is used to predict the clothing semantic information of a new frame.
Through the above technical solution, the embodiment of the present invention develops, based on deep learning technology, a human body clothing segmentation method based on semantic consistency. The present invention can use the neighbouring relations of similar pictures' semantic information in the manifold space, and models the deformation characteristics of clothing with deformable convolutions, so it can better adapt to clothing semantic segmentation in different scenes.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (5)
1. A human body clothing segmentation method based on semantic consistency, characterised by comprising the following steps:
S1, obtaining an image data set for training human clothing segmentation, and defining the algorithm target;
S2, for each single-frame image in the data set, finding its neighbouring picture in the semantic space and forming a picture pair;
S3, jointly modelling the neighbouring relation of each picture pair in the manifold space;
S4, establishing a prediction model for clothing segmentation;
S5, using the prediction model to parse the semantic information of the clothing in a picture.
2. The human body clothing segmentation method based on semantic consistency according to claim 1, characterised in that the image data set in S1 comprises single-frame images I_train and manually annotated semantic segmentation maps P_train; the algorithm target is to predict the clothing semantic segmentation map in a single-frame image.
3. The human body clothing segmentation method based on semantic consistency according to claim 1, characterised in that S2 comprises the following sub-steps:
S21, for each single-frame image I_train, extracting the human body pose feature C_pose and the image appearance feature C_appearance from the pre-trained human pose estimation model Openpose, and concatenating C_pose and C_appearance to obtain the picture feature G_I corresponding to the single-frame image I_train;
S22, computing pairwise similarities between the pictures in the image data set, wherein the similarity α of any two pictures I_train and I'_train is calculated as follows:
wherein C_empty is the picture feature of an all-zero image of the same size as G_I; C'_I is the picture feature obtained for the single-frame image I'_train by the same method as for I_train; and Euclidean(·) denotes the Euclidean distance;
S23, through similarity computation and comparison, retrieving for each single-frame image I_train its most similar image I'_train, and obtaining the picture pair (I_train, I'_train) and the corresponding similarity value α.
4. The human body clothing segmentation method based on semantic consistency according to claim 1, characterised in that S3 comprises the following sub-steps:
S31, extracting features S_I and S'_I from I_train and I'_train of the picture pair respectively, using four layers of convolution and pooling operations, namely:
S_I = f_single(I_train; θ)
S'_I = f_single(I'_train; θ)
wherein f_single(·) is the function built from four layers of convolution and pooling operations, and θ denotes the convolution parameters;
S32, fusing the features S_I and S'_I obtained in S31 to obtain the fused feature S_interaction:
S_interaction = (1 - α) * S_I + α * S'_I
S33, reconstructing the picture semantic information from the fused feature S_interaction using three convolution layers to obtain a semantic segmentation map whose size is one eighth of the image I_train, and simultaneously reconstructing semantic information from the single-image feature S_I using four layers of convolution and up-sampling operations to obtain a semantic segmentation map of the same size as the image I_train;
S34, performing the operations of S31-S33 on all picture pairs.
5. The human body clothing segmentation method based on semantic consistency according to claim 1, characterised in that S4 comprises the following sub-steps:
S41, establishing a deep convolutional neural network, wherein the input of the network is a picture pair (I_train, I'_train) and the output is the semantic segmentation image predicted for the picture I_train; the network is represented as a mapping, expressed by the formula:
wherein θ1 denotes the convolution parameters used by the prediction model when predicting the segmentation result, and f(·) is the prediction function of the deep convolutional neural network;
S42, the loss function of the neural network is:
wherein P and P_small respectively denote the ground-truth semantic segmentation maps corresponding to the original-scale and small-scale predictions; the first term is the loss error between the original-scale predicted segmentation map and its ground truth; the second term is the loss error between the small-scale predicted segmentation map and its ground truth, the small scale being one eighth of the original scale; λ is a weight parameter;
S43, training the entire neural network on the loss function L using the Adam optimization method and the back-propagation algorithm until the network converges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810631795.4A CN108985298B (en) | 2018-06-19 | 2018-06-19 | Human body clothing segmentation method based on semantic consistency |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108985298A true CN108985298A (en) | 2018-12-11 |
CN108985298B CN108985298B (en) | 2022-02-18 |
Family
ID=64540714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810631795.4A Active CN108985298B (en) | 2018-06-19 | 2018-06-19 | Human body clothing segmentation method based on semantic consistency |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108985298B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858539A (en) * | 2019-01-24 | 2019-06-07 | 武汉精立电子技术有限公司 | A kind of ROI region extracting method based on deep learning image, semantic parted pattern |
CN110807462A (en) * | 2019-09-11 | 2020-02-18 | 浙江大学 | Training method insensitive to context of semantic segmentation model |
CN111028249A (en) * | 2019-12-23 | 2020-04-17 | 杭州知衣科技有限公司 | Garment image segmentation method based on deep learning |
CN114092591A (en) * | 2022-01-20 | 2022-02-25 | 中国科学院自动化研究所 | Image generation method, image generation device, electronic equipment and storage medium |
CN116543147A (en) * | 2023-03-10 | 2023-08-04 | 武汉库柏特科技有限公司 | Carotid ultrasound image segmentation method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002075685A2 (en) * | 2001-03-15 | 2002-09-26 | Koninklijke Philips Electronics N.V. | Automatic system for monitoring persons entering and leaving a changing room |
GB2403363A (en) * | 2003-06-25 | 2004-12-29 | Hewlett Packard Development Co | Tags for automated image processing |
CN1920820A (en) * | 2006-09-14 | 2007-02-28 | 浙江大学 | Image meaning automatic marking method based on marking significance sequence |
US8173772B2 (en) * | 2005-12-30 | 2012-05-08 | Spiber Technologies Ab | Spider silk proteins and methods for producing spider silk proteins |
CN105261017A (en) * | 2015-10-14 | 2016-01-20 | 长春工业大学 | Method for extracting regions of interest of pedestrian by using image segmentation method on the basis of road restriction |
CN106327469A (en) * | 2015-06-29 | 2017-01-11 | 北京航空航天大学 | Video object segmentation method based on semantic label guidance |
CN107729804A (en) * | 2017-08-31 | 2018-02-23 | 广东数相智能科技有限公司 | A kind of people flow rate statistical method and device based on garment ornament |
Non-Patent Citations (4)
Title |
---|
KOTA YAMAGUCHI 等: "Retrieving Similar Styles to Parse Clothing", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
M. HADI KIAPOUR 等: "Where to Buy It: Matching Street Clothing Photos in Online Shops", 《2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 * |
XIAODAN LIANG 等: "Clothes Co-Parsing Via Joint Image Segmentation and Labeling With Application to Clothing Retrieval", 《IEEE TRANSACTIONS ON MULTIMEDIA》 * |
ZHAO YANG et al.: "A Survey of Visual SLAM Based on Deep Learning", 《ROBOT》 *
Also Published As
Publication number | Publication date |
---|---|
CN108985298B (en) | 2022-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108985298A (en) | A kind of human body clothing dividing method based on semantic consistency | |
Yi et al. | Dualgan: Unsupervised dual learning for image-to-image translation | |
CN109325952B (en) | Fashionable garment image segmentation method based on deep learning | |
Cerutti et al. | Understanding leaves in natural images–a model-based approach for tree species identification | |
CN106599805B (en) | It is a kind of based on have monitoring data drive monocular video depth estimation method | |
CN111754396B (en) | Face image processing method, device, computer equipment and storage medium | |
CN106570486A (en) | Kernel correlation filtering target tracking method based on feature fusion and Bayesian classification | |
Chen et al. | Deformable model for estimating clothed and naked human shapes from a single image | |
CN107273895B (en) | Method for recognizing and translating real-time text of video stream of head-mounted intelligent device | |
Wang et al. | Automatic lip contour extraction from color images | |
CN106600597A (en) | Non-reference color image quality evaluation method based on local binary pattern | |
CN107273870A (en) | The pedestrian position detection method of integrating context information under a kind of monitoring scene | |
CN101710418A (en) | Interactive mode image partitioning method based on geodesic distance | |
CN110473181A (en) | Screen content image based on edge feature information without ginseng quality evaluating method | |
CN118115819A (en) | Deep learning-based chart image data identification method and system | |
CN114648681A (en) | Image generation method, device, equipment and medium | |
CN116385660A (en) | Indoor single view scene semantic reconstruction method and system | |
CN108960281A (en) | A kind of melanoma classification method based on nonrandom obfuscated data enhancement method | |
Yang et al. | Visual saliency detection with center shift | |
Yuan et al. | Explore double-opponency and skin color for saliency detection | |
KAWAKAMI et al. | Automated Color Image Arrangement Method Based on Histogram Matching-Investigation of Kansei impression between HE and HMGD | |
CN112016592B (en) | Domain adaptive semantic segmentation method and device based on cross domain category perception | |
Liang et al. | Multiple object tracking by reliable tracklets | |
Horvath et al. | A higher-order active contour model of a ‘gas of circles’ and its application to tree crown extraction | |
CN107729821A (en) | A kind of video summarization method based on one-dimensional sequence study |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||