CN112419489B - Three-dimensional-two-dimensional template generation method based on feature fusion and RBF network - Google Patents
- Publication number: CN112419489B (application CN202011441272.7A)
- Authority: CN (China)
- Prior art keywords: dimensional, clothing, model, segmentation, descriptors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/12—Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2113/00—Details relating to the application field
- G06F2113/12—Cloth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/14—Force analysis or force optimisation, e.g. static or dynamic forces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a three-dimensional-to-two-dimensional template generation method based on feature fusion and an RBF network. Eight features are extracted from each three-dimensional garment component and fed, as inputs, into independent RBF neural network models, which output an overall feature for each component through machine learning. The overall component features are then globally matched against the features of a target (unidentified) three-dimensional garment model to complete the identification and classification of the garment components. The components are next automatically segmented from the three-dimensional garment model with a graph-cut method. Finally, each segmented component is unfolded by a mechanics-based surface-flattening method to generate the template. The method effectively automates the identification and segmentation of three-dimensional garment model components, so that garment templates can be generated directly, reducing the dependence of the segmentation process on manual work and enabling rapid template generation.
Description
Technical Field
The invention relates to the field of garment template making, and in particular to a three-dimensional-to-two-dimensional template generation method based on feature fusion and an RBF network.
Background
With the development of digital and intelligent technology, the application of three-dimensional garment models has become a research hotspot. Technologies such as virtual garment fitting and the generation of two-dimensional garment templates from three-dimensional garments all depend on the three-dimensional garment model.
Three-dimensional garment models can be acquired in several ways. First, point cloud data of a physical garment can be captured directly with a three-dimensional scanner and the model reconstructed by surface fitting. Second, contour and texture information can be extracted from a picture by image-processing techniques to obtain a garment style drawing, from which a three-dimensional garment mesh is constructed. Patent CN201910130161.5, for example, discloses an Internet-based automatic 3D garment template generation system that obtains a two-dimensional garment picture from sample pictures on the Internet and converts it, through image processing, into a three-dimensional garment model. Third, human body point cloud data can be captured with a three-dimensional body scanner, and a three-dimensional garment model established from the body model, according to the required garment style, by a vector method or other methods. The three-dimensional-scanning-based garment template generation method disclosed in patent CN201810704200.3 obtains three-dimensional human body data by scanning and then derives the template by analyzing the relation between the form of each characteristic body part and the corresponding template curve.
Most of the above approaches ultimately aim to generate a two-dimensional garment template, and component segmentation of the three-dimensional garment model is an important step in template generation. At present, this segmentation cannot be separated from manual operation: because a garment is divided into components such as the front pieces, back pieces, sleeves, collar and front fly, the components must be judged and identified by professionals, and segmentation curves must be drawn manually in three-dimensional software before cutting, so the dependence on skilled technicians is high.
Disclosure of Invention
The invention aims to provide, against the defects of the prior art, a three-dimensional-to-two-dimensional template generation method based on feature fusion and an RBF network. A three-dimensional model is first over-segmented into sub-patches, which are then adjusted into garment components. Eight features are extracted from each three-dimensional garment component and fed, as inputs, into independent RBF neural network models, which output an overall feature for each component through machine learning. The overall component features are then globally matched against the features of a target (unidentified) three-dimensional garment model to complete the identification and classification of the garment components. The components are next automatically segmented with a graph-cut method, and finally each component is unfolded by a mechanics-based surface-flattening method to generate the template. The method effectively automates the identification and segmentation of three-dimensional garment model components, so that garment templates can be generated directly, reducing the dependence of the segmentation process on manual work and enabling rapid template generation.
In order to solve the technical problems, the following technical scheme is adopted:
a three-dimensional-two-dimensional template generation method based on feature fusion and RBF network is characterized by comprising the following steps:
(1) Preprocessing the three-dimensional garment model and obtaining three-dimensional garment components by cutting;
(2) Training the RBF neural network on three-dimensional garment component features to complete garment component identification and classification;
(3) Segmenting the three-dimensional garment components;
(4) Generating the template by mechanics-based surface unfolding.
Preferably, the three-dimensional garment model preprocessing in step (1) comprises: a. preliminary over-segmentation, in which the three-dimensional garment model is over-segmented into n sub-patches; and b. Beam-search region fusion, after which the segmented garment components comprise a left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, front fly, collar and hem. After the fusion, moderate boundary adjustment can be performed to ensure the accuracy of region fusion.
Preferably, the specific steps of the three-dimensional garment model preprocessing are as follows:
a. Preliminary over-segmentation: the three-dimensional garment model is over-segmented into a number of sub-patches, splitting the voxels into several regularly shaped sub-regions within which the voxel features are consistent; local, global and topological features are extracted on the sub-patches. Specifically, the sub-patches are obtained by a normalized-cut computation, and their boundaries are aligned by fuzzy cuts; the three-dimensional garment model is thus over-segmented into n sub-patches;
b. Beam-search region fusion: sub-regions with similar features are merged by Beam search, achieving a preliminary segmentation; the cut points are contained in the cut combinations, and the most suitable combination is selected by Beam search. When regions are merged, the sub-regions awaiting merging are looked up through the adjacency graph. With a maximum beam width of 10, the specific steps are:
I. Add all sets containing a single cut point to the candidate solutions, the cut points being denoted P;
II. Sort the candidate solutions by score from large to small and, if their number exceeds the beam width, delete those at the tail;
III. Add a new cut point to each candidate solution to form solutions containing two cut points;
IV. Sort the candidate solutions by score from large to small again and, if their number exceeds the beam width, delete those at the tail;
Iterate until the candidate solutions contain 10 cut points, and take the cut combination with the largest score. After the fusion, moderate boundary adjustment can be performed to ensure the accuracy of region fusion. The segmented components comprise a left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, front fly, collar and hem.
Preferably, step (2) comprises:
a. Feature extraction: eight different feature descriptors are selected, each describing and expressing the features of the three-dimensional garment model from a different angle;
Fifty sets of already-segmented three-dimensional garment models are selected, and the eight feature descriptors are extracted for each garment component, the components being the left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, front fly, collar and hem; the extracted features are denoted FLAi, FRAi, BLAi, BRAi, SLAi, SRAi, FAi, NAi and HAi respectively, where i = 1, 2, ..., 8;
b. Classification and identification of the three-dimensional garment model components based on the RBF neural network: the RBF neural network completes the classification of the three-dimensional garment model components. It is a three-layer neural network comprising an input layer, a hidden layer and an output layer; the transformation from input space to hidden space is nonlinear, while the transformation from hidden space to output space is linear;
The eight extracted features of the nine garment components, namely the left front piece (FLAi), right front piece (FRAi), left back piece (BLAi), right back piece (BRAi), left sleeve (SLAi), right sleeve (SRAi), front fly (FAi), collar (NAi) and hem (HAi), are used as the respective input layers. The trained RBF models output the overall feature of each garment component, denoted FLT, FRT, BLT, BRT, SLT, SRT, FT, NT and HT respectively. By mapping the low-dimensional features into a high-dimensional space, the nonlinear relationship at the input layer is converted into a linear relationship at the output layer. Global feature matching between the overall component features and the target (unidentified) three-dimensional garment model completes the identification and classification of the garment components.
Preferably, the eight different feature descriptors in step a are: the shape diameter function (A1), Euclidean distance (A2), Gaussian curvature (A3), average geodesic distance (A4), multi-resolution Reeb graph skeleton extraction (A5), shape distribution (A6), point feature histogram (PFH) descriptor (A7) and signature of histograms of orientations (SHOT) descriptor (A8).
Preferably, in the three-dimensional garment component segmentation of step (3), the three-dimensional garment model components identified in the previous step are segmented from the whole model. Co-segmentation is performed on the three-dimensional garment model: all sub-patches are pooled together, and the models are assumed to share the same number of cluster centers. Because the segmentation boundaries are noisy, they are optimized by fuzzy cuts, and the final segmentation result is obtained by a graph-cut method. The garment components are divided along the mesh at the parting lines; a parting line is defined at the underarm of the sleeve component, other decorative style lines are ignored, and only the structural parting lines are considered.
Preferably, in the template generation of step (4), after the three-dimensional garment model components are obtained, the 3D model must be converted into a 2D plane; each three-dimensional garment mesh component is unfolded into a two-dimensional garment template. The fabric type is set, the fabric's performance parameters are added, a spring-mass model is established, and the template is generated by the mechanics-based surface-unfolding method. Specifically, the surface is simplified into a particle system formed by the vertices of the triangular mesh, and applying an appropriate force to each particle deforms the mesh accordingly, so that the surface is unfolded to generate the template.
Preferably, the fabric properties added include the material, thickness and elastic modulus.
Due to the adoption of the technical scheme, the method has the following beneficial effects:
The invention over-segments the three-dimensional model into sub-patches and then adjusts the sub-patches into garment components. Eight features are extracted from each three-dimensional garment component and fed, as inputs, into independent RBF neural network models, which output an overall feature for each component through machine learning. The overall component features are then globally matched against the features of the target (unidentified) three-dimensional garment model to complete the identification and classification of the garment components. The components are next automatically segmented with a graph-cut method, and finally each component is unfolded by a mechanics-based surface-flattening method to generate the template. The method effectively automates the identification and segmentation of three-dimensional garment model components, so that garment templates can be generated directly, reducing the dependence of the segmentation process on manual work and enabling rapid template generation.
Drawings
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a flow chart of template generation in accordance with the present invention;
FIG. 2 is a schematic diagram of the over-segmentation of the present invention;
FIG. 3 is a schematic view of a Beam search region fusion in accordance with the present invention;
FIG. 4 is a schematic view of a split of a garment component of the present invention;
FIG. 5 is a block diagram of an RBF neural network according to the present invention;
FIG. 6 is a schematic diagram of an expanded template according to the present invention.
Detailed Description
The present invention is better illustrated in connection with the specific embodiments and figures, as shown in figures 1 to 6. This example is merely illustrative of the present invention and does not limit the scope of the invention.
First, a three-dimensional clothing model is obtained by various methods, and after a triangular mesh is generated, the following processing is performed.
1. Three-dimensional garment model preprocessing
(1) Preliminary over-segmentation process
Preliminary over-segmentation: the three-dimensional garment model is over-segmented into a number of sub-patches, as shown in FIG. 2, splitting the voxels into several regularly shaped sub-regions within which the voxel features are consistent; local, global and topological features are extracted on the sub-patches. Specifically, the sub-patches are obtained by a normalized-cut computation, and their boundaries are aligned by fuzzy cuts; the three-dimensional garment model is thus over-segmented into n sub-patches.
(2) Beam search region fusion processing:
As shown in FIG. 3, sub-regions with similar features are then merged by Beam search to achieve the preliminary segmentation; the cut points are contained in the cut combinations, and the most suitable combination is selected by Beam search. Each sub-region after splitting is a set of cube voxels, all of which belong to the same region. First, a seed point is selected for region growing and the cube region containing it is determined; then, with that region as the center, all adjacent sub-regions are examined and merged if they belong to the same region as the center. The algorithm in effect grows the seed points with cube nodes as the smallest unit; when the regions are split, a region adjacency graph is generated to represent the adjacency relations among the sub-regions, and during region merging the sub-regions awaiting merging are looked up through this adjacency graph.
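The region adjacency graph described above can be sketched in a few lines. The following illustrative Python (all names are hypothetical, not from the patent) builds an adjacency graph from a 3-D volume of region labels using 6-connected voxel neighbourhoods, so that merge candidates for any region can be looked up directly:

```python
def region_adjacency(labels):
    """Build a region adjacency graph from a 3-D nested list of region
    labels: two regions are adjacent if some pair of face-neighbouring
    voxels carries their two labels.  (Illustrative helper only.)"""
    nx, ny, nz = len(labels), len(labels[0]), len(labels[0][0])
    adj = {}

    def link(u, v):
        if u != v:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                u = labels[x][y][z]
                adj.setdefault(u, set())
                # Compare with the +x, +y and +z face neighbours.
                if x + 1 < nx:
                    link(u, labels[x + 1][y][z])
                if y + 1 < ny:
                    link(u, labels[x][y + 1][z])
                if z + 1 < nz:
                    link(u, labels[x][y][z + 1])
    return adj

# Toy 2x2x2 volume containing three regions labelled 0, 1 and 2.
labels = [[[0, 0], [1, 1]],
          [[0, 0], [2, 2]]]
adj = region_adjacency(labels)
print(adj[0])  # the regions that may be merged with region 0
```

During merging, only the sets in `adj` need to be consulted, which avoids rescanning the whole volume for each candidate pair.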
Let the maximum beam width be 10; the specific steps are:
(a) Add all sets containing a single cut point to the candidate solutions, the cut points being denoted P:
{{P1},{P2},{P3},…,{P10}}
(b) Sort the candidate solutions by score from large to small and, if their number exceeds the beam width, delete those at the tail.
(c) Add a new cut point to each candidate solution to form candidate solutions containing two cut points:
{{P1},{P2},{P3},…,{P10},
{P1,P2},{P1,P3},…,{P1,P10},
{P2,P3},{P2,P4},…,{P2,P10},…
{P9,P10}}
(d) Sort the candidate solutions by score from large to small again and, if their number exceeds the beam width, delete those at the tail.
Iterate until the candidate solutions contain 10 cut points, and take the cut combination with the largest score. After the fusion, moderate boundary adjustment can be performed to ensure the accuracy of region fusion. The segmented components comprise a left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, front fly, collar and hem.
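The beam-search procedure of steps (a) to (d) can be sketched as follows. This is an illustrative Python sketch with a toy scoring function; in the patent the score would come from the region-merging quality measure, which is not specified here, so `score` is a stand-in assumption:

```python
def beam_search_cuts(points, score, beam=10, max_cuts=10):
    """Beam search over subsets of candidate cut points.

    `score` maps a frozenset of cut points to a real number (a
    hypothetical stand-in for the merge-quality measure).  Returns
    the best-scoring subset found.
    """
    # Step (a): every single-cut set is a candidate solution.
    frontier = [frozenset([p]) for p in points]
    best = max(frontier, key=score)
    for _ in range(max_cuts - 1):
        # Steps (b)/(d): keep only the `beam` highest-scoring solutions.
        frontier = sorted(frontier, key=score, reverse=True)[:beam]
        # Step (c): extend each kept solution with one new cut point.
        frontier = [s | {p} for s in frontier for p in points if p not in s]
        if not frontier:
            break
        cand = max(frontier, key=score)
        if score(cand) > score(best):
            best = cand
    return best

# Toy example: the "true" cuts are {2, 5}; the score rewards overlap
# with them and mildly penalises extra cuts.
true_cuts = {2, 5}
score = lambda s: len(s & true_cuts) - 0.1 * len(s - true_cuts)
best = beam_search_cuts(range(10), score, beam=3, max_cuts=4)
print(sorted(best))  # → [2, 5]
```

The beam width bounds the frontier at every depth, so the search stays linear in the number of candidate cut points per step instead of enumerating all subsets.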
2. Three-dimensional garment part feature RBF neural network training
(1) Feature extraction
Eight different feature descriptors are chosen: the shape diameter function (A1), Euclidean distance (A2), Gaussian curvature (A3), average geodesic distance (A4), multi-resolution Reeb graph skeleton extraction (A5), shape distribution (A6), point feature histogram (PFH) descriptor (A7) and signature of histograms of orientations (SHOT) descriptor (A8). These features characterize and express the three-dimensional model from different angles.
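As an illustration of one of the eight descriptors, the shape distribution (A6) is commonly computed in the style of Osada's D2 descriptor: a histogram of distances between randomly sampled pairs of surface points. The numpy sketch below is illustrative only; the number of bins, the pair count and the point sampling are assumptions, not values from the patent:

```python
import numpy as np

def d2_shape_distribution(points, bins=16, pairs=10000, seed=0):
    """D2 shape distribution: normalised histogram of distances
    between random point pairs sampled from a surface.

    `points` is an (N, 3) array of points on the component surface.
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), size=pairs)
    j = rng.integers(0, len(points), size=pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()   # descriptor sums to 1

# Toy "component": points sampled on a unit sphere.
rng = np.random.default_rng(1)
p = rng.normal(size=(500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
desc = d2_shape_distribution(p)
print(desc.shape, round(float(desc.sum()), 6))  # (16,) 1.0
```

Because the descriptor is a normalised distance histogram, it is invariant to rigid motions of the component, which is what makes it usable for matching differently posed garment parts.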
Fifty sets of already-segmented three-dimensional garment models are selected, and the eight feature descriptors are extracted for each garment component, the components being the left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, front fly, collar and hem; the extracted features are denoted FLAi, FRAi, BLAi, BRAi, SLAi, SRAi, FAi, NAi and HAi respectively, where i = 1, 2, ..., 8;
(2) Classification and identification of garment model components based on the RBF neural network
The RBF neural network completes the classification of the three-dimensional garment model components. It is a three-layer neural network comprising an input layer, a hidden layer and an output layer; the transformation from input space to hidden space is nonlinear, while the transformation from hidden space to output space is linear.
The activation function of the hidden layer is the Gaussian radial basis function
φ_j(x) = exp( -||x - c_j||^2 / (2σ_j^2) ), j = 1, ..., P,
where c_j and σ_j are the center and width of the j-th hidden node. For the RBF neural network structure with P = 5 hidden nodes, the network output is
y(x) = Σ_{j=1}^{P} w_j φ_j(x),
where w_j are the hidden-to-output weights. The least-squares loss function is
E = (1/2) Σ_k ( d_k - y(x_k) )^2,
where d_k is the desired output for training sample x_k.
The eight extracted features of the nine garment components, namely the left front piece (FLAi), right front piece (FRAi), left back piece (BLAi), right back piece (BRAi), left sleeve (SLAi), right sleeve (SRAi), front fly (FAi), collar (NAi) and hem (HAi), are used as the respective input layers. The trained RBF models output the overall feature of each garment component, denoted FLT, FRT, BLT, BRT, SLT, SRT, FT, NT and HT respectively. By mapping the low-dimensional features into a high-dimensional space, the nonlinear relationship at the input layer is converted into a linear relationship at the output layer. Global feature matching between the overall component features and the target (unidentified) three-dimensional garment model completes the identification and classification of the garment components.
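The pipeline above (a nonlinear input-to-hidden mapping followed by a linear hidden-to-output mapping) can be sketched with Gaussian hidden units and output weights fitted by linear least squares. The toy 2-D data below merely stands in for the extracted component features; the center selection and σ are assumptions for illustration:

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian hidden-layer activations:
    phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, Y, centers, sigma):
    """Because the hidden-to-output map is linear, the output weights
    can be fitted in closed form by least squares."""
    Phi = rbf_features(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

# Two toy "component classes" as well-separated 2-D clusters,
# with one-hot targets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
Y = np.vstack([np.tile([1, 0], (30, 1)), np.tile([0, 1], (30, 1))])
centers = X[::12]                      # P = 5 hidden nodes
W = train_rbf(X, Y, centers, sigma=1.0)
pred = rbf_features(X, centers, 1.0) @ W
acc = float((pred.argmax(1) == Y.argmax(1)).mean())
print(acc)
```

The least-squares step is exactly the "linear transformation from hidden space to output space" named in the text; only the Gaussian feature map is nonlinear.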
3. Three-dimensional garment component segmentation
The three-dimensional garment model components identified in the previous step are segmented from the whole model. Co-segmentation is performed on the three-dimensional garment model: all sub-patches are pooled together, and the models are assumed to share the same number of cluster centers. Because the segmentation boundaries are noisy, they are optimized by fuzzy cuts, and the final segmentation result is obtained by a graph-cut method. The garment components are divided along the mesh at the parting lines; a parting line is defined at the underarm of the sleeve component, other decorative style lines are ignored, and only the structural parting lines are considered.
4. Template generation based on mechanics-based surface unfolding
After the three-dimensional garment model components are obtained, the 3D model must be converted into a 2D plane; each three-dimensional garment mesh component is unfolded into a two-dimensional garment template. The fabric type is set and the fabric's performance parameters are added, and the template is generated by the mechanics-based surface-unfolding method: the surface is simplified into a particle system formed by the vertices of the triangular mesh, and applying an appropriate force to each particle deforms the mesh accordingly, so that the surface is unfolded to generate the template.
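A minimal sketch of the spring-mass unfolding idea: every mesh edge becomes a spring whose rest length is its 3-D length, and 2-D particle positions are relaxed until the spring forces balance. The toy bent strip below is an assumption for illustration; in the patent, fabric parameters such as thickness and elastic modulus would enter through the spring stiffnesses, which are collapsed into a single constant here:

```python
import numpy as np

def flatten(verts3d, edges, iters=5000, k=0.5, step=0.2):
    """Unfold a triangulated surface patch into the plane with a
    spring-mass model (illustrative sketch, not the patent's solver)."""
    rest = {(a, b): np.linalg.norm(verts3d[a] - verts3d[b]) for a, b in edges}
    p = verts3d[:, :2].astype(float).copy()     # initial guess: drop z
    for _ in range(iters):
        f = np.zeros_like(p)
        for (a, b), r in rest.items():
            d = p[b] - p[a]
            L = np.linalg.norm(d) + 1e-12
            pull = k * (L - r) * d / L          # Hooke's law along the edge
            f[a] += pull
            f[b] -= pull
        p += step * f                           # damped explicit update
    return p

# Toy developable patch: a strip bent along a quarter circle.
t = np.linspace(0.0, np.pi / 2, 6)
verts = np.array([[np.sin(u), y, 1 - np.cos(u)] for u in t for y in (0.0, 1.0)])
edges = ([(2 * s, 2 * s + 1) for s in range(6)]        # rungs
         + [(2 * s, 2 * s + 2) for s in range(5)]      # rail at y = 0
         + [(2 * s + 1, 2 * s + 3) for s in range(5)]  # rail at y = 1
         + [(2 * s, 2 * s + 3) for s in range(5)])     # diagonals
flat = flatten(verts, edges)
# For a developable surface the planar edge lengths should match 3-D.
err = max(abs(np.linalg.norm(flat[b] - flat[a])
              - np.linalg.norm(verts[b] - verts[a])) for a, b in edges)
print(round(float(err), 3))
```

For a truly developable patch the residual tends to zero; for a doubly curved garment panel the equilibrium distributes the unavoidable strain, which is precisely why a mechanics-based model is used instead of a purely geometric unrolling.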
The above is only a specific embodiment of the present invention, but the technical features of the present invention are not limited thereto. Any simple changes, equivalent substitutions or modifications made on the basis of the present invention to solve the substantially same technical problems and achieve the substantially same technical effects are encompassed within the scope of the present invention.
Claims (4)
1. A three-dimensional-two-dimensional template generation method based on feature fusion and RBF network is characterized by comprising the following steps:
(1) Preprocessing a three-dimensional clothing model and obtaining three-dimensional clothing components by cutting; the three-dimensional clothing model preprocessing in step (1) comprises: a. preliminary over-segmentation, namely over-segmenting the three-dimensional clothing model into n sub-patches; b. beam-search region fusion, wherein the cut clothing components comprise a left front piece, a right front piece, a left back piece, a right back piece, a left sleeve, a right sleeve, a top fly, a collar, and a lower hem;
(2) Training a three-dimensional clothing component feature RBF neural network to complete the identification and classification of the three-dimensional clothing model components, comprising the following steps:
a. Feature extraction: eight different feature descriptors are selected, which describe and express the features of the three-dimensional clothing model from different angles; the eight feature descriptors are: shape diameter function descriptors, Euclidean distance descriptors, Gaussian curvature descriptors, average geodesic distance descriptors, multi-resolution Reeb graph skeleton extraction descriptors, shape distribution descriptors, point feature histogram (PFH) descriptors, and signature of histograms of orientations (SHOT) descriptors;
Fifty sets of tool-segmented three-dimensional clothing models are selected, and the eight feature descriptors are extracted for each clothing component, the components being the left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, top fly, collar, and lower hem; the extracted feature descriptors are denoted FLA_i, FRA_i, BLA_i, BRA_i, SLA_i, SRA_i, FA_i, NA_i, and HA_i respectively, where i = 1, 2, 3, 4, 5, 6, 7, 8;
b. Identifying and classifying the three-dimensional clothing model components based on the three-dimensional clothing component feature RBF neural network: the network is a three-layer neural network comprising an input layer, a hidden layer, and an output layer; the transformation from the input space to the hidden space is nonlinear, and the transformation from the hidden space to the output space is linear;
The feature descriptors FLA_i, FRA_i, BLA_i, BRA_i, SLA_i, SRA_i, FA_i, NA_i, and HA_i extracted from the left front piece, right front piece, left back piece, right back piece, left sleeve, right sleeve, top fly, collar, and lower hem are used as the input layer; the overall features of each clothing component, denoted FLT, FRT, BLT, BRT, SLT, SRT, FT, NT, and HT respectively, are output by the three-dimensional clothing component feature RBF neural network; by mapping the low-dimensional features to a high-dimensional space, the nonlinear relationship at the input layer is converted into a linear relationship at the output layer; global feature matching is performed between the overall features of the clothing components and an unidentified three-dimensional clothing model to complete the identification and classification of the three-dimensional clothing model components;
(3) Segmenting the three-dimensional clothing model components: the components identified and classified in the previous step are segmented from the whole three-dimensional clothing model;
(4) Completing template generation by a surface-flattening method based on a mechanical model: after the three-dimensional clothing model components are obtained, the three-dimensional clothing meshes are flattened into two-dimensional clothing templates.
2. The three-dimensional-two-dimensional template generation method based on feature fusion and RBF network as set forth in claim 1, wherein: in the three-dimensional clothing model component segmentation of step (3), co-segmentation is performed on the same three-dimensional clothing model, all sub-patches are pooled together, and the models are assumed to share the same number of cluster centers; the boundary is refined by fuzzy segmentation, and the final segmentation result is obtained with a graph-cut method; the clothing components are divided along mesh cut lines, a cut line is defined at the underarm of the sleeve-body part, other decorative style lines are ignored, and only structural cut lines are considered.
3. The three-dimensional-two-dimensional template generation method based on feature fusion and RBF network as set forth in claim 1, wherein: in step (4), the fabric type is set, the fabric properties are incorporated, and the template is generated by a surface-flattening method based on a mechanical model, namely: the surface is simplified into a particle system consisting of the vertices of the triangular mesh, and appropriate forces applied to each particle deform the triangular mesh so that the surface flattens and the template is generated.
4. The three-dimensional-two-dimensional template generation method based on feature fusion and RBF network as recited in claim 3, wherein: the fabric properties incorporated include elastic modulus and bending stiffness.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011441272.7A CN112419489B (en) | 2020-12-08 | 2020-12-08 | Three-dimensional-two-dimensional template generation method based on feature fusion and RBF network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112419489A CN112419489A (en) | 2021-02-26 |
CN112419489B true CN112419489B (en) | 2024-05-24 |
Family
ID=74776705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011441272.7A Active CN112419489B (en) | 2020-12-08 | 2020-12-08 | Three-dimensional-two-dimensional template generation method based on feature fusion and RBF network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112419489B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114663579A (en) * | 2022-02-14 | 2022-06-24 | 清华大学 | Twin three-dimensional model generation method and device, electronic device and storage medium |
CN115350482B (en) * | 2022-08-25 | 2023-12-12 | 浙江大学 | Watertight three-dimensional toy model opening method based on data driving |
CN116543134B (en) * | 2023-07-06 | 2023-09-15 | 金锐同创(北京)科技股份有限公司 | Method, device, computer equipment and medium for constructing digital twin model |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104933757A (en) * | 2015-05-05 | 2015-09-23 | 昆明理工大学 | Method of three-dimensional garment modeling based on style descriptor |
CN105653742A (en) * | 2014-11-10 | 2016-06-08 | 江苏中佑石油机械科技有限责任公司 | Clothes model building method in three-dimension simulation fitting system |
CN106022343A (en) * | 2016-05-19 | 2016-10-12 | 东华大学 | Fourier descriptor and BP neural network-based garment style identification method |
CN106327506A (en) * | 2016-08-05 | 2017-01-11 | 北京三体高创科技有限公司 | Probability-partition-merging-based three-dimensional model segmentation method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5584006B2 (en) * | 2010-03-31 | 2014-09-03 | 富士フイルム株式会社 | Projection image generation apparatus, projection image generation program, and projection image generation method |
Non-Patent Citations (2)
Title |
---|
Minimum-value boundary segmentation of triangular mesh models; Wang Zehao; Huang Changbiao; Lin Zhongwei; Journal of Computer-Aided Design & Computer Graphics (Issue 01); full text *
Research progress on garment style drawing recognition and pattern conversion technology; Li Tao; Du Lei; Huang Zhenhua; Jiang Yuping; Zou Fengyuan; Journal of Textile Research (Issue 08); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112419489B (en) | Three-dimensional-two-dimensional template generation method based on feature fusion and RBF network | |
KR102154470B1 (en) | 3D Human Hairstyle Generation Method Based on Multiple Feature Search and Transformation | |
Lee et al. | Intelligent mesh scissoring using 3d snakes | |
CN107742102B (en) | Gesture recognition method based on depth sensor | |
CN109325993B (en) | Saliency feature enhanced sampling method based on class octree index | |
CN103871100B (en) | Tree model reconstruction method based on point clouds and data driving | |
CN106022228B (en) | A kind of three-dimensional face identification method based on grid local binary patterns in length and breadth | |
CN110136246A (en) | Three-dimension Tree Geometric model reconstruction method based on class trunk point | |
CN101017575B (en) | Method for automatically forming 3D virtual human body based on human component template and body profile | |
CN101783016B (en) | Crown appearance extract method based on shape analysis | |
CN112288857A (en) | Robot semantic map object recognition method based on deep learning | |
CN109166145A (en) | A kind of fruit tree leaf growth parameter(s) extracting method and system based on cluster segmentation | |
CN109034131A (en) | A kind of semi-automatic face key point mask method and storage medium | |
CN112396655B (en) | Point cloud data-based ship target 6D pose estimation method | |
CN111311751A (en) | Three-dimensional clothes model reconstruction method based on deep neural network | |
CN115018982A (en) | Digital tree twinning method based on foundation laser radar point cloud | |
Ma et al. | Kinematic skeleton extraction from 3D articulated models | |
CN109887009A (en) | A kind of point cloud local matching process | |
CN114120389A (en) | Network training and video frame processing method, device, equipment and storage medium | |
CN108595649A (en) | The textile image search method of local invariant textural characteristics based on geometry | |
Achlioptas et al. | ChangeIt3D: Language-assisted 3D shape edits and deformations | |
CN106952267A (en) | Threedimensional model collection is divided into segmentation method and device | |
CN108108700A (en) | A kind of characteristic area recognition methods of the pig based on peg conversion | |
CN113111830A (en) | Grape vine winter pruning point detection algorithm | |
He et al. | Object recognition and recovery by skeleton graph matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||