CN112163593A - Method and system for rapidly matching human body model - Google Patents


Info

Publication number
CN112163593A
CN112163593A
Authority
CN
China
Prior art keywords
model
user
unit
matching
processing module
Prior art date
Legal status
Withdrawn
Application number
CN202010877291.8A
Other languages
Chinese (zh)
Inventor
梁耀灿
罗毅
刘雨
Current Assignee
Yyc Industrial Co ltd China
Original Assignee
Yyc Industrial Co ltd China
Priority date
Filing date
Publication date
Application filed by Yyc Industrial Co ltd China filed Critical Yyc Industrial Co ltd China
Priority to CN202010877291.8A
Publication of CN112163593A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • A HUMAN NECESSITIES
    • A41 WEARING APPAREL
    • A41H APPLIANCES OR METHODS FOR MAKING CLOTHES, e.g. FOR DRESS-MAKING OR FOR TAILORING, NOT OTHERWISE PROVIDED FOR
    • A41H1/00 Measuring aids or methods

Abstract

The invention discloses a method and a system for quickly matching a human body model. The method comprises the following steps: S100, collecting image information of a user; S110, distinguishing physical attributes of the user through a machine; S120, matching with a model library through a machine according to the physical attributes; S130, the model library provides one or more groups of models; S140, selecting the most similar model; S150, pushing the closest model to the user; S160, the user selects the needed collocation according to the pushed model and, after the collocation is finished, pushes it to a server. By adopting the described method and system, a human body 3D model can be determined for the user without dynamically generating one, and the system also runs at a higher speed.

Description

Method and system for rapidly matching human body model
Technical Field
The invention belongs to the field of rapid 3D model fitting and measurement, and particularly relates to a method and a system for rapidly matching a human body model.
Background
The electronic marketplace provides users with the ability to electronically buy and sell items, including clothing. Different garment manufacturers use different size standards, so the clothing a user orders may or may not fit. The user may return or exchange the ordered goods, but doing so involves some trouble and cost. For this reason, a method and system for fast 3D model fitting and anthropometry are proposed.
Disclosure of Invention
The present invention is directed to a method and system for fast matching human body models, so as to solve the problems mentioned in the background art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for quickly matching a human body model comprises the following steps:
s100, collecting image information of a user;
s110, distinguishing physical attributes of a user through a machine;
s120, matching with a model library through a machine according to the physical attributes;
s130, the model library provides one or more groups of models;
S140, selecting the most similar model;
s150, pushing the closest model to a user;
and S160, the user selects the needed collocation according to the pushed model and, after the collocation is finished, pushes it to the server.
Preferably, the match between the physical attribute of the user and the corresponding attribute of the model is made as part of a comparison of a feature vector of the user with a feature vector of the model, each feature vector comprising a height value.
Preferably, the method further comprises synthesizing a set of models based on population distributions of the physical attributes.
Preferably, each of the feature vectors of the user further includes a neck circumference value, a shoulder circumference value, a chest circumference value, a waist circumference value, and a hip circumference value.
Preferably, the image of the user comprises a depth map, and determining the physical property of the user is based on the depth map.
The invention also provides a system for rapidly matching the human body model, which comprises a central processing module; the central processing module is interactively connected with a server unit, a storage unit, an input unit, a communication unit, a hardware unit, and a display unit; the input unit comprises a mobile device and a camera; the server unit comprises an e-commerce server and a model fitting server; the model fitting server comprises an image processing unit; and the storage unit comprises a database and a memory.
Preferably, the hardware unit comprises a sensor device.
Preferably, the communication unit comprises a network switch, a signal receiver and a signal transmitter.
Preferably, the display unit includes a display screen.
Preferably, the input unit and the communication unit are interactively connected, and the server unit and the storage unit are interactively connected.
The invention has the technical effects and advantages that: compared with the prior art, the method and the system for rapidly matching the human body model have the following advantages that:
by employing the methods and systems described herein, a 3D model can be determined for a user without dynamically generating one, and the user can select a 3D model more quickly and with lower computing power usage.
Drawings
FIG. 1 is a flow chart of a method of the present invention for rapid mannequin matching;
FIG. 2 is a flow chart of a system for rapid mannequin matching according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings; it is obvious that the described embodiments are only some of the embodiments of the present invention, not all of them. The specific embodiments described herein are merely illustrative of the invention and are not intended to limit it. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The invention provides a method for quickly matching a human body model as shown in figure 1, which comprises the following steps:
s100, collecting image information of a user;
s110, distinguishing physical attributes of a user through a machine;
s120, matching with a model library through a machine according to the physical attributes;
s130, the model library provides one or more groups of models;
S140, selecting the most similar model;
s150, pushing the closest model to a user;
and S160, the user selects the needed collocation according to the pushed model and, after the collocation is finished, pushes it to the server.
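The steps S100 to S160 above can be sketched as a small matching pipeline. All function and field names below are illustrative assumptions (the patent does not specify an implementation), and attribute estimation is reduced to a placeholder:

```python
# Hypothetical sketch of steps S100-S160; names are illustrative, not from the patent.

def estimate_physical_attributes(image):
    # S110: placeholder -- a real system would derive attributes from a depth map.
    return {"height": image["height_estimate"]}

def query_model_library(library, attributes):
    # S120/S130: return candidate models; a real system would pre-filter the library.
    return library

def distance(attributes, model_attributes):
    # Simple one-dimensional distance on height, for illustration only.
    return abs(attributes["height"] - model_attributes["height"])

def match_body_model(image, library):
    attributes = estimate_physical_attributes(image)       # S110
    candidates = query_model_library(library, attributes)  # S120/S130
    # S140: select the most similar model; S150: this model is pushed to the user.
    return min(candidates, key=lambda m: distance(attributes, m["attributes"]))
```

With a two-model library of 160 cm and 175 cm models, a user estimated at 172 cm would be matched to the 175 cm model.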
Preferably, the match between the physical attribute of the user and the corresponding attribute of the model is made as part of a comparison of a feature vector of the user with a feature vector of the model, each feature vector comprising a height value.
By adopting the above technical scheme, a model can be matched according to the feature vectors of the user; because each feature vector comprises a height value, matching on the height value helps standardize the accuracy of the matching information.
Preferably, the method further comprises synthesizing a set of models based on population distributions of the physical attributes.
By adopting the above technical scheme, although the physical attribute information of each user is different, matching according to the population distribution of the attributes allows a set of models to be synthesized quickly and matched against the model library.
Preferably, each of the feature vectors of the user further includes a neck circumference value, a shoulder circumference value, a chest circumference value, a waist circumference value, and a hip circumference value.
By adopting the technical scheme, the model is generated according to the neck circumference value, the shoulder circumference value, the chest circumference value, the waist circumference value and the hip circumference value of the user, and the model is generated effectively and accurately.
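As a concrete illustration, the six measurements named above can be packed into a fixed-order feature vector and compared with a plain Euclidean distance. The distance metric is an assumption for illustration; the patent only states that the vectors are compared:

```python
import math

# Fixed measurement order for the feature vector (all values in cm).
FEATURES = ("height", "neck", "shoulder", "chest", "waist", "hip")

def feature_vector(measurements):
    """Pack a dict of the named body measurements into an ordered vector."""
    return [float(measurements[k]) for k in FEATURES]

def vector_distance(u, v):
    # Euclidean distance between two feature vectors (an illustrative choice).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```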
Preferably, the image of the user comprises a depth map, and determining the physical attribute of the user is based on the depth map.
By employing the above-described approach, the user's body measurements are extracted from the depth map using joint position information, which may correspond to actual joints (e.g., ankle, wrist, and elbow joints) or other important body positions (e.g., the head and torso).
The invention also provides a system for rapidly matching human body models, as shown in fig. 2, which comprises a central processing module; the central processing module is interactively connected with a server unit, a storage unit, an input unit, a communication unit, a hardware unit, and a display unit; the input unit comprises a mobile device and a camera; the server unit comprises an e-commerce server and a model fitting server; the model fitting server comprises an image processing unit; and the storage unit comprises a database and a memory.
Preferably, the hardware unit comprises a sensor device.
By adopting the technical scheme, the sensor equipment can sense the measured information and convert the sensed information into an electric signal or other information in a required form according to a certain rule for output, so that the requirements of information transmission, processing, storage, display, recording, control and the like are met.
Preferably, the communication unit comprises a network switch, a signal receiver and a signal transmitter.
By adopting the technical scheme, the network exchanger is used for providing a network for the system, the signal receiver is used for receiving image information transmitted by a user, and the signal transmitter is used for transmitting the optimal model information matched by the system to the user.
Preferably, the display unit includes a display screen.
By adopting the technical scheme, the display screen is used for displaying the image information processed by the model fitting server and displaying the pushed closest group or groups of models.
Preferably, the input unit and the communication unit are connected with each other, and the server unit and the storage unit are connected with each other.
By adopting the technical scheme, the user uploads information through the input unit, the communication unit receives the information uploaded by the input unit, the information is processed through the central processing module and fed back to the server unit, and the server unit processes the image and matches the image with the model table in the storage unit to obtain the optimal model.
In this embodiment: the network generated by the network switch may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof, and the network may include one or more portions that form a private network, a public network (e.g., the Internet), or any suitable combination thereof.
In this embodiment: the units in the model fitting server are all configured to communicate with each other (e.g., via a bus, shared memory, or a switch); any one or more of the units described herein may be implemented using hardware (e.g., dedicated hardware components) or a combination of hardware and software (e.g., a processor configured by software); further, any two or more of these units may be combined into a single unit, and the functions described herein for a single unit may be subdivided among multiple units.
In this embodiment: the database is a data storage resource and may store data constructed as text files, tables, spreadsheets, relational databases, and also network databases stored in a network.
In this embodiment: the communication unit is configured to transmit and receive data, for example, the communication unit may receive sensor data through a network and transmit the received data to the image processing unit, and as another example, the image matching unit may identify a model to be used for a user, and information about the user. The model may be sent by the communication unit to the e-commerce server over a network.
In this embodiment: the image processing unit is configured to receive and process image data, each image in the image data may be a two-dimensional image, a three-dimensional image, a depth image, an infrared image, a binocular image, or any suitable combination thereof, e.g., an image may be received from a camera; the image processing unit may process the image data to generate a depth map of a person or object in the image.
In this embodiment: the image processing unit is further configured to generate a composite model, e.g. the height and weight distribution of the population may be used to generate parameters for model creation; these parameters may be used with a model generation tool to generate a set of synthetic models having a body shape in a distribution corresponding to the actual height and weight distribution of the population.
In this embodiment: the image matching unit is configured to match the parameters identified by the image processing unit with the generated composite model, for example if the user reports their height and weight, and the user's image shows shoulders and hip width relative to the height, there is substantial information that can be used to select the user's three-dimensional model.
In this embodiment: the storage unit is used for storing and retrieving data generated and used by the image generating unit, the image processing unit, and the image matching unit; for example, the model generated by the image generating unit can be stored by the storage unit, and the information on the matching generated by the image matching unit can be retrieved by the image matching unit or stored by the storage unit. The e-commerce server may alternatively request the model for the user (e.g., by providing a user identifier); the model may then be retrieved from the memory by the storage unit and sent over the network using the communication unit.
In this embodiment: the input unit, the camera, and the communication unit, all of which are configured to communicate with each other (e.g., via a bus, a shared memory, or a switch), any one or more of the units described herein may be implemented using hardware (e.g., a processor of a machine) or a combination of hardware and software, e.g., any of the units described herein may configure a processor to perform the operations described herein for that unit, further, any two or more of these units may be combined into a single unit, and the functions described herein for a single unit may be subdivided among multiple units.
In this embodiment: the input unit is configured to receive an input from a user via a user interface. For example, the user may input their height, weight, and gender into the input unit, configure the camera, select items appropriate to the model, and so on, and in some example embodiments, the height, weight, and gender are automatically determined based on the user's images, rather than being explicitly input by the user.
In this embodiment: the camera is configured to capture image data; for example, an image may be received from a camera, a depth image may be received from an infrared camera, a pair of images may be received from a binocular camera, and so on.
In this embodiment: the communication unit is further configured to transmit data received by the input unit or the camera to the model fitting server; e.g., the input unit may receive an input containing the height and weight of the user, and the communication unit may send this information about the user to the model fitting server for storage in a database accessible by the image matching unit.
In this embodiment: based on one or more accessed population distributions, a set of models may be generated whose attributes follow a distribution similar to that of the population; thus, more models with near-average values will be generated and fewer models with less common values. Models can be generated using MakeHuman, an open-source Python framework intended for prototyping realistic 3D human models. MakeHuman contains a standard rigged human mesh that can be used to create realistic characters based on the normalized attributes of a particular virtual character.
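A minimal sketch of this population-driven synthesis follows, assuming normally distributed height and weight. The means, standard deviations, and the `generate_model` hook are illustrative stand-ins for a real generation tool such as MakeHuman, whose actual API is not shown here:

```python
import random

def generate_model(height, weight):
    # Placeholder for a call into a model-generation tool (e.g. MakeHuman).
    return {"height": height, "weight": weight}

def synthesize_population(n, height_mu=170.0, height_sigma=7.0,
                          weight_mu=70.0, weight_sigma=12.0, seed=0):
    """Sample n (height, weight) pairs from assumed population distributions
    and generate one synthetic model per sample, so model attributes follow
    the population's distribution."""
    rng = random.Random(seed)
    models = []
    for _ in range(n):
        h = rng.gauss(height_mu, height_sigma)
        w = rng.gauss(weight_mu, weight_sigma)
        models.append(generate_model(height=h, weight=w))
    return models
```

Because the samples are drawn from the population distribution, near-average bodies are generated often and extreme bodies rarely, as the paragraph above describes.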
In this embodiment: when a single depth channel of a single image is used, only a frontal view of the user may be obtained. To generate a 3D view, ellipses are fitted to points on one or more cross-sectional planes to obtain whole-body measurements. When multiple images are available, ellipse fitting may also be used, and the points of each ellipse may be defined as the set of points p in the point cloud whose perpendicular distance from the cross-sectional plane is less than a threshold.
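The slice-and-fit idea can be illustrated as follows: keep the points whose distance from the cross-sectional plane is below a threshold, take the slice extents as ellipse half-axes, and estimate the girth with Ramanujan's perimeter approximation. This is a simplification of a true least-squares ellipse fit, and the threshold and axis conventions are assumptions:

```python
import math

def slice_points(points, y0, threshold=1.0):
    """Keep (x, z) coordinates of points within `threshold` of the plane y = y0."""
    return [(x, z) for (x, y, z) in points if abs(y - y0) < threshold]

def ellipse_circumference(a, b):
    # Ramanujan's first approximation for the perimeter of an ellipse.
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def estimate_girth(points, y0, threshold=1.0):
    sl = slice_points(points, y0, threshold)
    xs = [p[0] for p in sl]
    zs = [p[1] for p in sl]
    a = (max(xs) - min(xs)) / 2.0   # half-width of the slice
    b = (max(zs) - min(zs)) / 2.0   # half-depth of the slice
    return ellipse_circumference(a, b)
```

For a circular slice of radius r, the estimate reduces to 2*pi*r, as expected.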
In this embodiment: the image matching unit retrieves the model that best matches the user data, and feature vectors can be used to perform the comparison between the user data and the set of models. For example, the feature vector may be divided into three groups of features with different weights: global body shape, gender, and local body shape. Once the feature vectors are created, the image matching unit may perform a nearest neighbor search to find the closest match in the synthetic dataset.
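A weighted nearest-neighbor search over such grouped feature vectors might look like the following; the weight values and the way features map to groups are illustrative assumptions:

```python
import math

def weighted_distance(u, v, weights):
    """Euclidean distance with a per-feature weight (e.g. heavier weights for
    the global-body-shape group than for local shape features)."""
    return math.sqrt(sum(w * (a - b) ** 2 for a, b, w in zip(u, v, weights)))

def nearest_model(user_vec, models, weights):
    """models: list of (model_id, feature_vector) pairs; returns the id of
    the closest model under the weighted distance."""
    return min(models, key=lambda m: weighted_distance(user_vec, m[1], weights))[0]
```

Weighting the first feature more strongly pulls the search toward models that agree on it, even when another model is equally close in raw coordinates.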
In this embodiment: the matching model may be used for any of a variety of purposes. For example, a size fit chart may be accessed that indicates which body measurement or set of body measurements corresponds to a particular size of an article of clothing; by reading the appropriate body measurements from the model, the correct garment size for the user can be determined. As another example, if the measured structure of a garment is known, a model of the garment can be generated and presented on the matching model in the correct proportions, allowing the user to see how the garment will fit. As another example, the matching model can be placed in a game or virtual reality environment to represent the user.
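Reading a size from a fit chart then reduces to a simple lookup over the measurement read from the matched model; the breakpoints below are hypothetical, not taken from any real size chart:

```python
import bisect

# Hypothetical chest-measurement size chart: upper bound of each size, in cm.
CHEST_BREAKS = [88, 96, 104, 112]
SIZES = ["S", "M", "L", "XL", "XXL"]

def garment_size(chest_cm):
    """Map a chest circumference read from the matched model to a size label."""
    return SIZES[bisect.bisect_left(CHEST_BREAKS, chest_cm)]
```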
In this embodiment: hardware units may be implemented mechanically, electronically, or in any suitable combination thereof. For example, a hardware unit may comprise dedicated circuitry or logic that is permanently configured to perform certain operations, such as a special-purpose processor like a field-programmable gate array (FPGA) or an ASIC. A hardware unit may also comprise programmable logic or circuitry that is temporarily configured by software to perform certain operations, for example software contained within a general-purpose processor or other programmable processor. It should be understood that whether a hardware unit is implemented mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry is a design decision.
The working principle is as follows: by employing the methods and systems described herein, 3D models can be determined for a user without dynamically generating them; the user can also select a 3D model more quickly and with lower computing power usage, and the effort a user spends in ordering items of interest can likewise be reduced.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (10)

1. A method for rapidly matching a human body model is characterized in that: the method comprises the following steps:
s100, collecting image information of a user;
s110, distinguishing physical attributes of a user through a machine;
s120, matching with a model library through a machine according to the physical attributes;
s130, the model library provides one or more groups of models;
S140, selecting the most similar model;
s150, pushing the closest model to a user;
and S160, the user selects the needed collocation according to the pushed model and, after the collocation is finished, pushes it to the server.
2. The method for rapid mannequin matching according to claim 1, wherein: a match between the physical attribute of the user and the corresponding attribute of the model is made as part of a comparison of the feature vector of the user and the feature vector of the model, each feature vector comprising a height value.
3. The method for fast human model matching according to claim 1, wherein: the method further comprises synthesizing a set of models based on population distributions of the attributes.
4. The method for fast human model matching according to claim 1, wherein: each of the user's feature vectors further includes a neck circumference value, a shoulder circumference value, a chest circumference value, a waist circumference value, and a hip circumference value.
5. The method for fast human model matching according to claim 1, wherein: the image of the user includes a depth map, and determining the physical attributes of the user is based on the depth map.
6. A system for rapid mannequin matching according to claim 1, wherein: the system comprises a central processing module; the central processing module is interactively connected with a server unit, a storage unit, an input unit, a communication unit, a hardware unit, and a display unit; the input unit comprises a mobile device and a camera; the server unit comprises an e-commerce server and a model fitting server; the model fitting server comprises an image processing unit; and the storage unit comprises a database and a memory.
7. The system for rapid mannequin matching according to claim 6, wherein: the hardware unit includes a sensor device.
8. The system for rapid mannequin matching according to claim 6, wherein: the communication unit includes a network switch, a signal receiver, and a signal transmitter.
9. The system for rapid mannequin matching according to claim 6, wherein: the display unit includes a display screen.
10. The system for rapid mannequin matching according to claim 6, wherein: the input unit is interactively connected with the communication unit, and the server unit is interactively connected with the storage unit.
CN202010877291.8A 2020-08-27 2020-08-27 Method and system for rapidly matching human body model Withdrawn CN112163593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010877291.8A CN112163593A (en) 2020-08-27 2020-08-27 Method and system for rapidly matching human body model


Publications (1)

Publication Number Publication Date
CN112163593A 2021-01-01

Family

ID=73859806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010877291.8A Withdrawn CN112163593A (en) 2020-08-27 2020-08-27 Method and system for rapidly matching human body model

Country Status (1)

Country Link
CN (1) CN112163593A (en)

Similar Documents

Publication Publication Date Title
US11403866B2 (en) Method, medium, and system for fast 3D model fitting and anthropometrics using synthetic data
US10657709B2 (en) Generation of body models and measurements
US20210241364A1 (en) Digital wardrobe
US11321769B2 (en) System and method for automatically generating three-dimensional virtual garment model using product description
CN104679831B (en) Method and device for matching human body model
US20070032898A1 (en) Method and apparatus for identifying vitual body profiles
US10813715B1 (en) Single image mobile device human body scanning and 3D model creation and analysis
US20210134042A1 (en) Crowdshaping Realistic 3D Avatars with Words
US20190026810A1 (en) Highly Custom and Scalable Design System and Method for Articles of Manufacture
US20190026397A1 (en) Highly Custom and Scalable Design System and Method for Articles of Manufacture
US10818062B2 (en) Crowdshaping realistic 3D avatars with words
US20190026809A1 (en) Highly Custom and Scalable Design System and Method for Articles of Manufacture
CN108960985A (en) Body parameter generation method and online shopping item recommendation method based on image or video
CN108537887A (en) Sketch based on 3D printing and model library 3-D view matching process
CN105654555A (en) Virtual wardrobe system based on mobile terminal
Kwon et al. Optimal camera point selection toward the most preferable view of 3-d human pose
CN106773050A (en) A kind of intelligent AR glasses virtually integrated based on two dimensional image
CN104123655A (en) Simulation fitting system
CN115631322B (en) User-oriented virtual three-dimensional fitting method and system
CN112163593A (en) Method and system for rapidly matching human body model
WO2015011678A1 (en) Method for determining a fitting index of a garment based on anthropometric data of a user, and device and system thereof
Hamad et al. New human body shape descriptor based on anthropometrics points
CN109144452A (en) A kind of naked eye 3D display system and method based on 3D MIcrosope image
Paquet et al. Anthropometric calibration of virtual mannequins through cluster analysis and content-based retrieval of 3-D body scans
CN109393614A (en) A kind of software tool for dimensional measurement of cutting the garment according to the figure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210101