CN117234342B - Method and equipment for generating virtual model based on mannequin model - Google Patents

Method and equipment for generating virtual model based on mannequin model

Info

Publication number: CN117234342B (application CN202311517504.6A)
Authority: CN (China)
Other versions: CN117234342A
Legal status: Active (granted)
Prior art keywords: data, model, mannequin, features
Inventors: Wang Wenfeng (王文峰), Li Huijuan (李慧娟), Wen Long (温龙), Yang Zhen (杨振), Tang Xingxing (汤星星), Liu Shumei (刘淑梅)
Assignees: Beijing Golden Partner Technology Co., Ltd.; Beijing Jingpaidang Technology Co., Ltd.

Classifications

    • Y02P90/30 — Computing systems specially adapted for manufacturing (Y02P: climate change mitigation technologies in the production or processing of goods)
Abstract

The invention discloses a method and equipment for generating a virtual model based on a mannequin model, and relates to the technical field of data processing. The method comprises the following steps: collecting structured data, unstructured data and semi-structured data; extracting first data features and second data features from the structured, unstructured and semi-structured data; fusing the first and second data features, inputting the fused features into a machine learning model, and outputting a mannequin model; calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model by using this feature loss function; and obtaining user skeletal muscle data by using a depth camera and a human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model. By constructing the mannequin model through analysis of multi-source data, the invention improves the accuracy of the mannequin model and can output a virtual model that is closer to the real human body.

Description

Method and equipment for generating virtual model based on mannequin model
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for generating a virtual model based on a mannequin model.
Background
The mannequin is a human model manufactured according to human body proportions and is commonly used in clothing design, clothing teaching, garment tailoring and manufacture, industrial garment inspection, and the like. Common mannequins can be divided into fitting mannequins, display mannequins, and special mannequins for three-dimensional draping.
The existing mannequin model is generally a human body model preset by a designer; when it is subsequently fused with a live human image, large positional discrepancies easily lead to poor matching.
Disclosure of Invention
The invention provides a method for generating a virtual model based on a mannequin model, which comprises the following steps:
collecting, from different sources, structured data, unstructured data and semi-structured data required for mannequin model construction;
extracting first data features from the structured data and extracting second data features from the unstructured data and the semi-structured data;
fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model by using this feature loss function;
and obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
A method of generating a virtual model based on a mannequin model as described above, wherein a regular expression is used to extract first data features from structured data.
The method for generating the virtual model based on the mannequin model as described above, wherein knowledge extraction of the unstructured data and the semi-structured data is performed based on supervised learning to extract the second data features.
The method for generating the virtual model based on the mannequin model as described above, wherein the knowledge extraction of the unstructured data and the semi-structured data based on supervised learning, extracting the second data features, specifically comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
The method for generating the virtual model based on the mannequin model as described above further comprises: shooting multi-angle human body composition images by using the human body composition analysis camera, and correcting the mannequin model by using the multi-angle human depth images and the multi-angle body composition images, which specifically comprises: performing primary matching on the multi-angle human depth images, correcting the multi-angle target images according to the primary matching result, and matching the corrected target images with the digital mannequin images.
The invention also provides a device for generating a virtual model based on the mannequin model, which comprises: the system comprises a data acquisition module, a mannequin model construction module, a model correction module and a virtual model generation module;
the data acquisition module is used for acquiring structured data, unstructured data and semi-structured data required by the construction of the mannequin model from different sources;
the mannequin model building module is used for extracting the first data features from the structured data and the second data features from the unstructured and semi-structured data; fusing the first and second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
the model correction module is used for calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model by using this feature loss function;
the virtual model generation module is used for obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
The present invention also provides an electronic device, comprising: at least one memory and at least one processor;
the memory is used for storing one or more program instructions;
and the processor is used for executing the one or more program instructions to perform the method of generating a virtual model based on a mannequin model as described in any one of the preceding aspects.
The beneficial effects achieved by the invention are as follows: by constructing the mannequin model through analysis of multi-source data, the invention improves the accuracy of the mannequin model and can output a virtual model that is closer to the real human body.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them.
Fig. 1 is a flowchart of a method for generating a virtual model based on a mannequin model according to an embodiment of the present invention.
Detailed Description
The invention is further described below in connection with specific embodiments, so that the technical means, creative features, objectives, and effects of the invention are easy to understand.
Example 1
As shown in fig. 1, a first embodiment of the present invention provides a method for generating a virtual model based on a mannequin model, including:
step 110, collecting structured data, unstructured data and semi-structured data required by mannequin model construction of different sources;
In order to construct the most suitable mannequin model, the present application collects data for constructing the mannequin model from various sources, such as European-standard, Japanese-standard, Korean-standard, and Chinese national-standard human skeletal muscle data. The data may be obtained by means such as questionnaires, data descriptions, and depth-camera capture, and is divided into structured data, unstructured data, and semi-structured data.
Step 120, extracting first data features from the structured data, and extracting second data features from the unstructured data and the semi-structured data.
Since structured data has a fixed structural form, the present application preferably uses regular expressions to extract the first data features from the structured data. For unstructured data and semi-structured data, which have no fixed format, the application preferably performs knowledge extraction based on supervised learning to extract the second data features.
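As a concrete illustration of the regular-expression step, the sketch below extracts key-value features from a fixed-format record; the record layout and field names are hypothetical, not taken from the application:

```python
import re

# Hypothetical fixed-format structured record, e.g. "region=EU;shoulder_cm=42.5"
RECORD_RE = re.compile(r"(?P<key>\w+)=(?P<value>[^;]+)")

def extract_first_features(record):
    """Extract (field, value) pairs from a fixed-format structured record."""
    features = {}
    for m in RECORD_RE.finditer(record):
        value = m.group("value")
        # Numeric fields become floats; everything else stays a string.
        if re.fullmatch(r"-?\d+(\.\d+)?", value):
            features[m.group("key")] = float(value)
        else:
            features[m.group("key")] = value
    return features
```

For example, `extract_first_features("region=EU;shoulder_cm=42.5")` yields `{"region": "EU", "shoulder_cm": 42.5}`.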
Knowledge extraction of the unstructured and semi-structured data based on supervised learning, extracting the second data features, specifically includes: constructing a knowledge graph and extracting entity pairs with target relations from it; extracting attributes, relations and entities from the unstructured and semi-structured data; performing entity alignment and entity disambiguation on the attributes, relations and entities, followed by quality evaluation; extracting sentences containing the entity pairs from the unstructured and semi-structured data as training data and training a supervised learning model; and finally inputting the unstructured and semi-structured data into the trained supervised learning model, which outputs the extracted second data features.
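The training-data step — collecting sentences that mention both entities of a pair with a known target relation — can be sketched as follows (the sentence splitting and matching are deliberately naive, and the entity names are illustrative):

```python
def collect_training_sentences(text, entity_pairs):
    """Return (sentence, entity_pair) samples: sentences that mention both
    entities of a pair with a known target relation, to be used as
    training data for the supervised extraction model."""
    samples = []
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    for sentence in sentences:
        for pair in entity_pairs:
            if pair[0] in sentence and pair[1] in sentence:
                samples.append((sentence, pair))
    return samples
```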
The data features of interest extracted from the structured, unstructured, and semi-structured data include skeletal features (including but not limited to the shoulder, arm, spine, pelvis, and leg regions) and muscle features (including but not limited to the upper limbs, chest, abdomen, buttocks, and legs); the mannequin model is trained with these features.
The first data features extracted from the structured data by the above feature extraction method are $F_1 = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $x$ and $y$ denote skeletal features and muscle features respectively, $(x_1, y_1)$ represents the European-standard human skeletal muscle data, $(x_2, y_2)$ represents the Japanese-standard human skeletal muscle data, ..., and $(x_n, y_n)$ represents the $n$-th human skeletal muscle data. The second data features extracted from the unstructured and semi-structured data are $F_2 = \{(x'_1, y'_1), (x'_2, y'_2), \ldots, (x'_n, y'_n)\}$, where $(x'_1, y'_1)$ represents the European-standard human skeletal muscle data, $(x'_2, y'_2)$ represents the Japanese-standard human skeletal muscle data, ..., and $(x'_n, y'_n)$ represents the $n$-th human skeletal muscle data. The first data features and the second data features are of the same data type, but their data values differ.
Step 130, fusing the first data features and the second data features, inputting the fused features into the machine learning model, and outputting a preliminary mannequin model.
The structured data features and the unstructured data features are fused according to the formula $(X_i, Y_i) = w_i\big(\alpha (x_i, y_i) + \beta (x'_i, y'_i)\big)$, where $F = \{(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\}$ is the fused feature set, $(X_1, Y_1)$ represents the fused European-standard human skeletal muscle data, $(X_2, Y_2)$ represents the fused Japanese-standard human skeletal muscle data, ..., and $(X_n, Y_n)$ represents the fused $n$-th human skeletal muscle data; $w_i$ is the influence factor of the $i$-th feature data on the construction of the mannequin model, $\alpha$ is the influence weight of the structured data features on the construction of the mannequin model, and $\beta$ is the influence weight of the unstructured and semi-structured data features on the construction of the mannequin model. The fused features are input into the machine learning model, and a preliminary mannequin model is output.
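A minimal numeric sketch of this fusion step, combining each pair of first and second data features with a per-feature influence factor and the two source weights (the function name, data layout, and exact combination are assumptions reconstructed from the description):

```python
def fuse_features(f1, f2, w, alpha, beta):
    """Fuse paired (skeletal, muscle) features from two sources:
    each fused pair is w_i * (alpha * f1_i + beta * f2_i),
    where alpha + beta = 1 weights the structured vs. unstructured sources."""
    assert abs(alpha + beta - 1.0) < 1e-9, "source weights must sum to 1"
    return [
        tuple(wi * (alpha * a + beta * b) for a, b in zip(p1, p2))
        for wi, p1, p2 in zip(w, f1, f2)
    ]
```

For example, fusing `[(1.0, 2.0)]` and `[(3.0, 4.0)]` with `w = [1.0]` and equal source weights gives `[(2.0, 3.0)]`.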
Step 140, calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model by using this feature loss function;
When data features are extracted from the collected structured, unstructured and semi-structured data, a certain feature loss occurs; moreover, because the first data features and the second data features represent different data forms of the same model, a certain feature loss also exists between the two sets of data features.
Specifically, the feature loss function between the first and second data features is calculated according to the formula $L(F_1, F_2) = \eta\,(\lambda_1 \|F_1\| + \lambda_2 \|F_2\|)$, where $F_1$ is the first data feature set and $F_2$ is the second data feature set; $\lambda_1$ and $\lambda_2$ are the correction coefficients of the feature loss function for the first and second data features respectively, with $0 < \lambda_1 < 1$, $0 < \lambda_2 < 1$, and $\lambda_1 + \lambda_2 = 1$. $\eta$ is the feature loss rate, $\eta = \mu_1 \|F_1 - F\| + \mu_2 \|F_2 - F\|$, where $\|\cdot\|$ denotes the norm, i.e. the square root of the sum of the squares of the elements of a data feature set; $\|F_1 - F\|$ is the loss between the first data features and the fused features, $\mu_1$ is the influence weight of that loss on the feature loss rate, $\|F_2 - F\|$ is the loss between the second data features and the fused features, and $\mu_2$ is the influence weight of that loss on the feature loss rate.
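A pure-Python sketch of the feature loss rate — the weighted norms of the differences between each source's features and the fused features (variable names and toy values are illustrative):

```python
import math

def feature_set_norm(fs):
    """Norm of a feature set: square root of the sum of squares of every element."""
    return math.sqrt(sum(x * x for pair in fs for x in pair))

def feature_loss_rate(f1, f2, fused, mu1, mu2):
    """Loss rate = mu1 * ||F1 - F|| + mu2 * ||F2 - F||, where F is the fused set."""
    diff1 = [tuple(a - b for a, b in zip(p, q)) for p, q in zip(f1, fused)]
    diff2 = [tuple(a - b for a, b in zip(p, q)) for p, q in zip(f2, fused)]
    return mu1 * feature_set_norm(diff1) + mu2 * feature_set_norm(diff2)
```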
Step 150, obtaining user skeletal muscle data by using a depth camera and a human body composition analysis camera, inputting the user skeletal muscle data into a trained mannequin model, and generating a virtual model;
and shooting multi-angle human body depth images by using a depth camera, generating a motion history point cloud by using a depth image sequence, extracting global features, extracting relative displacement features and relative distance features of human body bone joints from three-dimensional bone information, normalizing to obtain user bone muscle data, and analyzing the camera by using human body components to obtain the user muscle data. User skeletal muscle data is input into a mannequin model to generate a virtual model, a multi-angle human body depth image is shot by using a depth camera, a multi-angle human body component image is shot by using a human body component analysis camera, and correction of the mannequin model is performed by using the multi-angle human body depth image and the multi-angle human body component image.
The correction of the mannequin model specifically comprises the following steps:
and performing primary matching on the multi-angle human body depth image and the multi-angle human body component image.
Specifically, skeletal feature point coordinates are calibrated on the multi-angle human depth images, and mannequin feature points are calibrated in the multi-angle digital mannequin images: a designated human body part in the digital mannequin image serves as a mannequin feature point, and its position in the image gives the feature point coordinates. The skeletal feature points and mannequin feature points have a correspondence, and aligning the corresponding feature points one by one completes the primary matching of each multi-angle depth image with the corresponding multi-angle digital mannequin image.
Likewise, muscle feature point coordinates are calibrated on the multi-angle body composition images, and mannequin feature points are calibrated in the multi-angle digital mannequin images: a designated human body part in the digital mannequin image serves as a mannequin feature point, and its position in the image gives the feature point coordinates. The muscle feature points and mannequin feature points have a correspondence, and aligning the corresponding feature points one by one completes the primary matching of each multi-angle body composition image with the corresponding multi-angle digital mannequin image.
The multi-angle target images are then corrected according to the primary matching result, and the corrected target images are matched with the digital mannequin images.
Specifically, it is checked whether the mannequin feature point coordinates after primary matching coincide with the skeletal muscle feature point coordinates, and the number of coinciding coordinate pairs is counted. If this number is smaller than a first specified threshold, the target image is corrected as a whole: the image is uniformly scaled by a certain ratio, and after scaling it is judged whether the number of coinciding mannequin and skeletal muscle feature point coordinates has changed. If the number of coinciding coordinates has increased beyond the first specified threshold, local correction of the image is then performed, and the corrected target image is matched with the digital mannequin image.
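The threshold-based correction above can be sketched as follows; the coincidence tolerance, scaling factor, and return convention are illustrative assumptions:

```python
def count_overlaps(model_pts, body_pts, tol):
    """Count mannequin feature points that coincide (within tol, per axis)
    with the corresponding skeletal-muscle feature points."""
    return sum(
        1 for p, q in zip(model_pts, body_pts)
        if all(abs(a - b) <= tol for a, b in zip(p, q))
    )

def correct_target_image(model_pts, body_pts, threshold, scale=1.1, tol=1.0):
    """If fewer than `threshold` points coincide, uniformly scale the target
    image's feature points; return the (possibly scaled) points and whether
    the coincidence count now reaches the threshold (local correction would
    follow in the full method)."""
    if count_overlaps(model_pts, body_pts, tol) >= threshold:
        return model_pts, True
    scaled = [tuple(scale * a for a in p) for p in model_pts]
    return scaled, count_overlaps(scaled, body_pts, tol) >= threshold
```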
Example two
The second embodiment of the invention provides a device for generating a virtual model based on a mannequin model, which comprises: the system comprises a data acquisition module 21, a mannequin model construction module 22, a model correction module 23 and a virtual model generation module 24;
the data acquisition module 21 is used for acquiring structured data, unstructured data and semi-structured data required by the construction of the mannequin model from different sources;
The mannequin model building module 22 is used for extracting the first data features from the structured data and the second data features from the unstructured and semi-structured data; fusing the first and second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
since structured data has a fixed structural form, it is preferable to extract structured data features from structured data using regular expressions. For unstructured data and semi-structured data without fixed formats, the application preferably performs knowledge extraction of the unstructured data and the semi-structured data based on supervised learning to extract the second data features.
Knowledge extraction of unstructured data is performed based on supervised learning, and second data features are extracted, specifically including: constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, specifically extracting attributes, relations and entities from unstructured data and semi-structured data, performing entity alignment and entity disambiguation on the attributes, the relations and the entities, and performing quality evaluation. Extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model; the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
Data features of interest extracted from structured, unstructured, and semi-structured data include skeletal features, but are not limited to shoulder, arm, spine, pelvis, and leg regions, and muscle features, including but not limited to upper limb, chest, abdomen, buttocks, and legs, with which a mannequin model is trained.
The first data features extracted from the structured data by the feature extraction method are as followsWherein x, y represent bone and muscle characteristics, respectively,/->Representing European style human skeletal muscle data, < >>Representing daily version of human skeletal muscle data, … …, ->Represents the nth human skeletal muscle data. The second data extracted from the unstructured data and the semi-structured data is characterized byWherein->Representing European style human skeletal muscle data, < >>Representing daily version of human skeletal muscle data, … …, ->Represents the nth human skeletal muscle data. The first data feature and the second data feature are the same data type, but the data is different.
According to the formulaFusing the structured data features and the unstructured data features, wherein +.>In order to fuse the features of the features,,/>representing the fused European style human skeletal muscle data,representing the daily version of human skeletal muscle data, … …, < >>Representing the nth human skeletal muscle data after fusion; />Table for representing ith characteristic dataInfluence factor of module construction->Representing the impact weight of the structured data features on the construction of a mannequin model, < >>And (5) representing the influence weight of unstructured data and semi-structured data characteristics on the construction of the mannequin model. And inputting the fusion characteristics into a machine model, and outputting a preliminary mannequin model.
The model correction module 23 is configured to calculate a feature loss function between the first data features and the second data features, and to correct the mannequin model using this feature loss function;
The virtual model generation module 24 is configured to obtain user skeletal muscle data using the depth camera and the body composition analysis camera, input the user skeletal muscle data into the trained mannequin model, and generate a virtual model.
Corresponding to the above embodiments, an embodiment of the present invention provides a computer storage medium, including: at least one memory and at least one processor;
the memory is used for storing one or more program instructions;
a processor for executing one or more program instructions for performing a method of generating a virtual model based on a mannequin model.
In accordance with the foregoing embodiments, a computer-readable storage medium is provided that contains one or more program instructions to be executed by a processor to perform the method for generating a virtual model based on a mannequin model.
The disclosed embodiments provide a computer readable storage medium having stored therein computer program instructions that, when executed on a computer, cause the computer to perform a method of generating a virtual model based on a mannequin model as described above.
In the embodiment of the invention, the processor may be an integrated circuit chip with signal processing capability. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules within a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The processor reads the information in the storage medium and performs the steps of the above method in combination with its hardware.
The storage medium may be memory, for example, may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory.
The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
The storage media described in embodiments of the present invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the present invention may be implemented in a combination of hardware and software. When implemented in software, the corresponding functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
The foregoing embodiments illustrate the general principles of the present invention in further detail and are not to be construed as limiting its scope; any modifications, equivalent replacements, improvements, and the like made based on the teachings of the invention fall within the scope of protection of the present invention.

Claims (10)

1. A method for generating a virtual model based on a mannequin model, comprising:
collecting structured data, unstructured data and semi-structured data required by mannequin model construction of different sources;
extracting first data features from the structured data and extracting second data features from the unstructured data and the semi-structured data;
fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model by using the feature loss function between the first data features and the second data features;
and obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
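The fusion and feature-loss steps of claim 1 can be sketched as follows. This is a minimal illustration under assumptions of our own: the claim does not specify the fusion operation or the form of the loss, so simple concatenation and a mean-squared discrepancy are used as stand-ins.

```python
import numpy as np

def fuse_features(first, second):
    """Concatenate structured-data features with unstructured/
    semi-structured-data features into one fused vector."""
    return np.concatenate([first, second])

def feature_loss(first, second):
    """Mean-squared discrepancy between the two feature sets,
    a stand-in for the patent's feature loss function."""
    return float(np.mean((first - second) ** 2))
```

The fused vector would be fed to the model that outputs the mannequin model, and the loss value would drive its correction.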
2. The method for generating a virtual model based on a mannequin model of claim 1, wherein the first data features are extracted from the structured data using a regular expression.
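Regular-expression extraction from structured data, as in claim 2, might look like the following sketch. The record string and field names (`height_cm`, etc.) are hypothetical, chosen only to illustrate the technique.

```python
import re

# Hypothetical structured record; the field names are illustrative only.
record = "height_cm=178;chest_cm=96;waist_cm=82"

# Named groups capture each field name and its numeric value.
pattern = re.compile(r"(?P<name>\w+)=(?P<value>\d+(?:\.\d+)?)")

features = {m.group("name"): float(m.group("value"))
            for m in pattern.finditer(record)}
```

Each matched field becomes one entry of the first data features.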
3. The method for generating a virtual model based on a mannequin model of claim 1, wherein knowledge extraction is performed on the unstructured data and the semi-structured data based on supervised learning to extract the second data features.
4. The method for generating a virtual model based on a mannequin model of claim 3, wherein performing knowledge extraction on the unstructured data and the semi-structured data based on supervised learning to extract the second data features specifically comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
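The training-data step of claim 4 — extracting sentences that contain known entity pairs — resembles distant supervision, and can be sketched as below. The entity pair, relation label, and sentences are invented for illustration; a real system would draw them from the constructed knowledge graph.

```python
# Hypothetical entity pairs with a target relation, as would be
# extracted from the knowledge graph.
entity_pairs = {("femur", "knee"): "connects_to"}

sentences = [
    "The femur connects to the knee joint.",
    "Muscle mass varies with age.",
]

def build_training_data(sentences, entity_pairs):
    """Keep sentences mentioning both entities of a known pair,
    labelled with that pair's relation, as supervised training data."""
    data = []
    for s in sentences:
        for (e1, e2), rel in entity_pairs.items():
            if e1 in s and e2 in s:
                data.append((s, rel))
    return data
```

The labelled sentences would then train the supervised learning model that outputs the second data features.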
5. The method for generating a virtual model based on a mannequin model of claim 1, further comprising: shooting multi-angle human body component images with a human body composition analysis camera, and correcting the mannequin model by using the multi-angle human body depth images and the multi-angle human body component images, which specifically comprises: performing primary matching on the multi-angle human body depth images, correcting the multi-angle target images according to the primary matching result, and matching the corrected target images with the digital mannequin images.
6. An apparatus for generating a virtual model based on a mannequin model, comprising: the system comprises a data acquisition module, a mannequin model construction module, a model correction module and a virtual model generation module;
the data acquisition module is used for acquiring structured data, unstructured data and semi-structured data required by the construction of the mannequin model from different sources;
the mannequin model building module is used for extracting first data features from the structured data and extracting second data features from the unstructured data and the semi-structured data; fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
the model correction module is used for calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model by using the feature loss function between the first data features and the second data features;
the virtual model generation module is used for obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
7. The apparatus for generating a virtual model based on a mannequin model of claim 6, wherein the first data features are extracted from the structured data using a regular expression.
8. The apparatus for generating a virtual model based on a mannequin model of claim 6, wherein the second data features are extracted by performing knowledge extraction on the unstructured data and the semi-structured data based on supervised learning.
9. The apparatus for generating a virtual model based on a mannequin model of claim 8, wherein performing knowledge extraction on the unstructured data and the semi-structured data based on supervised learning to extract the second data features comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
10. A computer storage medium, comprising: at least one memory and at least one processor;
the memory is used for storing one or more program instructions;
a processor for executing one or more program instructions to perform the method for generating a virtual model based on a mannequin model according to any one of claims 1 to 5.
CN202311517504.6A 2023-11-15 2023-11-15 Method and equipment for generating virtual model based on mannequin model Active CN117234342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311517504.6A CN117234342B (en) 2023-11-15 2023-11-15 Method and equipment for generating virtual model based on mannequin model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311517504.6A CN117234342B (en) 2023-11-15 2023-11-15 Method and equipment for generating virtual model based on mannequin model

Publications (2)

Publication Number Publication Date
CN117234342A CN117234342A (en) 2023-12-15
CN117234342B true CN117234342B (en) 2024-03-19

Family

ID=89093376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311517504.6A Active CN117234342B (en) 2023-11-15 2023-11-15 Method and equipment for generating virtual model based on mannequin model

Country Status (1)

Country Link
CN (1) CN117234342B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114676260A (en) * 2021-12-15 2022-06-28 清华大学 Human body bone motion rehabilitation model construction method based on knowledge graph
CN116541472A (en) * 2023-03-22 2023-08-04 麦博(上海)健康科技有限公司 Knowledge graph construction method in medical field

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013189058A1 (en) * 2012-06-21 2013-12-27 Microsoft Corporation Avatar construction using depth camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114676260A (en) * 2021-12-15 2022-06-28 清华大学 Human body bone motion rehabilitation model construction method based on knowledge graph
CN116541472A (en) * 2023-03-22 2023-08-04 麦博(上海)健康科技有限公司 Knowledge graph construction method in medical field

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Extraction and analysis of three-dimensional human body data; Feng Xiao et al.; Journal of Tianjin Polytechnic University; Vol. 30, No. 6; pp. 20-23 *

Also Published As

Publication number Publication date
CN117234342A (en) 2023-12-15

Similar Documents

Publication Publication Date Title
US11436745B1 (en) Reconstruction method of three-dimensional (3D) human body model, storage device and control device
Ive et al. Distilling translations with visual awareness
CN109448090B (en) Image processing method, device, electronic equipment and storage medium
CN109741309B (en) Bone age prediction method and device based on deep regression network
Yu et al. A computational biomechanics human body model coupling finite element and multibody segments for assessment of head/brain injuries in car-to-pedestrian collisions
CN109685013B (en) Method and device for detecting head key points in human body posture recognition
US11977604B2 (en) Method, device and apparatus for recognizing, categorizing and searching for garment, and storage medium
Gao et al. Leveraging two kinect sensors for accurate full-body motion capture
CN110490081A (en) A kind of remote sensing object decomposition method based on focusing weight matrix and mutative scale semantic segmentation neural network
CN113781164B (en) Virtual fitting model training method, virtual fitting method and related devices
CN112614125A (en) Mobile phone glass defect detection method and device, computer equipment and storage medium
CN111127668A (en) Role model generation method and device, electronic equipment and storage medium
CN114782661B (en) Training method and device for lower body posture prediction model
CN117234342B (en) Method and equipment for generating virtual model based on mannequin model
US20220270387A1 (en) Modeling method and modeling device for human body model, electronic device, and storage medium
CN114048282A (en) Text tree local matching-based image-text cross-modal retrieval method and system
CN109472023A (en) Entity association degree measuring method and system based on entity and text combined embedding and storage medium
CN115035250A (en) Modeling graphic data information interaction processing method and system
CN112084981B (en) Method for customizing clothing based on neural network
CN114972792A (en) Question-answering method, device, equipment and storage medium based on bimodal feature fusion
CN111738248B (en) Character recognition method, training method of character decoding model and electronic equipment
CN113887500A (en) Human body semantic recognition method and device
CN113298948A (en) Three-dimensional grid reconstruction method, device, equipment and storage medium
Yin et al. Application and visualization of human 3D anatomy teaching for healthy people based on a hybrid network model
CN112613470A (en) Face silence living body detection method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant