CN117234342A - Method and equipment for generating virtual model based on mannequin model - Google Patents
- Publication number: CN117234342A
- Application number: CN202311517504.6A
- Authority
- CN
- China
- Legal status (assumed by Google Patents; not a legal conclusion): Granted
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The application discloses a method and equipment for generating a virtual model based on a mannequin model, in the technical field of data processing. The method comprises the following steps: collecting structured data, unstructured data and semi-structured data; extracting first data features from the structured data and second data features from the unstructured and semi-structured data; fusing the first and second data features, inputting the fused features into a machine learning model, and outputting a mannequin model; calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model using this feature loss function; and obtaining user skeletal muscle data with a depth camera and a body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model. By constructing the mannequin model through analysis of multi-source data, the application improves the accuracy of the mannequin model and can output a virtual model that is closer to the real human body.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for generating a virtual model based on a mannequin model.
Background
A mannequin is a human model built to human body proportions, commonly used in clothing design, clothing teaching, garment cutting and manufacture, industrial garment inspection, and the like. Common mannequins can be divided into fitting mannequins, display mannequins, and special mannequins for three-dimensional (draping) cutting.
The existing mannequin model is generally a human body model preset by a designer; when it is subsequently fused with a live human image, poor matching easily occurs because of large positional differences.
Disclosure of Invention
The application provides a method for generating a virtual model based on a mannequin model, which comprises the following steps:
collecting, from different sources, the structured data, unstructured data and semi-structured data required for mannequin model construction;
extracting first data features from the structured data and extracting second data features from the unstructured data and the semi-structured data;
fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model using this feature loss function;
and obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
A method of generating a virtual model based on a mannequin model as described above, wherein a regular expression is used to extract first data features from structured data.
A method of generating a virtual model based on a mannequin model as described above, wherein knowledge extraction of the unstructured data and semi-structured data is performed based on supervised learning to extract the second data features.
The method for generating the virtual model based on the mannequin model, as described above, wherein the knowledge extraction of unstructured data is performed based on supervised learning, and the second data feature is extracted, specifically comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
A method of generating a virtual model based on a mannequin model as described above, further comprising: shooting multi-angle body composition images with the body composition analysis camera, and correcting the mannequin model using the multi-angle human depth images and the multi-angle body composition images, specifically comprising: performing primary matching on the multi-angle human depth images, correcting the multi-angle target images according to the primary matching result, and matching the corrected target images with the digital mannequin images.
The application also provides a device for generating a virtual model based on the mannequin model, which comprises: the system comprises a data acquisition module, a mannequin model construction module, a model correction module and a virtual model generation module;
the data acquisition module is used for acquiring structured data, unstructured data and semi-structured data required by the construction of the mannequin model from different sources;
the mannequin model building module is used for extracting first data features from the structured data and second data features from the unstructured data and semi-structured data; fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
the model correction module is used for calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model using this feature loss function;
the virtual model generation module is used for obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
A device for generating a virtual model based on a mannequin model as described above, wherein a regular expression is used to extract the first data features from the structured data.
The device for generating a virtual model based on a mannequin model as described above, wherein knowledge extraction of the unstructured data and semi-structured data is performed based on supervised learning to extract the second data features.
The device for generating a virtual model based on a mannequin model as described above, wherein the knowledge extraction of unstructured data based on supervised learning, extracting the second data features, specifically comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
The present application also provides a computer storage medium comprising: at least one memory and at least one processor;
the memory is used for storing one or more program instructions;
a processor for executing one or more program instructions to perform a method of generating a virtual model based on a mannequin model as described in any one of the preceding claims.
The beneficial effects achieved by the application are as follows: by constructing the mannequin model through analysis of multi-source data, the application improves the accuracy of the mannequin model and can output a virtual model that is closer to the real human body.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a method for generating a virtual model based on a mannequin model according to an embodiment of the present application.
Detailed Description
The application is further described below in connection with specific embodiments, so that the technical means, creative features, objectives and effects of the application are easy to understand.
Example 1
As shown in fig. 1, a first embodiment of the present application provides a method for generating a virtual model based on a mannequin model, including:
step 110, collecting structured data, unstructured data and semi-structured data required by mannequin model construction of different sources;
In order to construct the most suitable mannequin model, the application acquires data for constructing the mannequin model from various sources, such as European-standard human skeletal muscle data, Japanese-standard human skeletal muscle data, Korean-standard human skeletal muscle data and national-standard human skeletal muscle data. The data can be obtained by means such as questionnaire survey, data description and depth camera shooting, and is divided into structured data, unstructured data and semi-structured data.
Step 120, extracting first data features from the structured data and second data features from the unstructured data and the semi-structured data.
Since structured data has a fixed structural form, the application preferably uses regular expressions to extract the first data features from the structured data. For unstructured data and semi-structured data without fixed formats, the application preferably performs knowledge extraction based on supervised learning to extract the second data features.
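As a sketch of the regular-expression extraction described above (the record layout, field names, and pattern are illustrative assumptions; the application does not specify the structured data's format):

```python
import re

# Hypothetical record layout for structured measurement data; the real field
# format is not specified in the application.
RECORD_RE = re.compile(
    r"(?P<region>[a-z_]+):\s*bone=(?P<bone>\d+(?:\.\d+)?),\s*muscle=(?P<muscle>\d+(?:\.\d+)?)"
)

def extract_first_features(text):
    """Extract (bone, muscle) feature pairs per body region from structured text."""
    return {
        m.group("region"): (float(m.group("bone")), float(m.group("muscle")))
        for m in RECORD_RE.finditer(text)
    }

sample = "shoulder: bone=41.5, muscle=12.0\npelvis: bone=27.3, muscle=9.8"
```

Any record with a fixed, known layout can be handled this way; records that do not match the pattern are simply skipped, which is why regex extraction suits structured data but not free-form text.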
Knowledge extraction of unstructured data is performed based on supervised learning, and second data features are extracted, specifically including: constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, specifically extracting attributes, relations and entities from unstructured data and semi-structured data, performing entity alignment and entity disambiguation on the attributes, the relations and the entities, and performing quality evaluation. Extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model; the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
The data features of interest extracted from the structured, unstructured, and semi-structured data include skeletal features (including but not limited to the shoulder, arm, spine, pelvis, and leg regions) and muscle features (including but not limited to the upper limbs, chest, abdomen, buttocks, and legs); the mannequin model is trained with these features.
The first data features extracted from the structured data by the above feature extraction method are A = {a1, a2, …, an}, where each ai = (xi, yi) and x, y represent skeletal and muscle features respectively; a1 represents the European-standard human skeletal muscle data, a2 represents the Japanese-standard human skeletal muscle data, …, and an represents the nth human skeletal muscle data. The second data features extracted from the unstructured and semi-structured data are B = {b1, b2, …, bn}, where b1 represents the European-standard human skeletal muscle data, b2 represents the Japanese-standard human skeletal muscle data, …, and bn represents the nth human skeletal muscle data. The first and second data features are of the same data type, but their values differ.
Step 130, fusing the first data features and the second data features, inputting the fused features into the machine learning model, and outputting the preliminary mannequin model.
The structured and unstructured data features are fused according to the formula ci = wi·(α·ai + β·bi), where the fused feature set is C = {c1, c2, …, cn}; c1 represents the fused European-standard human skeletal muscle data, c2 represents the fused Japanese-standard human skeletal muscle data, …, and cn represents the fused nth human skeletal muscle data. wi represents the influence factor of the ith feature data on the construction of the mannequin model, α represents the influence weight of the structured data features on the construction of the mannequin model, and β represents the influence weight of the unstructured and semi-structured data features on the construction of the mannequin model. The fused features are input into the machine learning model, and the preliminary mannequin model is output.
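The fusion step can be sketched as follows. The combination rule ci = wi·(α·ai + β·bi) is an assumption reconstructed from the symbol definitions around the garbled formula, and the numeric values are purely illustrative:

```python
def fuse_features(a, b, alpha, beta, weights):
    """Fuse structured features a and unstructured/semi-structured features b
    per source: c_i = w_i * (alpha * a_i + beta * b_i).

    alpha/beta are the influence weights of the two data kinds on mannequin
    model construction, and w_i is the influence factor of the i-th source.
    This combination rule is an assumption reconstructed from the text.
    """
    if not (len(a) == len(b) == len(weights)):
        raise ValueError("feature sets and weights must have equal length")
    return [w * (alpha * x + beta * y) for w, x, y in zip(weights, a, b)]

# Illustrative per-source values: European, Japanese, Korean, national standard
a = [1.0, 2.0, 3.0, 4.0]  # first data features (structured)
b = [2.0, 2.0, 2.0, 2.0]  # second data features (unstructured/semi-structured)
c = fuse_features(a, b, alpha=0.6, beta=0.4, weights=[1.0, 1.0, 0.5, 0.5])
```

In practice each ai and bi would be a vector of skeletal and muscle measurements rather than a scalar, but the weighting logic is the same per component.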
Step 140, calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model using this feature loss function;
After training with the extracted data features, the application further calculates the feature loss function between the data features and continuously corrects the mannequin model with it, so as to improve the accuracy of the mannequin model.
Specifically, the feature loss function between the first data features and the second data features is calculated according to the formula Loss(A, B) = δ·(μ1·‖A − C‖ + μ2·‖B − C‖), where A is the first data feature set and B is the second data feature set; μ1 and μ2 are the correction coefficients of the feature loss function for the first and second data features respectively, with 0 < μ1 < 1, 0 < μ2 < 1, and μ1 + μ2 = 1. δ is the feature loss rate, δ = λ1·‖A − C‖ + λ2·‖B − C‖, where ‖·‖ denotes the norm, i.e. the square root of the sum of squares of each element in a data feature set; ‖A − C‖ is the loss calculated between the first data features and the fused features, λ1 is the influence weight of that loss on the feature loss rate, ‖B − C‖ is the loss calculated between the second data features and the fused features, and λ2 is the influence weight of that loss on the feature loss rate.
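A minimal sketch of this loss computation. The feature loss rate δ = λ1·‖A − C‖ + λ2·‖B − C‖ follows the symbol definitions above, but exactly how δ and the correction coefficients μ1, μ2 combine is an assumption, since the original formula is not fully recoverable:

```python
import math

def norm(v):
    """Euclidean norm: square root of the sum of squares of the elements."""
    return math.sqrt(sum(x * x for x in v))

def feature_loss(a, b, c, mu1, mu2, lam1, lam2):
    """Feature loss between the first (a) and second (b) feature sets,
    measured against the fused features c.

    delta = lam1*||a - c|| + lam2*||b - c|| is the feature loss rate;
    mu1, mu2 are correction coefficients with 0 < mu < 1 and mu1 + mu2 = 1.
    How mu and delta combine is an assumption; the loss used here is
    delta * (mu1*||a - c|| + mu2*||b - c||).
    """
    if not (0 < mu1 < 1 and 0 < mu2 < 1 and abs(mu1 + mu2 - 1) < 1e-9):
        raise ValueError("mu1, mu2 must lie in (0, 1) and sum to 1")
    loss_a = norm([x - z for x, z in zip(a, c)])  # ||A - C||
    loss_b = norm([y - z for y, z in zip(b, c)])  # ||B - C||
    delta = lam1 * loss_a + lam2 * loss_b         # feature loss rate
    return delta * (mu1 * loss_a + mu2 * loss_b)
```

When both feature sets coincide with the fused features the loss is zero, so correction stops once the first and second features agree with the fusion.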
Step 150, obtaining user skeletal muscle data by using a depth camera and a human body composition analysis camera, inputting the user skeletal muscle data into a trained mannequin model, and generating a virtual model;
and shooting multi-angle human body depth images by using a depth camera, generating a motion history point cloud by using a depth image sequence, extracting global features, extracting relative displacement features and relative distance features of human body bone joints from three-dimensional bone information, normalizing to obtain user bone muscle data, and analyzing the camera by using human body components to obtain the user muscle data. User skeletal muscle data is input into a mannequin model to generate a virtual model, a multi-angle human body depth image is shot by using a depth camera, a multi-angle human body component image is shot by using a human body component analysis camera, and correction of the mannequin model is performed by using the multi-angle human body depth image and the multi-angle human body component image.
The correction of the mannequin model specifically comprises the following steps:
and performing primary matching on the multi-angle human body depth image and the multi-angle human body component image.
Specifically, skeletal feature point coordinates are calibrated on the multi-angle human depth images, and mannequin feature points are calibrated in the multi-angle digital mannequin images: a designated human body part in the digital mannequin image serves as a mannequin feature point, and its position in the digital mannequin image gives the mannequin feature point coordinates. The skeletal feature points and the mannequin feature points have a correspondence, and the corresponding feature points are aligned one by one, completing the primary matching of the multi-angle depth images with the corresponding multi-angle digital mannequin images.
Muscle feature point coordinates are calibrated on the multi-angle body composition images, and mannequin feature points are calibrated in the multi-angle digital mannequin images: a designated human body part in the digital mannequin image serves as a mannequin feature point, and its position in the digital mannequin image gives the mannequin feature point coordinates. The muscle feature points and the mannequin feature points have a correspondence, and the corresponding feature points are aligned one by one, completing the primary matching of the multi-angle body composition images with the corresponding multi-angle digital mannequin images.
The multi-angle target images are then corrected according to the primary matching result, and the corrected target images are matched with the digital mannequin images.
Specifically, it is checked whether the mannequin feature point coordinates after primary matching coincide with the skeletal muscle feature point coordinates, and the number of coinciding coordinate pairs is counted; if the number of coinciding coordinates is smaller than a first specified threshold, overall correction of the target image is performed. The image is scaled overall by a certain proportion, and after scaling it is judged whether the number of coinciding mannequin feature point and skeletal muscle feature point coordinates has changed; if the number of coinciding coordinates has increased above the first specified threshold, local correction of the image is performed, and the corrected target image is matched with the digital mannequin image.
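The threshold-and-scale correction loop described above can be sketched as follows. The coincidence tolerance, the fixed scale factor, and the point layout are illustrative assumptions standing in for the unspecified "certain proportion":

```python
def count_overlaps(mannequin_pts, target_pts, tol=1.0):
    """Count corresponding feature-point pairs whose coordinates coincide
    within a tolerance (the tolerance value is an illustrative assumption)."""
    return sum(
        1
        for (x1, y1), (x2, y2) in zip(mannequin_pts, target_pts)
        if abs(x1 - x2) <= tol and abs(y1 - y2) <= tol
    )

def correct_target(target_pts, mannequin_pts, threshold, scale=1.1, tol=1.0):
    """Globally scale the target image's feature points when too few of them
    coincide with the mannequin feature points after primary matching.

    Returns the (possibly scaled) points and whether correction was applied.
    Local correction of individual regions would follow this overall step
    when scaling raises the overlap count above the threshold.
    """
    if count_overlaps(mannequin_pts, target_pts, tol) >= threshold:
        return target_pts, False  # primary matching is already good enough
    scaled = [(x * scale, y * scale) for x, y in target_pts]
    return scaled, True

pts = [(10.0, 10.0), (20.0, 20.0)]  # target feature points (illustrative)
ref = [(11.0, 11.0), (22.0, 22.0)]  # mannequin feature points (illustrative)
scaled, changed = correct_target(pts, ref, threshold=2)
```

In this example only one of the two pairs coincides before correction, which is below the threshold, so the points are scaled and both pairs then coincide.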
Example two
The second embodiment of the application provides a device for generating a virtual model based on a mannequin model, which comprises: the system comprises a data acquisition module 21, a mannequin model construction module 22, a model correction module 23 and a virtual model generation module 24;
the data acquisition module 21 is used for acquiring structured data, unstructured data and semi-structured data required by the construction of the mannequin model from different sources;
In order to construct the most suitable mannequin model, the application acquires data for constructing the mannequin model from various sources, such as European-standard human skeletal muscle data, Japanese-standard human skeletal muscle data, Korean-standard human skeletal muscle data and national-standard human skeletal muscle data. The data can be obtained by means such as questionnaire survey, data description and depth camera shooting, and is divided into structured data, unstructured data and semi-structured data.
The mannequin model building module 22 is used for extracting first data features from the structured data and second data features from the unstructured data and semi-structured data; fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
Since structured data has a fixed structural form, the application preferably uses regular expressions to extract the first data features from the structured data. For unstructured data and semi-structured data without fixed formats, the application preferably performs knowledge extraction based on supervised learning to extract the second data features.
Knowledge extraction of unstructured data is performed based on supervised learning, and second data features are extracted, specifically including: constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, specifically extracting attributes, relations and entities from unstructured data and semi-structured data, performing entity alignment and entity disambiguation on the attributes, the relations and the entities, and performing quality evaluation. Extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model; the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
The data features of interest extracted from the structured, unstructured, and semi-structured data include skeletal features (including but not limited to the shoulder, arm, spine, pelvis, and leg regions) and muscle features (including but not limited to the upper limbs, chest, abdomen, buttocks, and legs); the mannequin model is trained with these features.
By applying the above feature extraction method, the first data features extracted from the structured data are A = {a1, a2, …, an}, where each ai = (xi, yi) and x, y represent skeletal and muscle features respectively; a1 represents the European-standard human skeletal muscle data, a2 represents the Japanese-standard human skeletal muscle data, …, and an represents the nth human skeletal muscle data. The second data features extracted from the unstructured and semi-structured data are B = {b1, b2, …, bn}, where b1 represents the European-standard human skeletal muscle data, b2 represents the Japanese-standard human skeletal muscle data, …, and bn represents the nth human skeletal muscle data. The first and second data features are of the same data type, but their values differ.
The structured and unstructured data features are fused according to the formula ci = wi·(α·ai + β·bi), where the fused feature set is C = {c1, c2, …, cn}; c1 represents the fused European-standard human skeletal muscle data, c2 represents the fused Japanese-standard human skeletal muscle data, …, and cn represents the fused nth human skeletal muscle data. wi represents the influence factor of the ith feature data on the construction of the mannequin model, α represents the influence weight of the structured data features on the construction of the mannequin model, and β represents the influence weight of the unstructured and semi-structured data features on the construction of the mannequin model. The fused features are input into the machine learning model, and the preliminary mannequin model is output.
The model correction module 23 is configured to calculate a feature loss function between the first data features and the second data features, and to correct the mannequin model using this feature loss function;
After training with the extracted data features, the application further calculates the feature loss function between the data features and continuously corrects the mannequin model with it, so as to improve the accuracy of the mannequin model.
Specifically, the feature loss function between the first data features and the second data features is calculated according to the formula Loss(A, B) = δ·(μ1·‖A − C‖ + μ2·‖B − C‖), where A is the first data feature set and B is the second data feature set; μ1 and μ2 are the correction coefficients of the feature loss function for the first and second data features respectively, with 0 < μ1 < 1, 0 < μ2 < 1, and μ1 + μ2 = 1. δ is the feature loss rate, δ = λ1·‖A − C‖ + λ2·‖B − C‖, where ‖·‖ denotes the norm, i.e. the square root of the sum of squares of each element in a data feature set; ‖A − C‖ is the loss calculated between the first data features and the fused features, λ1 is the influence weight of that loss on the feature loss rate, ‖B − C‖ is the loss calculated between the second data features and the fused features, and λ2 is the influence weight of that loss on the feature loss rate.
The virtual model generation module 24 is configured to obtain user skeletal muscle data using the depth camera and the body composition analysis camera, input the user skeletal muscle data into the trained mannequin model, and generate a virtual model.
A depth camera is used to shoot multi-angle human depth images; a motion history point cloud is generated from the depth image sequence and global features are extracted; relative displacement features and relative distance features of the human skeletal joints are extracted from the three-dimensional skeleton information and normalized to obtain the user skeletal data, and the user muscle data is obtained with the body composition analysis camera. The user skeletal muscle data is input into the mannequin model to generate the virtual model. The depth camera shoots multi-angle human depth images, the body composition analysis camera shoots multi-angle body composition images, and the mannequin model is corrected using the multi-angle human depth images and the multi-angle body composition images.
The correction of the mannequin model specifically comprises the following steps:
and performing primary matching on the multi-angle human body depth image and the multi-angle human body component image.
Specifically, skeletal feature point coordinates are calibrated on the multi-angle human depth images, and mannequin feature points are calibrated in the multi-angle digital mannequin images: a designated human body part in the digital mannequin image serves as a mannequin feature point, and its position in the digital mannequin image gives the mannequin feature point coordinates. The skeletal feature points and the mannequin feature points have a correspondence, and the corresponding feature points are aligned one by one, completing the primary matching of the multi-angle depth images with the corresponding multi-angle digital mannequin images.
Muscle feature point coordinates are calibrated on the multi-angle body composition images, and mannequin feature points are calibrated in the multi-angle digital mannequin images: a designated human body part in the digital mannequin image serves as a mannequin feature point, and its position in the digital mannequin image gives the mannequin feature point coordinates. The muscle feature points and the mannequin feature points have a correspondence, and the corresponding feature points are aligned one by one, completing the primary matching of the multi-angle body composition images with the corresponding multi-angle digital mannequin images.
The multi-angle target images are then corrected according to the primary matching result, and the corrected target images are matched with the digital mannequin images.
Specifically, it is checked whether the primarily matched mannequin feature point coordinates coincide with the skeletal muscle feature point coordinates, and the number of coinciding coordinate pairs is counted. If this number is smaller than a first specified threshold, an overall correction is applied to the target image: the image is scaled as a whole by a certain ratio, and the count of coinciding mannequin and skeletal muscle feature point coordinates is re-evaluated after scaling. If the count has increased and exceeds the first specified threshold, a local correction is further applied to the image, and the corrected target image is matched against the digital mannequin image.
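The coarse-to-fine correction loop above can be sketched as follows. This is a simplified, hypothetical interpretation: the coincidence tolerance `tol`, the scale factor, and the centroid-based scaling are all assumptions the patent does not specify.

```python
import numpy as np

def correct_mannequin_match(mann_pts, sm_pts, threshold, tol=2.0, scale=1.05):
    """Sketch of the threshold-driven overall correction step.

    mann_pts, sm_pts: (N, 2) arrays of mannequin and skeletal-muscle feature
    point coordinates in the same image frame.
    threshold: the 'first specified threshold' on the coinciding-point count.
    Returns the (possibly scaled) mannequin points and a flag indicating
    whether a further local correction is still required.
    """
    def overlap_count(a, b):
        # Corresponding points "coincide" when they lie within tol pixels.
        return int((np.linalg.norm(a - b, axis=1) <= tol).sum())

    mann_pts = np.asarray(mann_pts, dtype=float)
    sm_pts = np.asarray(sm_pts, dtype=float)
    if overlap_count(mann_pts, sm_pts) >= threshold:
        return mann_pts, False            # already matched well enough
    # Overall correction: scale the whole point set about its centroid.
    centroid = mann_pts.mean(axis=0)
    scaled = centroid + (mann_pts - centroid) * scale
    improved = overlap_count(scaled, sm_pts) > overlap_count(mann_pts, sm_pts)
    needs_local = overlap_count(scaled, sm_pts) < threshold
    return (scaled if improved else mann_pts), needs_local

mann = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
corrected, needs_local = correct_mannequin_match(mann, mann.copy(), threshold=2)
```

In the already-aligned example above, the coincidence count meets the threshold, so no scaling or local correction is triggered.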
Corresponding to the above embodiments, an embodiment of the present application provides a computer storage medium, including: at least one memory and at least one processor;
the memory is used for storing one or more program instructions;
the processor is configured to execute the one or more program instructions to perform the method of generating a virtual model based on a mannequin model described above.
In accordance with the foregoing embodiments, a computer-readable storage medium is provided, containing one or more program instructions that, when executed by a processor, perform the method of generating a virtual model based on a mannequin model.
The disclosed embodiments provide a computer readable storage medium having stored therein computer program instructions that, when executed on a computer, cause the computer to perform a method of generating a virtual model based on a mannequin model as described above.
In the embodiment of the application, the processor may be an integrated circuit chip with signal processing capability. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or another storage medium well known in the art. The processor reads the information in the storage medium and, in combination with its hardware, performs the steps of the above method.
The storage medium may be memory, for example, may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory.
The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
The storage media described in embodiments of the present application are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the present application may be implemented in a combination of hardware and software. When implemented in software, the corresponding functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
The foregoing embodiments further illustrate the general principles of the present application in detail and are not to be construed as limiting its scope; any modifications, equivalents, improvements, and the like made based on the teachings of the application are intended to fall within its scope of protection.
Claims (10)
1. A method for generating a virtual model based on a mannequin model, comprising:
collecting structured data, unstructured data and semi-structured data required by mannequin model construction of different sources;
extracting first data features from the structured data and extracting second data features from the unstructured data and the semi-structured data;
fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model using the feature loss function;
and obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
2. The method for generating a virtual model based on a mannequin model of claim 1, wherein the first data features are extracted from the structured data using regular expressions.
3. The method of generating a virtual model based on a mannequin model of claim 1, wherein knowledge extraction of unstructured data and semi-structured data is performed based on supervised learning to extract the second data features.
4. The method for generating a virtual model based on a mannequin model of claim 3, wherein performing knowledge extraction on the unstructured data and semi-structured data based on supervised learning to extract the second data features specifically comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
5. The method for generating a virtual model based on a mannequin model of claim 1, further comprising: shooting multi-angle human body composition images by using the human body composition analysis camera, and correcting the mannequin model by using the multi-angle human body depth images and the multi-angle human body composition images, which specifically comprises: performing primary matching on the multi-angle human body depth images and the multi-angle human body composition images, correcting the multi-angle target images according to the primary matching result, and matching the corrected target images with the digital mannequin images.
6. An apparatus for generating a virtual model based on a mannequin model, comprising: the system comprises a data acquisition module, a mannequin model construction module, a model correction module and a virtual model generation module;
the data acquisition module is used for acquiring structured data, unstructured data and semi-structured data required by the construction of the mannequin model from different sources;
the mannequin model construction module is used for extracting first data features from the structured data and extracting second data features from the unstructured data and the semi-structured data; fusing the first data features and the second data features, inputting the fused features into a machine learning model, and outputting a mannequin model;
the model correction module is used for calculating a feature loss function between the first data features and the second data features, and correcting the mannequin model using the feature loss function;
the virtual model generation module is used for obtaining user skeletal muscle data by using the depth camera and the human body composition analysis camera, inputting the user skeletal muscle data into the trained mannequin model, and generating a virtual model.
7. The apparatus for generating a virtual model based on a mannequin model of claim 6, wherein the first data features are extracted from the structured data using regular expressions.
8. The apparatus for generating a virtual model based on a mannequin of claim 6, wherein the second data feature is extracted by knowledge extraction of unstructured data and semi-structured data based on supervised learning.
9. The apparatus for generating a virtual model based on a mannequin model of claim 8, wherein performing knowledge extraction on the unstructured data and semi-structured data based on supervised learning to extract the second data features specifically comprises:
constructing a knowledge graph, extracting entity pairs with target relations from the knowledge graph, and extracting attributes, relations and entities from unstructured data and semi-structured data;
performing entity alignment and entity disambiguation on the attributes, the relationships and the entities, and then performing quality evaluation;
extracting sentences containing entity pairs from unstructured data and semi-structured data as training data, and training a supervised learning model;
the unstructured data and the semi-structured data are input into a supervised learning model, and extracted second data features are output.
10. A computer storage medium, comprising: at least one memory and at least one processor;
the memory is used for storing one or more program instructions;
the processor is configured to execute the one or more program instructions to perform the method for generating a virtual model based on a mannequin model according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311517504.6A CN117234342B (en) | 2023-11-15 | 2023-11-15 | Method and equipment for generating virtual model based on mannequin model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117234342A true CN117234342A (en) | 2023-12-15 |
CN117234342B CN117234342B (en) | 2024-03-19 |
Family
ID=89093376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311517504.6A Active CN117234342B (en) | 2023-11-15 | 2023-11-15 | Method and equipment for generating virtual model based on mannequin model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117234342B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130342527A1 (en) * | 2012-06-21 | 2013-12-26 | Microsoft Corporation | Avatar construction using depth camera |
CN114676260A (en) * | 2021-12-15 | 2022-06-28 | 清华大学 | Human body bone motion rehabilitation model construction method based on knowledge graph |
CN116541472A (en) * | 2023-03-22 | 2023-08-04 | 麦博(上海)健康科技有限公司 | Knowledge graph construction method in medical field |
Non-Patent Citations (1)
Title |
---|
FENG Xiao et al.: "Extraction and analysis of three-dimensional human body data", Journal of Tianjin Polytechnic University, vol. 30, no. 6, pages 20-23 *
Also Published As
Publication number | Publication date |
---|---|
CN117234342B (en) | 2024-03-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |