CN116127019B - Dynamic parameter and visual model generation WEB 2D automatic modeling engine system - Google Patents

Dynamic parameter and visual model generation WEB 2D automatic modeling engine system

Info

Publication number
CN116127019B
CN116127019B (application CN202310209360.1A)
Authority
CN
China
Prior art keywords
feature vector
parameter
vector
product
semantic
Prior art date
Legal status
Active
Application number
CN202310209360.1A
Other languages
Chinese (zh)
Other versions
CN116127019A
Inventor
吴武江
张军峰
徐翌鸣
Current Assignee
Hangzhou Guochen Zhiqi Technology Co ltd
Original Assignee
Hangzhou Guochen Zhiqi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Guochen Zhiqi Technology Co ltd
Priority to CN202310209360.1A
Publication of CN116127019A
Application granted
Publication of CN116127019B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/33: Querying
    • G06F16/3331: Query processing
    • G06F16/334: Query execution
    • G06F16/3344: Query execution using natural language analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/34: Browsing; Visualisation therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F16/972: Access to data in other repository systems, e.g. legacy data or dynamic Web page generation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)
  • Feedback Control In General (AREA)

Abstract

The application relates to the field of intelligent modeling and discloses a dynamic parameter and visual model generation WEB 2D automatic modeling engine system. The system uses deep-learning neural network models to mine semantic understanding features from the textual description of the product requirement and feature information from the product requirement parameters, and fuses the two to obtain an accurate semantic understanding of the product requirement, thereby improving modeling efficiency and quality.

Description

Dynamic parameter and visual model generation WEB 2D automatic modeling engine system
Technical Field
The application relates to the field of intelligent modeling, and in particular to a dynamic parameter and visual model generation WEB 2D automatic modeling engine system.
Background
With the development of intelligent industrial manufacturing, the demand of industrial enterprises for equipment visualization and remote operation and maintenance is growing rapidly. Traditional stand-alone configuration software can no longer meet increasingly complex control requirements, so Web-based configuration visualization interfaces have become the main technical path.
However, existing WEB 2D modeling has a low degree of intelligence, and manual design is error-prone. During WEB 2D modeling the relations among the many parameters are complicated, so designing a 2D model is inefficient, consumes a great deal of an engineer's effort, and cannot guarantee accuracy.
An intelligent WEB 2D automatic modeling engine system is therefore desired that can drive the WEB 2D system to perform automatic modeling accurately and improve modeling efficiency and quality.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a dynamic parameter and visual model generation WEB 2D automatic modeling engine system that uses deep-learning neural network models to mine semantic understanding features from the textual description of the product requirement and feature information from the product requirement parameters, and then fuses the two to obtain an accurate semantic understanding of the product requirement, thereby improving modeling efficiency and quality.
According to one aspect of the present application, there is provided a dynamic parameter and visual model generation WEB 2D automatic modeling engine system, comprising: a product requirement acquisition module for acquiring the product requirement input by a user, the product requirement comprising a product text description and requirement parameters entered by the user in a product custom parameter template form; a product text semantic understanding module for performing word segmentation on the product text description and then passing it through a semantic encoder containing a word embedding layer to obtain a product text description semantic understanding feature vector; a product parameter encoding module for passing each requirement parameter entered by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors; a product parameter association module for passing the plurality of requirement parameter one-hot encoding vectors through a transformer-based context encoder to obtain a parameter context semantic association feature vector; a feature fusion module for fusing the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling requirement understanding feature vector; an optimization module for performing feature distribution optimization on the modeling requirement understanding feature vector based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain an optimized modeling requirement understanding feature vector; and a 2D model generation module for passing the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model.
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the product text semantic understanding module includes: a word segmentation unit for performing word segmentation on the product text description to convert it into a word sequence composed of a plurality of words; a word embedding unit for mapping each word in the word sequence into a word embedding vector using the embedding layer of the semantic encoder to obtain a sequence of word embedding vectors; a context encoding unit for performing transformer-based global context semantic encoding on the sequence of word embedding vectors using the transformer of the semantic encoder to obtain a plurality of global context semantic feature vectors; and a concatenation unit for concatenating the plurality of global context semantic feature vectors to obtain the product text description semantic understanding feature vector.
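For illustration, the following Python/PyTorch sketch shows one possible realization of the module just described (word segmentation, word embedding, transformer-based context encoding, and concatenation). It is a minimal sketch under assumed dimensions; the whitespace tokenizer, the toy vocabulary, and all class and variable names are illustrative stand-ins rather than elements defined by the patent.

```python
import torch
import torch.nn as nn

class ProductTextSemanticEncoder(nn.Module):
    """Sketch of the product text semantic understanding module: word segmentation,
    word embedding, transformer-based global context encoding, and concatenation."""

    def __init__(self, vocab: dict, embed_dim: int = 128, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        self.vocab = vocab                                    # word -> integer id (assumed to be given)
        self.embedding = nn.Embedding(len(vocab), embed_dim)  # word embedding layer
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, product_text: str) -> torch.Tensor:
        # Word segmentation: a plain split() stands in for a real segmenter here.
        words = product_text.split()
        ids = torch.tensor([[self.vocab.get(w, 0) for w in words]])  # (1, seq_len)
        word_vectors = self.embedding(ids)                           # sequence of word embedding vectors
        context_vectors = self.context_encoder(word_vectors)         # global context semantic feature vectors
        # Concatenate (cascade) the per-word context vectors into one feature vector.
        return context_vectors.reshape(1, -1)

vocab = {"<unk>": 0, "conveyor": 1, "belt": 2, "width": 3}           # toy vocabulary
v1 = ProductTextSemanticEncoder(vocab)("conveyor belt width")        # product text description semantic understanding feature vector
```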
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the context encoding unit includes: a first query vector construction subunit for arranging the sequence of word embedding vectors into a one-dimensional global feature vector; a first self-attention subunit for calculating the product between the global feature vector and the transpose of each word vector in the sequence of word embedding vectors to obtain a plurality of self-attention association matrices; a first normalization subunit for normalizing each of the plurality of self-attention association matrices to obtain a plurality of normalized self-attention association matrices; a first attention calculation subunit for passing each normalized self-attention association matrix through a Softmax classification function to obtain a plurality of probability values; a first attention application subunit for weighting each word vector in the sequence of word embedding vectors with the corresponding probability value as weight to obtain a plurality of context semantic feature vectors; and a first concatenation subunit for concatenating the plurality of context semantic feature vectors to obtain the global context semantic feature vector.
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the product parameter association module includes: a second query vector construction unit for arranging the plurality of requirement parameter one-hot encoding vectors into a one-dimensional global requirement parameter one-hot encoding vector; a second self-attention unit for calculating the product between the global requirement parameter one-hot encoding vector and the transpose of each requirement parameter one-hot encoding vector to obtain a plurality of self-attention association matrices; a second normalization unit for normalizing each of the plurality of self-attention association matrices to obtain a plurality of normalized self-attention association matrices; a second attention calculation unit for passing each normalized self-attention association matrix through a Softmax classification function to obtain a plurality of probability values; a second attention application unit for weighting each requirement parameter one-hot encoding vector with the corresponding probability value as weight to obtain a plurality of context semantic requirement parameter one-hot encoding vectors; and a second concatenation unit for concatenating the plurality of context semantic requirement parameter one-hot encoding vectors to obtain the parameter context semantic association feature vector.
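The self-attention procedure described for the context encoding unit and, analogously, for the product parameter association module above can be sketched as follows. This is an interpretation of the claimed steps, not a definitive implementation: in particular, reducing each normalized association matrix to a single score before the Softmax is an assumption made here to keep the sketch runnable.

```python
import torch

def context_semantic_encoding(vectors: torch.Tensor) -> torch.Tensor:
    """Simplified sketch of the described self-attention procedure.
    vectors: (n, d) sequence of word embedding vectors or requirement parameter
    one-hot encoding vectors. Returns the concatenated context feature vector."""
    n, d = vectors.shape
    # 1. Arrange the sequence into a one-dimensional global (query) vector.
    global_vector = vectors.reshape(-1)                          # (n * d,)
    # 2. Product of the global vector with the transpose of each item vector:
    #    interpreted here as one outer-product association matrix per item.
    assoc = [torch.outer(global_vector, v) for v in vectors]     # each (n * d, d)
    # 3./4. Normalize each association matrix, reduce it to a scalar score
    #       (an assumption of this sketch), and apply Softmax across items.
    scores = torch.stack([a.mean() / (a.std() + 1e-6) for a in assoc])
    probs = torch.softmax(scores, dim=0)                         # one probability value per item
    # 5./6. Weight each item vector by its probability value and concatenate.
    weighted = probs.unsqueeze(1) * vectors                      # (n, d)
    return weighted.reshape(-1)

params_encoded = context_semantic_encoding(torch.randn(5, 16))   # example: 5 vectors of length 16
```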
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the feature fusion module is configured to fuse the product text description semantic understanding feature vector and the parameter context semantic association feature vector according to the following formula to obtain the modeling requirement understanding feature vector:
Vc = Concat[V1, V2]
where V1 denotes the product text description semantic understanding feature vector, V2 denotes the parameter context semantic association feature vector, Concat[ , ] denotes the concatenation (cascade) function, and Vc denotes the modeling requirement understanding feature vector.
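A minimal sketch of this cascade fusion, assuming PyTorch row vectors of shape (1, d):

```python
import torch

def fuse_by_concatenation(v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
    # Vc = Concat[V1, V2]: cascade the two row vectors along the feature dimension.
    return torch.cat([v1, v2], dim=-1)

vc = fuse_by_concatenation(torch.randn(1, 256), torch.randn(1, 128))  # shape (1, 384)
```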
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the optimization module includes: a first response optimization unit for calculating the incoherent sparse responsive fusion of the product text description semantic understanding feature vector and the modeling requirement understanding feature vector to obtain a first partial responsiveness fusion feature vector; a second response optimization unit for calculating the incoherent sparse responsive fusion of the parameter context semantic association feature vector and the modeling requirement understanding feature vector to obtain a second partial responsiveness fusion feature vector; and a point-addition optimization unit for performing position-wise point addition on the first partial responsiveness fusion feature vector and the second partial responsiveness fusion feature vector to obtain the optimized modeling requirement understanding feature vector.
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the first response optimization unit is configured to calculate the incoherent sparse responsive fusion of the product text description semantic understanding feature vector and the modeling requirement understanding feature vector to obtain the first partial responsiveness fusion feature vector according to a formula in which V1, Vc and Vf1 denote the product text description semantic understanding feature vector, the modeling requirement understanding feature vector and the first partial responsiveness fusion feature vector, respectively; ||·||1 and ||·||2 denote the ℓ1-norm and ℓ2-norm of a vector; L is the length of the vectors; ⊗ and ⊙ denote the vector product and the vector dot product, respectively; and all vectors are in row-vector form.
In the above dynamic parameter and visual model generation WEB 2D automatic modeling engine system, the second response optimization unit is configured to calculate the incoherent sparse responsive fusion of the parameter context semantic association feature vector and the modeling requirement understanding feature vector to obtain the second partial responsiveness fusion feature vector according to a formula in which V2, Vc and Vf2 denote the parameter context semantic association feature vector, the modeling requirement understanding feature vector and the second partial responsiveness fusion feature vector, respectively; ||·||1 and ||·||2 denote the ℓ1-norm and ℓ2-norm of a vector; L is the length of the vectors; ⊗ and ⊙ denote the vector product and the vector dot product, respectively; and all vectors are in row-vector form.
According to another aspect of the present application, there is provided a dynamic parameter and visual model generation WEB 2D automatic modeling engine method, comprising: acquiring the product requirement input by a user, the product requirement comprising a product text description and requirement parameters entered by the user in a product custom parameter template form; performing word segmentation on the product text description and passing it through a semantic encoder containing a word embedding layer to obtain a product text description semantic understanding feature vector; passing each requirement parameter entered by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors; passing the plurality of requirement parameter one-hot encoding vectors through a transformer-based context encoder to obtain a parameter context semantic association feature vector; fusing the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling requirement understanding feature vector; performing feature distribution optimization on the modeling requirement understanding feature vector based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain an optimized modeling requirement understanding feature vector; and passing the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the dynamic parameters and visualization model generation WEB 2D automated modeling engine method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the dynamic parameters and visualization model generation WEB 2D automatic modeling engine method as described above.
Compared with the prior art, the dynamic parameter and visual model generation WEB 2D automatic modeling engine system provided by the application uses deep-learning neural network models to mine semantic understanding features from the textual description of the product requirement and feature information from the product requirement parameters, and fuses the two to obtain an accurate semantic understanding of the product requirement, thereby improving modeling efficiency and quality.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The drawings are included to provide a further understanding of the embodiments and constitute a part of the specification; they illustrate the application together with the embodiments and do not limit the application. In the drawings, like reference numerals generally denote like parts or steps.
FIG. 1 is a block diagram of a dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the present application.
FIG. 2 is a system architecture diagram of a dynamic parameters and visualization model generation WEB 2D automated modeling engine system according to an embodiment of the present application.
FIG. 3 is a block diagram of a product text semantic understanding module in a dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the application.
FIG. 4 is a block diagram of a product parameter association module in a dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the application.
FIG. 5 is a block diagram of an optimization module in a dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the application.
FIG. 6 is a flow chart of a dynamic parameter and visual model generation WEB 2D automatic modeling engine method according to an embodiment of the application.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application: As described above, existing WEB 2D modeling has a low degree of intelligence, and manual design is error-prone. During WEB 2D modeling the relations among the many parameters are complicated, so designing a 2D model is inefficient, consumes a great deal of an engineer's effort, and cannot guarantee accuracy. An intelligent WEB 2D automatic modeling engine system is therefore desired that can drive the WEB 2D system to perform automatic modeling accurately and improve modeling efficiency and quality.
Accordingly, the most critical point in actually performing WEB 2D modeling is to understand the product requirement accurately at the semantic level, so as to obtain both the semantic features of the requirement description and the parameter features of the product and thereby improve the quality of 2D modeling. However, because the text description of each product uses different language expressions and the requirement parameters of each product differ, it is difficult to accurately capture the semantic understanding features of the product text description and the features of the product requirement parameters, which makes intelligent 2D modeling difficult. The difficulty therefore lies in how to accurately mine and fuse the semantic understanding feature information of the textual description of the product requirement and the product requirement parameter feature information, so as to obtain accurate semantic understanding of the product requirement and improve modeling efficiency and quality.
In recent years, deep learning and neural networks have been widely used in computer vision, natural language processing, text signal processing, and other fields. Deep learning and neural networks have also reached, and in some cases exceeded, human-level performance in image classification, object detection, semantic segmentation, text translation, and similar tasks.
The development of deep learning and neural networks provides new solutions and ideas for mining and fusing the semantic understanding feature information of the textual description of the product requirement and the product requirement parameter feature information.
Specifically, in the technical solution of the application, the product requirement input by the user is first acquired; it comprises a product text description and requirement parameters entered by the user in a product custom parameter template form. Considering that the product text description is a sentence composed of a plurality of words with contextual semantic associations among them, and in order to extract its semantic understanding feature information accurately, the product text description is first segmented into words, so as to avoid semantic confusion during subsequent feature extraction, and is then semantically encoded by a semantic encoder containing a word embedding layer to obtain a product text description semantic understanding feature vector. In particular, the semantic encoder is a transformer-based context semantic encoder: the segmented product text description is first embedded into computer-recognizable semantic features, and the transformer-based context semantic encoder then extracts the global contextual semantic association features of each word in the product text description, i.e. its semantic understanding feature information, to obtain the product text description semantic understanding feature vector.
Next, considering that the requirement parameters entered by the user in the product custom parameter template form are the requirement parameters of the product and are mutually associated, and in order to ensure that the parameters of the modeled product meet the user requirement, this association feature information should be captured and extracted before 2D modeling. Specifically, in the technical solution of the application, the requirement parameters entered by the user in the product custom parameter template form are first mapped into the same vector space by a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors; the plurality of requirement parameter one-hot encoding vectors are then encoded by a transformer-based context encoder to extract the global context association features of all parameter items in the requirement parameters, yielding a parameter context semantic association feature vector.
Further, the product text description semantic understanding feature vector and the parameter context semantic association feature vector are fused to obtain a modeling requirement understanding feature vector, so that the textual semantic understanding features of the product requirement in the product text description are fused with the context association features of each parameter item in the product requirement parameters, yielding a modeling requirement understanding feature vector that carries the fused association between the textual requirement features and the parameter requirement features of the product.
Then, in order to generate a 2D model from this fused feature information, the modeling requirement understanding feature vector is passed through a diffusion-model-based model generator to generate the 2D model. In one specific example of the application, the diffusion-model-based generator comprises a forward diffusion process and a reverse generation process: the forward diffusion process gradually adds Gaussian noise to the modeling requirement understanding feature vector until it becomes random noise, and the reverse generation process is a denoising process that starts from the random noise and gradually denoises it until the 2D model of the product is generated. Because the overall structure of the diffusion model is not complex, its feature space can be trained at large scale, giving it strong generative ability; and because every point on the normal distribution is a mapping of real data, the model also has good interpretability. The 2D model of the product can therefore be determined accurately, enabling intelligent 2D modeling and improving modeling efficiency and quality.
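As a rough illustration of the forward noising and reverse denoising processes described here, the sketch below implements a heavily simplified conditional diffusion-style generator. The noise schedule, network architecture, dimensions, and the estimate-and-re-noise sampling loop are all assumptions made for the sketch; the patent does not specify these details, and decoding the resulting latent into actual 2D drawing primitives is not shown.

```python
import torch
import torch.nn as nn

class ConditionalDiffusionGenerator(nn.Module):
    """Simplified sketch of a diffusion-model-based generator: a forward process that
    gradually adds Gaussian noise, and a learned reverse process that denoises step by
    step, conditioned on the optimized modeling requirement understanding feature vector."""

    def __init__(self, model_dim: int = 256, cond_dim: int = 512, steps: int = 100):
        super().__init__()
        self.model_dim, self.steps = model_dim, steps
        self.betas = torch.linspace(1e-4, 0.02, steps)                 # assumed noise schedule
        self.alphas_cumprod = torch.cumprod(1.0 - self.betas, dim=0)
        self.denoiser = nn.Sequential(                                 # predicts the added noise
            nn.Linear(model_dim + cond_dim, 512), nn.SiLU(), nn.Linear(512, model_dim))

    def forward_diffuse(self, x0: torch.Tensor, t: int) -> torch.Tensor:
        # Forward diffusion: mix the clean sample with Gaussian noise at step t.
        a_bar = self.alphas_cumprod[t]
        return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * torch.randn_like(x0)

    @torch.no_grad()
    def generate(self, condition: torch.Tensor) -> torch.Tensor:
        # Reverse generation: start from random noise and gradually denoise it.
        x = torch.randn(condition.shape[0], self.model_dim)
        for t in reversed(range(self.steps)):
            a_bar = self.alphas_cumprod[t]
            predicted_noise = self.denoiser(torch.cat([x, condition], dim=-1))
            x0_hat = (x - (1.0 - a_bar).sqrt() * predicted_noise) / a_bar.sqrt()  # estimated clean sample
            x = self.forward_diffuse(x0_hat, t - 1) if t > 0 else x0_hat          # step back towards t - 1
        return x  # latent representation of the 2D model; decoding into drawing primitives is not shown

generator = ConditionalDiffusionGenerator()
latent_2d = generator.generate(torch.randn(1, 512))  # condition: optimized modeling requirement feature vector
```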
In particular, in the technical solution of the application, when the product text description semantic understanding feature vector and the parameter context semantic association feature vector are fused to obtain the modeling requirement understanding feature vector, fusing them directly, for example by point-wise addition, would impair the fusion effect because of the difference between the textual semantic features and the parameter semantic features that the two vectors express.
The applicant considers that the modeling requirement understanding feature vector can be regarded as a response vector that takes the product text description semantic understanding feature vector as the source vector with the parameter context semantic association feature vector as a conditional constraint, and equally as a response vector that takes the parameter context semantic association feature vector as the source vector with the product text description semantic understanding feature vector as a conditional constraint. Therefore, the product text description semantic understanding feature vector and the parameter context semantic association feature vector can each in turn be taken as the source vector, with the modeling requirement understanding feature vector as the response vector, to strengthen the responsive fusion and thereby improve the fusion effect of the modeling requirement understanding feature vector on the two vectors.
Specifically, the product text description semantic understanding feature vector is denoted V1, the parameter context semantic association feature vector is denoted V2, and the modeling requirement understanding feature vector is denoted Vc. The incoherent sparse responsive fusion of Vc with V1 and with V2 is calculated separately to obtain the partial responsiveness fusion feature vectors Vf1 (with respect to the product text description semantic understanding feature vector) and Vf2 (with respect to the parameter context semantic association feature vector), where ||·||1 and ||·||2 denote the ℓ1-norm and ℓ2-norm of a vector, L is the length of the vectors, ⊗ and ⊙ denote the vector product and the vector dot product, respectively, and all vectors are in row-vector form.
Here, using the source vector as the ground-truth distribution for the responsiveness fusion between feature domains, the incoherent sparse response fusion representation between vectors is obtained through the fuzzy bit-wise distribution responsiveness of the vector difference expressed by the ℓ1-norm and the true differential embedding responsiveness based on the modulus constraint of the difference vector, so as to extract the probability-distribution-descriptive response relation after feature vector fusion. This improves the fusion expression effect of the partial responsiveness fusion feature vectors Vf1 and Vf2, which are incoherent sparse response fusions, on the source vectors with which they have a response relation. Thus, optimizing the modeling requirement understanding feature vector by position-wise point addition of Vf1 and Vf2 improves its feature expression effect and, in turn, the accuracy of the 2D model generated by the diffusion-model-based model generator. In this way, a WEB 2D automatic modeling engine can be generated based on the dynamic parameters and the visual model to perform 2D modeling accurately and improve modeling efficiency and quality.
Based on the above, the application proposes a dynamic parameter and visual model generation WEB 2D automatic modeling engine system, comprising: a product requirement acquisition module for acquiring the product requirement input by a user, the product requirement comprising a product text description and requirement parameters entered by the user in a product custom parameter template form; a product text semantic understanding module for performing word segmentation on the product text description and then passing it through a semantic encoder containing a word embedding layer to obtain a product text description semantic understanding feature vector; a product parameter encoding module for passing each requirement parameter entered by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors; a product parameter association module for passing the plurality of requirement parameter one-hot encoding vectors through a transformer-based context encoder to obtain a parameter context semantic association feature vector; a feature fusion module for fusing the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling requirement understanding feature vector; an optimization module for performing feature distribution optimization on the modeling requirement understanding feature vector based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain an optimized modeling requirement understanding feature vector; and a 2D model generation module for passing the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System: FIG. 1 is a block diagram of a dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the present application. As shown in FIG. 1, the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300 according to an embodiment of the present application includes: a product requirement acquisition module 310; a product text semantic understanding module 320; a product parameter encoding module 330; a product parameter association module 340; a feature fusion module 350; an optimization module 360; and a 2D model generation module 370.
The product requirement acquisition module 310 is configured to acquire the product requirement input by a user, the product requirement comprising a product text description and requirement parameters entered by the user in a product custom parameter template form; the product text semantic understanding module 320 is configured to perform word segmentation on the product text description and then obtain a product text description semantic understanding feature vector through a semantic encoder containing a word embedding layer; the product parameter encoding module 330 is configured to pass the requirement parameters entered by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors; the product parameter association module 340 is configured to pass the plurality of requirement parameter one-hot encoding vectors through a transformer-based context encoder to obtain a parameter context semantic association feature vector; the feature fusion module 350 is configured to fuse the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling requirement understanding feature vector; the optimization module 360 is configured to perform feature distribution optimization on the modeling requirement understanding feature vector based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain an optimized modeling requirement understanding feature vector; and the 2D model generation module 370 is configured to pass the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model.
FIG. 2 is a system architecture diagram of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the present application. As shown in FIG. 2, in this architecture, the product requirement acquisition module 310 first obtains the product requirement input by the user, which comprises a product text description and requirement parameters entered by the user in a product custom parameter template form. The product text semantic understanding module 320 then performs word segmentation on the product text description acquired by the product requirement acquisition module 310 and obtains a product text description semantic understanding feature vector through a semantic encoder containing a word embedding layer. The product parameter encoding module 330 passes the requirement parameters entered by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors. The product parameter association module 340 then passes the plurality of requirement parameter one-hot encoding vectors obtained by the product parameter encoding module 330 through a transformer-based context encoder to obtain a parameter context semantic association feature vector. The feature fusion module 350 fuses the product text description semantic understanding feature vector obtained by the product text semantic understanding module 320 and the parameter context semantic association feature vector obtained by the product parameter association module 340 to obtain a modeling requirement understanding feature vector. The optimization module 360 performs feature distribution optimization on the modeling requirement understanding feature vector obtained by the feature fusion module 350, based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector, to obtain an optimized modeling requirement understanding feature vector. Finally, the 2D model generation module 370 passes the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model.
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the product requirement acquisition module 310 is configured to acquire the product requirement input by the user, comprising a product text description and requirement parameters entered by the user in a product custom parameter template form. It should be understood that in actual WEB 2D modeling, the quality of the 2D model can be improved by understanding the product requirement precisely at the semantic level to obtain the semantic features of the requirement description and the parameter features of the product. However, because the textual description of each product uses different language expressions and the requirement parameters of each product differ, the technical solution of the application first acquires the product requirement input by the user, comprising the product text description and the requirement parameters entered in the product custom parameter template form, and then fuses the semantic understanding feature information of the textual description of the product requirement with the feature information of the product requirement parameters to obtain an accurate semantic understanding of the product requirement, thereby improving modeling efficiency and quality.
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the product text semantic understanding module 320 is configured to perform word segmentation on the product text description and then obtain a product text description semantic understanding feature vector through a semantic encoder containing a word embedding layer. Considering that the product text description is a sentence composed of a plurality of words with contextual semantic associations among them, and in order to extract its semantic understanding feature information accurately, the product text description is first segmented into words to avoid semantic confusion during subsequent feature extraction, and is then semantically encoded by the semantic encoder containing a word embedding layer to obtain the product text description semantic understanding feature vector. In particular, the semantic encoder is a transformer-based context semantic encoder: the segmented product text description is first embedded into computer-recognizable semantic features, and the transformer-based context semantic encoder then extracts the global contextual semantic association features of each word in the product text description, i.e. its semantic understanding feature information, to obtain the product text description semantic understanding feature vector.
FIG. 3 is a block diagram of the product text semantic understanding module in the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the application. As shown in FIG. 3, the product text semantic understanding module 320 includes: a word segmentation unit 321 for performing word segmentation on the product text description to convert it into a word sequence composed of a plurality of words; a word embedding unit 322 for mapping each word in the word sequence into a word embedding vector using the embedding layer of the semantic encoder to obtain a sequence of word embedding vectors; a context encoding unit 323 for performing transformer-based global context semantic encoding on the sequence of word embedding vectors using the transformer of the semantic encoder to obtain a plurality of global context semantic feature vectors; and a concatenation unit 324 for concatenating the plurality of global context semantic feature vectors to obtain the product text description semantic understanding feature vector. The context encoding unit 323 includes: a first query vector construction subunit for arranging the sequence of word embedding vectors into a one-dimensional global feature vector; a first self-attention subunit for calculating the product between the global feature vector and the transpose of each word vector in the sequence of word embedding vectors to obtain a plurality of self-attention association matrices; a first normalization subunit for normalizing each of the self-attention association matrices to obtain a plurality of normalized self-attention association matrices; a first attention calculation subunit for passing each normalized self-attention association matrix through a Softmax classification function to obtain a plurality of probability values; a first attention application subunit for weighting each word vector in the sequence of word embedding vectors with the corresponding probability value as weight to obtain a plurality of context semantic feature vectors; and a first concatenation subunit for concatenating the plurality of context semantic feature vectors to obtain the global context semantic feature vector.
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the product parameter encoding module 330 is configured to pass the requirement parameters entered by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors. Considering that the requirement parameters entered by the user in the product custom parameter template form are the requirement parameters of the product and are mutually associated, and in order to ensure that the parameters of the modeled product meet the user requirement, this association feature information is captured and extracted before 2D modeling. Specifically, in the technical solution of the application, the requirement parameters entered by the user in the product custom parameter template form are first mapped into the same vector space by the one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors.
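A minimal sketch of this one-hot encoding step, assuming a hypothetical custom parameter template whose fields and value sets are invented for illustration:

```python
import torch
import torch.nn.functional as F

# Hypothetical custom parameter template: each field has its own finite value set.
PARAMETER_VALUE_SETS = {
    "shape":  ["rectangle", "circle", "polygon"],
    "color":  ["red", "green", "blue", "gray"],
    "layout": ["horizontal", "vertical"],
}

def one_hot_encode_parameters(filled_template: dict) -> list:
    """Map each requirement parameter from the filled-in template form into a
    one-hot encoding vector over that parameter's value set."""
    vectors = []
    for field, values in PARAMETER_VALUE_SETS.items():
        index = values.index(filled_template[field])   # position of the chosen value
        vectors.append(F.one_hot(torch.tensor(index), num_classes=len(values)).float())
    return vectors                                      # plurality of requirement parameter one-hot encoding vectors

vectors = one_hot_encode_parameters({"shape": "circle", "color": "gray", "layout": "horizontal"})
```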
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the product parameter association module 340 is configured to pass the plurality of requirement parameter one-hot encoding vectors through a transformer-based context encoder to obtain a parameter context semantic association feature vector. That is, in the technical solution of the application, the plurality of requirement parameter one-hot encoding vectors are encoded by the transformer-based context encoder to extract the global context association features of each parameter item in the requirement parameters, thereby obtaining the parameter context semantic association feature vector.
FIG. 4 is a block diagram of the product parameter association module in the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the application. As shown in FIG. 4, the product parameter association module 340 includes: a second query vector construction unit 341 for arranging the plurality of requirement parameter one-hot encoding vectors into a one-dimensional global requirement parameter one-hot encoding vector; a second self-attention unit 342 for calculating the product between the global requirement parameter one-hot encoding vector and the transpose of each requirement parameter one-hot encoding vector to obtain a plurality of self-attention association matrices; a second normalization unit 343 for normalizing each of the self-attention association matrices to obtain a plurality of normalized self-attention association matrices; a second attention calculation unit 344 for passing each normalized self-attention association matrix through a Softmax classification function to obtain a plurality of probability values; a second attention application unit 345 for weighting each requirement parameter one-hot encoding vector with the corresponding probability value as weight to obtain a plurality of context semantic requirement parameter one-hot encoding vectors; and a second concatenation unit 346 for concatenating the plurality of context semantic requirement parameter one-hot encoding vectors to obtain the parameter context semantic association feature vector.
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the feature fusion module 350 is configured to fuse the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling requirement understanding feature vector. That is, the two vectors are fused so that the textual semantic understanding features of the product requirement in the product text description are combined with the context association features of each parameter item in the product requirement parameters, yielding a modeling requirement understanding feature vector that carries the fused association between the textual requirement features and the parameter requirement features of the product. In a specific example of the application, the two may be fused by concatenation, more specifically according to the following formula:
Vc = Concat[V1, V2]
where V1 denotes the product text description semantic understanding feature vector, V2 denotes the parameter context semantic association feature vector, Concat[ , ] denotes the concatenation (cascade) function, and Vc denotes the modeling requirement understanding feature vector.
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the optimization module 360 is configured to perform feature distribution optimization on the modeling requirement understanding feature vector, based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector, to obtain an optimized modeling requirement understanding feature vector. In the technical solution of the application, when the product text description semantic understanding feature vector and the parameter context semantic association feature vector are fused to obtain the modeling requirement understanding feature vector, fusing them directly, for example by point-wise addition, would impair the fusion effect because of the difference between the textual semantic features and the parameter semantic features that the two vectors express. The modeling requirement understanding feature vector can be regarded as a response vector that takes the product text description semantic understanding feature vector as the source vector with the parameter context semantic association feature vector as a conditional constraint, or as a response vector that takes the parameter context semantic association feature vector as the source vector with the product text description semantic understanding feature vector as a conditional constraint. Therefore, the two vectors can each in turn be taken as the source vector, with the modeling requirement understanding feature vector as the response vector, to strengthen the responsive fusion and improve the fusion effect.
Specifically, the product text description semantic understanding feature vector is denoted V1, the parameter context semantic association feature vector is denoted V2, and the modeling requirement understanding feature vector is denoted Vc. The incoherent sparse responsive fusion of V1 and Vc is calculated to obtain the first partial responsiveness fusion feature vector Vf1, and the incoherent sparse responsive fusion of V2 and Vc is calculated to obtain the second partial responsiveness fusion feature vector Vf2, where ||·||1 and ||·||2 denote the ℓ1-norm and ℓ2-norm of a vector, L is the length of the vectors, ⊗ and ⊙ denote the vector product and the vector dot product, respectively, and all vectors are in row-vector form.
Here, using the source vector as the ground-truth distribution for the responsiveness fusion between feature domains, the incoherent sparse response fusion representation between vectors is obtained through the fuzzy bit-wise distribution responsiveness of the vector difference expressed by the ℓ1-norm and the true differential embedding responsiveness based on the modulus constraint of the difference vector, so as to extract the probability-distribution-descriptive response relation after feature vector fusion. This improves the fusion expression effect of the partial responsiveness fusion feature vectors Vf1 and Vf2, which are incoherent sparse response fusions, on the source vectors with which they have a response relation. Thus, optimizing the modeling requirement understanding feature vector by position-wise point addition of Vf1 and Vf2 improves its feature expression effect and, in turn, the accuracy of the 2D model generated by the diffusion-model-based model generator. In this way, a WEB 2D automatic modeling engine can be generated based on the dynamic parameters and the visual model to perform 2D modeling accurately and improve modeling efficiency and quality.
FIG. 5 is a block diagram of the optimization module in the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to an embodiment of the present application. As shown in FIG. 5, the optimization module 360 includes: a first response optimization unit 361 for calculating the incoherent sparse responsive fusion of the product text description semantic understanding feature vector and the modeling requirement understanding feature vector to obtain a first partial responsiveness fusion feature vector; a second response optimization unit 362 for calculating the incoherent sparse responsive fusion of the parameter context semantic association feature vector and the modeling requirement understanding feature vector to obtain a second partial responsiveness fusion feature vector; and a point-addition optimization unit 363 for performing position-wise point addition on the first partial responsiveness fusion feature vector and the second partial responsiveness fusion feature vector to obtain the optimized modeling requirement understanding feature vector.
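Structurally, the optimization module can be sketched as two partial responsiveness fusions followed by position-wise point addition. The fusion function below is only a placeholder standing in for the incoherent sparse responsive fusion defined by the patent's own formulas, which are not reproduced here; the sketch also assumes, for simplicity, that all three vectors have the same length.

```python
import torch

def placeholder_responsive_fusion(source: torch.Tensor, response: torch.Tensor) -> torch.Tensor:
    # Placeholder only: stands in for the incoherent sparse responsive fusion defined by
    # the patent's formulas. It simply re-weights the response vector by a normalized
    # difference to the source vector.
    return response * torch.softmax(-(source - response).abs(), dim=-1)

def optimize_modeling_requirement_vector(v1: torch.Tensor, v2: torch.Tensor, vc: torch.Tensor) -> torch.Tensor:
    """Structure of the optimization module: two partial responsiveness fusion feature
    vectors combined by position-wise point addition. Assumes v1, v2 and vc share the
    same length (a simplification for this sketch)."""
    vf1 = placeholder_responsive_fusion(v1, vc)   # first partial responsiveness fusion feature vector
    vf2 = placeholder_responsive_fusion(v2, vc)   # second partial responsiveness fusion feature vector
    return vf1 + vf2                              # optimized modeling requirement understanding feature vector

v_opt = optimize_modeling_requirement_vector(torch.randn(1, 8), torch.randn(1, 8), torch.randn(1, 8))
```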
Specifically, during operation of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300, the 2D model generation module 370 is configured to pass the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model. That is, to generate a 2D model from the fused feature information, namely the text semantic understanding features of the product requirement in the product text description and the context-associated features of each parameter item among the product requirement parameters, the optimized modeling requirement understanding feature vector is further passed through a model generator based on a diffusion model. In particular, in one specific example of the present application, the diffusion-model-based generator comprises a forward diffusion process and a reverse generation process: the forward diffusion process gradually adds Gaussian noise to the modeling requirement understanding feature vector until it becomes random noise, while the reverse generation process is a denoising process that starts from the random noise and gradually removes noise until a 2D model of the product is generated. It should be understood that, because the overall structure of the diffusion model is simple in principle, its feature space can be trained at large scale, which gives the model strong generative capability; and because each point on the normal distribution is a mapping of real data, the model also offers better interpretability. Therefore, the 2D model of the product can be determined accurately, enabling intelligent 2D modeling and improving modeling efficiency and quality.
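For illustration only, the following minimal Python sketch shows the forward-noising and reverse-denoising structure described above. It is not the patented implementation; the vector dimension, noise schedule, step count, and the placeholder denoiser are assumptions introduced purely to make the sketch runnable.

```python
import numpy as np

def forward_diffuse(x0, betas, rng):
    # Gradually add Gaussian noise to the feature vector until it is close to pure noise.
    x = x0.copy()
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

def reverse_generate(xT, betas, denoise_step):
    # Start from random noise and remove noise step by step; a trained network
    # would supply denoise_step in a real generator.
    x = xT.copy()
    for t in reversed(range(len(betas))):
        x = denoise_step(x, t)
    return x

rng = np.random.default_rng(0)
feature = rng.standard_normal(128)      # stands in for the optimized modeling requirement feature vector
betas = np.linspace(1e-4, 2e-2, 50)     # assumed linear noise schedule
noisy = forward_diffuse(feature, betas, rng)
generated = reverse_generate(noisy, betas, lambda x, t: 0.98 * x)  # placeholder denoiser
```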
In summary, the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300 according to the embodiment of the present application has been illustrated. It mines the semantic understanding feature information of the textual description of the product requirement and the product requirement parameter feature information using a deep-learning-based neural network model, and further fuses the two to obtain accurate semantic understanding of the product requirement, thereby improving modeling efficiency and quality.
As described above, the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system according to the embodiment of the present application may be implemented in various terminal devices. In one example, the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300 may be integrated into a terminal device as a software module and/or a hardware module. For example, it may be a software module in the operating system of the terminal device, or an application developed for the terminal device; of course, the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300 may equally be one of the many hardware modules of the terminal device.
Alternatively, in another example, the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system 300 and the terminal device may be separate devices, in which case the system 300 may be connected to the terminal device through a wired and/or wireless network and transmit interaction information in an agreed data format.
Exemplary method: FIG. 6 is a flow chart of a method for generating a WEB 2D automatic modeling engine from dynamic parameters and a visualization model according to an embodiment of the present application. As shown in FIG. 6, the method according to an embodiment of the present application includes the following steps: S110, acquiring a product requirement input by a user, wherein the product requirement comprises a product text description and requirement parameters input by the user in a product custom parameter template form; S120, by a product text semantic understanding module, performing word segmentation processing on the product text description and then passing it through a semantic encoder comprising a word embedding layer to obtain a product text description semantic understanding feature vector; S130, by a product parameter coding module, respectively passing the requirement parameters input by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of requirement parameter one-hot encoding vectors; S140, by a product parameter association module, passing the plurality of requirement parameter one-hot encoding vectors through a converter-based context encoder to obtain a parameter context semantic association feature vector; S150, by a feature fusion module, fusing the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling requirement understanding feature vector; S160, by an optimization module, performing feature distribution optimization on the modeling requirement understanding feature vector based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain an optimized modeling requirement understanding feature vector; and S170, by a 2D model generation module, passing the optimized modeling requirement understanding feature vector through a diffusion-model-based model generator to generate a 2D model.
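Purely as an illustrative aid, the following Python sketch strings steps S110 to S170 together. The component names, signatures, and the concatenation-based fusion are assumptions used only to show the data flow; they are not the patented implementations of the individual modules.

```python
import numpy as np

def run_modeling_pipeline(product_text, param_template,
                          semantic_encoder, one_hot_encoder,
                          context_encoder, optimizer, generator):
    # S120: semantic understanding of the product text description
    text_vec = semantic_encoder(product_text)
    # S130: one-hot encode each requirement parameter from the custom parameter template
    param_one_hots = [one_hot_encoder(name, value) for name, value in param_template.items()]
    # S140: converter-based context encoding of the parameter one-hot vectors
    param_vec = context_encoder(param_one_hots)
    # S150: cascade (concatenation) fusion into the modeling requirement understanding vector
    modeling_vec = np.concatenate([text_vec, param_vec])
    # S160: feature distribution optimization of the fused vector
    optimized_vec = optimizer(text_vec, param_vec, modeling_vec)
    # S170: diffusion-model-based generation of the 2D model
    return generator(optimized_vec)
```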
In one example, in the method for generating a WEB 2D automatic modeling engine from the dynamic parameters and the visualization model, step S120 includes: performing word segmentation processing on the product text description to convert it into a word sequence composed of a plurality of words; mapping each word in the word sequence into a word embedding vector using the embedding layer of the semantic encoder comprising the embedding layer, so as to obtain a sequence of word embedding vectors; performing global context semantic coding, based on the converter concept, on the sequence of word embedding vectors using the converter of the semantic encoder comprising the embedding layer, so as to obtain a plurality of global context semantic feature vectors; and cascading the plurality of global context semantic feature vectors to obtain the product text description semantic understanding feature vector. Performing the converter-concept-based global context semantic coding on the sequence of word embedding vectors to obtain the plurality of global context semantic feature vectors includes: performing one-dimensional arrangement of the sequence of word embedding vectors to obtain a global feature vector; calculating the product between the global feature vector and the transpose vector of each word vector in the sequence of word embedding vectors to obtain a plurality of self-attention correlation matrices; performing standardization processing on each of the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices; passing each of the standardized self-attention correlation matrices through a Softmax classification function to obtain a plurality of probability values; weighting each word vector in the sequence of word embedding vectors with each of the plurality of probability values as a weight to obtain a plurality of context semantic feature vectors; and cascading the plurality of context semantic feature vectors to obtain the global context semantic feature vector.
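The following loose NumPy sketch illustrates the attention-style weighting just described. The exact matrix shapes, the standardization, and the reduction of each correlation matrix to a scalar score are assumptions made only so that the example runs end to end; they are not asserted to be the patented formulation.

```python
import numpy as np

def global_context_encode(word_embeddings):
    """word_embeddings: (n_words, dim) sequence of word embedding vectors."""
    global_vec = word_embeddings.reshape(-1)                             # one-dimensional arrangement
    corr_matrices = [np.outer(global_vec, w) for w in word_embeddings]   # per-word correlation matrices
    normed = [(m - m.mean()) / (m.std() + 1e-6) for m in corr_matrices]  # standardization
    scores = np.array([m.mean() for m in normed])                        # scalar score per word (assumed)
    probs = np.exp(scores) / np.exp(scores).sum()                        # Softmax over the scores
    weighted = [p * w for p, w in zip(probs, word_embeddings)]           # weight each word vector
    return np.concatenate(weighted)                                      # cascade into one feature vector

feature_vector = global_context_encode(np.random.default_rng(1).standard_normal((6, 16)))
```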
In one example, in the method for generating a WEB 2D automatic modeling engine from the dynamic parameters and the visualization model, step S140 includes: performing one-dimensional arrangement of the plurality of requirement parameter one-hot encoding vectors to obtain a global requirement parameter one-hot encoding vector; calculating the product between the global requirement parameter one-hot encoding vector and the transpose vector of each requirement parameter one-hot encoding vector in the plurality of requirement parameter one-hot encoding vectors to obtain a plurality of self-attention correlation matrices; performing standardization processing on each of the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices; passing each of the standardized self-attention correlation matrices through a Softmax classification function to obtain a plurality of probability values; weighting each requirement parameter one-hot encoding vector in the plurality of requirement parameter one-hot encoding vectors with each of the plurality of probability values as a weight to obtain a plurality of context semantic requirement parameter one-hot encoding vectors; and cascading the plurality of context semantic requirement parameter one-hot encoding vectors to obtain the parameter context semantic association feature vector.
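As a small illustration of the one-hot parameter encoding that feeds this step, the sketch below builds one-hot vectors from a hypothetical custom parameter template (the parameter names, vocabularies, and padding are invented for the example) and stacks them for the converter-based context encoder; the `global_context_encode` helper from the previous sketch could stand in for that encoder.

```python
import numpy as np

def one_hot(value, vocabulary):
    # Encode a single requirement parameter value as a one-hot vector over its vocabulary.
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0
    return vec

# Hypothetical product custom parameter template and a user's inputs.
param_vocab = {"material": ["steel", "aluminum", "plastic"], "shape": ["round", "square"]}
user_params = {"material": "aluminum", "shape": "round"}

one_hot_vectors = [one_hot(user_params[name], vocab) for name, vocab in param_vocab.items()]
# Pad to a common width so the vectors can be stacked and passed to the context encoder.
width = max(len(v) for v in one_hot_vectors)
stacked = np.stack([np.pad(v, (0, width - len(v))) for v in one_hot_vectors])
# 'stacked' would then be fed to the converter-based context encoder (for example, the
# attention-style sketch above) to obtain the parameter context semantic association feature vector.
```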
In one example, in the method for generating a WEB 2D automatic modeling engine from the dynamic parameters and the visualization model, step S150 includes: fusing the product text description semantic understanding feature vector and the parameter context semantic association feature vector by means of a cascade function to obtain the modeling requirement understanding feature vector;
that is, the modeling requirement understanding feature vector is obtained by applying the cascade (concatenation) function to the product text description semantic understanding feature vector and the parameter context semantic association feature vector.
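In code, and assuming the two feature vectors are plain NumPy row vectors of illustrative sizes, the cascade fusion amounts to a concatenation:

```python
import numpy as np

rng = np.random.default_rng(2)
text_feature = rng.standard_normal(96)    # product text description semantic understanding vector (assumed size)
param_feature = rng.standard_normal(32)   # parameter context semantic association vector (assumed size)
modeling_feature = np.concatenate([text_feature, param_feature])  # cascade fusion
```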
In one example, in the method for generating a WEB 2D automatic modeling engine from the dynamic parameters and the visualization model, step S160 includes: calculating the incoherent sparse responsive fusion of the product text description semantic understanding feature vector and the modeling requirement understanding feature vector to obtain a first partial responsiveness fusion feature vector; calculating the incoherent sparse responsive fusion of the parameter context semantic association feature vector and the modeling requirement understanding feature vector to obtain a second partial responsiveness fusion feature vector; and performing position-wise point addition on the first partial responsiveness fusion feature vector and the second partial responsiveness fusion feature vector to obtain the optimized modeling requirement understanding feature vector. Calculating the incoherent sparse responsive fusion of the product text description semantic understanding feature vector and the modeling requirement understanding feature vector to obtain the first partial responsiveness fusion feature vector proceeds according to the same formula as in the system embodiment,
where the vectors involved are the product text description semantic understanding feature vector, the modeling requirement understanding feature vector and the first partial responsiveness fusion feature vector, and the formula is expressed in terms of the first (L1) and second (L2) norms of a vector, the length of the vector, and vector multiplication and vector dot multiplication, with all vectors in row-vector form. Likewise, calculating the incoherent sparse responsive fusion of the parameter context semantic association feature vector and the modeling requirement understanding feature vector to obtain the second partial responsiveness fusion feature vector proceeds according to the same formula,
where the vectors involved are the parameter context semantic association feature vector, the modeling requirement understanding feature vector and the second partial responsiveness fusion feature vector, again expressed in terms of the first (L1) and second (L2) norms of a vector, the length of the vector, and vector multiplication and vector dot multiplication, with all vectors in row-vector form.
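The sketch below reproduces only the surrounding structure of step S160: two partial fusions of a source vector with the modeling requirement understanding feature vector, followed by position-wise point addition. The body of `responsive_fusion` is a deliberately simple placeholder and is not the patented incoherent sparse responsive fusion formula.

```python
import numpy as np

def responsive_fusion(source_vec, modeling_vec):
    # Placeholder fusion: pads the source vector to the modeling vector's length and
    # averages the two, standing in for the incoherent sparse responsive fusion.
    padded = np.pad(source_vec, (0, len(modeling_vec) - len(source_vec)))
    return 0.5 * (padded + modeling_vec)

def optimize_modeling_vector(text_vec, param_vec, modeling_vec):
    first_fusion = responsive_fusion(text_vec, modeling_vec)    # first partial responsiveness fusion
    second_fusion = responsive_fusion(param_vec, modeling_vec)  # second partial responsiveness fusion
    return first_fusion + second_fusion                         # position-wise point addition

rng = np.random.default_rng(4)
text_vec, param_vec = rng.standard_normal(96), rng.standard_normal(32)
modeling_vec = np.concatenate([text_vec, param_vec])            # cascade fusion from S150
optimized_vec = optimize_modeling_vector(text_vec, param_vec, modeling_vec)
```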
In summary, the method for generating a WEB 2D automatic modeling engine from dynamic parameters and a visualization model according to the embodiment of the present application has been explained. The method mines the semantic understanding feature information of the textual description of the product requirement and the product requirement parameter feature information using a deep-learning-based neural network model, and further fuses the two to obtain accurate semantic understanding of the product requirement, thereby improving modeling efficiency and quality.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 7.
Fig. 7 illustrates a block diagram of an electronic device according to an embodiment of the application.
As shown in fig. 7, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the dynamic parameter and visualization model generation WEB 2D automatic modeling engine system of the various embodiments of the present application described above and/or other desired functions. Various content, such as the modeling requirement understanding feature vector and the generated 2D model, may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 can output various information including models and the like to the outside. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in FIG. 7; components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, an embodiment of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method for generating a WEB 2D automatic modeling engine from dynamic parameters and a visualization model according to various embodiments of the present application described in the "Exemplary Method" section above in this specification.
The computer program product may write the program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, an embodiment of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method for generating a WEB 2D automatic modeling engine from dynamic parameters and a visualization model according to various embodiments of the present application described in the "Exemplary Method" section above in this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, but it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be construed as necessarily possessed by the various embodiments of the application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in the present application are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by a person skilled in the art, the devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", "having", and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to".
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (2)

1. A dynamic parameter and visualization model generation WEB 2D automatic modeling engine system, comprising:
the product demand acquisition module is used for acquiring product demands input by a user, wherein the product demands comprise product text descriptions and demand parameters input by the user in a product custom parameter template form;
the product text semantic understanding module is used for obtaining a product text description semantic understanding feature vector through a semantic encoder comprising a word embedding layer after word segmentation processing is carried out on the product text description;
The product parameter coding module is used for respectively passing the demand parameters input by the user in the product custom parameter template form through a one-hot encoder to obtain a plurality of demand parameter one-hot encoding vectors;
the product parameter association module is used for passing the plurality of demand parameter one-hot encoding vectors through a context encoder based on a converter to obtain a parameter context semantic association feature vector;
The feature fusion module is used for fusing the product text description semantic understanding feature vector and the parameter context semantic association feature vector to obtain a modeling demand understanding feature vector;
The optimization module is used for carrying out feature distribution optimization on the modeling demand understanding feature vector based on the product text description semantic understanding feature vector and the parameter context semantic association feature vector so as to obtain an optimized modeling demand understanding feature vector; and
The 2D model generation module is used for passing the optimized modeling demand understanding feature vector through a model generator based on a diffusion model to generate a 2D model;
wherein, the optimization module includes:
the first response optimization unit is used for calculating incoherent sparse response type fusion of the product text description semantic understanding feature vector and the modeling demand understanding feature vector to obtain a first partial response type fusion feature vector;
the second response optimization unit is used for calculating incoherent sparse response type fusion of the parameter context semantic association feature vector and the modeling demand understanding feature vector to obtain a second partial response type fusion feature vector;
The point addition optimization unit is used for performing position-wise point addition on the first partial response type fusion feature vector and the second partial response type fusion feature vector to obtain the optimized modeling demand understanding feature vector;
Wherein the first response optimization unit is configured to: calculate the incoherent sparse response type fusion of the product text description semantic understanding feature vector and the modeling demand understanding feature vector according to a fusion formula to obtain the first partial response type fusion feature vector, the formula being expressed in terms of the product text description semantic understanding feature vector, the modeling demand understanding feature vector and the first partial response type fusion feature vector, the first norm and the second norm of a vector, the length of the vector, and vector multiplication and vector dot multiplication, with all vectors in row-vector form;
wherein the second response optimization unit is configured to: calculate the incoherent sparse response type fusion of the parameter context semantic association feature vector and the modeling demand understanding feature vector according to the same fusion formula to obtain the second partial response type fusion feature vector, the formula being expressed in terms of the parameter context semantic association feature vector, the modeling demand understanding feature vector and the second partial response type fusion feature vector, the first norm and the second norm of a vector, the length of the vector, and vector multiplication and vector dot multiplication, with all vectors in row-vector form;
The product text semantic understanding module comprises:
The word segmentation unit is used for carrying out word segmentation processing on the product text description so as to convert the product text description into a word sequence consisting of a plurality of words;
the word embedding unit is used for mapping each word in the word sequence into a word embedding vector by using an embedding layer of a semantic encoder comprising the embedding layer so as to obtain a sequence of word embedding vectors;
A context coding unit, configured to perform converter-concept-based global context semantic coding on the sequence of word embedding vectors using the converter of the semantic encoder including the embedding layer, so as to obtain a plurality of global context semantic feature vectors; and
The cascading unit is used for cascading the global context semantic feature vectors to obtain the product text description semantic understanding feature vector;
Wherein the context encoding unit includes:
A first query vector construction subunit, configured to perform one-dimensional arrangement on the sequence of word embedding vectors to obtain a global feature vector;
A first self-attention subunit, configured to calculate a product between the global feature vector and a transpose vector of a word vector in the sequence of word embedding vectors to obtain a plurality of self-attention association matrices;
the first normalization subunit is used for respectively performing normalization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices;
the first attention calculating subunit is used for obtaining a plurality of probability values through a Softmax classification function by using each normalized self-attention correlation matrix in the normalized self-attention correlation matrices;
A first attention applying subunit, configured to weight each word vector in the sequence of word embedding vectors with each probability value in the plurality of probability values as a weight to obtain the plurality of context semantic feature vectors;
A first cascade subunit, configured to cascade the plurality of context semantic feature vectors to obtain the global context semantic feature vector;
wherein, the product parameter association module comprises:
The second query vector construction unit is used for performing one-dimensional arrangement on the plurality of demand parameter one-hot encoding vectors so as to obtain a global demand parameter one-hot encoding vector;
A second self-attention unit, configured to calculate the product between the global demand parameter one-hot encoding vector and the transpose vector of each demand parameter one-hot encoding vector in the plurality of demand parameter one-hot encoding vectors to obtain a plurality of self-attention correlation matrices;
the second normalization unit is used for respectively performing normalization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices;
The second attention calculating unit is used for obtaining a plurality of probability values through a Softmax classification function by using each normalized self-attention correlation matrix in the normalized self-attention correlation matrices;
a second attention applying unit, configured to weight each demand parameter one-hot encoding vector in the plurality of demand parameter one-hot encoding vectors by using each probability value in the plurality of probability values as a weight, so as to obtain the plurality of context semantic demand parameter one-hot encoding vectors;
And the second cascading unit is used for cascading the plurality of context semantic demand parameter one-hot encoding vectors to obtain the parameter context semantic association feature vector.
2. The dynamic parameter and visualization model generation WEB 2D automatic modeling engine system of claim 1, wherein the feature fusion module is configured to: fuse the product text description semantic understanding feature vector and the parameter context semantic association feature vector by means of a cascade function to obtain the modeling requirement understanding feature vector;
wherein the modeling requirement understanding feature vector is the concatenation of the product text description semantic understanding feature vector and the parameter context semantic association feature vector, the cascade function denoting vector concatenation.
CN202310209360.1A 2023-03-07 2023-03-07 Dynamic parameter and visual model generation WEB 2D automatic modeling engine system Active CN116127019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310209360.1A CN116127019B (en) 2023-03-07 2023-03-07 Dynamic parameter and visual model generation WEB 2D automatic modeling engine system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310209360.1A CN116127019B (en) 2023-03-07 2023-03-07 Dynamic parameter and visual model generation WEB 2D automatic modeling engine system

Publications (2)

Publication Number Publication Date
CN116127019A CN116127019A (en) 2023-05-16
CN116127019B true CN116127019B (en) 2024-06-11

Family

ID=86311858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310209360.1A Active CN116127019B (en) 2023-03-07 2023-03-07 Dynamic parameter and visual model generation WEB 2D automatic modeling engine system

Country Status (1)

Country Link
CN (1) CN116127019B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578288B (en) * 2023-05-30 2023-11-28 杭州行至云起科技有限公司 Structured self-defined lamp efficiency configuration method and system based on logic judgment
CN117348877B (en) * 2023-10-20 2024-08-27 江苏洪旭德生科技有限公司 Technology development system and method based on artificial intelligence technology

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115130331A (en) * 2022-08-30 2022-09-30 湖北工业大学 Robust frequency invariant beam forming method based on sparse array
CN115146488A (en) * 2022-09-05 2022-10-04 山东鼹鼠人才知果数据科技有限公司 Variable business process intelligent modeling system and method based on big data
CN115203380A (en) * 2022-09-19 2022-10-18 山东鼹鼠人才知果数据科技有限公司 Text processing system and method based on multi-mode data fusion
CN115238591A (en) * 2022-08-12 2022-10-25 杭州国辰智企科技有限公司 Dynamic parameter checking and driving CAD automatic modeling engine system
CN115266159A (en) * 2022-08-01 2022-11-01 浙江师范大学 Fault diagnosis method and system for train traction system
CN115409018A (en) * 2022-09-20 2022-11-29 浙江书香荷马文化有限公司 Company public opinion monitoring system and method based on big data
CN115471216A (en) * 2022-11-03 2022-12-13 深圳市顺源科技有限公司 Data management method of intelligent laboratory management platform
CN115564203A (en) * 2022-09-23 2023-01-03 杭州国辰智企科技有限公司 Equipment real-time performance evaluation system and method based on multi-dimensional data cooperation
CN115602315A (en) * 2022-10-14 2023-01-13 浙江省中医院、浙江中医药大学附属第一医院(浙江省东方医院)(Cn) Glomerular filtration rate estimation system based on comprehensive data analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11423304B2 (en) * 2020-01-15 2022-08-23 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for semantic analysis of multimedia data using attention-based fusion network

Also Published As

Publication number Publication date
CN116127019A (en) 2023-05-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant