CN115731430A - Method for generating deep learning image data set by finite element model and server

Method for generating deep learning image data set by finite element model and server

Info

Publication number
CN115731430A
CN115731430A (application CN202211457090.8A)
Authority
CN
China
Prior art keywords
finite element
element model
image
deep learning
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211457090.8A
Other languages
Chinese (zh)
Inventor
毛凤山
刘志军
王婧
武坤鹏
朱明星
温友鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCCC Fourth Harbor Engineering Co Ltd
CCCC Fourth Harbor Engineering Institute Co Ltd
Original Assignee
CCCC Fourth Harbor Engineering Co Ltd
CCCC Fourth Harbor Engineering Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCCC Fourth Harbor Engineering Co Ltd and CCCC Fourth Harbor Engineering Institute Co Ltd
Priority to CN202211457090.8A
Publication of CN115731430A
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and a server for generating a deep learning image data set by a finite element model. The method comprises the following steps: S1: extracting a finite element model calculation file; S2: determining the width and height of the images in the picture data set; S3: calculating the material parameter values at the nodes in the image coordinate system; S4: calculating the material parameter values on the slice planes; S5: determining the gray value of each color channel at each pixel point in the pixel matrix; S6: assembling the pixel matrices into a multi-channel image. In the invention, images are obtained by slicing the extracted finite element analysis model, each color channel of each pixel point is normalized to a color space, and each color channel in the blank region of the image outside the model is assigned the value 0, so that an image that can be input into a deep learning model is obtained. The multi-channel data set obtained by the method can be input into a convolutional neural network, providing training data support for using deep learning to replace the large number of repeated calculations in the engineering field.

Description

Method for generating deep learning image data set by finite element model and server
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method and a server for generating a deep learning image data set by a finite element model.
Background
Finite element analysis is widely applied in aerospace, machinery, civil engineering and other fields to solve problems such as the calculation of stress and deformation responses of engineering objects, optimal design and reliability analysis. The time cost of finite element calculation is high, especially when the number of elements is large or many runs are required; for example, the reliability calculation of a three-dimensional model often takes tens of days. During reliability calculation, the mesh and boundary conditions of the models are the same, and only the material parameters differ between models. In the prior art, tens of thousands of models with different material parameters are generated and then calculated one by one.
Deep learning can automatically extract features from a training image data set, obtain the weight parameters of a network model, and predict unknown images. If a deep learning model is trained on finite element models that have already been calculated and is then used to predict finite element models with different material parameters, the number of finite element calculations can be reduced and the product design cycle shortened. Using deep learning to reduce the number of finite element calculations in the product design process is therefore of great engineering significance.
However, artificial intelligence techniques are not yet used to address the pain point of large numbers of repeated finite element calculations; the key obstacle is the lack of a method for generating, from a finite element model, a data set that can be input into a deep learning network.
Disclosure of Invention
In order to overcome the defects of the prior art, one objective of the present invention is to provide a method for generating a deep learning image data set by a finite element model, which addresses the current lack of a way to generate, from a finite element model, a data set that can be input into a deep learning network.
Another objective of the invention is to provide a server for generating a deep learning image data set by a finite element model, which addresses the same problem.
In order to achieve the first of the above objectives, the technical solution adopted by the invention is as follows:
a method of generating a deep learning image dataset for a finite element model, comprising the steps of:
s1: extracting a finite element model calculation file;
s2: determining a width and a height of an image in a picture dataset;
s3: calculating a material parameter value at a node in an image coordinate system;
s4: calculating the material parameter value on the slice plane;
s5: determining the gray value of each color channel at each pixel point in the pixel matrix;
s6: the pixel matrices are assembled into a multi-channel image.
Preferably, the finite element model calculation file comprises node coordinates, element serial numbers and material parameters.
Preferably, the range of the width of the image in the image data set is (64,1000), and the range of the height of the image in the image data set is (64,1000).
Preferably, the step S3 is specifically realized by the following steps:
S31: multiplying the node coordinates in the finite element model calculation file by a scale factor K to obtain the node coordinates in the image coordinate system, wherein the unit of K is pixels/m;
S32: performing a weighted average over the material parameter values of all elements sharing the same node to obtain the material parameter values at that node in the image coordinate system.
Preferably, the value of K is:
K=min[I/(Xmax-Xmin),J/(Ymax-Ymin)];
the method comprises the steps of obtaining a finite element model, obtaining a picture data set, obtaining a node coordinate of the finite element model, and obtaining a node coordinate of the finite element model in the x-dimension direction, wherein I is the width of the image in the picture data set, J is the height of the image in the picture data set, xmax is the maximum value of the node coordinate of the finite element model in the x-dimension direction, xmin is the minimum value of the node coordinate of the finite element model in the x-dimension direction, ymax is the maximum value of the node coordinate of the finite element model in the y-dimension direction, and Ymin is the minimum value of the node coordinate of the finite element model in the y-dimension direction.
Preferably, the step S4 is specifically realized by the following steps:
D slice planes parallel to the xoy plane are generated to form pixel matrices, and the material parameter values at coordinates (i, j, dd) on each slice plane are then calculated by interpolation to obtain the n-th material parameter value matrix on the d-th slice plane, recorded as Qd,n; wherein i and j are integers in the intervals (0,I] and (0,J], and dd is the d-th slice plane.
Preferably, the step S5 is specifically implemented by the following steps:
The gray value of each color channel at each pixel point in the pixel matrix is determined by gray value = round{[(Cm,n - Cmin,n)/(Cmax,n - Cmin,n)] * 255}; wherein Cmax,n and Cmin,n are respectively the maximum and minimum values of the n-th material parameter over all nodes, Cm,n is the n-th material parameter value at the m-th node, and round() is a rounding function.
Preferably, the finite element models include a two-dimensional finite element model and a three-dimensional finite element model.
In order to achieve the second of the above objectives, the technical solution adopted by the invention is as follows:
a server for generating a deep learning image data set by a finite element model comprises a storage and a processor;
a memory for storing program instructions;
a processor for executing the program instructions to perform the method for generating a deep learning image dataset of a finite element model as described above.
Compared with the prior art, the invention has the beneficial effects that images are obtained by slicing the extracted finite element analysis model, each color channel of each pixel point is normalized to a fixed color space, and each color channel in the blank region of the image outside the model is assigned the value 0, so that images that can be input into a deep learning model are obtained. The multi-channel data set obtained by the method can be input into a convolutional neural network, providing training data support for using deep learning to replace the large number of repeated calculations in the engineering field.
Drawings
FIG. 1 is a flow chart of a method for generating a deep learning image dataset for a finite element model as described in the present invention.
FIG. 2 is a schematic diagram of a finite element calculation model provided in the third embodiment.
Fig. 3 is a schematic diagram of a first multi-channel image in the third embodiment.
Fig. 4 is a schematic diagram of a second multi-channel image in the third embodiment.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The invention will be further described with reference to the accompanying drawings and the detailed description below:
In the invention, a finite element model, which may be two-dimensional or three-dimensional, is converted into picture data so that deep learning research can be carried out. A picture has only a width and a height, whereas a finite element model may be three-dimensional; the invention therefore handles the conversion between the physical coordinate system (lengths in m or mm) and the image coordinate system (lengths in pixels), maps the material parameter values to the gray values of the pixel points in the picture, and maps the extent of the finite element model in the third dimension to the number of channels of the picture.
The first embodiment is as follows:
As shown in fig. 1, a method for generating a deep learning image data set by a finite element model includes the following steps:
S1: extracting a finite element model calculation file;
Specifically, the finite element model calculation file comprises node coordinates, element numbers and material parameters. In this embodiment, the deep learning image data set is generated using a calculation file generated by finite element analysis software. The calculation file contains all node numbers and coordinates, element numbers and node connectivity, and the material parameters in the finite element model.
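As an illustration of step S1, the sketch below extracts node coordinates and element connectivity from an Abaqus-style inp calculation file such as the ones used in embodiment three. It is a minimal sketch under simplifying assumptions (plain *Node and *Element blocks with comma-separated values), not the patented implementation, and the function name parse_inp is chosen here for illustration.

```python
# Minimal sketch: read node coordinates and element connectivity from an
# Abaqus-style .inp file. Assumes simple "*Node" / "*Element" blocks with
# comma-separated values; real calculation files may need more handling.
def parse_inp(path):
    nodes = {}      # node id -> (x, y, z)
    elements = {}   # element id -> list of node ids
    section = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("**"):      # skip blanks and comments
                continue
            if line.startswith("*"):                   # keyword line, e.g. "*Node"
                section = line.split(",")[0].lower()
                continue
            fields = [s for s in line.split(",") if s.strip()]
            if section == "*node":
                nid, *coords = fields
                xyz = [float(c) for c in coords] + [0.0] * (3 - len(coords))
                nodes[int(nid)] = tuple(xyz[:3])
            elif section == "*element":
                eid, *conn = fields
                elements[int(eid)] = [int(n) for n in conn]
    return nodes, elements
```

Material parameters can be collected from the corresponding material and section blocks of the same file in a similar line-by-line fashion.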
S2: determining a width and a height of an image in a picture dataset;
Specifically, the width of the images in the picture data set lies in the range (64,1000), and the height of the images in the picture data set lies in the range (64,1000). Further, the width I and the height J of the images are determined according to the input image size required by the deep learning model, and both are larger than 64 and smaller than 1000.
S3: calculating material parameter values at nodes in an image coordinate system;
Specifically, step S3 is realized by the following steps:
S31: multiplying the node coordinates in the finite element model calculation file by a scale factor K to obtain the node coordinates in the image coordinate system, wherein the unit of K is pixels/m;
S32: performing a weighted average over the material parameter values of all elements sharing the same node to obtain the material parameter values at that node in the image coordinate system.
K=min[I/(Xmax-Xmin),J/(Ymax-Ymin)];
wherein I is the width of the images in the picture data set, J is the height of the images in the picture data set, Xmax and Xmin are the maximum and minimum node coordinates of the finite element model in the x dimension, and Ymax and Ymin are the maximum and minimum node coordinates of the finite element model in the y dimension.
In this embodiment, the node coordinates in the image coordinate system are obtained by multiplying the node coordinates in the finite element model calculation file by a scale factor K, where K is in pixels/m. The node coordinates in the finite element calculation file are expressed in units of length and must be converted into the image coordinate system, i.e., into units of pixels. If the node coordinate ranges of the finite element model in the x and y dimensions are [Xmin, Xmax] and [Ymin, Ymax] respectively, the value of K is determined according to formula (1):
K=min[I/(Xmax-Xmin),J/(Ymax-Ymin)] (1);
In the finite element model calculation file there is no material parameter information at the nodes. To obtain the material parameter value at each node, it can be calculated from the element material properties and the element-node information in the calculation file (in the finite element model, each element corresponds to one set of material parameter values, and each element is defined by its node coordinates); specifically, the material parameter values of all elements sharing the same node are weighted and averaged. The coordinates of each node and the material parameter values at that node are then (xm, ym, zm, cm,1, cm,2, …, cm,n, …, cm,N), wherein xm, ym, zm are the three-dimensional coordinates of the m-th node in the image coordinate system and cm,n is the n-th material parameter value at the m-th node.
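The coordinate scaling of S31 and the node-wise averaging of S32 might be sketched as follows, reusing the nodes and elements structures from the parser sketch above. The per-element array elem_params of N material parameters, the shift to the image origin and the equal-weight averaging are assumptions made for illustration, not details taken from the patent.

```python
import numpy as np

def nodes_to_image_coords(nodes, I, J):
    """Scale physical node coordinates to image coordinates using formula (1).

    The shift by (Xmin, Ymin) is an added assumption so that pixel coordinates
    start at the image origin; z is left in physical units, it only selects
    the slice plane later on.
    """
    xyz = np.array([nodes[k] for k in sorted(nodes)], dtype=float)   # (M, 3)
    xmin, xmax = xyz[:, 0].min(), xyz[:, 0].max()
    ymin, ymax = xyz[:, 1].min(), xyz[:, 1].max()
    K = min(I / (xmax - xmin), J / (ymax - ymin))                    # pixels per metre
    scaled = xyz.copy()
    scaled[:, 0] = (xyz[:, 0] - xmin) * K
    scaled[:, 1] = (xyz[:, 1] - ymin) * K
    return scaled, K

def node_material_values(elements, elem_params):
    """Average the material parameters of all elements sharing each node.

    elem_params: element id -> np.ndarray of its N material parameters.
    Equal weights are used here; the patent's exact weighting is not detailed.
    """
    sums, counts = {}, {}
    for eid, conn in elements.items():
        for nid in conn:
            sums[nid] = sums.get(nid, 0.0) + elem_params[eid]
            counts[nid] = counts.get(nid, 0) + 1
    return {nid: sums[nid] / counts[nid] for nid in sums}
```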
S4: calculating the material parameter value on the slice plane;
Specifically, step S4 is implemented by the following steps:
D slice planes parallel to the xoy plane are generated to form pixel matrices, and the material parameter values at the coordinates (i, j, dd) on each slice plane are then calculated by interpolation to obtain the n-th material parameter value matrix on the d-th slice plane, recorded as Qd,n; wherein i and j are integers in the intervals (0,I] and (0,J], and dd is the z coordinate of the d-th slice plane.
In this embodiment, D slice planes parallel to the xoy plane are generated with z coordinates z = d1, d2, …, dd, …, dD. The spacing of this series of planes is smaller than the minimum element size of the finite element model in the z direction, which ensures that no element is missed by every slice plane. If the finite element model is a two-dimensional model, only one slice plane, z = 0, is required.
The material parameter values at the points (i, j, dd) are then calculated, where i and j are integers in the intervals (0,I] and (0,J]. The calculation uses discrete-point interpolation: given that the material parameter values at (xm, ym, zm) are (cm,1, cm,2, …, cm,n, …, cm,N), the material parameter values (ci,j,1, ci,j,2, …, ci,j,n, …, ci,j,N) at (i, j, dd) can be obtained by bilinear interpolation. The n-th material parameter value matrix calculated on the d-th slice plane is recorded as Qd,n.
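A possible sketch of step S4 uses SciPy's griddata to carry out the discrete-point (linear) interpolation from the node values onto the pixel grid of one slice plane. It assumes the scaled coordinates and node values from the previous sketches, stands in for the bilinear interpolation described above, and leaves pixels outside the model as NaN so that step S5 can set them to 0.

```python
import numpy as np
from scipy.interpolate import griddata

def slice_material_matrix(scaled_xyz, node_values, I, J, z_slice, n):
    """Interpolate the n-th material parameter onto the slice plane z = z_slice.

    scaled_xyz : (M, 3) node coordinates (x, y in pixels).
    node_values: (M, N) material parameter values at the nodes,
                 ordered consistently with scaled_xyz.
    Returns a (J, I) matrix Qd,n; pixels outside the model are NaN.
    """
    ii, jj = np.meshgrid(np.arange(1, I + 1), np.arange(1, J + 1))
    grid2d = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)
    if np.allclose(scaled_xyz[:, 2], scaled_xyz[0, 2]):      # two-dimensional model
        q = griddata(scaled_xyz[:, :2], node_values[:, n], grid2d, method="linear")
    else:
        grid3d = np.column_stack([grid2d, np.full(len(grid2d), float(z_slice))])
        q = griddata(scaled_xyz, node_values[:, n], grid3d, method="linear")
    return q.reshape(J, I)
```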
S5: determining the gray value of each color channel at each pixel point in the pixel matrix;
Specifically, step S5 is implemented by the following steps:
The gray value of each color channel at each pixel point in the pixel matrix is determined by gray value = round{[(Cm,n - Cmin,n)/(Cmax,n - Cmin,n)] * 255}; wherein Cmax,n and Cmin,n are respectively the maximum and minimum values of the n-th material parameter over all nodes, Cm,n is the n-th material parameter value at the m-th node, and round() is a rounding function.
In this embodiment, the value of each color channel at each pixel point is determined by formula (2). The principle is that the maximum value of a material parameter corresponds to white and the minimum value corresponds to black, the gray values for intermediate material parameter values are obtained by interpolation, and the pixel value of every channel in the part of the picture outside the boundary of the finite element model is 0. Since a gray value must be an integer, formula (2) includes a rounding function.
Gray value = round{[(Cm,n - Cmin,n)/(Cmax,n - Cmin,n)] * 255} (2)
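Formula (2) can be sketched for one slice matrix Qd,n as follows; the NaN handling for the blank region outside the model is an assumption carried over from the interpolation sketch above.

```python
import numpy as np

def to_gray_channel(Q_dn, c_min_n, c_max_n):
    """Map material parameter values to 8-bit gray values per formula (2).

    NaN pixels (outside the finite element model) are assigned 0.
    """
    gray = np.round((Q_dn - c_min_n) / (c_max_n - c_min_n) * 255.0)
    gray = np.nan_to_num(gray, nan=0.0)          # blank region -> 0
    return gray.astype(np.uint8)
```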
S6: the pixel matrices are assembled into a multi-channel image.
Specifically, the material parameter value matrices Qd,n are assembled along the third dimension into a multi-channel image [Q1,1, Q1,2, …, Q1,n, …, Q1,N, …, Qd,1, Qd,2, …, Qd,n, …, Qd,N, …, QD,1, QD,2, …, QD,n, …, QD,N]. The dimension of the multi-channel picture is I × J × (N·D), where I and J are respectively the width and height of the image, N is the number of material parameters, and D is the number of slices of the finite element model. Each finite element model corresponds to one multi-channel image, and by generating multi-channel images for a number of finite element models, a multi-channel image data set that can be used for deep learning is obtained.
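Assembling the channels of one model and collecting the images of many models into a data set might look like the sketch below; the channel-last (J, I, N·D) layout is an assumption chosen to match common convolutional network inputs.

```python
import numpy as np

def assemble_multichannel_image(gray_channels):
    """Stack the D*N gray channels into one multi-channel image.

    gray_channels: list of (J, I) uint8 matrices ordered as
    [Q1,1 ... Q1,N, Q2,1 ... Q2,N, ..., QD,1 ... QD,N].
    """
    return np.stack(gray_channels, axis=-1)       # shape (J, I, N*D)

def build_dataset(images):
    """Collect one multi-channel image per finite element model."""
    return np.stack(images, axis=0)               # shape (num_models, J, I, N*D)
```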
Example two:
A server for generating a deep learning image data set by a finite element model comprises a memory and a processor;
a memory for storing program instructions;
a processor for executing the program instructions to perform the method for generating a deep learning image data set by using the finite element model according to the first embodiment.
Example three:
A foundation pit reliability analysis case is selected. The foundation pit model is a two-dimensional model with a width of 1 m, as shown in figure 2, and the material parameters of the model are the elastic modulus E, Poisson's ratio μ, cohesion c and internal friction angle φ. First, 200 finite element calculation files in inp format are generated, and the node information, element information and material information in the files are extracted. The width and height of the multi-channel images are 543 pixels and 48 pixels respectively, and the number of channels is 4. The first multi-channel image (fig. 3) and the second multi-channel image (fig. 4) are two multi-channel images generated from fig. 2 (i.e., the finite element calculation model) and the corresponding inp files. From the 200 inp calculation files, 200 multi-channel images are obtained; these images can be input into a deep learning network, so that the model calculation results for other material parameters can be predicted by the network.
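For embodiment three, the end-to-end generation of the 200 multi-channel images could be driven by a loop like the following sketch, which chains the illustrative functions from embodiment one. The file layout models/*.inp and the helper read_element_materials (returning E, mu, c and phi per element) are hypothetical and not part of the patent.

```python
import glob
import numpy as np

I, J, N = 543, 48, 4            # image size and number of material parameters (E, mu, c, phi)
images = []
for path in sorted(glob.glob("models/*.inp")):          # 200 calculation files (assumed layout)
    nodes, elements = parse_inp(path)
    elem_params = read_element_materials(path)          # hypothetical helper, not shown above
    scaled, K = nodes_to_image_coords(nodes, I, J)
    node_vals = node_material_values(elements, elem_params)
    vals = np.array([node_vals[k] for k in sorted(nodes)])      # (M, N), same order as scaled
    channels = []
    for n in range(N):                                   # two-dimensional model: one slice, z = 0
        Q = slice_material_matrix(scaled, vals, I, J, 0.0, n)
        channels.append(to_gray_channel(Q, vals[:, n].min(), vals[:, n].max()))
    images.append(assemble_multichannel_image(channels))

dataset = build_dataset(images)      # shape (200, 48, 543, 4): 200 models, 4 channels each
```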
Various other modifications and changes may occur to those skilled in the art based on the foregoing teachings and concepts, and all such modifications and changes are intended to be included within the scope of the appended claims.

Claims (9)

1. A method of generating a deep learning image dataset for a finite element model, comprising the steps of:
s1: extracting a finite element model calculation file;
s2: determining a width and a height of an image in a picture dataset;
s3: calculating material parameter values at nodes in an image coordinate system;
s4: calculating the material parameter value on the slice plane;
s5: determining the gray value of each color channel at each pixel point in the pixel matrix;
s6: the pixel matrices are assembled into a multi-channel image.
2. A method of generating a deep learning image dataset with a finite element model as claimed in claim 1, characterized in that: the finite element model calculation file comprises node coordinates, unit serial numbers and material parameters.
3. A method of generating a deep learning image dataset with a finite element model as claimed in claim 1, characterized in that: the range of the width of the image in the picture data set is (64,1000), and the range of the height of the image in the picture data set is (64,1000).
4. The method for generating a deep learning image dataset with a finite element model as set forth in claim 1, wherein S3 is specifically realized by the steps of:
S31: multiplying the node coordinates in the finite element model calculation file by a scale factor K to obtain the node coordinates in the image coordinate system, wherein the unit of K is pixels/m;
S32: performing a weighted average over the material parameter values of all elements sharing the same node to obtain the material parameter values at that node in the image coordinate system.
5. The method for generating a deep learning image dataset of a finite element model as claimed in claim 4, wherein said K takes on the values:
K=min[I/(Xmax-Xmin),J/(Ymax-Ymin)];
wherein I is the width of the image in the picture data set, J is the height of the image in the picture data set, Xmax is the maximum value of the node coordinates of the finite element model in the x dimension, Xmin is the minimum value of the node coordinates of the finite element model in the x dimension, Ymax is the maximum value of the node coordinates of the finite element model in the y dimension, and Ymin is the minimum value of the node coordinates of the finite element model in the y dimension.
6. The method for generating a deep learning image dataset with a finite element model as set forth in claim 1, wherein S4 is specifically realized by the steps of:
D slice planes parallel to the xoy plane are generated to form pixel matrices, and the material parameter values at coordinates (i, j, dd) on each slice plane are then calculated by interpolation to obtain the n-th material parameter value matrix on the d-th slice plane, recorded as Qd,n; wherein i and j are integers in the intervals (0,I] and (0,J], and dd is the d-th slice plane.
7. The method for generating a deep learning image dataset with a finite element model as set forth in claim 1, wherein S5 is specifically realized by the steps of:
The gray value of each color channel at each pixel point in the pixel matrix is determined by gray value = round{[(Cm,n - Cmin,n)/(Cmax,n - Cmin,n)] * 255}; wherein Cmax,n and Cmin,n are respectively the maximum and minimum values of the n-th material parameter over all nodes, Cm,n is the n-th material parameter value at the m-th node, and round() is a rounding function.
8. The method for generating a deep learning image dataset with a finite element model as set forth in claim 1, wherein the finite element model comprises a two-dimensional finite element model and a three-dimensional finite element model.
9. A server for generating a deep learning image data set by a finite element model, characterized in that it comprises a memory and a processor;
a memory for storing program instructions;
a processor for executing the program instructions to perform a method of generating a deep learning image dataset for a finite element model according to any of claims 1-8.
CN202211457090.8A 2022-11-16 2022-11-16 Method for generating deep learning image data set by finite element model and server Pending CN115731430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211457090.8A CN115731430A (en) 2022-11-16 2022-11-16 Method for generating deep learning image data set by finite element model and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211457090.8A CN115731430A (en) 2022-11-16 2022-11-16 Method for generating deep learning image data set by finite element model and server

Publications (1)

Publication Number Publication Date
CN115731430A (en) 2023-03-03

Family

ID=85296873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211457090.8A Pending CN115731430A (en) 2022-11-16 2022-11-16 Method for generating deep learning image data set by finite element model and server

Country Status (1)

Country Link
CN (1) CN115731430A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071245A (en) * 2023-04-06 2023-05-05 苏州浪潮智能科技有限公司 Image processing method, device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination