CN116071745B - Method and device for processing electron microscope density map target recognition model - Google Patents


Info

Publication number
CN116071745B
Authority
CN
China
Prior art keywords: tensor, convolution, layer, activation, characteristic
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310201533.5A
Other languages
Chinese (zh)
Other versions
CN116071745A (en)
Inventor
陈伟杰
张林峰
孙伟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenshi Technology Co ltd
Original Assignee
Beijing Shenshi Technology Co ltd
Application filed by Beijing Shenshi Technology Co ltd
Priority to CN202310201533.5A
Publication of CN116071745A
Application granted
Publication of CN116071745B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V20/69: Microscopic objects, e.g. biological cells or cellular parts
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition or understanding using neural networks


Abstract

An embodiment of the invention relates to a method and device for processing an electron-microscope density-map target recognition model. The method comprises the following steps: determining a target recognition model; determining a model loss function for the target recognition model; training the model according to the loss function; and, after model training succeeds, acquiring a first electron-microscope density map and performing key-target recognition on it with the target recognition model to obtain a corresponding first Cα atom feature map and first trunk atom feature map. The invention improves both the efficiency and the quality of key-target recognition processing on electron-microscope density maps.

Description

Method and device for processing electron microscope density map target recognition model
Technical Field
The invention relates to the technical field of data processing, and in particular to a method and device for processing an electron-microscope density-map target recognition model.
Background
Single-particle cryo-electron microscopy can resolve a protein's three-dimensional structure from an electron-microscope density map captured by a cryo-electron microscope. To analyse the structure, key-target recognition must first be performed on the density map to obtain a Cα atom feature map, annotated with α-carbon-atom (hereinafter Cα atom) features, and a trunk atom feature map, annotated with protein trunk (backbone) atom features, the trunk atoms being the C, Cα and N atoms. The Cα atom features comprise Cα atom coordinate features, Cα pseudo-peptide-bond vector features and the amino-acid-type feature of the amino acid in which the Cα atom lies; the trunk atom features comprise the atom-type features of the three trunk atom types (C, Cα and N).
With conventional techniques, key-target recognition on a density map is typically performed by manually identifying and annotating the atom types, pseudo-peptide-bond directions and amino-acid types, then computing positions from the manually annotated density map to obtain the corresponding Cα atom feature map and trunk atom feature map. This approach has two drawbacks: its many manual steps make processing efficiency hard to improve, and its processing quality is unstable because it depends on factors such as annotator experience, density-map noise and density-map resolution.
Disclosure of Invention
The invention aims to overcome these defects of the prior art by providing a processing method, a processing device, an electronic apparatus and a computer-readable storage medium for an electron-microscope density-map target recognition model. The invention provides a target recognition model that performs key-target recognition on an input electron-microscope density map and outputs the corresponding Cα atom feature map and trunk atom feature map; a model loss function is designed for this model, the model is trained on that loss function, and once training has matured the model performs key-target recognition on input density maps. The target recognition model can, on the one hand, raise the efficiency of key-target recognition on electron-microscope density maps and, on the other, steadily raise the quality of that recognition through model training.
To achieve the above object, a first aspect of the present invention provides a method for processing an electron-microscope density-map target recognition model, the method comprising:
determining a target recognition model;
determining a model loss function of the target recognition model;
performing model training on the target recognition model according to the model loss function;
and after model training succeeds, acquiring a first electron-microscope density map and performing key-target recognition on it with the target recognition model to obtain a corresponding first Cα atom feature map and first trunk atom feature map.
Preferably, the target recognition model comprises a trunk feature extraction network, a first recognition branch network and a second recognition branch network. The trunk feature extraction network performs feature extraction on the input electron-microscope density map to generate the corresponding first and second branch tensors; the first recognition branch network performs trunk atom feature target recognition on the first branch tensor to generate the corresponding trunk atom feature map; and the second recognition branch network performs Cα atom feature target recognition on the second branch tensor to generate the corresponding Cα atom feature map.
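The three-subnetwork composition just described can be sketched as follows. The patent discloses no code, so the class and attribute names here are illustrative assumptions; the stub modules only reproduce the data flow between the trunk network and the two branches.

```python
import torch
import torch.nn as nn

class TargetRecognitionModel(nn.Module):
    """Illustrative skeleton: a trunk feature extractor feeding two branches.

    `backbone`, `branch1` and `branch2` stand in for the networks the patent
    describes; only the data flow between them is reproduced here.
    """
    def __init__(self, backbone: nn.Module, branch1: nn.Module, branch2: nn.Module):
        super().__init__()
        self.backbone = backbone   # trunk feature extraction network
        self.branch1 = branch1     # first branch tensor  -> trunk atom feature map
        self.branch2 = branch2     # second branch tensor -> C-alpha atom feature map

    def forward(self, density_map: torch.Tensor):
        first_branch_tensor, second_branch_tensor = self.backbone(density_map)
        trunk_atom_map = self.branch1(first_branch_tensor)
        calpha_atom_map = self.branch2(second_branch_tensor)
        return calpha_atom_map, trunk_atom_map
```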
Further, the trunk feature extraction network comprises eight convolution layers (the first through eighth), three pooling layers (the first through third), three up-sampling layers (the first through third) and three splicing layers (the first through third).
The input end of the first convolution layer is the model input end of the target recognition model, and its output end is connected to the input end of the second convolution layer and to the first input end of the third splicing layer. The output end of the second convolution layer is connected to the input end of the first pooling layer; the output end of the first pooling layer is connected to the input end of the third convolution layer and to the first input end of the second splicing layer; the output end of the third convolution layer is connected to the input end of the second pooling layer; the output end of the second pooling layer is connected to the input end of the fourth convolution layer and to the first input end of the first splicing layer; the output end of the fourth convolution layer is connected to the input end of the third pooling layer; the output end of the third pooling layer is connected to the input end of the fifth convolution layer; the output end of the fifth convolution layer is connected to the input end of the first up-sampling layer; the output end of the first up-sampling layer is connected to the second input end of the first splicing layer; the output end of the first splicing layer is connected to the input end of the sixth convolution layer; the output end of the sixth convolution layer is connected to the input end of the second up-sampling layer; the output end of the second up-sampling layer is connected to the second input end of the second splicing layer; the output end of the second splicing layer is connected to the input end of the seventh convolution layer; the output end of the seventh convolution layer is connected both to the input end of the third up-sampling layer and to the input end of the second recognition branch network; the output end of the third up-sampling layer is connected to the second input end of the third splicing layer; the output end of the third splicing layer is connected to the input end of the eighth convolution layer; and the output end of the eighth convolution layer is connected to the input end of the first recognition branch network.
Each of the first through eighth convolution layers is formed by connecting several first convolution activation modules in sequence. Each first convolution activation module comprises a first three-dimensional convolution unit and a first ReLU activation function unit, the output end of each convolution unit being connected to the input end of its ReLU unit. Within each convolution layer, the input end of the first module's convolution unit is the input end of the layer, each module's ReLU output end is connected to the next module's convolution input end, and the ReLU output end of the last module is the output end of the layer.
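A convolution layer of the kind just described, several (3D convolution → ReLU) modules in series, can be sketched like this. The kernel size 3 with padding 1 is an assumption chosen so that the input and output three-dimensional sizes stay equal, as the patent requires; the patent itself does not fix the kernel size.

```python
import torch
import torch.nn as nn

def make_conv_layer(in_channels: int, out_channels: int, n_modules: int = 2) -> nn.Sequential:
    """Stack of (3D convolution unit -> ReLU activation function unit) modules.

    kernel_size=3 with padding=1 leaves X, Y, Z unchanged, matching the
    requirement that a convolution layer preserve the tensor's
    three-dimensional size (only the channel count H may change).
    """
    modules = []
    ch = in_channels
    for _ in range(n_modules):
        modules.append(nn.Conv3d(ch, out_channels, kernel_size=3, padding=1))
        modules.append(nn.ReLU(inplace=True))
        ch = out_channels
    return nn.Sequential(*modules)
```

For example, `make_conv_layer(1, 8)` maps a 1×X×Y×Z input to an 8×X×Y×Z output with the same spatial size.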
The input and output tensors of the first through eighth convolution layers, the first through third pooling layers, the first through third up-sampling layers and the first through third splicing layers all have the shape format H×X×Y×Z, where H is the number of feature channels, (X, Y, Z) is the tensor's three-dimensional size, and H, X, Y and Z are integers greater than zero.
For each of the first through eighth convolution layers, the three-dimensional sizes of the input and output tensors are identical.
For each of the first through third pooling layers, the X-, Y- and Z-axis components of the output tensor's three-dimensional size are each 1/2 of the corresponding components of the input tensor's three-dimensional size.
For each of the first through third up-sampling layers, the X-, Y- and Z-axis components of the output tensor's three-dimensional size are each 2 times the corresponding components of the input tensor's three-dimensional size.
For each of the first through third splicing layers, the two input tensors supplied to its first and second input ends have identical three-dimensional sizes; the output tensor keeps that three-dimensional size, and its number of feature channels is the sum of the feature-channel counts of the two input tensors.
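Taken together, the wiring above is a three-level encoder-decoder with skip connections (a 3D U-Net-style layout). A minimal sketch follows; the channel counts in `h` are illustrative assumptions (the patent fixes the topology but not the values H1…H11), and kernel size 3 with padding 1 is assumed so the convolution layers preserve spatial size.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, n=2):
    # A convolution layer: n modules of (Conv3d -> ReLU), spatial size preserved.
    mods = []
    for _ in range(n):
        mods += [nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(inplace=True)]
        cin = cout
    return nn.Sequential(*mods)

class TrunkFeatureExtractor(nn.Module):
    """Eight conv layers, three poolings, three upsamplings, three concatenations."""
    def __init__(self, h=(8, 16, 32, 64, 64, 32, 16, 16)):
        super().__init__()
        h1, h2, h3, h4, h5, h7, h9, h11 = h       # illustrative channel counts
        self.conv1 = conv_block(1,  h1)
        self.conv2 = conv_block(h1, h2)
        self.conv3 = conv_block(h2, h3)
        self.conv4 = conv_block(h3, h4)
        self.conv5 = conv_block(h4, h5)
        self.conv6 = conv_block(h3 + h5, h7)      # after first splicing layer
        self.conv7 = conv_block(h2 + h7, h9)      # after second splicing layer
        self.conv8 = conv_block(h1 + h9, h11)     # after third splicing layer
        self.pool = nn.MaxPool3d(2)               # halves X, Y, Z
        self.up = nn.Upsample(scale_factor=2)     # doubles X, Y, Z

    def forward(self, x):
        c1 = self.conv1(x)                        # kept for third splicing layer
        p1 = self.pool(self.conv2(c1))            # kept for second splicing layer
        c3 = self.conv3(p1)
        p2 = self.pool(c3)                        # kept for first splicing layer
        c4 = self.conv4(p2)
        p3 = self.pool(c4)
        c5 = self.conv5(p3)
        c6 = self.conv6(torch.cat([p2, self.up(c5)], dim=1))
        c7 = self.conv7(torch.cat([p1, self.up(c6)], dim=1))  # second branch tensor
        c8 = self.conv8(torch.cat([c1, self.up(c7)], dim=1))  # first branch tensor
        return c8, c7
```

With the default channel counts, a 1×16×16×16 input yields a first branch tensor at full resolution and a second branch tensor at half resolution, exactly as the shape rules above require.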
Further, the trunk feature extraction network performs feature extraction on an input electron-microscope density map to generate the corresponding first and second branch tensors, specifically as follows:
the trunk feature extraction network takes the input electron-microscope density map as the current density map; the shape of the current density map is H0×X0×Y0×Z0, where H0 is its number of feature channels and (X0, Y0, Z0) is its tensor three-dimensional size;
the current density map is input into the first convolution layer for convolution to obtain the corresponding first convolution feature tensor, of shape H1×X0×Y0×Z0, where H1 is its number of feature channels;
the first convolution feature tensor is input into the second convolution layer to obtain the corresponding second convolution feature tensor, of shape H2×X0×Y0×Z0, where H2 is its number of feature channels;
the second convolution feature tensor is input into the first pooling layer for down-sampling to obtain the corresponding first pooling feature tensor, of shape H2×X1×Y1×Z1, where (X1, Y1, Z1) is its three-dimensional size, with X1 = X0/2, Y1 = Y0/2, Z1 = Z0/2;
the first pooling feature tensor is input into the third convolution layer to obtain the corresponding third convolution feature tensor, of shape H3×X1×Y1×Z1, where H3 is its number of feature channels;
the third convolution feature tensor is input into the second pooling layer for down-sampling to obtain the corresponding second pooling feature tensor, of shape H3×X2×Y2×Z2, where (X2, Y2, Z2) is its three-dimensional size, with X2 = X1/2, Y2 = Y1/2, Z2 = Z1/2;
the second pooling feature tensor is input into the fourth convolution layer to obtain the corresponding fourth convolution feature tensor, of shape H4×X2×Y2×Z2, where H4 is its number of feature channels;
the fourth convolution feature tensor is input into the third pooling layer for down-sampling to obtain the corresponding third pooling feature tensor, of shape H4×X3×Y3×Z3, where (X3, Y3, Z3) is its three-dimensional size, with X3 = X2/2, Y3 = Y2/2, Z3 = Z2/2;
the third pooling feature tensor is input into the fifth convolution layer to obtain the corresponding fifth convolution feature tensor, of shape H5×X3×Y3×Z3, where H5 is its number of feature channels;
the fifth convolution feature tensor is input into the first up-sampling layer to obtain the corresponding first up-sampling feature tensor, of shape H5×(X3·2)×(Y3·2)×(Z3·2) = H5×X2×Y2×Z2;
the second pooling feature tensor and the first up-sampling feature tensor are fed to the first and second input ends of the first splicing layer and concatenated along the feature-channel direction to obtain the corresponding first splice tensor, of shape H6×X2×Y2×Z2, where H6 = H3 + H5 is its number of feature channels;
the first splice tensor is input into the sixth convolution layer to obtain the corresponding sixth convolution feature tensor, of shape H7×X2×Y2×Z2, where H7 is its number of feature channels;
the sixth convolution feature tensor is input into the second up-sampling layer to obtain the corresponding second up-sampling feature tensor, of shape H7×(X2·2)×(Y2·2)×(Z2·2) = H7×X1×Y1×Z1;
the first pooling feature tensor and the second up-sampling feature tensor are fed to the first and second input ends of the second splicing layer and concatenated along the feature-channel direction to obtain the corresponding second splice tensor, of shape H8×X1×Y1×Z1, where H8 = H2 + H7 is its number of feature channels;
the second splice tensor is input into the seventh convolution layer to obtain the corresponding seventh convolution feature tensor, of shape H9×X1×Y1×Z1, where H9 is its number of feature channels;
the seventh convolution feature tensor is input into the third up-sampling layer to obtain the corresponding third up-sampling feature tensor, of shape H9×(X1·2)×(Y1·2)×(Z1·2) = H9×X0×Y0×Z0;
the first convolution feature tensor and the third up-sampling feature tensor are fed to the first and second input ends of the third splicing layer and concatenated along the feature-channel direction to obtain the corresponding third splice tensor, of shape H10×X0×Y0×Z0, where H10 = H1 + H9 is its number of feature channels;
the third splice tensor is input into the eighth convolution layer to obtain the corresponding eighth convolution feature tensor, of shape H11×X0×Y0×Z0, where H11 is its number of feature channels;
the eighth convolution feature tensor is taken as the corresponding first branch tensor, the seventh convolution feature tensor is taken as the corresponding second branch tensor, and both branch tensors are output.
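The spatial-size bookkeeping in the steps above (each pooling halves X, Y and Z; each up-sampling doubles them, so the decoder recovers the encoder sizes level by level) can be checked with plain arithmetic. The function name is illustrative:

```python
def trace_shapes(x0: int, y0: int, z0: int):
    """Trace the three-dimensional sizes through the trunk network.

    Three poolings halve each axis three times; the three upsamplings
    double them back, level by level, so each splicing layer receives
    two tensors of identical three-dimensional size.
    """
    assert x0 % 8 == 0 and y0 % 8 == 0 and z0 % 8 == 0, "sizes must survive three halvings"
    level1 = (x0 // 2, y0 // 2, z0 // 2)   # after first pooling layer
    level2 = (x0 // 4, y0 // 4, z0 // 4)   # after second pooling layer
    level3 = (x0 // 8, y0 // 8, z0 // 8)   # after third pooling layer
    up1 = tuple(2 * d for d in level3)     # first up-sampling: matches level2
    up2 = tuple(2 * d for d in level2)     # second up-sampling: matches level1
    up3 = tuple(2 * d for d in level1)     # third up-sampling: matches the input
    assert up1 == level2 and up2 == level1 and up3 == (x0, y0, z0)
    return level1, level2, level3
```

For a 64×64×64 density map this gives levels of 32³, 16³ and 8³, and the second branch tensor is produced at the 32³ level (X1, Y1, Z1).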
Further, the first recognition branch network comprises a ninth convolution layer and a first activation layer. The input end of the ninth convolution layer is the input end of the first recognition branch network; its output end is connected to the input end of the first activation layer; and the output end of the first activation layer is the output end of the first recognition branch network.
The ninth convolution layer is formed by connecting several second convolution activation modules in sequence. Each second convolution activation module comprises a second three-dimensional convolution unit and a second ReLU activation function unit, the output end of each convolution unit being connected to the input end of its ReLU unit; the input end of the first module's convolution unit is the input end of the ninth convolution layer, each module's ReLU output end is connected to the next module's convolution input end, and the ReLU output end of the last module is the output end of the ninth convolution layer.
The input and output tensors of the ninth convolution layer and the first activation layer all have the shape format H×X×Y×Z, where H is the number of feature channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y and Z are integers greater than zero; the three-dimensional sizes of the input and output tensors of the ninth convolution layer are identical, as are those of the first activation layer.
The activation function used by the first activation layer is a sigmoid function, whose output values lie in the range [0,1].
Further, the first recognition branch network performs trunk atom feature target recognition on the first branch tensor to generate the corresponding trunk atom feature map, specifically as follows:
the first recognition branch network inputs the first branch tensor into the ninth convolution layer for convolution to obtain the corresponding ninth convolution feature tensor; the first branch tensor has shape H11×X0×Y0×Z0, where H11 is its number of feature channels and (X0, Y0, Z0) its three-dimensional size; the ninth convolution feature tensor has shape H12×X0×Y0×Z0, where H12 is its number of feature channels;
the ninth convolution feature tensor is input into the first activation layer for feature activation to generate the corresponding first activation tensor, of shape 3×X0×Y0×Z0; the first activation tensor comprises X0·Y0·Z0 first vectors of length 3, each holding three atom-type probabilities: the probability of the C atom type, the probability of the Cα atom type and the probability of the N atom type, each in the range [0,1];
the first activation tensor is output as the corresponding trunk atom feature map.
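The first recognition branch, a stack of (Conv3d → ReLU) modules followed by a sigmoid over 3 channels, might look like the sketch below. The hidden channel count is an assumption, and the final ReLU is omitted here so the sigmoid can use its full [0,1] range (the patent's literal description ends every convolution module with a ReLU); H12 = 3 is implied by the first activation tensor's shape.

```python
import torch
import torch.nn as nn

class TrunkAtomBranch(nn.Module):
    """Ninth convolution layer + first (sigmoid) activation layer.

    Output: 3 x X0 x Y0 x Z0, one per-voxel probability for each trunk
    atom type (C, C-alpha, N), each value in [0, 1].
    """
    def __init__(self, in_channels: int, hidden: int = 16):
        super().__init__()
        self.conv9 = nn.Sequential(
            nn.Conv3d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, 3, 3, padding=1),  # last ReLU omitted (assumption)
        )
        self.activation = nn.Sigmoid()

    def forward(self, x):
        return self.activation(self.conv9(x))
```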
Further, the second recognition branch network comprises four convolution layers (the tenth, eleventh, twelfth and thirteenth), four activation layers (the second, third, fourth and fifth) and a first feature fusion layer.
The input ends of the tenth through thirteenth convolution layers together form the input end of the second recognition branch network. The output end of the tenth convolution layer is connected to the input end of the second activation layer, whose output end is connected to the first input end of the first feature fusion layer; the eleventh convolution layer feeds the third activation layer, whose output end is connected to the second input end of the fusion layer; the twelfth convolution layer feeds the fourth activation layer, whose output end is connected to the third input end; and the thirteenth convolution layer feeds the fifth activation layer, whose output end is connected to the fourth input end. The output end of the first feature fusion layer is the output end of the second recognition branch network.
Each of the tenth through thirteenth convolution layers is formed by connecting several third convolution activation modules in sequence. Each third convolution activation module comprises a third three-dimensional convolution unit and a third ReLU activation function unit, the output end of each convolution unit being connected to the input end of its ReLU unit; within each of these layers, the input end of the first module's convolution unit is the input end of the layer, each module's ReLU output end is connected to the next module's convolution input end, and the ReLU output end of the last module is the output end of the layer.
The input and output tensors of the tenth through thirteenth convolution layers and the second through fifth activation layers all have the shape format H×X×Y×Z, where H is the number of feature channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y and Z are integers greater than zero; for each of these convolution and activation layers, the three-dimensional sizes of the input and output tensors are identical.
The activation function used by the second activation layer is a sigmoid function, with output values in [0,1]; the third activation layer uses 2·sigmoid, with output values in [0,2]; the fourth activation layer uses 3.8·tanh, with output values in [-3.8, 3.8]; and the fifth activation layer uses a softmax function, with output values in [0,1].
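The four activation heads can be sketched as below. The hidden and head channel counts are illustrative assumptions, the final ReLU in each head is omitted so the scaled tanh can reach negative values (the patent's literal module description would preclude them), and the factor 3.8 matches the canonical Cα-Cα distance of about 3.8 Å, which is presumably why the pseudo-peptide-bond head is bounded there.

```python
import torch
import torch.nn as nn

class CalphaBranchHeads(nn.Module):
    """Tenth through thirteenth convolution layers with their activations.

    Head channel counts are (1, 3, 3, n_amino), matching the second
    through fifth activation tensors described above.
    """
    def __init__(self, in_channels: int, n_amino: int = 20, hidden: int = 16):
        super().__init__()
        def head(cout):
            return nn.Sequential(
                nn.Conv3d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(hidden, cout, 3, padding=1),  # last ReLU omitted (assumption)
            )
        self.conv10, self.conv11 = head(1), head(3)
        self.conv12, self.conv13 = head(3), head(n_amino)

    def forward(self, x):
        a2 = torch.sigmoid(self.conv10(x))         # C-alpha probability, [0, 1]
        a3 = 2.0 * torch.sigmoid(self.conv11(x))   # coordinate components, [0, 2] angstrom
        a4 = 3.8 * torch.tanh(self.conv12(x))      # pseudo-peptide-bond vector, [-3.8, 3.8]
        a5 = torch.softmax(self.conv13(x), dim=1)  # amino-acid-type probabilities
        return a2, a3, a4, a5
```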
Further, the second recognition branch network is configured to perform a target recognition process on the feature of the C α atom according to the second branch tensor to generate a corresponding feature map of the C α atom, and specifically includes:
the second recognition branch network inputs the second branch tensor into the tenth, eleventh, twelfth and thirteenth convolution layers respectively to carry out corresponding convolution operation processing to obtain corresponding tenth, eleventh, twelfth and thirteenth convolution characteristic tensors; the shape of the second branch tensor is H 9 ×X 1 ×Y 1 ×Z 1 ,H 9 For the number of characteristic channels of the second branch tensor, (X) 1 、Y 1 、Z 1 ) A tensor three-dimensional size for the second branch tensor; the tenth convolution characteristic tensor has the shape of H 13 ×X 1 ×Y 1 ×Z 1 ,H 13 A number of feature channels that is the tenth convolution feature tensor; the eleventh convolution characteristic tensor has the shape of H 14 ×X 1 ×Y 1 ×Z 1 ,H 14 A number of feature channels that is the eleventh convolution feature tensor; the twelfth convolution characteristic tensor has the shape of H 15 ×X 1 ×Y 1 ×Z 1 ,H 15 A number of feature channels for the twelfth convolution feature tensor; the thirteenth convolution characteristic tensor has the shape of H 16 ×X 1 ×Y 1 ×Z 1 ,H 16 A number of feature channels that is the thirteenth convolution feature tensor;
inputting the tenth, eleventh, twelfth and thirteenth convolution feature tensors into the corresponding second, third, fourth and fifth activation layers respectively to perform corresponding feature activation operations to obtain corresponding second, third, fourth and fifth activation tensors; the shape of the second activation tensor is 1×X1×Y1×Z1, and the second activation tensor includes X1*Y1*Z1 second vectors of length 1, each second vector comprising one C alpha atom type probability with a value range of [0,1]; the shape of the third activation tensor is 3×X1×Y1×Z1, and the third activation tensor includes X1*Y1*Z1 third vectors of length 3, each third vector being a C alpha atom coordinate vector composed of 3 axial coordinate components (X-axis, Y-axis and Z-axis coordinate components), the value range of each of the X-axis, Y-axis and Z-axis coordinate components being [0,2] in angstroms; the shape of the fourth activation tensor is 3×X1×Y1×Z1, and the fourth activation tensor includes X1*Y1*Z1 fourth vectors of length 3, each fourth vector being a pseudo peptide bond vector composed of 3 axial vector components (an X-axis vector component, a Y-axis vector component and a Z-axis vector component), the value range of each of the X-axis, Y-axis and Z-axis vector components being [-3.8,3.8] in angstroms; the shape of the fifth activation tensor is N×X1×Y1×Z1, where N is the total number of preset amino acid types and N is an integer greater than 0; the fifth activation tensor includes X1*Y1*Z1 fifth vectors of length N, each fifth vector comprising N amino acid type probabilities, the value range of each amino acid type probability being [0,1];
the obtained second, third, fourth and fifth activation tensors are input, as the four input tensors of the first, second, third and fourth input ends of the first feature fusion layer, into the first feature fusion layer for feature fusion processing to obtain the corresponding first fusion tensor; the shape of the first fusion tensor is H17×X1×Y1×Z1, where H17 is the number of characteristic channels of the first fusion tensor;
and outputting the obtained first fusion tensor as the corresponding C alpha atomic characteristic diagram.
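As a hedged illustration of the four activation-layer outputs described above, the expected tensor shapes (channel counts 1, 3, 3 and N, following the text) can be tabulated by a small helper; the function and key names are hypothetical:

```python
# Hypothetical shape bookkeeping for the four heads of the second
# recognition branch; shapes are (channels, X1, Y1, Z1).
def head_output_shapes(x1, y1, z1, n_amino_types):
    return {
        "second_activation": (1, x1, y1, z1),              # C alpha type probability
        "third_activation": (3, x1, y1, z1),               # C alpha coordinate vector
        "fourth_activation": (3, x1, y1, z1),              # pseudo peptide bond vector
        "fifth_activation": (n_amino_types, x1, y1, z1),   # amino acid type probabilities
    }
```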
Further preferably, the inputting the obtained second, third, fourth and fifth activation tensors as the four input tensors input by the first, second, third and fourth input ends of the first feature fusion layer into the first feature fusion layer to perform feature fusion processing to obtain corresponding first fusion tensors, which specifically includes:
the first feature fusion layer identifies a preset feature fusion mode; the feature fusion mode comprises a first mode and a second mode;
when the characteristic fusion mode is the first mode, tensor splicing processing is carried out on the input second, third, fourth and fifth activation tensors along the characteristic channel direction to obtain the corresponding first fusion tensor; the shape of the first fusion tensor is H17×X1×Y1×Z1, where H17 is the number of characteristic channels of the first fusion tensor, H17=1+3+3+N;
when the feature fusion mode is the second mode, constructing a grid space consisting of X1*Y1*Z1 first unit grids, each with the shape 2Å×2Å×2Å, recorded as the first grid space; establishing a one-to-one correspondence between each first unit grid in the first grid space and the second, third, fourth and fifth vectors in the second, third, fourth and fifth activation tensors; distributing to each first unit grid a first grid characteristic tensor consisting of a first grid C alpha atom type, a first grid C alpha atom coordinate vector, a first grid pseudo peptide bond vector and a first grid amino acid type; initializing the first grid C alpha atom type in each first grid characteristic tensor to a preset non-C alpha atom type, the first grid C alpha atom coordinate vector to a preset invalid coordinate vector, the first grid pseudo peptide bond vector to a preset invalid pseudo peptide bond vector, and the first grid amino acid type to a preset invalid amino acid type; traversing all the first unit grids one by one, taking the currently traversed first unit grid as the corresponding current unit grid and the first grid characteristic tensor corresponding to the current unit grid as the corresponding current grid characteristic tensor; identifying whether the C alpha atom type probability of the second vector corresponding to the current unit grid exceeds a preset C alpha atom type probability threshold; if so, setting the first grid C alpha atom type of the current grid characteristic tensor to a preset C alpha atom type, setting the first grid C alpha atom coordinate vector of the current grid characteristic tensor to the corresponding third vector, setting the first grid pseudo peptide bond vector of the current grid characteristic tensor to the corresponding fourth vector, and setting the first grid amino acid type of the current grid characteristic tensor to the amino acid type corresponding to the maximum probability value in the corresponding fifth vector; at the end of the traversal, composing the corresponding first fusion tensor from the latest X1*Y1*Z1 first grid characteristic tensors; the shape of the first fusion tensor is H17×X1×Y1×Z1, where H17 is the number of characteristic channels of the first fusion tensor, H17=L1+L2+L3+L4, L1 being the length of the first grid C alpha atom type, L2 the length of the first grid C alpha atom coordinate vector, L3 the length of the first grid pseudo peptide bond vector, and L4 the length of the first grid amino acid type; L1 defaults to 1, L2 defaults to 3, L3 defaults to 3, and L4 defaults to 1;
and outputting the obtained first fusion tensor.
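A minimal sketch of the second-mode, per-cell fusion rule described above, assuming a dictionary-based cell representation; the "invalid" initial values (0 and None) and the 0.5 default threshold are illustrative placeholders for the preset values:

```python
# Sketch of fusing one grid cell's second/third/fourth/fifth vectors.
def fuse_cell(p_ca, coord, pseudo_bond, amino_probs, threshold=0.5):
    # Initialized cell: non-C-alpha type, invalid coordinate / bond / amino type.
    cell = {"ca_type": 0, "coord": None, "pseudo_bond": None, "amino_type": None}
    if p_ca > threshold:
        cell["ca_type"] = 1                 # preset C alpha atom type
        cell["coord"] = coord               # corresponding third vector
        cell["pseudo_bond"] = pseudo_bond   # corresponding fourth vector
        # amino acid type = index of the maximum probability in the fifth vector
        cell["amino_type"] = max(range(len(amino_probs)),
                                 key=amino_probs.__getitem__)
    return cell
```

Cells whose C alpha probability stays at or below the threshold keep their invalid initial values, so the fused tensor marks them as containing no C alpha atom.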
Preferably,
L1() is the loss function of the first recognition branch network; in the loss function L1(), M1 is the first activation tensor output by the first recognition branch network, paired against the corresponding first label tensor;
L2() is the loss function of the second recognition branch network; λ1, λ2, λ3 and λ4 are four preset super parameters; the loss function L2() is the sum of the weighted product of the super parameter λ1 and the loss function L21(), the weighted product of the super parameter λ2 and the loss function L22(), the weighted product of the super parameter λ3 and the loss function L23(), and the weighted product of the super parameter λ4 and the loss function L24(), i.e. L2() = λ1·L21() + λ2·L22() + λ3·L23() + λ4·L24(); in the loss functions L21(), L22(), L23() and L24(), (X, Y, Z) is the tensor three-dimensional size, X, Y and Z being integers greater than 0;
in the loss function L21(), M2 is the second activation tensor output by the second activation layer of the second recognition branch network, paired against the corresponding second label tensor; the second activation tensor M2 comprises X*Y*Z second vectors m2,(x,y,z), and the second label tensor comprises X*Y*Z second label vectors; β is a preset cross entropy coefficient;
in the loss function L22(), M3 is the third activation tensor output by the third activation layer of the second recognition branch network, paired against the corresponding third label tensor; the third activation tensor M3 comprises X*Y*Z third vectors m3,(x,y,z), and the third label tensor comprises X*Y*Z third label vectors;
in the loss function L23(), M4 is the fourth activation tensor output by the fourth activation layer of the second recognition branch network, paired against the corresponding fourth label tensor; the fourth activation tensor M4 comprises X*Y*Z fourth vectors m4,(x,y,z), and the fourth label tensor comprises X*Y*Z fourth label vectors;
in the loss function L24(), M5 is the fifth activation tensor output by the fifth activation layer of the second recognition branch network, paired against the corresponding fifth label tensor; N is the total number of preset amino acid types; the fifth activation tensor M5 comprises X*Y*Z fifth vectors v(x,y,z), each fifth vector v(x,y,z) comprising N amino acid type probabilities m5,(x,y,z,i); the fifth label tensor comprises X*Y*Z fifth label vectors, each likewise comprising N amino acid type probabilities;
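The weighted combination of L21() through L24() described above can be sketched as below; the function name is illustrative, and the component losses are passed in as already-computed values:

```python
# Sketch of the second-branch loss: a weighted sum of four component losses,
# each scaled by its preset super parameter lambda1..lambda4.
def total_branch2_loss(l21, l22, l23, l24, lambdas=(1.0, 1.0, 1.0, 1.0)):
    lam1, lam2, lam3, lam4 = lambdas
    return lam1 * l21 + lam2 * l22 + lam3 * l23 + lam4 * l24
```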
Preferably, the training of the target recognition model according to the model loss function specifically includes:
step 111, initializing the count value of the first counter to 0;
step 112, selecting an unused training data record from the preset training data set as a corresponding current training data record; extracting the training electron microscope density map, the main atom probability feature map, the C alpha atom position feature map, the C alpha-C alpha atom pseudo peptide bond vector feature map and the C alpha atom amino acid type probability feature map in the current training data record as the corresponding first training electron microscope density map and the corresponding first, second, third, fourth and fifth label tensors; the training data set comprises a plurality of the training data records; each training data record comprises a training electron microscope density map and the corresponding main atom probability feature map, C alpha atom position feature map, C alpha-C alpha atom pseudo peptide bond vector feature map and C alpha atom amino acid type probability feature map;
step 113, inputting the first training electron microscope density map into the trunk feature extraction network of the target recognition model, and performing feature extraction processing on the first training electron microscope density map by the trunk feature extraction network to generate corresponding first training branch tensors and second training branch tensors;
step 114, inputting the first training branch tensor into the ninth convolution layer of the first recognition branch network of the target recognition model to perform convolution operation processing to obtain a corresponding ninth convolution characteristic tensor; inputting the ninth convolution feature tensor into the first activation layer in the first recognition branch network to perform the corresponding feature activation operation to obtain the corresponding first activation tensor M1;
step 115, inputting the second training branch tensor into the tenth, eleventh, twelfth and thirteenth convolution layers of the second recognition branch network of the target recognition model respectively, and performing corresponding convolution operation processing to obtain corresponding tenth, eleventh, twelfth and thirteenth convolution feature tensors; inputting the tenth, eleventh, twelfth and thirteenth convolution feature tensors into the corresponding second, third, fourth and fifth activation layers in the second recognition branch network to perform corresponding feature activation operations to obtain corresponding second, third, fourth and fifth activation tensors M2, M3, M4 and M5;
step 116, inputting the first, second, third, fourth and fifth activation tensors M1, M2, M3, M4 and M5 together with the corresponding first, second, third, fourth and fifth label tensors into the model loss function L for loss value calculation to obtain a corresponding first loss value;
step 117, identifying whether the first loss value falls within a set loss value range; if so, adding 1 to the count value of the first counter and proceeding to step 118; if not, substituting the current model parameters of the target recognition model into the model loss function L to obtain a corresponding first objective function, adjusting the model parameters of the target recognition model in the direction that drives the function value of the first objective function toward its minimum to obtain corresponding updated model parameters, resetting the current model parameters of the target recognition model with the updated model parameters, and returning to step 113 when the resetting is finished;
step 118, identifying whether the count value of the first counter exceeds a preset counter threshold; if yes, go to step 119; if not, returning to step 112;
step 119, stopping the model training of the round and confirming that the model training is successful.
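Steps 111 to 119 can be sketched as the following loop; `model`, `loss_fn`, `loss_ok` and `optimize_step` are assumed stand-ins for the target recognition model, the model loss function L, the loss-range check of step 117 and the parameter update, respectively:

```python
# Hedged sketch of the training round in steps 111-119.
def train_round(dataset, model, loss_fn, loss_ok, counter_threshold, optimize_step):
    counter = 0                                   # step 111: first counter := 0
    for density_map, labels in dataset:           # step 112: next unused record
        while True:
            outputs = model(density_map)          # steps 113-115: forward pass
            loss = loss_fn(outputs, labels)       # step 116: first loss value
            if loss_ok(loss):                     # step 117: loss within range?
                counter += 1
                break
            optimize_step(loss)                   # update parameters, retry (step 113)
        if counter > counter_threshold:           # step 118: counter exceeds threshold?
            return True                           # step 119: training successful
    return False                                  # round ended without success
```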
A second aspect of the embodiment of the present invention provides an apparatus for implementing the method for processing an object recognition model of an electron microscope density map according to the first aspect, where the apparatus includes: the system comprises a model setting module, a model training module and a model application module;
the model setting module is used for determining a target recognition model; determining a model loss function of the target recognition model;
the model training module is used for carrying out model training on the target recognition model according to the model loss function;
the model application module is used for acquiring a first electron microscope density map after the model training is successful, and performing key target identification processing on the first electron microscope density map by using the target identification model to obtain a corresponding first C alpha atomic characteristic map and a corresponding first trunk atomic characteristic map.
A third aspect of an embodiment of the present invention provides an electronic device, including: memory, processor, and transceiver;
the processor is configured to couple to the memory, and read and execute the instructions in the memory, so as to implement the method steps described in the first aspect;
the transceiver is coupled to the processor and is controlled by the processor to transmit and receive messages.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer instructions that, when executed by a computer, cause the computer to perform the instructions of the method of the first aspect.
The embodiment of the invention provides a processing method and device of an electron microscope density map target recognition model, electronic equipment and a computer readable storage medium. The invention provides a target recognition model which is used for carrying out key target recognition processing on an input electron microscope density map, outputting a corresponding C alpha atomic characteristic map and a corresponding trunk atomic characteristic map, carrying out model loss function design on the target recognition model, training the model based on the designed loss function, and carrying out key target recognition processing on the input electron microscope density map based on the target recognition model after training maturity. The target recognition model provided by the invention improves the processing efficiency and the processing quality of the key target recognition processing of the electron microscope density map.
Drawings
FIG. 1 is a schematic diagram of a method for processing an object recognition model of an electron microscope density map according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target recognition model according to an embodiment of the present invention;
FIG. 3 is a block diagram of a processing device for an electron microscope density map object recognition model according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
An embodiment of the present invention provides a method for processing an electron microscope density map target recognition model, as shown in fig. 1, which is a schematic diagram of a method for processing an electron microscope density map target recognition model according to an embodiment of the present invention, the method mainly includes the following steps:
step 1, determining a target recognition model;
the target recognition model determined by the embodiment of the present invention is shown in fig. 2, which is a schematic structural diagram of the target recognition model provided by the first embodiment of the present invention, and includes a trunk feature extraction network, a first recognition branch network, and a second recognition branch network; the trunk feature extraction network is used for carrying out feature extraction processing on the input electron microscope density map to generate corresponding first branch tensor and second branch tensor; the first recognition branch network is used for carrying out the trunk atomic characteristic target recognition processing according to the first branch tensor to generate a corresponding trunk atomic characteristic diagram; and the second recognition branch network is used for performing C alpha atomic characteristic target recognition processing according to the second branch tensor to generate a corresponding C alpha atomic characteristic diagram.
First, backbone feature extraction network
The trunk feature extraction network comprises eight convolution layers, three pooling layers, three up-sampling layers and three splicing layers; the eight convolution layers are respectively a first, a second, a third, a fourth, a fifth, a sixth, a seventh and an eighth convolution layer; the three pooling layers are respectively a first pooling layer, a second pooling layer and a third pooling layer; the three upsampling layers are respectively a first upsampling layer, a second upsampling layer and a third upsampling layer; the three splice layers are respectively a first splice layer, a second splice layer and a third splice layer. Here, the first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers are used to perform feature extraction; the first, second and third pooling layers are used for tensor downsampling; the first, second and third upsampling layers are used for tensor upsampling; the first, second and third stitching layers are used for tensor stitching.
The connection relation of each layer in the trunk feature extraction network is as follows: the input end of the first convolution layer is a model input end of the target recognition model, and the output end is respectively connected with the input end of the second convolution layer and the first input end of the third splicing layer; the output end of the second convolution layer is connected with the input end of the first pooling layer; the output end of the first pooling layer is respectively connected with the input end of the third convolution layer and the first input end of the second splicing layer; the output end of the third convolution layer is connected with the input end of the second pooling layer; the output end of the second pooling layer is respectively connected with the input end of the fourth convolution layer and the first input end of the first splicing layer; the output end of the fourth convolution layer is connected with the input end of the third pooling layer; the output end of the third pooling layer is connected with the input end of the fifth convolution layer; the output end of the fifth convolution layer is connected with the input end of the first up-sampling layer; the output end of the first upsampling layer is connected with the second input end of the first splicing layer; the output end of the first splicing layer is connected with the input end of the sixth convolution layer; the output end of the sixth convolution layer is connected with the input end of the second up-sampling layer; the output end of the second upsampling layer is connected with the second input end of the second splicing layer; the output end of the second splicing layer is connected with the input end of the seventh convolution layer; the output end of the seventh convolution layer is respectively connected with the input end of the third upsampling layer and the input end of the second identification branch network; the output end of the third 
upsampling layer is connected with the second input end of the third splicing layer; the output end of the third splicing layer is connected with the input end of the first identification branch network.
It should be noted that the first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers are each formed by sequentially connecting a plurality of first convolution activation modules; each first convolution activation module comprises a first three-dimensional convolution unit (Conv3D) and a first ReLU activation function unit, and the output end of the first three-dimensional convolution unit of each first convolution activation module is connected with the input end of the corresponding first ReLU activation function unit; the input end of the first three-dimensional convolution unit of the first convolution activation module of each convolution layer is the input end of the current convolution layer, the output end of the first ReLU activation function unit of the former first convolution activation module is connected with the input end of the first three-dimensional convolution unit of the latter first convolution activation module, and the output end of the first ReLU activation function unit of the last first convolution activation module is the output end of the current convolution layer. Here, each convolution layer of the backbone feature extraction network according to the embodiment of the present invention is in effect a structure of the form (Conv3D + ReLU activation function) + … + (Conv3D + ReLU activation function).
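The (Conv3D + ReLU) chaining described above can be sketched framework-agnostically; here `convs` is a list of stand-in callables in place of real three-dimensional convolution units, and a scalar stands in for a tensor:

```python
def relu(x):
    # ReLU activation: pass positive values, zero out the rest.
    return x if x > 0 else 0.0

def make_conv_layer(convs):
    """Chain (Conv3D stand-in -> ReLU) pairs, as each backbone conv layer does."""
    def layer(x):
        for conv in convs:
            x = relu(conv(x))   # each module: convolution then ReLU
        return x
    return layer
```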
The shape formats of the input and output tensors of the first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers, the first, second and third pooling layers, the first, second and third upsampling layers, and the first, second and third splicing layers are all H×X×Y×Z, where H is the number of characteristic channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y and Z are integers greater than zero.
In the first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers, the tensor three-dimensional sizes of the input and output tensors of each convolution layer are consistent. In the first, second and third pooling layers, the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the output tensor of each pooling layer are respectively 1/2 of the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the input tensor of the current pooling layer.
In the first, second and third upsampling layers, the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the output tensor of each upsampling layer are 2 times the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the input tensor of the current upsampling layer, respectively.
In the first, second and third splicing layers, the tensor three-dimensional sizes of the two input tensors input by the corresponding first and second input ends of each splicing layer are consistent, the tensor three-dimensional size of the output tensor of each splicing layer is consistent with the tensor three-dimensional size of the corresponding two input tensors, and the number of characteristic channels of the output tensor of each splicing layer is the sum of the number of characteristic channels of the corresponding two input tensors.
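The three shape rules above (pooling halves each spatial axis, upsampling doubles each spatial axis, splicing concatenates along the channel direction) can be sketched as:

```python
# Shape bookkeeping sketches; shapes are (H, X, Y, Z) tuples.
def pool_shape(h, x, y, z):
    # Pooling keeps channels, halves each spatial axis.
    return (h, x // 2, y // 2, z // 2)

def upsample_shape(h, x, y, z):
    # Upsampling keeps channels, doubles each spatial axis.
    return (h, x * 2, y * 2, z * 2)

def splice_shape(a, b):
    # Splicing requires matching spatial sizes and sums the channel counts.
    assert a[1:] == b[1:], "spatial sizes must match"
    return (a[0] + b[0],) + a[1:]
```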
The aforementioned trunk feature extraction network is used for performing feature extraction processing on the input electron microscope density map to generate corresponding first and second branch tensors, and specifically comprises:
step A1, a trunk feature extraction network takes an input electron microscope density map as a corresponding current electron microscope density map;
wherein the shape of the current electron microscope density map is H0×X0×Y0×Z0, H0 being the number of characteristic channels of the current electron microscope density map and (X0, Y0, Z0) the tensor three-dimensional size of the current electron microscope density map;
here, the electron microscope density map input to the object recognition model of the embodiment of the present invention is a three-dimensional map tensor, and the three-dimensional space corresponding to the three-dimensional map tensor can be regarded as a grid space consisting of X0*Y0*Z0 unit grids, each with the shape 1Å×1Å×1Å; each unit grid in the grid space corresponds to one grid characteristic vector, and the vector length of the grid characteristic vector is H0;
Step A2, inputting a current electron microscope density map into a first convolution layer for convolution operation processing to obtain a corresponding first convolution characteristic tensor;
wherein the shape of the first convolution characteristic tensor is H1×X0×Y0×Z0, H1 being the number of feature channels of the first convolution feature tensor;
step A3, inputting the first convolution characteristic tensor into a second convolution layer for convolution operation processing to obtain a corresponding second convolution characteristic tensor;
wherein the shape of the second convolution characteristic tensor is H2×X0×Y0×Z0, H2 being the number of feature channels of the second convolution feature tensor;
step A4, inputting the second convolution characteristic tensor into a first pooling layer for downsampling treatment to obtain a corresponding first pooling characteristic tensor;
wherein the shape of the first pooling feature tensor is H2×X1×Y1×Z1, (X1, Y1, Z1) being the tensor three-dimensional size of the first pooling feature tensor, X1=X0/2, Y1=Y0/2, Z1=Z0/2;
Step A5, inputting the first pooling characteristic tensor into a third convolution layer for convolution operation processing to obtain a corresponding third convolution characteristic tensor;
wherein the shape of the third convolution characteristic tensor is H3×X1×Y1×Z1, H3 being the number of feature channels of the third convolution feature tensor;
step A6, inputting the third convolution characteristic tensor into a second pooling layer for downsampling treatment to obtain a corresponding second pooling characteristic tensor;
wherein the shape of the second pooling feature tensor is H3×X2×Y2×Z2, (X2, Y2, Z2) being the tensor three-dimensional size of the second pooling feature tensor, X2=X1/2, Y2=Y1/2, Z2=Z1/2;
Step A7, inputting the second pooling characteristic tensor into a fourth convolution layer for convolution operation processing to obtain a corresponding fourth convolution characteristic tensor;
wherein the shape of the fourth convolution characteristic tensor is H4×X2×Y2×Z2, H4 being the number of feature channels of the fourth convolution feature tensor;
step A8, inputting the fourth convolution characteristic tensor into a third pooling layer for downsampling treatment to obtain a corresponding third pooling characteristic tensor;
wherein the shape of the third pooling feature tensor is H4×X3×Y3×Z3, (X3, Y3, Z3) being the tensor three-dimensional size of the third pooling feature tensor, X3=X2/2, Y3=Y2/2, Z3=Z2/2;
Step A9, inputting the third pooling characteristic tensor into a fifth convolution layer for convolution operation processing to obtain a corresponding fifth convolution characteristic tensor;
wherein the shape of the fifth convolution characteristic tensor is H5×X3×Y3×Z3, H5 being the number of feature channels of the fifth convolution feature tensor;
step A10, inputting a fifth convolution characteristic tensor into a first upsampling layer for upsampling treatment to obtain a corresponding first upsampling characteristic tensor;
wherein the shape of the first upsampling feature tensor is H5×(X3*2)×(Y3*2)×(Z3*2) = H5×X2×Y2×Z2;
Step A11, a second pooling characteristic tensor and a first upsampling characteristic tensor are used as two input tensors input by a first input end and a second input end of a first splicing layer, and are input into the first splicing layer to be subjected to tensor splicing along the characteristic channel direction so as to obtain a corresponding first splicing tensor;
wherein the shape of the first splice tensor is H6×X2×Y2×Z2, H6 being the number of characteristic channels of the first splice tensor, H6=H3+H5;
Step A12, inputting the first spliced tensor into a sixth convolution layer for convolution operation processing to obtain a corresponding sixth convolution characteristic tensor;
wherein the shape of the sixth convolution characteristic tensor is H7×X2×Y2×Z2, H7 being the number of feature channels of the sixth convolution feature tensor;
step A13, inputting a sixth convolution characteristic tensor into a second upsampling layer for upsampling treatment to obtain a corresponding second upsampling characteristic tensor;
wherein the shape of the second upsampling feature tensor is H7×(X2*2)×(Y2*2)×(Z2*2) = H7×X1×Y1×Z1;
Step A14, inputting the first pooling characteristic tensor and the second upsampling characteristic tensor as two input tensors input by a first input end and a second input end of a second splicing layer into the second splicing layer, and performing tensor splicing processing along the characteristic channel direction to obtain a corresponding second splicing tensor;
wherein the shape of the second splice tensor is H8×X1×Y1×Z1, H8 being the number of characteristic channels of the second splice tensor, H8=H2+H7;
Step A15, inputting the second spliced tensor into a seventh convolution layer for convolution operation processing to obtain a corresponding seventh convolution characteristic tensor;
wherein the shape of the seventh convolution characteristic tensor is H9×X1×Y1×Z1, H9 being the number of feature channels of the seventh convolution feature tensor;
Step A16, inputting the seventh convolution characteristic tensor into a third upsampling layer for upsampling treatment to obtain a corresponding third upsampling characteristic tensor;
wherein the shape of the third upsampling feature tensor is H9×(X1*2)×(Y1*2)×(Z1*2) = H9×X0×Y0×Z0;
Step A17, a first convolution characteristic tensor and a third up-sampling characteristic tensor are used as two input tensors input by a first input end and a second input end of a third splicing layer, and are input into the third splicing layer to be subjected to tensor splicing along the characteristic channel direction so as to obtain a corresponding third splicing tensor;
wherein the third splice tensor has the shape H10×X0×Y0×Z0, H10 being the number of feature channels of the third splice tensor, H10 = H1 + H9;
Step A18, inputting the third spliced tensor into an eighth convolution layer for convolution operation processing to obtain a corresponding eighth convolution characteristic tensor;
wherein the eighth convolution feature tensor has the shape H11×X0×Y0×Z0, H11 being the number of feature channels of the eighth convolution feature tensor;
step A19, taking the obtained eighth convolution characteristic tensor as a corresponding first branch tensor, and taking the obtained seventh convolution characteristic tensor as a corresponding second branch tensor; and outputs the obtained first and second branch tensors.
Here, the shape of the first branch tensor output by the trunk feature extraction network in the embodiment of the present invention is H11×X0×Y0×Z0; the three-dimensional space corresponding to the first branch tensor can be regarded as a grid space consisting of X0*Y0*Z0 unit grids of shape 1Å×1Å×1Å, wherein each unit grid in the grid space corresponds to one grid feature vector whose vector length is H11; the first branch tensor is input into the first identification branch network to perform trunk atomic feature map identification processing. The shape of the second branch tensor output by the trunk feature extraction network of the embodiment of the present invention is H9×X1×Y1×Z1; the three-dimensional space corresponding to the second branch tensor can be regarded as a grid space consisting of X1*Y1*Z1, namely (X0/2)*(Y0/2)*(Z0/2), unit grids of shape 2Å×2Å×2Å, wherein each unit grid in the grid space corresponds to one grid feature vector whose vector length is H9; the second branch tensor is input into the second identification branch network to perform Cα atomic feature map identification processing.
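The decoder-style flow of steps A13 to A19 (upsample, splice along the feature-channel direction, convolve) can be sketched with numpy shape bookkeeping; the channel counts and spatial sizes below are hypothetical placeholders, and nearest-neighbour repetition stands in for the actual upsampling layers:

```python
import numpy as np

def upsample2x(t):
    # Double each spatial axis of an (H, X, Y, Z) tensor by nearest-neighbour
    # repetition, mirroring the 2x upsampling layers.
    for axis in (1, 2, 3):
        t = np.repeat(t, 2, axis=axis)
    return t

def splice_channels(a, b):
    # Tensor splicing along the feature-channel direction (axis 0).
    return np.concatenate([a, b], axis=0)

# Hypothetical sizes: X2 = Y2 = Z2 = 4, hence X1 = 8 and X0 = 16.
H1, H2, H7, H9 = 16, 32, 64, 48
sixth_conv = np.zeros((H7, 4, 4, 4))          # sixth convolution feature tensor
second_up = upsample2x(sixth_conv)            # step A13: H7 x X1 x Y1 x Z1
first_pool = np.zeros((H2, 8, 8, 8))
second_splice = splice_channels(first_pool, second_up)  # step A14: H8 = H2 + H7
```

After the seventh convolution layer maps H8 back to H9 channels, the same pattern repeats with the first convolution feature tensor (H1 channels) to give H10 = H1 + H9 at full resolution.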
(II) first identification branch network
The first identified branch network includes a ninth convolutional layer and a first active layer.
The connection relation of each layer in the first identification branch network is as follows: the input end of the ninth convolution layer is the input end of the first identification branch network, and the output end is connected with the input end of the first activation layer; the output end of the first activation layer is the output end of the first identification branch network.
It should be noted that, the ninth convolution layer is formed by sequentially connecting a plurality of second convolution activating modules; the second convolution activation module, similar to the first convolution activation module, comprises a second three-dimensional convolution unit (Conv 3D) and a second ReLU activation function unit, and the output end of the second three-dimensional convolution unit of each second convolution activation module is connected with the input end of the corresponding second ReLU activation function unit; the input end of the second three-dimensional convolution unit of the first second convolution activation module of the ninth convolution layer is the input end of the ninth convolution layer, the output end of the second ReLU activation function unit of the former second convolution activation module is connected with the input end of the second three-dimensional convolution unit of the latter second convolution activation module, and the output end of the second ReLU activation function unit of the last second convolution activation module is the output end of the ninth convolution layer. Here, the internal structure of the ninth convolution layer of the first identification branch network according to the embodiment of the present invention is similar to the aforementioned first to eighth convolution layers as a sequential connection structure of (conv3d+relu activation function) + … + (conv3d+relu activation function).
The shape formats of the input tensor and the output tensor of the ninth convolution layer and the first activation layer are both H×X×Y×Z, where H is the number of feature channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y and Z are integers greater than zero; the tensor three-dimensional sizes of the input tensor and the output tensor of the ninth convolution layer are consistent; the tensor three-dimensional sizes of the input and output tensors of the first activation layer are consistent.
It should be noted that, the activation function used by the first activation layer is specifically a sigmoid function, and the value range of the function output value is [0,1].
The aforementioned first identifying branch network is configured to perform a trunk atomic feature object identifying process according to a first branch tensor to generate a corresponding trunk atomic feature map, and specifically includes:
step B1, a first identification branch network inputs a first branch tensor into a ninth convolution layer to carry out convolution operation processing to obtain a corresponding ninth convolution characteristic tensor;
wherein the first branch tensor has the shape H11×X0×Y0×Z0, H11 being the number of feature channels of the first branch tensor and (X0, Y0, Z0) the tensor three-dimensional size of the first branch tensor; the ninth convolution feature tensor has the shape H12×X0×Y0×Z0, H12 being the number of feature channels of the ninth convolution feature tensor;
here, the ninth convolution layer of the first identification branch network according to the embodiment of the present invention performs the convolution operation on the atomic features in each unit grid of the grid space corresponding to the first branch tensor, that is, the grid space formed by X0*Y0*Z0 unit grids of shape 1Å×1Å×1Å;
step B2, inputting the ninth convolution characteristic tensor into the first activation layer to perform characteristic activation operation to generate a corresponding first activation tensor;
wherein the first activation tensor has the shape 3×X0×Y0×Z0; the first activation tensor includes X0*Y0*Z0 first vectors of length 3; each first vector includes 3 atom type probabilities, namely the probability of the C atom type, the probability of the Cα atom type and the probability of the N atom type, and the value range of each atom type probability is [0,1];
Here, the first activation layer of the first identification branch network in the embodiment of the present invention is used to perform trunk atom (C atom, Cα atom and N atom) target identification on the atoms in each unit grid of the grid space corresponding to the first branch tensor, that is, the grid space formed by X0*Y0*Z0 unit grids of shape 1Å×1Å×1Å, and to output the identification probabilities corresponding to the three trunk atom types, namely the C atom type probability, the Cα atom type probability and the N atom type probability;
and B3, outputting the first activation tensor as a corresponding trunk atomic feature map.
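As a minimal illustration of step B2, applying a per-voxel sigmoid to a 3-channel tensor yields, for every unit grid, a length-3 first vector of C, Cα and N type probabilities in [0,1]; the grid size and logit values below are arbitrary placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical ninth-convolution output with 3 feature channels over a 2x2x2 grid.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2, 2, 2))

first_activation = sigmoid(logits)           # step B2: shape 3 x X0 x Y0 x Z0
first_vector = first_activation[:, 0, 0, 0]  # C, C-alpha, N probabilities of one grid
```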
(III) second identification branch network
The second identification branch network comprises four convolution layers, four activation layers and a first characteristic fusion layer; the four convolution layers are tenth, eleventh, twelfth and thirteenth convolution layers, respectively, and the four activation layers are second, third, fourth and fifth activation layers, respectively.
The connection relation of each layer in the second identification branch network is as follows: the input ends of the tenth, eleventh, twelfth and thirteenth convolution layers are the input ends of the second identification branch network; the output end of the tenth convolution layer is connected with the input end of the second activation layer; the output end of the second activation layer is connected with the first input end of the first feature fusion layer; the output end of the eleventh convolution layer is connected with the input end of the third activation layer; the output end of the third activation layer is connected with the second input end of the first feature fusion layer; the output end of the twelfth convolution layer is connected with the input end of the fourth activation layer; the output end of the fourth activation layer is connected with the third input end of the first feature fusion layer; the output end of the thirteenth convolution layer is connected with the input end of the fifth activation layer; the output end of the fifth activation layer is connected with the fourth input end of the first feature fusion layer; the output end of the first feature fusion layer is the output end of the second identification branch network.
It should be noted that, the tenth, eleventh, twelfth and thirteenth convolution layers are each formed by sequentially connecting a plurality of third convolution activation modules; the third convolution activation module, similar to the first convolution activation module described above, includes a third three-dimensional convolution unit (Conv 3D) and a third ReLU activation function unit, where an output end of the third three-dimensional convolution unit of each third convolution activation module is connected to an input end of the corresponding third ReLU activation function unit; in the tenth, eleventh, twelfth and thirteenth convolution layers, the input end of the third three-dimensional convolution unit of the first third convolution activation module of each convolution layer is the input end of the current convolution layer, the output end of the third ReLU activation function unit of the previous third convolution activation module is connected with the input end of the third three-dimensional convolution unit of the next third convolution activation module, and the output end of the third ReLU activation function unit of the last third convolution activation module is the output end of the current convolution layer. Here, the internal structures of the tenth, eleventh, twelfth, and thirteenth convolution layers of the second identification branch network according to the embodiment of the present invention are sequential connection structures like the aforementioned first to eighth convolution layers that are (conv3d+relu activation function) + … + (conv3d+relu activation function).
The shape formats of the input and output tensors of the tenth, eleventh, twelfth and thirteenth convolution layers and of the second, third, fourth and fifth activation layers are all H×X×Y×Z, where H is the number of feature channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y and Z are integers greater than zero; in the tenth, eleventh, twelfth and thirteenth convolution layers, the tensor three-dimensional sizes of the input and output tensors of each convolution layer remain the same; the tensor three-dimensional sizes of the input tensor and the output tensor of each of the second, third, fourth and fifth activation layers are consistent.
It should be noted that the activation function used by the second activation layer is specifically a sigmoid function, and the value range of the function output value is [0,1]; the activation function used by the third activation layer is specifically a 2·sigmoid function, and the value range of the function output value is [0,2]; the activation function used by the fourth activation layer is specifically a 3.8·tanh function, and the value range of the function output value is [-3.8, 3.8]; the activation function used by the fifth activation layer is specifically a softmax function, and the value range of the function output value is [0,1].
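The four activation functions and their output ranges can be sketched directly with numpy; the softmax is applied along the amino-acid-type channel axis, and the sample inputs below are arbitrary:

```python
import numpy as np

def sigmoid(x):          # second activation layer: output range [0, 1]
    return 1.0 / (1.0 + np.exp(-x))

def two_sigmoid(x):      # third activation layer: output range [0, 2]
    return 2.0 * sigmoid(x)

def scaled_tanh(x):      # fourth activation layer: output range [-3.8, 3.8]
    return 3.8 * np.tanh(x)

def softmax(x, axis=0):  # fifth activation layer: probabilities summing to 1
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

x = np.linspace(-10.0, 10.0, 101)
coords = two_sigmoid(x)        # candidate C-alpha coordinate components
bonds = scaled_tanh(x)         # candidate pseudo-peptide-bond components
aa_probs = softmax(np.array([1.0, 2.0, 3.0]))  # toy N=3 amino-acid distribution
```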
The aforementioned second recognition branch network is configured to perform a C alpha atom feature object recognition process according to a second branch tensor to generate a corresponding C alpha atom feature map, and specifically includes:
step C1, the second recognition branch network inputs a second branch tensor into a tenth convolution layer, an eleventh convolution layer, a twelfth convolution layer and a thirteenth convolution layer respectively to carry out corresponding convolution operation processing to obtain corresponding tenth convolution characteristic tensors, eleventh convolution characteristic tensors, twelfth convolution characteristic tensors and thirteenth convolution characteristic tensors;
wherein the second branch tensor has the shape H9×X1×Y1×Z1, H9 being the number of feature channels of the second branch tensor and (X1, Y1, Z1) the tensor three-dimensional size of the second branch tensor; the tenth convolution feature tensor has the shape H13×X1×Y1×Z1, H13 being the number of feature channels of the tenth convolution feature tensor; the eleventh convolution feature tensor has the shape H14×X1×Y1×Z1, H14 being the number of feature channels of the eleventh convolution feature tensor; the twelfth convolution feature tensor has the shape H15×X1×Y1×Z1, H15 being the number of feature channels of the twelfth convolution feature tensor; the thirteenth convolution feature tensor has the shape H16×X1×Y1×Z1, H16 being the number of feature channels of the thirteenth convolution feature tensor;
here, the tenth convolution layer of the second identification branch network according to the embodiment of the present invention performs the convolution operation on the atomic features in each unit grid of the grid space corresponding to the second branch tensor, that is, the grid space formed by X1*Y1*Z1 unit grids of shape 2Å×2Å×2Å; the eleventh convolution layer is used for performing the convolution operation on the atomic position features in each unit grid of the grid space corresponding to the second branch tensor; the twelfth convolution layer is used for performing the convolution operation on the pseudo-peptide-bond features between the atoms in each unit grid and their neighboring atoms in the grid space corresponding to the second branch tensor; the thirteenth convolution layer is used for performing the convolution operation on the amino acid type features of the amino acid fragments in which the atoms in each unit grid are located in the grid space corresponding to the second branch tensor;
Step C2, inputting the tenth, eleventh, twelfth and thirteenth convolution feature tensors into the corresponding second, third, fourth and fifth activation layers respectively to perform corresponding feature activation operation to obtain corresponding second, third, fourth and fifth activation tensors;
wherein the second activation tensor has the shape 1×X1×Y1×Z1; the second activation tensor includes X1*Y1*Z1 second vectors of length 1, each second vector comprising 1 Cα atom type probability whose value range is [0,1]; here, the second activation layer of the second identification branch network in the embodiment of the present invention is configured to perform Cα atom target identification on the atoms in each unit grid of the grid space corresponding to the second branch tensor and to output the corresponding Cα atom type probability vector, namely the second vector;
the third activation tensor has the shape 3×X1×Y1×Z1; the third activation tensor includes X1*Y1*Z1 third vectors of length 3, each third vector being a Cα atom coordinate vector consisting of 3 axial coordinate components, namely the X-axis, Y-axis and Z-axis coordinate components, each with value range [0,2] in units of angstroms; here, the third activation layer of the second identification branch network in the embodiment of the present invention is configured to identify the Cα atom coordinates in each unit grid of the grid space corresponding to the second branch tensor and to output the corresponding Cα atom coordinate vector, namely the third vector; the coordinate components of the third vector in the 3 axial directions (X-axis, Y-axis and Z-axis coordinate components) are a set of relative coordinate components whose origin is one of the eight vertices of the current unit grid, which may be uniformly set in advance;
The fourth activation tensor has the shape 3×X1×Y1×Z1; the fourth activation tensor includes X1*Y1*Z1 fourth vectors of length 3, each fourth vector being a pseudo-peptide-bond vector consisting of 3 axial vector components, namely the X-axis, Y-axis and Z-axis vector components, each with value range [-3.8, 3.8] in units of angstroms; here, the fourth activation layer of the second identification branch network according to the embodiment of the present invention is used to identify, in each unit grid of the grid space corresponding to the second branch tensor, the pseudo peptide bond, that is, the connection bond between the Cα atom and the neighboring Cα atom, and to output the corresponding pseudo-peptide-bond vector, namely the fourth vector; the origin of the vector components of the fourth vector in the 3 axial directions (X-axis, Y-axis and Z-axis vector components) is the Cα atom coordinate in the corresponding unit grid;
the fifth activation tensor has the shape N×X1×Y1×Z1, N being the preset total number of amino acid types, N being an integer greater than 0; the fifth activation tensor includes X1*Y1*Z1 fifth vectors of length N, each fifth vector comprising N amino acid type probabilities, each with value range [0,1]; here, the fifth activation layer of the second identification branch network in the embodiment of the present invention is configured to perform target identification on the amino acid type of the amino acid fragment in which the Cα atom in each unit grid of the grid space corresponding to the second branch tensor is located, and to output the corresponding identification vector, namely the fifth vector; because the embodiment of the present invention supports the identification of multiple (N) amino acid types, the fifth vector is composed of N amino acid type probabilities;
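Reading the third and fourth vectors back into absolute coordinates can be sketched as follows; the choice of the lowest-corner vertex as the shared grid origin is an assumption, since the text only requires that one of the eight vertices be fixed uniformly in advance:

```python
import numpy as np

EDGE = 2.0  # unit-grid edge length in angstroms (2A x 2A x 2A grids)

def absolute_ca(grid_index, third_vector):
    # Absolute C-alpha position = grid origin (assumed lowest corner) plus the
    # relative coordinate components in [0, 2] angstroms from the third vector.
    return np.asarray(grid_index, dtype=float) * EDGE + np.asarray(third_vector)

ca = absolute_ca((3, 1, 0), (0.5, 1.2, 1.9))
# A neighbouring C-alpha is reached by adding the fourth (pseudo-peptide-bond)
# vector, whose components lie in [-3.8, 3.8] angstroms.
neighbour = ca + np.array([3.8, 0.0, 0.0])
```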
step C3, the obtained second, third, fourth and fifth activation tensors are used as four input tensors input by the first, second, third and fourth input ends of the first feature fusion layer, and are input into the first feature fusion layer to be subjected to feature fusion processing to obtain corresponding first fusion tensors;
wherein the first fusion tensor has the shape H17×X1×Y1×Z1, H17 being the number of feature channels of the first fusion tensor;
the method specifically comprises the following steps: step C31, the first feature fusion layer identifies a preset feature fusion mode;
the feature fusion mode comprises a first mode and a second mode;
step C32, when the characteristic fusion mode is the first mode, performing tensor splicing processing on the second, third, fourth and fifth input activation tensors along the characteristic channel direction to obtain corresponding first fusion tensors;
Wherein the first fusion tensor has the shape H17×X1×Y1×Z1, H17 being the number of feature channels of the first fusion tensor, H17 = 1 + 3 + 3 + N;
Here, when the feature fusion mode is the first mode, only four input tensors (second, third, fourth and fifth activation tensors) are needed to be spliced;
Step C33, when the feature fusion mode is the second mode, constructing a grid space consisting of X1*Y1*Z1 first unit grids of shape 2Å×2Å×2Å, recorded as the first grid space; establishing a one-to-one correspondence between each first unit grid in the first grid space and the second, third, fourth and fifth vectors in the second, third, fourth and fifth activation tensors; assigning to each first unit grid a first grid feature tensor consisting of a first grid Cα atom type, a first grid Cα atom coordinate vector, a first grid pseudo-peptide-bond vector and a first grid amino acid type; initializing the first grid Cα atom type in each first grid feature tensor to a preset non-Cα atom type, initializing the first grid Cα atom coordinate vector to a preset invalid coordinate vector, initializing the first grid pseudo-peptide-bond vector to a preset invalid pseudo-peptide-bond vector, and initializing the first grid amino acid type to a preset invalid amino acid type; traversing all first unit grids one by one, taking the first unit grid currently traversed as the corresponding current unit grid and the first grid feature tensor corresponding to the current unit grid as the corresponding current grid feature tensor, and identifying whether the Cα atom type probability of the second vector corresponding to the current unit grid exceeds a preset Cα atom type probability threshold; if so, setting the first grid Cα atom type of the current grid feature tensor to the preset Cα atom type, setting the first grid Cα atom coordinate vector of the current grid feature tensor to the corresponding third vector, setting the first grid pseudo-peptide-bond vector of the current grid feature tensor to the corresponding fourth vector, and setting the first grid amino acid type of the current grid feature tensor to the amino acid type corresponding to the amino acid type probability with the maximum probability value in the corresponding fifth vector; and at the end of the traversal, forming the corresponding first fusion tensor from the latest X1*Y1*Z1 first grid feature tensors;
wherein the first fusion tensor has the shape H17×X1×Y1×Z1, H17 being the number of feature channels of the first fusion tensor, H17 = L1 + L2 + L3 + L4, where L1 is the length of the first grid Cα atom type, L2 the length of the first grid Cα atom coordinate vector, L3 the length of the first grid pseudo-peptide-bond vector, and L4 the length of the first grid amino acid type; L1 defaults to 1, L2 defaults to 3, L3 defaults to 3, and L4 defaults to 1;
when the feature fusion mode is the second mode, the corresponding first grid space is constructed from the grid space corresponding to the second branch tensor, and the first grid space can be divided into X1*Y1*Z1 first unit grids of shape 2Å×2Å×2Å, the first unit grids being in one-to-one correspondence with the second, third, fourth and fifth vectors of the second, third, fourth and fifth activation tensors; all the first unit grids are traversed, and each first unit grid whose corresponding second vector has a Cα atom type probability exceeding the Cα atom type probability threshold is regarded as a Cα atom grid; the first grid Cα atom type corresponding to the Cα atom grid is set to the Cα atom type, the third vector corresponding to the Cα atom grid in the third activation tensor is extracted as the corresponding first grid Cα atom coordinate vector, the fourth vector corresponding to the Cα atom grid in the fourth activation tensor is extracted as the corresponding first grid pseudo-peptide-bond vector, and the amino acid type corresponding to the maximum of the N amino acid type probabilities of the fifth vector corresponding to the Cα atom grid in the fifth activation tensor is extracted as the corresponding first grid amino acid type; after the traversal, the first grid feature tensors (first grid Cα atom type, first grid Cα atom coordinate vector, first grid pseudo-peptide-bond vector and first grid amino acid type) corresponding to all the first unit grids form the corresponding first fusion tensor;
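The second-mode traversal of step C33 can be sketched in vectorised form with numpy; the channel layout (1 + 3 + 3 + 1 = 8 channels, following the default lengths L1, L2, L3, L4) and the use of -1 for the preset invalid/non-Cα initial values are assumptions for illustration:

```python
import numpy as np

def fuse_mode2(act2, act3, act4, act5, thresh=0.5):
    """Sketch of second-mode feature fusion.

    act2: (1, X, Y, Z) C-alpha type probabilities
    act3: (3, X, Y, Z) C-alpha coordinate vectors
    act4: (3, X, Y, Z) pseudo-peptide-bond vectors
    act5: (N, X, Y, Z) amino-acid type probabilities
    Returns an (8, X, Y, Z) fused tensor: type flag (1) + coordinates (3) +
    bond vector (3) + amino-acid type index (1), with -1 as invalid filler.
    """
    _, X, Y, Z = act2.shape
    fused = np.full((8, X, Y, Z), -1.0)     # preset "invalid" initial values
    hit = act2[0] > thresh                  # grids exceeding the probability threshold
    fused[0][hit] = 1.0                     # first-grid C-alpha atom type
    fused[1:4][:, hit] = act3[:, hit]       # first-grid coordinate vector
    fused[4:7][:, hit] = act4[:, hit]       # first-grid pseudo-peptide-bond vector
    fused[7][hit] = np.argmax(act5, axis=0)[hit]  # most probable amino-acid type
    return fused
```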
step C34, outputting the obtained first fusion tensor;
and C4, outputting the obtained first fusion tensor as a corresponding C alpha atomic characteristic diagram.
Step 2, determining a model loss function of the target recognition model;
wherein:
L1() is the loss function of the first identification branch network; in the loss function L1(), M1 is the first activation tensor output by the first identification branch network and is compared against the corresponding first label tensor; here, the loss function L1() of the embodiment of the present invention is essentially a Dice loss function;
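A loss of the Dice kind described for L1() can be sketched as follows; this is a generic soft-Dice formulation, and the smoothing term eps is an assumption:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2 * overlap(pred, target) / (|pred| + |target|).
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

perfect = dice_loss(np.ones(8), np.ones(8))    # near 0 for a perfect prediction
disjoint = dice_loss(np.ones(8), np.zeros(8))  # near 1 for a fully wrong one
```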
L2() is the loss function of the second identification branch network; λ1, λ2, λ3 and λ4 are four preset hyper-parameters; the loss function L2() is the sum of the weighted product of the hyper-parameter λ1 and the loss function L21(), the weighted product of the hyper-parameter λ2 and the loss function L22(), the weighted product of the hyper-parameter λ3 and the loss function L23(), and the weighted product of the hyper-parameter λ4 and the loss function L24(), that is, L2() = λ1·L21() + λ2·L22() + λ3·L23() + λ4·L24();
In the loss functions L21(), L22(), L23() and L24(), (X, Y, Z) is the tensor three-dimensional size, and X, Y and Z are integers greater than 0;
in the loss function L21(), M2 is the second activation tensor output by the second activation layer of the second identification branch network and is compared against the corresponding second label tensor; the second activation tensor M2 comprises X*Y*Z second vectors m2,(x,y,z), and the second label tensor comprises X*Y*Z second label vectors; β is a preset cross-entropy coefficient; here, the loss function L21() of the embodiment of the present invention is essentially a Dice loss function plus a BCE loss function weighted by the cross-entropy coefficient β;
in the loss function L22(), M3 is the third activation tensor output by the third activation layer of the second identification branch network and is compared against the corresponding third label tensor; the third activation tensor M3 comprises X*Y*Z third vectors m3,(x,y,z), and the third label tensor comprises X*Y*Z third label vectors; here, the loss function L22() of the embodiment of the present invention is essentially a distance-based mean square error loss function;
In the loss function L23(), M4 is the fourth activation tensor output by the fourth activation layer of the second identification branch network and is compared against the corresponding fourth label tensor; the fourth activation tensor M4 comprises X*Y*Z fourth vectors m4,(x,y,z), and the fourth label tensor comprises X*Y*Z fourth label vectors; here, the loss function L23() of the embodiment of the present invention is essentially a distance-based mean square error loss function;
in the loss function L24(), M5 is the fifth activation tensor output by the fifth activation layer of the second identification branch network and is compared against the corresponding fifth label tensor; N is the preset total number of amino acid types; the fifth activation tensor M5 comprises X*Y*Z fifth vectors v(x,y,z), each fifth vector comprising N amino acid type probabilities m5,(x,y,z,i); the fifth label tensor comprises X*Y*Z fifth label vectors, each comprising N amino acid type probabilities; here, the loss function L24() of the embodiment of the present invention is essentially a cross-entropy loss function.
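Taken together, the second-branch loss terms can be sketched with generic stand-ins (soft Dice, BCE, per-grid squared distance, and cross entropy); the exact normalisations, eps clipping, and default weights are assumptions, not the patented formulas:

```python
import numpy as np

def dice_loss(p, t, eps=1e-6):
    return 1.0 - (2.0 * np.sum(p * t) + eps) / (np.sum(p) + np.sum(t) + eps)

def bce_loss(p, t, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))

def mse_loss(p, t):
    # Distance-based mean square error over per-grid 3-vectors (channel axis 0).
    return np.mean(np.sum((p - t) ** 2, axis=0))

def ce_loss(p, t, eps=1e-7):
    # Cross entropy between per-grid amino-acid distributions (channel axis 0).
    return -np.mean(np.sum(t * np.log(np.clip(p, eps, 1.0)), axis=0))

def second_branch_loss(m2, t2, m3, t3, m4, t4, m5, t5,
                       lambdas=(1.0, 1.0, 1.0, 1.0), beta=0.5):
    l21 = dice_loss(m2, t2) + beta * bce_loss(m2, t2)  # L21: Dice + beta * BCE
    l22 = mse_loss(m3, t3)                             # L22: coordinate MSE
    l23 = mse_loss(m4, t4)                             # L23: pseudo-peptide-bond MSE
    l24 = ce_loss(m5, t5)                              # L24: amino-acid cross entropy
    return (lambdas[0] * l21 + lambdas[1] * l22 +
            lambdas[2] * l23 + lambdas[3] * l24)
```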
Step 3, training a target recognition model according to the model loss function;
the method specifically comprises the following steps: step 31, initializing the count value of the first counter to 0;
step 32, selecting an unused training data record from the preset training data set as the corresponding current training data record; and extracting the training electron microscope density map, the main atom probability feature map, the Cα atom probability feature map, the Cα atom position feature map, the Cα-Cα atom pseudo-peptide-bond vector feature map and the Cα atom amino acid type probability feature map in the current training data record as the corresponding first training electron microscope density map and first, second, third, fourth and fifth label tensors;
Wherein the training data set comprises a plurality of training data records; each training data record comprises a training electron microscope density map and the corresponding main atom probability feature map, Cα atom probability feature map, Cα atom position feature map, Cα-Cα atom pseudo-peptide-bond vector feature map and Cα atom amino acid type probability feature map;
here, the training electron microscope density map in each training data record in the embodiment of the invention is an original electron microscope density map without any target mark; the main atomic probability feature map is a feature map obtained by identifying main atomic types of the original electron microscope density map by manual or other technical means; the characteristic map of the C alpha atom probability is obtained by dividing the grid space of the original electron microscope density map by manual or other technical means and identifying C alpha atoms in each grid; the characteristic map of the position of the C alpha atom is obtained by identifying the relative coordinates of the C alpha atom in each grid through manual or other technical means; the characteristic diagram of the pseudo peptide bond vector of the C alpha-C alpha atoms is a characteristic diagram obtained by identifying the connection bond between the C alpha atoms and the adjacent C alpha atoms in each grid, namely the pseudo peptide bond, through manual or other technical means; the C alpha atom amino acid type probability feature map is a feature map obtained by identifying the amino acid type of an amino acid fragment where the C alpha atom is positioned in each grid through manual or other technical means;
Step 33, inputting the first training electron microscope density map into a trunk feature extraction network of a target recognition model, and performing feature extraction processing on the first training electron microscope density map by the trunk feature extraction network to generate corresponding first and second training branch tensors;
here, the current step is consistent with the processing steps A1-a19 of the foregoing main feature extraction network for performing feature extraction processing on the input electron microscope density map, and no repeated description is made here;
step 34, inputting the first training branch tensor into the ninth convolution layer of the first identification branch network of the target recognition model for convolution operation processing to obtain the corresponding ninth convolution feature tensor; and inputting the ninth convolution feature tensor into the first activation layer in the first identification branch network to perform the corresponding feature activation operation to obtain the corresponding first activation tensor M1;
Here, the current step is consistent with the processing steps B1-B2 of the first identifying branch network for identifying the trunk atomic feature target according to the first branch tensor, and will not be repeated herein;
step 35, inputting the second training branch tensor into the tenth, eleventh, twelfth and thirteenth convolution layers of the second recognition branch network of the target recognition model respectively for corresponding convolution operation processing to obtain corresponding tenth, eleventh, twelfth and thirteenth convolution characteristic tensors; and inputting the tenth, eleventh, twelfth and thirteenth convolution characteristic tensors into the corresponding second, third, fourth and fifth activation layers in the second recognition branch network respectively to perform corresponding feature activation operations to obtain corresponding second, third, fourth and fifth activation tensors M2, M3, M4 and M5;
Here, the current step is consistent with the processing steps C1-C2 of the foregoing second recognition branch network for performing the C alpha atomic feature target recognition processing according to the second branch tensor, and is not repeated here;
step 36, substituting the first, second, third, fourth and fifth activation tensors M1, M2, M3, M4 and M5 and the corresponding first, second, third, fourth and fifth label tensors into the model loss function L to calculate a loss value, obtaining a corresponding first loss value;
step 37, identifying whether the first loss value falls within the set loss value range; if yes, adding 1 to the count value of the first counter and going to step 38; if not, substituting the current model parameters of the target recognition model into the model loss function L to obtain a corresponding first objective function, updating the model parameters of the target recognition model in the direction that minimizes the function value of the first objective function to obtain corresponding updated model parameters, resetting the current model parameters of the target recognition model with the updated model parameters, and returning to step 33 when the resetting is finished;
step 38, identifying whether the count value of the first counter exceeds a preset counter threshold; if yes, go to step 39; if not, returning to the step 32;
Step 39, stopping the model training of the round and confirming that the model training is successful.
It should be noted that the model training manner in steps 31-39 is an overall training manner, that is, the model is trained based on the overall model loss function L; when the model loss function L has a complex structure, this overall training manner tends to increase the training difficulty. For this reason, to reduce the model training difficulty, the embodiment of the invention also provides a progressive model training manner, whose training steps are as follows: first train the target recognition model separately based on the five single loss functions, namely the loss function L1(), the loss function L21(), the loss function L22(), the loss function L23() and the loss function L24(); after the model training with the five single loss functions succeeds, perform a second round of model training on the target recognition model based on the loss function L2(); after the second round of model training succeeds, perform a third round of model training on the target recognition model based on the model loss function L; when the third round of model training succeeds, stop training and confirm that the model training is successful.
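As a toy sketch of the loop in steps 33-39 (loss check, counter, parameter update) and of the progressive schedule just described: the model and its losses are replaced here by one-parameter quadratic functions minimized by plain gradient descent, and every name, threshold and learning rate is an illustrative assumption rather than a value from the embodiment.

```python
def train_round(loss_and_grad, theta, lr=0.1, loss_ok=1e-3,
                counter_threshold=5, max_iters=10_000):
    """One training round in the spirit of steps 33-39: if the loss falls
    within the accepted range, increment the counter (step 37); once the
    counter exceeds its threshold, the round succeeds (steps 38-39);
    otherwise update the parameters toward the loss minimum and retry."""
    counter = 0
    for _ in range(max_iters):
        loss, grad = loss_and_grad(theta)
        if loss <= loss_ok:
            counter += 1
            if counter > counter_threshold:
                return theta            # round succeeded
        else:
            theta -= lr * grad          # gradient step toward the minimum
    raise RuntimeError("round did not converge")

# Progressive schedule: each single loss is trained first; a real setup
# would then reuse train_round with the combined loss L2 and finally L.
single_losses = [lambda t, c=c: ((t - c) ** 2, 2 * (t - c)) for c in range(5)]
theta = 10.0
for lf in single_losses:                # stand-ins for L1, L21..L24
    theta = train_round(lf, theta)
```

Here the five quadratics stand in for L1(), L21(), L22(), L23() and L24(); the later rounds on L2() and on L would call `train_round` again with the weighted combinations.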
And step 4, acquiring a first electron microscope density map after the model training is successful, and performing key target identification processing on the first electron microscope density map by using a target identification model to obtain a corresponding first C alpha atomic characteristic map and a corresponding first trunk atomic characteristic map.
The target recognition model of the embodiment of the invention inputs the first electron microscope density map into the trunk feature extraction network; the trunk feature extraction network performs feature extraction processing on the input first electron microscope density map to generate corresponding first and second branch tensors; the first recognition branch network performs trunk atomic feature target recognition processing according to the first branch tensor to generate a corresponding first trunk atomic feature map; and the second recognition branch network performs C alpha atomic feature target recognition processing according to the second branch tensor to generate a corresponding first C alpha atomic feature map.
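The data flow just described can be sanity-checked with a shape-only walkthrough of the trunk network wiring recited later in the claims. The channel counts (32, 64, ...) and the 64×64×64 input are illustrative assumptions, and the eighth convolution layer, whose position is not spelled out in this excerpt, is omitted; only the spatial rules come from the document: convolution preserves (X, Y, Z), pooling halves each axis, up-sampling doubles each axis, and splicing concatenates channels of two size-matched inputs.

```python
def conv(t, out_ch):   # convolution layer: spatial size unchanged
    h, x, y, z = t
    return (out_ch, x, y, z)

def pool(t):           # pooling layer: each spatial axis halved
    h, x, y, z = t
    return (h, x // 2, y // 2, z // 2)

def up(t):             # up-sampling layer: each spatial axis doubled
    h, x, y, z = t
    return (h, 2 * x, 2 * y, 2 * z)

def splice(a, b):      # splicing layer: concatenate along channels
    assert a[1:] == b[1:], "spliced tensors must share a spatial size"
    return (a[0] + b[0], *a[1:])

inp = (1, 64, 64, 64)          # assumed density-map input, H0 = 1
c1 = conv(inp, 32)             # first convolution -> skip to third splice
c2 = conv(c1, 32)
p1 = pool(c2)                  # -> skip to second splice
c3 = conv(p1, 64)
p2 = pool(c3)                  # -> skip to first splice
c4 = conv(p2, 128)
p3 = pool(c4)
c5 = conv(p3, 256)
u1 = up(c5)
s1 = splice(p2, u1)            # first splicing layer
c6 = conv(s1, 128)
u2 = up(c6)
s2 = splice(p1, u2)            # second splicing layer
c7 = conv(s2, 64)              # second branch tensor (half resolution)
u3 = up(c7)
s3 = splice(c1, u3)            # third splicing layer -> first branch tensor
```

Under these rules the second branch tensor sits at half the input resolution while the first branch tensor returns to full resolution, consistent with the size constraints given for the pooling, up-sampling and splicing layers.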
Fig. 3 is a block diagram of a processing device of an electron microscope density map target recognition model according to a second embodiment of the present invention. The device is the terminal device or server implementing the foregoing method embodiment, or a device enabling the foregoing terminal device or server to implement the foregoing method embodiment; for example, the device may be a chip system of the foregoing terminal device or server. As shown in fig. 3, the apparatus includes: a model setting module 201, a model training module 202, and a model application module 203.
The model setting module 201 is used for determining a target recognition model; and determining a model loss function of the object recognition model.
The model training module 202 is configured to perform model training on the target recognition model according to the model loss function.
The model application module 203 is configured to obtain a first electron microscope density map after the model training is successful, and perform key target recognition processing on the first electron microscope density map by using the target recognition model to obtain a corresponding first C alpha atomic feature map and a corresponding first trunk atomic feature map.
The processing device for the electron microscope density map target recognition model provided by the embodiment of the invention can execute the method steps in the method embodiment, and the implementation principle and the technical effect are similar, and are not repeated here.
It should be noted that the division of the modules of the above apparatus is merely a division by logical function; in actual implementation, the modules may be fully or partially integrated into one physical entity or may be physically separated. These modules may all be implemented in the form of software called by a processing element, or all in hardware, or partly as software called by a processing element and partly as hardware. For example, the model setting module may be a separately established processing element, may be integrated in a chip of the above apparatus, or may be stored in the memory of the above apparatus in the form of program code and called by a processing element of the above apparatus to execute the functions of the model setting module. The implementation of the other modules is similar. In addition, all or part of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit having signal processing capability. In implementation, each step of the above method or each module above may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the modules above may be one or more integrated circuits configured to implement the above methods, such as one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), one or more digital signal processors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), etc. For another example, when one of the modules above is implemented in the form of program code scheduled by a processing element, the processing element may be a general purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (System-on-a-Chip, SOC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the foregoing method embodiments are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, radio, Bluetooth, microwave, etc.) means. The computer-readable storage medium may be any available medium that the computer can access, or a data storage device such as a server or data center containing an integration of one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (Solid State Disk, SSD)), etc.
Fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. The electronic device may be the terminal device or server implementing the method of the foregoing embodiment, or a device connected to the foregoing terminal device or server for implementing the method of the foregoing embodiment. As shown in fig. 4, the electronic device may include: a processor 301 (e.g., a CPU), a memory 302, and a transceiver 303. The transceiver 303 is coupled to the processor 301, and the processor 301 controls the transceiving actions of the transceiver 303. The memory 302 may store various instructions for completing various processing functions and implementing the processing steps described in the foregoing method embodiments. Preferably, the electronic device according to the embodiment of the present invention further includes: a power supply 304, a system bus 305, and a communication port 306. The system bus 305 is used to implement communication connections between the elements. The communication port 306 is used for connection and communication between the electronic device and other peripheral devices.
The system bus 305 mentioned in fig. 4 may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries, and read-only libraries). The memory may comprise random access memory (Random Access Memory, RAM) and may also comprise non-volatile memory (Non-Volatile Memory), such as at least one disk memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a network processor (Network Processor, NP), a graphics processor (Graphics Processing Unit, GPU), etc.; but may also be a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component.
It should be noted that the embodiments of the present invention also provide a computer readable storage medium having instructions stored therein; when the instructions are run on a computer, the computer is caused to perform the methods and processes provided in the above embodiments.
The embodiment of the invention also provides a chip for running the instructions, and the chip is used for executing the processing steps described in the embodiment of the method.
The embodiment of the invention provides a processing method and device of an electron microscope density map target recognition model, an electronic device, and a computer readable storage medium. The invention provides a target recognition model for performing key target recognition processing on an input electron microscope density map and outputting a corresponding C alpha atomic feature map and trunk atomic feature map; a model loss function is designed for the target recognition model, the model is trained based on the designed loss function, and after training succeeds, key target recognition processing is performed on the input electron microscope density map based on the trained target recognition model. The target recognition model provided by the invention improves the processing efficiency and processing quality of the key target recognition processing of electron microscope density maps.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments illustrates the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. The method for processing the electron microscope density map target recognition model is characterized by comprising the following steps of:
determining a target recognition model;
determining a model loss function of the target recognition model;
performing model training on the target recognition model according to the model loss function;
acquiring a first electron microscope density map after the model training is successful, and performing key target identification processing on the first electron microscope density map by using the target identification model to obtain a corresponding first C alpha atomic characteristic map and a corresponding first trunk atomic characteristic map;
the target recognition model comprises a trunk feature extraction network, a first recognition branch network and a second recognition branch network; the trunk feature extraction network is used for carrying out feature extraction processing on the input electron microscope density map to generate corresponding first branch tensor and second branch tensor; the first recognition branch network is used for carrying out trunk atomic feature target recognition processing according to the first branch tensor to generate a corresponding trunk atomic feature map; the second recognition branch network is used for performing C alpha atomic characteristic target recognition processing according to the second branch tensor to generate a corresponding C alpha atomic characteristic diagram;
The trunk feature extraction network comprises eight convolution layers, three pooling layers, three up-sampling layers and three splicing layers; the eight convolution layers are respectively a first, a second, a third, a fourth, a fifth, a sixth, a seventh and an eighth convolution layer; the three pooling layers are respectively a first pooling layer, a second pooling layer and a third pooling layer; the three upsampling layers are respectively a first upsampling layer, a second upsampling layer and a third upsampling layer; the three splicing layers are respectively a first splicing layer, a second splicing layer and a third splicing layer;
the input end of the first convolution layer is the model input end of the target recognition model, and the output end of the first convolution layer is respectively connected with the input end of the second convolution layer and the first input end of the third splicing layer; the output end of the second convolution layer is connected with the input end of the first pooling layer; the output end of the first pooling layer is respectively connected with the input end of the third convolution layer and the first input end of the second splicing layer; the output end of the third convolution layer is connected with the input end of the second pooling layer; the output end of the second pooling layer is respectively connected with the input end of the fourth convolution layer and the first input end of the first splicing layer; the output end of the fourth convolution layer is connected with the input end of the third pooling layer; the output end of the third pooling layer is connected with the input end of the fifth convolution layer; the output end of the fifth convolution layer is connected with the input end of the first up-sampling layer; the output end of the first up-sampling layer is connected with the second input end of the first splicing layer; the output end of the first splicing layer is connected with the input end of the sixth convolution layer; the output end of the sixth convolution layer is connected with the input end of the second up-sampling layer; the output end of the second up-sampling layer is connected with the second input end of the second splicing layer; the output end of the second splicing layer is connected with the input end of the seventh convolution layer; the output end of the seventh convolution layer is respectively connected with the input end of the third up-sampling layer and the input end of the second identification branch network; the output end of the third up-sampling layer is connected with the second input end of the third splicing layer; the output end of the third splicing layer is connected with the input end of the first identification branch network;
The first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers are formed by sequentially connecting a plurality of first convolution activation modules; the first convolution activation modules comprise a first three-dimensional convolution unit and a first ReLU activation function unit, and the output ends of the first three-dimensional convolution units of each first convolution activation module are connected with the input ends of the corresponding first ReLU activation function units; the input end of the first three-dimensional convolution unit of the first convolution activating module of each convolution layer is the input end of the current convolution layer, the output end of the first ReLU activating function unit of the former first convolution activating module is connected with the input end of the first three-dimensional convolution unit of the latter first convolution activating module, and the output end of the first ReLU activating function unit of the last first convolution activating module is the output end of the current convolution layer;
the shape formats of the input and output tensors of the first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers, the first, second and third pooling layers, the first, second and third up-sampling layers, and the first, second and third splicing layers are all H×X×Y×Z, where H is the number of characteristic channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y, Z are integers greater than zero;
The tensor three-dimensional sizes of the input tensor and the output tensor of each convolution layer in the first, second, third, fourth, fifth, sixth, seventh and eighth convolution layers are consistent;
in the first, second and third pooling layers, the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the output tensor of each pooling layer are respectively 1/2 of the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the input tensor of the current pooling layer;
in the first, second and third upsampling layers, the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the output tensor of each upsampling layer are respectively 2 times the X-axis, Y-axis and Z-axis components of the tensor three-dimensional size of the input tensor of the current upsampling layer;
in the first, second and third splicing layers, the three-dimensional sizes of the tensors of the two input tensors input by the first and second input ends corresponding to each splicing layer are consistent, the three-dimensional sizes of the tensors of the output tensors of each splicing layer are consistent with the three-dimensional sizes of the tensors of the corresponding two input tensors, and the number of characteristic channels of the output tensors of each splicing layer is the sum of the number of characteristic channels of the corresponding two input tensors;
the first identification branch network comprises a ninth convolution layer and a first activation layer; the input end of the ninth convolution layer is the input end of the first identification branch network, and the output end of the ninth convolution layer is connected with the input end of the first activation layer; the output end of the first activation layer is the output end of the first identification branch network;
The ninth convolution layer is formed by sequentially connecting a plurality of second convolution activating modules; the second convolution activation modules comprise a second three-dimensional convolution unit and a second ReLU activation function unit, and the output ends of the second three-dimensional convolution units of each second convolution activation module are connected with the input ends of the corresponding second ReLU activation function units; the input end of the second three-dimensional convolution unit of the first second convolution activation module of the ninth convolution layer is the input end of the ninth convolution layer, the output end of the second ReLU activation function unit of the former second convolution activation module is connected with the input end of the second three-dimensional convolution unit of the latter second convolution activation module, and the output end of the second ReLU activation function unit of the last second convolution activation module is the output end of the ninth convolution layer;
the shape formats of the input tensor and the output tensor of the ninth convolution layer and the first activation layer are H×X×Y×Z, wherein H is the number of characteristic channels, (X, Y, Z) is the tensor three-dimensional size, and H, X, Y, Z is an integer greater than zero; the tensor three-dimensional sizes of the input tensor and the output tensor of the ninth convolution layer are consistent; the tensor three-dimensional sizes of the input tensor and the output tensor of the first activation layer are consistent;
The activation function used by the first activation layer is specifically a sigmoid function, and the value range of the function output value is [0,1];
the second identification branch network comprises four convolution layers, four activation layers and a first characteristic fusion layer; the four convolution layers are respectively a tenth, eleventh, twelfth and thirteenth convolution layers, and the four activation layers are respectively a second, third, fourth and fifth activation layers;
the input ends of the tenth, eleventh, twelfth and thirteenth convolution layers are each the input end of the second identification branch network; the output end of the tenth convolution layer is connected with the input end of the second activation layer; the output end of the second activation layer is connected with the first input end of the first characteristic fusion layer; the output end of the eleventh convolution layer is connected with the input end of the third activation layer; the output end of the third activation layer is connected with the second input end of the first characteristic fusion layer; the output end of the twelfth convolution layer is connected with the input end of the fourth activation layer; the output end of the fourth activation layer is connected with the third input end of the first characteristic fusion layer; the output end of the thirteenth convolution layer is connected with the input end of the fifth activation layer; the output end of the fifth activation layer is connected with the fourth input end of the first characteristic fusion layer; the output end of the first characteristic fusion layer is the output end of the second identification branch network;
The tenth, eleventh, twelfth and thirteenth convolution layers are formed by sequentially connecting a plurality of third convolution activation modules; the third convolution activation module comprises a third three-dimensional convolution unit and a third ReLU activation function unit, and the output end of the third three-dimensional convolution unit of each third convolution activation module is connected with the input end of the corresponding third ReLU activation function unit; in the tenth, eleventh, twelfth and thirteenth convolution layers, the input end of the third three-dimensional convolution unit of the first third convolution activation module of each convolution layer is the input end of the current convolution layer, the output end of the third ReLU activation function unit of the former third convolution activation module is connected with the input end of the third three-dimensional convolution unit of the latter third convolution activation module, and the output end of the third ReLU activation function unit of the last third convolution activation module is the output end of the current convolution layer;
the shape formats of the input tensors and the output tensors of the tenth convolution layer, the eleventh convolution layer, the twelfth convolution layer, the thirteenth convolution layer, the second activation layer, the third activation layer, the fourth activation layer and the fifth activation layer are all h×x×y×z, wherein H is the number of characteristic channels, (X, Y, Z) is a tensor three-dimensional size, and H, X, Y, Z is an integer greater than zero; the tensor three-dimensional sizes of input tensors and output tensors of the tenth, eleventh, twelfth and thirteenth convolution layers are consistent; the tensor three-dimensional sizes of input tensors and output tensors of the second, third, fourth and fifth active layers are consistent;
the activation function used by the second activation layer is specifically a sigmoid function, and the value range of the function output value is [0,1]; the activation function used by the third activation layer is specifically a 2×sigmoid function, and the value range of the function output value is [0,2]; the activation function used by the fourth activation layer is specifically a 3.8×tanh function, and the value range of the function output value is [-3.8, 3.8]; the activation function used by the fifth activation layer is specifically a softmax function, and the value range of the function output value is [0,1];
the model loss function L is composed of a loss function L1() of the first identification branch network and a loss function L2() of the second identification branch network;
in the loss function L1(), M1 is the first activation tensor output by the first identification branch network and is evaluated against the corresponding first label tensor;
L2() is the loss function of the second identification branch network; λ1, λ2, λ3 and λ4 are four preset hyper-parameters; the loss function L2() is the sum of the weighted product of the hyper-parameter λ1 and the loss function L21(), the weighted product of the hyper-parameter λ2 and the loss function L22(), the weighted product of the hyper-parameter λ3 and the loss function L23(), and the weighted product of the hyper-parameter λ4 and the loss function L24(); in the loss functions L21(), L22(), L23() and L24(), (X, Y, Z) is the tensor three-dimensional size, and X, Y, Z are integers greater than 0;
in the loss function L21(), M2 is the second activation tensor output by the second activation layer of the second identification branch network and is evaluated against the corresponding second label tensor; the second activation tensor M2 comprises X×Y×Z second vectors m2,(x,y,z), and the second label tensor comprises X×Y×Z second label vectors; β is a preset cross entropy coefficient;
in the loss function L22(), M3 is the third activation tensor output by the third activation layer of the second identification branch network and is evaluated against the corresponding third label tensor; the third activation tensor M3 comprises X×Y×Z third vectors m3,(x,y,z), and the third label tensor comprises X×Y×Z third label vectors;
in the loss function L23(), M4 is the fourth activation tensor output by the fourth activation layer of the second identification branch network and is evaluated against the corresponding fourth label tensor; the fourth activation tensor M4 comprises X×Y×Z fourth vectors m4,(x,y,z), and the fourth label tensor comprises X×Y×Z fourth label vectors;
in the loss function L24(), M5 is the fifth activation tensor output by the fifth activation layer of the second identification branch network and is evaluated against the corresponding fifth label tensor; N is the total number of preset amino acid types; the fifth activation tensor M5 comprises X×Y×Z fifth vectors v(x,y,z), each fifth vector v(x,y,z) comprising N amino acid type probabilities m5,(x,y,z,i); the fifth label tensor comprises X×Y×Z fifth label vectors, each comprising N amino acid type probabilities;
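Since the per-term loss formulas appear as images in the original, the sketch below only illustrates the weighted composition recited above, with placeholder scalar terms: a β-weighted cross entropy for L1() and L21(), squared error for L22() and L23(), and a one-hot cross entropy for L24(). The placeholder formulas and the final combination L = L1 + L2 are assumptions; only the λ-weighted sum structure of L2() comes from the claim.

```python
import math

def bce(m, m_hat, beta=1.0):
    # placeholder cross entropy on one probability (beta weights positives)
    return -(beta * m_hat * math.log(m) + (1 - m_hat) * math.log(1 - m))

def sq_err(v, v_hat):
    # placeholder squared error on one vector
    return sum((a - b) ** 2 for a, b in zip(v, v_hat))

def model_loss(acts, labels, lambdas, beta=1.0):
    l1 = bce(acts["m1"], labels["m1"])            # L1(): first branch
    l21 = bce(acts["m2"], labels["m2"], beta)     # L21(): C alpha probability
    l22 = sq_err(acts["m3"], labels["m3"])        # L22(): C alpha position
    l23 = sq_err(acts["m4"], labels["m4"])        # L23(): pseudo bond vector
    l24 = -sum(h * math.log(p)                    # L24(): amino acid type
               for p, h in zip(acts["m5"], labels["m5"]) if h)
    lam1, lam2, lam3, lam4 = lambdas
    l2 = lam1 * l21 + lam2 * l22 + lam3 * l23 + lam4 * l24
    return l1 + l2      # assumed overall combination of L1() and L2()
```

In a real implementation each term would be averaged over the X×Y×Z grid cells rather than evaluated on single scalars as in this sketch.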
2. The method for processing the electron microscope density map target recognition model according to claim 1, wherein the trunk feature extraction network is configured to perform feature extraction processing on an input electron microscope density map to generate corresponding first and second branch tensors, and specifically includes:
the trunk feature extraction network takes the input electron microscope density map as the corresponding current electron microscope density map; the shape of the current electron microscope density map is H0×X0×Y0×Z0, H0 being the number of characteristic channels of the current electron microscope density map and (X0, Y0, Z0) being the tensor three-dimensional size of the current electron microscope density map;
inputting the current electron microscope density map into the first convolution layer to carry out convolution operation processing to obtain a corresponding first convolution characteristic tensor; the shape of the first convolution characteristic tensor is H 1 ×X 0 ×Y 0 ×Z 0 ,H 1 A number of feature channels that is the first convolution feature tensor;
Inputting the first convolution characteristic tensor into the second convolution layer to carry out convolution operation processing to obtain a corresponding second convolution characteristic tensor; the shape of the second convolution characteristic tensor is H 2 ×X 0 ×Y 0 ×Z 0 ,H 2 A number of feature channels that is the second convolution feature tensor;
inputting the second convolution characteristic tensor into the first pooling layer for downsampling processing to obtain a corresponding first pooling characteristic tensor; the shape of the first pooling feature tensor is H2×X1×Y1×Z1, (X1, Y1, Z1) being the tensor three-dimensional size of the first pooling feature tensor, X1=X0/2, Y1=Y0/2, Z1=Z0/2;
Inputting the first pooling characteristic tensor into the third convolution layer to carry out convolution operation processing to obtain a corresponding third convolution characteristic tensor; the third convolution characteristic tensor has the shape of H 3 ×X 1 ×Y 1 ×Z 1 ,H 3 A number of feature channels that is the third convolution feature tensor;
inputting the third convolution characteristic tensor into the second pooling layer for downsampling processing to obtain a corresponding second pooling characteristic tensor; the shape of the second pooling feature tensor is H3×X2×Y2×Z2, (X2, Y2, Z2) being the tensor three-dimensional size of the second pooling feature tensor, X2=X1/2, Y2=Y1/2, Z2=Z1/2;
Inputting the second pooling characteristic tensor into the fourth convolution layer to carry out convolution operation processing to obtain a corresponding fourth convolution characteristic tensor; the fourth convolution characteristic tensor has the shape of H 4 ×X 2 ×Y 2 ×Z 2 ,H 4 A number of feature channels that is the fourth convolution feature tensor;
inputting the fourth convolution characteristic tensor into the third pooling layer for downsampling processing to obtain a corresponding third pooling characteristic tensor; the shape of the third pooling feature tensor is H4×X3×Y3×Z3, (X3, Y3, Z3) being the tensor three-dimensional size of the third pooling feature tensor, X3=X2/2, Y3=Y2/2, Z3=Z2/2;
inputting the third pooling characteristic tensor into the fifth convolution layer to carry out convolution operation processing to obtain a corresponding fifth convolution characteristic tensor; the shape of the fifth convolution characteristic tensor is H5×X3×Y3×Z3, H5 being the number of characteristic channels of the fifth convolution characteristic tensor;
inputting the fifth convolution characteristic tensor into the first upsampling layer for upsampling to obtain a corresponding first upsampling characteristic tensor; the shape of the first upsampling feature tensor is H 5 ×(X 3 *2)×(Y 3 *2)×(Z 3 *2)=H 5 ×X 2 ×Y 2 ×Z 2
inputting the second pooling feature tensor and the first upsampling feature tensor, as the two input tensors of the first input end and the second input end of the first splicing layer, into the first splicing layer, and performing tensor splicing processing along the characteristic channel direction to obtain a corresponding first splicing tensor; the shape of the first splicing tensor is H6×X2×Y2×Z2, H6 being the number of characteristic channels of the first splicing tensor, H6=H3+H5;
Inputting the first spliced tensor into the sixth convolution layer to carry out convolution operation processing to obtain a corresponding sixth convolution characteristic tensor; the shape of the sixth convolution characteristic tensor is H 7 ×X 2 ×Y 2 ×Z 2 ,H 7 A number of characteristic channels that is the sixth convolution characteristic tensor;
inputting the sixth convolution characteristic tensor into the second upsampling layer to perform upsampling processing to obtain a corresponding second upsampling characteristic tensor; the second upsampling feature tensor has a shape of H 7 ×(X 2 *2)×(Y 2 *2)×(Z 2 *2)=H 7 ×X 1 ×Y 1 ×Z 1
Inputting the first pooling characteristic tensor and the second upsampling characteristic tensor serving as two input tensors input by the first input end and the second input end of the second splicing layer into the second splicing layer to perform tensor splicing processing along the characteristic channel direction to obtain a corresponding second splicing tensor; the shape of the second splice tensor is H 8 ×X 1 ×Y 1 ×Z 1 ,H 8 For the number of characteristic channels of the second splice tensor, H 8 =H 2 +H 7
Inputting the second spliced tensor into the seventh convolution layer to carry out convolution operation processing to obtain a corresponding seventh convolution characteristic tensor; the seventh convolution characteristic tensor has the shape of H 9 ×X 1 ×Y 1 ×Z 1 ,H 9 A number of feature channels that is the seventh convolution feature tensor;
inputting the seventh convolution characteristic tensor into the third upsampling layer for upsampling processing to obtain a corresponding third upsampling feature tensor; the shape of the third upsampling feature tensor is H9×(X1*2)×(Y1*2)×(Z1*2)=H9×X0×Y0×Z0;
Inputting the first convolution characteristic tensor and the third upsampling characteristic tensor serving as two input tensors input by the first input end and the second input end of the third splicing layer into the third splicing layer to perform tensor splicing processing along the characteristic channel direction to obtain a corresponding third splicing tensor; the shape of the third splice tensor is H 10 ×X 0 ×Y 0 ×Z 0 ,H 10 For the number of characteristic channels of the third splice tensor, H 10 =H 1 +H 9
Inputting the third spliced tensor into the eighth convolution layer to carry out convolution operation processing to obtain a corresponding eighth convolution characteristic tensor; the shape of the eighth convolution characteristic tensor is H 11 ×X 0 ×Y 0 ×Z 0 ,H 11 A number of feature channels for the eighth convolution feature tensor;
taking the obtained eighth convolution characteristic tensor as the corresponding first branch tensor, and taking the obtained seventh convolution characteristic tensor as the corresponding second branch tensor; and outputting the obtained first and second branch tensors.
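The channel and spatial bookkeeping of claim 2 can be checked with a small sketch. The concrete channel counts H1 to H11 are design choices not fixed by the claim (the defaults below are assumptions); only the halving/doubling of (X, Y, Z) and the splice sums H6=H3+H5, H8=H2+H7 and H10=H1+H9 follow from the text:

```python
def trunk_shapes(H0, X0, Y0, Z0, H=(32, 64, 128, 256, 256, 128, 64, 32)):
    """Trace the tensor shapes through the claimed encoder-decoder trunk.

    H0 (the input channel count) is consumed by the first convolution and
    does not appear in later shapes; H holds the assumed channel counts
    (H1, H2, H3, H4, H5, H7, H9, H11) of the eight convolution layers.
    """
    H1, H2, H3, H4, H5, H7, H9, H11 = H
    X1, Y1, Z1 = X0 // 2, Y0 // 2, Z0 // 2   # first pooling halves X/Y/Z
    X2, Y2, Z2 = X1 // 2, Y1 // 2, Z1 // 2   # second pooling
    X3, Y3, Z3 = X2 // 2, Y2 // 2, Z2 // 2   # third pooling
    H6 = H3 + H5                             # first splice: channel concat
    H8 = H2 + H7                             # second splice
    H10 = H1 + H9                            # third splice
    first_branch = (H11, X0, Y0, Z0)         # eighth convolution output
    second_branch = (H9, X1, Y1, Z1)         # seventh convolution output
    return first_branch, second_branch, (H6, H8, H10)
```

For a 1×64×64×64 input with the assumed channel counts, the first branch tensor comes out as 32×64×64×64 and the second branch tensor as 64×32×32×32, matching the shapes consumed by claims 3 and 4.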
3. The method for processing the electron microscope density map target recognition model according to claim 1, wherein the first recognition branch network is configured to perform a trunk atomic feature target recognition process according to the first branch tensor to generate a corresponding trunk atomic feature map, and specifically includes:
The first recognition branch network inputs the first branch tensor into the ninth convolution layer to carry out convolution operation processing to obtain a corresponding ninth convolution characteristic tensor; the shape of the first branch tensor is H11×X0×Y0×Z0, H11 being the number of characteristic channels of the first branch tensor and (X0, Y0, Z0) being the tensor three-dimensional size of the first branch tensor; the shape of the ninth convolution characteristic tensor is H12×X0×Y0×Z0, H12 being the number of characteristic channels of the ninth convolution characteristic tensor;
inputting the ninth convolution feature tensor into the first activation layer to perform a feature activation operation to generate a corresponding first activation tensor; the shape of the first activation tensor is 3×X0×Y0×Z0; the first activation tensor includes X0*Y0*Z0 first vectors of length 3; each first vector includes 3 atom type probabilities: the probability of the C atom type, the probability of the C alpha atom type and the probability of the N atom type, and the value range of each atom type probability is [0,1];
And outputting the first activation tensor as the corresponding trunk atomic feature map.
4. The method for processing the electron microscope density map object recognition model according to claim 1, wherein the second recognition branch network is configured to perform a C alpha atomic feature object recognition process according to the second branch tensor to generate a corresponding C alpha atomic feature map, and specifically includes:
The second recognition branch network inputs the second branch tensor into the tenth, eleventh, twelfth and thirteenth convolution layers respectively to carry out corresponding convolution operation processing to obtain corresponding tenth, eleventh, twelfth and thirteenth convolution characteristic tensors; the shape of the second branch tensor is H9×X1×Y1×Z1, H9 being the number of characteristic channels of the second branch tensor and (X1, Y1, Z1) being the tensor three-dimensional size of the second branch tensor; the shape of the tenth convolution characteristic tensor is H13×X1×Y1×Z1, H13 being the number of characteristic channels of the tenth convolution characteristic tensor; the shape of the eleventh convolution characteristic tensor is H14×X1×Y1×Z1, H14 being the number of characteristic channels of the eleventh convolution characteristic tensor; the shape of the twelfth convolution characteristic tensor is H15×X1×Y1×Z1, H15 being the number of characteristic channels of the twelfth convolution characteristic tensor; the shape of the thirteenth convolution characteristic tensor is H16×X1×Y1×Z1, H16 being the number of characteristic channels of the thirteenth convolution characteristic tensor;
inputting the tenth, eleventh, twelfth and thirteenth convolution feature tensors into the corresponding second, third, fourth and fifth activation layers respectively to perform the corresponding feature activation operations to obtain corresponding second, third, fourth and fifth activation tensors; the shape of the second activation tensor is 1×X1×Y1×Z1, the second activation tensor includes X1*Y1*Z1 second vectors of length 1, and each second vector comprises 1 C alpha atom type probability with a value range of [0,1]; the shape of the third activation tensor is 3×X1×Y1×Z1, the third activation tensor includes X1*Y1*Z1 third vectors of length 3, and each third vector is a C alpha atom coordinate vector composed of 3 axial coordinate components: the X-axis, Y-axis and Z-axis coordinate components, each with a value range of [0,2] in angstroms; the shape of the fourth activation tensor is 3×X1×Y1×Z1, the fourth activation tensor includes X1*Y1*Z1 fourth vectors of length 3, and each fourth vector is a pseudo peptide bond vector composed of 3 axial vector components: the X-axis, Y-axis and Z-axis vector components, each with a value range of [-3.8,3.8] in angstroms; the shape of the fifth activation tensor is N×X1×Y1×Z1, N being the total number of preset amino acid types and an integer greater than 0, the fifth activation tensor includes X1*Y1*Z1 fifth vectors of length N, and each fifth vector comprises N amino acid type probabilities, each with a value range of [0,1];
The obtained second, third, fourth and fifth activation tensors are used as four input tensors input by the first, second, third and fourth input ends of the first feature fusion layer to be input into the first feature fusion layer for feature fusion processing to obtain corresponding first fusion tensors; the shape of the first fusion tensor is H 17 ×X 1 ×Y 1 ×Z 1 ,H 17 A number of characteristic channels that is the first fusion tensor;
and outputting the obtained first fusion tensor as the corresponding C alpha atomic characteristic diagram.
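The four activation layers of claim 4 must map raw convolution outputs onto the stated value ranges. The claim does not name the activation functions, so the choices below (sigmoid, a sigmoid scaled to [0,2] Å, tanh scaled to [-3.8,3.8] Å, and a softmax over the N amino acid types) are assumptions that merely realise those ranges:

```python
import numpy as np

def activation_heads(t10, t11, t12, t13):
    """Assumed activations for the tenth..thirteenth convolution outputs.

    t10: 1*X1*Y1*Z1, t11: 3*X1*Y1*Z1, t12: 3*X1*Y1*Z1, t13: N*X1*Y1*Z1.
    """
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    M2 = sig(t10)                  # C alpha type probability in [0,1]
    M3 = 2.0 * sig(t11)            # coordinate components in [0,2] angstroms
    M4 = 3.8 * np.tanh(t12)        # bond components in [-3.8,3.8] angstroms
    # numerically stable softmax over the amino-acid-type channel axis
    e = np.exp(t13 - t13.max(axis=0, keepdims=True))
    M5 = e / e.sum(axis=0, keepdims=True)   # probabilities sum to 1 per voxel
    return M2, M3, M4, M5
```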
5. The method for processing an object recognition model of an electron microscope density map according to claim 4, wherein the inputting the obtained second, third, fourth and fifth activation tensors as four input tensors input by the first, second, third and fourth input ends of the first feature fusion layer into the first feature fusion layer to perform feature fusion processing to obtain corresponding first fusion tensors, specifically includes:
The first feature fusion layer identifies a preset feature fusion mode; the feature fusion mode comprises a first mode and a second mode;
when the characteristic fusion mode is a first mode, tensor splicing processing is carried out on the second, third, fourth and fifth input activation tensors along the characteristic channel direction to obtain the corresponding first fusion tensor; the shape of the first fusion tensor is H 17 ×X 1 ×Y 1 ×Z 1 ,H 17 For the number of characteristic channels of the first fusion tensor, H 17 =1+3+3+N;
When the feature fusion mode is the second mode, constructing a feature fusion mode consisting of X 1 *Y 1 *Z 1 The mesh space formed by the first unit mesh with the shape of 2A x 2A is marked as the firstA grid space; establishing a one-to-one correspondence between each first unit grid in the first grid space and the second, third, fourth and fifth vectors in the second, third, fourth and fifth activation tensors; distributing a first grid characteristic tensor consisting of a first grid C alpha atom type, a first grid C alpha atom coordinate vector, a first grid pseudo peptide bond vector and a first grid amino acid type for each first unit grid; initializing the first grid C alpha atom type in each first grid characteristic tensor to be a preset non-C alpha atom type, initializing the first grid C alpha atom coordinate vector to be a preset invalid coordinate vector, initializing the first grid pseudo-peptide bond vector to be a preset invalid pseudo-peptide bond vector, and initializing the first grid amino acid type to be a preset invalid amino acid type; traversing all the first unit grids one by one, taking the first unit grids traversed currently as corresponding current unit grids, taking the first grid characteristic tensor corresponding to the current unit grids as corresponding current grid characteristic tensor, identifying whether the C alpha atom type probability of the second vector corresponding to the current unit grids exceeds a preset C alpha atom type probability threshold, if so, setting the first grid C alpha atom type of the current grid characteristic tensor as a preset C alpha atom type, setting the first grid C alpha atom coordinate vector of the current grid characteristic tensor as the corresponding third vector, setting the first grid pseudo peptide key vector of the current grid characteristic tensor as the corresponding fourth vector, and setting the first grid amino acid type of the current grid characteristic tensor as the amino 
acid type probability corresponding to the amino acid type with the maximum probability value in the corresponding fifth vector; and at the end of the traversal, by the latest X 1 *Y 1 *Z 1 The first grid characteristic tensors form corresponding first fusion tensors; the shape of the first fusion tensor is H 17 ×X 1 ×Y 1 ×Z 1 ,H 17 For the number of characteristic channels of the first fusion tensor, H 17 =L 1 +L 2 +L 3 +L 4 ,L 1 For the length of the first lattice C alpha atom type, L 2 L is the length of the first grid C alpha atom coordinate vector 3 L is the length of the first lattice pseudo-peptide bond vector 4 A length of the first lattice amino acid type; l (L) 1 Defaulting to 1, L 2 Defaulting to 3, L 3 Defaulting to 3, L 4 Default to 1;
and outputting the obtained first fusion tensor.
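The second-mode fusion described above can be sketched as a threshold-and-fill pass over the unit grids. The numeric encodings of the markers (0 for the non-C-alpha type, -1 for the invalid entries, 1 for the C alpha type) are illustrative assumptions; the claim only requires preset values:

```python
import numpy as np

def fuse_mode2(M2, M3, M4, M5, p_thresh=0.5):
    """Assumed sketch of the second feature fusion mode (H17 = 1+3+3+1).

    M2: 1*X*Y*Z C alpha probabilities, M3: 3*X*Y*Z coordinates,
    M4: 3*X*Y*Z pseudo peptide bond vectors, M5: N*X*Y*Z type probabilities.
    """
    _, X, Y, Z = M2.shape
    fused = np.empty((8, X, Y, Z))
    fused[0] = 0.0        # initialise: non-C-alpha atom type
    fused[1:4] = -1.0     # initialise: invalid coordinate vector
    fused[4:7] = -1.0     # initialise: invalid pseudo peptide bond vector
    fused[7] = -1.0       # initialise: invalid amino acid type
    hit = M2[0] > p_thresh                 # grids whose probability exceeds the threshold
    fused[0][hit] = 1.0                    # C alpha atom type
    fused[1:4][:, hit] = M3[:, hit]        # copy coordinate vector
    fused[4:7][:, hit] = M4[:, hit]        # copy pseudo peptide bond vector
    fused[7][hit] = M5.argmax(axis=0)[hit] # most probable amino acid type index
    return fused
```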
6. The method for processing the electron microscope density map object recognition model according to claim 1, wherein the model training of the object recognition model according to the model loss function specifically comprises:
step 111, initializing the count value of the first counter to 0;
step 112, selecting an unused training data record from a preset training data set as the corresponding current training data record; extracting the training electron microscope density map, the trunk atom probability feature map, the C alpha atom position feature map, the C alpha-C alpha atom pseudo peptide bond vector feature map and the C alpha atom amino acid type probability feature map in the current training data record as the corresponding first training electron microscope density map and the first label tensor M̂1, the second label tensor M̂2, the third label tensor M̂3, the fourth label tensor M̂4 and the fifth label tensor M̂5; the training data set comprises a plurality of the training data records; each training data record comprises a training electron microscope density map and the corresponding trunk atom probability feature map, C alpha atom position feature map, C alpha-C alpha atom pseudo peptide bond vector feature map and C alpha atom amino acid type probability feature map;
step 113, inputting the first training electron microscope density map into the trunk feature extraction network of the target recognition model, and performing feature extraction processing on the first training electron microscope density map by the trunk feature extraction network to generate corresponding first training branch tensors and second training branch tensors;
step 114, inputting the first training branch tensor into the ninth convolution layer of the first recognition branch network of the target recognition model to perform convolution operation processing to obtain a corresponding ninth convolution characteristic tensor; inputting the ninth convolution feature tensor into the first activation layer in the first identification branch network to perform corresponding feature activation operation to obtain the corresponding first activation tensor M 1
Step 115, inputting the second training branch tensor into the tenth, eleventh, twelfth and thirteenth convolution layers of the second recognition branch network of the target recognition model respectively, and performing corresponding convolution operation processing to obtain corresponding tenth, eleventh, twelfth and thirteenth convolution feature tensors; inputting the tenth, eleventh, twelfth and thirteenth convolution feature tensors into the corresponding second, third, fourth and fifth activation layers in the second identification branch network to perform corresponding feature activation operation to obtain corresponding second, third, fourth and fifth activation tensors M 2 、M 3 、M 4 、M 5
Step 116, inputting the first, second, third, fourth and fifth activation tensors M1, M2, M3, M4, M5 and the first, second, third, fourth and fifth label tensors M̂1, M̂2, M̂3, M̂4, M̂5 into the model loss function L to calculate a loss value, obtaining a corresponding first loss value;
step 117, identifying whether the first loss value falls within a set loss value range; if so, adding 1 to the count value of the first counter and going to step 118; if not, substituting the current model parameters of the target recognition model into the model loss function L to obtain a corresponding first objective function, optimizing the model parameters of the target recognition model in the direction that minimizes the function value of the first objective function to obtain corresponding updated model parameters, resetting the current model parameters of the target recognition model with the updated model parameters, and returning to step 113 when the resetting is finished;
Step 118, identifying whether the count value of the first counter exceeds a preset counter threshold; if yes, go to step 119; if not, returning to step 112;
step 119, stopping the model training of the round and confirming that the model training is successful.
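The control flow of steps 111 to 119 can be sketched as a loop. Here `train_step` and `loss_in_range` are stand-ins for the forward pass with parameter update (steps 113 to 117) and the loss-range check, which the sketch does not implement:

```python
def train_until_stable(train_step, loss_in_range, counter_threshold, data_iter):
    """Assumed sketch of the claimed training control flow.

    train_step(record) runs one forward pass, returns the loss, and is
    assumed to update the model parameters whenever the loss is out of
    range; loss_in_range(loss) is the step-117 check.
    """
    counter = 0                              # step 111: initialise counter
    for record in data_iter:                 # step 112: next unused record
        while True:
            loss = train_step(record)        # steps 113-116: loss on record
            if loss_in_range(loss):          # step 117: loss within range?
                counter += 1
                break
            # out of range: parameters were updated; retry the same record
        if counter > counter_threshold:      # step 118: enough passes?
            return True                      # step 119: training succeeded
    return False                             # data exhausted before threshold
```

A usage example: with a simulated loss that halves on every call and a threshold of 2, the loop succeeds after the third record.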
7. An apparatus for performing the method of processing an electron microscope density map object recognition model according to any one of claims 1 to 6, characterized in that the apparatus comprises: the system comprises a model setting module, a model training module and a model application module;
the model setting module is used for determining a target recognition model; determining a model loss function of the target recognition model;
the model training module is used for carrying out model training on the target recognition model according to the model loss function;
the model application module is used for acquiring a first electron microscope density map after the model training is successful, and performing key target identification processing on the first electron microscope density map by using the target identification model to obtain a corresponding first C alpha atomic characteristic map and a corresponding first trunk atomic characteristic map.
8. An electronic device, comprising: memory, processor, and transceiver;
the processor being operative to couple with the memory, read and execute instructions in the memory to implement the method of any one of claims 1-6;
The transceiver is coupled to the processor and is controlled by the processor to transmit and receive messages.
9. A computer readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-6.
CN202310201533.5A 2023-03-06 2023-03-06 Method and device for processing electron microscope density map target recognition model Active CN116071745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310201533.5A CN116071745B (en) 2023-03-06 2023-03-06 Method and device for processing electron microscope density map target recognition model


Publications (2)

Publication Number Publication Date
CN116071745A CN116071745A (en) 2023-05-05
CN116071745B true CN116071745B (en) 2023-10-31

Family

ID=86173270



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898180A (en) * 2018-06-28 2018-11-27 中国人民解放军国防科技大学 Depth clustering method for single-particle cryoelectron microscope images
CN111210869A (en) * 2020-01-08 2020-05-29 中山大学 Protein cryoelectron microscope structure analysis model training method and analysis method
CN114283217A (en) * 2021-09-23 2022-04-05 腾讯科技(深圳)有限公司 Method, device and equipment for training reconstruction model of three-dimensional electron microscope image
KR20220072280A (en) * 2020-11-25 2022-06-02 한국기초과학지원연구원 Apparatur to predict aberration of transmission electron microscope and opperating method of thereof
CN114612501A (en) * 2022-02-07 2022-06-10 清华大学 Neural network model training method and cryoelectron microscope density map resolution estimation method
CN114841898A (en) * 2022-06-29 2022-08-02 华中科技大学 Deep learning-based post-processing method and device for three-dimensional density map of cryoelectron microscope
CN115526850A (en) * 2022-09-20 2022-12-27 北京深势科技有限公司 Training method and device for real-space decoder of refrigeration electron microscope and electronic equipment
CN115691658A (en) * 2022-11-07 2023-02-03 北京深势科技有限公司 Processing method and device for optimizing molecular structure based on three-dimensional atomic density map


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
High-resolution remote sensing image target recognition based on CNN models; Qu Jingying; Sun Xian; Gao Xin; Foreign Electronic Measurement Technology (No. 08); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant