CN115097064B - Gas detection method and device, computer equipment and storage medium

Gas detection method and device, computer equipment and storage medium

Info

Publication number
CN115097064B
CN115097064B (application CN202111119011.8A)
Authority
CN
China
Prior art keywords
attention
self
layer
gas
data
Prior art date
Legal status
Active
Application number
CN202111119011.8A
Other languages
Chinese (zh)
Other versions
CN115097064A (en)
Inventor
刘时亮
潘晓芳
张哲
赵晓锦
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202111119011.8A
Publication of CN115097064A
Application granted
Publication of CN115097064B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00 Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0004 Gaseous mixtures, e.g. polluted air
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A50/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE in human health protection, e.g. against extreme weather
    • Y02A50/20 Air quality improvement or preservation, e.g. vehicle emission control or emission reduction by using catalytic converters

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Combustion & Propulsion (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application belong to the field of medical technology in artificial intelligence and relate to a gas detection method, apparatus, computer device, and storage medium based on a multi-task self-attention network. The method comprises the following steps: receiving gas component data of the gas to be detected sent by an electronic nose system; inputting the gas component data into a multi-task self-attention network to detect the biomarkers of the gas to be detected and the marker concentration information corresponding to the biomarkers, wherein the multi-task self-attention network comprises an encoder and a decoder composed of a position encoding module and a multi-task self-attention module; and confirming a gas detection result corresponding to the gas to be detected according to the biomarkers and the marker concentration information. The application adopts an end-to-end deep learning network model: raw data can be input directly to detect the sample composition and obtain the corresponding marker concentrations, so interference caused by human factors is effectively avoided and the information loss caused by manual feature extraction is reduced.

Description

Gas detection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of medical technology in artificial intelligence, and in particular, to a gas detection method, apparatus, computer device, and storage medium based on a multi-task self-attention network.
Background
Lung cancer is the most common cancer worldwide and is also the cancer with the highest mortality rate worldwide. Early cancer screening is one of the most effective ways to reduce cancer mortality.
In a conventional cancer detection method, sensor signals are processed by a machine learning algorithm, which finally outputs a cancer detection result.
However, the applicant has found that conventional cancer detection methods based on machine learning algorithms generally require manual feature extraction when processing sensor signals. The manual feature extraction process introduces human interference, and this uncertain interference can affect the algorithm's feature extraction in different ways, leading to human-induced differences in the final recognition results. Second, conventional machine learning algorithms cannot process high-dimensional data directly and require dimensionality reduction before gas recognition can be performed. The dimensionality reduction process inevitably loses some of the sensor signal data, and this loss of information also affects the final recognition accuracy of the algorithm.
Disclosure of Invention
The embodiment of the application aims to provide a gas detection method, a device, computer equipment and a storage medium based on a multi-task self-attention network, so as to solve the problem that the traditional cancer detection method based on a machine learning algorithm can cause loss of sensor signal data, thereby influencing the final recognition accuracy of the algorithm.
In order to solve the above technical problems, the embodiments of the present application provide a gas detection method based on a multi-task self-attention network, which adopts the following technical scheme:
receiving gas component data of the gas to be detected sent by the electronic nose system;
inputting the gas composition data into a multi-task self-attention network to detect a biomarker of the gas to be detected and marker concentration information corresponding to the biomarker, wherein the multi-task self-attention network comprises an encoder and a decoder which are composed of a position encoding module and a multi-task self-attention module;
and confirming a gas detection result corresponding to the gas to be detected according to the biomarker and the marker concentration information.
Further, the multi-task self-attention module includes:
multi-headed self-attention layer, normalization layer, feed forward network layer, and residual structure.
Further, the multi-head self-attention of the multi-head self-attention layer is expressed as:
$\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i)$

$\mathrm{Attention}_{\mathrm{MHA}} = \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)\,W^{o}$

where $W^{o}$ represents a learnable linear transformation parameter; $i = 1, 2, \ldots, h$, and $h$ represents the number of heads.
Further, the feed forward network layer is expressed as:
$\mathrm{FNN} = \max(0,\, XW_1 + b_1)\,W_2 + b_2$
in order to solve the above technical problems, the embodiments of the present application further provide a gas detection device based on a multi-task self-attention network, which adopts the following technical scheme:
the data receiving unit is used for receiving the gas component data of the gas to be detected, which is sent by the electronic nose system;
a gas detection unit for inputting the gas composition data into a multi-task self-attention network for detecting the biomarker of the gas to be detected and the marker concentration information corresponding to the biomarker, wherein the multi-task self-attention network comprises an encoder and a decoder which are composed of a position encoding module and a multi-task self-attention module;
and the result confirming unit is used for confirming a gas detection result corresponding to the gas to be detected according to the biomarker and the marker concentration information.
In order to solve the above technical problems, the embodiments of the present application further provide a computer device, which adopts the following technical schemes:
The computer device comprises a memory having stored therein computer readable instructions and a processor which, when executing the computer readable instructions, implements the steps of the gas detection method based on a multi-task self-attention network described above.
In order to solve the above technical problems, embodiments of the present application further provide a computer readable storage medium, which adopts the following technical solutions:
The computer readable storage medium has stored thereon computer readable instructions which, when executed by a processor, implement the steps of the gas detection method based on a multi-task self-attention network described above.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
the application provides a gas detection method based on a multi-task self-attention network, comprising the following steps: receiving gas component data of the gas to be detected sent by the electronic nose system; inputting the gas component data into a multi-task self-attention network to detect a biomarker of the gas to be detected and the marker concentration information corresponding to the biomarker, wherein the multi-task self-attention network comprises an encoder and a decoder composed of a position encoding module and a multi-task self-attention module; and confirming a gas detection result corresponding to the gas to be detected according to the biomarker and the marker concentration information. The application adopts an end-to-end deep learning network model: raw data can be input directly to detect the sample composition and obtain the corresponding marker concentration, so interference caused by human factors is effectively avoided and the information loss caused by manual feature extraction is reduced.
Drawings
For a clearer description of the solution in the present application, a brief description will be given below of the drawings that are needed in the description of the embodiments of the present application, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flowchart of an implementation of a gas detection method based on a multi-task self-attention network according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a multi-task self-attention network according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a multi-head self-attention structure according to an embodiment of the present application;
FIG. 5 is a schematic diagram of residual structure and layer normalization according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a gas detection device based on a multi-task self-attention network according to a second embodiment of the present application;
FIG. 7 is a schematic structural diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solutions of the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the gas detection method based on the multi-task self-attention network provided in the embodiments of the present application is generally executed by a server/terminal device, and accordingly, the gas detection device based on the multi-task self-attention network is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flowchart of an implementation of a gas detection method based on a multi-task self-attention network according to an embodiment of the present application is shown, and for convenience of explanation, only a portion relevant to the present application is shown.
The gas detection method based on the multi-task self-attention network comprises the following steps:
step S201: receiving gas component data of the gas to be detected sent by the electronic nose system;
step S202: inputting the gas composition data into a multi-task self-attention network to detect the biomarker of the gas to be detected and the marker concentration information corresponding to the biomarker, wherein the multi-task self-attention network comprises an encoder and a decoder which are composed of a position coding module and a multi-task self-attention module;
step S203: and confirming a gas detection result corresponding to the gas to be detected according to the biomarker and the marker concentration information.
In an embodiment of the present application, referring to the schematic structure of the multi-task self-attention network shown in fig. 3, the multi-task self-attention network provided in the present application includes two parts, an encoder and a decoder, wherein the encoder is composed of a position encoding layer and two stacked self-attention modules. Each self-attention module includes: a multi-head self-attention layer, a normalization layer, a feed-forward network layer, and a residual structure. First, sensor data containing the position information provided by the position encoding layer flows in parallel into the multi-head self-attention layer to obtain an output called the intermediate vector. The multi-head self-attention mechanism applies different linear transformations to the input data, helping the model capture various features of the signal. Furthermore, because it uses no recurrent structure, the multi-head self-attention structure has remarkable parallel processing performance, so a better effect can be achieved with a shorter training time. This ability to process time-series data in parallel also makes the multi-head self-attention model more suitable for deployment on edge devices. The intermediate vector passes through the normalization layer and then flows into the feed-forward neural network layer to obtain a low-dimensional feature vector. The feed-forward neural network provides a nonlinear transformation for the model and improves its expressive capacity, while layer normalization accelerates model convergence. The residual structure connects the multi-head self-attention layer and the feed-forward neural network, which prevents gradient vanishing during training. Feeding the low-dimensional feature vector into the second stacked self-attention module yields the encoder output, named the high-dimensional feature vector. In the decoder section, a fully connected layer decodes the information extracted by the encoder. Then, through an activation function layer and a gas composition and concentration matching mechanism, the gas classification and concentration prediction results are obtained.
In the embodiment of the application, every element of the time-series data flows through the encoder stack simultaneously, so the position information of the time series would otherwise be lost. Thus, a method is still needed to incorporate the order of the elements into the model. We add a layer called the position encoding layer to the self-attention module to mark the position information. Let the original input data be $X \in R^{t \times d}$, where $t$ is the length of the input data and $d$ is the data dimension. The position matrix generated by the position encoding layer has the same shape as the input data, i.e. $P \in R^{t \times d}$, and the original input data and the encoding result are added to obtain sequence data containing position information. The position matrix can be obtained by:
$P_{(i,\,2k)} = \sin\!\left(\dfrac{i}{10000^{2k/d}}\right), \qquad P_{(i,\,2k+1)} = \cos\!\left(\dfrac{i}{10000^{2k/d}}\right)$
where $i$ is the index of the data in the sequence, $d$ is the sequence data dimension, and $k$ indexes the $k$th dimension of the data. The above equation adds a sin variable in the even dimensions and a cos variable in the odd dimensions of each sequence position vector of the position matrix $P$. The information for each position is specific and unique, and adding this information to the original data yields a sequence signal carrying position information.
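For illustration, a minimal PyTorch sketch of such a position encoding layer is given below. The patent itself provides no code, so the function name and the base 10000 are assumptions following the standard sinusoidal encoding convention.

```python
import torch

def positional_encoding(t: int, d: int) -> torch.Tensor:
    """Build the position matrix P in R^{t x d}: sin in even dimensions,
    cos in odd dimensions. Assumes an even data dimension d."""
    i = torch.arange(t, dtype=torch.float32).unsqueeze(1)  # positions, shape (t, 1)
    k2 = torch.arange(0, d, 2, dtype=torch.float32)        # even dimension indices 2k
    denom = torch.pow(10000.0, k2 / d)                     # 10000^(2k/d)
    P = torch.zeros(t, d)
    P[:, 0::2] = torch.sin(i / denom)   # sin variables in the even dimensions
    P[:, 1::2] = torch.cos(i / denom)   # cos variables in the odd dimensions
    return P
```

The encoded input is then simply x + positional_encoding(t, d), matching the addition of the original data and the encoding result described above.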
In the embodiment of the application, the multi-head self-attention structure is shown in fig. 4. The attention mechanism focuses on the information that is most critical to the current task among many pieces of information, reducing attention to other information and thereby improving the efficiency of processing the task. A self-attention mechanism is an attention mechanism that relates different positions within a sequence in order to compute a representation of that sequence. The attention mechanism can be described as a function that maps a query vector Q and a series of key-value pairs (K-V) to an output value representing the correlation of the elements of the sequence, where the query Q, key K, and value V vectors are all linear transformations of the input vector. Self-attention can be implemented by the scaled dot-product attention method. The calculation of attention is divided into three main steps:
(1) First, the similarity between the query vector Q and the key K in each key-value pair (K-V) is computed, which can be realized by a dot product; the similarity calculation is expressed as:

$\mathrm{similarity}(Q, \{K_i, V_i\}_{M}) = [\mathrm{similarity}(Q, K_1), \ldots, \mathrm{similarity}(Q, K_M)]$
(2) the M weights are then normalized using a softmax function;
(3) Finally, the normalized weights and the corresponding values V are weighted and summed to obtain the attention, implemented as:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{T}}{\sqrt{d_k}}\right)V$

where $d_k$ is the dimension of the query vector. When $d_k$ is large, the dot-product results become widely spread out; dividing by $\sqrt{d_k}$ plays a regulating role so that the inner-product results do not grow too large.
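The three steps above can be sketched in PyTorch as follows; the function name and tensor shapes are illustrative assumptions rather than part of the patent.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q: torch.Tensor, K: torch.Tensor,
                                 V: torch.Tensor) -> torch.Tensor:
    """Q: (..., t, d_k), K: (..., t, d_k), V: (..., t, d_v)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # step (1): scaled dot-product similarity
    weights = F.softmax(scores, dim=-1)                # step (2): normalize the M weights
    return weights @ V                                 # step (3): weighted sum over V
```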
Multi-head self-attention splits Q, K, and V into multiple different parts while keeping the total number of parameters unchanged; rather than computing attention only once, the multi-head mechanism runs scaled dot-product attention in parallel in different subspaces. The self-attention information from the different subspaces is concatenated, and the value obtained by a further linear transformation is used as the output of multi-head self-attention. Because self-attention is distributed differently in different subspaces, the multi-head mechanism can find associated information between sequence elements from different angles, and it also helps in learning long-range dependencies. Multi-head self-attention is calculated as follows:
$\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i)$

$\mathrm{Attention}_{\mathrm{MHA}} = \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)\,W^{o}$

where $W^{o}$ represents a learnable linear transformation parameter; $i = 1, 2, \ldots, h$, and $h$ represents the number of heads.
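Building on the scaled_dot_product_attention sketch above, a multi-head self-attention layer along these lines might look as follows; the class name and the use of single nn.Linear projections for Q, K, and V are assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Split Q, K, V into h subspaces, run scaled dot-product attention in
    parallel per head, concatenate the heads, and apply the learnable W_o."""

    def __init__(self, d_model: int, h: int):
        super().__init__()
        assert d_model % h == 0, "d_model must be divisible by the head count h"
        self.h, self.d_k = h, d_model // h
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)  # learnable linear transformation W_o

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        # project, then reshape to (b, h, t, d_k): one subspace per head
        q = self.W_q(x).view(b, t, self.h, self.d_k).transpose(1, 2)
        k = self.W_k(x).view(b, t, self.h, self.d_k).transpose(1, 2)
        v = self.W_v(x).view(b, t, self.h, self.d_k).transpose(1, 2)
        heads = scaled_dot_product_attention(q, k, v)   # all head_i in parallel
        concat = heads.transpose(1, 2).reshape(b, t, self.h * self.d_k)
        return self.W_o(concat)                         # concat(head_1..head_h) W_o
```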
In the present embodiment, the multi-head self-attention module is followed by a feed-forward neural network module. Since the multi-head self-attention module involves only linear transformations, the feed-forward layer provides the model with the nonlinear transformation needed to increase its expressive power. The layer is composed of two fully connected layers: the activation function of the first fully connected layer is ReLU, and the second fully connected layer uses no activation function. The expression is as follows:
$\mathrm{FNN} = \max(0,\, XW_1 + b_1)\,W_2 + b_2$
In the present embodiment, there is one residual connection and layer normalization between the multi-head self-attention layer and the feed-forward neural network layer, and another after the feed-forward neural network layer. Residual connections are typically used to ease the training of multi-layer networks, helping to avoid gradient vanishing or gradient explosion. Layer normalization normalizes the data, which speeds up model convergence, reduces network training time, and improves model stability. The residual structure and layer normalization are expressed as follows:
$\mathrm{LayerNorm}(X + \mathrm{MultiHeadAttention}(X))$

$\mathrm{LayerNorm}(X + \mathrm{FeedForward}(X))$
where X represents the input of Multi-Head Attention or Feed Forward, MultiHeadAttention(X) and FeedForward(X) represent the corresponding outputs, and LayerNorm represents layer normalization, as shown in FIG. 5.
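Putting these pieces together, one stacked self-attention module with the residual connections and layer normalizations just described might be sketched as below; d_ff, the hidden width of the feed-forward network, is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """One self-attention module: multi-head self-attention and a two-layer
    feed-forward network, each wrapped in a residual connection + LayerNorm."""

    def __init__(self, d_model: int, h: int, d_ff: int):
        super().__init__()
        self.mha = MultiHeadSelfAttention(d_model, h)
        self.norm1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),   # first fully connected layer
            nn.ReLU(),                  # ReLU activation
            nn.Linear(d_ff, d_model),   # second fully connected layer, no activation
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm1(x + self.mha(x))   # LayerNorm(X + MultiHeadAttention(X))
        x = self.norm2(x + self.ffn(x))   # LayerNorm(X + FeedForward(X))
        return x
```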
In the embodiment of the application, the classification and regression network is composed of a fully connected layer and an activation layer; its function is to map the features extracted by the multi-head self-attention network into the sample space. The transcription layer has two fully connected layers. The first fully connected layer performs feature transformation and information summarization; its activation function is the ReLU function, which provides a nonlinear change for the classification-regression layer while accelerating network convergence. The ReLU function is expressed as follows:
$\mathrm{ReLU}(x) = \max(0, x)$
The second fully connected layer is used for prediction output and has 5 neurons: the first two neurons are used for gas concentration prediction, and the remaining neurons are used for gas type prediction. All outputs pass through a Sigmoid activation function so that the final output result lies in the range 0 to 1. The Sigmoid function is expressed as follows:
$\mathrm{Sigmoid}(x) = \dfrac{1}{1 + e^{-x}}$
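A sketch of this two-layer decoder head follows, under the assumption that the encoder output is flattened into a single feature vector before the first fully connected layer; the hidden width d_hidden is an assumed value not given in the patent.

```python
import torch
import torch.nn as nn

class ClassificationRegressionHead(nn.Module):
    """Two fully connected layers; the 5 output neurons are split into
    2 concentration outputs and 3 gas-type outputs, all passed through Sigmoid."""

    def __init__(self, d_in: int, d_hidden: int = 64):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)  # feature transformation, ReLU activation
        self.fc2 = nn.Linear(d_hidden, 5)     # 5 neurons: 2 concentration + 3 gas type

    def forward(self, x: torch.Tensor):
        h = torch.relu(self.fc1(x))
        out = torch.sigmoid(self.fc2(h))      # final outputs lie in (0, 1)
        return out[..., :2], out[..., 2:]     # (concentration, gas type)
```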
In the present embodiment, the model is divided into two parts: an encoder part and a decoder part. The main function of the encoder is feature extraction, and there are several options for the feature extraction component: the encoder could use a convolutional neural network (CNN), a recurrent neural network (RNN), or a long short-term memory network (LSTM), all of which have good feature extraction capabilities. But the multi-head self-attention network of the present application may be a better choice for gas data. CNNs are suitable for processing image signals but not time-series signals. RNNs readily extract the characteristics of time signals, but due to their recurrent network structure, gradient vanishing or gradient explosion easily occur. The ability of LSTM networks to extract features decreases as the length of the sequence signal increases, so they are not suitable for feature extraction from long sequence signals. The multi-head self-attention network extracts the global feature relations of a signal in parallel, its feature extraction ability is not weakened as the sequence length grows, and its structure can be trained in parallel, effectively improving the training and recognition speed of the model.
In the embodiment of the application, two self-attention modules are connected by serial stacking, but other connection schemes may be used. When the sequence length of the sample data set increases, the number of stacked modules can be increased appropriately to effectively extract the deep features of the signal.
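A minimal end-to-end assembly of the sketches above, with two serially stacked self-attention modules as in this embodiment, might look as follows; the head count h, feed-forward width d_ff, and the flattening of the encoder output are assumptions not fixed by the patent.

```python
import torch
import torch.nn as nn

class MultiTaskSelfAttentionNet(nn.Module):
    """Position encoding, a stack of self-attention modules, then the
    classification/regression head on the flattened encoder features."""

    def __init__(self, t: int, d_model: int, h: int = 4,
                 d_ff: int = 128, n_blocks: int = 2):
        super().__init__()
        self.register_buffer("pos", positional_encoding(t, d_model))
        self.encoder = nn.Sequential(
            *[SelfAttentionBlock(d_model, h, d_ff) for _ in range(n_blocks)])
        self.head = ClassificationRegressionHead(t * d_model)

    def forward(self, x: torch.Tensor):       # x: (batch, t, d_model) raw sensor data
        z = self.encoder(x + self.pos)        # high-dimensional feature vector
        return self.head(z.flatten(1))        # (concentration, gas type)
```

Increasing n_blocks corresponds to the module stacking discussed above for longer sample sequences.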
In summary, the first embodiment of the present application provides a gas detection method based on a multi-task self-attention network, including: receiving gas component data of the gas to be detected sent by the electronic nose system; inputting the gas component data into a multi-task self-attention network to detect the biomarkers of the gas to be detected and the marker concentration information corresponding to the biomarkers, wherein the multi-task self-attention network comprises an encoder and a decoder composed of a position encoding module and a multi-task self-attention module; and confirming a gas detection result corresponding to the gas to be detected according to the biomarkers and the marker concentration information. The application adopts an end-to-end deep learning network model: raw data can be input directly to detect the sample composition and obtain the corresponding marker concentrations, so interference caused by human factors is effectively avoided and the information loss caused by manual feature extraction is reduced. The method can likewise be applied to screening for other diseases. For example, diabetics can be screened by detecting the concentration of acetone in exhaled gas, and ammonia exhaled by the human body can serve as a biomarker gas for renal failure patients. These marker gases can also be detected with electronic nose technology and the multi-task self-attention network algorithm of the present application as an auxiliary detection method for disease screening. As a novel gas detection method, the method can not only play a role in disease screening in the medical field but can also be applied in mines: it can rapidly and accurately detect flammable, explosive, and toxic gases in a mine, effectively reducing explosion or poisoning incidents.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Those skilled in the art will appreciate that implementing all or part of the processes of the methods of the embodiments described above may be accomplished by way of computer readable instructions stored on a computer readable storage medium, which, when executed, may comprise the processes of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random access memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
Example two
With further reference to fig. 6, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a gas detection apparatus based on a multi-tasking self-attention network, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 6, the gas detection apparatus 600 based on the multi-tasking self-attention network of the present embodiment includes: a data receiving unit 610, a gas detecting unit 620, and a result confirming unit 630. Wherein:
A data receiving unit 610, configured to receive gas component data of a gas to be detected sent by the electronic nose system;
a gas detection unit 620 for inputting gas composition data into a multi-tasking self-attention network for detecting biomarkers of a gas to be measured and marker concentration information corresponding to the biomarkers, the multi-tasking self-attention network including an encoder and a decoder composed of a position encoding module and a multi-tasking self-attention module;
and a result confirmation unit 630 for confirming a gas detection result corresponding to the gas to be measured based on the biomarker and the marker concentration information.
In an embodiment of the present application, referring to the schematic structure of the multi-task self-attention network shown in fig. 3, the multi-task self-attention network provided in the present application includes two parts, an encoder and a decoder, wherein the encoder is composed of a position encoding layer and two stacked self-attention modules. Each self-attention module includes: a multi-head self-attention layer, a normalization layer, a feed-forward network layer, and a residual structure. First, sensor data containing the position information provided by the position encoding layer flows in parallel into the multi-head self-attention layer to obtain an output called the intermediate vector. The multi-head self-attention mechanism applies different linear transformations to the input data, helping the model capture various features of the signal. Furthermore, because it uses no recurrent structure, the multi-head self-attention structure has remarkable parallel processing performance, so a better effect can be achieved with a shorter training time. This ability to process time-series data in parallel also makes the multi-head self-attention model more suitable for deployment on edge devices. The intermediate vector passes through the normalization layer and then flows into the feed-forward neural network layer to obtain a low-dimensional feature vector. The feed-forward neural network provides a nonlinear transformation for the model and improves its expressive capacity, while layer normalization accelerates model convergence. The residual structure connects the multi-head self-attention layer and the feed-forward neural network, which prevents gradient vanishing during training. Feeding the low-dimensional feature vector into the second stacked self-attention module yields the encoder output, named the high-dimensional feature vector. In the decoder section, a fully connected layer decodes the information extracted by the encoder. Then, through an activation function layer and a gas composition and concentration matching mechanism, the gas classification and concentration prediction results are obtained.
In the embodiment of the application, every element of the time-series data flows through the encoder stack simultaneously, so the position information of the time series would otherwise be lost. Thus, there remains a need for a method to incorporate the order of the elements into the model. We add a layer called the position encoding layer to the self-attention module to mark the position information. Let the original input data be $X \in R^{t \times d}$, where $t$ is the length of the input data and $d$ is the data dimension. The position matrix generated by the position encoding layer has the same shape as the input data, i.e. $P \in R^{t \times d}$, and the original input data and the encoding result are added to obtain sequence data containing position information. The position matrix can be obtained by:
$P_{(i,\,2k)} = \sin\!\left(\dfrac{i}{10000^{2k/d}}\right), \qquad P_{(i,\,2k+1)} = \cos\!\left(\dfrac{i}{10000^{2k/d}}\right)$
where $i$ is the index of the data in the sequence, $d$ is the sequence data dimension, and $k$ indexes the $k$th dimension of the data. The above equation adds a sin variable in the even dimensions and a cos variable in the odd dimensions of each sequence position vector of the position matrix $P$. The information for each position is specific and unique, and adding this information to the original data yields a sequence signal carrying position information.
In the embodiment of the application, the multi-head self-attention structure is shown in fig. 4. The attention mechanism focuses on the information that is most critical to the current task among many pieces of information, reducing attention to other information and thereby improving the efficiency of processing the task. A self-attention mechanism is an attention mechanism that relates different positions within a sequence in order to compute a representation of that sequence. The attention mechanism can be described as a function that maps a query vector Q and a series of key-value pairs (K-V) to an output value representing the correlation of the elements of the sequence, where the query Q, key K, and value V vectors are all linear transformations of the input vector. Self-attention can be implemented by the scaled dot-product attention method. The calculation of attention is divided into three main steps:
(1) First, the similarity between the query vector Q and the key K in each key-value pair (K-V) is computed, which can be realized by a dot product; the similarity calculation is expressed as:

$\mathrm{similarity}(Q, \{K_i, V_i\}_{M}) = [\mathrm{similarity}(Q, K_1), \ldots, \mathrm{similarity}(Q, K_M)]$
(2) the M weights are then normalized using a softmax function;
(3) Finally, the normalized weights and the corresponding values V are weighted and summed to obtain the attention, implemented as:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{T}}{\sqrt{d_k}}\right)V$

where $d_k$ is the dimension of the query vector. When $d_k$ is large, the dot-product results become widely spread out; dividing by $\sqrt{d_k}$ plays a regulating role so that the inner-product results do not grow too large.
Multi-head self-attention splits Q, K, and V into multiple different parts while keeping the total number of parameters unchanged; rather than computing attention only once, the multi-head mechanism runs scaled dot-product attention in parallel in different subspaces. The self-attention information from the different subspaces is concatenated, and the value obtained by a further linear transformation is used as the output of multi-head self-attention. Because self-attention is distributed differently in different subspaces, the multi-head mechanism can find associated information between sequence elements from different angles, and it also helps in learning long-range dependencies. Multi-head self-attention is calculated as follows:
$\mathrm{head}_i = \mathrm{Attention}(Q_i, K_i, V_i)$

$\mathrm{Attention}_{\mathrm{MHA}} = \mathrm{concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)\,W^{o}$

where $W^{o}$ represents a learnable linear transformation parameter; $i = 1, 2, \ldots, h$, and $h$ represents the number of heads.
In the present embodiment, the multi-head self-attention module is followed by a feed-forward neural network module. Since the multi-head self-attention module involves only linear transformations, the feed-forward layer provides the model with the nonlinear transformation needed to increase its expressive power. The layer is composed of two fully connected layers: the activation function of the first fully connected layer is ReLU, and the second fully connected layer uses no activation function. The expression is as follows:
$\mathrm{FNN} = \max(0,\, XW_1 + b_1)\,W_2 + b_2$
In the present embodiment, there is one residual connection and layer normalization between the multi-head self-attention layer and the feed-forward neural network layer, and another after the feed-forward neural network layer. Residual connections are typically used to ease the training of multi-layer networks, helping to avoid gradient vanishing or gradient explosion. Layer normalization normalizes the data, which speeds up model convergence, reduces network training time, and improves model stability. The residual structure and layer normalization are expressed as follows:
$\mathrm{LayerNorm}(X + \mathrm{MultiHeadAttention}(X))$

$\mathrm{LayerNorm}(X + \mathrm{FeedForward}(X))$
where X represents the input of Multi-Head Attention or Feed Forward, MultiHeadAttention(X) and FeedForward(X) represent the corresponding outputs, and LayerNorm represents layer normalization, as shown in FIG. 5.
In the embodiment of the application, the classification and regression network is composed of a fully connected layer and an activation layer; its function is to map the features extracted by the multi-head self-attention network into the sample space. The transcription layer has two fully connected layers. The first fully connected layer performs feature transformation and information summarization; its activation function is the ReLU function, which provides a nonlinear change for the classification-regression layer while accelerating network convergence. The ReLU function is expressed as follows:
$\mathrm{ReLU}(x) = \max(0, x)$
The second fully connected layer is used for prediction output and has 5 neurons: the first two neurons are used for gas concentration prediction, and the remaining neurons are used for gas type prediction. All outputs pass through a Sigmoid activation function so that the final output result lies in the range 0 to 1. The Sigmoid function is expressed as follows:
$\mathrm{Sigmoid}(x) = \dfrac{1}{1 + e^{-x}}$
In the present embodiment, the model is divided into two parts: an encoder part and a decoder part. The main function of the encoder is feature extraction, and there are several options for the feature extraction component: the encoder could use a convolutional neural network (CNN), a recurrent neural network (RNN), or a long short-term memory network (LSTM), all of which have good feature extraction capabilities. But the multi-head self-attention network of the present application may be a better choice for gas data. CNNs are suitable for processing image signals but not time-series signals. RNNs readily extract the characteristics of time signals, but due to their recurrent network structure, gradient vanishing or gradient explosion easily occur. The ability of LSTM networks to extract features decreases as the length of the sequence signal increases, so they are not suitable for feature extraction from long sequence signals. The multi-head self-attention network extracts the global feature relations of a signal in parallel, its feature extraction ability is not weakened as the sequence length grows, and its structure can be trained in parallel, effectively improving the training and recognition speed of the model.
In the embodiment of the application, two self-attention modules are connected by serial stacking, but other connection schemes may be used. When the sequence length of the sample data set increases, the number of stacked modules can be increased appropriately to effectively extract the deep features of the signal.
In summary, the second embodiment of the present application provides a gas detection apparatus 600 based on a multi-task self-attention network, including: a data receiving unit 610 for receiving gas component data of the gas to be detected sent by the electronic nose system; a gas detection unit 620 for inputting the gas component data into a multi-task self-attention network to detect the biomarkers of the gas to be detected and the marker concentration information corresponding to the biomarkers, the multi-task self-attention network comprising an encoder and a decoder composed of a position encoding module and a multi-task self-attention module; and a result confirmation unit 630 for confirming a gas detection result corresponding to the gas to be detected based on the biomarkers and the marker concentration information. The application adopts an end-to-end deep learning network model: raw data can be input directly to detect the sample composition and obtain the corresponding marker concentrations, so interference caused by human factors is effectively avoided and the information loss caused by manual feature extraction is reduced. The apparatus can likewise be applied to screening for other diseases. For example, diabetics can be screened by detecting the concentration of acetone in exhaled gas, and ammonia exhaled by the human body can serve as a biomarker gas for renal failure patients. These marker gases can also be detected with electronic nose technology and the multi-task self-attention network algorithm of the present application as an auxiliary detection method for disease screening. As a novel gas detection approach, it can not only play a role in disease screening in the medical field but can also be applied in mines: it can rapidly and accurately detect flammable, explosive, and toxic gases in a mine, effectively reducing explosion or poisoning incidents.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 7, fig. 7 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 200 includes a memory 210, a processor 220, and a network interface 230 communicatively coupled to each other via a system bus. It should be noted that only a computer device 200 having components 210-230 is shown in the figure, but it should be understood that not all of the illustrated components need be implemented, and more or fewer components may be implemented instead. Those skilled in the art will appreciate that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, etc.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 210 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 210 may be an internal storage unit of the computer device 200, such as a hard disk or memory of the computer device 200. In other embodiments, the memory 210 may also be an external storage device of the computer device 200, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 200. Of course, the memory 210 may also include both an internal storage unit and an external storage device of the computer device 200. In this embodiment, the memory 210 is generally used to store the operating system and the various application software installed on the computer device 200, such as the computer readable instructions of the gas detection method based on a multi-task self-attention network. In addition, the memory 210 may be used to temporarily store various types of data that have been output or are to be output.
The processor 220 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 220 is generally used to control the overall operation of the computer device 200. In this embodiment, the processor 220 is configured to execute computer readable instructions stored in the memory 210 or process data, such as computer readable instructions for executing the gas detection method based on the multi-tasking self-attention network.
The network interface 230 may include a wireless network interface or a wired network interface, which network interface 230 is typically used to establish communication connections between the computer device 200 and other electronic devices.
According to the computer equipment, the end-to-end deep learning network model is adopted, the sample composition can be detected by inputting the original data, the corresponding marker concentration is obtained, the interference caused by human factors can be effectively avoided, and the problem of information loss caused by manual feature extraction is solved.
The present application also provides another embodiment, namely, a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the gas detection method based on a multi-task self-attention network as described above.
The computer readable storage medium provided by the application adopts an end-to-end deep learning network model, and can detect sample components and obtain corresponding marker concentration by inputting original data, so that the problems of interference caused by human factors and information loss caused by manual feature extraction can be effectively avoided.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present application.
It is apparent that the embodiments described above are only some embodiments of the present application, not all of them; the preferred embodiments of the present application are given in the drawings, but they do not limit the patent scope of the present application. This application may be embodied in many different forms; rather, these embodiments are provided so that the disclosure will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their features. All equivalent structures made using the specification and the drawings of this application, whether applied directly or indirectly in other related technical fields, likewise fall within the protection scope of this application.

Claims (6)

1. A method for gas detection based on a multi-tasking self-attention network, comprising the steps of:
receiving gas component data of the gas to be detected sent by the electronic nose system;
inputting the gas composition data into a multi-task self-attention network to detect a biomarker of the gas to be detected and biomarker concentration information corresponding to the biomarker, wherein the multi-task self-attention network comprises an encoder and a decoder, and the encoder consists of a position encoding module and a multi-task self-attention module;
confirming a gas detection result corresponding to the gas to be detected according to the biomarker and the biomarker concentration information;
the position encoding module comprises a position encoding layer used for marking position information; let the original input data be $X \in R^{t \times d}$, where $t$ is the length of the input data and $d$ is the data dimension; the position matrix generated by the position encoding layer has the same shape as the input data, i.e. $P \in R^{t \times d}$; the original input data and the encoding result are added to obtain sequence data containing position information; the position matrix is obtained by:
$P_{(i,\,2k)} = \sin\!\left(\dfrac{i}{10000^{2k/d}}\right), \qquad P_{(i,\,2k+1)} = \cos\!\left(\dfrac{i}{10000^{2k/d}}\right)$
where i is the index of the data in the sequence, d is the sequence data dimension size, and k is the kth dimension of the data; the above expression represents adding sin variable in the even dimension and cos variable in the odd dimension of each sequence position vector of the position matrix P; the information for each location is specific and unique, and adding this information to the raw data produces a sequence signal with location information;
The multi-tasking self-attention module is a two-layer stacked self-attention module comprising: a multi-head self-attention layer, a normalization layer, a feedforward neural network layer and a residual error structure; sensor data containing position information is streamed in parallel into the multi-headed self-attention layer to obtain an output called an intermediate vector, wherein the position information is provided by a position coding layer; the intermediate vector flows into a feedforward neural network layer after passing through a normalization layer to obtain a low-dimensional feature vector; the residual structure is connected with the multi-head self-attention layer and the feedforward neural network layer; flowing the low-dimensional feature vector into a second stacked self-attention module to obtain an encoder output named high-dimensional feature vector; the self-attention module calculates the attention in three steps:
(1) first, the similarity between sequence elements is calculated from the query vector Q and the key K in each key-value pair K-V through dot products, and the similarity calculation expression is as follows:

$\mathrm{similarity}(Q, \{K_i, V_i\}_{M}) = [\mathrm{similarity}(Q, K_1), \ldots, \mathrm{similarity}(Q, K_M)]$
(2) the M weights are then normalized using a softmax function;
(3) finally, weighting and summing the normalized weight and the corresponding value V to obtain attention; the implementation mode is as follows:
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{T}}{\sqrt{d_k}}\right)V$

where $d_k$ is the dimension of the query vector;
The multi-head self-attention splits Q, K and V into several different parts while keeping the total parameter amount unchanged, and the multi-head mechanism runs scaled dot-product attention in parallel in different subspaces; the self-attention information of the different subspaces is concatenated and passed through one more linear transformation, and the resulting value serves as the output result of the multi-head self-attention; the multi-head self-attention calculation formula is:

head_i = Attention(Q_i, K_i, V_i)

Attention_MHA = concat(head_1, head_2, …, head_h) W_o

where W_o is a learnable linear transformation parameter, and i = 1, 2, …, h, with h the number of heads;
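A sketch of this multi-head mechanism, assuming the model dimension is divisible by h; the random-standing W_o represents a trained parameter and is not specified in the claim:

```python
import numpy as np

def multi_head_attention(Q, K, V, W_o: np.ndarray, h: int = 4) -> np.ndarray:
    def attend(q, k, v):                            # scaled dot-product attention per head
        s = q @ k.T / np.sqrt(q.shape[-1])
        s = np.exp(s - s.max(axis=-1, keepdims=True))
        return (s / s.sum(axis=-1, keepdims=True)) @ v
    # split Q, K, V into h subspaces, attend in each, then concatenate
    heads = [attend(q, k, v) for q, k, v in zip(np.split(Q, h, axis=-1),
                                                np.split(K, h, axis=-1),
                                                np.split(V, h, axis=-1))]
    return np.concatenate(heads, axis=-1) @ W_o     # concat(head_1..head_h) W_o
```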
in the decoder section, the fully connected layers decode the information extracted by the encoder; a gas classification and concentration prediction result is then obtained through an activation function layer and a gas composition and concentration matching mechanism; the decoder has two fully connected layers, where the first fully connected layer performs feature transformation and information summarization; its activation function is the ReLU function, which provides nonlinearity for the classification and regression layer while speeding up network convergence; the expression of the ReLU function is:

ReLU(x) = max(0, x)
the second fully connected layer is used for the prediction output and has 5 neurons; the first two neurons give the gas concentration prediction output, and the other three give the gas type prediction output; all outputs pass through the following Sigmoid activation function:

Sigmoid(x) = 1 / (1 + e^(-x))

so that the final output result lies in the range 0 to 1.
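A sketch of these two fully connected decoder layers: ReLU in the first layer, then 5 Sigmoid outputs split into 2 concentration neurons and 3 gas-type neurons; all weight shapes below are illustrative assumptions:

```python
import numpy as np

def decoder_head(z, W1, b1, W2, b2):
    hidden = np.maximum(0.0, z @ W1 + b1)          # first FC layer + ReLU
    y = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # Sigmoid keeps outputs in (0, 1)
    return y[:2], y[2:]                            # 2 concentration neurons, 3 type neurons

# Illustrative shapes: 32-dim encoder feature, 16 hidden units, 5 output neurons.
rng = np.random.default_rng(0)
concentration, gas_type = decoder_head(
    rng.standard_normal(32),
    rng.standard_normal((32, 16)), np.zeros(16),
    rng.standard_normal((16, 5)), np.zeros(5))
```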
2. The multi-task self-attention network based gas detection method of claim 1, wherein the feedforward neural network layer is represented as:

FNN(X) = max(0, XW_1 + b_1) W_2 + b_2
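A direct NumPy transcription of this expression (weight shapes are illustrative assumptions):

```python
import numpy as np

def fnn(X, W1, b1, W2, b2):
    # position-wise feedforward layer: max(0, X W1 + b1) W2 + b2
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2
```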
3. a gas detection apparatus based on a multi-tasking self-attention network, comprising:
the data receiving unit is used for receiving the gas component data of the gas to be detected, which is sent by the electronic nose system;
a gas detection unit for inputting the gas composition data into a multi-task self-attention network for detecting the biomarker of the gas to be detected and biomarker concentration information corresponding to the biomarker, wherein the multi-task self-attention network comprises an encoder and a decoder, and the encoder consists of a position encoding module and a multi-task self-attention module;
a result confirmation unit for confirming a gas detection result corresponding to the gas to be detected according to the biomarker and the biomarker concentration information;
the position coding module comprises a position coding layer used for marking position information; let the original input data be X ∈ R^(t×d), where t is the length of the input data and d is the data dimension; the position matrix generated by the position coding layer has the same shape as the input data, namely P ∈ R^(t×d), and adding the original input data to the coding result yields sequence data containing position information; the position matrix is obtained by:

P_(i, 2k) = sin(i / 10000^(2k/d)),  P_(i, 2k+1) = cos(i / 10000^(2k/d))

where i is the index of the data in the sequence, d is the sequence data dimension size, and k indexes the kth dimension of the data; the expression above adds a sin component in the even dimensions and a cos component in the odd dimensions of each sequence position vector of the position matrix P; the information for each position is specific and unique, and adding this information to the raw data produces a sequence signal with position information;
the multi-task self-attention module is a two-layer stacked self-attention module comprising: a multi-head self-attention layer, a normalization layer, a feedforward neural network layer and a residual structure; sensor data containing position information, where the position information is provided by the position coding layer, flows in parallel into the multi-head self-attention layer to produce an output called the intermediate vector; the intermediate vector passes through the normalization layer and then flows into the feedforward neural network layer to obtain a low-dimensional feature vector; the residual structure connects the multi-head self-attention layer and the feedforward neural network layer; the low-dimensional feature vector flows into the second stacked self-attention module to obtain the encoder output, named the high-dimensional feature vector; the self-attention module calculates the attention in three steps:
(1) firstly, the similarity between sequence elements is calculated via dot products between the query vector Q and the key K_i of each key-value pair (K_i, V_i); the similarity calculation expression is:

similarity(Q, {K_i, V_i}_M) = [similarity(Q, K_1), …, similarity(Q, K_M)]
(2) the M weights are then normalized using a softmax function;
(3) finally, the normalized weights are used to form a weighted sum with the corresponding values V, which yields the attention; this is implemented as:

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

where d_k is the dimension of the query vector;
the multi-head self-attention splits Q, K and V into several different parts while keeping the total parameter amount unchanged, and the multi-head mechanism runs scaled dot-product attention in parallel in different subspaces; the self-attention information of the different subspaces is concatenated and passed through one more linear transformation, and the resulting value serves as the output result of the multi-head self-attention; the multi-head self-attention calculation formula is:

head_i = Attention(Q_i, K_i, V_i)

Attention_MHA = concat(head_1, head_2, …, head_h) W_o

where W_o is a learnable linear transformation parameter, and i = 1, 2, …, h, with h the number of heads;
in the decoder section, the fully connected layers decode the information extracted by the encoder; a gas classification and concentration prediction result is then obtained through an activation function layer and a gas composition and concentration matching mechanism; the decoder has two fully connected layers, where the first fully connected layer performs feature transformation and information summarization; its activation function is the ReLU function, which provides nonlinearity for the classification and regression layer while speeding up network convergence; the expression of the ReLU function is:

ReLU(x) = max(0, x)
The second fully connected layer is used for the prediction output and has 5 neurons; the first two neurons give the gas concentration prediction output, and the other three give the gas type prediction output; all outputs pass through the following Sigmoid activation function:

Sigmoid(x) = 1 / (1 + e^(-x))

so that the final output result lies in the range 0 to 1.
4. The multi-task self-attention network based gas detection device of claim 3, wherein the feedforward neural network layer is represented as:

FNN(X) = max(0, XW_1 + b_1) W_2 + b_2
5. A computer device comprising a memory having stored therein computer readable instructions which, when executed, implement the steps of the multi-task self-attention network based gas detection method of any one of claims 1 to 2.
6. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, implement the steps of the multi-task self-attention network based gas detection method of any one of claims 1 to 2.
CN202111119011.8A 2021-09-24 2021-09-24 Gas detection method and device, computer equipment and storage medium Active CN115097064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111119011.8A CN115097064B (en) 2021-09-24 2021-09-24 Gas detection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111119011.8A CN115097064B (en) 2021-09-24 2021-09-24 Gas detection method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115097064A CN115097064A (en) 2022-09-23
CN115097064B (en) 2023-06-20

Family

ID=83287627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111119011.8A Active CN115097064B (en) 2021-09-24 2021-09-24 Gas detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115097064B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116502158B (en) * 2023-02-07 2023-10-27 北京纳通医用机器人科技有限公司 Method, device, equipment and storage medium for identifying lung cancer stage
CN116189800B (en) * 2023-02-23 2023-08-18 深圳大学 Pattern recognition method, device, equipment and storage medium based on gas detection
CN117091799B (en) * 2023-10-17 2024-01-02 湖南一特医疗股份有限公司 Intelligent three-dimensional monitoring method and system for oxygen supply safety of medical center

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9645127B2 (en) * 2012-05-07 2017-05-09 Alexander Himanshu Amin Electronic nose system and method
CN103018282B (en) * 2012-12-21 2015-07-15 上海交通大学 Electronic nose system for early detection of lung cancer
US20160106935A1 (en) * 2014-10-17 2016-04-21 Qualcomm Incorporated Breathprint sensor systems, smart inhalers and methods for personal identification
CN104751004A (en) * 2015-04-15 2015-07-01 苏州大学 Disease pre-warning method, device and system
CN105738434B (en) * 2016-02-01 2019-05-10 清华大学深圳研究生院 A kind of diabetes diagnosis system based on electronic nose detection breathing gas
CN107463766A (en) * 2017-06-23 2017-12-12 深圳市中识创新科技有限公司 Generation method, device and the computer-readable recording medium of blood glucose prediction model
CN108693353A (en) * 2018-05-08 2018-10-23 重庆大学 A kind of long-range diabetes intelligent diagnosis system detecting breathing gas based on electronic nose
CN110146642B (en) * 2019-05-14 2022-03-25 上海大学 Odor analysis method and device
CN111540463B (en) * 2019-12-18 2023-09-15 中国科学院上海微系统与信息技术研究所 Exhaled gas detection method and system based on machine learning

Also Published As

Publication number Publication date
CN115097064A (en) 2022-09-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant