CN117373085A - Non-contact heart rate detection method based on space-time self-attention and deep neural network - Google Patents

Non-contact heart rate detection method based on space-time self-attention and deep neural network

Info

Publication number
CN117373085A
Authority
CN
China
Prior art keywords
space
heart rate
attention
time
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311344530.3A
Other languages
Chinese (zh)
Inventor
孙宁
易磊
何佩鲜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202311344530.3A priority Critical patent/CN117373085A/en
Publication of CN117373085A publication Critical patent/CN117373085A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a non-contact heart rate detection method based on spatio-temporal self-attention and a deep neural network, which comprises the following steps: selecting a plurality of videos from a remote physiological measurement video database, cropping and aligning the face in each frame using facial key-point detection, and dividing the cropped videos into a training set and a test set in a given proportion; performing inter-frame difference extraction on the training set to retain the key facial information and remove useless information such as the environmental background; constructing an end-to-end trainable deep neural network model combining a spatio-temporal self-attention mechanism; feeding the processed training set into the deep neural network for training; and, when performing non-contact heart rate measurement on a new video, feeding the video sequence obtained through the above steps into the network model to obtain the heart rate value corresponding to the video. The method and system of the invention improve the accuracy and effectiveness of non-contact heart rate measurement.

Description

Non-contact heart rate detection method based on space-time self-attention and deep neural network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a non-contact heart rate detection method based on a space-time self-attention and deep neural network.
Background
Currently, video-based remote physiological measurement has become an important branch of computer vision. Because it is contact-free, imperceptible, comfortable and convenient, it offers great advantages in many scenarios and is mainly applied to medical care, health monitoring, lie detection, and the detection of emotional and fatigue states. Existing non-contact heart rate detection methods have achieved very high accuracy on datasets captured in laboratory environments. However, data captured in real environments contain a large amount of interference, such as complex illumination conditions, uncertain postures and occlusion, which greatly affects the accuracy of heart rate detection.
In recent years, following the success of self-attention mechanisms in natural language processing, network architectures that rely entirely on self-attention have achieved very good results and can be parallelized efficiently. Compared with traditional recurrent and convolutional neural networks, self-attention-based models break through previous limitations. However, earlier deep networks based purely on self-attention fall short in joint temporal and spatial modeling and cannot effectively extract the temporal and spatial characteristics of a video signal, which remains a shortcoming when processing video sequences.
By combining a spatio-temporal self-attention mechanism over three-dimensional tokens with a deep neural network, the strength of self-attention in modeling temporal and spatial features can be exploited, while the heavy demands that fully self-attention-based models place on database scale are alleviated.
Disclosure of Invention
The invention mainly aims to provide a non-contact heart rate detection method based on a space-time self-attention and deep neural network, so as to improve the accuracy and the effectiveness of non-contact heart rate measurement.
In order to achieve the above object, the invention provides a non-contact heart rate detection method based on a space-time self-attention and deep neural network, which comprises the following steps:
step 1, selecting a plurality of videos from a video database of remote physiological measurement, processing each frame of image by using a face key point detection technology to realize cutting and face alignment operation, and finally dividing the cut videos into a training set and a testing set according to a specified proportion, wherein the training set is used for training and learning a neural network model, and the testing set is used for evaluating the performance of the trained neural network model;
step 2, carrying out inter-frame difference extraction on the training set and the testing set, leaving useful information, and removing useless information to obtain a processed video sequence, wherein the useful information comprises face information, and the useless information comprises environmental background information;
step 3, constructing an end-to-end trainable deep neural network model combining a space-time self-attention mechanism, wherein the deep neural network model comprises a video sequence segmentation module, a space-time self-attention coding module and a heart rate signal characterization module;
step 4, sending the training set processed in the step 2 into a deep neural network for training to obtain a trained deep neural network model;
step 5, inputting the test set processed in the step 2 into a trained deep neural network for testing so as to verify the effect and performance of the deep neural network model;
step 6, performing non-contact heart rate measurement on a new video, including: processing the video on which non-contact heart rate measurement is to be performed with step 1 and step 2 to obtain the corresponding processed video sequence, inputting it into the trained and verified deep neural network model, and finally characterizing it to obtain the measured heart rate value.
As a further improvement of the present invention, step 3 specifically includes the steps of:
step 3-1, inputting the video sequence processed in step 1 and step 2 into the video sequence segmentation module, segmenting the video sequence into non-overlapping tokens along the time, height and width dimensions, and projecting the tokens into a high-dimensional space to obtain the spatio-temporal feature vectors corresponding to the video sequence;
step 3-2, adding positional encoding to the spatio-temporal feature vectors obtained in step 3-1 and, after pooling, inputting them into the spatio-temporal self-attention encoding network for learning, where temporal attention learning is performed on spatio-temporal feature vectors in the same space dimension to establish temporal feature correlations, and spatial attention learning is performed on spatio-temporal feature vectors in the same time dimension to establish spatial feature correlations;
step 3-3, inputting the spatio-temporal feature vectors learned by the spatio-temporal self-attention encoding network in step 3-2 into the heart rate signal characterization network for computation, finally obtaining the heart rate value of the video sequence.
As a further improvement of the present invention, in step 3, the global spatio-temporal self-attention encoding network comprises a plurality of global spatio-temporal self-attention encoding modules, each consisting of three residual blocks, wherein the first residual block comprises a normalization layer, three fully connected layers and a multi-head spatial attention layer for learning spatial feature correlations; the second residual block comprises a normalization layer, three fully connected layers and a multi-head temporal attention layer for learning temporal feature correlations; and the third residual block comprises a normalization layer and a feed-forward fully connected layer.
As a further improvement of the invention, in step 3, the heart rate signal characterization network comprises a plurality of upsampling modules, each comprising a three-dimensional convolution layer, a batch normalization layer and an activation layer.
As a further improvement of the invention, the video sequence segmentation module segments the frame-normalized video sequence F ∈ R^(C×H×W×T) into non-overlapping tokens P ∈ R^(C1×H1×W1×T1), where H and W denote the height and width of the images in the video sequence (both 224), T denotes the length of the video sequence (96), and C denotes the number of image channels (3). A linear layer maps the tokens to a spatio-temporal feature vector containing heart rate information, L ∈ R^(D×T×S), where D denotes the feature dimension and S = H×W denotes the size of each frame image. The positional-encoding information E_pos is added to the spatio-temporal feature vector and, after pooling, the spatio-temporal feature vector X_0 finally input to the spatio-temporal self-attention encoding network is obtained: X_0 = dropout(L + E_pos).
As a further improvement of the invention, the feature dimension D is 256.
As a further improvement of the invention, the whole training process comprises optimizing a mean square error loss function with the SGD optimization algorithm, where the initial learning rate is set to 0.01 and is automatically multiplied by 0.5 every 10 epochs, and the number of training epochs is 100.
The invention has the beneficial effects that: the original video sequence is processed by inter-frame difference extraction, which effectively reduces the impact of the large amount of interference present in unconstrained environments on the accuracy of non-contact heart rate detection. The video sequence segmentation module segments the video sequence into non-overlapping tokens along the time, height and width dimensions, which fuses the temporal and spatial information of the video sequence to a certain extent, effectively reduces the number of parameters involved in network computation, and improves the computational efficiency and processing capacity of the deep neural network. The temporal and spatial self-attention mechanisms are introduced to fully model the time and space domains of the video sequence, which overcomes the loss of spatio-temporal information suffered by earlier single self-attention mechanisms and effectively improves the model's spatio-temporal feature extraction capability and training efficiency.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings, which are not intended to limit the scope of the invention.
As shown in fig. 1, the non-contact heart rate detection method based on the space-time self-attention and depth neural network mainly comprises the following steps:
step 1, selecting a plurality of videos from a video database of remote physiological measurement, processing each frame of image by using a face key point detection technology to realize cutting and face alignment operation, and finally dividing the cut videos into a training set and a testing set according to a specified proportion, wherein the training set is used for training and learning a neural network model, and the testing set is used for evaluating the performance of the trained neural network model;
step 2, carrying out inter-frame difference extraction on the training set and the testing set, leaving useful information, and removing useless information to obtain a processed video sequence, wherein the useful information comprises face information, and the useless information comprises environmental background information;
step 3, constructing an end-to-end trainable deep neural network model combining a space-time self-attention mechanism, wherein the deep neural network model comprises a video sequence segmentation module, a space-time self-attention coding module and a heart rate signal characterization module;
step 4, sending the training set processed in the step 2 into a deep neural network for training to obtain a trained deep neural network model;
step 5, inputting the test set processed in the step 2 into a trained deep neural network for testing so as to verify the effect and performance of the deep neural network model;
step 6, performing non-contact heart rate measurement on a new video, including: processing the video on which non-contact heart rate measurement is to be performed with step 1 and step 2 to obtain the corresponding processed video sequence, inputting it into the trained and verified deep neural network model, and finally characterizing it to obtain the measured heart rate value.
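The inter-frame difference extraction of step 2 can be sketched as follows. The patent does not give its exact formula, so this sketch uses the normalized frame difference common in remote-PPG work; the function name and the normalization are illustrative assumptions, not the patent's implementation.

```python
def frame_difference(frames, eps=1e-7):
    """Normalized inter-frame differences d_t = (f[t+1] - f[t]) / (f[t] + f[t+1]).

    `frames` is a list of frames, each a flat list of pixel intensities.
    Slowly varying background largely cancels in the difference, while
    pulse-induced skin-color changes survive; `eps` avoids division by zero.
    """
    diffs = []
    for prev, nxt in zip(frames, frames[1:]):
        diffs.append([(b - a) / (a + b + eps) for a, b in zip(prev, nxt)])
    return diffs
```

A sequence of T frames yields T-1 difference frames, which then form the video sequence fed to the segmentation module.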
In this embodiment, the process in step 3 of building the end-to-end trainable deep neural network model combining the spatio-temporal self-attention mechanism specifically includes:
step 3-1, inputting the video sequence processed in step 1 and step 2 into the video sequence segmentation module, segmenting the video sequence into non-overlapping tokens along the time, height and width dimensions, and projecting the tokens into a high-dimensional space to obtain the spatio-temporal feature vectors corresponding to the video sequence;
step 3-2, adding positional encoding to the spatio-temporal feature vectors obtained in step 3-1 and, after pooling, inputting them into the spatio-temporal self-attention encoding network for learning, where temporal attention learning is performed on spatio-temporal feature vectors in the same space dimension to establish temporal feature correlations, and spatial attention learning is performed on spatio-temporal feature vectors in the same time dimension to establish spatial feature correlations;
step 3-3, inputting the spatio-temporal feature vectors learned by the spatio-temporal self-attention encoding network in step 3-2 into the heart rate signal characterization network for computation, finally obtaining the heart rate value of the video sequence.
Specifically, the video sequence segmentation module segments the frame-normalized video sequence F ∈ R^(C×H×W×T) into non-overlapping tokens P ∈ R^(C1×H1×W1×T1), where H and W denote the height and width of the images in the video sequence (both 224), T denotes the length of the video sequence (96), and C denotes the number of image channels (3). A linear layer maps the tokens to a spatio-temporal feature vector containing physiological information, L ∈ R^(D×T×S), where D denotes the feature dimension (D = 256 in this example) and S denotes the size of each frame image. The positional-encoding information E_pos is added and, after pooling, the vector X_0 finally input to the spatio-temporal self-attention encoding network is obtained:
X_0 = dropout(L + E_pos).
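The segmentation and the X_0 = dropout(L + E_pos) step can be sketched numerically. The patch sizes below (16×16 spatial, 4 frames temporal) and the dropout rate are assumptions for illustration only; the patent fixes only H = W = 224, T = 96 and C = 3.

```python
import random

def token_grid(H=224, W=224, T=96, ph=16, pw=16, pt=4):
    """Number of non-overlapping 3D tokens along height, width and time
    (patch sizes ph, pw, pt are hypothetical; they must tile the volume)."""
    assert H % ph == 0 and W % pw == 0 and T % pt == 0
    return H // ph, W // pw, T // pt

def add_positional_encoding(L, E_pos, p_drop=0.1, rng=None):
    """X0 = dropout(L + E_pos) over flat feature lists, with the usual
    inverted-dropout scaling by 1/(1 - p_drop) applied at training time."""
    rng = rng or random.Random(0)
    out = []
    for l, e in zip(L, E_pos):
        out.append(0.0 if rng.random() < p_drop else (l + e) / (1.0 - p_drop))
    return out

h1, w1, t1 = token_grid()
print(h1, w1, t1)  # 14 14 24, i.e. 4704 tokens per clip under these patch sizes
```

In a real model the per-token linear projection to dimension D = 256 would follow the segmentation; it is omitted here for brevity.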
Then, the obtained vector X_0 is fed into the global spatio-temporal self-attention encoding network for learning: spatial attention learning is performed pairwise between spatio-temporal vectors in the same time dimension to establish spatial feature correlations, and temporal attention learning is then performed pairwise between spatio-temporal vectors in the same space dimension to establish temporal feature correlations. The global spatio-temporal self-attention encoding network comprises a plurality of global spatio-temporal self-attention encoding modules, each consisting of three residual blocks: the first residual block comprises a normalization layer, three fully connected layers and a multi-head attention layer for learning spatial feature correlations; the second residual block comprises a normalization layer, three fully connected layers and a multi-head attention layer for learning temporal feature correlations; and the third residual block comprises a normalization layer and a feed-forward fully connected layer.
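A minimal sketch of this factorized spatio-temporal attention follows: a single attention head with Q = K = V and no learned projections, residual connections or normalization layers, all of which the actual encoding modules include. Everything here is a simplified illustration of the attention factorization, not the patent's implementation, and the order of the two passes (temporal then spatial, as in step 3-2) is structural, not essential.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(seq):
    """Single-head scaled dot-product self-attention over a list of vectors,
    with Q = K = V = seq (learned projections omitted for brevity)."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, seq)) for i in range(d)])
    return out

def factorized_attention(tokens):
    """tokens[t][s] is the feature vector at frame t, spatial position s.
    Temporal attention mixes tokens sharing a spatial position; spatial
    attention then mixes tokens within the same frame."""
    T, S = len(tokens), len(tokens[0])
    tmp = [[None] * S for _ in range(T)]
    for s in range(S):                           # temporal attention, fixed s
        col = self_attention([tokens[t][s] for t in range(T)])
        for t in range(T):
            tmp[t][s] = col[t]
    return [self_attention(row) for row in tmp]  # spatial attention, fixed t
```

Factorizing attention this way replaces one pass over all T×S tokens with a temporal pass of length T and a spatial pass of length S, which is the usual argument for its lower cost on video.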
Finally, the spatio-temporal feature vector output by the global spatio-temporal self-attention encoding network after learning is input into the heart rate signal characterization network to obtain a blood volume pulse (BVP) signal. The heart rate signal characterization network comprises a plurality of upsampling modules, each comprising a three-dimensional convolution layer, a batch normalization layer and an activation layer. The heart rate signal is obtained by applying a Fourier transform to the BVP signal. Measurement accuracy is evaluated by computing the mean square error and the Pearson correlation coefficient between the ground-truth and predicted heart rates over the whole test set: the smaller the mean square error and the closer the correlation coefficient is to 1, the closer the predictions are to the true values and the smaller the error.
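The Fourier-based heart rate readout and the two evaluation metrics can be sketched as follows. The 0.7 to 4 Hz search band (42 to 240 bpm) is an assumption commonly used in remote-PPG work, not a value stated in the patent; a direct O(n^2) DFT is used for clarity.

```python
import math

def heart_rate_from_bvp(bvp, fps):
    """Estimate heart rate (bpm) as the dominant DFT frequency of the
    predicted blood volume pulse signal, searched within 0.7-4 Hz."""
    n = len(bvp)
    mean = sum(bvp) / n
    x = [v - mean for v in bvp]           # remove DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n                   # frequency of DFT bin k
        if not (0.7 <= f <= 4.0):
            continue
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        p = re * re + im * im             # spectral power at bin k
        if p > best_p:
            best_f, best_p = f, p
    return 60.0 * best_f

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def pearson(y_true, y_pred):
    n = len(y_true)
    ma, mb = sum(y_true) / n, sum(y_pred) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(y_true, y_pred))
    va = math.sqrt(sum((a - ma) ** 2 for a in y_true))
    vb = math.sqrt(sum((b - mb) ** 2 for b in y_pred))
    return cov / (va * vb)
```

For example, a 10-second BVP signal oscillating at 1.5 Hz sampled at 30 fps yields an estimate of 90 bpm.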
The whole training process optimizes a mean square error loss function using the SGD (stochastic gradient descent) optimization algorithm; the initial learning rate is set to 0.01 and is automatically multiplied by 0.5 every 10 training epochs, and training runs for 100 epochs.
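The step-decay schedule just described (initial rate 0.01, halved every 10 epochs, 100 epochs in total) can be written as a small helper; reading 'declines 0.5 times' as multiplication by 0.5 is an interpretation of the translated text.

```python
def learning_rate(epoch, base_lr=0.01, decay=0.5, step=10):
    """Learning rate for a given epoch under step decay:
    base_lr * decay ** (epoch // step)."""
    return base_lr * decay ** (epoch // step)

# full 100-epoch schedule: 0.01 for epochs 0-9, 0.005 for 10-19, and so on
schedule = [learning_rate(e) for e in range(100)]
```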
In summary, the non-contact heart rate detection method based on spatio-temporal self-attention and a deep neural network processes the original video sequence by inter-frame difference extraction, which effectively reduces the impact of the large amount of interference present in unconstrained environments on the accuracy of non-contact heart rate detection. The video sequence segmentation module segments the video sequence into non-overlapping tokens along the time, height and width dimensions, which fuses the temporal and spatial information of the video sequence to a certain extent, effectively reduces the number of parameters involved in network computation, and improves the computational efficiency and processing capacity of the deep neural network. The temporal and spatial self-attention mechanisms are introduced to fully model the time and space domains of the video sequence, which overcomes the loss of spatio-temporal information suffered by earlier single self-attention mechanisms and effectively improves the model's spatio-temporal feature extraction capability and training efficiency.
The above embodiments are presented only to illustrate the invention and are not intended to be limiting; any equivalent substitution or modification of the technical solutions of the present invention and their inventive concepts falls within the scope of the present invention.

Claims (7)

1. A non-contact heart rate detection method based on a space-time self-attention and deep neural network is characterized by comprising the following steps of:
step 1, selecting a plurality of videos from a video database of remote physiological measurement, processing each frame of image by using a face key point detection technology to realize cutting and face alignment operation, and finally dividing the cut videos into a training set and a testing set according to a specified proportion, wherein the training set is used for training and learning a neural network model, and the testing set is used for evaluating the performance of the trained neural network model;
step 2, carrying out inter-frame difference extraction on the training set and the testing set, leaving useful information, and removing useless information to obtain a processed video sequence, wherein the useful information comprises face information, and the useless information comprises environmental background information;
step 3, constructing an end-to-end trainable deep neural network model combining a space-time self-attention mechanism, wherein the deep neural network model comprises a video sequence segmentation module, a space-time self-attention coding module and a heart rate signal characterization module;
step 4, sending the training set processed in the step 2 into a deep neural network for training to obtain a trained deep neural network model;
step 5, inputting the test set processed in the step 2 into a trained deep neural network for testing so as to verify the effect and performance of the deep neural network model;
step 6, performing non-contact heart rate measurement on a new video, including: processing the video on which non-contact heart rate measurement is to be performed with step 1 and step 2 to obtain the corresponding processed video sequence, inputting it into the trained and verified deep neural network model, and finally characterizing it to obtain the measured heart rate value.
2. The non-contact heart rate detection method according to claim 1, wherein step 3 specifically comprises the steps of:
step 3-1, inputting the video sequence processed in step 1 and step 2 into the video sequence segmentation module, segmenting the video sequence into non-overlapping tokens along the time, height and width dimensions, and projecting the tokens into a high-dimensional space to obtain the spatio-temporal feature vectors corresponding to the video sequence;
step 3-2, adding positional encoding to the spatio-temporal feature vectors obtained in step 3-1 and, after pooling, inputting them into the spatio-temporal self-attention encoding network for learning, where temporal attention learning is performed on spatio-temporal feature vectors in the same space dimension to establish temporal feature correlations, and spatial attention learning is performed on spatio-temporal feature vectors in the same time dimension to establish spatial feature correlations;
step 3-3, inputting the spatio-temporal feature vectors learned by the spatio-temporal self-attention encoding network in step 3-2 into the heart rate signal characterization network for computation, finally obtaining the heart rate value of the video sequence.
3. The non-contact heart rate detection method according to claim 2, wherein: in step 3, the global spatio-temporal self-attention encoding network comprises a plurality of global spatio-temporal self-attention encoding modules, each consisting of three residual blocks, wherein the first residual block comprises a normalization layer, three fully connected layers and a multi-head spatial attention layer for learning spatial feature correlations; the second residual block comprises a normalization layer, three fully connected layers and a multi-head temporal attention layer for learning temporal feature correlations; and the third residual block comprises a normalization layer and a feed-forward fully connected layer.
4. The non-contact heart rate detection method according to claim 2, wherein: in step 3, the heart rate signal characterization network comprises a plurality of upsampling modules, each comprising a three-dimensional convolution layer, a batch normalization layer and an activation layer.
5. The non-contact heart rate detection method according to claim 2, wherein: the video sequence segmentation module segments the frame-normalized video sequence F ∈ R^(C×H×W×T) into non-overlapping tokens P ∈ R^(C1×H1×W1×T1), where H and W denote the height and width of the images in the video sequence (both 224), T denotes the length of the video sequence (96), and C denotes the number of image channels (3); a linear layer maps the tokens to a spatio-temporal feature vector containing heart rate information, L ∈ R^(D×T×S), where D denotes the feature dimension and S = H×W denotes the size of each frame image; the positional-encoding information E_pos is added to the spatio-temporal feature vector and, after pooling, the spatio-temporal feature vector X_0 finally input to the spatio-temporal self-attention encoding network is obtained: X_0 = dropout(L + E_pos).
6. The non-contact heart rate detection method of claim 5, wherein: the feature dimension is 256.
7. The non-contact heart rate detection method according to claim 1, wherein: the whole training process comprises optimizing a mean square error loss function with the SGD (stochastic gradient descent) algorithm, where the initial learning rate is set to 0.01 and is automatically multiplied by 0.5 every 10 training epochs, and the number of training epochs is 100.
CN202311344530.3A 2023-10-17 2023-10-17 Non-contact heart rate detection method based on space-time self-attention and deep neural network Pending CN117373085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311344530.3A CN117373085A (en) 2023-10-17 2023-10-17 Non-contact heart rate detection method based on space-time self-attention and deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311344530.3A CN117373085A (en) 2023-10-17 2023-10-17 Non-contact heart rate detection method based on space-time self-attention and deep neural network

Publications (1)

Publication Number Publication Date
CN117373085A (en) 2024-01-09

Family

ID=89405374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311344530.3A Pending CN117373085A (en) 2023-10-17 2023-10-17 Non-contact heart rate detection method based on space-time self-attention and deep neural network

Country Status (1)

Country Link
CN (1) CN117373085A (en)

Similar Documents

Publication Publication Date Title
CN113673489B (en) Video group behavior identification method based on cascade Transformer
CN111310707B (en) Bone-based graph annotation meaning network action recognition method and system
CN111079532B (en) Video content description method based on text self-encoder
CN109919204B (en) Noise image-oriented deep learning clustering method
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN111738054B (en) Behavior anomaly detection method based on space-time self-encoder network and space-time CNN
CN112560948B (en) Fundus image classification method and imaging method under data deviation
CN111523421A (en) Multi-user behavior detection method and system based on deep learning and fusion of various interaction information
CN110930378A (en) Emphysema image processing method and system based on low data demand
CN114445420A (en) Image segmentation model with coding and decoding structure combined with attention mechanism and training method thereof
CN114883003A (en) ICU (intensive care unit) hospitalization duration and death risk prediction method based on convolutional neural network
CN115831377A (en) Intra-hospital death risk prediction method based on ICU (intensive care unit) medical record data
CN115346269A (en) Gesture motion recognition method
CN116383757B (en) Bearing fault diagnosis method based on multi-scale feature fusion and migration learning
CN117237685A (en) Mechanical equipment fault diagnosis method based on multi-mode deep clustering
CN116910573A (en) Training method and device for abnormality diagnosis model, electronic equipment and storage medium
CN117152815A (en) Student activity accompanying data analysis method, device and equipment
CN117036760A (en) Multi-view clustering model implementation method based on graph comparison learning
CN116662307A (en) Intelligent early warning method, system and equipment based on multi-source data fusion
CN116543338A (en) Student classroom behavior detection method based on gaze target estimation
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article
CN117373085A (en) Non-contact heart rate detection method based on space-time self-attention and deep neural network
CN114980723A (en) Fault prediction method and system for cross-working-condition chip mounter suction nozzle
CN114052762A (en) Method for predicting size of narrow blood vessel and size of instrument based on Swin-T
CN112598115A (en) Deep neural network hierarchical analysis method based on non-local neighbor relation learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination