CN114550033A - Video sequence guide wire segmentation method and device, electronic equipment and readable medium - Google Patents


Info

Publication number
CN114550033A
Authority
CN
China
Prior art keywords
picture frame
video sequence
segmentation
neural network
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210110680.7A
Other languages
Chinese (zh)
Inventor
王澄
滕皋军
朱建军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Hengle Medical Technology Co ltd
Original Assignee
Zhuhai Hengle Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Hengle Medical Technology Co ltd filed Critical Zhuhai Hengle Medical Technology Co ltd
Priority to CN202210110680.7A priority Critical patent/CN114550033A/en
Publication of CN114550033A publication Critical patent/CN114550033A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a video sequence guide wire segmentation method, a device, electronic equipment and a readable medium, the method comprising the following steps: in response to a segmentation request, acquiring a picture frame of a video sequence guide wire; dividing the picture frame into a plurality of image blocks; encoding the image blocks through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks, the coding blocks representing the global feature information of the picture frame; and performing up-sampling, convolution and cascade (concatenation) processing on the coding blocks to obtain a segmentation result of the picture frame. The beneficial effects of the invention are as follows: the method remedies the CNN's deficiency in acquiring the global information of an image by incorporating a Transformer, which captures global information through a self-attention mechanism; combining the CNN and the Transformer makes their advantages complementary and improves the segmentation precision of the guide wire and catheter in video sequences.

Description

Video sequence guide wire segmentation method and device, electronic equipment and readable medium
Technical Field
The invention relates to the field of computers, in particular to a video sequence guide wire segmentation method, a video sequence guide wire segmentation device, electronic equipment and a readable medium.
Background
Guidewire segmentation, a crucial component of cardiovascular robotic interventional systems, has long been a research hotspot. However, designing a real-time and accurate guidewire segmentation model is difficult. The difficulty is exacerbated by the low signal-to-noise ratio of X-ray images, the large imbalance between the numbers of target and background pixels in the images, and the constant, irregular motion of the guidewire.
To date, a number of guidewire segmentation methods have been proposed. Early guidewire segmentation models were mainly based on traditional techniques such as curve fitting. Although such methods work to some extent in specific cases, their accuracy degrades greatly when the length and shape of the guidewire change substantially. Existing deep-learning-based guidewire segmentation models mainly rely on convolutional neural networks (CNNs) and do not make good use of global feature information. In addition, most of these models process each frame independently, without exploiting the temporal information in the guidewire sequence. Effective use of both the global feature information and the temporal information of the guidewire sequence can help a model segment the guidewire better.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and provides a video sequence guide wire segmentation method, a video sequence guide wire segmentation device, an electronic device and a readable medium that improve the segmentation precision of the guide wire and catheter in video sequences.
The technical scheme of the invention comprises a video sequence guide wire segmentation method, characterized by comprising the following steps: in response to a segmentation request, acquiring a picture frame of a video sequence guide wire; dividing the picture frame into a plurality of image blocks; encoding the image blocks through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks, the coding blocks representing the global feature information of the picture frame; and performing up-sampling, convolution and cascade processing on the coding blocks to obtain a segmentation result of the picture frame.
According to the video sequence guide wire segmentation method, acquiring the picture frame of the video sequence guide wire further comprises: acquiring the picture frame and the previous picture frame of the video sequence guide wire; and determining the time-sequence relation of the picture frame from the picture frame and the previous picture frame.
According to the video sequence guide wire segmentation method, dividing the picture frame into a plurality of image blocks comprises: transforming the picture frame into a plurality of flattened image blocks through a CNN convolutional neural network to obtain the image blocks x_p ∈ R^(N×(P²·C)), where (P × P) denotes the resolution of each block, N = HW/P² denotes the number of image blocks, i.e. the length of the sequence, and (H × W) denotes the resolution of the picture frame.
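As a quick illustrative check of the relation N = HW/P² (the frame and block sizes below are assumed example values, not taken from the embodiment):

```python
def num_blocks(H, W, P):
    """Number of (P x P) image blocks in an H x W frame: N = HW / P^2."""
    assert H % P == 0 and W % P == 0, "frame must divide evenly into blocks"
    return (H * W) // (P * P)

# e.g. a 512 x 512 picture frame split into 16 x 16 blocks
print(num_blocks(512, 512, 16))  # -> 1024
```

So a 512 × 512 frame with 16 × 16 blocks yields a Transformer input sequence of length 1024.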
According to the video sequence guide wire segmentation method, encoding the image blocks through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks comprises: mapping the image blocks to an embedding space with a trainable linear projection through the CNN convolutional neural network to obtain coding blocks that retain spatial information, the calculation formula being

z_0 = [x_p^1 E; x_p^2 E; …; x_p^N E] + E_pos

where E ∈ R^((P²·C)×D) represents the block-embedding projection and E_pos ∈ R^(N×D) represents the position embedding; and

taking the coding blocks as input and obtaining, through Transformer neural network training,

z'_l = MSA(LN(z_{l−1})) + z_{l−1}
z_l = MLP(LN(z'_l)) + z'_l

where LN(·) denotes the layer-normalization operation, z_l is the encoded image representation, and MSA(·) and MLP(·) denote multi-head attention processing and perceptron processing, respectively.
According to the video sequence guide wire segmentation method, performing up-sampling, convolution and cascade processing on the coding blocks to obtain the segmentation result of the picture frame comprises: realizing the multi-head attention processing and the perceptron processing through the interaction layers of the Transformer neural network, each interaction layer comprising a multi-head attention block and a multi-layer perceptron block.
The video sequence guide wire segmentation method further comprises: acquiring low-level feature information of the picture frame and of the previous picture frame through the CNN convolutional neural network.
According to the video sequence guide wire segmentation method, performing up-sampling, convolution and cascade processing on the coding blocks to obtain the segmentation result of the picture frame comprises: performing feature conversion on the global feature information to obtain a feature conversion result; cascading the coding blocks with the low-level feature information to obtain mixed features; and performing multiple up-sampling on the mixed features together with the feature conversion result to obtain the segmentation result.
The technical scheme of the invention also comprises a video sequence guide wire segmentation device, which comprises: the first module is used for acquiring a picture frame of a video sequence guide wire according to the segmentation request; a second module for dividing the picture frame into a plurality of image blocks; a third module, configured to perform coding processing on the image block through a CNN convolutional neural network and a Transformer neural network to obtain multiple coding blocks, where the coding blocks are used to represent global feature information of the picture frame; and the fourth module is used for performing upsampling, convolution and cascade processing on the coding block to obtain a segmentation result of the picture frame.
The technical scheme of the invention also comprises an electronic device which comprises a processor and a memory; the memory is used for storing programs; the processor executes the program to implement the video sequence guide wire segmentation method according to any one of the above items.
The technical solution of the present invention further includes a computer-readable storage medium, wherein the storage medium stores a program, and the program is executed by a processor to implement any one of the video sequence guidewire segmentation methods.
The beneficial effects of the invention are as follows: the method remedies the CNN's deficiency in acquiring the global information of an image by incorporating a Transformer, which captures global information through a self-attention mechanism; combining the CNN and the Transformer makes their advantages complementary and improves the segmentation precision of the guide wire and catheter in video sequences.
Drawings
The invention is further described below with reference to the accompanying drawings and embodiments.
fig. 1 is a flow chart of a video sequence guide wire segmentation method according to an embodiment of the invention.
Fig. 2 is a schematic diagram of guide wire segmentation with the CNN convolutional neural network and the Transformer neural network according to an embodiment of the present invention.
FIG. 3 is a diagram of a Transformer neural network according to an embodiment of the invention.
Fig. 4 is a diagram of a video sequence guide wire segmentation device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements, or elements having identical or similar functions. In the following description, suffixes such as "module", "component" or "unit" are used only to facilitate the description of the invention and carry no special meaning of their own; accordingly, "module", "component" and "unit" may be used interchangeably. "First", "second" and the like serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated features, or their precedence. In the following description, the method steps are numbered consecutively for ease of examination and understanding; provided that the technical effect achieved by the technical scheme of the invention is not affected, the order in which the steps are implemented may be adjusted in view of the overall technical scheme and the logical relations among the steps. The embodiments described below with reference to the accompanying drawings are illustrative, intended only to explain the present invention, and are not to be construed as limiting it.
Fig. 1 is a flow chart of a video sequence guide wire segmentation method according to an embodiment of the invention. The process comprises the following steps:
s100, responding to a segmentation request, and acquiring a picture frame of a video sequence guide wire;
s200, dividing the picture frame into a plurality of picture blocks;
s300, coding an image block through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks, wherein the coding blocks are used for representing global feature information of a picture frame;
s400, performing up-sampling, convolution and cascade processing on the coding block to obtain a segmentation result of the picture frame.
In some embodiments, the method may also be applied to catheter segmentation.
Fig. 2 is a schematic diagram of guide wire segmentation with the CNN convolutional neural network and the Transformer neural network according to an embodiment of the present invention. Current frame and Previous frame in the figure are the current and previous frames of the video sequence; two CNN convolutional neural networks act on the image blocks and on the low-level feature information of the picture frames. In the figure, Reshape denotes matrix reshaping; Conv3x3, ReLU denotes convolution with a 3×3 kernel followed by ReLU activation; Upsample denotes up-sampling; Feature concatenation, shown in the dashed box, denotes the feature fusion; and Segmentation head denotes the segmentation head.
The method mainly comprises encoding and decoding, wherein the encoding comprises the following steps:
the encoder mainly comprises a CNN module and a Transformer module. Since the Transformer is input in sequence, it is first necessary to deform the guidewire image X into a series of flattened blocks
Figure BDA0003494989220000051
Where (P x P) denotes the resolution of each block,
Figure BDA0003494989220000052
indicating the number of image blocks, i.e. the length of the sequence, (H × W) indicates the resolution of the original image. Next, the blocks are mapped to a D-dimensional embedding space using a trainable linear projection. To encode spatial information of a block, position embedding is added to the block embedding to preserve position information. The calculation formula is as follows:
Figure BDA0003494989220000053
wherein
Figure BDA0003494989220000054
Representing block-embedded projections, Epos∈RN×DIndicating position embedding.
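The block embedding z_0 = [x_p^1 E; …; x_p^N E] + E_pos can be sketched in a few lines. The projection E and position embedding E_pos below are randomly initialized stand-ins (assumptions); in the method they are trainable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_blocks(x_p, D=32):
    """Map N flattened blocks of size P^2*C to a D-dimensional embedding
    space and add position embeddings: z0 = x_p @ E + E_pos.

    E and E_pos are random here; in the method they are trained."""
    N, PPC = x_p.shape
    E = rng.standard_normal((PPC, D)) * 0.02      # block-embedding projection
    E_pos = rng.standard_normal((N, D)) * 0.02    # position embedding
    return x_p @ E + E_pos

x_p = rng.standard_normal((196, 16 * 16 * 1))     # N=196 blocks, P=16, C=1
z0 = embed_blocks(x_p)
print(z0.shape)  # (196, 32)
```

Note that E_pos is added per sequence position, which is how the otherwise permutation-invariant Transformer retains where each block came from in the frame.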
The Transformer module consists of multiple interacting layers of Multi-head Self-Attention (MSA) and Multi-Layer Perceptron (MLP) blocks. The structure of a Transformer layer is shown in FIG. 3.
The output of the l-th layer is expressed as follows:

z'_l = MSA(LN(z_{l−1})) + z_{l−1}
z_l = MLP(LN(z'_l)) + z'_l

where LN(·) denotes the layer-normalization operation and z_l is the encoded image representation.
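The two layer equations can be sketched as a single pre-norm Transformer layer. All weights below are random stand-ins (assumptions) for the trained parameters, and the head count is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer_norm(z, eps=1e-5):
    """LN(.): normalize each token over the feature dimension."""
    mu = z.mean(-1, keepdims=True)
    var = z.var(-1, keepdims=True)
    return (z - mu) / np.sqrt(var + eps)

def softmax(a):
    a = a - a.max(-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(-1, keepdims=True)

def transformer_layer(z_prev, n_heads=4):
    """One pre-norm layer: z'_l = MSA(LN(z_{l-1})) + z_{l-1};
    then z_l = MLP(LN(z'_l)) + z'_l."""
    N, D = z_prev.shape
    dh = D // n_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) * 0.02 for _ in range(4))
    x = layer_norm(z_prev)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        att = softmax(q[:, s] @ k[:, s].T / np.sqrt(dh))  # self-attention
        heads.append(att @ v[:, s])
    z_mid = np.concatenate(heads, axis=-1) @ Wo + z_prev   # MSA residual
    W1 = rng.standard_normal((D, 4 * D)) * 0.02
    W2 = rng.standard_normal((4 * D, D)) * 0.02
    hidden = np.maximum(layer_norm(z_mid) @ W1, 0.0)       # MLP with ReLU
    return hidden @ W2 + z_mid                             # MLP residual

z = transformer_layer(rng.standard_normal((196, 32)))
print(z.shape)  # (196, 32)
```

The N×N attention matrix inside each head is what lets every block attend to every other block, i.e. the self-attention mechanism that captures the global information a CNN's local receptive field misses.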
The decoding comprises the following:
The decoder consists essentially of upsampling, convolution and concatenation operations. The feature representation generated by the encoder could be directly upsampled to produce the final segmentation result. However, such a simple operation does not yield a desirable segmentation effect, because the size of the Transformer-generated feature representation, (HW/P²) × D, differs too greatly from the original image size (H × W), and direct upsampling loses much low-level information such as shape and contour. Therefore, to obtain such low-level information, embodiments of the present invention also utilize the feature representation generated directly by the CNN. In addition, to capture temporal information, the embodiment uses the guidewire information of both the current frame and the previous frame. These representations are concatenated into a mixed feature representation, which is then upsampled and concatenated with the previous-level features in a top-down manner to generate the final feature representation; finally, a segmentation head produces the final segmentation result.
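The decoder's concatenate-then-upsample fusion of the Transformer features with the CNN low-level features of the current and previous frames can be sketched as follows. The shapes, the nearest-neighbour upsampling, and the 1×1-convolution-like segmentation head are all illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x spatial upsampling of an (H, W, C) feature map."""
    return f.repeat(2, axis=0).repeat(2, axis=1)

def decode(trans_feat, cnn_cur, cnn_prev):
    """Concatenate Transformer features with CNN low-level features of the
    current and previous frames, upsample the mixed features back to the
    input resolution, and reduce to a one-channel segmentation map."""
    mixed = np.concatenate([trans_feat, cnn_cur, cnn_prev], axis=-1)
    while mixed.shape[0] < 64:          # assumed target resolution 64 x 64
        mixed = upsample2x(mixed)
    # segmentation head (random weights as stand-ins for trained ones)
    w = np.random.default_rng(2).standard_normal((mixed.shape[-1], 1))
    logits = mixed @ w
    return (logits[..., 0] > 0).astype(np.uint8)

feat = np.random.rand(16, 16, 8)        # Transformer features on HW/P^2 grid
low_cur = np.random.rand(16, 16, 4)     # CNN low-level features, current frame
low_prev = np.random.rand(16, 16, 4)    # CNN low-level features, previous frame
mask = decode(feat, low_cur, low_prev)
print(mask.shape)  # (64, 64)
```

The channel-wise concatenation before upsampling is the point of the design: the previous-frame channels inject temporal context and the CNN channels restore the shape/contour detail the Transformer grid has lost.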
Fig. 4 is a diagram of a video sequence guide wire segmentation apparatus according to an embodiment of the present invention, comprising a first processing module 401, a second processing module 402, a third processing module 403 and a fourth processing module 404.
the first module is used for acquiring a picture frame of a video sequence guide wire according to a segmentation request; a second module for dividing the picture frame into a plurality of image blocks; the third module is used for coding the image block through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks, and the coding blocks are used for representing the global feature information of the image frame; and the fourth module is used for performing up-sampling, convolution and cascade processing on the coding block to obtain a segmentation result of the picture frame. The embodiment solves the problem that the CNN has defects in acquiring the global information of the image, the transform is combined to capture the global information by utilizing a self-attention mechanism, the CNN and the transform are combined to realize advantage complementation, and the segmentation precision of the video sequence guide wire catheter is improved.
The embodiment of the invention also provides an electronic device comprising a processor and a memory.
The memory stores a program, and the processor executes the program to perform the above video sequence guide wire segmentation method. The electronic device may be any device capable of loading and running a software system implementing the video sequence guide wire segmentation provided by the embodiment of the invention, such as a personal computer, a mobile phone, a smartphone or a tablet computer.
An embodiment of the present invention further provides a computer-readable storage medium, where the storage medium stores a program, and the program is executed by a processor to implement the video sequence guide wire segmentation method described above.
It should be recognized that the method steps in embodiments of the present invention may be embodied or carried out by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (10)

1. A video sequence guide wire segmentation method is characterized by comprising the following steps:
responding to the segmentation request, and acquiring a picture frame of a video sequence guide wire;
dividing the picture frame into a plurality of image blocks;
coding the image block through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks, wherein the coding blocks are used for representing the global feature information of the picture frame;
and performing up-sampling, convolution and cascade processing on the coding block to obtain a segmentation result of the picture frame.
2. The method according to claim 1, wherein the obtaining a picture frame of a video sequence guide wire further comprises:
acquiring the picture frame and a previous picture frame of the video sequence guide wire;
and determining the time-sequence relation of the picture frame from the picture frame and the previous picture frame.
3. The video sequence guidewire segmentation method of claim 2, wherein the dividing the picture frame into a plurality of tiles comprises:
transforming the picture frame into a plurality of flattened image blocks through a CNN convolutional neural network to obtain the image blocks x_p ∈ R^(N×(P²·C)), where (P × P) denotes the resolution of each block, N = HW/P² denotes the number of image blocks, i.e. the length of the sequence, and (H × W) denotes the resolution of the picture frame.
4. The method of claim 3, wherein encoding the image blocks through a CNN convolutional neural network and a Transformer neural network to obtain a plurality of coding blocks comprises:
mapping the image blocks to an embedding space with a trainable linear projection through the CNN convolutional neural network to obtain coding blocks that retain spatial information, the calculation formula being

z_0 = [x_p^1 E; x_p^2 E; …; x_p^N E] + E_pos

where E ∈ R^((P²·C)×D) represents the block-embedding projection and E_pos ∈ R^(N×D) represents the position embedding; and

taking the coding blocks as input and obtaining, through Transformer neural network training,

z'_l = MSA(LN(z_{l−1})) + z_{l−1}
z_l = MLP(LN(z'_l)) + z'_l

where LN(·) denotes the layer-normalization operation, z_l is the encoded image representation, and MSA(·) and MLP(·) denote multi-head attention processing and perceptron processing, respectively.
5. The method according to claim 4, wherein the performing upsampling, convolution and concatenation on the coding block to obtain the segmentation result of the picture frame comprises:
the multi-head attention processing and the perceptron processing are respectively realized through the Transformer neural network interaction layer, and the interaction layer comprises a multi-head attention block and a multi-layer perceptron block, and the multi-head attention block and the multi-layer perceptron block.
6. The video sequence guidewire segmentation method of claim 4, wherein the method further comprises:
acquiring low-level feature information of the picture frame and of the previous picture frame through the CNN convolutional neural network.
7. The method according to claim 6, wherein the performing upsampling, convolution and concatenation on the coding block to obtain the segmentation result of the picture frame comprises:
performing feature conversion on the global feature information to obtain a feature conversion result;
and cascading the coding blocks with the low-level feature information to obtain mixed features, and performing multiple upsampling on the mixed features together with the feature conversion result to obtain the segmentation result.
8. A video sequence guidewire segmentation apparatus, comprising:
the first module is used for acquiring a picture frame of a video sequence guide wire according to the segmentation request;
a second module for dividing the picture frame into a plurality of image blocks;
a third module, configured to perform coding processing on the image block through a CNN convolutional neural network and a Transformer neural network to obtain multiple coding blocks, where the coding blocks are used to represent global feature information of the picture frame;
and the fourth module is used for performing upsampling, convolution and cascade processing on the coding block to obtain a segmentation result of the picture frame.
9. An electronic device comprising a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the video sequence guide wire segmentation method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a program which is executed by a processor to implement the video sequence guidewire segmentation method according to any one of claims 1 to 7.
CN202210110680.7A 2022-01-29 2022-01-29 Video sequence guide wire segmentation method and device, electronic equipment and readable medium Pending CN114550033A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210110680.7A CN114550033A (en) 2022-01-29 2022-01-29 Video sequence guide wire segmentation method and device, electronic equipment and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210110680.7A CN114550033A (en) 2022-01-29 2022-01-29 Video sequence guide wire segmentation method and device, electronic equipment and readable medium

Publications (1)

Publication Number Publication Date
CN114550033A (en) 2022-05-27

Family

ID=81673969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210110680.7A Pending CN114550033A (en) 2022-01-29 2022-01-29 Video sequence guide wire segmentation method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN114550033A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001886A1 (en) * 2022-06-30 2024-01-04 深圳市中兴微电子技术有限公司 Coding unit division method, electronic device and computer readable storage medium
CN115097941A (en) * 2022-07-13 2022-09-23 北京百度网讯科技有限公司 Human interaction detection method, human interaction detection device, human interaction detection equipment and storage medium
CN115097941B (en) * 2022-07-13 2023-10-10 北京百度网讯科技有限公司 Character interaction detection method, device, equipment and storage medium
WO2024027616A1 (en) * 2022-08-01 2024-02-08 深圳市中兴微电子技术有限公司 Intra-frame prediction method and apparatus, computer device, and readable medium

Similar Documents

Publication Publication Date Title
CN114550033A (en) Video sequence guide wire segmentation method and device, electronic equipment and readable medium
WO2020108562A1 (en) Automatic tumor segmentation method and system in ct image
US20190355103A1 (en) Guided hallucination for missing image content using a neural network
CN112257572B (en) Behavior identification method based on self-attention mechanism
CN111291825B (en) Focus classification model training method, apparatus, computer device and storage medium
CN112950471A (en) Video super-resolution processing method and device, super-resolution reconstruction model and medium
CN112465060A (en) Method and device for detecting target object in image, electronic equipment and readable storage medium
CN113807361B (en) Neural network, target detection method, neural network training method and related products
CN110599455A (en) Display screen defect detection network model, method and device, electronic equipment and storage medium
CN113780326A (en) Image processing method and device, storage medium and electronic equipment
CN112802197A (en) Visual SLAM method and system based on full convolution neural network in dynamic scene
CN112420170A (en) Method for improving image classification accuracy of computer aided diagnosis system
WO2022216521A1 (en) Dual-flattening transformer through decomposed row and column queries for semantic segmentation
CN117593275A (en) Medical image segmentation system
CN116934591A (en) Image stitching method, device and equipment for multi-scale feature extraction and storage medium
CN108961161B (en) Image data processing method, device and computer storage medium
CN116095183A (en) Data compression method and related equipment
CN113610856B (en) Method and device for training image segmentation model and image segmentation
CN115994892A (en) Lightweight medical image segmentation method and system based on ghostnet
CN116205924A (en) Prostate segmentation algorithm based on U2-Net
CN112530554B (en) Scanning positioning method and device, storage medium and electronic equipment
CN115205487A (en) Monocular camera face reconstruction method and device
CN114882047A (en) Medical image segmentation method and system based on semi-supervision and Transformers
CN114913196A (en) Attention-based dense optical flow calculation method
CN110489584B (en) Image classification method and system based on dense connection MobileNet model

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination