CN118052716B - Ovarian cyst image processing method - Google Patents

Ovarian cyst image processing method

Info

Publication number
CN118052716B
CN118052716B (application CN202410444353.4A)
Authority
CN
China
Prior art keywords
feature
layer
image
convolution
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410444353.4A
Other languages
Chinese (zh)
Other versions
CN118052716A (en)
Inventor
Li Min (李敏)
Zhang Yong (张勇)
Jiao Linghua (焦灵华)
Wang Yuxi (王玉玺)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lin Danqin
Original Assignee
Shandong Huanghai Intelligent Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Huanghai Intelligent Equipment Co ltd filed Critical Shandong Huanghai Intelligent Equipment Co ltd
Priority to CN202410444353.4A priority Critical patent/CN118052716B/en
Publication of CN118052716A publication Critical patent/CN118052716A/en
Application granted granted Critical
Publication of CN118052716B publication Critical patent/CN118052716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an ovarian cyst image processing method and relates to the technical field of image recognition. The provided processing flow comprises the steps of constructing an ovarian cyst data set, constructing a deformed self-attention module TSAM, constructing a convolutional feed-forward network CFFN, constructing a feature processing block FHB, constructing a feature processing group FHG, constructing a feature processing layer FHL, constructing a super-resolution reconstruction module, constructing an ovarian cyst image processing model, training the model, and processing in real time. The deformed self-attention module TSAM achieves a proper balance between channel and spatial information in self-attention; the convolutional feed-forward network CFFN compensates for the high-frequency information lost by the TSAM; together, the TSAM and CFFN form a feature processing block FHB, which effectively builds pairwise correlations within a large window while introducing only a small computational burden.

Description

Ovarian cyst image processing method
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to an ovarian cyst image processing method.
Background
Automatic processing of ovarian cyst images with image recognition technology can improve physicians' working efficiency and diagnostic accuracy, reduce subjective error, speed up the diagnostic process, provide a more reliable basis for personalized treatment, and improve patients' health management and treatment outcomes.
In the field of medical imaging, real-time processing of ultrasound images is critical, particularly for diagnosing diseases such as ovarian cysts. Ultrasound images arrive in real time and change quickly, so physicians must analyze and interpret them accurately within a short time; choosing an algorithm with sufficient accuracy and recognition speed is therefore important. Conventional image processing algorithms, which are typically based on hand-crafted feature extraction and classifiers, can be limited when processing ultrasound images and may not achieve sufficient accuracy in complex imaging situations.
Super-resolution reconstruction of ovarian cyst images has important clinical significance: by increasing the spatial resolution of the image, it provides clearer and more detailed image information and helps physicians diagnose and evaluate the type, size and position of an ovarian cyst more accurately. A fast and accurate super-resolution reconstruction algorithm can generate high-quality reconstruction results in a short time, enabling a quicker and more accurate diagnostic process, providing physicians with a more reliable auxiliary tool, and ultimately improving treatment outcomes and the patient experience.
For existing super-resolution image reconstruction algorithms, increasing the window size of a Transformer-based image super-resolution model can significantly improve performance, but the computational cost is considerable. By improving the algorithm structure, larger performance gains can be obtained at a smaller computational overhead.
Disclosure of Invention
The invention provides an ovarian cyst image processing method. It proposes a deformed self-attention module TSAM, which achieves a proper balance between channel and spatial information in self-attention, and a convolutional feed-forward network CFFN, which adds a local depthwise-convolution branch between the two linear layers of an FFN block to help encode more detail; compared with a plain FFN the added computation is small, and the CFFN compensates for the high-frequency information lost by the TSAM. Together, the TSAM and CFFN form a feature processing block FHB. The FHB has a simple overall structure, can easily be applied to existing window-self-attention super-resolution networks, and effectively builds pairwise correlations within a large window while introducing only a small computational burden.
To this end, the invention provides an ovarian cyst image processing method comprising the following steps:
S1, constructing an ovarian cyst data set: acquiring ovarian images with ultrasound equipment, manually marking the cyst area in the ovarian images, forming the ovarian cyst data set from all annotation data and ovarian images, and dividing it into a training set and a validation set at a ratio of 8:2 (see the data-split sketch after this list);
S2, constructing a deformed self-attention module TSAM, which comprises a plurality of linear layers, element-by-element addition, matrix multiplication and dimension transformation;
S3, constructing a convolutional feed-forward network CFFN, which comprises linear layers, ReLU activation functions, a Layer Norm and a depthwise convolution;
S4, constructing a feature processing block FHB, which comprises a deformed self-attention module TSAM and a convolutional feed-forward network CFFN;
S5, constructing a feature processing group FHG, which comprises a plurality of feature processing blocks FHB and a single convolution;
S6, constructing a feature processing layer FHL, which comprises a plurality of feature processing groups FHG and a single convolution;
S7, constructing a super-resolution reconstruction module, which comprises a pixel embedding layer, a feature processing layer FHL and a super-resolution image reconstruction layer;
S8, constructing an ovarian cyst image processing model, which consists of an input, the super-resolution reconstruction module, a backbone network, a detection head and an output;
S9, training the ovarian cyst image processing model and processing in real time: training the model with the ovarian cyst data set, and after training is complete, inputting ovarian images generated in real time by the ultrasound equipment into the model to obtain processing results.
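For illustration only, the 8:2 split of step S1 can be realized as in the following minimal Python sketch; the directory layout, file extension and random seed are assumptions of the sketch, not part of the method.

```python
# Minimal sketch of the S1 train/validation split (8:2). The directory
# layout, *.png extension and seed below are illustrative assumptions.
import random
from pathlib import Path

def split_dataset(image_dir: str, ratio: float = 0.8, seed: int = 0):
    """Shuffle the annotated ovarian images and split them 8:2."""
    images = sorted(Path(image_dir).glob("*.png"))  # hypothetical file layout
    random.Random(seed).shuffle(images)
    cut = int(len(images) * ratio)
    return images[:cut], images[cut:]               # training set, validation set

train_set, val_set = split_dataset("ovarian_cyst_dataset")  # e.g. 3200 / 800 of 4000
```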
Preferably, in step S2, for the deformed self-attention module TSAM, a feature map X_Tin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_Tin and C representing its number of channels, and an attenuation parameter r is set. X_Tin passes through Layer Norm to obtain X_TinL, which is then divided into N non-overlapping square windows, X representing all square windows, S being the side length of each square window and N·S² = H×W. Q, K and V are then obtained: Q = L_Q(X), K = L_K(X), V = L_V(X), where L_Q, L_K and L_V are three linear layers; Q keeps the same channel dimension as X, while the channel dimensions of K and V are compressed to C/r². K and V are then dimension-transformed, i.e. the channels C/r² of K and V are transformed to C and N·S² is transformed to N·S²/r², giving K_t and V_t. Using Q, K_t and V_t, the output X_Tout ∈ R^(H×W×C) of the deformed self-attention module TSAM is obtained: X_Tout = L_O(Softmax(Q·K_t^T/√d_k)·V_t) + X_Tin, where K_t^T is the transpose of K_t, d_k is the channel dimension of K, L_O is a linear layer, + denotes element-by-element addition, and Softmax is the Softmax function.
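For illustration only, the following is a minimal PyTorch sketch of one possible reading of the TSAM described above. The (B, H, W, C) tensor layout, the module and argument names, and the choice of d_k as the transformed channel width C are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSAM(nn.Module):
    """Deformed self-attention module TSAM (a sketch).

    Windows of side S are attended with queries of full channel width C,
    while K and V are first compressed to C/r^2 channels and then reshaped
    so the key/value sequence shrinks from S^2 to S^2/r^2 tokens.
    """

    def __init__(self, channels: int, window: int = 10, r: int = 2):
        super().__init__()
        assert channels % (r * r) == 0 and (window * window) % (r * r) == 0
        self.s, self.r = window, r
        self.norm = nn.LayerNorm(channels)                     # Layer Norm
        self.l_q = nn.Linear(channels, channels)               # L_Q
        self.l_k = nn.Linear(channels, channels // (r * r))    # L_K, compressed
        self.l_v = nn.Linear(channels, channels // (r * r))    # L_V, compressed
        self.l_o = nn.Linear(channels, channels)               # L_O

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, H, W, C)
        b, h, w, c = x.shape
        s, r = self.s, self.r
        xl = self.norm(x)
        # partition into N = (H/S)*(W/S) non-overlapping S x S windows
        xw = xl.view(b, h // s, s, w // s, s, c).permute(0, 1, 3, 2, 4, 5)
        xw = xw.reshape(-1, s * s, c)                     # (B*N, S^2, C)
        q = self.l_q(xw)                                  # (B*N, S^2, C)
        # dimension transformation: (S^2, C/r^2) -> (S^2/r^2, C)
        k_t = self.l_k(xw).reshape(-1, s * s // (r * r), c)
        v_t = self.l_v(xw).reshape(-1, s * s // (r * r), c)
        # scaled dot-product attention; d_k = C is an assumption here
        attn = F.softmax(q @ k_t.transpose(-2, -1) / c ** 0.5, dim=-1)
        out = self.l_o(attn @ v_t)                        # (B*N, S^2, C)
        # merge windows back to (B, H, W, C) and add the residual X_Tin
        out = out.view(b, h // s, w // s, s, s, c).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(b, h, w, c) + x
```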
Preferably, in step S3, for the convolutional feed-forward network CFFN, a feature map X_Cin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_Cin and C representing its number of channels. The intermediate feature X_CinM is first calculated: X_CinM = ReLU1(L_1(LN(X_Cin))), X_CinM ∈ R^(H×W×C), where LN denotes Layer Norm, L_1 is the first linear layer and ReLU1 is the first ReLU activation function. The output of the convolutional feed-forward network CFFN is then calculated: X_Cout = L_2(ReLU2(DWConv(X_CinM)) + X_CinM) + X_Cin, X_Cout ∈ R^(H×W×C), where DWConv is a depthwise convolution, ReLU2 is the second ReLU activation function, L_2 is the second linear layer, and + denotes element-by-element addition.
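Continuing the same sketch, a minimal CFFN under the same assumptions follows; the 3×3 depthwise kernel size is an assumption, as the text does not fix it.

```python
class CFFN(nn.Module):
    """Convolutional feed-forward network CFFN (a sketch).

    Implements X_CinM = ReLU1(L_1(LN(X_Cin))) and
    X_Cout = L_2(ReLU2(DWConv(X_CinM)) + X_CinM) + X_Cin.
    """

    def __init__(self, channels: int, dw_kernel: int = 3):
        super().__init__()
        self.ln = nn.LayerNorm(channels)                  # LN
        self.l1 = nn.Linear(channels, channels)           # L_1
        self.l2 = nn.Linear(channels, channels)           # L_2
        # depthwise convolution: one filter per channel (groups=channels)
        self.dwconv = nn.Conv2d(channels, channels, dw_kernel,
                                padding=dw_kernel // 2, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, H, W, C)
        m = F.relu(self.l1(self.ln(x)))                   # X_CinM
        d = self.dwconv(m.permute(0, 3, 1, 2))            # DWConv on (B, C, H, W)
        d = d.permute(0, 2, 3, 1)
        return self.l2(F.relu(d) + m) + x                 # X_Cout
```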
Preferably, in step S4, for a single feature processing block FHB, the input feature map is first passed through the deformed self-attention module TSAM, then through the convolutional feed-forward network CFFN, and an output feature map is obtained, where the dimensions of the output feature map and the input feature map are identical.
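Under the same assumptions, the feature processing block FHB is simply the two sketches above connected in series:

```python
class FHB(nn.Module):
    """Feature processing block FHB: a TSAM followed by a CFFN in series;
    input and output feature maps have identical dimensions."""

    def __init__(self, channels: int, window: int = 10, r: int = 2):
        super().__init__()
        self.tsam = TSAM(channels, window, r)
        self.cffn = CFFN(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (B, H, W, C) in and out
        return self.cffn(self.tsam(x))
```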
Preferably, in step S5, for the feature processing group FHG, a feature map X_FHGin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_FHGin and C representing its number of channels, and the output X_FHGout ∈ R^(H×W×C) of the feature processing group FHG is obtained: X_FHGout = FHG(X_FHGin), FHG(X_FHGin) = Conv3×3(FHB_n(…FHB_2(FHB_1(X_FHGin))…)) + X_FHGin, where FHG denotes the feature processing group FHG, FHB_n denotes the nth feature processing block FHB, Conv3×3 denotes a 3×3 convolution, and + denotes element-by-element addition.
Preferably, in step S6, for the feature processing layer FHL, a feature map X_FHLin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_FHLin and C representing its number of channels, and the output X_FHLout ∈ R^(H×W×C) of the feature processing layer FHL is obtained: X_FHLout = FHL(X_FHLin), FHL(X_FHLin) = Conv3×3(FHG_m(…FHG_2(FHG_1(X_FHLin))…)) + X_FHLin, where FHL denotes the feature processing layer FHL, FHG_m denotes the mth feature processing group FHG, Conv3×3 denotes a 3×3 convolution, and + denotes element-by-element addition.
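The FHG and the FHL share one pattern (a chain of sub-blocks, a single 3×3 convolution and a residual connection), so a single sketch covers both. The default counts of 4 FHBs per FHG and 3 FHGs per FHL follow the embodiment described later; everything else remains an assumption of the sketch.

```python
class ResidualGroup(nn.Module):
    """Shared pattern of FHG and FHL: Conv3x3(block_n(...block_1(x)...)) + x."""

    def __init__(self, channels: int, blocks):
        super().__init__()
        self.blocks = nn.Sequential(*blocks)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # Conv3x3

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (B, H, W, C)
        y = self.blocks(x).permute(0, 3, 1, 2)            # to (B, C, H, W) for conv
        return self.conv(y).permute(0, 2, 3, 1) + x       # element-wise residual

def make_fhg(channels: int, n: int = 4, window: int = 10, r: int = 2):
    # feature processing group FHG: n feature processing blocks FHB + Conv3x3
    return ResidualGroup(channels, [FHB(channels, window, r) for _ in range(n)])

def make_fhl(channels: int, m: int = 3, n: int = 4):
    # feature processing layer FHL: m feature processing groups FHG + Conv3x3
    return ResidualGroup(channels, [make_fhg(channels, n) for _ in range(m)])
```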
Preferably, in step S7, the super-resolution reconstruction module comprises a pixel embedding layer, a feature processing layer FHL and a super-resolution image reconstruction layer; the pixel embedding layer is a 3×3 convolution, and the super-resolution image reconstruction layer consists of a 3×3 convolution and a sub-pixel convolution. For an input image I ∈ R^(H×W×3), H and W being the height and width of the input image I and the number of channels being 3, the pixel embedding layer converts I into the feature embedding F_pe ∈ R^(H1×W1×C1), where H1 and H are unequal, W1 and W are unequal, and C1 is not equal to 3. F_pe is input into the feature processing layer to obtain the features F_h ∈ R^(H1×W1×C1), and F_h is input into the super-resolution image reconstruction layer to obtain the super-resolution reconstructed image I_sr ∈ R^(2H×2W×3), whose height and width are twice those of I.
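A sketch of the super-resolution reconstruction module follows. Two details are assumptions not fixed by the text: the pixel embedding is taken as a strided 3×3 convolution, since a stride is the natural way for a single 3×3 convolution to give H1 ≠ H, and the sub-pixel convolution is realized as a 3×3 convolution followed by PixelShuffle. The defaults of 192 channels and stride 8 reproduce the 640×640 to 80×80×192 embodiment described later.

```python
class SuperResolutionModule(nn.Module):
    """Pixel embedding -> FHL -> sub-pixel reconstruction (a sketch).

    Assumptions: the embedding stride accounts for H1 != H, and the
    reconstruction upscales by stride * 2 so that I_sr is 2H x 2W x 3.
    """

    def __init__(self, channels: int = 192, stride: int = 8):
        super().__init__()
        up = stride * 2                                   # back past input size, x2
        self.embed = nn.Conv2d(3, channels, 3, stride=stride, padding=1)
        self.fhl = make_fhl(channels)
        self.recon = nn.Sequential(                       # sub-pixel convolution
            nn.Conv2d(channels, 3 * up * up, 3, padding=1),
            nn.PixelShuffle(up))

    def forward(self, img: torch.Tensor) -> torch.Tensor:  # img: (B, 3, H, W)
        f = self.embed(img)                                 # F_pe: (B, C1, H1, W1)
        f = self.fhl(f.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # F_h via FHL
        return self.recon(f)                                # I_sr: (B, 3, 2H, 2W)
```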
Compared with the prior art, the invention has the following technical effects:
The technical scheme provided by the invention proposes a deformed self-attention module TSAM, which achieves a proper balance between channel and spatial information in self-attention, and a convolutional feed-forward network CFFN, which adds a local depthwise-convolution branch between the two linear layers of an FFN block to help encode more detail with little extra computation compared with a plain FFN and which compensates for the high-frequency information lost by the TSAM. Together, the TSAM and CFFN form a feature processing block FHB; the FHB has a simple overall structure, can easily be applied to existing window-self-attention super-resolution networks, and effectively builds pairwise correlations within a large window while introducing only a small computational burden.
Drawings
Fig. 1 is a flowchart of an ovarian cyst image processing provided by the invention.
Fig. 2 is a block diagram of the deformed self-attention module TSAM provided by the present invention.
Fig. 3 is a block diagram of a convolutional feed forward network CFFN provided by the present invention.
Fig. 4 is a block diagram of a feature processing block FHB provided by the present invention.
Fig. 5 is a block diagram of the feature processing group FHG provided by the present invention.
Fig. 6 is a block diagram of the feature processing layer FHL provided by the present invention.
Detailed Description
The invention aims to provide an ovarian cyst image processing method. A deformed self-attention module TSAM is proposed, which achieves a proper balance between channel and spatial information in self-attention; a convolutional feed-forward network CFFN is proposed, which adds a local depthwise-convolution branch between the two linear layers of an FFN block to help encode more detail with little extra computation compared with a plain FFN, and which compensates for the high-frequency information lost by the TSAM. Together, the TSAM and CFFN form a feature processing block FHB; the FHB has a simple overall structure, can easily be applied to existing window-self-attention super-resolution networks, and effectively builds pairwise correlations within a large window while introducing only a small computational burden.
Referring to fig. 1, in an embodiment of the present application, an ovarian cyst image processing method includes:
S1, constructing the ovarian cyst data set: ultrasound equipment is used on a number of patients with ovarian cysts and ovarian images are exported from the equipment, 20000 images in total; from these, the ovarian images that clearly show a cyst area are selected, 4000 images in total; the cyst areas in the ovarian images are marked manually, all annotation data and ovarian images form the ovarian cyst data set, and it is divided into a training set and a validation set at a ratio of 8:2, i.e. 3200 training images and 800 validation images;
S2, constructing the deformed self-attention module TSAM, which comprises 4 linear layers, 1 element-by-element addition operation, 2 matrix multiplication operations and 2 dimension transformation operations;
S3, constructing the convolutional feed-forward network CFFN, which comprises 2 linear layers, 2 ReLU activation functions, 1 Layer Norm and 1 depthwise convolution operation;
S4, constructing a feature processing block FHB, wherein the single feature processing block FHB comprises a single deformed self-attention module TSAM and a single convolution feedforward network CFFN;
S5, constructing a feature processing group FHG, wherein the feature processing group FHG comprises 4 feature processing blocks FHB and a single convolution;
S6, constructing a feature processing layer FHL, wherein the feature processing layer comprises 3 feature processing groups FHG and a single convolution;
S7, constructing a super-resolution reconstruction module which comprises a single pixel embedded layer, a single feature processing layer FHL and a single super-resolution image reconstruction layer;
S8, constructing the ovarian cyst image processing model, which consists of the input ovarian image, the super-resolution reconstruction module, a ResNet backbone network, a ResNet detection head and the output;
S9, training the ovarian cyst image processing model and processing in real time: the model is trained with the ovarian cyst data set; after training is complete, ovarian images generated in real time by the ultrasound equipment are input into the model to obtain processing results; if an ovarian cyst is found, the coordinates of an ovarian cyst identification frame are output and the frame is drawn on the input ovarian image to obtain an output ovarian image, which is displayed on a display screen device.
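As an illustration of the pipeline of steps S8 and S9, a sketch of the overall model follows; the torchvision ResNet-18 trunk and the single-box regression head are hypothetical stand-ins, since the text names a ResNet backbone and a detection head without further detail.

```python
import torchvision

class OvarianCystModel(nn.Module):
    """Step S8 pipeline sketch: SR front end -> backbone -> detection head.
    The ResNet-18 trunk and one-box (x, y, w, h) head are illustrative."""

    def __init__(self):
        super().__init__()
        trunk = torchvision.models.resnet18(weights=None)
        self.sr = SuperResolutionModule(channels=192, stride=8)
        self.backbone = nn.Sequential(*list(trunk.children())[:-2])  # drop pool/fc
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, 4))  # one identification frame

    def forward(self, img: torch.Tensor) -> torch.Tensor:  # (B, 3, H, W) ultrasound
        return self.head(self.backbone(self.sr(img)))
```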
Further, in step S2, the structure of the deformed self-attention module TSAM is shown in fig. 2. A feature map X_Tin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_Tin and C representing its number of channels, and an attenuation parameter r is set. X_Tin passes through Layer Norm (the normalization layer in fig. 2) to obtain X_TinL, which is then divided into N non-overlapping square windows, X representing all square windows, S being the side length of each square window and N·S² = H×W. Q, K and V are then obtained: Q = L_Q(X), K = L_K(X), V = L_V(X), where L_Q, L_K and L_V are three linear layers; Q keeps the same channel dimension as X, while the channel dimensions of K and V are compressed to C/r². K and V are then dimension-transformed (the dimension transformation in fig. 2), i.e. the channels C/r² of K and V are transformed to C and N·S² is transformed to N·S²/r², giving K_t and V_t. Using Q, K_t and V_t, the output X_Tout ∈ R^(H×W×C) of the deformed self-attention module TSAM is obtained: X_Tout = L_O(Softmax(Q·K_t^T/√d_k)·V_t) + X_Tin, where K_t^T is the transpose of K_t, d_k is the channel dimension of K, L_O is a linear layer, + denotes element-by-element addition, and Softmax is the Softmax function.
Further, in step S3, the structure of the convolutional feed-forward network CFFN is shown in fig. 3. A feature map X_Cin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_Cin and C representing its number of channels. The intermediate feature X_CinM is first calculated: X_CinM = ReLU1(L_1(LN(X_Cin))), X_CinM ∈ R^(H×W×C), where LN denotes Layer Norm (the normalization layer in fig. 3), L_1 is the first linear layer and ReLU1 is the first ReLU activation function. The output of the convolutional feed-forward network CFFN is then calculated: X_Cout = L_2(ReLU2(DWConv(X_CinM)) + X_CinM) + X_Cin, where DWConv is a depthwise convolution (the depth convolution in fig. 3), ReLU2 is the second ReLU activation function, L_2 is the second linear layer, and + denotes element-by-element addition.
Further, in step S4, the structure of the single feature processing block FHB is shown in fig. 4, where the TSAM and the CFFN are connected in series: the input feature map first passes through the deformed self-attention module TSAM and then through the convolutional feed-forward network CFFN to give the output feature map, whose dimensions are identical to those of the input feature map.
Further, in step S5, the structure of the feature processing group FHG is shown in fig. 5. A feature map X_FHGin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_FHGin and C representing its number of channels, and the output X_FHGout ∈ R^(H×W×C) of the feature processing group FHG is obtained: X_FHGout = FHG(X_FHGin), FHG(X_FHGin) = Conv3×3(FHB_n(…FHB_2(FHB_1(X_FHGin))…)) + X_FHGin, where FHG denotes the feature processing group FHG, FHB_n denotes the nth feature processing block FHB, Conv3×3 denotes a 3×3 convolution (the convolution in fig. 5), and + denotes element-by-element addition.
Further, in step S6, the structure of the feature processing layer FHL is shown in fig. 6. A feature map X_FHLin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_FHLin and C representing its number of channels, and the output X_FHLout ∈ R^(H×W×C) of the feature processing layer FHL is obtained: X_FHLout = FHL(X_FHLin), FHL(X_FHLin) = Conv3×3(FHG_m(…FHG_2(FHG_1(X_FHLin))…)) + X_FHLin, where FHL denotes the feature processing layer FHL, FHG_m denotes the mth feature processing group FHG, Conv3×3 denotes a 3×3 convolution (the convolution in fig. 6), and + denotes element-by-element addition.
Further, in step S7, the super-resolution reconstruction module comprises a pixel embedding layer, a feature processing layer FHL and a super-resolution image reconstruction layer; the pixel embedding layer is a 3×3 convolution, and the super-resolution image reconstruction layer consists of a 3×3 convolution and a sub-pixel convolution. For an input image I ∈ R^(H×W×3), H and W being the height and width of the input image I and the number of channels being 3, the pixel embedding layer converts I into the feature embedding F_pe ∈ R^(H1×W1×C1), where H1 and H are unequal, W1 and W are unequal, and C1 is not equal to 3. F_pe is input into the feature processing layer to obtain the features F_h ∈ R^(H1×W1×C1), and F_h is input into the super-resolution image reconstruction layer to obtain the super-resolution reconstructed image I_sr ∈ R^(2H×2W×3), whose height and width are twice those of I.
Further, for the input image I ∈ R^(640×640×3), 640 and 640 being the height and width of the input image I and the number of channels being 3, the pixel embedding layer converts I into the feature embedding F_pe ∈ R^(80×80×192); F_pe is input into the feature processing layer to obtain the features F_h ∈ R^(80×80×192); and F_h is input into the super-resolution image reconstruction layer to obtain the super-resolution reconstructed image I_sr ∈ R^(1280×1280×3).
Further, in step S2, when an image I ∈ R^(640×640×3) is input, the input feature map of the deformed self-attention module TSAM is X_Tin ∈ R^(80×80×192). The attenuation parameter is set to 2. X_Tin passes through Layer Norm to obtain X_TinL, which is then divided into 64 non-overlapping square windows, X representing all square windows, 10 being the side length of each square window. Q, K and V are then obtained: Q ∈ R^(64×100×192), K ∈ R^(64×100×48), V ∈ R^(64×100×48). K and V are then dimension-transformed, i.e. the channel 48 of K and V is transformed to 192 and the 64×100 window tokens are transformed to 64×25, giving K_t ∈ R^(64×25×192) and V_t ∈ R^(64×25×192). Using Q, K_t and V_t, the output X_Tout ∈ R^(80×80×192) of the deformed self-attention module TSAM is obtained.
Further, in step S3, when an image I ∈ R^(640×640×3) is input, the input feature map of the convolutional feed-forward network CFFN is X_Cin ∈ R^(80×80×192). The intermediate feature X_CinM ∈ R^(80×80×192) is first calculated, and the output X_Cout ∈ R^(80×80×192) of the convolutional feed-forward network CFFN is then calculated.
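With the sketches above wired together, the shapes of this embodiment can be checked end to end, under the sketch's assumptions:

```python
# End-to-end shape check for the 640 x 640 embodiment (continuing the
# sketch file above): F_pe and F_h are 80 x 80 x 192, I_sr is 1280 x 1280 x 3.
model = SuperResolutionModule(channels=192, stride=8)
x = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 3, 1280, 1280])
```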
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and improvements could be made by those skilled in the art without departing from the inventive concept, which fall within the scope of the present invention.

Claims (4)

1. An ovarian cyst image processing method is characterized by comprising the following steps:
s1, manufacturing an ovarian cyst data set, acquiring an ovarian image by using ultrasonic equipment, manually marking a cyst area in the ovarian image, forming the ovarian cyst data set by all marking data and the ovarian image, and dividing a training set and a verification set according to the ratio of 8 to 2;
S2, constructing a deformed self-attention module TSAM, wherein the TSAM comprises a plurality of linear layers, element-by-element addition, matrix multiplication and dimension transformation; the deformed self-attention module TSAM inputs a feature map X_Tin ∈ R^(H×W×C), H and W representing the height and width of the feature map X_Tin and C representing the number of channels of the feature map X_Tin, and sets an attenuation parameter r; X_Tin is passed through Layer Norm to obtain X_TinL, and X_TinL is then divided into N non-overlapping square windows, X being used to represent all square windows, S being the side length of each square window and N·S² = H×W; Q, K and V are then obtained, Q = L_Q(X), K = L_K(X), V = L_V(X), L_Q, L_K and L_V being three linear layers; Q maintains the same channel dimension as X, the channel dimensions of K and V are compressed to C/r², and K and V are then dimension-transformed, i.e. the channels C/r² of K and V are transformed to C and N·S² is transformed to N·S²/r², resulting in K_t and V_t; the output X_Tout ∈ R^(H×W×C) of the deformed self-attention module TSAM is obtained using Q, K_t and V_t, X_Tout = L_O(Softmax(Q·K_t^T/√d_k)·V_t) + X_Tin, wherein K_t^T represents the transpose of K_t, d_k is the channel dimension of K, L_O represents a linear layer, + represents element-by-element addition, and Softmax represents the Softmax function;
S3, constructing a convolutional feed-forward network CFFN, which comprises linear layers, ReLU activation functions, a Layer Norm and a depthwise convolution; the convolutional feed-forward network CFFN inputs a feature map X_Cin ∈ R^(H×W×C), H and W representing the height and width of the feature map X_Cin and C representing the number of channels of the feature map X_Cin; the intermediate feature X_CinM is first calculated, X_CinM = ReLU1(L_1(LN(X_Cin))), X_CinM ∈ R^(H×W×C), LN representing Layer Norm, L_1 being the first linear layer and ReLU1 being the first ReLU activation function; the output of the convolutional feed-forward network CFFN is then calculated, X_Cout = L_2(ReLU2(DWConv(X_CinM)) + X_CinM) + X_Cin, X_Cout ∈ R^(H×W×C), DWConv being a depthwise convolution, ReLU2 being the second ReLU activation function, L_2 being the second linear layer, and + representing element-by-element addition;
s4, constructing a characteristic processing block FHB, wherein the characteristic processing block comprises a deformed self-attention module TSAM and a convolution feedforward network CFFN;
S5, constructing a feature processing group FHG, wherein the feature processing group FHG comprises a plurality of feature processing blocks FHB and a single convolution;
s6, constructing a feature processing layer FHL, wherein the feature processing layer comprises a plurality of feature processing groups FHG and a single convolution;
S7, constructing a super-resolution reconstruction module which comprises a pixel embedding layer, a feature processing layer FHL and a super-resolution image reconstruction layer; the pixel embedding layer is a 3×3 convolution, and the super-resolution image reconstruction layer consists of a 3×3 convolution and a sub-pixel convolution; for an input image I, I ∈ R^(H×W×3), H and W are the height and width of the input image I and the number of channels of the input image I is 3; the pixel embedding layer converts I into the feature embedding F_pe, F_pe ∈ R^(H1×W1×C1), wherein H1 and H are unequal, W1 and W are unequal, and C1 is not equal to 3; F_pe is input into the feature processing layer to obtain the features F_h, F_h ∈ R^(H1×W1×C1), and F_h is input into the super-resolution image reconstruction layer to obtain the super-resolution reconstructed image I_sr, I_sr ∈ R^(2H×2W×3), the height and width of I_sr being twice those of I;
s8, constructing an ovarian cyst image processing model, wherein the ovarian cyst image processing model consists of an input and super-resolution reconstruction module, a backbone network, a detection head and an output;
S9, training an ovarian cyst image processing model and processing in real time, training the ovarian cyst image processing model by using an ovarian cyst data set, and inputting an ovarian image generated by ultrasonic equipment in real time into the ovarian cyst image processing model after training is completed to obtain a processing result.
2. The method according to claim 1, wherein in step S4, for the single feature processing block FHB, the input feature map is first subjected to the deformed self-attention module TSAM, and then is subjected to the convolution feedforward network CFFN, and an output feature map is obtained, where the dimensions of the output feature map and the input feature map are identical.
3. The method of claim 1, wherein in step S5, for the feature processing group FHG, a feature map X_FHGin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_FHGin and C representing the number of channels of the feature map X_FHGin, and the output X_FHGout ∈ R^(H×W×C) of the feature processing group FHG is obtained, X_FHGout = FHG(X_FHGin), FHG(X_FHGin) = Conv3×3(FHB_n(…FHB_2(FHB_1(X_FHGin))…)) + X_FHGin, FHG representing the feature processing group FHG, FHB_n representing the nth feature processing block FHB, Conv3×3 representing a 3×3 convolution, and + representing element-by-element addition.
4. The method according to claim 1, wherein in step S6, for the feature processing layer FHL, a feature map X_FHLin ∈ R^(H×W×C) is input, H and W representing the height and width of the feature map X_FHLin and C representing the number of channels of the feature map X_FHLin, and the output X_FHLout ∈ R^(H×W×C) of the feature processing layer FHL is obtained, X_FHLout = FHL(X_FHLin), FHL(X_FHLin) = Conv3×3(FHG_m(…FHG_2(FHG_1(X_FHLin))…)) + X_FHLin, FHL representing the feature processing layer FHL, FHG_m representing the mth feature processing group FHG, Conv3×3 representing a 3×3 convolution, and + representing element-by-element addition.
CN202410444353.4A 2024-04-15 2024-04-15 Ovarian cyst image processing method Active CN118052716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410444353.4A CN118052716B (en) 2024-04-15 2024-04-15 Ovarian cyst image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410444353.4A CN118052716B (en) 2024-04-15 2024-04-15 Ovarian cyst image processing method

Publications (2)

Publication Number Publication Date
CN118052716A CN118052716A (en) 2024-05-17
CN118052716B true CN118052716B (en) 2024-06-18

Family

ID=91046837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410444353.4A Active CN118052716B (en) 2024-04-15 2024-04-15 Ovarian cyst image processing method

Country Status (1)

Country Link
CN (1) CN118052716B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118521497A (en) * 2024-07-22 2024-08-20 山东黄海智能装备有限公司 Fluorescence labeling cell imaging image enhancement processing method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10426442B1 (en) * 2019-06-14 2019-10-01 Cycle Clarity, LLC Adaptive image processing in assisted reproductive imaging modalities
US10646156B1 (en) * 2019-06-14 2020-05-12 Cycle Clarity, LLC Adaptive image processing in assisted reproductive imaging modalities
CN111369433B (en) * 2019-11-12 2024-02-13 天津大学 Three-dimensional image super-resolution reconstruction method based on separable convolution and attention
CN113449131B (en) * 2021-06-29 2022-06-03 山东建筑大学 Object image re-identification method based on multi-feature information capture and correlation analysis
CN116485646A (en) * 2023-04-14 2023-07-25 武汉大学 Micro-attention-based light-weight image super-resolution reconstruction method and device
CN117635428A (en) * 2023-12-04 2024-03-01 重庆理工大学 Super-resolution reconstruction method for lung CT image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image super-resolution reconstruction with a hierarchical feature fusion attention network; Lei Pengcheng, Liu Cong, Tang Jiangang, Peng Dunlu; Journal of Image and Graphics; 2020-09-16 (09); full text *
Magnetic field prediction method based on residual U-Net and self-attention Transformer encoder; Jin Liang; Transactions of China Electrotechnical Society; 2023-11-28; full text *

Also Published As

Publication number Publication date
CN118052716A (en) 2024-05-17

Similar Documents

Publication Publication Date Title
CN118052716B (en) Ovarian cyst image processing method
Chen et al. Residual attention u-net for automated multi-class segmentation of covid-19 chest ct images
CN110599528B (en) Unsupervised three-dimensional medical image registration method and system based on neural network
CN112132023B (en) Crowd counting method based on multi-scale context enhancement network
CN113744275B (en) Feature transformation-based three-dimensional CBCT tooth image segmentation method
US20070292049A1 (en) Method of combining images of multiple resolutions to produce an enhanced active appearance model
CN112163447B (en) Multi-task real-time gesture detection and recognition method based on Attention and Squeezenet
CN112580515A (en) Lightweight face key point detection method based on Gaussian heat map regression
CN111881743B (en) Facial feature point positioning method based on semantic segmentation
WO2020233368A1 (en) Expression recognition model training method and apparatus, and device and storage medium
CN111127316A (en) Single face image super-resolution method and system based on SNGAN network
CN110246171B (en) Real-time monocular video depth estimation method
CN114863179B (en) Endoscope image classification method based on multi-scale feature embedding and cross attention
CN113762147A (en) Facial expression migration method and device, electronic equipment and storage medium
CN115439329B (en) Face image super-resolution reconstruction method and computer-readable storage medium
CN110895815A (en) Chest X-ray pneumothorax segmentation method based on deep learning
CN114266735A (en) Method for detecting pathological change abnormality of chest X-ray image
CN116797541A (en) Transformer-based lung CT image super-resolution reconstruction method
CN111626296A (en) Medical image segmentation system, method and terminal based on deep neural network
WO2020194378A1 (en) Image processing system, image processing device, image processing method, and computer-readable medium
CN113689546A (en) Cross-modal three-dimensional reconstruction method for ultrasonic or CT image of two-view twin transducer
CN116309507A (en) AIS focus prediction method for performing feature fusion on CTP under attention mechanism
CN116486182A (en) Medical image classification method and system based on DenseNet network
CN113298827B (en) Image segmentation method based on DP-Net network
CN114267069A (en) Human face detection method based on data generalization and feature enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240703

Address after: No. 2, Dawei 10th Lane, Xidong Village, Xichang Town, Jiedong County, Jieyang City, Guangdong Province, 522000

Patentee after: Lin Danqin

Country or region after: China

Address before: 276800 55 / F, tower a, Tiande sea view city, No. 386, Haiqu East Road, Donggang District, Rizhao City, Shandong Province

Patentee before: Shandong Huanghai Intelligent Equipment Co.,Ltd.

Country or region before: China
