CN115937526B - Method for segmenting gonad region of bivalve shellfish based on search identification network - Google Patents

Method for segmenting gonad region of bivalve shellfish based on search identification network

Info

Publication number
CN115937526B
Authority
CN
China
Prior art keywords
module
feature
attention
representing
gonad
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310224105.4A
Other languages
Chinese (zh)
Other versions
CN115937526A (en)
Inventor
岳峻
陈艺菲
王卫军
付晴晴
李振波
寇光杰
贾世祥
杨建敏
戴昌怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ludong University
Original Assignee
Ludong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ludong University filed Critical Ludong University
Priority to CN202310224105.4A priority Critical patent/CN115937526B/en
Publication of CN115937526A publication Critical patent/CN115937526A/en
Application granted granted Critical
Publication of CN115937526B publication Critical patent/CN115937526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81: Aquaculture, e.g. of fish

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for segmenting the gonad region of bivalve shellfish based on a search identification network, which comprises the following steps: acquiring a bivalve shellfish data set; constructing a multi-effect feature fusion search recognition model, wherein the search recognition model includes a search model and an identification model; inputting the images in the data set into the search model to perform gonad region target positioning and obtain images containing the target object; and inputting the images containing the target object into the identification model for identification and segmentation to obtain a complete gonad segmentation image, thereby completing the gonad region segmentation of the bivalve shellfish. Compared with the traditional method of observing gonad development by destroying the shell and extracting the internal tissue, the method performs segmentation of the gray-scale images on the basis of non-destructive nuclear magnetic detection, laying a foundation for combining the method with existing technologies in aquaculture applications.

Description

Method for segmenting gonad region of bivalve shellfish based on search identification network
Technical Field
The invention belongs to the technical field of bivalve gonad segmentation, and particularly relates to a bivalve gonad region segmentation method based on a search recognition network.
Background
In recent years, the aquaculture industry in China has developed rapidly. The gonad index and fullness of bivalve shellfish are important indicators of gonad maturation; in the artificial breeding stage, selecting female and male bivalve individuals with well-developed gonads as parents enables stable inheritance of excellent traits and improves the quantity and quality of breeding. The traditional method can only observe gonad development by manually destroying the internal tissue, and the damage to living shellfish is irreversible. Alternatively, for bivalve shellfish with well-developed gonads, the gonad size can only be judged by observing the shell size; however, the gonads of large bivalve shellfish are not necessarily full, so this estimate has a large deviation.
In the breeding process of bivalve shellfish, computer vision technology can be used to segment the gonad of the shellfish parents to be bred; after the gonad size is determined, living shellfish whose gonad size meets the breeding standard are selected for breeding, which can improve breeding yield and quality. Because the image produced by small-animal magnetic resonance imaging is a gray-scale image, and the gonad region differs little from other tissues in appearance and gray level, the image is difficult for non-professional staff to interpret; the segmentation difficulty is higher than that of salient objects, problems such as mis-segmentation of the gonad region easily occur, and the segmentation accuracy is low. In addition, most segmentation techniques do not handle the definition of segmentation boundaries well, and blurred segmentation boundaries easily occur.
Therefore, in order to solve the above-mentioned problems, it is highly desirable to propose a gonad region segmentation method based on a search recognition network, so as to segment the gonad portion of bivalve shellfish to determine whether it meets the conditions of parent breeding.
Magnetic resonance images (Magnetic Resonance Imaging, MRI) of bivalve shellfish are obtained through an advanced small-animal magnetic resonance imaging system; this technique images the target animal and obtains biological information such as tissue structure and function in various states, while reducing damage to the living bivalve shellfish. Deep learning and related techniques from the field of image vision are then combined, and a search identification network is proposed to train on and segment the gonad region in the MRI images of the bivalve shellfish, obtaining a successfully segmented gonad region.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method for segmenting the gonad region of bivalve shellfish based on a search recognition network. Compared with the traditional method of observing gonad development by destroying the internal tissue of the shell, the method performs segmentation of the gray-scale images on the basis of non-destructive nuclear magnetic detection, laying a foundation for combining the method with existing technologies in aquaculture applications.
In order to achieve the above object, the present invention provides a method for dividing gonad region of bivalve shellfish based on a search recognition network, comprising:
acquiring a bivalve shellfish data set;
constructing a multi-effect feature fusion search recognition model; the search recognition model includes: searching a model and identifying the model;
inputting the images in the data set into the search model to perform gonad region target positioning, and obtaining images containing target objects;
inputting the image containing the target object into the identification model for identification segmentation, obtaining a complete gonad segmentation image, and completing the gonad region segmentation of the bivalve shellfish.
Optionally, the search model includes: the device comprises a feature extraction module, a texture enhancement module, a feature fusion module and a compact pyramid refinement module;
acquiring the image containing the target object comprises:
inputting the images in the dataset into the feature extraction module for feature extraction to obtain preset-resolution feature maps; the preset-resolution feature maps cover resolution and semantic information at different scales;
inputting the preset resolution feature map into the texture enhancement module for target feature and boundary information enhancement, and obtaining a candidate feature map;
inputting the candidate feature map into the feature fusion module to fuse adjacent features and obtain a feature map after multi-effect feature fusion;
inputting the feature map after the multi-effect feature fusion into the compact pyramid refinement module for separation convolution processing, and obtaining the refined image containing the target object.
Optionally, the texture enhancement module includes: parallel residual branches and normal 1 x 1 convolution branches; the parallel residual branches include: a first parallel residual branch, a second parallel residual branch, a third parallel residual branch, and a fourth parallel residual branch; the first parallel residual branch, the second parallel residual branch, the third parallel residual branch and the fourth parallel residual branch are four parallel residual branches with different expansion rates;
the step of obtaining the candidate feature map comprises the following steps:
the preset resolution characteristic diagram is sequentially input into the first parallel residual branch, the second parallel residual branch, the third parallel residual branch and the fourth parallel residual branch to perform channel reduction processing, operation decomposition processing, expansion processing of a preset size and convolution calculation, and a preset channel number is obtained;
and inputting the preset channel number into the common 1 multiplied by 1 convolution branch, adding a ReLU function, and obtaining the candidate feature map.
Optionally, the fusion of adjacent features in the feature fusion module includes: selecting the candidate feature maps of the 3 highest-level features to fuse adjacent features;
the candidate feature maps of the 3 highest-level features are subjected to the neighbor connection function operation as follows:
$f_5^{nc} = B_u\!\left(f_5'\right),\quad f_4^{nc} = B_u\!\left(f_4' \otimes \mathrm{UP}_2\!\left(f_5^{nc}\right)\right),\quad f_3^{nc} = B_u\!\left(f_3' \otimes \mathrm{UP}_2\!\left(f_4^{nc}\right)\right)$
wherein $B_u(\cdot)$ represents a 3 × 3 convolution operation normalized by batch processing, the subscript $u$ denoting the batch normalization operation, $\mathrm{UP}_2(\cdot)$ represents a two-times up-sampling operation, $f_5^{nc}$ represents the obtained 5th multi-effect feature fusion feature map, $f_5'$ represents the 5th candidate feature map, $f_4^{nc}$ represents the obtained 4th multi-effect feature fusion feature map, $f_4'$ represents the 4th candidate feature map, $f_3^{nc}$ represents the obtained 3rd multi-effect feature fusion feature map, $f_3'$ represents the 3rd candidate feature map, and $\otimes$ represents the multiplication of corresponding elements one by one.
Optionally, the compact pyramid refinement module includes: depth convolution and point convolution; the depth convolution comprises a plurality of parallel depth convolutions with different expansion rates;
inputting the feature image after the multi-effect feature fusion into the compact pyramid refinement module for separation convolution processing comprises the following steps:
firstly, inputting the fused feature maps into the depth convolutions, summing the outputs of the plurality of parallel depth convolutions, and carrying out a batch normalization operation; then, based on the point convolution, compressing the channels of the normalized image to the same number as the input; and finally obtaining the refined image containing the target object.
Optionally, the identification model includes: a group inversion attention module and a switchable self-attention module;
the obtaining of the complete gonadal segmentation image comprises:
inputting the image containing the target object into the group inversion attention module for inversion and group embedded processing to obtain a combined feature map;
and inputting the combined feature map into the switchable self-attention module to extract attention features, and obtaining the complete gonad segmentation image.
Optionally, the group inversion attention module includes: a reverse guiding sub-module and a grouping guiding sub-module;
the step of obtaining the combined feature map comprises the following steps:
inputting the thinned image containing the target object into the inversion guide submodule to perform image inversion to obtain an inversion chart;
inputting the thinned image containing the target object into the grouping guide submodule, grouping according to a dimension channel, and obtaining grouping characteristics; and respectively inserting the reverse graphs into the grouping features to obtain the combined feature graph.
Optionally, the image inversion performed by the reverse guiding sub-module is expressed as:
$r = \lnot\,\sigma\!\left(\mathcal{D}_{\downarrow 4}\!\left(\mathcal{U}_{\uparrow 2}(S)\right)\right)$
where ¬ denotes the inverse operation, performed with respect to the matrix E, σ denotes the sigmoid function, $\mathcal{D}_{\downarrow 4}$ and $\mathcal{U}_{\uparrow 2}$ represent 4-times down-sampling and 2-times up-sampling respectively, $S$ denotes the coarse target map, and $r$ denotes the reverse attention guiding operation of the output.
Optionally, the switchable self-attention module comprises: a decision sub-module and a switching sub-module;
inputting the combined feature map into the switchable self-attention module for attention feature extraction comprises:
inputting the combined feature map into the decision sub-module, which adaptively generates different decision weights according to different inputs; the importance of the different operators is obtained using the information aggregated by the decision sub-module; a switching sub-module is then added, and different weights are given to the fully connected neural network, the convolutional neural network and instance enhancement as switchable attention operators, obtaining a final attention feature image, thereby completing the attention feature extraction.
Optionally, the final attention feature image is expressed as:
$v = \sigma\!\left(w_{fc} \odot A_{fc} + w_{cnn} \odot A_{cnn} + w_{ie} \odot A_{ie}\right)$
wherein σ represents the sigmoid function, $v$ represents the final attention feature map, $A_{fc}$ represents the fully connected neural network attention map, $w_{fc}$ represents the fully connected neural network operator weight, $A_{cnn}$ represents the convolutional neural network attention map, $w_{cnn}$ represents the convolutional neural network operator weight, $A_{ie}$ represents the instance enhancement attention map, $w_{ie}$ represents the instance enhancement operator weight, and $\odot$ represents the dot multiplication operation.
Compared with the prior art, the invention has the following advantages and technical effects:
the invention provides living body detection for bivalve shellfish by using nuclear magnetic resonance of professional small animals, and effectively segments the gonad region by using the provided searching and identifying network, thereby improving the accuracy of detection segmentation on non-obvious objects. The gonads in the pictures can be directly obtained through identification and segmentation, so that the size of the gonad part is calculated, and mature gonads are screened for parent breeding. Compared with the traditional method for observing gonad development by destroying the shell and extracting the internal tissue, the method provided by the invention has the advantages that the gray level image is segmented to a certain extent on the basis of the nondestructive nuclear magnetism detection, so that the foundation is laid for the application of aquiculture by fusion in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a flow chart of a method for segmenting gonad regions of bivalve shellfish according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a dataset annotation according to an embodiment of the present invention;
FIG. 3 is a graph showing the results of the split gonads according to the embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The invention provides a bivalve gonad region segmentation method based on a search identification network, which comprises the following steps:
acquiring a bivalve shellfish data set;
constructing a multi-effect feature fusion search recognition model; the search recognition model includes: searching a model and identifying the model;
inputting the images in the data set into the search model to perform gonad region target positioning, and obtaining images containing target objects;
inputting the image containing the target object into the identification model for identification segmentation, obtaining a complete gonad segmentation image, and completing the gonad region segmentation of the bivalve shellfish.
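The two-stage flow described above can be summarized in the following PyTorch-style sketch. The class names, constructor arguments and forward signatures are illustrative placeholders rather than part of the method itself; the internals of the two stages are detailed in the sections that follow.

```python
import torch
import torch.nn as nn

class SearchIdentifyNet(nn.Module):
    """Two-stage pipeline: the search model locates the gonad region,
    the identification model refines it into the final segmentation."""
    def __init__(self, search_model: nn.Module, identify_model: nn.Module):
        super().__init__()
        self.search = search_model
        self.identify = identify_model

    def forward(self, mri_image: torch.Tensor) -> torch.Tensor:
        target_map = self.search(mri_image)               # image containing the target object
        segmentation = self.identify(mri_image, target_map)  # complete gonad segmentation image
        return segmentation
```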
Further, the search model includes: the device comprises a feature extraction module, a texture enhancement module, a feature fusion module and a compact pyramid refinement module;
acquiring the image containing the target object comprises:
inputting the images in the data set into the feature extraction module to perform feature extraction, obtaining preset-resolution feature maps $f_i$, $i \in \{1,2,3,4,5\}$; the preset-resolution feature maps cover resolution and semantic information at different scales;
inputting the preset resolution feature map into the texture enhancement module for target feature and boundary information enhancement, and obtaining a candidate feature map;
inputting the candidate feature map into the feature fusion module to fuse adjacent features and obtain a feature map after multi-effect feature fusion;
inputting the feature map after the multi-effect feature fusion into the compact pyramid refinement module for separation convolution processing, and obtaining the refined image containing the target object.
Further, the texture enhancement module includes: parallel residual branches and normal 1 x 1 convolution branches; the parallel residual branches include: a first parallel residual branch, a second parallel residual branch, a third parallel residual branch, and a fourth parallel residual branch; the first parallel residual branch, the second parallel residual branch, the third parallel residual branch and the fourth parallel residual branch are four parallel residual branches with different expansion rates;
the step of obtaining the candidate feature map comprises the following steps:
the preset resolution characteristic diagram is sequentially input into the first parallel residual branch, the second parallel residual branch, the third parallel residual branch and the fourth parallel residual branch to perform channel reduction processing, operation decomposition processing, expansion processing of a preset size and convolution calculation, and a preset channel number is obtained;
and inputting the preset channel number into the common 1 multiplied by 1 convolution branch, adding a ReLU function, and obtaining the candidate feature map.
Further, the fusion of adjacent features performed by the feature fusion module includes: selecting the candidate feature maps of the 3 highest-level features to fuse adjacent features;
the candidate feature maps of the 3 highest-level features are subjected to the neighbor connection function operation as follows:
$f_5^{nc} = B_u\!\left(f_5'\right),\quad f_4^{nc} = B_u\!\left(f_4' \otimes \mathrm{UP}_2\!\left(f_5^{nc}\right)\right),\quad f_3^{nc} = B_u\!\left(f_3' \otimes \mathrm{UP}_2\!\left(f_4^{nc}\right)\right)$
wherein $B_u(\cdot)$ represents a 3 × 3 convolution operation normalized by batch processing, the subscript $u$ denoting the batch normalization operation, $\mathrm{UP}_2(\cdot)$ represents a two-times up-sampling operation, $f_5^{nc}$ represents the obtained 5th multi-effect feature fusion feature map, $f_5'$ represents the 5th candidate feature map, $f_4^{nc}$ represents the obtained 4th multi-effect feature fusion feature map, $f_4'$ represents the 4th candidate feature map, $f_3^{nc}$ represents the obtained 3rd multi-effect feature fusion feature map, $f_3'$ represents the 3rd candidate feature map, and $\otimes$ represents the multiplication of corresponding elements one by one.
Further, the compact pyramid refinement module includes: depth convolution and point convolution; the depth convolution comprises a plurality of parallel depth convolutions with different expansion rates;
inputting the feature image after the multi-effect feature fusion into the compact pyramid refinement module for separation convolution processing comprises the following steps:
firstly, inputting the fused feature maps into the depth convolutions, summing the outputs of the plurality of parallel depth convolutions, and carrying out a batch normalization operation; then, based on the point convolution, compressing the channels of the normalized image to the same number as the input; and finally obtaining the refined image containing the target object.
Further, the identification model includes: a group inversion attention module and a switchable self-attention module;
the obtaining of the complete gonadal segmentation image comprises:
inputting the image containing the target object into the group inversion attention module for inversion and group embedded processing to obtain a combined feature map;
and inputting the combined feature map into the switchable self-attention module to extract attention features, and obtaining the complete gonad segmentation image.
Further, the group inversion attention module includes: a reverse guiding sub-module and a grouping guiding sub-module;
the step of obtaining the combined feature map comprises the following steps:
inputting the thinned image containing the target object into the inversion guide submodule to perform image inversion to obtain an inversion chart;
inputting the thinned image containing the target object into the grouping guide submodule, grouping according to a dimension channel, and obtaining grouping characteristics; and respectively inserting the reverse graphs into the grouping features to obtain the combined feature graph.
Further, the image inversion performed by the reverse guiding sub-module is expressed as:
$r = \lnot\,\sigma\!\left(\mathcal{D}_{\downarrow 4}\!\left(\mathcal{U}_{\uparrow 2}(S)\right)\right)$
where ¬ denotes the inverse operation, performed with respect to the matrix E, σ denotes the sigmoid function, $\mathcal{D}_{\downarrow 4}$ and $\mathcal{U}_{\uparrow 2}$ represent 4-times down-sampling and 2-times up-sampling respectively, $S$ denotes the coarse target map, and $r$ denotes the reverse attention guiding operation of the output.
Further, the switchable self-attention module comprises: a decision sub-module and a switching sub-module;
inputting the combined feature map into the switchable self-attention module for attention feature extraction comprises:
inputting the combined feature map into the decision sub-module, which adaptively generates different decision weights according to different inputs; the importance of the different operators is obtained using the information aggregated by the decision sub-module; a switching sub-module is then added, and different weights are given to the fully connected neural network, the convolutional neural network and instance enhancement as switchable attention operators, finally obtaining the attention-switched segmentation feature image, thereby completing the attention feature extraction.
The decision sub-module generates the decision weights used to select the self-attention operators. It takes the global information embedding $m$ as input, so that the decision sub-module can adaptively generate different decision weights according to different inputs, and the decision modules at different network layers are independent of each other. After the importance of the different operators has been obtained from the information aggregated by the decision sub-module, a switching sub-module is added; different weights are given to the fully connected neural network (fc), the convolutional neural network (cnn) and instance enhancement (ie) as switchable attention operators, and the attention-switched segmentation feature images are finally obtained.
Further, the final attention feature image is expressed as:
$m = \mathrm{GAP}(x)$
$w = \sigma\!\left(F(m)\right)$
$v = \sigma\!\left(w_{fc} \odot A_{fc} + w_{cnn} \odot A_{cnn} + w_{ie} \odot A_{ie}\right)$
wherein $\mathrm{GAP}(\cdot)$ represents global average pooling, $x$ represents the feature map of the current network layer, $m$ represents the global information embedding, $F(\cdot)$ represents the decision function, $w$ represents the decision vector, σ represents the sigmoid function, $v$ represents the final attention feature map, $A_{fc}$ represents the fully connected neural network attention map, $w_{fc}$ represents the fully connected neural network operator weight, $A_{cnn}$ represents the convolutional neural network attention map, $w_{cnn}$ represents the convolutional neural network operator weight, $A_{ie}$ represents the instance enhancement attention map, $w_{ie}$ represents the instance enhancement operator weight, and $\odot$ represents the dot multiplication operation.
Examples
In this embodiment, magnetic resonance images (Magnetic Resonance Imaging, MRI) of the bivalve shellfish are obtained through an advanced small-animal magnetic resonance imaging system; this technique images the target animal and obtains biological information such as tissue structure and function in various states, thereby reducing damage to the living bivalve shellfish. Deep learning and related techniques from the field of image vision are then combined, and a search identification network is proposed to train on and segment the gonad region in the MRI images of the bivalve shellfish, obtaining a successfully segmented gonad region.
As shown in fig. 1, the present embodiment takes the Pacific oyster, a bivalve shellfish, as the research object, and provides a method for segmenting the gonad region of bivalve shellfish based on a search recognition network, specifically comprising the following steps:
A 7.0 T high-field-strength small-animal magnetic resonance imaging system is adopted, and a non-invasive nuclear magnetic technique is used to perform professional magnetic resonance imaging (MRI) on the Pacific oyster; a Labelme annotation tool is then used to manually segment and annotate 2000 pictures, a Pacific oyster gonad picture data set is established, and the data set is divided into a training set and a test set.
Firstly, feature extraction is performed: a Res2Net network is used to extract features $f_i$, $i \in \{1,2,3,4,5\}$, from the input image $X$. The 5 features obtained have resolutions that are successively halved from one level to the next, so the feature pyramid obtained in this way covers resolution and semantic information at different scales.
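As an illustration only, the following PyTorch sketch shows one way such a five-level feature pyramid can be extracted. torchvision's ResNet-50 is used here purely as a stand-in for the Res2Net backbone named above, and the input size 352 × 352 is an assumed example value.

```python
import torch
import torchvision.models as models

backbone = models.resnet50(weights=None)  # stand-in for the Res2Net feature extractor

def extract_pyramid(x: torch.Tensor):
    """Return five feature maps f1..f5 with successively halved resolution."""
    f1 = backbone.relu(backbone.bn1(backbone.conv1(x)))  # 1/2 resolution
    f2 = backbone.layer1(backbone.maxpool(f1))           # 1/4
    f3 = backbone.layer2(f2)                             # 1/8
    f4 = backbone.layer3(f3)                             # 1/16
    f5 = backbone.layer4(f4)                             # 1/32
    return [f1, f2, f3, f4, f5]

feats = extract_pyramid(torch.randn(1, 3, 352, 352))
print([f.shape for f in feats])
```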
Secondly, the target features and boundary information are strengthened by the texture enhancement module. The module comprises 4 parallel residual branches with different expansion rates and 1 normal branch. The feature map $f_i$ is first put into a 1 × 1 convolution in the first layer so that the number of channels is reduced to 32. It then enters a (2i−1) × (2i−1) convolution layer and a 3 × 3 convolution layer, wherein the standard convolution operation of size (2i−1) × (2i−1) can be decomposed into two successive operations of size 1 × (2i−1) and (2i−1) × 1, which improves inference efficiency without reducing representational capability. In the 3 × 3 convolution layer, this embodiment sets an expansion rate of a specific size. The outputs of the branches are concatenated and passed through a 1 × 1 convolution so that the number of channels is 32; finally the 5th, normal branch is added, and the whole is input into a ReLU function to obtain the candidate feature $f_i'$.
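A minimal PyTorch sketch of such a texture enhancement module is given below. The expansion (dilation) rates 1, 3, 5 and 7 for the four parallel branches are an assumption, since the text only specifies "an expansion rate of a specific size"; the class and argument names are illustrative.

```python
import torch
import torch.nn as nn

class ConvBN(nn.Module):
    """Convolution followed by batch normalization, with 'same' padding."""
    def __init__(self, cin, cout, k, d=1):
        super().__init__()
        if isinstance(k, int):
            k = (k, k)
        pad = ((k[0] - 1) // 2 * d, (k[1] - 1) // 2 * d)
        self.conv = nn.Conv2d(cin, cout, k, padding=pad, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(cout)

    def forward(self, x):
        return self.bn(self.conv(x))

class TextureEnhancement(nn.Module):
    """Four parallel residual branches (decomposed (2i-1)x(2i-1) convolutions
    with increasing expansion rate) plus a normal 1x1 shortcut branch."""
    def __init__(self, cin, cout=32):
        super().__init__()
        self.branches = nn.ModuleList()
        for i in range(1, 5):                       # i = 1..4
            k = 2 * i - 1
            layers = [ConvBN(cin, cout, 1)]         # channel reduction to 32
            if k > 1:                               # decomposed k x k convolution
                layers += [ConvBN(cout, cout, (1, k)), ConvBN(cout, cout, (k, 1))]
            layers += [ConvBN(cout, cout, 3, d=k)]  # 3x3 conv, assumed expansion rate 2i-1
            self.branches.append(nn.Sequential(*layers))
        self.fuse = ConvBN(4 * cout, cout, 1)       # concatenation -> 1x1 conv back to 32 channels
        self.shortcut = ConvBN(cin, cout, 1)        # the normal 1x1 branch
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return self.relu(y + self.shortcut(x))      # add the 5th branch, then ReLU
```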
Then, adjacent features are fused, so that semantic information within the same layer is kept consistent and cross-layer semantic consistency is guaranteed; surrounding features are fused, and local information is obtained through expansion, so as to locate the specific position of the oyster gonad. Following the proposed method of aggregating adjacent features with a neighbor connection decoder, their position information is obtained so as to produce a coarse segmentation object. Since low-level features with larger resolution require more computational resources but contribute less to the improvement in performance, this embodiment fuses only the 3 highest-level candidate feature maps $f_3'$, $f_4'$ and $f_5'$, obtaining the multi-effect feature fusion feature maps $f_k^{nc}$, $k \in \{3,4,5\}$. Each feature map is specifically expressed as follows:
$f_5^{nc} = B_u\!\left(f_5'\right),\quad f_4^{nc} = B_u\!\left(f_4' \otimes \mathrm{UP}_2\!\left(f_5^{nc}\right)\right),\quad f_3^{nc} = B_u\!\left(f_3' \otimes \mathrm{UP}_2\!\left(f_4^{nc}\right)\right)$ (1)
wherein $B_u(\cdot)$ represents a 3 × 3 convolution operation normalized by batch processing, $\mathrm{UP}_2(\cdot)$ represents up-sampling the feature to be fused by a factor of two with the aim of ensuring a shape match between the features, and $\otimes$ multiplies corresponding elements one by one to reduce the gap between adjacent features. This approach is an improvement over the UNet decoder (with the bottom two high-resolution layers removed); in contrast to conventional feature aggregation approaches, it does not merge features by broadcasting all densely connected layers or skip-connected partial decoders, but uses a neighbor decoder to connect only adjacent layers.
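The following sketch illustrates one plausible reading of the neighbor connection operation in formula (1). Since the original formula is only available as an image placeholder, the exact composition shown here (upsample, element-wise multiply, then a 3 × 3 convolution with batch normalization) is an assumption consistent with the symbols listed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BU(nn.Module):
    """3x3 convolution followed by batch normalization (the B_u operation)."""
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, c, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c)

    def forward(self, x):
        return self.bn(self.conv(x))

class NeighborConnection(nn.Module):
    """Fuses only the three highest-level candidate features f3', f4', f5'."""
    def __init__(self, c=32):
        super().__init__()
        self.b5, self.b4, self.b3 = BU(c), BU(c), BU(c)

    @staticmethod
    def up2(x, ref):
        # two-times up-sampling so that shapes match between adjacent levels
        return F.interpolate(x, size=ref.shape[-2:], mode="bilinear", align_corners=False)

    def forward(self, f3, f4, f5):
        f5_nc = self.b5(f5)
        f4_nc = self.b4(f4 * self.up2(f5_nc, f4))   # element-wise multiplication
        f3_nc = self.b3(f3 * self.up2(f4_nc, f3))
        return f3_nc, f4_nc, f5_nc
```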
The compact pyramid refinement stage follows; the compact pyramid refinement module employs depth-wise separable convolution. This convolution consists of a 3 × 3 depth convolution and a 1 × 1 point convolution, and its computational cost is about 8 to 9 times lower than that of a standard convolution. On this basis, this embodiment uses three depth-wise separable convolutions in parallel, with expansion rates of 1, 2 and 3 respectively. The three parallel convolutions are summed and batch-normalized, and the channels are then compressed to the same number as the input using a 1 × 1 convolution. Finally, a residual connection is used for better optimization. In this way, the lightweight decoder of this embodiment aggregates multi-level features from top to bottom.
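A minimal sketch of such a compact pyramid refinement block, assuming the expansion rates 1, 2 and 3 given above and a channel-preserving point convolution, might look as follows (names are illustrative):

```python
import torch
import torch.nn as nn

class CompactPyramidRefinement(nn.Module):
    """Three parallel depth-wise 3x3 convolutions with expansion rates 1, 2, 3,
    summed and batch-normalized, then compressed by a 1x1 point convolution,
    with a residual connection around the whole block."""
    def __init__(self, c):
        super().__init__()
        self.dw = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=d, dilation=d, groups=c, bias=False)
            for d in (1, 2, 3)
        )
        self.bn = nn.BatchNorm2d(c)
        self.pw = nn.Conv2d(c, c, 1, bias=False)   # point convolution

    def forward(self, x):
        y = self.bn(sum(branch(x) for branch in self.dw))
        return x + self.pw(y)                      # residual connection
```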
The above describes the search network stage; the identification network part is described in detail below. A rough target object is obtained by performing a coarse target positioning operation during the range search. However, when using the features, this embodiment only operates on the first three stages with higher feature levels, and thus omits details such as structure and fine granularity carried by the high-resolution images in the low-level features. In this part, this embodiment emphasizes the mutual fusion of features between layers while ensuring efficiency, and further refines the features by using group reversal attention so that the network focuses on information from the remaining regions. A switchable self-attention module is used to automatically select and integrate attention operators to compute attention maps, realizing the combination of different excitation operators at different network layers.
Since the rough target image obtained in the search stage cannot completely display important information such as boundary contours, which causes incomplete segmentation of the gonad, group inversion attention is proposed: the obtained rough target image and the candidate features obtained at the current stage are compared in groups so as to perform a more accurate segmentation task. The group inversion attention module consists of two steps: the first step performs an inversion operation on the rough target image; the second step performs a grouping embedding operation on the candidate features and the inverted features.
Inversion guidance: the rough target image obtained in the searching stage cannot well acquire boundary information, attention to space and structure is not strong enough, and part of gonad parts are not recognized except a known range, so that image inversion is firstly performed in the module, and a specific formula is as follows:
Figure SMS_60
(2)
wherein ¬ represents an inverse operation, which is to perform an inverse operation on the matrix E; sigma represents a sigmoid function, converting a mask to [0,1 ]]A section;
Figure SMS_61
and->
Figure SMS_62
Representing 4 downsampling and 2 upsampling, respectively.
Grouping guidance operation: this embodiment performs an erase operation on the estimated target region through the reverse attention mechanism, thereby mining details in the complementary region. The candidate feature map $f_i'$ is grouped along the channel dimension, where $g$ represents the size of the processed feature groups. The reverse map $r$ obtained in the previous operation is then inserted at regular intervals into the processed group features, which can finally be expressed as the combined feature map. Finally, following a residual learning process, this embodiment combines a plurality of group reverse attention modules and uses them to perform iterative refinement operations.
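The following sketch shows one way the reverse guidance and grouping embedding could be realized. The resizing schedule, the number of groups and the exact insertion pattern are assumptions, since the translated text leaves them unspecified.

```python
import torch
import torch.nn.functional as F

def group_inversion_attention(candidate: torch.Tensor, coarse: torch.Tensor, groups: int = 4):
    """Reverse guidance followed by group embedding.
    `candidate` is a candidate feature map (B, C, H, W); `coarse` is the coarse
    target map from the search stage (B, 1, h, w)."""
    # Reverse guidance: r = E - sigmoid(resized coarse map)
    coarse = F.interpolate(coarse, size=candidate.shape[-2:],
                           mode="bilinear", align_corners=False)
    reverse = 1.0 - torch.sigmoid(coarse)

    # Group embedding: split the candidate feature along the channel dimension
    # and insert the reverse map in front of every group before re-concatenating.
    chunks = torch.chunk(candidate, groups, dim=1)
    combined = torch.cat([torch.cat([reverse, c], dim=1) for c in chunks], dim=1)
    return combined   # combined feature map, shape (B, C + groups, H, W)
```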
The switchable self-attention module mainly improves the excitation part of the attention module; its structure can be divided into two sub-modules, a decision module and a switching module. The feature map of the current network layer is defined as $x$, and the attention value of $x$ is calculated as follows. First, a global average pooling, denoted GAP(·), is applied to extract global information from the features in the squeeze module, as shown in the equation:
$m = \mathrm{GAP}(x)$ (3)
wherein $m$ is the global information embedding; $m$ is used as the input of both the decision module and the switchable module. To use the information aggregated in the squeeze operation to determine the importance of the different operations, this embodiment follows it with F(·), which aims to fully capture decision information from the channel dependencies. To achieve this goal, the function must meet two criteria: first, it must be flexible (in particular, it must be able to learn nonlinear interactions between channels); second, it must learn a non-exclusive relationship, because this embodiment intends to ensure that multiple excitation operators are allowed to be emphasized rather than only one activation. To meet these criteria, this embodiment chooses a simple gating mechanism with sigmoid activation:
$w = \sigma\!\left(F(m)\right)$ (4)
where σ is the sigmoid function and F(·) is the decision function. This embodiment defines F(·) with a fully connected network; the resulting decision vector $w$ represents the importance of the corresponding excitation operators.
In this embodiment, w is used to adjust the proportion of each operator in EO (the set of excitation operators), and the results of the operators are combined by dot multiplication to obtain the final attention feature map v, with the formula:
$v = \sigma\!\left(w_{fc} \odot A_{fc} + w_{cnn} \odot A_{cnn} + w_{ie} \odot A_{ie}\right)$ (5)
where σ is the sigmoid function, $A_{fc}$, $A_{cnn}$ and $A_{ie}$ are the attention maps of the fully connected, convolutional and instance enhancement operators, and $w_{fc}$, $w_{cnn}$ and $w_{ie}$ are the corresponding operator weights.
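As a rough illustration of equations (3) to (5), the sketch below implements a decision branch and three switchable excitation operators in PyTorch. The internal form of each operator (a two-layer fully connected bottleneck, a 1-D convolution and an instance-normalization-based enhancement) is an assumption for illustration and is not specified by the text.

```python
import torch
import torch.nn as nn

class SwitchableSelfAttention(nn.Module):
    """Decision sub-module (GAP + fully connected gating) weighting three
    switchable attention operators: fully connected (fc), convolutional (cnn)
    and instance enhancement (ie)."""
    def __init__(self, c, reduction=4):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)                         # m = GAP(x)
        self.decision = nn.Sequential(                             # w = sigmoid(F(m))
            nn.Linear(c, c // reduction), nn.ReLU(inplace=True),
            nn.Linear(c // reduction, 3), nn.Sigmoid(),
        )
        # Excitation operators producing per-channel attention maps A_fc, A_cnn, A_ie.
        self.fc = nn.Sequential(nn.Linear(c, c // reduction), nn.ReLU(inplace=True),
                                nn.Linear(c // reduction, c))
        self.cnn = nn.Conv1d(1, 1, kernel_size=3, padding=1)
        self.ie = nn.InstanceNorm1d(1, affine=True)

    def forward(self, x):
        b, c, _, _ = x.shape
        m = self.gap(x).flatten(1)                    # (B, C) global information embedding
        w = self.decision(m)                          # (B, 3) operator weights
        a_fc = self.fc(m)                             # (B, C)
        a_cnn = self.cnn(m.unsqueeze(1)).squeeze(1)   # (B, C)
        a_ie = self.ie(m.unsqueeze(1)).squeeze(1)     # (B, C)
        # v = sigmoid(w_fc * A_fc + w_cnn * A_cnn + w_ie * A_ie)
        v = torch.sigmoid(w[:, 0:1] * a_fc + w[:, 1:2] * a_cnn + w[:, 2:3] * a_ie)
        return x * v.view(b, c, 1, 1)                 # re-weight the input feature map
```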
According to the embodiment, the segmentation effect of the traditional segmentation network facing the complex background is improved through searching the identification network, the segmentation accuracy is improved, and the boundary area in the image is strengthened through the texture enhancement module, so that a clear bivalve gonad segmentation area is obtained. And finally, calculating the size of the separated gonads, and screening bivalve shellfish meeting the breeding requirements for artificial breeding.
The beneficial effects brought by the embodiment are as follows:
Firstly, the invention studies a method for segmenting the gonad region of bivalve shellfish through a search identification network model, proposes living-body detection of bivalve shellfish using professional small-animal nuclear magnetic resonance, and effectively segments the gonad region using the proposed search identification network, thereby improving the accuracy of detection and segmentation of non-salient objects. The gonads in the images can be obtained directly through identification and segmentation, so that the size of the gonad part is calculated and mature gonads are screened for parent breeding. Compared with the traditional method of observing gonad development by destroying the shell and extracting the internal tissue, the method performs segmentation of the gray-scale images on the basis of non-destructive nuclear magnetic detection, laying a foundation for combining the method with existing technologies in aquaculture applications. The model trained by the algorithm can be deployed on a picture set for which nuclear magnetic resonance detection has been completed, performing real-time segmentation and judging whether the gonad size meets the requirement for parent breeding. The gonad segmentation annotation of the Pacific oyster data set is shown in fig. 2; during annotation the gonad part must be accurately circled manually, which improves the accuracy of the training process. Fig. 3 shows the segmentation results of the oyster gonad; it can be seen that the results segmented by the algorithm proposed in this embodiment are good and close to the ground-truth (manual annotation) map.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily conceivable by those skilled in the art within the technical scope of the present application should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. The method for segmenting the gonad region of the bivalve shellfish based on the search identification network is characterized by comprising the following steps of:
acquiring a bivalve shellfish data set based on a nuclear magnetic resonance imaging method;
constructing a multi-effect feature fusion search recognition model; the search recognition model includes: searching a model and identifying the model;
inputting the images in the data set into the search model to perform gonad region target positioning, and obtaining images containing target objects;
inputting the image containing the target object into the identification model for identification segmentation, obtaining a complete gonad segmentation image, and completing the gonad region segmentation of the bivalve shellfish;
the search model includes: the device comprises a feature extraction module, a texture enhancement module, a feature fusion module and a compact pyramid refinement module;
acquiring the image containing the target object comprises:
inputting the images in the data set into the feature extraction module to perform feature extraction, obtaining feature maps with preset resolution $f_i$, $i \in \{1,2,3,4,5\}$; the preset-resolution feature maps comprise resolution and semantic information at different scales;
inputting the preset resolution feature map into the texture enhancement module for target feature and boundary information enhancement, and obtaining a candidate feature map;
inputting the candidate feature map into the feature fusion module to fuse adjacent features and obtain a feature map after multi-effect feature fusion;
inputting the feature map after the multi-effect feature fusion into the compact pyramid refinement module for separation convolution treatment to obtain the refined image containing the target object;
the identification model comprises: a group inversion attention module and a switchable self-attention module;
the obtaining of the complete gonadal segmentation image comprises:
inputting the image containing the target object into the group inversion attention module for inversion and group embedded processing to obtain a combined feature map;
and inputting the combined feature map into the switchable self-attention module to extract attention features, and obtaining the complete gonad segmentation image.
2. The method for segmenting gonad regions of bivalve shellfish based on search identification network according to claim 1, wherein the texture enhancement module comprises: parallel residual branches and normal 1 x 1 convolution branches; the parallel residual branches include: a first parallel residual branch, a second parallel residual branch, a third parallel residual branch, and a fourth parallel residual branch; the first parallel residual branch, the second parallel residual branch, the third parallel residual branch and the fourth parallel residual branch are four parallel residual branches with different expansion rates;
the step of obtaining the candidate feature map comprises the following steps:
the preset-resolution feature map is respectively input into the first parallel residual branch, the second parallel residual branch, the third parallel residual branch and the fourth parallel residual branch to perform channel reduction, operation decomposition, expansion of a preset size and convolution calculation, obtaining feature maps with a preset channel number;
and inputting the feature maps with the preset channel number into the normal 1 × 1 convolution branch, adding a ReLU function, and obtaining the candidate feature map.
3. The method for segmenting the gonad region of bivalve shellfish based on a search identification network according to claim 1, wherein the fusion of adjacent features performed by the feature fusion module comprises: selecting the candidate feature maps of the 3 highest-level features to fuse adjacent features;
the candidate feature maps of the 3 highest-level features are subjected to the neighbor connection function operation as follows:
$f_5^{nc} = B_u\!\left(f_5'\right),\quad f_4^{nc} = B_u\!\left(f_4' \otimes \mathrm{UP}_2\!\left(f_5^{nc}\right)\right),\quad f_3^{nc} = B_u\!\left(f_3' \otimes \mathrm{UP}_2\!\left(f_4^{nc}\right)\right)$
wherein $B_u(\cdot)$ represents a 3 × 3 convolution operation normalized by batch processing, the subscript $u$ denoting the batch normalization operation, $\mathrm{UP}_2(\cdot)$ represents a two-times up-sampling operation, $f_5^{nc}$ represents the obtained 5th multi-effect feature fusion feature map, $f_5'$ represents the 5th candidate feature map, $f_4^{nc}$ represents the obtained 4th multi-effect feature fusion feature map, $f_4'$ represents the 4th candidate feature map, $f_3^{nc}$ represents the obtained 3rd multi-effect feature fusion feature map, $f_3'$ represents the 3rd candidate feature map, and $\otimes$ represents the multiplication of corresponding elements one by one.
4. The search identification network-based bivalve gonad region segmentation method according to claim 1, wherein the compact pyramid refinement module comprises: depth convolution and point convolution; the depth convolution comprises a plurality of parallel depth convolutions with different expansion rates;
inputting the feature image after the multi-effect feature fusion into the compact pyramid refinement module for separation convolution processing comprises the following steps:
firstly, inputting the fused feature maps into the depth convolutions, summing the outputs of the plurality of parallel depth convolutions, and carrying out a batch normalization operation; then, based on the point convolution, compressing the channels of the normalized image to the same number as the input; and finally obtaining the refined image containing the target object.
5. The search identification network-based bivalve gonad region segmentation method of claim 1, wherein the group inversion attention module comprises: a reverse guiding sub-module and a grouping guiding sub-module;
the step of obtaining the combined feature map comprises the following steps:
inputting the thinned image containing the target object into the inversion guide submodule to perform image inversion to obtain an inversion chart;
inputting the thinned image containing the target object into the grouping guide submodule, grouping according to a dimension channel, and obtaining grouping characteristics; and respectively inserting the reverse graphs into the grouping features to obtain the combined feature graph.
6. The method for segmenting gonad regions of bivalve shellfish based on a search identification network according to claim 5, wherein the image inversion performed by the reverse guiding sub-module is expressed as:
$r = \lnot\,\sigma\!\left(\mathcal{D}_{\downarrow 4}\!\left(\mathcal{U}_{\uparrow 2}(S)\right)\right)$
wherein ¬ represents the inverse operation, performed with respect to the matrix E; σ represents the sigmoid function; $\mathcal{D}_{\downarrow 4}$ and $\mathcal{U}_{\uparrow 2}$ represent 4-times down-sampling and 2-times up-sampling, respectively; $S$ represents the coarse target map; and $r$ represents the reverse attention guiding operation of the output.
7. The search identification network-based bivalve gonad region segmentation method of claim 1, wherein the switchable self-attention module comprises: a decision sub-module and a switching sub-module;
inputting the combined feature map into the switchable self-attention module for attention feature extraction comprises:
inputting the combined feature map into the decision sub-module, which adaptively generates different decision weights according to different inputs; the importance of the different operators is obtained using the information aggregated by the decision sub-module; a switching sub-module is then added, and different weights are given to the fully connected neural network, the convolutional neural network and instance enhancement as switchable attention operators, obtaining a final attention feature image, thereby completing the attention feature extraction.
8. The search identification network-based bivalve gonad region segmentation method according to claim 7, wherein the final attention feature image is expressed as:
$v = \sigma\!\left(w_{fc} \odot A_{fc} + w_{cnn} \odot A_{cnn} + w_{ie} \odot A_{ie}\right)$
wherein σ represents the sigmoid function, $v$ represents the final attention feature map, $A_{fc}$ represents the fully connected neural network attention map, $w_{fc}$ represents the fully connected neural network operator weight, $A_{cnn}$ represents the convolutional neural network attention map, $w_{cnn}$ represents the convolutional neural network operator weight, $A_{ie}$ represents the instance enhancement attention map, $w_{ie}$ represents the instance enhancement operator weight, and $\odot$ represents the dot multiplication operation.
CN202310224105.4A 2023-03-10 2023-03-10 Method for segmenting gonad region of bivalve shellfish based on search identification network Active CN115937526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310224105.4A CN115937526B (en) 2023-03-10 2023-03-10 Method for segmenting gonad region of bivalve shellfish based on search identification network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310224105.4A CN115937526B (en) 2023-03-10 2023-03-10 Method for segmenting gonad region of bivalve shellfish based on search identification network

Publications (2)

Publication Number Publication Date
CN115937526A CN115937526A (en) 2023-04-07
CN115937526B true CN115937526B (en) 2023-06-09

Family

ID=85837003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310224105.4A Active CN115937526B (en) 2023-03-10 2023-03-10 Method for segmenting gonad region of bivalve shellfish based on search identification network

Country Status (1)

Country Link
CN (1) CN115937526B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907530A (en) * 2021-02-08 2021-06-04 南开大学 Method and system for detecting disguised object based on grouped reverse attention
CN113643268A (en) * 2021-08-23 2021-11-12 四川大学 Industrial product defect quality inspection method and device based on deep learning and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516972B (en) * 2021-01-12 2024-02-13 腾讯科技(深圳)有限公司 Speech recognition method, device, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907530A (en) * 2021-02-08 2021-06-04 南开大学 Method and system for detecting disguised object based on grouped reverse attention
CN113643268A (en) * 2021-08-23 2021-11-12 四川大学 Industrial product defect quality inspection method and device based on deep learning and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a speech interaction system model based on semantic search; 刘幺和; 李巧云; 计算机应用 (Computer Applications) (07); full text *

Also Published As

Publication number Publication date
CN115937526A (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Islam et al. Underwater image super-resolution using deep residual multipliers
Fu et al. Rethinking general underwater object detection: Datasets, challenges, and solutions
CN116309648A (en) Medical image segmentation model construction method based on multi-attention fusion
CN111597946B (en) Processing method of image generator, image generation method and device
CN112488976B (en) Multi-modal medical image fusion method based on DARTS network
Liu et al. A quantitative detection algorithm based on improved faster R-CNN for marine benthos
CN109920538B (en) Zero sample learning method based on data enhancement
CN112233017B (en) Method for enhancing pathological face data based on generation countermeasure network
Chen et al. Skin lesion segmentation using recurrent attentional convolutional networks
Song et al. Contextualized CNN for scene-aware depth estimation from single RGB image
CN115311194A (en) Automatic CT liver image segmentation method based on transformer and SE block
CN115880720A (en) Non-labeling scene self-adaptive human body posture and shape estimation method based on confidence degree sharing
CN117746045B (en) Method and system for segmenting medical image by fusion of transducer and convolution
Chicchon et al. Semantic segmentation of fish and underwater environments using deep convolutional neural networks and learned active contours
CN118430790A (en) Mammary tumor BI-RADS grading method based on multi-modal-diagram neural network
Chen et al. TSEUnet: A 3D neural network with fused Transformer and SE-Attention for brain tumor segmentation
Abdel-Nabi et al. A novel ensemble strategy with enhanced cross attention encoder-decoder framework for tumor segmentation in whole slide images
Iqbal et al. LDMRes-Net: Enabling real-time disease monitoring through efficient image segmentation
CN115937526B (en) Method for segmenting gonad region of bivalve shellfish based on search identification network
Zhang et al. MFFSSD: an enhanced SSD for underwater object detection
Variyar et al. Learning and Adaptation from Minimum Samples with Heterogeneous Quality: An investigation of image segmentation networks on natural dataset
Samudrala et al. Semantic Segmentation in Medical Image Based on Hybrid Dlinknet and Unet
Khan et al. Face recognition via multi-level 3D-GAN colorization
Gao et al. Covariance self-attention dual path Unet for rectal tumor segmentation
CN111582067B (en) Facial expression recognition method, system, storage medium, computer program and terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant