CN113096079B - Image analysis system and construction method thereof - Google Patents


Info

Publication number: CN113096079B (application CN202110338180.4A)
Authority: CN (China)
Prior art keywords: image, block, training, network, unit
Legal status: Active (the status is an assumption by Google, not a legal conclusion)
Application number: CN202110338180.4A
Other languages: Chinese (zh)
Other versions: CN113096079A
Inventor: 廖欣 (Liao Xin)
Current and original assignee: West China Second University Hospital of Sichuan University
Application filed by West China Second University Hospital of Sichuan University
Priority to CN202110338180.4A
Publication of CN113096079A
Application granted
Publication of CN113096079B


Classifications

    • G06T 7/00: Image analysis
    • G06F 16/50: Information retrieval of still image data; G06F 16/55: Clustering; classification
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2415: Classification techniques based on parametric or probabilistic models
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural network learning methods
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02T 10/40: Engine management systems (climate-change mitigation cross-reference tag)

Abstract

The invention provides an image analysis system and a construction method thereof. The system comprises an image database construction unit, a convolutional neural network unit and an analysis unit. The image database construction unit comprises an image data acquisition unit, an image data labeling unit and an image database construction unit; the convolutional neural network unit comprises a convolutional neural network model construction unit and a convolutional neural network model training unit; and the analysis unit analyzes the specific image structure in an image to be analyzed using the trained image anomaly detection model. The image analysis system has a simple structure, can rapidly identify images and output analysis results, and improves judgment accuracy, working efficiency and sustained working capacity.

Description

Image analysis system and construction method thereof
Technical Field
The invention relates to the field of image analysis, in particular to an image analysis system and a construction method thereof.
Background
Currently, with the research and progress of artificial intelligence technology, AI is being applied in more and more fields. It is a comprehensive discipline that spans a wide range of areas and covers both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning (including deep learning).
Machine learning studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continually improve its own performance. It is the core of artificial intelligence and the fundamental way of giving computers intelligence, and it is applied throughout all areas of AI. At present, various forms of machine learning models have profoundly changed many fields of artificial intelligence, and their application to intelligent image analysis and recognition is especially widespread.
Disclosure of Invention
In order to overcome the above-mentioned drawbacks of the prior art, an object of the present invention is to provide an image analysis system and a construction method thereof.
In order to achieve the above object of the present invention, the present invention provides an image analysis system including an image database construction unit, a convolutional neural network unit, and an analysis unit;
the image database construction unit comprises an image data acquisition unit for acquiring input image data, an image data labeling unit for labeling different image structures in each input image data, and an image database construction unit for classifying and sorting the labeled image data provided by the image data labeling unit;
the convolutional neural network unit comprises a convolutional neural network model construction unit and a convolutional neural network model training unit, wherein the convolutional neural network model construction unit is used for constructing an image anomaly detection model; the convolutional neural network model training unit trains the image anomaly detection model, whose inputs comprise a training image set, loss-function weights, a feature extraction network and a classification network, and whose outputs comprise the feature set S of the training set and the trained feature extraction network together with its weights f_θ;
And the analysis unit is used for analyzing the specific image structure in the image to be analyzed by using the trained image anomaly detection model.
The image analysis system provided by the invention has a simple structure, can rapidly identify images and output analysis results, and improves judgment accuracy, working efficiency and sustained working capacity.
The preferable scheme of the image analysis system is that the convolutional neural network construction unit comprises a feature extraction network and a classification network;
the feature extraction network comprises a low layer formed by M convolution modules BLOCK-A, a high layer formed by N residual convolution modules BLOCK-B and P residual convolution modules BLOCK-C, a convolution layer connected with the high layer, and a tanh() activation function;
the convolution module BLOCK-A consists of a convolution layer and a LeakyReLU() activation function;
the residual convolution module BLOCK-B consists of superimposed 1x1 and 3x3 convolution kernels and a layer jump;
the residual convolution module BLOCK-C consists of superimposed 1x1, 1x3 and 3x1 convolution kernels and a layer jump;
the classification network is made up of fully connected layers and a LeakyReLU () activation function.
In the feature extraction network provided by the invention, the same layer contains convolution kernels of several different scales, so sparse and non-sparse features can be learned simultaneously, and the layer jumps (skip connections) ensure that the network considers deep and shallow network features at the same time. These two characteristics of the network structure design increase the feature expression capability of the network.
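As an illustration only, the residual modules BLOCK-B and BLOCK-C described above might be sketched in PyTorch as follows; the channel count `ch`, the LeakyReLU slope and the placement of the activations are assumptions, since the patent does not fix them:

```python
import torch
import torch.nn as nn

class BlockB(nn.Module):
    """Sketch of residual module BLOCK-B: superimposed 1x1 and 3x3
    convolutions plus a layer jump (skip connection)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        )
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.body(x) + x)  # the "layer jump"

class BlockC(nn.Module):
    """Sketch of residual module BLOCK-C: superimposed 1x1, 1x3 and 3x1
    convolutions plus a layer jump."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=1),
            nn.Conv2d(ch, ch, kernel_size=(1, 3), padding=(0, 1)),
            nn.Conv2d(ch, ch, kernel_size=(3, 1), padding=(1, 0)),
        )
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.body(x) + x)
```

Because input and output channel counts match and the padding preserves spatial size, the skip addition is always shape-compatible.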
The preferable scheme of the image analysis system is that the convolutional neural network construction unit comprises a feature extraction network and a classification network;
the characteristic extraction network comprises a low layer formed by K convolution modules BLOCK-D, a high layer formed by Q self-supervision convolution modules BLOCK-F, a convolution layer connected with the high layer and a tanh () activation function;
the convolution module BLOCK-D consists of a convolution layer and a LeakyReLU () activation function;
the self-supervision convolution module BLOCK-F comprises a plurality of mutually superimposed 1x1 and 3x3 convolution kernels and a mean pooling layer;
the classification network is made up of fully connected layers and a LeakyReLU () activation function.
The feature extraction network provided by the invention enables the same layer to have convolution kernels with various different scales, so that sparse and non-sparse features can be learned at the same time, and the feature expression capability of the network is improved.
The preferable scheme of the image analysis system is that the convolutional neural network construction unit comprises a feature extraction network and a classification network;
the characteristic extraction network comprises a plurality of convolution layers, and a BLOCK-G module is introduced into the middle position of the convolution layers;
the BLOCK-G module comprises a plurality of superimposed 1x1, 3x3, 5x5 convolution kernels and a maximum pooling layer;
the classification network is made up of fully connected layers and a LeakyReLU () activation function.
The characteristic extraction network of the invention enables the same layer to have convolution kernels with different scales, and increases the characteristic expression capability of the network.
The application also provides a construction method of the image analysis system, the image analysis system is constructed, an original image is obtained by an image database construction unit, different image structures in the original image are marked, marked image data are classified and sorted, and a training set, a checking set and a testing set are divided; training the image anomaly detection model by adopting a training set to obtain an ideal image anomaly detection model, and checking the ideal image anomaly detection model by adopting a checking set to detect the accuracy rate of the ideal image anomaly detection model; testing an ideal image anomaly detection model by adopting a test set, and detecting the robustness of the model;
If the difference between the accuracy of the image anomaly detection model on the test set and its accuracy on the check set during training exceeds a preset value, the model is over-fitted; the model is returned to the convolutional neural network training unit, and the network structure or parameters are adjusted for retraining until an image anomaly detection model is obtained whose test-set accuracy differs from its check-set accuracy by no more than the preset value. At this point the robustness of the image anomaly detection model is high.
In the system construction method, the check set and the test set are introduced so that under-fitting and over-fitting of the anomaly detection model's ideal weights can be avoided, guaranteeing that the trained ideal weights are robust.
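As a trivial illustration, the robustness criterion above could be expressed as a predicate; the default gap of 0.05 is an assumed placeholder, since the patent only speaks of an unspecified "preset value":

```python
def passes_robustness_check(check_accuracy, test_accuracy, max_gap=0.05):
    """Return True when the accuracy gap between the check (validation)
    set and the test set stays within the preset value `max_gap`;
    a larger gap indicates over-fitting and triggers retraining."""
    return abs(check_accuracy - test_accuracy) <= max_gap
```

For example, a model with 93% check-set accuracy and 91% test-set accuracy passes, while one dropping from 95% to 80% on the test set would be sent back for retraining.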
The method for constructing the image analysis system is characterized in that the image anomaly detection model is trained on the training set as follows:
B1: train the feature extraction network of the image anomaly detection model using the images in the training set;
B2: use the trained feature extraction network to obtain and store the feature set S corresponding to the training set, S ← S ∪ {f_θ(p)}; that is, for each image in the training set, randomly extract an image block p the same size as the receptive field of the feature extraction network, pass it through the trained feature extraction network to obtain its feature vector f_θ(p), and collect all such vectors into the feature vector set S;
B3: save the trained feature extraction network weights f_θ and the feature vector set S corresponding to the training set.
After training of the anomaly detection model is complete, the invention obtains its ideal weights and, from these weights and the training-set images, acquires the image feature set of the specified image structure. This enables anomaly detection on subsequent test images, rapid identification of images and output of analysis results, and improved judgment accuracy.
The method for constructing the image analysis system preferably comprises the following steps:
b11: for each image in the training set, randomly selecting an image block p in eight adjacent areas of a 3X3 grid, wherein the scale of the p is the same as the receiving field of the feature extraction network, then randomly dithering the center of the image block p to obtain an image block p1, calculating the cross entropy of the image block p and the p1 as a subitem Loss function loss_1,wherein the image block p 1 The true relative position with respect to p is y {0,1, …,7}, y i Referring to the number of image blocks in the training set for category i at 8 relative positions y {0,1, …,7 }; classifier C φ Trained to correctly predict image block p 1 Relative to image block p, i.e. y=c φ (f θ (p),f θ (p 1 )),a i The confidence coefficient of the category i calculated by the classifier is calculated, and N is the total number of samples in the training set;
For image block p, randomly select, in the four-neighborhood of its 5×5 grid, an image block p2 that is in the same row or column as p but not adjacent to it; the scale of p2 equals the receptive field of the feature extraction network. Compute the cross-entropy of p and p2 as the sub-loss Loss_2 = -(1/N) Σ_n Σ_i y_i·log(b_i), where the true relative position of p2 with respect to p is y ∈ {0, 1, 2, 3}, y_i refers to the image blocks of category i among the 4 relative positions in the training set, and b_i is the confidence for category i computed by the classifier. The classifier C_φ is trained to correctly predict the relative position of p2 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p2));
For image block p, take 2-4 of the image blocks p3, p4, p5 and p6 from its four adjacent intersection areas, compute the L2-norm distance between p and each selected block, and take the mean as the sub-loss Loss_3 = (1/K) Σ_i ||f_θ(p) - f_θ(p_{2+i})||_2, where ||f_θ(p) - f_θ(p_{2+i})||_2 is the L2-norm distance between the feature vectors of image block p and a selected block among p3, p4, p5 and p6, and K is the number of selected blocks;
b12: calculating a Loss function loss=λ of the network model 1 *Loss_1+λ 2 *Loss_2+Loss_3,λ 1 、λ 2 For the weight value in the loss function, which is larger than 0, and back propagation is carried out by using an Adam optimizer, so as to realize network weight iteration and optimization of the feature extraction network model;
B13: repeat B11-B12 for the specified number of rounds, then select and save the optimal weights of the feature extraction network and the classification network according to the loss function of each round of training.

The method for constructing the image analysis system preferably comprises the following steps:
Step 1: for each image in the training set, select any image block p and randomly select an image block p7 in the eight-neighborhood of its 3×3 grid. Compute the cross-entropy of p and p7 as the sub-loss Loss_4 = -(1/N) Σ_n Σ_i y_i·log(c_i), where the true relative position of p7 with respect to p is y ∈ {0, 1, …, 7}, y_i refers to the image blocks of category i among the 8 relative positions in the training set, c_i is the probability of category i computed by the classifier, and N is the total number of samples in the training set. The classifier C_φ is trained to correctly predict the relative position of p7 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p7));
For image block p, randomly take, in the four-neighborhood of its 5×5 grid, an image block p8 that is in the same row or column as p but not adjacent to it, and compute the cross-entropy of p and p8 as the sub-loss Loss_5 = -(1/N) Σ_n Σ_i y_i·log(d_i), where the true relative position of p8 with respect to p is y ∈ {0, 1, 2, 3}, y_i refers to the image blocks of category i among the 4 relative positions in the training set, and d_i is the probability of category i computed by the classifier. The classifier C_φ is trained to correctly predict the relative position of p8 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p8));
Step 2: compute the loss function of the network model, Loss = λ·Loss_4 + Loss_5, where the weight λ in the loss function is greater than 0, and back-propagate with the Adam optimizer to iterate the network weights and optimize the feature extraction network model;
Step 3: repeat steps 1-2 for the specified number of rounds, then select and save the optimal weights of the feature extraction network and the classification network according to the loss function of each round of training.
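A single Adam back-propagation iteration of the kind used in step 2 (and likewise in B12) might look like this in PyTorch; the stand-in networks, channel sizes and learning rate are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# Minimal stand-ins for f_theta and C_phi; shapes are assumptions.
feature_net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.LeakyReLU(0.1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())   # patch -> 8-d feature
classifier = nn.Linear(16, 8)                # feature pair -> 8 positions

optimizer = torch.optim.Adam(
    list(feature_net.parameters()) + list(classifier.parameters()), lr=1e-3)

def train_step(p, p_nbr, y, lam=1.0):
    """One iteration: weighted cross-entropy on the predicted relative
    position of the neighbor block, then an Adam back-propagation step."""
    feats = torch.cat([feature_net(p), feature_net(p_nbr)], dim=1)
    loss = lam * nn.functional.cross_entropy(classifier(feats), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Repeating this for the specified number of rounds, while tracking each round's loss, reproduces the outer loop of step 3.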
This training procedure addresses the problems that the number of feature centers in a complex image is uncertain and that assigning corresponding image blocks to different feature centers would require an enormous amount of work. Image blocks selected at random from a training image have a large intra-class variance: some correspond to background, some contain the target, and some contain both. Mapping the features of all these different blocks to a single center, i.e. performing unimodal clustering, would weaken the link between features and content. To avoid this, the scheme neither explicitly defines centers nor partitions the blocks among them; instead, semantically similar image blocks are obtained by sampling spatially adjacent blocks, and the feature extraction network is trained to automatically gather blocks with similar feature semantics. When the trained feature extraction network solves this pretext task well, the network is considered able to extract effective features.
The system construction method can select the optimal training result from the results of the specified number of training rounds of the anomaly detection model according to the loss function, obtaining the ideal weights of the anomaly detection model. Owing to the design of the deep anomaly detection network structure and the self-supervised learning technique introduced in constructing the loss function, the method can complete model training on a small sample data set and thus accomplish the analysis of the target image structure.
The beneficial effects of the invention are as follows: the invention has high accuracy, short processing time and long sustainable working duration, and has a wide application range. It can be applied in fields such as medical care and traffic safety; in the medical field in particular, it helps alleviate the uneven distribution of medical resources, enables high-quality remote medical care, and provides more convenient and accurate pathological diagnosis services for patients.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of an image analysis system;
FIG. 2 is a schematic diagram of a first convolutional neural network construction element network architecture;
FIG. 3 is a schematic diagram of a convolution module BLOCK-A network architecture;
FIG. 4 is a schematic diagram of a convolution module BLOCK-B network architecture;
FIG. 5 is a schematic diagram of a convolution module BLOCK-C network architecture;
FIG. 6 is a schematic diagram of a second convolutional neural network building block network architecture;
FIG. 7 is a schematic diagram of a convolution module BLOCK-D network architecture;
FIG. 8 is a schematic diagram of a convolution module BLOCK-F network architecture;
FIG. 9 is a schematic diagram of a third convolutional neural network construction element network architecture;
FIG. 10 is a schematic diagram of a convolution module BLOCK-G network architecture;
FIG. 11 is a schematic diagram of an eight neighborhood of a 3×3 grid of image blocks p;
FIG. 12 is a schematic diagram of the four neighborhoods of a 5×5 grid of image blocks p;
FIG. 13 is a schematic diagram of the four adjacent intersection areas of the image block p.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, unless otherwise specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may, for example, be mechanical or electrical, and two elements may communicate directly or indirectly through intermediaries. Those skilled in the art will understand the specific meaning of these terms according to context.
As shown in fig. 1, the present invention provides an image analysis system, which includes an image database construction unit, a convolutional neural network unit, and an analysis unit.
The image database construction unit comprises an image data acquisition unit, an image data labeling unit and an image database construction unit. The image data acquisition unit is used for acquiring input image data, the image data labeling unit is used for labeling different image structures in each input image data, the image database construction unit is used for classifying and sorting the labeled image data provided by the image data labeling unit, and dividing a training set, a checking set and a testing set to construct an image database.
The convolutional neural network unit comprises a convolutional neural network model construction unit and a convolutional neural network model training unit. The convolutional neural network model construction unit is used for constructing an image anomaly detection model; the convolutional neural network model training unit trains the image anomaly detection model to obtain an ideal image anomaly detection model, whose inputs comprise a training image set, loss-function weight parameters, a feature extraction network and a classification network, and whose outputs comprise the feature set S of the training set and the trained feature extraction network together with its weights f_θ.
And the analysis unit is used for analyzing the specific image structure in the image to be analyzed by using the trained image anomaly detection model.
Furthermore, an input terminal is used to feed existing images into the image data acquisition unit; these input data are finally classified and collected by the image database construction unit to support subsequent image analysis. An output terminal presents the analysis results (the specific image structure and its corresponding area ratio in the image) produced by the robust, ideal feature extraction network model obtained from the convolutional neural network model training unit to the doctor as a clinical diagnosis reference, so as to improve staff accuracy, working efficiency and sustained working capacity.
In this embodiment, the convolutional neural network construction unit includes a feature extraction network and a classification network, and the feature extraction network extracts feature information of the image block so that the subsequent classification network can correctly predict the relative position of the image block. In this embodiment, the classification network is composed of a fully connected layer and a LeakyReLU () activation function, and once training is completed, the classification network is discarded.
Feature extraction network this embodiment provides three models:
first, as shown in fig. 2, the feature extraction network is formed by using A modular splicing ideA, so that the width and depth of the network can be amplified as required, the lower layer is formed by M convolution modules BLOCK-A, the upper layer introduces N residual convolution modules BLOCK-B and P residual convolution modules BLOCK-C, and one convolution layer and tanh () activation function connected with the upper layer follows. M is configurable, the value range is an integer between 3 and 6, the default value is 4, N is configurable, the value range is an integer between 1 and 3, and the default value is 2; p is configurable, the value range is a positive integer between 1 and 3, and the default value is 2.
Wherein the convolution module BLOCK-A is composed of A convolution layer and A LeakyReLU () activation function, as shown in fig. 3; the residual convolution module BLOCK-B is formed by overlapping convolution kernels of 1x1 and 3x3 and a layer jump, as shown in FIG. 4; the residual convolution module BLOCK-C is constructed by superimposing convolution kernels of 1x1, 1x3, 3x1 and a layer jump, as shown in fig. 5. Because the same layer has convolution kernels with various scales, sparse and non-sparse features can be learned at the same time, and layer jump (shortcuts) ensures that the network can consider deep and shallow network features at the same time. The two characteristics in the network structure design increase the characteristic expression capability of the network.
Second, as shown in fig. 6, the feature extraction network is formed by using a modular splicing idea, so that the width and depth of the network can be amplified as required, the lower layer of the feature extraction network is formed by K convolution modules BLOCK-D, the upper layer is introduced with Q self-supervision convolution modules BLOCK-F, and then a convolution layer and a tanh () activation function are connected with the upper layer; k is configurable, the value range is an integer between 4 and 6, the default value is 5, Q is configurable, the value range is an integer between 1 and 3, and the default value is 1.
Wherein the convolution module BLOCK-D is composed of a convolution layer and a LeakyReLU () activation function, as shown in fig. 7; the self-supervision convolution module BLOCK-F enables the same layer to have convolution kernels with various different scales through a plurality of 1x1 and 3x3 convolution kernels and an average pooling layer which are overlapped with each other, sparse and non-sparse features can be learned at the same time, and the feature expression capability of a network is improved, as shown in figure 8.
Third, as shown in fig. 9, the feature extraction network includes a plurality of convolution layers, and a BLOCK-G module is introduced at a middle position of the plurality of convolution layers. The feature extraction network extracts feature information of the image blocks so that the subsequent classification network can correctly predict the relative positions of the image blocks.
The BLOCK-G module superimposes 1x1, 3x3 and 5x5 convolution kernels and a maximum pooling layer, so that the same layer has convolution kernels of several different scales, which increases the feature expression capability of the network, as shown in fig. 10.
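The multi-scale BLOCK-G module might be sketched in PyTorch as follows; the per-branch channel counts and the 1x1 projection after the pooling branch are assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class BlockG(nn.Module):
    """Sketch of BLOCK-G: parallel 1x1, 3x3 and 5x5 convolutions plus a
    max-pooling branch, concatenated so that one layer sees several
    kernel scales at once."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1))

    def forward(self, x):
        # Concatenate the four branches along the channel dimension.
        return torch.cat(
            [self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)
```

All branches preserve spatial size, so the concatenation simply multiplies the branch channel count by four.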
The convolution modules referred to in this application may be conventional convolution layer modules unless otherwise specified.
In order to improve the accuracy and the robustness of the image anomaly detection model, the convolutional neural network unit further comprises a convolutional neural network model checking unit; the convolutional neural network model checking unit comprises a model checking unit and a model testing unit, wherein the model checking unit is used for detecting the accuracy of the convolutional network model obtained through training; the model test unit is used for detecting whether the convolutional network model obtained through training is over-fitted or not so as to screen out the network model with high robustness.
In this embodiment, when the analysis unit uses the trained image anomaly detection model to analyze a specific image structure in an image to be analyzed, the analysis may be performed by using an existing method, or may be performed by using the following method.
The image to be analyzed, I_test, is divided by a sliding window into image blocks of the same size as the receptive field of the feature extraction network, using a sliding step of S pixels, to obtain an image block sequence; the segmented image blocks are W×W pixels, with 1 ≤ S ≤ W.
The image blocks produced by the sliding window are then adaptively segmented to distinguish the target from the blank background, i.e. to separate foreground from background. Image blocks whose target ratio is smaller than a threshold T1 are discarded and excluded from subsequent processing; image blocks whose target ratio is greater than T1 are retained, forming an image block sequence {BLOCK_{i,j}}, where i and j count the image blocks along the x and y coordinates respectively and together index each block.
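The sliding-window blocking and foreground filtering described above can be sketched as follows; the background test (pixels brighter than bg_thresh count as blank) and the values of W, S and T1 are illustrative assumptions.

```python
import numpy as np

def sliding_blocks(img, W, S, t1=0.2, bg_thresh=0.9):
    """Slide a W x W window with step S over img (grayscale in [0, 1]);
    keep only blocks whose foreground (non-blank) ratio exceeds t1.
    t1 and bg_thresh are illustrative values, not taken from the patent."""
    blocks = {}
    H, Wimg = img.shape
    for i, y in enumerate(range(0, H - W + 1, S)):
        for j, x in enumerate(range(0, Wimg - W + 1, S)):
            b = img[y:y + W, x:x + W]
            fg_ratio = np.mean(b < bg_thresh)  # pixels darker than the blank background
            if fg_ratio > t1:
                blocks[(i, j)] = b             # retained block, indexed by (i, j)
    return blocks

img = np.ones((16, 16))          # blank (white) background
img[8:16, 8:16] = 0.0            # a dark "target" region
blocks = sliding_blocks(img, W=8, S=4)
print(len(blocks))  # 4
```

Only the windows overlapping the dark target survive the T1 filter; purely blank windows are discarded before any feature extraction.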
The image blocks in the sequence {BLOCK_{i,j}} are input into the feature extraction network to obtain the anomaly feature map M of the image I_test. Specifically: each block in {BLOCK_{i,j}} is passed through the feature extraction network, its outlier score abnormal_{i,j} is calculated, and abnormal_{i,j} is used as the initial outlier score of every pixel in that block. The outlier score is abnormal_{i,j} = min_{h∈S} ||f(BLOCK_{i,j}) − h||_2, where f(BLOCK_{i,j}) is the feature vector obtained by inputting BLOCK_{i,j} into the feature extraction network, h is any feature vector in the feature vector set S, ||·||_2 denotes the L2 norm distance, and min_{h∈S} ||f(BLOCK_{i,j}) − h||_2 is therefore the minimum L2 norm distance from the feature vector of BLOCK_{i,j} to any feature vector in the set S.
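The outlier score above is simply the distance to the nearest neighbour in the feature set S. A minimal sketch, with a toy two-dimensional feature set standing in for the trained network's features:

```python
import numpy as np

def outlier_score(feature, feature_set):
    """abnormal = min over h in S of ||f(block) - h||_2:
    distance from the block's feature vector to its nearest
    neighbour in the training feature set S."""
    d = np.linalg.norm(feature_set - feature, axis=1)
    return d.min()

S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy feature set
near = outlier_score(np.array([0.0, 0.1]), S)  # close to a stored feature -> low score
far = outlier_score(np.array([5.0, 5.0]), S)   # far from every feature   -> high score
print(near, far)
```

Blocks resembling the training data land near some stored feature vector and score low; structurally anomalous blocks score high, which is what drives the anomaly feature map.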
Then the anomaly feature map M of the image to be analyzed, I_test, is calculated after the feature extraction network: the anomaly score value P_{i,j} of each pixel of I_test is calculated from the initial outlier scores abnormal_{i,j} assigned to that pixel by the image blocks containing it, and the anomaly score values P_{i,j} of all pixels of I_test form the corresponding anomaly feature map M, where M and N denote the total number of image blocks in the x and y directions respectively.
Threshold segmentation is performed on the anomaly feature map M, and the segmented binary image is used to calculate the area percentage of the specific image structure type in the image to be analyzed, I_test. Specifically: M is thresholded at a threshold T2, and the area percentage of the specific image structure is computed as AREA_STRUCT / AREA_GCT, where AREA_GCT is the target area in I_test, corresponding to the sum of the foreground areas of the blocks in the sequence {BLOCK_{i,j}}, and AREA_STRUCT is the area of the specific image structure in I_test, corresponding to the foreground area after threshold segmentation of the anomaly feature map M minus the sum of the background areas of the blocks in {BLOCK_{i,j}}.
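The thresholding and area-percentage step can be sketched as below. The toy map M, the threshold T2, and the use of the full map size as a stand-in for AREA_GCT (which in the patent is the summed foreground area of the retained blocks) are illustrative assumptions.

```python
import numpy as np

# M: anomaly feature map; T2: segmentation threshold (illustrative values).
M = np.array([[0.1, 0.9, 0.8],
              [0.2, 0.95, 0.1],
              [0.3, 0.85, 0.2]])
T2 = 0.5
struct_mask = M > T2             # binary map of the specific image structure
area_struct = struct_mask.sum()  # AREA_STRUCT, in pixels
area_gct = M.size                # stand-in for AREA_GCT (total target area)
ratio = 100.0 * area_struct / area_gct
print(round(ratio, 1))  # 44.4
```

The ratio is the quantitative output reported by the system; the binary mask itself can also be overlaid on I_test to localize the structure.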
The application also provides an image analysis system construction method, which specifically comprises the following steps: constructing the image analysis system, obtaining an original image by an image database construction unit, marking different image structures in the original image, classifying and arranging marked image data, and dividing a training set, a checking set and a testing set; training the image anomaly detection model by adopting a training set to obtain an ideal image anomaly detection model, and checking the ideal image anomaly detection model by adopting a checking set to detect the accuracy rate of the ideal image anomaly detection model; and testing an ideal image anomaly detection model by adopting a test set, and detecting the robustness of the model.
If the difference between the accuracy of the image anomaly detection model on the test set and its accuracy on the check set during training exceeds a preset value, the model is overfitted; it is returned to the convolutional neural network training unit, and the network structure or parameters are adjusted for retraining, until an image anomaly detection model is obtained whose accuracy on the test set differs from its accuracy on the check set by no more than the preset value. At that point the robustness of the image anomaly detection model is high.
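The overfitting check described above reduces to a simple accuracy comparison; in this sketch the preset margin of 0.05 is an illustrative value, since the patent does not fix it.

```python
def is_overfitted(acc_check, acc_test, preset=0.05):
    """Flag overfitting when test-set accuracy lags check-set accuracy
    by more than a preset margin (0.05 is an illustrative value)."""
    return (acc_check - acc_test) > preset

print(is_overfitted(0.95, 0.80))  # True  -> return model for retraining
print(is_overfitted(0.95, 0.93))  # False -> robustness acceptable
```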
The training process of the training set on the image anomaly detection model is as follows:
b1: and training a characteristic extraction network of the image anomaly detection model by adopting images in a training set.
In this embodiment, the following two training processes are provided:
first kind:
b11: for each image in the training set, as shown in fig. 11, an image block p is arbitrarily selected from eight neighborhoods of a 3×3 grid, the scale of the p is the same as the receiving field of the feature extraction network, then random dithering is performed on the center of the image block p to obtain an image block p1, the cross entropy of the image block p and the p1 is calculated as a subitem Loss function loss_1,wherein the image block p 1 The true relative position with respect to p is y 0,1, …,7,y i referring to the number of image blocks in the training set for category i at 8 relative positions y {0,1, …,7 }; classifier C φ Trained to correctly predict image block p 1 Relative to image block p, i.e. y=c φ (f θ (p),f θ (p 1 ) Representing prediction of image block p with classifier 1 Relative position to image block p, a i Is the confidence of class i calculated by the classifier, and N is the total number of samples in the training set.
For the image block p, as shown in fig. 12, an image block p2 is randomly selected from the four-neighbourhood of its 5×5 grid that lies in the same row or column as p but is not adjacent to it; the scale of p2 is the same as the receptive field of the feature extraction network. The cross entropy of p and p2 is calculated as the sub-loss function Loss_2 = −(1/N) Σ_{n=1..N} Σ_{i=0..3} y_i·log(b_i), where the true relative position of p2 with respect to p is y ∈ {0, 1, 2, 3}, y_i denotes, over the image blocks in the training set, the label of category i among the 4 relative positions, and b_i is the confidence of category i calculated by the classifier. The classifier C_φ is trained to correctly predict the relative position of p2 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p2)).
For the image block p, as shown in fig. 13, 2 to 4 of the image blocks p3, p4, p5, p6 are acquired from the four diagonal intersection regions adjacent to p; that is, taking the four corner points of p (upper left, upper right, lower left, lower right) as new block centres yields four new image blocks p3, p4, p5, p6 of the same scale as p. The L2 norm distances between p and the selected blocks among p3, p4, p5, p6 are calculated and averaged as the sub-loss function Loss_3 = (1/N) Σ_{n=1..N} mean_i ||f_θ(p) − f_θ(p_{2+i})||_2, where ||f_θ(p) − f_θ(p_{2+i})||_2 denotes the L2 norm distance between p and a selected block among p3, p4, p5, p6.
B12: The loss function of the network model is calculated as Loss = λ1·Loss_1 + λ2·Loss_2 + Loss_3, where λ1 and λ2 are weight values in the loss function greater than 0; back propagation is performed with the Adam optimizer to realize iteration and optimization of the network weights of the feature extraction network model.
B13: Steps B11 to B12 are repeated for the specified number of rounds, after which the optimal weights of the feature extraction network and the classification network are selected and saved according to the loss function of each round of training.
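The weighted combination Loss = λ1·Loss_1 + λ2·Loss_2 + Loss_3 of step B12 can be illustrated numerically; the classifier outputs, λ values and the placeholder Loss_3 below are toy assumptions (the real losses come from the trained classifier and feature network).

```python
import numpy as np

def cross_entropy(y_true, probs):
    """Mean cross entropy over a batch; y_true holds class indices."""
    n = len(y_true)
    return -np.mean(np.log(probs[np.arange(n), y_true]))

# Toy classifier outputs for the 8-way and 4-way relative-position heads.
probs8 = np.full((4, 8), 0.1); probs8[np.arange(4), [0, 3, 5, 7]] = 0.3
probs4 = np.full((4, 4), 0.2); probs4[np.arange(4), [0, 1, 2, 3]] = 0.4
loss_1 = cross_entropy(np.array([0, 3, 5, 7]), probs8)
loss_2 = cross_entropy(np.array([0, 1, 2, 3]), probs4)
loss_3 = 0.05                  # placeholder mean L2 feature distance
lam1, lam2 = 1.0, 0.5          # weights > 0, illustrative values
loss = lam1 * loss_1 + lam2 * loss_2 + loss_3
print(round(loss, 4))  # 1.7121
```

In the actual training loop this scalar would be backpropagated through f_θ and C_φ with the Adam optimizer at each iteration.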
Second kind:
step 1: for each image in the training set, any image block p is selected, one image block p7 is randomly selected in eight adjacent areas of 3X3 grids of the image block p, the cross entropy of the image block p and the p7 is calculated as a subitem Loss function loss_4,wherein the image block p 7 The true relative position with respect to p is y {0,1, …,7}, y i Referring to the number of image blocks in the training set for category i at 8 relative positions y {0,1, …,7 }; classifier C φ Trained to correctly predict image block p 7 Relative position to picture block p, i.e. +.>Representing prediction image block p with classifier 7 Relative position to image block p, c i Is the confidence of class i calculated by the classifier, and N is the total number of samples in the training set.
For the image block p, an image block p8 is randomly taken from the four-neighbourhood of its 5×5 grid that lies in the same row or column as p but is not adjacent to it, and the cross entropy of p and p8 is calculated as the sub-loss function Loss_5 = −(1/N) Σ_{n=1..N} Σ_{i=0..3} y_i·log(d_i), where the true relative position of p8 with respect to p is y ∈ {0, 1, 2, 3}, y_i denotes, over the image blocks in the training set, the label of category i among the 4 relative positions, and d_i is the confidence of category i calculated by the classifier. The classifier C_φ is trained to correctly predict the relative position of p8 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p8)).
Step 2: The loss function of the network model is calculated as Loss = λ·Loss_4 + Loss_5, where λ is a weight value in the loss function greater than 0; back propagation is performed with the Adam optimizer to realize iteration and optimization of the network weights of the feature extraction network model.
Step 3: Steps 1 to 2 are repeated for the specified number of rounds, and the optimal weights of the feature extraction network and the classification network are selected and saved according to the loss function of each round of training.
It should be noted that if the number of images in the training set is too small, the minimum number of network layers can be set within the configurable range of the network structure, to avoid overfitting when training data are insufficient; conversely, if the model accuracy cannot be increased by training the existing convolutional neural network model, more network layers can be set within the configurable range, i.e. the fitting capability of the model is improved by increasing the depth of the convolutional model.
Then the next step is performed:
b2: using the feature extraction network after training to obtain and store feature set S, S≡S≡U { f corresponding to training set θ (p), namely: for each image in the training set, randomly extracting an image block which is the same as the receiving field of the feature extraction network, and obtaining the feature vector f of the image block through the feature extraction network obtained by training θ (p) the feature vector as a whole constitutes a feature vector set S.
B3: The trained feature extraction network weights f_θ and the feature set S corresponding to the training image set are saved, completing the training of the image anomaly detection model.
After the image anomaly detection model is trained, an ideal image anomaly detection model is obtained. The check set can be used to verify this model and measure its accuracy, and the test set can be used to test it and measure its robustness. If the difference between the model's accuracy on the test set and its accuracy on the check set exceeds a preset value, the model is overfitted; the network structure or parameters are adjusted and the model is retrained until its test-set accuracy differs from its check-set accuracy by no more than the preset value, at which point the robustness of the image anomaly detection model is high. Accuracy here means the following: the images in the check set or test set are divided by a sliding window, with a step of S pixels, into an image block sequence (each block the same size as the receptive field of the feature extraction network); the anomaly feature map of the image is obtained through the feature extraction network and thresholded; the segmented binary image is used to calculate the area percentage of the specific image structure type in the image; and this percentage is compared with the manual labelling result to obtain the accuracy. If the accuracy on the check set is within the acceptable range, the trained image anomaly detection model is considered an ideal model.
The image analysis system and construction method can be used for, but are not limited to, analyzing pathological images, so as to analyze the tumour cell image structures in those images: for example, intelligent, efficient and quantitative analysis of follicular structures, insular structures, trabecular structures, ribbon structures, diffuse structures and the like. Eight pathologists, each with more than 5 years of diagnostic experience with ovarian granulosa cell tumours, were selected; each was given 30 pathological section images of ovarian granulosa cell tumours and asked to analyse the tumour cell image structures in them. The accuracy and average time were calculated, and the diagnostic state of each doctor was recorded.
TABLE 1 comparison of results of image analysis of ovarian granulomatous pathological sections
As can be seen from table 1, when the scheme provided by the invention is used to analyse the tumour cell image structures in pathological sections (follicular, insular, trabecular, ribbon and diffuse structures, among others), the accuracy is higher than that of a professional pathologist, and quantitative conclusions are obtained (a pathologist can only reach subjective qualitative or semi-quantitative conclusions through visual analysis). Furthermore, analysis with the method of the invention takes less time, and the system can work continuously for long durations.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (5)

1. The image analysis system construction method is characterized in that the constructed image analysis system comprises an image database construction unit, a convolutional neural network unit and an analysis unit;
the image database construction unit comprises an image data acquisition unit for acquiring input image data, an image data labeling unit for labeling different image structures in each input image data, and an image database construction unit for classifying and sorting the labeled image data provided by the image data labeling unit;
the convolutional neural network unit comprises a convolutional neural network model construction unit and a convolutional neural network model training unit, wherein the convolutional neural network model construction unit is used for constructing an image anomaly detection model; the convolutional neural network model training unit trains the image anomaly detection model, its inputs comprising a training image set, loss function weights, a feature extraction network and a classification network, and its outputs comprising the feature set S of the training set and the trained feature extraction network with its weights f_θ;
The analysis unit analyzes a specific image structure in the image to be analyzed by using the trained image anomaly detection model;
the method comprises the steps that an original image is obtained by an image database construction unit, different image structures in the original image are marked, marked image data are classified and sorted, and a training set, a checking set and a testing set are divided; training the image anomaly detection model by adopting a training set to obtain an ideal image anomaly detection model, and checking the ideal image anomaly detection model by adopting a checking set to detect the accuracy rate of the ideal image anomaly detection model; testing an ideal image anomaly detection model by adopting a test set, and detecting the robustness of the model;
if the difference between the accuracy of the image anomaly detection model on the test set and its accuracy on the check set during training exceeds a preset value, the model is overfitted; it is returned to the convolutional neural network training unit, and the network structure or parameters are adjusted for retraining, until an image anomaly detection model is obtained whose accuracy on the test set differs from its accuracy on the check set by no more than the preset value, at which point the robustness of the image anomaly detection model is high;
the training process of the training set on the image anomaly detection model is as follows:
b1: training a characteristic extraction network of an image anomaly detection model by adopting images in a training set;
training is performed by adopting one of the following two training processes:
training process one:
b11: for each image in the training set, randomly selecting one image block p from eight neighborhoods of 3X3 grid, and extracting the scale of p and the receiving field of the characteristic extraction networkAnd then randomly dithering the center of the image block p to obtain an image block p1, calculating the cross entropy of the image block p and the image block p1 as a subitem Loss function loss_1,wherein the image block p 1 The true relative position with respect to p is y {0,1, …,7}, y i Referring to the number of image blocks in the training set for category i at 8 relative positions y {0,1, …,7 }; classifier C φ Trained to correctly predict image block p 1 Relative to image block p, i.e. y=c φ (f θ (p),f θ (p 1 )),a i The confidence coefficient of the category i calculated by the classifier is calculated, and N is the total number of samples in the training set;
for image block p, an image block p2 is randomly selected from the four-neighbourhood of its 5×5 grid that lies in the same row or column as p but is not adjacent to it, the scale of p2 being the same as the receptive field of the feature extraction network; the cross entropy of p and p2 is calculated as the sub-loss function Loss_2 = −(1/N) Σ_{n=1..N} Σ_{i=0..3} y_i·log(b_i), wherein the true relative position of p2 with respect to p is y ∈ {0, 1, 2, 3}, y_i denotes, over the image blocks in the training set, the label of category i among the 4 relative positions, and b_i is the confidence of category i calculated by the classifier; the classifier C_φ is trained to correctly predict the relative position of p2 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p2));
for the image block p, 2 to 4 of the image blocks p3, p4, p5, p6 are obtained from the four diagonal intersection regions adjacent to p; the L2 norm distances between p and the selected blocks among p3, p4, p5, p6 are calculated and averaged as the sub-loss function Loss_3 = (1/N) Σ_{n=1..N} mean_i ||f_θ(p) − f_θ(p_{2+i})||_2, wherein ||f_θ(p) − f_θ(p_{2+i})||_2 denotes the L2 norm distance between p and a selected block among p3, p4, p5, p6;
b12: the loss function of the network model is calculated as Loss = λ1·Loss_1 + λ2·Loss_2 + Loss_3, wherein λ1 and λ2 are weight values in the loss function greater than 0; back propagation is performed with the Adam optimizer to realize iteration and optimization of the network weights of the feature extraction network model;
b13: steps B11 to B12 are repeated for the specified number of rounds, and the optimal weights of the feature extraction network and the classification network are selected and saved according to the loss function of each round of training;
training process II:
step 1: for each image in the training set, an image block p is selected arbitrarily, and an image block p7 is randomly selected from the eight-neighbourhood of the 3X3 grid around p; the cross entropy of p and p7 is calculated as the sub-loss function Loss_4 = −(1/N) Σ_{n=1..N} Σ_{i=0..7} y_i·log(c_i), wherein the true relative position of p7 with respect to p is y ∈ {0, 1, …, 7}, y_i denotes, over the image blocks in the training set, the label of category i among the 8 relative positions, c_i is the probability value of category i calculated by the classifier, and N is the total number of samples in the training set; the classifier C_φ is trained to correctly predict the relative position of p7 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p7));
for the image block p, an image block p8 is randomly taken from the four-neighbourhood of its 5X5 grid that lies in the same row or column as p but is not adjacent to it; the cross entropy of p and p8 is calculated as the sub-loss function Loss_5 = −(1/N) Σ_{n=1..N} Σ_{i=0..3} y_i·log(d_i), wherein the true relative position of p8 with respect to p is y ∈ {0, 1, 2, 3}, y_i denotes, over the image blocks in the training set, the label of category i among the 4 relative positions, and d_i is the probability value of category i calculated by the classifier; the classifier C_φ is trained to correctly predict the relative position of p8 with respect to p, i.e. y = C_φ(f_θ(p), f_θ(p8));
step 2: the loss function of the network model is calculated as Loss = λ·Loss_4 + Loss_5, wherein λ is a weight value in the loss function greater than 0; back propagation is performed with the Adam optimizer to realize iteration and optimization of the network weights of the feature extraction network model;
step 3: steps 1 to 2 are repeated for the specified number of rounds, and the optimal weights of the feature extraction network and the classification network are selected and saved according to the loss function of each round of training;
b2: the trained feature extraction network is used to obtain and save the feature set S corresponding to the training set, S ← S ∪ {f_θ(p)}, namely: for each image in the training set, an image block with the same size as the receptive field of the feature extraction network is randomly extracted, its feature vector f_θ(p) is obtained through the trained feature extraction network, and these feature vectors together form the feature vector set S;
b3: the trained feature extraction network weights f_θ and the feature vector set S corresponding to the training set are saved.
2. The image analysis system construction method according to claim 1, wherein the convolutional neural network construction unit includes a feature extraction network and a classification network;
the feature extraction network comprises a low layer formed by M convolution modules BLOCK-A, a high layer formed by N residual convolution modules BLOCK-B and P residual convolution modules BLOCK-C, a convolution layer connected to the high layer, and a tanh() activation function;
the convolution module BLOCK-A consists of A convolution layer and A LeakyReLU () activation function;
the residual convolution module BLOCK-B consists of superimposed 1x1 and 3x3 convolution kernels and a skip connection;
the residual convolution module BLOCK-C consists of superimposed 1x1, 1x3 and 3x1 convolution kernels and a skip connection;
the classification network is made up of fully connected layers and a LeakyReLU () activation function.
3. The image analysis system construction method according to claim 1, wherein the convolutional neural network construction unit includes a feature extraction network and a classification network;
the characteristic extraction network comprises a low layer formed by K convolution modules BLOCK-D, a high layer formed by Q self-supervision convolution modules BLOCK-F, a convolution layer connected with the high layer and a tanh () activation function;
the convolution module BLOCK-D consists of a convolution layer and a LeakyReLU () activation function;
the self-supervision convolution module BLOCK-F module comprises a plurality of convolution kernels of 1x1 and 3x3 which are overlapped with each other and a mean value pooling layer;
the classification network is made up of fully connected layers and a LeakyReLU () activation function.
4. The image analysis system construction method according to claim 1, wherein the convolutional neural network construction unit includes a feature extraction network and a classification network;
the characteristic extraction network comprises a plurality of convolution layers, and a BLOCK-G module is introduced into the middle position of the convolution layers;
the BLOCK-G module comprises a plurality of superimposed 1x1, 3x3, 5x5 convolution kernels and a maximum pooling layer;
the classification network is made up of fully connected layers and a LeakyReLU () activation function.
5. The image analysis system construction method according to claim 1, wherein the convolutional neural network unit further comprises a convolutional neural network model checking unit;
the convolutional neural network model checking unit comprises a model checking unit and a model testing unit, wherein the model checking unit is used for detecting the accuracy of the convolutional network model obtained through training; the model test unit is used for detecting whether the convolutional network model obtained through training is over-fitted or not so as to screen out the network model with high robustness.
CN202110338180.4A 2021-03-30 2021-03-30 Image analysis system and construction method thereof Active CN113096079B (en)

Publications (2)

Publication Number Publication Date
CN113096079A CN113096079A (en) 2021-07-09
CN113096079B true CN113096079B (en) 2023-12-29



Also Published As

Publication number Publication date
CN113096079A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
CN110287932B (en) Road blocking information extraction method based on deep learning image semantic segmentation
CN105869173B (en) Stereoscopic vision saliency detection method
CN106682633B (en) Machine-vision-based classification and identification method for visible components in stool examination images
CN105260738B (en) High-resolution remote sensing image change detection method and system based on active learning
WO2018052586A1 (en) Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
CN110647875B (en) Blood cell segmentation and identification model construction method and blood cell identification method
Morris A pyramid CNN for dense-leaves segmentation
CN112801146B (en) Target detection method and system
CN111090764B (en) Image classification method and device based on multitask learning and graph convolution neural network
Pan et al. Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks
CN109685045A (en) Moving target tracking method and system based on video streams
KR101618996B1 (en) Sampling method and image processing apparatus for estimating homography
CN105930794A (en) Indoor scene identification method based on cloud computing
CN113096080B (en) Image analysis method and system
CN110008853B (en) Pedestrian detection network and model training method, detection method, medium and equipment
CN114463637B (en) Winter wheat remote sensing identification analysis method and system based on deep learning
CN108664986B (en) Image classification method and system based on lp-norm regularized multi-task learning
CN110738132B (en) Target detection quality blind evaluation method with discriminant perception capability
CN108710893A (en) Digital image camera source model classification method based on feature fusion
Geng et al. An improved helmet detection method for YOLOv3 on an unbalanced dataset
CN111222545B (en) Image classification method based on linear programming incremental learning
CN113096079B (en) Image analysis system and construction method thereof
Devisurya et al. Early detection of major diseases in turmeric plant using improved deep learning algorithm
CN109741351A (en) Category-sensitive edge detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant