CN109671062A - Ultrasonic image detection method and device, electronic equipment and readable storage medium - Google Patents

Ultrasonic image detection method and device, electronic equipment and readable storage medium

Info

Publication number
CN109671062A
Authority
CN
China
Prior art keywords
ultrasonic
image detection
images
training
ultrasound
Prior art date
Legal status
Pending
Application number
CN201811512039.6A
Other languages
Chinese (zh)
Inventor
王利团
朱敏娟
曹晏阁
黄伟
Current Assignee
Chengdu Intelligent Diega Technology Partnership (limited Partnership)
Original Assignee
Chengdu Intelligent Diega Technology Partnership (limited Partnership)
Priority date
Filing date
Publication date
Application filed by Chengdu Intelligent Diega Technology Partnership (limited Partnership) filed Critical Chengdu Intelligent Diega Technology Partnership (limited Partnership)
Priority to CN201811512039.6A priority Critical patent/CN109671062A/en
Publication of CN109671062A publication Critical patent/CN109671062A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An ultrasound image detection method and device, an electronic device and a readable storage medium provided by the embodiments of the present application preprocess multiple groups of acquired ultrasound images, wherein each group of ultrasound images comprises multiple ultrasound images, and the multiple ultrasound images in the same group are obtained from the same examination of the same patient. After the preprocessed data are divided according to a preset proportion, a pre-constructed deep neural network model is trained and tested respectively, until the test effect of the ultrasound image detection model reaches the expected effect. The scheme fuses multiple ultrasound image data of the same patient from the same examination, which effectively reduces the loss of intermediate information and makes the detection result more accurate.

Description

Ultrasonic image detection method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the field of image detection, and in particular to an ultrasonic image detection method, an ultrasonic image detection device, an electronic device and a readable storage medium.
Background
Color ultrasound is one of the first-choice imaging methods for examining thyroid nodular lesions, and high-resolution thyroid color ultrasound examination is currently the most sensitive method for evaluating thyroid nodules. Many methods for automatically identifying thyroid cancer from color ultrasound images have been studied. One approach segments the thyroid nodule from each thyroid color ultrasound image and then judges whether the nodule is benign or malignant, so as to judge whether the examinee has thyroid cancer; another approach extracts image features from a single thyroid color ultrasound image and directly judges whether thyroid cancer is present.
Most existing thyroid cancer detection methods are based on a single color ultrasound image. In actual clinical examination, however, when judging whether a thyroid nodule is benign or malignant, doctors often need to observe the nodule from multiple angles and, if necessary, combine information such as the blood flow signal around the nodule. A single color ultrasound image therefore often cannot contain all the features of a thyroid nodule, and this lack of information can, to some extent, cause misdiagnosis of benign and malignant nodules. In addition, the intelligent diagnosis method that first segments nodules and then classifies them as benign or malignant depends on two modules, nodule segmentation and nodule classification; if the segmentation algorithm misses a nodule, the accuracy of the benign/malignant classification is directly affected.
Disclosure of Invention
In view of the above, an object of the present application is to provide an ultrasound image detection method, an ultrasound image detection apparatus, an electronic device and a readable storage medium to address the above problems.
The embodiment of the application provides an ultrasonic image detection method, which comprises the following steps:
selecting a plurality of groups of ultrasonic images, and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient;
dividing the data set into a training set and a test set according to a preset proportion, and training a pre-constructed deep neural network model by using the training set to obtain an ultrasonic image detection training model;
and inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model, and obtaining the ultrasonic image detection model.
Further, the step of selecting a plurality of sets of ultrasound images, and preprocessing each set of ultrasound images to obtain a data set includes:
selecting a plurality of groups of ultrasonic images, carrying out lesion marking on each group of ultrasonic images according to the obtained pathological detection report and diagnosis result, and obtaining a lesion marking result;
and selecting frames and cutting the color Doppler ultrasound detection parts in the ultrasonic images after lesion marking to form a data set.
Further, the pre-constructed deep neural network model comprises an input layer, a hidden layer and an output layer, wherein the hidden layer comprises a plurality of network layers, and each of the input layer, the hidden layer and the output layer comprises a plurality of neurons.
Further, the step of dividing the data set into a training set and a test set according to a preset proportion, training a pre-constructed deep neural network model with the training set, and obtaining an ultrasound image detection training model includes:
dividing the data set into a training set and a test set according to a preset proportion, wherein the training set is X = {(X_1, d_1), (X_2, d_2), ..., (X_i, d_i), ..., (X_m, d_m)}, in which X_i = {x_i1, x_i2, ..., x_ij, ..., x_iN}, N denotes the number of ultrasound images included in a group of ultrasound images, x_ij represents the j-th ultrasound image in the i-th group of ultrasound images, m represents the number of groups of ultrasound images in the training set, and d_i represents the lesion labeling result corresponding to the i-th group of ultrasound images;
carrying out augmentation operation and normalization operation on each group of ultrasonic images in the training set, wherein the augmentation operation comprises rotation and turnover;
and inputting the ultrasonic image into a pre-constructed deep neural network model, and optimizing the connection weight between the neurons by using a forward calculation and back propagation algorithm to obtain an ultrasonic image detection training model.
Further, the step of inputting the ultrasound image into a pre-constructed deep neural network model, and optimizing the connection weight between the neurons by using a forward calculation and back propagation algorithm to obtain an ultrasound image detection training model includes:
inputting the ultrasonic image into a pre-constructed deep neural network model;
the deep neural network model extracts the features of each ultrasound image to form the feature set F_i = {F_i1, F_i2, ..., F_iN}, where F_ij denotes the features extracted from the j-th image of the i-th group of ultrasound images at the L-th network layer;
fusing the features of the ultrasound images to obtain the fused feature F_i = Σ_{j=1}^{N} α_j F_ij of the group of ultrasound images, wherein α_j is the attention weight;
inputting the fusion characteristics to the output layer for classification processing to obtain network output;
and updating the connection weight by using a back propagation algorithm to obtain an ultrasonic image detection training model.
Further, the preprocessing comprises lesion labeling of multiple groups of ultrasonic images, and the step of inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model, and obtaining the ultrasonic image detection model comprises:
inputting the test set into the ultrasonic image detection training model to obtain a test result;
comparing the test result with the lesion marking result, counting the number of the ultrasonic image groups with the test result consistent with the lesion marking result, and obtaining the accuracy of the test result;
and comparing the accuracy with a preset value, if the accuracy is smaller than the preset value, training the ultrasonic image detection training model by using the training set until the accuracy is larger than or equal to the preset value, and using the ultrasonic image detection training model as an ultrasonic image detection model.
Further, after the step of obtaining the ultrasound image detection model, the method further includes:
obtaining an original ultrasonic image;
cutting a color Doppler ultrasound detection part in the original ultrasound image to form an image to be detected, wherein the original ultrasound image is a plurality of ultrasound images obtained by the same examination of the same patient;
and detecting the lesion of the image to be detected by using the ultrasonic image detection model to obtain a detection result.
The embodiment of the present application further provides an ultrasound image detection apparatus, including:
the preprocessing module is used for selecting a plurality of groups of ultrasonic images and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient;
the training module is used for dividing the data set into a training set and a test set according to a preset proportion, and training a pre-constructed deep neural network model by using the training set to obtain an ultrasonic image detection training model;
and the test module is used for inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model and obtaining the ultrasonic image detection model.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a storage medium;
a processor;
an ultrasound image detection device stored in the storage medium and including software functional modules executed by the processor, the device comprising:
the preprocessing module is used for selecting a plurality of groups of ultrasonic images and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient;
the training module is used for dividing the data set into a training set and a test set according to a proportion, and training a deep neural network model constructed in advance by using the training set to obtain an ultrasonic image detection training model;
and the test module is used for inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model and obtaining the ultrasonic image detection model.
The embodiment of the present application further provides a readable storage medium, in which a computer program is stored, and when the computer program is executed, the method for detecting an ultrasound image is implemented.
According to the ultrasonic image detection method and device, the electronic equipment and the readable storage medium provided by the embodiments of the application, multiple groups of acquired ultrasound images are preprocessed, wherein each group of ultrasound images comprises multiple ultrasound images and the multiple ultrasound images in the same group are obtained from the same examination of the same patient; after the preprocessed data are divided according to a preset proportion, the pre-constructed deep neural network model is trained and tested respectively until the test effect of the ultrasound image detection model reaches the expected effect. The scheme fuses multiple ultrasound image data of the same patient from the same examination, effectively reduces the loss of intermediate information and makes the detection result more accurate.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of an ultrasound image detection method according to an embodiment of the present application.
Fig. 3 is a flowchart of the sub-steps of step S1 in fig. 2.
Fig. 4 is a schematic diagram of a valid part of a frame-selected color Doppler ultrasound in the ultrasound image detection method according to the embodiment of the present application.
Fig. 5 is a flowchart of the sub-steps of step S2 in fig. 2.
Fig. 6 is a flowchart of the sub-steps of step S3 in fig. 2.
Fig. 7 is another flowchart of an ultrasound image detection method according to an embodiment of the present application.
Reference numerals: 100 - electronic device; 110 - ultrasound image detection apparatus; 120 - processor; 130 - storage medium.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
As shown in fig. 1, an embodiment of the present invention provides an electronic device 100, where the electronic device 100 includes a storage medium 130, a processor 120, and an ultrasound image detection apparatus 110.
The storage medium 130 is electrically connected to the processor 120, directly or indirectly, to enable transmission or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The ultrasound image detection apparatus 110 includes at least one software function module which can be stored in the storage medium 130 in the form of software or firmware (firmware). The processor 120 is configured to execute an executable computer program stored in the storage medium 130, for example, a software functional module and a computer program included in the ultrasound image detection apparatus 110, so as to implement the ultrasound image detection method.
The storage medium 130 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The storage medium 130 is used for storing a program, and the processor 120 executes the program after receiving an execution instruction.
The processor 120 may be an integrated circuit chip having signal processing capabilities. The Processor 120 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor 120 may be any conventional processor or the like.
It is to be understood that the configuration shown in fig. 1 is merely exemplary, and that the electronic device 100 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Alternatively, the specific type of the electronic device 100 is not limited, and may be, for example, but not limited to, a smart phone, a Personal Computer (PC), a tablet PC, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a web server, a data server, and the like having a processing function.
With reference to fig. 2, an embodiment of the present invention further provides an ultrasound image detection method applicable to the electronic device 100. Wherein the method steps defined by the method-related flow may be implemented by the processor 120.
The specific process shown in fig. 2 will be described in detail below by taking an example of a thyroid ultrasound image.
The method comprises S1, S2, S3.
S1, selecting a plurality of groups of ultrasound images, and preprocessing each group of ultrasound images to obtain a data set, wherein each group of ultrasound images includes a plurality of ultrasound images, and a plurality of ultrasound images in the same group of ultrasound images are obtained from the same examination of the same patient.
In order to detect whether an examinee has thyroid cancer, a radiologist acquires, at each examination, a plurality of color ultrasound images from different angles of the examinee's thyroid gland and both sides of the neck, and acquires color ultrasound images in various forms, such as blood flow signals and spectrum signals, for thyroid nodules found to have possible lesions. The detection method of the invention takes the plurality of images collected from a patient during one examination as one group and inputs them into the detection model together for detection.
Detecting thyroid cancer in thyroid color ultrasound image data with Deep Neural Networks (DNNs) requires automatically learning the characteristics of thyroid cancer and other thyroid lesions from a large number of thyroid color ultrasound images. Therefore, before the deep neural network is trained, a large number of thyroid color ultrasound images need to be labeled and preprocessed, and the data set needs to be divided accordingly.
Therefore, referring to fig. 3, the S1 specifically includes S11 and S12.
And S11, selecting multiple groups of ultrasonic images, carrying out lesion marking on each group of ultrasonic images according to the obtained pathology detection report and diagnosis result, and obtaining a lesion marking result.
Each time an examinee is examined, a radiology technologist collects several color ultrasound images of the examinee's thyroid and the surrounding related regions, and the color ultrasound images obtained from a single examination of one examinee form one group of image data. A professional radiologist marks the exact location of the thyroid lesion in each group of image data and gives the lesion type. When the radiologist is unsure whether a diseased nodule is cancerous, the examinee is often advised to undergo further examination, in which the benign or malignant nature of the nodule is determined by means such as pathological puncture or pathological section. Common thyroid lesions include nodular goiter, Hashimoto's thyroiditis, thyroid adenoma, thyroid cancer and the like. The color ultrasound images used in the invention are annotated according to the radiology technologist's findings and the pathological report diagnosis result.
And S12, selecting and cutting the color ultrasound examination part in each group of the ultrasound images after lesion marking to form a data set.
Referring to fig. 4, fig. 4 is a schematic diagram of the frame-selected valid color ultrasound region. Specifically, each color ultrasound image is cropped: the color ultrasound examination region in each image is automatically located, marked with outer frame A, and cropped out.
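A minimal sketch of this cropping step is shown below; it is not the patent's exact implementation. The bounding box (x, y, w, h) is assumed to come from an upstream locator or from the manual frame selection shown as outer frame A in fig. 4, and the OpenCV dependency is an illustrative choice.

```python
import cv2


def crop_exam_region(image_path, bbox):
    """Return the frame-selected examination region as an RGB array."""
    x, y, w, h = bbox
    img = cv2.imread(image_path)            # full scanner capture, BGR
    roi = img[y:y + h, x:x + w]             # keep only the frame-selected part
    return cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)
```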
And S2, dividing the data set into a training set and a test set according to a preset proportion, and training a pre-constructed deep neural network model by using the training set to obtain an ultrasonic image detection training model.
Specifically, the pre-constructed deep neural network model comprises an input layer, a hidden layer and an output layer, wherein the hidden layer comprises a plurality of network layers, and each of the input layer, the hidden layer and the output layer comprises a plurality of neurons. The neurons of adjacent network layers may be fully connected or partially connected. Adjacent network layers are layers that the data passes through in sequence along the data flow direction.
In the embodiment of the present application, the preset ratio for dividing the data set may be 4:1, and in practical applications, the ratio may be set according to requirements, which is not limited herein.
Referring to fig. 5, the S2 further includes S21, S22 and S23.
S21, dividing the data set into a training set and a test set according to a preset proportion, wherein the training set is X = {(X_1, d_1), (X_2, d_2), ..., (X_i, d_i), ..., (X_m, d_m)}, in which X_i = {x_i1, x_i2, ..., x_ij, ..., x_iN}, N denotes the number of ultrasound images included in a group of ultrasound images, x_ij represents the j-th ultrasound image in the i-th group of ultrasound images, m represents the number of groups of ultrasound images in the training set, and d_i represents the lesion labeling result corresponding to the i-th group of ultrasound images.
It should be noted that N may be a natural number such as 2, 3, 4, 5, and 6, and the number of ultrasound images may vary in different ultrasound image groups, which is not limited herein.
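A minimal sketch of this group-level split follows, assuming each examination's images are stored in one folder and the group label comes from the lesion annotation; names such as exam_dirs and labels are illustrative, not from the patent.

```python
import random


def split_dataset(exam_dirs, labels, train_ratio=0.8, seed=0):
    """Split (X_i, d_i) groups into a training set and a test set.

    Splitting at the group level keeps all N images of one examination
    together, so a patient's exam never straddles the train/test boundary.
    """
    groups = list(zip(exam_dirs, labels))   # one entry per examination group
    random.Random(seed).shuffle(groups)
    cut = int(len(groups) * train_ratio)    # preset ratio, e.g. 4:1
    return groups[:cut], groups[cut:]
```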
And S22, performing an augmentation operation and a normalization operation on each group of ultrasonic images in the training set, wherein the augmentation operation comprises rotation, flipping and the like.
In order to give the deep neural network model better performance, in the training stage the augmentation operation is performed on each group of ultrasound images by methods such as rotation and flipping, so as to obtain a larger data set for training the deep neural network model.
Specifically, for each group of ultrasound images, the three-channel (RGB) normalized values of each ultrasound image are directly used as the input values of the deep neural network model. For the output layer of the deep neural network model, since the invention solves a two-class image classification problem, i.e. outputs two different classes, if the i-th sample belongs to class 1, the target output is expressed as d_i = [1, 0]^T; otherwise, the target output is expressed as d_i = [0, 1]^T.
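A minimal sketch of the augmentation, normalization and target encoding is given below using torchvision. The patent does not name a framework, so the library choice, rotation range and normalization statistics are illustrative assumptions.

```python
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=15),       # rotation augmentation
    transforms.RandomHorizontalFlip(p=0.5),      # flipping augmentation
    transforms.ToTensor(),                       # HWC image -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],   # three-channel (RGB) normalization
                         std=[0.5, 0.5, 0.5]),
])


def encode_target(label):
    """Two-class target: class 1 -> d_i = [1, 0]^T, otherwise d_i = [0, 1]^T."""
    return torch.tensor([1.0, 0.0]) if label == 1 else torch.tensor([0.0, 1.0])
```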
And S23, inputting the ultrasonic image into a pre-constructed deep neural network model, and optimizing the connection weight between the neurons by using a forward calculation and back propagation algorithm to obtain an ultrasonic image detection training model.
Further, the S23 includes S231, S232, S233, S234, and S235.
S231, inputting the ultrasonic image into a pre-constructed deep neural network model.
S232, the deep neural network model extracts the features of each ultrasound image to form the feature set F_i = {F_i1, F_i2, ..., F_iN}, where F_ij denotes the features of the j-th image of the i-th group of ultrasound images extracted at the L-th network layer.
Specifically, in the embodiment of the present invention, for a network comprising L layers, the connection weight matrix from the l-th layer to the (l+1)-th layer is W^l. Let the activation function of the neurons in the l-th layer be f(·), and perform the forward calculation successively from the input layer to the output layer. Taking a fully connected layer as an example, the process is:
a^(l+1) = f(W^l a^l),
where a_i^l denotes the activation value of the i-th neuron in the l-th network layer. The activation function adopts the ReLU (Rectified Linear Unit, linear rectification function), f(x) = max(0, x).
Calculating in sequence yields the feature set F_i = {F_i1, F_i2, ..., F_iN} of all images in sample X_i.
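A minimal sketch of the per-image feature extraction is shown below. The small CNN merely stands in for the patent's hidden layers; the point is that every image x_ij of a group passes through the same shared network, yielding its feature vector F_ij, with ReLU as the activation function f(·). The layer sizes are assumptions.

```python
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)    # a^(l+1) = f(W^l a^l)

    def forward(self, group):
        # group: (N, 3, H, W), the N images of one examination
        h = self.conv(group).flatten(1)      # (N, 32)
        return torch.relu(self.fc(h))        # (N, feat_dim) = {F_i1, ..., F_iN}
```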
S233, fusing the feature sets of the ultrasound images to obtain the fused feature F_i = Σ_{j=1}^{N} α_j F_ij of the group of ultrasound images, where α_j is the attention weight.
And S234, inputting the fusion characteristics to the output layer for classification processing to obtain network output.
Specifically, after the feature set is obtained, an attention mechanism is adopted to fuse the features of the multiple images and obtain the fused feature F_i of sample X_i. The process is:
F_i = Σ_{j=1}^{N} α_j F_ij,
where the attention weight set {α_1, α_2, ..., α_N} is obtained from an attention network.
The fused feature F_i is input into the classification network to obtain the output y_i of the last layer. The output layer adopts a softmax classifier and uses the Cross Entropy function as the Loss Function, which is
E = -d_i^T log(y_i),
where d_i^T is the transpose of d_i.
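A minimal sketch of the attention fusion and classification head follows. The attention network is reduced here to a single linear scorer followed by a softmax over the N images; its exact structure is an assumption, since the patent only states that the weights α_j come from an attention network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusionClassifier(nn.Module):
    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)           # one score per image
        self.cls = nn.Linear(feat_dim, num_classes)  # softmax classifier head

    def forward(self, feats):
        # feats: (N, feat_dim) = {F_i1, ..., F_iN} for one group X_i
        alpha = torch.softmax(self.attn(feats), dim=0)  # attention weights alpha_j
        fused = (alpha * feats).sum(dim=0)              # F_i = sum_j alpha_j F_ij
        return self.cls(fused)                          # output-layer logits


def group_loss(logits, class_index):
    """Cross-entropy loss E = -d_i^T log(y_i), computed from raw logits."""
    return F.cross_entropy(logits.unsqueeze(0), class_index.unsqueeze(0))
```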
And S235, updating the connection weight value by using a back propagation algorithm to obtain an ultrasonic image detection training model. S235 also includes S2351, S2352, S2353 and S2354.
Specifically:
S2351, calculating the sensitivity of the output layer:
δ^L = y_i - d_i,
where the sensitivity is defined with respect to the net input of the output layer and d_i is the target output.
S2352, taking the fully connected layer as an example, calculating the sensitivity of each layer sequentially from the output layer forward:
δ^l = (W^l)^T δ^(l+1) ⊙ f′(z^l),
where z^l is the net input of the l-th layer and ⊙ denotes element-wise multiplication.
S2353, calculating the corresponding gradient from the sensitivity:
ΔW^l = δ^(l+1) (a^l)^T.
S2354, updating the corresponding weights:
W^l ← W^l - η ΔW^l,
where η is the learning step size of the weight update.
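A minimal sketch of one training iteration is given below, reusing the FeatureExtractor and AttentionFusionClassifier sketches above. Autograd's loss.backward() performs the back-propagation described in S2351 to S2353, and the SGD step applies W^l ← W^l - ηΔW^l; the optimizer choice and the value of η are assumptions.

```python
import torch

extractor = FeatureExtractor()
head = AttentionFusionClassifier()
params = list(extractor.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)       # learning step size eta


def train_step(group_images, class_index):
    """group_images: (N, 3, H, W) for one examination; class_index: 0-dim long tensor."""
    optimizer.zero_grad()
    feats = extractor(group_images)                # forward calculation
    logits = head(feats)
    loss = group_loss(logits, class_index)         # cross-entropy on the group
    loss.backward()                                # sensitivities and gradients
    optimizer.step()                               # weight update
    return loss.item()
```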
And S3, inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model, and obtaining the ultrasonic image detection model.
Referring to fig. 6, S3 further includes S31, S32 and S33.
And S31, inputting the test set into the ultrasonic image detection training model to obtain a test result.
The test of the deep neural network model checks whether the designed neural network model, after training, can correctly classify data it has not learned, i.e. whether it can correctly judge from new thyroid color ultrasound images whether thyroid cancer is present in the images. This can be used to evaluate the performance of the designed deep neural network. During testing, the groups of thyroid color ultrasound images in the test set are input, the activation values of the neurons of the network output layer are calculated by the trained deep neural network, and the class of each group of images is predicted from these activation values. This process is carried out entirely independently of the training of the ultrasound image detection training model.
And S32, comparing the test result with the lesion labeling result, counting the number of the ultrasonic image groups with the test result consistent with the lesion labeling result, and obtaining the accuracy of the test result.
S33, comparing the accuracy with a preset value; if the accuracy is less than the preset value, continuing to train the ultrasonic image detection training model with the training set until the accuracy is greater than or equal to the preset value, and then taking the ultrasonic image detection training model as the ultrasonic image detection model.
Specifically, the test result output by the ultrasonic image detection training model is compared with the lesion labeling result, the number of correctly predicted sample groups is counted, and the accuracy is calculated. When the accuracy reaches the preset value, the training of the deep neural network for the thyroid color ultrasound image classification problem is finished; otherwise, the procedure returns to S2 to continue training the model, and the ultrasonic image detection training model is taken as the ultrasonic image detection model once the accuracy is greater than or equal to the preset value.
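A minimal sketch of this test-set evaluation follows: predict each group, compare with its lesion label and check the accuracy against a preset threshold. Here test_groups is assumed to be a list of (group_tensor, label) pairs, which is an illustrative data layout.

```python
import torch


@torch.no_grad()
def evaluate(extractor, head, test_groups, preset_accuracy=0.9):
    correct = 0
    for group_images, label in test_groups:
        logits = head(extractor(group_images))
        predicted = int(torch.argmax(logits))      # predicted class of the group
        correct += int(predicted == label)         # consistent with the labeling?
    accuracy = correct / max(len(test_groups), 1)
    return accuracy, accuracy >= preset_accuracy   # False -> keep training (S2)
```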
Referring to fig. 7, after the step of obtaining the ultrasound image detection model, in practical application the method further includes:
and S100, obtaining an original ultrasonic image.
S200, cutting a color ultrasound examination part in the original ultrasound image to form an image to be detected, wherein the original ultrasound image is a plurality of ultrasound images obtained by the same examination of the same patient.
Specifically, referring again to fig. 4, the color ultrasound inspection portion of each color ultrasound image is automatically positioned, marked with a red frame, and cropped.
S300, detecting the lesion of the image to be detected by using the ultrasonic image detection model to obtain a detection result.
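A minimal sketch of applying the trained model to a new examination is shown below: crop the examination region of each raw image, preprocess, stack the group and classify. The helpers reuse the earlier sketches, and transform is assumed to be the normalization-only preprocessing (ToTensor plus Normalize), not the training augmentation.

```python
import torch


@torch.no_grad()
def detect(raw_paths, bboxes, transform, extractor, head):
    """raw_paths, bboxes: the N images of one patient's single examination."""
    images = [transform(crop_exam_region(p, b)) for p, b in zip(raw_paths, bboxes)]
    group = torch.stack(images)                    # (N, 3, H, W)
    logits = head(extractor(group))
    return int(torch.argmax(logits))               # detected class for the group
```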
After the ultrasonic image detection model is trained, it can be used to judge, from a new ultrasound image to be detected whose result is unknown, whether the imaged region is diseased. This can save a large amount of manpower and material resources in medical diagnosis, relieve part of the radiology technologist's work, and facilitate the diagnosis and treatment work of hospitals with insufficient doctor resources, such as rural hospitals and township hospitals.
The present application further provides an ultrasound image detection apparatus 110, comprising:
the preprocessing module is used for selecting a plurality of groups of ultrasonic images and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient.
And the training module is used for dividing the data set into a training set and a test set according to a preset proportion, and training a pre-constructed deep neural network model by using the training set to obtain an ultrasonic image detection training model.
And the test module is used for inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model and obtaining the ultrasonic image detection model.
It can be understood that, for the specific operation method of each functional module in this embodiment, reference may be made to the detailed description of the corresponding step in the foregoing method embodiment, and no repeated description is provided herein.
A readable storage medium, in which a computer program is stored, which when executed implements the ultrasound image detection method described above.
To sum up, according to the ultrasound image detection method, the ultrasound image detection device, the electronic device 100 and the readable storage medium provided by the embodiments of the present application, multiple groups of acquired ultrasound images are preprocessed, wherein each group of ultrasound images includes multiple ultrasound images and the multiple ultrasound images in the same group are obtained from the same examination of the same patient; after the preprocessed data are divided according to a preset proportion, the pre-constructed deep neural network model is trained and tested respectively until the test effect of the ultrasound image detection model reaches the expected effect. The scheme fuses multiple ultrasound image data of the same patient from the same examination, effectively reduces the loss of intermediate information and makes the detection result more accurate.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An ultrasound image detection method, comprising:
selecting a plurality of groups of ultrasonic images, and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient;
dividing the data set into a training set and a test set according to a preset proportion, and training a pre-constructed deep neural network model by using the training set to obtain an ultrasonic image detection training model;
and inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model, and obtaining the ultrasonic image detection model.
2. The method of claim 1, wherein the step of selecting a plurality of groups of ultrasound images, and preprocessing each group of ultrasound images to obtain a data set comprises:
selecting a plurality of groups of ultrasonic images, carrying out lesion marking on each group of ultrasonic images according to the obtained pathological detection report and diagnosis result, and obtaining a lesion marking result;
and selecting frames and cutting the color Doppler ultrasound detection parts in the ultrasonic images after lesion marking to form a data set.
3. The method according to claim 2, wherein the pre-constructed deep neural network model comprises an input layer, a hidden layer and an output layer, the hidden layer comprises a plurality of network layers, and each of the input layer, the hidden layer and the output layer comprises a plurality of neurons.
4. The method of claim 3, wherein the step of dividing the data set into a training set and a test set according to a predetermined ratio, training a pre-constructed deep neural network model with the training set, and obtaining the training model for ultrasonic image detection comprises:
dividing the data set into a training set and a test set according to a preset proportion, wherein the training set is X = {(X_1, d_1), (X_2, d_2), ..., (X_i, d_i), ..., (X_m, d_m)}, in which X_i = {x_i1, x_i2, ..., x_ij, ..., x_iN}, N denotes the number of ultrasound images included in a group of ultrasound images, x_ij represents the j-th ultrasound image in the i-th group of ultrasound images, m represents the number of groups of ultrasound images in the training set, and d_i represents the lesion labeling result corresponding to the i-th group of ultrasound images;
carrying out augmentation operation and normalization operation on each group of ultrasonic images in the training set, wherein the augmentation operation comprises rotation and turnover;
and inputting the ultrasonic image into a pre-constructed deep neural network model, and optimizing the connection weight between the neurons by using a forward calculation and back propagation algorithm to obtain an ultrasonic image detection training model.
5. The method of claim 4, wherein the step of inputting the ultrasound image into a pre-constructed deep neural network model, and optimizing the connection weights between the neurons by using a forward computation and back propagation algorithm to obtain an ultrasound image detection training model comprises:
inputting the ultrasonic image into a pre-constructed deep neural network model;
the deep neural network model extracts the features of each ultrasound image to form the feature set F_i = {F_i1, F_i2, ..., F_iN}, where F_ij denotes the features extracted from the j-th image of the i-th group of ultrasound images at the L-th network layer;
fusing the features of the ultrasound images to obtain the fused feature F_i = Σ_{j=1}^{N} α_j F_ij of the group of ultrasound images, wherein α_j is the attention weight;
inputting the fusion characteristics to the output layer for classification processing to obtain network output;
and updating the connection weight by using a back propagation algorithm to obtain an ultrasonic image detection training model.
6. The method according to claim 1, wherein the preprocessing includes lesion labeling of a plurality of groups of ultrasound images, and the step of inputting the test set into the ultrasound image detection training model, testing the ultrasound image detection training model, and obtaining the ultrasound image detection model includes:
inputting the test set into the ultrasonic image detection training model to obtain a test result;
comparing the test result with the lesion marking result, counting the number of the ultrasonic image groups with the test result consistent with the lesion marking result, and obtaining the accuracy of the test result;
and comparing the accuracy with a preset value, if the accuracy is smaller than the preset value, training the ultrasonic image detection training model by using the training set until the accuracy is larger than or equal to the preset value, and using the ultrasonic image detection training model as an ultrasonic image detection model.
7. The method of claim 1, wherein after the step of obtaining the ultrasound image inspection model, the method further comprises:
obtaining an original ultrasonic image;
cutting a color Doppler ultrasound detection part in the original ultrasound image to form an image to be detected, wherein the original ultrasound image is a plurality of ultrasound images obtained by the same examination of the same patient;
and detecting the lesion of the image to be detected by using the ultrasonic image detection model to obtain a detection result.
8. An ultrasound image detection apparatus, comprising:
the preprocessing module is used for selecting a plurality of groups of ultrasonic images and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient;
the training module is used for dividing the data set into a training set and a test set according to a preset proportion, and training a pre-constructed deep neural network model by using the training set to obtain an ultrasonic image detection training model;
and the test module is used for inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model and obtaining the ultrasonic image detection model.
9. An electronic device, characterized in that the electronic device comprises:
a storage medium;
a processor;
an ultrasound image detection device stored in the storage medium and including software functional modules executed by the processor, the device comprising:
the preprocessing module is used for selecting a plurality of groups of ultrasonic images and preprocessing each group of ultrasonic images to obtain a data set, wherein each group of ultrasonic images comprises a plurality of ultrasonic images, and the plurality of ultrasonic images in the same group of ultrasonic images are obtained by the same examination of the same patient;
the training module is used for dividing the data set into a training set and a test set according to a proportion, and training a deep neural network model constructed in advance by using the training set to obtain an ultrasonic image detection training model;
and the test module is used for inputting the test set into the ultrasonic image detection training model, testing the ultrasonic image detection training model and obtaining the ultrasonic image detection model.
10. A readable storage medium, wherein a computer program is stored in the readable storage medium, which when executed, implements the ultrasound image detection method of any of claims 1-7.
CN201811512039.6A 2018-12-11 2018-12-11 Ultrasound image detection method, device, electronic equipment and readable storage medium Pending CN109671062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811512039.6A CN109671062A (en) 2018-12-11 2018-12-11 Ultrasound image detection method, device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811512039.6A CN109671062A (en) 2018-12-11 2018-12-11 Ultrasound image detection method, device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN109671062A true CN109671062A (en) 2019-04-23

Family

ID=66143700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811512039.6A Pending CN109671062A (en) Ultrasound image detection method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN109671062A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945870A (en) * 2017-12-13 2018-04-20 四川大学 Retinopathy of prematurity detection method and device based on deep neural network
CN108154505A (en) * 2017-12-26 2018-06-12 四川大学 Diabetic retinopathy detection method and device based on deep neural network
CN108288027A (en) * 2017-12-28 2018-07-17 新智数字科技有限公司 A kind of detection method of picture quality, device and equipment
CN108230311A (en) * 2018-01-03 2018-06-29 四川大学 A kind of breast cancer detection method and device
CN108537135A (en) * 2018-03-16 2018-09-14 北京市商汤科技开发有限公司 The training method and device of Object identifying and Object identifying network, electronic equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136066A (en) * 2019-05-23 2019-08-16 北京百度网讯科技有限公司 Super-resolution method, device, equipment and storage medium towards video
CN110136066B (en) * 2019-05-23 2023-02-24 北京百度网讯科技有限公司 Video-oriented super-resolution method, device, equipment and storage medium
CN110378888A (en) * 2019-07-22 2019-10-25 新名医(北京)科技有限公司 A kind of physiology phase monitoring method, device, ultrasonic device and storage medium
CN110378888B (en) * 2019-07-22 2022-01-25 新名医(北京)科技有限公司 Physiological period monitoring method and device, ultrasonic equipment and storage medium
CN111275689A (en) * 2020-01-20 2020-06-12 平安科技(深圳)有限公司 Medical image identification and detection method and device and computer readable storage medium
WO2021147218A1 (en) * 2020-01-20 2021-07-29 平安科技(深圳)有限公司 Medical image recognition and analysis method and apparatus, device and storage medium
CN111275689B (en) * 2020-01-20 2024-09-13 平安科技(深圳)有限公司 Medical image recognition detection method, device and computer readable storage medium
CN113609971A (en) * 2021-08-04 2021-11-05 广州威拓电子科技有限公司 Method, device and equipment for inspecting microseism observation equipment and storage medium
CN113842166A (en) * 2021-10-25 2021-12-28 上海交通大学医学院 Ultrasound image acquisition method and related device based on ultrasound imaging equipment

Similar Documents

Publication Publication Date Title
CN109671062A (en) Ultrasound image detection method, device, electronic equipment and readable storage medium
Kisilev et al. From medical image to automatic medical report generation
Bellotti et al. A completely automated CAD system for mass detection in a large mammographic database
Ionescu et al. Prediction of reader estimates of mammographic density using convolutional neural networks
Deng et al. A classification–detection approach of COVID-19 based on chest X-ray and CT by using keras pre-trained deep learning models
CN112070231A (en) Data slicing for machine learning performance testing and improvement
Khanna et al. Radiologist-level two novel and robust automated computer-aided prediction models for early detection of COVID-19 infection from chest X-ray images
Goel et al. The effect of machine learning explanations on user trust for automated diagnosis of COVID-19
Zhang et al. Explainability metrics of deep convolutional networks for photoplethysmography quality assessment
CN117095241B (en) Screening method, system, equipment and medium for drug-resistant phthisis class
CN112508884A (en) Comprehensive detection device and method for cancerous region
Denzinger et al. Automatic CAD-RADS scoring using deep learning
Kaliyugarasan et al. Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI
Zhang Computer-aided diagnosis for pneumoconiosis staging based on multi-scale feature mapping
Vogado et al. A ensemble methodology for automatic classification of chest X-rays using deep learning
Abdalla et al. Transfer learning models comparison for detecting and diagnosing skin cancer
Balajee et al. Machine learning based identification and classification of disorders in human knee joint–computational approach
Xu et al. Data-driven decision model based on dynamical classifier selection
Deng et al. Ai-empowered computational examination of chest imaging for covid-19 treatment: A review
Murty et al. Integrative hybrid deep learning for enhanced breast cancer diagnosis: leveraging the Wisconsin Breast Cancer Database and the CBIS-DDSM dataset
Conforti et al. Kernel-based support vector machine classifiers for early detection of myocardial infarction
GALAGAN et al. Automation of polycystic ovary syndrome diagnostics through machine learning algorithms in ultrasound imaging
CN114529759B (en) Thyroid nodule classification method and device and computer readable medium
CN111768367B (en) Data processing method, device and storage medium
Zheng et al. Assessing accuracy of mammography in the presence of verification bias and intrareader correlation

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20190423)