CN117392040A - Standard section identification method, system, device and storage medium - Google Patents


Info

Publication number
CN117392040A
CN117392040A
Authority
CN
China
Prior art keywords
image
frame
model
identified
ultrasonic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210759731.9A
Other languages
Chinese (zh)
Inventor
王�忠
薛隆基
甘从贵
骆伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chison Medical Technologies Co ltd
Original Assignee
Chison Medical Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chison Medical Technologies Co ltd filed Critical Chison Medical Technologies Co ltd
Priority to CN202210759731.9A priority Critical patent/CN117392040A/en
Publication of CN117392040A publication Critical patent/CN117392040A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a system, a device and a storage medium for identifying a standard section, wherein the method comprises the following steps: acquiring an ultrasonic image sequence to be identified, and obtaining a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained target combination model; according to the ultrasonic image sequence to be identified and the saliency detection image sequence, identifying the probability that each frame of ultrasonic image to be identified belongs to each standard tangent plane category through a tangent plane identification model in a pre-trained target combination model; and determining whether the corresponding ultrasonic image to be identified is a standard section of a certain category according to the probability that each frame of ultrasonic image to be identified belongs to each standard section category. The technical scheme provided by the invention can improve the identification efficiency of the standard section.

Description

Standard section identification method, system, device and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a system, a device and a storage medium for identifying a standard section.
Background
Medical ultrasound examination is a medical imaging diagnostic technique based on ultrasound, which visualizes muscle and internal organs (including the size, structure and pathological lesions thereof), and is widely used in prenatal diagnosis and the like due to its advantages of no radiation, non-invasiveness, reliability, low cost, and the like.
However, in practical applications, many examination items require the doctor to scan with the relevant scanning device for some time before the standard section of the corresponding ultrasound image can be found, after which medical diagnosis is performed on that standard section. Examples include the measurement of fetal growth parameters in malformation screening, bladder volume measurement, and the like.
It is therefore important to identify the standard section of the ultrasound image quickly, accurately and automatically, since the standard section found directly affects the final examination result. Searching for the standard section of the ultrasound image, however, requires extensive clinical examination and diagnosis experience, and even a highly experienced doctor may need considerable time to find the corresponding standard section.
In view of this, there is a need for a method of identifying a standard cut surface that improves the efficiency of determining a standard cut surface in an ultrasound image.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method, a system, a device and a storage medium for identifying a standard tangent plane, so as to solve the technical problem in the prior art that the efficiency is relatively low when determining the standard tangent plane in an ultrasound image in the medical ultrasound examination process.
In one aspect, the present invention provides a method for identifying a standard tangential plane, the method comprising: acquiring an ultrasonic image sequence to be identified, and obtaining a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained combined model; according to the ultrasonic image sequence to be identified and the saliency detection image sequence, identifying the probability that each frame of ultrasonic image to be identified belongs to each standard tangent plane category through a tangent plane identification model in a pre-trained target combination model; and determining whether the corresponding ultrasonic image to be identified is a standard section of a certain category according to the probability that each frame of ultrasonic image to be identified belongs to each standard section category.
In another aspect, the present invention further provides a system for identifying a standard section, where the system includes: the acquisition module is used for acquiring an ultrasonic image sequence to be identified and acquiring a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained combined model; the identification module is used for carrying out image fusion processing on the ultrasonic image sequence to be identified and the saliency detection image sequence to obtain a fusion image sequence, and identifying the probability that each frame of fusion image in the fusion image sequence belongs to each standard section category through a section identification model in a pre-trained target combination model; and the determining module is used for determining whether each frame of ultrasonic image in the ultrasonic image sequence to be identified corresponds to a standard tangent plane of a certain category according to the probability that each frame of fused image in the fused image sequence belongs to each standard tangent plane category.
The invention also provides a device for identifying the standard tangent plane, which comprises a processor and a memory, wherein the memory is used for storing a computer program, and the computer program realizes the method for identifying the standard tangent plane when being executed by the processor.
In another aspect, the present invention further provides a computer readable storage medium, where the computer readable storage medium is used to store a computer program, where the computer program when executed by a processor implements the method for identifying a standard tangent plane as described above.
According to the technical scheme, the saliency detection model in the pre-trained combined model can be used to predict a saliency detection image for each frame of ultrasonic image to be identified, where the saliency detection image indicates which regions of the ultrasonic image to be identified deserve attention. The saliency detection image sequence and the ultrasonic image sequence to be identified are then used as the input of the section identification model in the pre-trained target combination model to obtain the probability that each frame of ultrasonic image to be identified is a standard section of each type. After these probabilities are obtained, the target frame ultrasonic image in the ultrasonic image sequence to be identified is determined to be the standard section of the corresponding type. Thus, the standard section can be identified quickly and accurately from the ultrasonic images acquired by the detection equipment.
Therefore, through the mode, the saliency characteristic of each frame of ultrasonic image in the ultrasonic image sequence to be identified can be detected through the saliency detection model in the trained combined model, the saliency detection image is generated, the saliency characteristic is compared through the section identification model in the pre-trained target combined model, so that the probability that each frame of ultrasonic image to be identified belongs to a certain type of standard section can be accurately determined, the standard section is identified from the ultrasonic image sequence to be identified, and the identification efficiency of the standard section is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a schematic diagram of a method for identifying a standard cut surface in one embodiment of the invention;
FIG. 2 is a flow chart of a method for identifying a standard cut surface according to one embodiment of the invention;
FIG. 3 is a schematic diagram of a significance detection portion of a method of identifying a standard cut surface in accordance with one embodiment of the invention;
FIG. 4 shows a partial schematic view of a method of identifying a normal cut surface in one embodiment of the invention;
FIG. 5 shows a functional block diagram of a system for identifying a normal cut surface in accordance with one embodiment of the present invention;
fig. 6 is a schematic diagram showing the structure of a device for identifying a standard cut surface according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, based on the embodiments of the invention, which a person skilled in the art would obtain without making any inventive effort, are within the scope of the invention.
The identification method of the standard section can be applied to imaging equipment of ultrasonic detection images and can also be applied to a processing server which is in communication connection with acquisition equipment of the ultrasonic detection images. By using the method provided by the application, the standard section in the ultrasonic detection image can be effectively identified.
Referring to fig. 1, the method for identifying a standard tangent plane according to an embodiment of the present application may include the following steps:
s1: and acquiring an ultrasonic image sequence to be identified, and obtaining a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained combined model.
Referring to fig. 2, in the present embodiment, the pre-trained combined model includes a saliency detection model and a section identification model. The ultrasound image to be identified may be an ultrasound image containing human organ tissue (e.g., heart, lung) acquired by an image acquisition device in a medical ultrasound examination system. The sequence of ultrasound images to be identified may be a collection of consecutively acquired ultrasound images, in particular an ultrasound video. The saliency detection image may be a salient-feature image corresponding to the ultrasound image to be identified; in practical application it may be a binary image or an image with continuous pixel values. In the ultrasonic image sequence to be identified, at least one frame of ultrasonic image is a corresponding standard tangent plane. To find such frames in the sequence timely and effectively, each frame of ultrasonic image needs to be detected accurately to obtain its salient features, so that the probability of being a standard tangent plane can be calculated from those features.
S3: and identifying the probability that each frame of ultrasonic image to be identified belongs to each standard tangent plane category through a tangent plane identification model in a pre-trained target combination model according to the ultrasonic image sequence to be identified and the saliency detection image sequence.
Referring to fig. 2, in the present embodiment, the image sequence is prepared according to the input format required by the tangent plane identification model in the pre-trained target combination model. Specifically, in practical application, each frame of saliency detection image may be fused into the corresponding frame of ultrasonic image to be identified according to a preset weight, so as to obtain each frame of fused image. The fused image of the corresponding frame is input into the tangent plane identification model in the pre-trained target combination model, and the probability that the corresponding ultrasonic image to be identified belongs to each standard tangent plane category is then calculated. For example, if the input of the tangent plane identification model in the pre-trained target combination model is sixteen frames of fused images, the fused image sequence is split into a plurality of sub-sequences, each containing sixteen frames of fused images; if the input is thirty-two frames of fused images, the fused image sequence is split into sub-sequences of thirty-two frames each. Each sub-sequence of fused images is input one by one into the tangent plane identification model in the pre-trained target combination model, and the probability that each ultrasonic image to be identified corresponding to the sub-sequence belongs to each standard tangent plane category is calculated.
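The weighted fusion and fixed-length splitting described above can be sketched in plain Python as follows; the fusion weight, frame size and clip length are illustrative assumptions, not values taken from the patent:

```python
# Sketch: fuse each saliency map into its ultrasound frame by a preset
# weight, then split the fused sequence into fixed-length sub-sequences
# (e.g. 16 frames) matching the section-identification model's input.
# All names and the weight value are illustrative assumptions.

def fuse_frame(ultrasound, saliency, w=0.3):
    """Pixel-wise weighted fusion of one frame (given as lists of floats)."""
    return [(1 - w) * u + w * s for u, s in zip(ultrasound, saliency)]

def split_into_clips(frames, clip_len=16):
    """Split a frame sequence into consecutive clips of clip_len frames,
    dropping a trailing incomplete clip."""
    return [frames[i:i + clip_len]
            for i in range(0, len(frames) - clip_len + 1, clip_len)]

# Example: 40 fused "frames" of 4 pixels each -> two 16-frame clips.
frames = [fuse_frame([1.0] * 4, [0.5] * 4) for _ in range(40)]
clips = split_into_clips(frames, clip_len=16)
```

With a 32-frame model input, only `clip_len` would change; the fusion step is unaffected.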
S5: and determining whether the corresponding ultrasonic image to be identified is a standard section of a certain category according to the probability that each frame of ultrasonic image to be identified belongs to each standard section category.
Referring to fig. 2, in this embodiment, the categories of the standard tangent plane are adapted to the pre-trained target combination model corresponding to the ultrasonic image sequence to be identified. Specifically, if the ultrasonic image sequence to be identified is an ultrasound video of the heart, then the categories in the pre-trained target combination model contain all categories of standard sections corresponding to cardiac ultrasound images. For example, the categories of standard sections of cardiac ultrasound images include: the left heart long axis section, the heart bottom short axis section, the left chamber short axis section and the four-chamber heart section. In practical application, if a standard section belonging to the left heart long axis section is to be determined from the ultrasonic images to be identified, the target frame image is determined to be a standard section of the left heart long axis section according to the probability that each frame of the ultrasonic images to be identified belongs to that category.
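As an illustration of this decision step, a frame can be assigned to the standard-section category with the highest predicted probability, optionally subject to a minimum-confidence threshold. The class names and the threshold value below are assumptions for illustration only:

```python
# Sketch: pick, for each frame, the standard-section category with the
# highest probability; report no category when the best probability is
# below a threshold. Class names and threshold are illustrative.

CLASSES = ["left heart long axis", "heart bottom short axis",
           "left chamber short axis", "four-chamber heart"]

def classify_frame(probs, threshold=0.5):
    """probs: per-class probabilities for one frame."""
    best = max(range(len(probs)), key=lambda k: probs[k])
    return CLASSES[best] if probs[best] >= threshold else None

frame_probs = [0.05, 0.10, 0.05, 0.80]
result = classify_frame(frame_probs)
```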
In this embodiment, the saliency detection model in the pre-trained combined model may be used to predict a saliency detection image for each frame of ultrasonic image to be identified, where the saliency detection image indicates which regions of the ultrasonic image to be identified deserve attention. The saliency detection image sequence and the ultrasonic image sequence to be identified are then used as the input of the section identification model in the pre-trained target combination model to obtain the probability that each frame of ultrasonic image to be identified is a standard section of each type. After these probabilities are obtained, the target frame ultrasonic image in the ultrasonic image sequence to be identified is determined to be the standard section of the corresponding type. Thus, the standard section can be identified quickly and accurately from the ultrasonic images acquired by the detection equipment.
Therefore, according to the method and the device, the saliency characteristic of each frame of ultrasonic image in the ultrasonic image sequence to be identified can be detected through the saliency detection model in the trained combined model, the saliency detection image is generated, the saliency characteristic is compared through the section identification model in the pre-trained target combined model, so that the probability that each frame of ultrasonic image to be identified belongs to a certain type of standard section can be accurately determined, the standard section is identified from the ultrasonic image sequence to be identified, and the identification efficiency of the standard section is improved.
In one embodiment, weights are set for a first loss function of the undetermined saliency detection model and a second loss function of the undetermined tangent plane identification model to obtain a combined loss function. Specifically, the combined loss function is obtained by the formula

loss_s = α · loss_1 + β · loss_2

where loss_s is the combined loss function, loss_1 is the first loss function, loss_2 is the second loss function, and α and β are hyperparameters that respectively determine the contribution of the first loss function and the second loss function to the combined loss function.
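Numerically, the combined loss is a weighted sum; a minimal sketch (the α and β values below are assumptions, since the patent leaves them as tunable hyperparameters):

```python
def combined_loss(loss1, loss2, alpha=0.5, beta=0.5):
    """loss_s = alpha * loss_1 + beta * loss_2, where alpha and beta set
    the contribution of the saliency-detection loss and the
    section-identification loss respectively."""
    return alpha * loss1 + beta * loss2

# Example with assumed hyperparameter values.
loss_s = combined_loss(0.8, 0.4, alpha=0.6, beta=0.4)
```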
In this embodiment, the combined loss function is used as a loss function, a combined model composed of a primary trained pending significance detection model and a primary trained pending tangent plane identification model is trained for the second time, and when the combined loss function reaches a set threshold, a corresponding combined model is determined to be a target combined model.
In one embodiment, an eye tracker is installed on an ultrasonic device, and a training method of the once-trained pending saliency detection model includes: taking the multi-frame ultrasonic video images obtained by parsing the ultrasonic video acquired within a preset time period as input to obtain a corresponding saliency detection image sequence; processing the attention map corresponding to each frame of ultrasonic video image obtained by the eye tracker with a two-dimensional truncated Gaussian kernel function to obtain a corresponding attention pixel map sequence with pixel values within [0,1]; calculating the similarity between the corresponding attention pixel map sequence and the corresponding saliency detection image sequence with a preset mathematical model; and when the similarity is smaller than a preset threshold, determining the corresponding saliency detection model as the trained undetermined saliency detection model.
In this embodiment, an eye tracker is mounted on the ultrasound apparatus to obtain the attention map corresponding to each frame of ultrasound video image acquired in real time. Specifically, the region of each frame of ultrasonic video image that the doctor attends to is recorded, yielding the attention map corresponding to that frame. For example, for an ultrasound video image Vx the attention map is Ax, where x identifies the frame; the ultrasound video image Vx and the attention map Ax have the same size. In the attention map, the pixel value of the region the doctor attends to is 1 and the pixel values of the other regions are 0. After the attention map is processed by the two-dimensional truncated Gaussian kernel function, the resulting attention pixel map has values close to 1 near the doctor's gaze point, values decaying toward 0 with distance from the gaze point, and values that remain 0 outside the truncation region.
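One way to realize the two-dimensional truncated Gaussian processing described above is sketched below; the kernel width `sigma` and the truncation `radius` are assumed parameters, not values specified in the patent:

```python
import math

def truncated_gaussian_attention(h, w, gaze_points, sigma=2.0, radius=5):
    """Turn binary gaze points into an attention pixel map in [0, 1]:
    pixels at a gaze point get value 1, values decay with a Gaussian up
    to `radius` pixels away, and stay 0 beyond the truncation radius."""
    amap = [[0.0] * w for _ in range(h)]
    for (gy, gx) in gaze_points:
        for i in range(max(0, gy - radius), min(h, gy + radius + 1)):
            for j in range(max(0, gx - radius), min(w, gx + radius + 1)):
                d2 = (i - gy) ** 2 + (j - gx) ** 2
                if d2 <= radius ** 2:  # truncation: zero outside radius
                    val = math.exp(-d2 / (2 * sigma ** 2))
                    amap[i][j] = max(amap[i][j], val)
    return amap

# Example: a single gaze point at the center of an 11x11 frame.
amap = truncated_gaussian_attention(11, 11, [(5, 5)])
```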
In one embodiment, the mathematical model that calculates the similarity between the corresponding sequence of attention pixel maps and the corresponding sequence of saliency detection images is:

D_KL(M‖A) = (1/T) · Σ_{t=1..T} Σ_{i=1..h} Σ_{j=1..w} m_ij^t · log(m_ij^t / a_ij^t)

where h is the length of the saliency detection image, w is the width of the saliency detection image, T is a preset frame number parameter, a_ij^t is the pixel value of the i-th row and j-th column in the attention pixel map of the t-th frame, m_ij^t is the pixel value of the i-th row and j-th column in the t-th frame saliency detection image, and D_KL(M‖A) is the relative entropy of the saliency detection image sequence M and the attention pixel map sequence A. The length of the saliency detection image is consistent with the length of the attention pixel map, and the width of the saliency detection image is consistent with the width of the attention pixel map.
In the present embodiment, the KL divergence loss function measures the difference between the saliency detection image sequence and the corresponding attention pixel map sequence: the larger the calculated relative entropy value, the larger the difference between the two sequences. For the trained undetermined saliency detection model, the relative entropy value approaches a local minimum and lies within a preset threshold interval. Training the undetermined saliency detection model with this KL-divergence-measured difference makes the saliency detection images predicted by the trained saliency detection model more accurate, which further improves the identification efficiency of the standard tangent plane.
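The per-pixel relative entropy described here can be computed as in the following sketch; the small epsilon added for numerical stability is an assumption on our part, not part of the patent's formulation:

```python
import math

def kl_divergence(M, A, eps=1e-8):
    """Relative entropy D_KL(M || A) between a saliency-detection image
    sequence M and an attention pixel-map sequence A, both given as
    [frame][row][col] values in [0, 1], averaged over the T frames.
    eps avoids log(0) and division by zero (added assumption)."""
    T = len(M)
    total = 0.0
    for t in range(T):
        for row_m, row_a in zip(M[t], A[t]):
            for m, a in zip(row_m, row_a):
                total += (m + eps) * math.log((m + eps) / (a + eps))
    return total / T

# Identical sequences have zero relative entropy.
identical = [[[0.2, 0.8], [0.5, 0.5]]]
d0 = kl_divergence(identical, identical)
```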
Referring to FIG. 3, in one embodiment, the undetermined saliency detection model comprises: an image size processing module for processing each frame of ultrasonic video image into different sizes; a plurality of groups of processing modules, each comprising a preset number and size of convolution layers and downsampling layers and taking the video frame image of the corresponding size as input; an upsampling module taking the set formed by the outputs of the plurality of groups of processing modules as input; and a recurrent module that takes the output of the upsampling module as input and, cyclically taking its own output as input, outputs the saliency detection map.
In this embodiment, a recurrent module is provided to extract the temporal features of each frame of ultrasonic video image, so that the saliency detection map corresponding to a frame can be obtained accurately from the adjacent ultrasonic video images. Compared with predicting the saliency detection image from a single ultrasonic video frame alone, prediction with the recurrent neural network yields higher accuracy, further improving the identification efficiency of the standard tangent plane.
Referring to fig. 4, in one embodiment, according to the ultrasound image sequence to be identified and the saliency detection image sequence, the identifying the probability that each frame of ultrasound image to be identified belongs to each standard section category through a section identification model in a pre-trained target combination model includes: fusing each frame of ultrasonic image to be identified with a corresponding significance detection image according to preset weight to obtain a corresponding fused image of each frame; each frame of fusion image is subjected to a plurality of convolution layers, a normalization layer, an activation layer and a downsampling layer to obtain the characteristics of each frame of fusion image with a certain size; and inputting the characteristics of each frame of fusion image into a cyclic neural network identification module, and outputting the probability that each frame of ultrasonic image to be identified belongs to each standard section class after passing through the global pooling layer.
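The final pooling-and-probability step of this pipeline can be illustrated as follows. This is a pure-Python sketch under assumed feature shapes; a real implementation would use a deep-learning framework, and the feature values here are invented for illustration:

```python
import math

def global_average_pool(feature_map):
    """feature_map: [channel][position] activations -> one value per
    channel, as produced by a global pooling layer."""
    return [sum(ch) / len(ch) for ch in feature_map]

def softmax(logits):
    """Convert per-class scores into probabilities that sum to 1."""
    mx = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - mx) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Assumed 4-channel features for one fused frame, one channel per
# standard-section category.
features = [[0.2, 0.4], [1.0, 1.2], [0.1, 0.1], [2.0, 2.2]]
probs = softmax(global_average_pool(features))
```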
In one embodiment, a training method for a model for identifying a section to be determined includes:
selecting, from each ultrasonic video image, a continuous preset number of frames at a preset interval from the preset standard section frames as a first part, and selecting frames at a second interval from the other frames as a second part; combining the first part and the second part of ultrasonic video frame images into target ultrasonic video frame images; scaling the target ultrasonic video frame images to different sizes and normalizing them as input, randomly initializing the parameters of the neural network model, and iteratively training with the Adam optimization algorithm; and after the second loss function reaches a second preset value, taking the corresponding section identification model as the section identification model to be determined. The second loss function is:
loss_2 = -(1 - y_pred)^γ · log(y_pred)

where y_pred reflects how close the prediction is to the ground truth class y: a larger value indicates a prediction closer to class y and a more accurate classification. γ > 0 is an adjustable factor that dynamically reduces the weight of easily distinguishable samples during training, so that training quickly focuses on the samples that are difficult to distinguish.
In this embodiment, the second loss function may be a cross entropy loss function with the penalty coefficient (1 - y_pred)^γ added to the corresponding term according to the prediction probability: the larger the probability of the predicted category, the smaller the penalty coefficient and the smaller the resulting cross entropy loss. That is, simple samples are suppressed so that effort is concentrated on distinguishing difficult samples. This improves the training efficiency of the to-be-determined tangent plane identification model, makes the probability results of the trained tangent plane identification model more accurate, and further improves the identification efficiency of the standard tangent plane.
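The down-weighting of easy samples by the (1 - y_pred)^γ factor can be seen numerically in a small sketch (the γ value is an assumed setting):

```python
import math

def focal_style_loss(y_pred, gamma=2.0):
    """loss_2 = -(1 - y_pred)**gamma * log(y_pred): the penalty factor
    (1 - y_pred)**gamma shrinks as the predicted probability of the
    correct class grows, so confident (easy) samples contribute far
    less loss than uncertain (hard) ones."""
    return -((1 - y_pred) ** gamma) * math.log(y_pred)

easy = focal_style_loss(0.9)  # confident, correct prediction
hard = focal_style_loss(0.2)  # uncertain prediction
```

Compared with plain cross entropy -log(y_pred), the easy sample's loss is scaled down by (1 - 0.9)^2 = 0.01, which is the suppression effect described above.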
Referring to fig. 5, an embodiment of the present application further provides a system for identifying a standard tangent plane, where the system includes:
the acquisition module is used for acquiring an ultrasonic image sequence to be identified and acquiring a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained combined model;
the identification module is used for carrying out image fusion processing on the ultrasonic image sequence to be identified and the saliency detection image sequence to obtain a fusion image sequence, and identifying the probability that each frame of fusion image in the fusion image sequence belongs to each standard section category through a section identification model in a pre-trained target combination model;
and the determining module is used for determining whether each frame of ultrasonic image in the ultrasonic image sequence to be identified corresponds to a standard tangent plane of a certain category according to the probability that each frame of fused image in the fused image sequence belongs to each standard tangent plane category.
The standard section identification system provided in this embodiment may be used to execute the standard section identification method described above; its implementation principle and technical effects are similar, and the relevant details can be found in the method embodiment, so they are not repeated here. It should be noted that the division into the functional modules described above is only used for illustration. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the standard section identification system may be divided into different functional modules to complete all or part of the functions described above. In addition, the standard section identification system and the standard section identification method provided in this embodiment belong to the same concept, and the detailed implementation process of the system is shown in the method embodiment and is not repeated here.
Referring to fig. 6, an embodiment of the present application further provides a device for identifying a standard tangent plane, where the device for identifying a standard tangent plane includes a processor and a memory, where the memory is configured to store a computer program, and when the computer program is executed by the processor, the method for identifying a standard tangent plane is implemented.
The processor may be a central processing unit (Central Processing Unit, CPU). The processor may also be any other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory, the processor executes the various functional applications and data processing of the device, i.e., implements the methods of the method embodiments described above.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required by a function, and the data storage area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some implementations, the memory optionally includes memory remotely located relative to the processor, and the remote memory may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement the method for identifying a standard tangent plane as described above.
According to the technical scheme, the saliency detection model in the pre-trained combined model can be used to predict a saliency detection image for each frame of the ultrasonic image to be identified; the saliency detection image indicates which regions of the ultrasonic image to be identified attract attention. The saliency detection image sequence and the ultrasonic image sequence to be identified are then used as the input of the section identification model in the pre-trained target combination model to obtain the probability that each frame of ultrasonic image to be identified is a standard section of each type. After these probabilities are obtained, the target frame ultrasonic images in the ultrasonic image sequence to be identified are determined to be standard sections of the corresponding type. In this way, standard sections can be identified quickly and accurately from the ultrasonic images acquired by the device.
Therefore, in this manner, the saliency detection model in the trained combined model detects the saliency features of each frame of ultrasonic image in the ultrasonic image sequence to be identified and generates the saliency detection images, and the section identification model in the pre-trained target combination model compares these saliency features, so that the probability that each frame of ultrasonic image to be identified belongs to a certain category of standard section can be determined accurately. The standard sections are thus identified from the ultrasonic image sequence to be identified, and the identification efficiency of standard sections is improved.
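The final decision step of the scheme above can be sketched as follows, assuming per-frame class probabilities produced by the section identification model; the decision threshold of 0.5 and the category names are illustrative assumptions, not values given by the patent:

```python
def determine_standard_sections(frame_probs, threshold=0.5):
    """For each frame, return the standard-section category whose
    probability is highest, or None when no category clears the
    (hypothetical) decision threshold.

    frame_probs: list of dicts mapping category name -> probability.
    """
    results = []
    for probs in frame_probs:
        category, p = max(probs.items(), key=lambda kv: kv[1])
        results.append(category if p >= threshold else None)
    return results

frames = [
    {"four-chamber": 0.82, "abdominal": 0.10, "femur": 0.08},
    {"four-chamber": 0.30, "abdominal": 0.35, "femur": 0.35},
]
labels = determine_standard_sections(frames)
```

The first frame is confidently assigned a category, while the second frame, where no category dominates, is assigned none, matching the idea of labelling only frames that clearly correspond to a standard section.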
It will be appreciated by those skilled in the art that all or part of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer readable storage medium and, when executed, may include the steps of the method embodiments described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for identifying a standard cut surface, comprising:
acquiring an ultrasonic image sequence to be identified, and obtaining a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained target combination model;
according to the ultrasonic image sequence to be identified and the saliency detection image sequence, identifying the probability that each frame of ultrasonic image to be identified belongs to each standard tangent plane category through a tangent plane identification model in a pre-trained target combination model;
and determining whether the corresponding ultrasonic image to be identified is a standard section of a certain category according to the probability that each frame of ultrasonic image to be identified belongs to each standard section category.
2. The method of identifying a standard cut surface according to claim 1, further comprising:
setting weights for a first loss function of the undetermined significance detection model and a second loss function of the undetermined tangent plane identification model to obtain a combined loss function;
and performing secondary training on a combined model formed by the primary trained undetermined significance detection model and the primary trained undetermined tangent plane identification model by taking the combined loss function as a loss function, and determining the corresponding combined model as a target combined model when the combined loss function reaches a set threshold value.
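The weighted combination of the two losses described in claim 2 can be sketched as a scalar sum; the weights w1 and w2 are illustrative placeholders, since the patent does not specify their values:

```python
def combined_loss(loss1: float, loss2: float,
                  w1: float = 0.5, w2: float = 0.5) -> float:
    """Combined loss for the secondary (joint) training of the
    to-be-determined saliency detection model (loss1) and the
    to-be-determined section identification model (loss2).

    The weights w1 and w2 are hypothetical; the claim only states
    that weights are set for the two loss functions.
    """
    return w1 * loss1 + w2 * loss2

total = combined_loss(0.8, 0.4)  # 0.6 with the placeholder weights
```

Training would stop, per the claim, once this combined value reaches the set threshold, at which point the combined model is taken as the target combination model.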
3. The method for identifying a standard tangential plane according to claim 2, wherein an eye tracker is mounted on an ultrasonic device, and the method for training the once trained pending significance detection model comprises the following steps:
taking a multi-frame ultrasonic video image obtained by analyzing the acquired ultrasonic video within a preset time period as input to obtain a corresponding saliency detection image sequence;
processing the attention map corresponding to each frame of ultrasonic video image obtained by the eye tracker with a two-dimensional truncated Gaussian kernel function to obtain a corresponding attention pixel map sequence with pixel values within [0,1];
calculating the similarity between the corresponding attention pixel map sequence and the corresponding saliency detection image sequence by adopting a preset mathematical model;
and when the similarity is smaller than a preset threshold, determining the corresponding saliency detection model as a trained undetermined saliency detection model.
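One way to realize the attention-pixel-map step of claim 3 is to place a truncated two-dimensional Gaussian kernel at each eye-tracker gaze point and clamp the accumulated map into [0, 1]. The kernel width, truncation radius, and accumulation scheme below are hypothetical choices; the claim fixes only the truncated Gaussian kernel and the [0, 1] pixel range:

```python
import math

def attention_pixel_map(h, w, gaze_points, sigma=2.0, radius=4):
    """Build an attention pixel map in [0, 1] from eye-tracker gaze
    points by accumulating a truncated 2-D Gaussian kernel (cut off
    beyond `radius` pixels) around each fixation point (row, col)."""
    amap = [[0.0] * w for _ in range(h)]
    for (gi, gj) in gaze_points:
        for i in range(max(0, gi - radius), min(h, gi + radius + 1)):
            for j in range(max(0, gj - radius), min(w, gj + radius + 1)):
                d2 = (i - gi) ** 2 + (j - gj) ** 2
                if d2 <= radius ** 2:  # truncate the Gaussian support
                    amap[i][j] += math.exp(-d2 / (2.0 * sigma ** 2))
    # Clamp into [0, 1], as required for the attention pixel map.
    return [[min(1.0, v) for v in row] for row in amap]

amap = attention_pixel_map(16, 16, [(8, 8)])
```

The resulting map peaks at the fixation point and is exactly zero outside the truncation radius, giving the bounded attention pixel map used as the training target for the saliency detection model.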
4. A method of identifying a standard tangent plane as defined in claim 3, wherein the mathematical model is:
D_KL(M‖A) = (1/T) · Σ_{t=1}^{T} Σ_{i=1}^{h} Σ_{j=1}^{w} M_{ij}^{t} · log( M_{ij}^{t} / A_{ij}^{t} )

wherein h is the length of the saliency detection image, w is the width of the saliency detection image, T is a preset frame number parameter, A_{ij}^{t} is the pixel value of the ith row and jth column in the attention pixel map of the t-th frame, M_{ij}^{t} is the pixel value of the ith row and jth column in the t-th frame saliency detection image, and D_KL(M‖A) is the relative entropy of the saliency detection image sequence M and the attention pixel map sequence A;
the length of the saliency detection image is consistent with the length of the attention pixel graph, and the width of the saliency detection image is consistent with the width of the attention pixel graph.
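The relative-entropy similarity of claim 4 can be sketched directly from the claimed symbols. This is a sketch under the assumption that the relative entropy is the frame-averaged sum of M·log(M/A) over all pixels; the small epsilon guarding against log(0) is an added implementation detail, not part of the claim:

```python
import math

def kl_similarity(M, A, eps=1e-8):
    """Relative entropy D_KL(M || A) between a saliency-detection
    image sequence M and an attention pixel map sequence A, averaged
    over the T frames. M and A are lists of h x w pixel grids of
    identical dimensions, as the claim requires."""
    T = len(M)
    h, w = len(M[0]), len(M[0][0])
    total = 0.0
    for t in range(T):
        for i in range(h):
            for j in range(w):
                m = M[t][i][j] + eps  # guard against log(0)
                a = A[t][i][j] + eps
                total += m * math.log(m / a)
    return total / T

M = [[[0.5, 0.5], [0.2, 0.8]]]       # one 2x2 saliency frame
A = [[[0.4, 0.6], [0.3, 0.7]]]       # one 2x2 attention frame
identical = kl_similarity(M, M)      # near zero for identical inputs
different = kl_similarity(M, A)      # positive for differing inputs
```

When the similarity falls below the preset threshold, the saliency detection model's output is close to the human attention map, and the model is accepted as the once-trained to-be-determined saliency detection model.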
5. The method of claim 4, wherein the undetermined significance detection model comprises:
the image size processing module is used for processing each frame of ultrasonic video image into different sizes;
a plurality of groups of processing modules, each processing module comprising convolution layers and downsampling layers of a preset number and size and taking one video frame image of the corresponding size as input;
the up-sampling module takes an output set formed by the outputs of the plurality of groups of processing modules as input;
and the circulation module takes the output of the up-sampling module as input, and circulates and takes the self output as input to output the saliency detection graph.
6. The method according to claim 5, wherein the identifying the probability that each frame of the ultrasound image to be identified belongs to each standard tangent plane category by the tangent plane identification model in the pre-trained target combination model according to the ultrasound image sequence to be identified and the saliency detection image sequence comprises:
fusing each frame of ultrasonic image to be identified with a corresponding significance detection image according to preset weight to obtain a corresponding fused image of each frame;
each frame of fusion image is subjected to a plurality of convolution layers, a normalization layer, an activation layer and a downsampling layer to obtain the characteristics of each frame of fusion image with a certain size;
and inputting the characteristics of each frame of fusion image into a cyclic neural network identification module, and outputting the probability that each frame of ultrasonic image to be identified belongs to each standard section class after passing through the global pooling layer.
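The fusion step of claim 6 can be sketched as a per-pixel weighted blend of each ultrasound frame with its saliency detection image. The weight value 0.7 is an illustrative placeholder; the claim only states that a preset weight is used:

```python
def fuse_images(ultrasound, saliency, weight=0.7):
    """Fuse one ultrasound frame with its saliency-detection image by
    a per-pixel weighted sum:

        fused = weight * ultrasound + (1 - weight) * saliency

    Both inputs are h x w grids of normalized pixel values; the 0.7
    weight is a hypothetical choice, not taken from the patent."""
    return [
        [weight * u + (1.0 - weight) * s
         for u, s in zip(u_row, s_row)]
        for u_row, s_row in zip(ultrasound, saliency)
    ]

frame = [[0.2, 0.4], [0.6, 0.8]]
sal = [[1.0, 0.0], [0.5, 1.0]]
fused = fuse_images(frame, sal)
```

The fused frame, which emphasizes the attended regions, is what the convolution, normalization, activation, and downsampling layers then consume to produce the per-frame features.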
7. The method of claim 6, wherein the training method of the model for identifying the to-be-determined section comprises:
selecting, from each ultrasonic video image, a preset number of consecutive frames at a preset interval starting from the preset standard section frames as a first part, and selecting frames at a second interval from the other frames as a second part; combining the first part and the second part of the ultrasonic video frame images into target ultrasonic video frame images;
taking the target ultrasonic video frame images, scaled to different sizes and normalized, as input; randomly initializing the parameters of the neural network model and performing iterative training with the Adam optimization algorithm; and after a second loss function reaches a second preset value, taking the corresponding section identification model as the to-be-determined section identification model;
the second loss function is:
loss₂ = -(1 - y_pred)^γ · log(y_pred)
wherein y_pred reflects the proximity of the prediction to the ground-truth class y, and γ > 0 is an adjustable factor.
8. A system for identifying a standard cut surface, comprising:
the acquisition module is used for acquiring an ultrasonic image sequence to be identified and acquiring a saliency detection image sequence corresponding to the ultrasonic image sequence to be identified through a saliency detection model in a pre-trained combined model;
the identification module is used for carrying out image fusion processing on the ultrasonic image sequence to be identified and the saliency detection image sequence to obtain a fusion image sequence, and identifying the probability that each frame of fusion image in the fusion image sequence belongs to each standard section category through a section identification model in a pre-trained target combination model;
and the determining module is used for determining whether each frame of ultrasonic image in the ultrasonic image sequence to be identified corresponds to a standard tangent plane of a certain category according to the probability that each frame of fused image in the fused image sequence belongs to each standard tangent plane category.
9. A device for identifying a standard cut surface, characterized in that it comprises a processor and a memory for storing a computer program which, when executed by the processor, implements the method according to any one of claims 1 to 7.
10. A computer readable storage medium for storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202210759731.9A 2022-06-29 2022-06-29 Standard section identification method, system, device and storage medium Pending CN117392040A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210759731.9A CN117392040A (en) 2022-06-29 2022-06-29 Standard section identification method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210759731.9A CN117392040A (en) 2022-06-29 2022-06-29 Standard section identification method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN117392040A true CN117392040A (en) 2024-01-12

Family

ID=89468928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210759731.9A Pending CN117392040A (en) 2022-06-29 2022-06-29 Standard section identification method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN117392040A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117711581A (en) * 2024-02-05 2024-03-15 深圳皓影医疗科技有限公司 Method, system, electronic device and storage medium for automatically adding bookmarks
CN117711581B (en) * 2024-02-05 2024-06-11 深圳皓影医疗科技有限公司 Method, system, electronic device and storage medium for automatically adding bookmarks


Similar Documents

Publication Publication Date Title
CN110807788B (en) Medical image processing method, medical image processing device, electronic equipment and computer storage medium
EP3767521A1 (en) Object detection and instance segmentation of 3d point clouds based on deep learning
CN111539930A (en) Dynamic ultrasonic breast nodule real-time segmentation and identification method based on deep learning
Zhang et al. Intelligent scanning: Automated standard plane selection and biometric measurement of early gestational sac in routine ultrasound examination
CN111862044A (en) Ultrasonic image processing method and device, computer equipment and storage medium
EP3680854B1 (en) Learning program, learning apparatus, and learning method
KR102128325B1 (en) Image Processing System
Akkasaligar et al. Classification of medical ultrasound images of kidney
KR20230059799A (en) A Connected Machine Learning Model Using Collaborative Training for Lesion Detection
CN114972255B (en) Image detection method and device for cerebral micro-bleeding, computer equipment and storage medium
CN111345847B (en) Method and system for managing beamforming parameters based on tissue density
CN111932495B (en) Medical image detection method, device and storage medium
CN111047608A (en) Distance-AttU-Net-based end-to-end mammary ultrasound image segmentation method
CN113920109A (en) Medical image recognition model training method, recognition method, device and equipment
CN114758137A (en) Ultrasonic image segmentation method and device and computer readable storage medium
CN113570594A (en) Method and device for monitoring target tissue in ultrasonic image and storage medium
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN115082487B (en) Ultrasonic image section quality evaluation method and device, ultrasonic equipment and storage medium
WO2021032325A1 (en) Updating boundary segmentations
CN116468103A (en) Training method, application method and system for lung nodule benign and malignant recognition model
CN117392040A (en) Standard section identification method, system, device and storage medium
CN116168029A (en) Method, device and medium for evaluating rib fracture
CN113222985B (en) Image processing method, image processing device, computer equipment and medium
EP4016453A1 (en) Method and system for automated segmentation of biological object parts in mri
CN116309593B (en) Liver puncture biopsy B ultrasonic image processing method and system based on mathematical model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination