CN116128874B - Carotid plaque ultrasonic real-time identification method and device based on 5G - Google Patents
- Publication number
- Publication number: CN116128874B (application CN202310348562.4A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- real
- carotid
- ultrasonic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0891—Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of blood vessels
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/58—Testing, adjusting or calibrating the diagnostic device
- A61B8/582—Remote testing of the device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Quality & Reliability (AREA)
- Vascular Medicine (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
- Image Analysis (AREA)
Abstract
A 5G-based carotid plaque ultrasonic real-time identification method and device that can greatly improve the processing precision of real-time data and thereby ensure the accuracy of identification. The method comprises the following steps: (1) Collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally; (2) Automatically segmenting and preprocessing the dynamic ultrasonic gray-scale images in real time, so that image features are not lost; (3) Applying a deep learning algorithm to automatically identify and analyze the preprocessed ultrasonic images, and judging whether carotid plaque is present in the images; (4) Remotely and automatically analyzing the dynamic carotid ultrasound images in real time by means of a 5G network, and identifying carotid plaque.
Description
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a 5G-based carotid plaque ultrasonic real-time identification method and a 5G-based carotid plaque ultrasonic real-time identification device.
Background
The application of deep learning to medical image processing has been studied extensively, providing new ideas and methods for medical image diagnosis. Research on artificial-intelligence-based ultrasonic image recognition and diagnosis has focused mainly on the thyroid, breast, and liver. Transfer-learning methods have produced a series of research results in the recognition and analysis of static images, achieving high accuracy. For real-time detection, however, there is a strong demand for independently developed algorithms. The number of mainstream algorithms available for transfer learning is limited; the YOLO family, proposed after RCNN, Fast-RCNN, and Faster-RCNN, is an alternative framework that addresses the problem of target-detection speed. YOLOv4, a deep neural network widely recognized in the artificial-intelligence field, has real-time analysis and high efficiency as its most prominent features and advantages. Compared with more complex deep learning classification algorithms, however, YOLO suffers in accuracy: on static images it has difficulty matching algorithms that achieve classification accuracy of 98% or more.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the technical problem to be solved by the invention is to provide a 5G-based carotid plaque ultrasonic real-time identification method that can greatly improve the processing precision of real-time data and thereby ensure the accuracy of identification.
The technical scheme of the invention is as follows: the carotid plaque ultrasonic real-time identification method based on 5G comprises the following steps:
(1) Collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally;
(2) Automatically segmenting and preprocessing the dynamic ultrasonic gray-scale images in real time, so that image features are not lost;
(3) Applying a deep learning algorithm to automatically identify and analyze the preprocessed ultrasonic images, and judging whether carotid plaque is present in the images;
(4) Remotely and automatically analyzing the dynamic carotid ultrasound images in real time by means of a 5G network, and identifying carotid plaque;
The step (2) comprises:
(2.1) intercepting key frames of the real-time ultrasonic image data based on OpenCV software;
(2.2) detecting and segmenting the moving target by a frame-difference method, and storing the key frames;
(2.3) improving the extraction of key regions and ROIs (regions of interest) during preprocessing by means of a mixed-domain attention mechanism and anchor preprocessing;
(2.4) adjusting the k-means algorithm to obtain the prior anchor-box values dynamically, improving performance;
(2.5) performing resampling and segmentation preprocessing, and adjusting the number of structural units of the backbone network and the detection network.
According to the invention, key frames of the real-time ultrasonic image data are intercepted with OpenCV; the moving target is detected and segmented by a frame-difference method and the key frames are stored; the extraction of key regions and ROIs during preprocessing is improved by means of a mixed-domain attention mechanism and anchor preprocessing; the prior anchor-box values are obtained dynamically by an adjusted k-means algorithm, improving performance; resampling and segmentation preprocessing are performed; and the number of structural units of the backbone network and the detection network is adjusted. The processing precision of the real-time data can thereby be greatly improved, ensuring the accuracy of identification.
Also provided is a 5G-based carotid plaque ultrasound real-time identification device, comprising:
an image acquisition module, used for collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally;
a segmentation and preprocessing module, used for automatically segmenting and preprocessing the dynamic ultrasonic gray-scale images in real time, so that image features are not lost;
a recognition and analysis module, used for applying a deep learning algorithm to automatically identify and analyze the preprocessed ultrasonic images and judging whether carotid plaque is present in the images;
a remote module, used for remotely and automatically analyzing the dynamic carotid ultrasound images in real time by means of a 5G network and identifying carotid plaque;
the segmentation and preprocessing module performs:
(2.1) intercepting key frames of the real-time ultrasonic image data based on OpenCV software;
(2.2) detecting and segmenting the moving target by a frame-difference method, and storing the key frames;
(2.3) improving the extraction of key regions and ROIs (regions of interest) during preprocessing by means of a mixed-domain attention mechanism and anchor preprocessing;
(2.4) adjusting the k-means algorithm to obtain the prior anchor-box values dynamically, improving performance;
(2.5) performing resampling and segmentation preprocessing, and adjusting the number of structural units of the backbone network and the detection network.
Drawings
Fig. 1 is a flow chart of a 5G-based carotid plaque ultrasound real-time identification method according to the invention.
Fig. 2 is a block diagram of an algorithm according to the present invention.
Detailed Description
As shown in fig. 1, the 5G-based carotid plaque ultrasonic real-time identification method comprises the following steps:
(1) Collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally;
(2) Automatically segmenting and preprocessing the dynamic ultrasonic gray-scale images in real time, so that image features are not lost;
(3) Applying a deep learning algorithm to automatically identify and analyze the preprocessed ultrasonic images, and judging whether carotid plaque is present in the images;
(4) Remotely and automatically analyzing the dynamic carotid ultrasound images in real time by means of a 5G network, and identifying carotid plaque;
The step (2) comprises:
(2.1) intercepting key frames of the real-time ultrasonic image data based on OpenCV, a cross-platform computer vision and machine learning software library released under the Apache 2.0 license (open source);
(2.2) detecting and segmenting the moving target by a frame-difference method, and storing the key frames;
(2.3) improving the extraction of key regions and ROIs (regions of interest) during preprocessing by means of a mixed-domain attention mechanism and anchor preprocessing;
(2.4) adjusting the k-means algorithm to obtain the prior anchor-box values dynamically, improving performance;
(2.5) performing resampling and segmentation preprocessing, and adjusting the number of structural units of the backbone network and the detection network.
According to the invention, key frames of the real-time ultrasonic image data are intercepted with OpenCV; the moving target is detected and segmented by a frame-difference method and the key frames are stored; the extraction of key regions and ROIs during preprocessing is improved by means of a mixed-domain attention mechanism and anchor preprocessing; the prior anchor-box values are obtained dynamically by an adjusted k-means algorithm, improving performance; resampling and segmentation preprocessing are performed; and the number of structural units of the backbone network and the detection network is adjusted. The processing precision of the real-time data can thereby be greatly improved, ensuring the accuracy of identification.
Preferably, in the step (2.2), whether an obvious difference appears between frames is judged: the two frames are subtracted to obtain the absolute value of their brightness difference, and whether this absolute value exceeds a threshold is used to analyze the motion characteristics of the image sequence and determine whether object motion is present in it. If the threshold is exceeded, the current frame is considered significantly different from the last stored frame data, confirmed as a key frame, and stored.
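The frame-difference test above can be sketched as follows. This is a minimal NumPy illustration: OpenCV's `cv2.absdiff` performs the same element-wise absolute difference, and the threshold value used here is an assumed placeholder, not a figure from the patent.

```python
import numpy as np

def is_key_frame(prev_frame, cur_frame, thresh=10.0):
    """Return True if cur_frame differs obviously from prev_frame.

    Subtract the two gray-scale frames, take the absolute value of the
    brightness difference, and compare its mean against a threshold
    (thresh=10.0 is an assumed placeholder, not from the patent).
    """
    diff = np.abs(prev_frame.astype(np.int16) - cur_frame.astype(np.int16))
    return float(diff.mean()) > thresh

# Synthetic 8-bit gray-scale frames standing in for ultrasound frames.
static = np.full((64, 64), 100, dtype=np.uint8)
moved = static.copy()
moved[16:48, 16:48] = 180          # a bright "moving target" appears

assert not is_key_frame(static, static)   # identical frames: not a key frame
assert is_key_frame(static, moved)        # obvious difference: stored as key frame
```

In a real pipeline the comparison would run against the last *stored* key frame rather than the immediately preceding frame, exactly as the text describes.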
Preferably, in the step (2.3), the mixed-domain attention mechanism is an effective lightweight module whose additional cost is negligible: attention weights are derived in two dimensions, the channel domain and the spatial domain, and then multiplied with the original feature map to adaptively adjust the features.
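The channel-then-spatial reweighting can be sketched as below. The patent does not name the exact module, so this CBAM-style structure is an assumption, and the learnable MLP and convolution of a real attention module are replaced by fixed pooling operations for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixed_domain_attention(feat):
    """Reweight a (C, H, W) feature map in the channel and spatial domains.

    Channel weights come from global average pooling, spatial weights from
    the per-pixel channel mean; both are squashed with a sigmoid and
    multiplied back onto the feature map. (The learnable layers of a real
    CBAM-style module are omitted; this only shows the weighting scheme.)
    """
    channel_w = sigmoid(feat.mean(axis=(1, 2)))   # (C,) channel-domain weights
    feat = feat * channel_w[:, None, None]
    spatial_w = sigmoid(feat.mean(axis=0))        # (H, W) spatial-domain weights
    return feat * spatial_w[None, :, :]

feat = np.random.default_rng(0).normal(size=(8, 16, 16))
out = mixed_domain_attention(feat)
assert out.shape == feat.shape   # attention only reweights, never reshapes
```

Because the output keeps the input shape, such a module can be dropped between existing backbone stages at negligible cost, which is the "lightweight" property the text emphasizes.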
Preferably, in the step (2.3), before training starts, the labeling information in the dataset is checked and the optimal recall of the default anchor boxes against the dataset labels is calculated. When this recall is greater than or equal to a threshold, the anchor boxes are not updated and the defaults are used; if it is below the threshold, anchor boxes fitting this dataset are recalculated.
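The pre-training anchor check can be sketched as follows. The per-box matching threshold of 0.5 and the example anchor sizes are assumptions for illustration; the patent only specifies the threshold applied to the recall itself (0.98, given later in the description).

```python
def shape_iou(box, anchor):
    """IoU of two (w, h) boxes aligned at the same center."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def best_possible_recall(gt_boxes, anchors, match_thr=0.5):
    """Fraction of labeled boxes that some default anchor fits well enough.

    match_thr=0.5 is an assumed per-box matching criterion, not from the patent.
    """
    hits = sum(1 for b in gt_boxes
               if max(shape_iou(b, a) for a in anchors) >= match_thr)
    return hits / len(gt_boxes)

anchors = [(10, 13), (30, 30), (60, 120)]   # assumed default anchor sizes
labels = [(12, 14), (28, 33), (200, 40)]    # labeled box sizes (w, h), illustrative
bpr = best_possible_recall(labels, anchors)
# Recompute anchors only when the recall falls below the threshold.
need_new_anchors = bpr < 0.98
assert need_new_anchors                      # (200, 40) fits no default anchor well
```

When `need_new_anchors` is true, the k-means clustering of step (2.4) would be run on the dataset's labeled box sizes to produce replacement anchors.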
Preferably, in the step (2.4), during the anchor preprocessing, a sample is randomly selected from the dataset as the first initial cluster center; the distance from each sample point to the chosen cluster centers is computed and the shortest one is kept; a new cluster center is then selected with probability proportional to this distance, so that distant points are the most likely to be chosen; these steps are repeated until k cluster centers have been selected; and the final clustering result is computed by running the k-means algorithm on these k centers.
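The initialization procedure described above is essentially k-means++ seeding and can be sketched as below. The Manhattan distance and the sample values are illustrative assumptions; a real anchor-clustering step would typically use an IoU-based distance on (w, h) box sizes.

```python
import random

def kmeans_pp_init(samples, k, rng=random.Random(0)):
    """k-means++-style seeding of k cluster centers, as described above.

    Pick the first center at random; then repeatedly compute each point's
    shortest distance to the already-chosen centers and pick the next center
    with probability proportional to that distance, so distant points are
    favored. Plain k-means then refines these k centers.
    """
    centers = [rng.choice(samples)]
    while len(centers) < k:
        # shortest distance from each sample to any already-chosen center
        dists = [min(abs(s[0] - c[0]) + abs(s[1] - c[1]) for c in centers)
                 for s in samples]
        # distance-proportional roulette-wheel selection
        r = rng.uniform(0, sum(dists))
        acc = 0.0
        for s, d in zip(samples, dists):
            acc += d
            if acc >= r:
                centers.append(s)
                break
    return centers

# (w, h) samples standing in for labeled plaque box sizes (illustrative values)
boxes = [(10, 12), (11, 13), (50, 60), (52, 58), (90, 30), (88, 32)]
centers = kmeans_pp_init(boxes, k=3)
assert len(centers) == 3
assert all(c in boxes for c in centers)
```

Seeding this way avoids the poor, clumped initial centers that plain random initialization can produce, which is why the dynamically obtained prior anchor boxes are more stable.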
Preferably, in the step (3), based on the YOLO model, layers 36/61/74 of the network structure are modified to improve the FPS; the activation function is modified to replace the original ReLU, improving small-target detection; and downsampling is optimized to prevent loss of the feature values of interest to the clinician.
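The patent does not name the replacement activation, so the sketch below uses Mish (commonly paired with YOLOv4) purely as an assumed illustration. Unlike ReLU, it is smooth and passes small negative values, which can help preserve weak responses from small targets:

```python
import math

def relu(x):
    return max(0.0, x)

def mish(x):
    """Mish: x * tanh(softplus(x)), a smooth, non-monotonic activation.

    Shown as an assumed stand-in for "the modified activation function";
    the patent does not specify which function replaces ReLU.
    """
    return x * math.tanh(math.log1p(math.exp(x)))

assert relu(-0.5) == 0.0          # ReLU zeroes out all negative responses
assert -0.5 < mish(-0.5) < 0.0    # Mish lets weak negatives pass through
assert abs(mish(0.0)) < 1e-12     # both pass through the origin
```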
Mixed domain attention mechanism:
YOLO is a one-stage detection algorithm: it performs regression and classification directly on the input data, and compared with two-stage algorithms it is faster but less accurate.
Attention mechanisms are proposed mainly by analogy with biological vision and fall into three types: spatial domain, channel domain, and mixed domain.
In this project, the mixed-domain attention mechanism is an effective lightweight module whose additional cost is negligible: attention weights are derived in two dimensions, the channel domain and the spatial domain, and then multiplied with the original feature map to adaptively adjust the features. This ultimately improves the detection precision while retaining a fast detection speed.
Anchor pretreatment:
YOLO introduced the concept of the anchor box, greatly improving the performance of target detection. Anchor boxes are boxes of several different sizes obtained statistically or by clustering from the real boxes (ground truth) of the training set. They keep the model from searching blindly during training and help it converge quickly.
Instead of simply using the default anchor boxes, the labeling information in the dataset is checked before training starts and the optimal recall of the default anchor boxes against the dataset labels is calculated; when this recall is greater than or equal to a threshold (e.g., 0.98), the anchor boxes need not be updated; if it is less than 0.98, anchor boxes fitting this dataset are recalculated.
Based on the YOLO model, the network structure is modified to improve the FPS (frame rate); the activation function is modified to replace the original ReLU, improving small-target detection; and downsampling is optimized to prevent loss of the feature values of interest to the clinician.
The k-means algorithm is the concrete implementation of the anchor preprocessing, and it is optimized in this project. The algorithm also adds an iterative mechanism: the training dataset is updated with results verified by experienced doctors, and the model is retrained.
It will be understood by those skilled in the art that all or part of the steps of the above embodiment may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method. The storage medium may be: ROM/RAM, a magnetic disk, an optical disk, a memory card, etc. Accordingly, corresponding to the method of the invention, the invention also includes a 5G-based carotid plaque ultrasound real-time identification device, typically represented as functional modules corresponding to the steps of the method. The device comprises:
the image acquisition module, used for collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally;
the segmentation and preprocessing module, used for automatically segmenting and preprocessing the dynamic ultrasonic gray-scale images in real time, so that image features are not lost;
the recognition and analysis module, used for applying a deep learning algorithm to automatically identify and analyze the preprocessed ultrasonic images and judging whether carotid plaque is present in the images;
the remote module, used for remotely and automatically analyzing the dynamic carotid ultrasound images in real time by means of a 5G network and identifying carotid plaque;
the segmentation and preprocessing module performs:
(2.1) intercepting key frames of the real-time ultrasonic image data based on OpenCV software;
(2.2) detecting and segmenting the moving target by a frame-difference method, and storing the key frames;
(2.3) improving the extraction of key regions and ROIs (regions of interest) during preprocessing by means of a mixed-domain attention mechanism and anchor preprocessing;
(2.4) adjusting the k-means algorithm to obtain the prior anchor-box values dynamically, improving performance;
(2.5) performing resampling and segmentation preprocessing, and adjusting the number of structural units of the backbone network and the detection network.
Preferably, in the segmentation and preprocessing module, whether an obvious difference appears between frames is judged: the two frames are subtracted to obtain the absolute value of their brightness difference, and whether this absolute value exceeds a threshold is used to analyze the motion characteristics of the image sequence and determine whether object motion is present in it. If the threshold is exceeded, the current frame is considered significantly different from the last stored frame data, confirmed as a key frame, and stored.
Preferably, in the segmentation and preprocessing module:
the mixed-domain attention mechanism is an effective lightweight module whose additional cost is negligible; attention weights are derived in the channel domain and the spatial domain and then multiplied with the original feature map to adaptively adjust the features;
before training starts, the labeling information in the dataset is checked and the optimal recall of the default anchor boxes against the dataset labels is calculated; when this recall is greater than or equal to a threshold, the anchor boxes are not updated and the defaults are used; if it is below the threshold, anchor boxes fitting this dataset are recalculated;
in the anchor preprocessing, a sample is randomly selected from the dataset as the first initial cluster center; the distance from each sample point to the chosen cluster centers is computed and the shortest one is kept; a new cluster center is then selected with probability proportional to this distance; these steps are repeated until k cluster centers have been selected; and the final clustering result is computed by running the k-means algorithm on these k centers.
Preferably, in the recognition and analysis module, based on the YOLO model, layers 36/61/74 of the network structure are modified to improve the FPS; the activation function is modified to replace the original ReLU, improving small-target detection; and downsampling is optimized to prevent loss of the feature values of interest to the clinician.
One specific example is given below:
(1) Collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally; (2) The automatic identification system of the invention segments and preprocesses the dynamic ultrasonic gray-scale images automatically and in real time, ensuring that image features are not lost; (3) By means of a 5G network, a remote consultation center applies a deep learning algorithm based on YOLOv4 to automatically identify and analyze the preprocessed ultrasonic images and identify carotid plaque.
The present invention is not limited to the preferred embodiments; any modifications, equivalent variations, and improvements made according to the technical principles of the invention are included within its scope.
Claims (4)
1. The carotid plaque ultrasonic real-time identification method based on 5G, characterized in that it comprises the following steps:
(1) Collecting dynamic ultrasonic gray-scale images of the carotid artery extracorporeally;
(2) Automatically segmenting and preprocessing the dynamic ultrasonic gray-scale images in real time, so that image features are not lost;
(3) Applying a deep learning algorithm to automatically identify and analyze the preprocessed ultrasonic images, and judging whether carotid plaque is present in the images;
(4) Remotely and automatically analyzing the dynamic carotid ultrasound images in real time by means of a 5G network, and identifying carotid plaque;
the step (2) comprises:
(2.1) intercepting key frames of the real-time ultrasonic image data based on OpenCV software;
(2.2) detecting and segmenting the moving target by a frame-difference method, and storing the key frames;
(2.3) improving the extraction of key regions and ROIs (regions of interest) during preprocessing by means of a mixed-domain attention mechanism and anchor preprocessing;
(2.4) adjusting the k-means algorithm to obtain the prior anchor-box values dynamically, improving performance;
(2.5) performing resampling and segmentation preprocessing, and adjusting the number of structural units of the backbone network and the detection network;
in the step (2.3), the mixed-domain attention mechanism is an effective lightweight module whose additional cost is negligible; attention weights are derived in the channel domain and the spatial domain and then multiplied with the original feature map to adaptively adjust the features; before training starts, the labeling information in the dataset is checked and the optimal recall of the default anchor boxes against the dataset labels is calculated; when this recall is greater than or equal to a threshold, the anchor boxes are not updated and the defaults are used; if it is below the threshold, anchor boxes fitting this dataset are recalculated; in the anchor preprocessing, a sample is randomly selected from the dataset as the first initial cluster center; the distance from each sample point to the chosen cluster centers is computed and the shortest one is kept; a new cluster center is then selected with probability proportional to this distance; these steps are repeated until k cluster centers have been selected; and the final clustering result is computed by running the k-means algorithm on these k centers;
in the step (3), based on the YOLO model, layers 36/61/74 of the network structure are modified to improve the FPS; the activation function is modified to replace the original ReLU, improving small-target detection; and downsampling is optimized to prevent loss of the feature values of interest to the clinician.
2. The 5G-based carotid plaque ultrasound real-time identification method of claim 1, wherein: in the step (2.2), whether an obvious difference appears between frames is judged: the two frames are subtracted to obtain the absolute value of their brightness difference, and whether this absolute value exceeds a threshold is used to analyze the motion characteristics of the image sequence and determine whether object motion is present in it; if the threshold is exceeded, the current frame is considered significantly different from the last stored frame data, confirmed as a key frame, and stored.
3. The carotid plaque ultrasonic real-time identification device based on 5G, characterized in that it comprises:
the image acquisition module is used for acquiring dynamic ultrasonic gray-scale images of the carotid artery in an in-vitro mode;
the segmentation and preprocessing module is used for automatically and real-timely segmenting and preprocessing the dynamic ultrasonic gray-scale image, so that the image characteristics are not lost;
a recognition and analysis module for automatically recognizing and analyzing the preprocessed ultrasound images with a deep learning algorithm and judging whether carotid plaque is present in the images;
a remote module for realizing remote, real-time automatic analysis of dynamic carotid ultrasound images over a 5G network and identifying carotid plaque;
the segmentation and preprocessing module performs:
(2.1) intercepting key frames of the real-time ultrasound image data based on OpenCV software;
(2.2) detecting and segmenting the moving target with a frame-difference method, and storing the key frames;
(2.3) extracting key parts and ROIs (regions of interest) during preprocessing via a mixed-domain attention mechanism and anchor preprocessing;
(2.4) adjusting the k-means algorithm to dynamically obtain prior anchor-box values, improving performance;
(2.5) performing resampling and segmentation preprocessing, and adjusting the number of structural units in the backbone and detection networks;
in the above-mentioned segmentation and preprocessing module,
the mixed-domain attention mechanism is an effective lightweight module whose additional cost is negligible: attention weights are inferred along two dimensions, the channel domain and the spatial domain, and then multiplied with the original feature map so that the features are adaptively refined;
before training starts, the annotation information in the dataset is checked and its best possible recall with respect to the default anchor boxes is computed; when this recall is greater than or equal to a threshold, the anchor boxes are not updated and the defaults are used; if it is below the threshold, anchor boxes fitted to the dataset are recalculated;
in the anchor preprocessing, a sample in the dataset is randomly selected as the first initial cluster center; for every sample point, the distance to each existing cluster center is computed and the shortest one is kept; a new cluster center is then chosen with probability proportional to that distance, so that farther points are more likely to be selected; these steps are repeated until k cluster centers have been chosen; finally, the k-means algorithm is run on the k cluster centers to obtain the final clustering result;
in the recognition and analysis module, based on the YOLO model, the 36th, 61st and 74th layers of the network structure are modified to improve the FPS; the original ReLU activation function is replaced to improve detection of small targets; and the downsampling is optimized to prevent loss of the feature values of interest to the clinician.
4. The 5G-based carotid plaque ultrasound real-time identification device of claim 3, wherein: in the segmentation and preprocessing module, to judge whether a significant difference appears between frames, the two frames are subtracted to obtain the absolute value of their brightness difference, and this absolute value is compared against a threshold to analyze the motion characteristics of the image sequence and determine whether an object is moving in it; if the threshold is exceeded, the current frame is considered significantly different from the last stored frame, confirmed as a key frame, and stored.
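The anchor-box clustering described in claims 1 and 3 (k-means++-style seeding followed by k-means) can be sketched as below. Euclidean distance over (width, height) pairs is used for simplicity; the `kmeans_anchors` helper, the synthetic box sizes, and the seed values are illustrative assumptions, and YOLO implementations often substitute a 1 − IoU distance.

```python
import numpy as np

def kmeans_pp_init(boxes, k, rng):
    """Seeding as described in the claims: the first center is a random
    sample; each further center is drawn with probability proportional to
    the squared distance to the nearest already-chosen center, so farther
    points are more likely to be selected."""
    centers = [boxes[rng.integers(len(boxes))]]
    for _ in range(k - 1):
        # shortest squared distance from every sample to an existing center
        d2 = np.min([((boxes - c) ** 2).sum(axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(boxes[rng.choice(len(boxes), p=probs)])
    return np.array(centers)

def kmeans_anchors(boxes, k=3, iters=20, seed=0):
    """Cluster (width, height) pairs into k anchor boxes with standard
    k-means after the seeding above."""
    rng = np.random.default_rng(seed)
    centers = kmeans_pp_init(boxes, k, rng)
    for _ in range(iters):
        # assign every box to its nearest center, then recompute the means
        d = ((boxes[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = boxes[labels == j].mean(axis=0)
    return centers

# Synthetic box sizes around three scales (illustrative only).
rng = np.random.default_rng(1)
boxes = np.concatenate([
    rng.normal((10, 10), 1, (50, 2)),
    rng.normal((40, 30), 2, (50, 2)),
    rng.normal((90, 80), 3, (50, 2)),
])
anchors = kmeans_anchors(boxes, k=3)
```

The resulting `anchors` replace the default prior boxes when the best possible recall of the dataset's annotations against the defaults falls below the threshold mentioned in the claims.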
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310348562.4A CN116128874B (en) | 2023-04-04 | 2023-04-04 | Carotid plaque ultrasonic real-time identification method and device based on 5G |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116128874A (en) | 2023-05-16 |
CN116128874B (en) | 2023-06-16 |
Family
ID=86295856
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310348562.4A Active CN116128874B (en) | 2023-04-04 | 2023-04-04 | Carotid plaque ultrasonic real-time identification method and device based on 5G |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116128874B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862044A (en) * | 2020-07-21 | 2020-10-30 | 长沙大端信息科技有限公司 | Ultrasonic image processing method and device, computer equipment and storage medium |
CN113947593A (en) * | 2021-11-03 | 2022-01-18 | 北京航空航天大学 | Method and device for segmenting vulnerable plaque in carotid artery ultrasonic image |
CN114947957A (en) * | 2022-06-01 | 2022-08-30 | 深圳市德力凯医疗设备股份有限公司 | Carotid plaque analysis method and system based on ultrasonic image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9792531B2 (en) * | 2015-09-16 | 2017-10-17 | Siemens Healthcare Gmbh | Intelligent multi-scale medical image landmark detection |
2023
- 2023-04-04 CN application CN202310348562.4A granted as patent CN116128874B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN116128874A (en) | 2023-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112132156B (en) | Image saliency target detection method and system based on multi-depth feature fusion | |
CN108647585B (en) | Traffic identifier detection method based on multi-scale circulation attention network | |
US8509478B2 (en) | Detection of objects in digital images | |
CN111210443A (en) | Deformable convolution mixing task cascading semantic segmentation method based on embedding balance | |
CN108537751B (en) | Thyroid ultrasound image automatic segmentation method based on radial basis function neural network | |
CN110781744A (en) | Small-scale pedestrian detection method based on multi-level feature fusion | |
CN111553414A (en) | In-vehicle lost object detection method based on improved Faster R-CNN | |
CN114842238B (en) | Identification method of embedded breast ultrasonic image | |
KR102508067B1 (en) | Apparatus and Method for Generating Learning Data for Semantic Image Segmentation Based On Weak Supervised Learning | |
US11132607B1 (en) | Method for explainable active learning, to be used for object detector, by using deep encoder and active learning device using the same | |
CN111126401A (en) | License plate character recognition method based on context information | |
CN116721414A (en) | Medical image cell segmentation and tracking method | |
CN113591825A (en) | Target search reconstruction method and device based on super-resolution network and storage medium | |
CN113177554B (en) | Thyroid nodule identification and segmentation method, system, storage medium and equipment | |
CN116844143B (en) | Embryo development stage prediction and quality assessment system based on edge enhancement | |
Lomanov et al. | Cell detection with deep convolutional networks trained with minimal annotations | |
CN116128874B (en) | Carotid plaque ultrasonic real-time identification method and device based on 5G | |
CN115984202A (en) | Intelligent identification and evaluation method for cardiovascular function of zebra fish | |
CN110728316A (en) | Classroom behavior detection method, system, device and storage medium | |
Jeong et al. | Homogeneity patch search method for voting-based efficient vehicle color classification using front-of-vehicle image | |
CN111882551B (en) | Pathological image cell counting method, system and device | |
Jain et al. | Brain Tumor Detection using MLops and Hybrid Multi-Cloud | |
CN114373219A (en) | Behavior recognition method, electronic device and readable storage medium | |
Zhao et al. | Automatic recognition and tracking of liver blood vessels in ultrasound image using deep neural networks | |
CN113255665B (en) | Target text extraction method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||