CN116091414A - Cardiovascular image recognition method and system based on deep learning

Cardiovascular image recognition method and system based on deep learning

Info

Publication number
CN116091414A
CN116091414A (application CN202211608999.9A)
Authority
CN
China
Prior art keywords
image
feature map
cardiovascular
evaluated
differential
Prior art date
Legal status
Pending
Application number
CN202211608999.9A
Other languages
Chinese (zh)
Inventor
张静雅
余道友
符茂胜
李瑞霞
Current Assignee
West Anhui University
Original Assignee
West Anhui University
Priority date
Filing date
Publication date
Application filed by West Anhui University
Priority to CN202211608999.9A
Publication of CN116091414A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30048 - Heart; Cardiac
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30101 - Blood vessel; Artery; Vein; Vascular
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Abstract

The application relates to the technical field of medical equipment, and particularly discloses a cardiovascular image recognition method and system based on deep learning. First, a cardiovascular image to be evaluated and a reference cardiovascular image are acquired; the two images are then passed through a dual detection model comprising a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map; next, a differential feature map between the feature map to be evaluated and the reference feature map is calculated, and the high-dimensional data manifold of the differential feature map is shaped and optimized to obtain an optimized differential feature map; finally, the optimized differential feature map is passed through a classifier to obtain a classification result indicating whether the image quality of the cardiovascular image to be evaluated meets the standard.

Description

Cardiovascular image recognition method and system based on deep learning
Technical Field
The present application relates to the technical field of medical devices, and more particularly, to a cardiovascular image recognition method and system based on deep learning.
Background
With the development of medical big data and smart healthcare, machine-vision-based medical image analysis and disease risk early warning have emerged. However, before smart medical solutions can be constructed for the cardiovascular field, a data set of cardiovascular images accepted by medical institutions must be collected.
Existing cardiovascular image data sets are acquired manually, and human judgment errors easily arise during acquisition, for example images whose sharpness is insufficient because the cardiovascular image is blurred. An optimized cardiovascular image recognition scheme is therefore desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiments of the application provide a cardiovascular image recognition method and system based on deep learning. Specifically, a cardiovascular image to be evaluated and a reference cardiovascular image are first acquired; the two images are then passed through a dual detection model comprising a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map; next, a differential feature map between the feature map to be evaluated and the reference feature map is calculated, and the high-dimensional data manifold of the differential feature map is shaped and optimized to obtain an optimized differential feature map; finally, the optimized differential feature map is passed through a classifier to obtain a classification result.
According to one aspect of the present application, there is provided a cardiovascular image recognition method based on deep learning, comprising:
acquiring a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has image quality meeting preset requirements;
passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map;
calculating a differential feature map between the feature map to be evaluated and the reference feature map;
shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and
passing the optimized differential feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard.
According to another aspect of the present application, there is provided a deep learning based cardiovascular image recognition system, comprising:
an image acquisition module, configured to acquire a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has an image quality meeting predetermined requirements;
an image encoding module, configured to pass the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map;
a difference module, configured to calculate a differential feature map between the feature map to be evaluated and the reference feature map;
a feature map optimizing module, configured to shape and optimize the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and
an image quality detection module, configured to pass the optimized differential feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the deep learning based cardiovascular image recognition method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the deep learning based cardiovascular image recognition method as described above.
Compared with the prior art, the cardiovascular image recognition method and system based on deep learning provided by the application first acquire a cardiovascular image to be evaluated and a reference cardiovascular image; then pass the two images through a dual detection model comprising a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map; next calculate a differential feature map between the feature map to be evaluated and the reference feature map, and shape and optimize its high-dimensional data manifold to obtain an optimized differential feature map; and finally pass the optimized differential feature map through a classifier to obtain a classification result. In this way, whether the image quality of the cardiovascular image to be evaluated meets the standard can be determined accurately.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application, are incorporated in and constitute a part of this specification, and, together with the description, serve to illustrate the application without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 illustrates an application scenario diagram of a deep learning-based cardiovascular image recognition method and system thereof according to an embodiment of the present application.
Fig. 2 illustrates a flowchart of a deep learning based cardiovascular image recognition method according to an embodiment of the present application.
Fig. 3 illustrates a schematic diagram of a system architecture of a deep learning based cardiovascular image recognition method according to an embodiment of the present application.
Fig. 4 illustrates a flowchart of passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model including a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map in a cardiovascular image recognition method based on deep learning according to an embodiment of the present application.
Fig. 5 illustrates a flowchart of the optimized differential feature map passing through a classifier to obtain a classification result in a cardiovascular image recognition method based on deep learning according to an embodiment of the present application.
Fig. 6 illustrates a block diagram schematic of a deep learning based cardiovascular image recognition system in accordance with an embodiment of the present application.
Fig. 7 illustrates a block diagram of an image encoding module in a deep learning based cardiovascular image recognition system according to an embodiment of the present application.
Fig. 8 illustrates a block diagram of an image quality detection module in a deep learning based cardiovascular image recognition system according to an embodiment of the present application.
Fig. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Scene overview
As described above, with the development of medical big data and smart healthcare, machine-vision-based medical image analysis and disease risk early warning have emerged. However, before smart medical solutions can be constructed for the cardiovascular field, a data set of cardiovascular images accepted by medical institutions must be collected.
Existing cardiovascular image data sets are acquired manually, and human judgment errors easily arise during acquisition, for example images whose sharpness is insufficient because the cardiovascular image is blurred. An optimized cardiovascular image recognition scheme is therefore desired.
At present, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, speech signal processing, and the like. In tasks such as image classification, object detection, semantic segmentation and text translation, deep learning and neural networks have shown performance approaching and even exceeding the human level.
In recent years, the development of deep learning and neural networks has provided new solutions for the intelligent recognition of cardiovascular images.
Accordingly, in order to improve the sharpness of cardiovascular images so that they can be recognized more accurately, it is necessary to determine whether their image quality meets the standard. Specifically, in the technical solution of the application, whether the image to be detected meets the predetermined standard is evaluated based on the difference in feature distribution between the cardiovascular detection image and the reference image in a high-dimensional feature space. In this way, on the one hand, images whose differences are not distinguishable by the naked eye but are clearly separable by machine vision can be taken into account; on the other hand, the feature level is directly applicable to subsequent medical image analysis, so that the image quality screening standard is constructed with the downstream application as a guide. Thus, the quality of cardiovascular images can be accurately assessed, cardiovascular images of standard-compliant quality can be obtained, and the images become better suited for subsequent medical image analysis.
Specifically, in the technical solution of the present application, first, a cardiovascular image to be evaluated and a reference cardiovascular image are acquired, wherein the reference cardiovascular image has an image quality satisfying the predetermined requirement. The cardiovascular image to be evaluated and the reference cardiovascular image are then passed through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map. That is, the dual detection model uses pyramid networks as the first and second image encoders to encode the cardiovascular image to be evaluated and the reference cardiovascular image respectively, extracting their deep implicit features while retaining texture information as shallow features, which improves the accuracy of the subsequent classification-based image quality evaluation of the cardiovascular image.
In particular, the first image encoder and the second image encoder are both pyramid networks, and the two pyramid networks have the same network structure. It should be understood that a pyramid network mainly addresses the multi-scale problem in object detection: by simply changing the network connections, and with essentially no increase in the computational cost of the original model, it can make independent predictions on different feature layers, which greatly improves the detection of small objects. In this way, the implicit vascular features in the cardiovascular image can be mined at multiple scales while the shallow vascular texture features are preserved, which provides higher accuracy when the quality of the cardiovascular image is subsequently evaluated and allows the image quality screening standard to be constructed directly with the downstream application as a guide.
Further, after the feature map to be evaluated and the reference feature map, which respectively carry multi-scale global implicit feature information of the cardiovascular image to be evaluated and of the reference cardiovascular image, are obtained, a differential feature map between them is calculated in order to evaluate whether the image quality of the cardiovascular image to be evaluated meets the predetermined standard based on the difference in feature distribution between the detection image and the reference image in the high-dimensional feature space. The resulting differential feature map is then passed through a classifier to obtain a classification result indicating whether the image quality of the cardiovascular image to be evaluated meets the standard.
In particular, in the technical solution of the present application, the first image encoder and the second image encoder amplify the difference between the source images, i.e., the cardiovascular image to be evaluated and the reference image, by combining feature encoding of global and local image semantics, so that the differential feature map strengthens the expression of the image quality difference. On the other hand, however, image noise in the two source images that does not contribute to image semantic encoding is amplified as well, which causes distribution divergence in the high-dimensional feature semantic space of the differential feature map. As a result, inductive divergence arises when the differential feature map is passed through a classifier, which affects the training speed of the classifier and the accuracy of the classification result.
Therefore, preferably, the differential feature map is optimized for class-bounded domain oriented distribution transfer:
[Formula presented as an image (BDA0003998684430000051) in the original filing]
wherein f_i is a feature value of the differential feature map, N is the scale of the differential feature map, i.e., width times height times the number of channels, and log denotes the base-2 logarithm.
Here, the distribution transfer optimization oriented towards the class bounded domain addresses the divergence that may arise when the high-dimensional feature distribution expressed by the differential feature map is transferred to the target domain of the classification problem. Based on the structural information constraint of conditional classification, it converges the feature distribution towards the bounded domain of the feature set, so that the feature distribution of the differential feature map is transferred into a range with a stable, structurable boundary under the target domain. This improves the stability of the generalization iterations of the classification solution, that is, the training speed of the classifier and the accuracy of the classification result. In this way, the quality of the cardiovascular image to be evaluated can be accurately assessed, and a cardiovascular image whose quality meets the standard can be obtained, making it more suitable for subsequent medical image analysis.
Based on this, the present application provides a cardiovascular image recognition method based on deep learning, which comprises: acquiring a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has an image quality meeting predetermined requirements; passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map; calculating a differential feature map between the feature map to be evaluated and the reference feature map; shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and passing the optimized differential feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard.
Fig. 1 illustrates an application scenario diagram of a deep learning-based cardiovascular image recognition method and system thereof according to an embodiment of the present application. As shown in fig. 1, in this application scenario, the cardiovascular image to be evaluated and the reference cardiovascular image are acquired by a cardiovascular imaging instrument (e.g., C as illustrated in fig. 1). The acquired cardiovascular image to be evaluated and the reference cardiovascular image are then input into a server (e.g., S as illustrated in fig. 1) on which a deep-learning-based cardiovascular image recognition algorithm is deployed, and the server processes the two images with this algorithm to generate a classification result indicating whether the image quality of the cardiovascular image to be evaluated meets the standard.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary Method
Fig. 2 illustrates a flowchart of a deep learning based cardiovascular image recognition method according to an embodiment of the present application. As shown in fig. 2, the deep learning based cardiovascular image recognition method according to the embodiment of the application comprises: S110, acquiring a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has an image quality meeting predetermined requirements; S120, passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map; S130, calculating a differential feature map between the feature map to be evaluated and the reference feature map; S140, shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and S150, passing the optimized differential feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard.
Fig. 3 illustrates a schematic diagram of a system architecture of a deep learning based cardiovascular image recognition method according to an embodiment of the present application. As shown in fig. 3, in the system architecture of the deep learning-based cardiovascular image recognition method in the embodiment of the present application, first, a cardiovascular image to be evaluated is obtained, and the cardiovascular image to be evaluated is passed through a first image encoder of a dual detection model to obtain a feature map to be evaluated. Meanwhile, a reference cardiovascular image is acquired, the reference cardiovascular image has image quality meeting preset requirements, and the reference cardiovascular image is passed through a second image encoder of the dual detection model to obtain a reference feature map. And then, calculating a difference feature map between the feature map to be evaluated and the reference feature map, and shaping and optimizing a high-dimensional data manifold of the difference feature map to obtain an optimized difference feature map. And finally, the optimized differential feature map is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the image quality of the cardiovascular image to be evaluated meets the standard.
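To make the overall flow concrete, the following is a minimal sketch of the pipeline in PyTorch. All names (recognize_quality, shape_optimize, and so on) are illustrative assumptions rather than the application's actual implementation, and the manifold-shaping step is left abstract because its exact formula is given only as an image in the original filing.

```python
import torch

def recognize_quality(image_eval: torch.Tensor, image_ref: torch.Tensor,
                      encoder_eval, encoder_ref, shape_optimize, classifier) -> torch.Tensor:
    """Return 1 if the image quality of the image to be evaluated meets the standard, else 0.

    Assumes batched tensors of shape (B, C, H, W); the encoders, shaping function and
    classifier are supplied by the caller.
    """
    feat_eval = encoder_eval(image_eval)      # feature map to be evaluated
    feat_ref = encoder_ref(image_ref)         # reference feature map
    diff = feat_eval - feat_ref               # differential feature map (position-wise difference)
    diff_opt = shape_optimize(diff)           # high-dimensional manifold shaping (formula omitted here)
    logits = classifier(diff_opt.flatten(1))  # fully connected encoding of the unfolded row vector
    return torch.softmax(logits, dim=1).argmax(dim=1)
```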
In step S110 of the embodiment of the present application, a cardiovascular image to be evaluated and a reference cardiovascular image are acquired, wherein the reference cardiovascular image has an image quality that meets the predetermined requirement. As mentioned above, before smart medical solutions can be constructed for the cardiovascular field, a data set of cardiovascular images accepted by medical institutions must be collected. Existing cardiovascular image data sets are acquired manually, and human judgment errors easily arise during acquisition, for example images whose sharpness is insufficient because the cardiovascular image is blurred; an optimized cardiovascular image recognition scheme is therefore desired. That is, in order to improve the sharpness of cardiovascular images so that they can be recognized more accurately, it is necessary to determine whether their image quality meets the standard. Specifically, in the technical solution of the application, whether the image to be detected meets the predetermined standard is evaluated based on the difference in feature distribution between the cardiovascular detection image and the reference image in a high-dimensional feature space. In this way, on the one hand, images whose differences are not distinguishable by the naked eye but are clearly separable by machine vision can be taken into account; on the other hand, the feature level is directly applicable to subsequent medical image analysis, so that the image quality screening standard is constructed with the downstream application as a guide. Thus, the quality of cardiovascular images can be accurately assessed, cardiovascular images of standard-compliant quality can be obtained, and the images become better suited for subsequent medical image analysis.
In step S120 of the embodiment of the present application, the cardiovascular image to be evaluated and the reference cardiovascular image are passed through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map. It will be appreciated that, since subsequent medical image analysis is often based on texture and shape, it is desirable to preserve the texture features among the shallow features of the cardiovascular image while encoding it with a convolutional neural network model. With a standard convolutional neural network model, shallow features become blurred or even lost as the encoding depth increases. Therefore, the dual detection model with the first image encoder and the second image encoder uses pyramid networks as the image encoders to encode the cardiovascular image to be evaluated and the reference cardiovascular image respectively, extracting their deep implicit features while retaining texture information as shallow features, which improves the accuracy of the subsequent classification-based image quality evaluation of the cardiovascular image.
Fig. 4 illustrates a flowchart of passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model including a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map in a cardiovascular image recognition method based on deep learning according to an embodiment of the present application. As shown in fig. 4, in a specific embodiment of the present application, the step of passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model including a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map includes: s210, respectively carrying out convolution processing, pooling processing and nonlinear activation processing on input data in forward transfer of layers by using each layer of a first image encoder of the dual detection model to output the feature map to be evaluated by the last layer of the first image encoder, wherein the input of the first layer of the first image encoder is the cardiovascular image to be evaluated; and S220, respectively carrying out convolution processing, pooling processing and nonlinear activation processing on input data in forward transmission of layers by using each layer of a second image encoder of the dual detection model to output the reference characteristic map by the last layer of the second image encoder, wherein the input of the first layer of the second image encoder is the reference cardiovascular image.
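A hedged sketch of how such a dual detection model could wrap the two encoders is given below; the class name and constructor arguments are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class DualDetectionModel(nn.Module):
    """Holds a first and a second image encoder and applies them to the two inputs separately."""
    def __init__(self, encoder_eval: nn.Module, encoder_ref: nn.Module):
        super().__init__()
        self.encoder_eval = encoder_eval  # first image encoder (for the image to be evaluated)
        self.encoder_ref = encoder_ref    # second image encoder (for the reference image)

    def forward(self, image_eval: torch.Tensor, image_ref: torch.Tensor):
        feat_eval = self.encoder_eval(image_eval)  # feature map to be evaluated
        feat_ref = self.encoder_ref(image_ref)     # reference feature map
        return feat_eval, feat_ref
```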
In a specific embodiment of the present application, the first image encoder and the second image encoder have the same network structure. It should be appreciated that, since the cardiovascular image to be evaluated and the reference cardiovascular image have the same data volume and data distribution at the source domain end, using the same network structure for the first image encoder and the second image encoder yields feature maps of unified dimension and size, which facilitates the subsequent calculation of the feature distribution difference in the high-dimensional feature space.
In a specific embodiment of the present application, the first image encoder and the second image encoder are pyramid networks. It should be understood that the pyramid network mainly solves the multi-scale problem in target detection, and can independently predict on different feature layers by simply changing network connection under the condition of basically not increasing the calculation amount of the original model, thereby greatly improving the performance of small target detection. In this way, deep mining of multiple scales can be performed on the hidden features of the blood vessels in the cardiovascular image and preservation can be performed on the texture features of the blood vessels in the shallow layers, so that higher accuracy can be provided when the quality of the cardiovascular image is evaluated later, and the method can be directly applied as a guide to construct the image quality screening standard.
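As a rough illustration, below is a minimal feature-pyramid-style encoder sketch in PyTorch. The stage sizes and the way the pyramid levels are fused into a single output feature map are assumptions made for illustration; the filing does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidEncoder(nn.Module):
    """Sketch of a pyramid-network image encoder (channel and stage counts are illustrative assumptions)."""
    def __init__(self, in_channels: int = 3, out_channels: int = 64):
        super().__init__()
        # Bottom-up path: each stage applies convolution, pooling and nonlinear activation.
        self.stage1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.MaxPool2d(2), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.MaxPool2d(2), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.MaxPool2d(2), nn.ReLU())
        # Lateral 1x1 convolutions map every stage to a common channel width.
        self.lat1 = nn.Conv2d(32, out_channels, 1)
        self.lat2 = nn.Conv2d(64, out_channels, 1)
        self.lat3 = nn.Conv2d(128, out_channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        c3 = self.stage3(c2)
        # Top-down path: deep semantics are upsampled and merged with shallow texture features.
        p3 = self.lat3(c3)
        p2 = self.lat2(c2) + F.interpolate(p3, size=c2.shape[-2:], mode="nearest")
        p1 = self.lat1(c1) + F.interpolate(p2, size=c1.shape[-2:], mode="nearest")
        return p1  # finest pyramid level, combining deep and shallow information
```

Two such encoders with identical structure would then serve as the first and second image encoders of the dual detection model.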
In step S130 of the embodiment of the present application, a differential feature map between the feature map to be evaluated and the reference feature map is calculated. It should be appreciated that, in order to obtain the feature distribution differences of the cardiovascular detection image and the cardiovascular reference image in a high-dimensional feature space, after obtaining the feature map to be evaluated and the reference feature map having multi-scale global implicit feature information of the cardiovascular image to be evaluated and the reference cardiovascular image, respectively, a differential feature map between the feature map to be evaluated and the reference feature map is further calculated.
In a specific embodiment of the present application, the calculating a differential feature map between the feature map to be evaluated and the reference feature map includes:
wherein the formula is:
F_n = F_1 ⊖ F_2
wherein F_1 represents the feature map to be evaluated, ⊖ indicates position-wise difference, F_2 represents the reference feature map, and F_n represents the differential feature map.
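Under the position-wise-difference reading reconstructed above, a minimal sketch of this step would be (the function name and shape convention are assumptions):

```python
import torch

def differential_feature_map(feat_eval: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
    """Position-wise difference between the feature map to be evaluated and the reference feature map.

    Both tensors are assumed to have identical shape, e.g. (B, C, H, W), which is guaranteed
    by the two encoders sharing the same network structure.
    """
    assert feat_eval.shape == feat_ref.shape, "encoders must produce feature maps of the same dimension"
    return feat_eval - feat_ref
```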
In step S140 of the embodiment of the present application, the high-dimensional data manifold of the differential feature map is shaped and optimized to obtain an optimized differential feature map. It should be appreciated that, in the technical solution of the present application, the first image encoder and the second image encoder amplify the difference between the source images, i.e., the cardiovascular image to be evaluated and the reference image, by combining feature encoding of global and local image semantics, so that the differential feature map strengthens the expression of the image quality difference. On the other hand, however, image noise in the two source images that does not contribute to image semantic encoding is amplified as well, which causes distribution divergence in the high-dimensional feature semantic space of the differential feature map. As a result, inductive divergence arises when the differential feature map is passed through a classifier, which affects the training speed of the classifier and the accuracy of the classification result. Therefore, the differential feature map is preferably optimized by a distribution transfer oriented towards the class bounded domain.
In a specific embodiment of the present application, the shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map includes: shaping and optimizing the high-dimensional data manifold of the differential feature map by the following formula to obtain the optimized differential feature map;
wherein the formula is:
[Formula presented as an image (BDA0003998684430000101) in the original filing]
wherein f_i represents the feature value of each position of the differential feature map, f_i' represents the feature value of the corresponding position of the optimized differential feature map, N is the scale of the differential feature map, and log denotes the base-2 logarithm.
Here, the distribution transfer optimization oriented towards the class bounded domain addresses the divergence that may arise when the high-dimensional feature distribution expressed by the differential feature map is transferred to the target domain of the classification problem. Based on the structural information constraint of conditional classification, it converges the feature distribution towards the bounded domain of the feature set, so that the feature distribution of the differential feature map is transferred into a range with a stable, structurable boundary under the target domain. This improves the stability of the generalization iterations of the classification solution, that is, the training speed of the classifier and the accuracy of the classification result. In this way, the quality of the cardiovascular image to be evaluated can be accurately assessed, and a cardiovascular image whose quality meets the standard can be obtained, making it more suitable for subsequent medical image analysis.
In step S150 of the embodiment of the present application, the optimized differential feature map is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard. That is, the corrected difference in feature distribution between the cardiovascular detection image and the cardiovascular reference image in the high-dimensional feature space is classified by the classifier to determine whether the image quality of the cardiovascular image to be evaluated meets the standard. In this way, the quality of the cardiovascular image can be accurately assessed, a cardiovascular image of standard-compliant quality can be obtained, and the image becomes better suited for subsequent medical image analysis.
Fig. 5 illustrates a flowchart of passing the optimized differential feature map through a classifier to obtain a classification result in the cardiovascular image recognition method based on deep learning according to an embodiment of the present application. As shown in fig. 5, in a specific embodiment of the present application, passing the optimized differential feature map through a classifier to obtain a classification result comprises: S310, unfolding the optimized differential feature map row by row into a row vector; S320, performing fully connected encoding on the row vector using the fully connected layer of the classifier to obtain a classification feature vector; and S330, passing the classification feature vector through a Softmax classification function of the classifier to obtain the classification result. It should be understood that although unfolding the optimized differential feature map into a row vector reduces it to a one-dimensional vector and thereby loses part of the positional association information, the fully connected layer of the classifier makes full use of the information at each position of the optimized differential feature map when encoding the row vector into the classification feature vector. The Softmax function value of the one-dimensional classification feature vector is then computed, that is, the probability that the classification feature vector belongs to each classification label; in the embodiment of the application, the labels are that the image quality of the cardiovascular image to be evaluated meets the standard (first label) and that it does not meet the standard (second label). Finally, the label with the larger probability value is taken as the classification result.
In a specific embodiment of the present application, the step of passing the optimized differential feature map through a classifier to obtain a classification result includes: processing the optimized differential feature map by using the classifier according to the following formula to obtain a classification result;
wherein the formula is: O = softmax{(W_c, B_c) | Project(F)}, where Project(F) represents projecting the optimized differential feature map into a vector, W_c is a weight matrix, and B_c is a bias vector.
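A minimal sketch of such a classifier head, assuming a single fully connected layer corresponding to (W_c, B_c) followed by Softmax over the two labels (meets / does not meet the standard); the class name and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class QualityClassifier(nn.Module):
    """Flatten the optimized differential feature map, apply a fully connected layer, then Softmax."""
    def __init__(self, feature_dim: int, num_labels: int = 2):
        super().__init__()
        # feature_dim should equal channels * height * width of the optimized differential feature map.
        self.fc = nn.Linear(feature_dim, num_labels)  # plays the role of (W_c, B_c)

    def forward(self, diff_opt: torch.Tensor) -> torch.Tensor:
        row_vector = diff_opt.flatten(start_dim=1)    # unfold the feature map row by row
        logits = self.fc(row_vector)                  # fully connected encoding
        return torch.softmax(logits, dim=1)           # probability of each classification label
```

The label with the larger probability would then be reported as the classification result.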
In summary, the cardiovascular image recognition method based on deep learning according to the embodiment of the present application first acquires a cardiovascular image to be evaluated and a reference cardiovascular image; then passes the two images through a dual detection model comprising a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map; next calculates a differential feature map between the feature map to be evaluated and the reference feature map and shapes and optimizes its high-dimensional data manifold to obtain an optimized differential feature map; and finally passes the optimized differential feature map through a classifier to obtain a classification result.
Exemplary method
Fig. 6 illustrates a block diagram schematic of a deep learning based cardiovascular image recognition system in accordance with an embodiment of the present application. As shown in fig. 6, the deep learning-based cardiovascular image recognition system 100 according to the embodiment of the present application includes: an image acquisition module 110 for acquiring a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has an image quality meeting a predetermined requirement; an image encoding module 120, configured to pass the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model including a first image encoder and a second image encoder, respectively, so as to obtain a feature map to be evaluated and a reference feature map; a difference module 130, configured to calculate a difference feature map between the feature map to be evaluated and the reference feature map; the feature map optimizing module 140 is configured to perform shaping optimization on the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and an image quality detection module 150, configured to pass the optimized differential feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets a standard.
Fig. 7 illustrates a block diagram of an image encoding module in a deep learning based cardiovascular image recognition system according to an embodiment of the present application. As shown in fig. 7, in a specific embodiment of the present application, the image encoding module 120 includes: a first image encoding unit 121, configured to perform convolution processing, pooling processing, and nonlinear activation processing on input data in forward transfer of layers respectively using layers of a first image encoder of the dual detection model to output the feature map to be evaluated by a last layer of the first image encoder, where an input of the first layer of the first image encoder is the cardiovascular image to be evaluated; and a second image encoding unit 122 for performing convolution processing, pooling processing, and nonlinear activation processing on input data in forward transfer of layers, respectively, of each layer of a second image encoder using the dual detection model to output the reference feature map by a last layer of the second image encoder, wherein an input of a first layer of the second image encoder is the reference cardiovascular image.
In a specific embodiment of the present application, the first image encoder and the second image encoder have the same network structure.
In a specific embodiment of the present application, the first image encoder and the second image encoder are pyramid networks.
In a specific embodiment of the present application, the calculating a differential feature map between the feature map to be evaluated and the reference feature map includes:
wherein the formula is:
F_n = F_1 ⊖ F_2
wherein F_1 represents the feature map to be evaluated, ⊖ indicates position-wise difference, F_2 represents the reference feature map, and F_n represents the differential feature map.
In a specific embodiment of the present application, the shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map includes: shaping and optimizing the high-dimensional data manifold of the differential feature map by the following formula to obtain the optimized differential feature map;
wherein the formula is:
[Formula presented as an image (BDA0003998684430000131) in the original filing]
wherein f_i represents the feature value of each position of the differential feature map, f_i' represents the feature value of the corresponding position of the optimized differential feature map, N is the scale of the differential feature map, and log denotes the base-2 logarithm.
Fig. 8 illustrates a block diagram of an image quality detection module in a deep learning based cardiovascular image recognition system according to an embodiment of the present application. As shown in fig. 8, in a specific embodiment of the present application, the image quality detection module 150 includes: a feature map expansion unit 151, configured to expand the optimized differential feature map into row vectors according to rows; a full-connection encoding unit 152, configured to perform full-connection encoding on the row vectors by using a full-connection layer of the classifier to obtain classification feature vectors; and a classification unit 153, configured to pass the classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described deep learning-based cardiovascular image recognition system have been described in detail in the above description of the deep learning-based cardiovascular image recognition method with reference to fig. 2 to 5, and thus, repetitive descriptions thereof will be omitted.
As described above, the deep learning-based cardiovascular image recognition system 100 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server or the like deployed with a deep learning-based cardiovascular image recognition algorithm. In one example, the deep learning based cardiovascular image recognition system 100 can be integrated into the terminal device as a software module and/or hardware module. For example, the deep learning based cardiovascular image recognition system 100 may be a software module in the operating system of the terminal device or may be an application developed for the terminal device; of course, the deep learning based cardiovascular image recognition system 100 can also be one of a number of hardware modules of the terminal device.
Alternatively, in another example, the deep learning based cardiovascular image recognition system 100 and the terminal device may be separate devices, and the deep learning based cardiovascular image recognition system 100 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 9.
Fig. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium and may be executed by the processor 11 to implement the deep learning based cardiovascular image recognition method of the various embodiments of the present application described above and/or other desired functions. Various content such as the cardiovascular image to be evaluated and the reference cardiovascular image may also be stored in the computer readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information including the classification result and the like to the outside. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the deep learning based cardiovascular image recognition method according to various embodiments of the present application described in the "exemplary methods" section of the present specification.
The computer program product may write program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps of deep learning based cardiovascular image recognition according to various embodiments of the present application described in the above "exemplary methods" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that they must be connected, arranged or configured in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment and systems may be connected, arranged or configured in any manner. Words such as "including", "comprising", "having" and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A cardiovascular image recognition method based on deep learning, comprising:
acquiring a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has image quality meeting preset requirements;
passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder, respectively, to obtain a feature map to be evaluated and a reference feature map;
calculating a differential feature map between the feature map to be evaluated and the reference feature map;
shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and
passing the optimized differential feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard.
2. The deep learning based cardiovascular image recognition method according to claim 1, wherein the passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map, respectively, comprises:
using each layer of a first image encoder of the dual detection model to perform convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of the layers, so that the last layer of the first image encoder outputs the feature map to be evaluated, wherein the input of the first layer of the first image encoder is the cardiovascular image to be evaluated; and
using each layer of a second image encoder of the dual detection model to perform convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of the layers, so that the last layer of the second image encoder outputs the reference feature map, wherein the input of the first layer of the second image encoder is the reference cardiovascular image.
3. The deep learning based cardiovascular image recognition method of claim 2, wherein the first image encoder and the second image encoder have the same network structure.
4. The deep learning based cardiovascular image recognition method of claim 3, wherein the first image encoder and the second image encoder are pyramid networks.
5. The deep learning based cardiovascular image recognition method according to claim 4, wherein calculating the differential feature map between the feature map to be evaluated and the reference feature map comprises:
calculating the differential feature map between the feature map to be evaluated and the reference feature map with the following formula:
F_n = F_1 ⊖ F_2
wherein F_1 represents the feature map to be evaluated, ⊖ indicates a position-wise difference, F_2 represents the reference feature map, and F_n represents the differential feature map.
6. The deep learning based cardiovascular image recognition method of claim 5, wherein said shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map comprises:
shaping and optimizing the high-dimensional data manifold of the differential feature map by the following formula to obtain the optimized differential feature map;
wherein the formula is:
[formula provided as image FDA0003998684420000023 in the original filing]
wherein f_i represents the feature value of each position of the differential feature map, f_i' represents the feature value of the corresponding position of the optimized differential feature map, N is the scale of the differential feature map, and log denotes the logarithm to base 2.
7. The deep learning based cardiovascular image recognition method according to claim 6, wherein passing the optimized differential feature map through the classifier to obtain the classification result comprises:
expanding the optimized differential feature map row by row into a row vector;
performing full-connection encoding on the row vector using a fully connected layer of the classifier to obtain a classification feature vector; and
passing the classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
8. A deep learning based cardiovascular image recognition system, comprising:
the image acquisition module is used for acquiring a cardiovascular image to be evaluated and a reference cardiovascular image, wherein the reference cardiovascular image has image quality meeting preset requirements;
the image coding module is used for respectively passing the cardiovascular image to be evaluated and the reference cardiovascular image through a dual detection model comprising a first image encoder and a second image encoder to obtain a feature map to be evaluated and a reference feature map;
the difference module is used for calculating a differential feature map between the feature map to be evaluated and the reference feature map;
the feature map optimization module is used for shaping and optimizing the high-dimensional data manifold of the differential feature map to obtain an optimized differential feature map; and
the image quality detection module is used for passing the optimized differential feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the image quality of the cardiovascular image to be evaluated meets the standard.
9. The deep learning based cardiovascular image recognition system of claim 8, wherein the difference module is further configured to:
calculate the differential feature map between the feature map to be evaluated and the reference feature map with the following formula:
F_n = F_1 ⊖ F_2
wherein F_1 represents the feature map to be evaluated, ⊖ indicates a position-wise difference, F_2 represents the reference feature map, and F_n represents the differential feature map.
10. The deep learning based cardiovascular image recognition system of claim 9, wherein the feature map optimization module is further configured to:
shape and optimize the high-dimensional data manifold of the differential feature map with the following formula to obtain the optimized differential feature map;
wherein the formula is:
[formula provided as image FDA0003998684420000033 in the original filing]
wherein f_i represents the feature value of each position of the differential feature map, f_i' represents the feature value of the corresponding position of the optimized differential feature map, N is the scale of the differential feature map, and log denotes the logarithm to base 2.
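By way of illustration only, the sketch below shows one way the pipeline of claims 1-7 could be realized in PyTorch. It is a minimal, non-authoritative reading of the claims: the encoder depth, channel widths, input size, and class count are assumptions made here; the plain convolutional encoder stands in for the pyramid network of claim 4; and shape_differential_features is a hypothetical placeholder, because the shaping formula of claim 6 is only provided as an image in the filing.

# Illustrative sketch (not from the patent text): a dual-encoder image-quality
# pipeline in PyTorch. All sizes, widths, and the shaping function are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleImageEncoder(nn.Module):
    # Each layer applies convolution, pooling, and a nonlinear activation (claim 2).
    def __init__(self, in_channels: int = 1, widths=(16, 32, 64)):
        super().__init__()
        layers, prev = [], in_channels
        for w in widths:
            layers += [
                nn.Conv2d(prev, w, kernel_size=3, padding=1),  # convolution processing
                nn.MaxPool2d(kernel_size=2),                   # pooling processing
                nn.ReLU(inplace=True),                         # nonlinear activation processing
            ]
            prev = w
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


def shape_differential_features(diff_map: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the manifold shaping of claim 6; the patented
    # formula is only given as an image, so a simple log2-scaled normalization
    # is used here purely to keep the sketch runnable.
    n = float(diff_map[0].numel())                 # scale N of the differential feature map
    return diff_map * torch.log2(torch.tensor(n)) / n


class QualityRecognizer(nn.Module):
    # Dual detection model (claims 1-3) followed by the classifier head (claim 7).
    def __init__(self, num_classes: int = 2, feature_dim: int = 64 * 16 * 16):
        super().__init__()
        self.first_encoder = SimpleImageEncoder()   # encodes the image to be evaluated
        self.second_encoder = SimpleImageEncoder()  # encodes the reference image
        self.fc = nn.Linear(feature_dim, num_classes)

    def forward(self, image_to_evaluate, reference_image):
        f1 = self.first_encoder(image_to_evaluate)   # feature map to be evaluated
        f2 = self.second_encoder(reference_image)    # reference feature map
        diff_map = f1 - f2                           # position-wise differential feature map (claim 5)
        optimized = shape_differential_features(diff_map)
        row_vector = optimized.flatten(start_dim=1)  # expand row-wise into a row vector
        logits = self.fc(row_vector)                 # full-connection encoding
        return F.softmax(logits, dim=-1)             # Softmax classification result


if __name__ == "__main__":
    model = QualityRecognizer()
    img_eval = torch.randn(1, 1, 128, 128)  # placeholder cardiovascular image to be evaluated
    img_ref = torch.randn(1, 1, 128, 128)   # placeholder reference image of acceptable quality
    print(model(img_eval, img_ref))         # probabilities: meets the standard vs. does not

Such a pair of encoders would still need to be trained jointly (for example, with a cross-entropy loss on images labeled as acceptable or unacceptable) before the Softmax output is meaningful; the sketch only fixes the data flow implied by the claims.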
CN202211608999.9A 2022-12-14 2022-12-14 Cardiovascular image recognition method and system based on deep learning Pending CN116091414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211608999.9A CN116091414A (en) 2022-12-14 2022-12-14 Cardiovascular image recognition method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN116091414A true CN116091414A (en) 2023-05-09

Family

ID=86185931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211608999.9A Pending CN116091414A (en) 2022-12-14 2022-12-14 Cardiovascular image recognition method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN116091414A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116980976A (en) * 2023-09-21 2023-10-31 深圳市聚亚科技有限公司 Data transparent transmission method based on 4G communication module
CN117649943A (en) * 2024-01-30 2024-03-05 吉林大学 Shaping data intelligent analysis system and method based on machine learning
CN117649943B (en) * 2024-01-30 2024-04-30 吉林大学 Shaping data intelligent analysis system and method based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination