CN114708156A - Method, device and equipment for generating reference image of ultrasonic image and storage medium - Google Patents

Method, device and equipment for generating reference image of ultrasonic image and storage medium

Info

Publication number
CN114708156A
CN114708156A
Authority
CN
China
Prior art keywords
image
feature
reference image
original
ultrasonic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210253006.4A
Other languages
Chinese (zh)
Inventor
郑喜民
胡浩楠
舒畅
陈又新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210253006.4A priority Critical patent/CN114708156A/en
Priority to PCT/CN2022/090159 priority patent/WO2023173545A1/en
Publication of CN114708156A publication Critical patent/CN114708156A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/10136 3D ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of image feature processing, and in particular to a method, an apparatus, a computer device, and a storage medium for generating a reference image of an ultrasonic image. The method includes: acquiring pre-configured mask information; acquiring an original ultrasonic image; performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image; inputting the characteristic region ultrasonic image into a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second feature variable; obtaining shape features of the image according to the first feature variable and fine-grained features of the image according to the second feature variable; and generating a reference image from the shape features and the fine-grained features. The method can generate an accurate, reliable, and interpretable reference image, improving the reliability of prediction results.

Description

Method, device and equipment for generating reference image of ultrasonic image and storage medium
Technical Field
The present application relates to the field of image feature processing, and in particular, to a method and an apparatus for generating a reference image of an ultrasound image, a computer device, and a storage medium.
Background
With the development of imaging technology, digital images have become a primary form of medical data, and image recognition by artificial intelligence is used to assist doctors in clinical decisions. The biggest barrier to the penetration of current artificial intelligence into the medical field is the "black box" problem of deep neural networks: because AI decisions cannot be accurately and reasonably explained, humans cannot trust them. One current approach computes the contribution of an ultrasound image to the network's prediction result against a null reference map. Since this does not provide an accurate reference image, it cannot distinguish whether the network attends to the shape features or the texture features of the ultrasonic image, and therefore cannot judge whether the network attends to the correct features; that is, the current reference map lacks the characteristics of a good ultrasonic reference map and cannot accurately represent the feature information in the image.
Disclosure of Invention
The main purpose of the present application is to provide a method and an apparatus for generating a reference image of an ultrasonic image, a computer device, and a storage medium, aiming to solve the problem of low accuracy in the feature representation of ultrasonic reference images.
In order to achieve the above object, the present application provides a method for generating a reference image of an ultrasound image, including:
acquiring pre-configured mask information;
obtaining an original ultrasonic image;
performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image;
inputting the ultrasonic image of the characteristic region into a deep learning network to obtain a first characteristic variable; inputting the original ultrasonic image into a deep learning network to obtain a second characteristic variable;
obtaining shape features of the image according to the first feature variable, and obtaining fine-grained features of the image according to the second feature variable;
generating a reference image from the shape features and the fine-grained features.
Further, the inputting of the characteristic region ultrasonic image into a deep learning network to obtain a first feature variable and the inputting of the original ultrasonic image into a deep learning network to obtain a second feature variable includes:
inputting the feature region ultrasonic image into a deep learning network, and transforming potential codes of the feature region ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a first feature variable;
and inputting the original ultrasonic image into a deep learning network, and transforming the potential code of the original ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a second characteristic variable.
Further, the obtaining of the shape features of the image according to the first feature variable and the obtaining of the fine-grained features of the image according to the second feature variable includes:
inputting the first feature variable into a low-resolution convolution layer in a generation network to obtain the shape features of the image;
and inputting the second feature variable into a high-resolution convolution layer in the generation network to obtain the fine-grained features of the image.
Further, the low-resolution convolution layer is a convolution layer with a resolution of 4² to 32²; the high-resolution convolution layer is a convolution layer with a resolution of 64² to 1024².
Further, after generating the reference image according to the shape features and the fine-grained features, the method further includes:
inputting the original ultrasonic image and the reference image into a classification network, and calculating the increment of the original ultrasonic image compared with the reference image based on the classification network;
an attribution map is generated according to the increments.
Further, the inputting of the original ultrasound image and the reference image into a classification network and the calculating of the increment of the original ultrasound image compared with the reference image based on the classification network includes:
dividing the original ultrasonic image into a plurality of regional subimages;
inputting each regional sub-image and the reference image into a classification network respectively, and calculating a first contribution amount of a prediction score of each regional sub-image to a respective result compared with the reference image based on the classification network;
acquiring a second contribution amount of the environmental feature in the reference image;
determining the increment according to the first contribution amount and the second contribution amount.
Further, after generating the attribution graph according to the increment, the method further includes:
acquiring a target subregion image with the highest contribution amount in the attribution map;
and determining effective characteristic information of the reference image according to the target subregion image.
The present application also provides an apparatus for generating a reference image of an ultrasound image, including:
the configuration information module is used for acquiring pre-configured mask information;
the original image module is used for acquiring an original ultrasonic image;
the image processing module is used for performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image;
the variable conversion module is used for inputting the feature region ultrasonic image into a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second characteristic variable;
the feature extraction module is used for obtaining shape features of the image according to the first feature variable and fine-grained features of the image according to the second feature variable;
the reference image module is used for generating a reference image from the shape features and the fine-grained features.
The present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method for generating a reference image of an ultrasound image according to any one of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, realizes the steps of the method for generating a reference image of an ultrasound image according to any one of the above.
The embodiments of the present application provide a method for generating a reference image for an ultrasonic image by stripping out information such as background and noise in the ultrasonic image. First, an original ultrasonic image and pre-configured mask information are acquired; the mask information is used to process the region of interest in the original ultrasonic image, and may be selected at random from a set of mask information, or generated at random according to a configured generation rule. The mask information is then resized to the same size as the original ultrasonic image, and the two are multiplied to obtain a characteristic region ultrasonic image. The characteristic region ultrasonic image is input into a deep learning network, which extracts the features that can influence a reference image and encodes them into a first feature variable; similarly, the original ultrasonic image is input into the deep learning network to obtain a second feature variable. Shape features of the image, which are low-dimensional features, are obtained from the first feature variable, and fine-grained features, which are high-dimensional features, are obtained from the second feature variable; a reference image is then generated from the shape features and the fine-grained features. The final reference image contains no effective features, only the background information and noise information of the original ultrasonic image: it has the same environment as the original ultrasonic image but provides no effective information, and is therefore close to an ideal reference image. In this way, the information in the ultrasonic image other than the effective features is accurately represented, and an accurate reference image and an accurate interpretable explanation are provided for the prediction results of AI decisions.
Drawings
Fig. 1 is a flowchart illustrating an embodiment of a method for generating a reference image of an ultrasound image according to the present application;
FIG. 2 is a flowchart illustrating an embodiment of calculating the increment of the original ultrasound image compared to the reference image according to the present disclosure;
fig. 3 is a schematic structural diagram of an embodiment of a device for generating a reference image of an ultrasound image according to the present application;
FIG. 4 is a block diagram illustrating a computer device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, the present application provides a method for generating a reference image of an ultrasound image, which includes steps S10-S60, and the steps of the method for generating a reference image of an ultrasound image are described in detail as follows.
S10: acquiring the pre-configured mask information.
This embodiment is applied to the scenario of generating an ultrasonic reference image. The biggest barrier to the penetration of current artificial intelligence into the medical field is the "black box" problem of deep neural networks: humans cannot trust decisions made by unexplainable AI (Artificial Intelligence), so the prediction results made by AI need to be explained, and such an explanation is determined by calculating the contribution of each different region of the ultrasonic image to the prediction result relative to a reference image (baseline image). To determine these contributions accurately, this embodiment generates a corresponding reference image for each ultrasonic image, where the reference image is the information left after the effective features of the original ultrasonic image are stripped away, that is, the environmental features of the original ultrasonic image. First, pre-configured mask information (a mask) is acquired; the mask information consists of small patches of size a × a. In one embodiment, one mask is randomly selected from a set of mask information as the pre-configured mask information; in another embodiment, one mask is randomly generated as the pre-configured mask information by configuring a generation rule for the mask information.
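For illustration, the following is a minimal sketch of one way to pre-configure such mask information, assuming NumPy; the patch size and image size are illustrative choices, not values fixed by this application:

```python
import numpy as np

def make_patch_mask(height: int, width: int, a: int, rng=None) -> np.ndarray:
    """Build a binary mask of a x a patches whose values x_ij are uniform over {0, 1}."""
    rng = np.random.default_rng() if rng is None else rng
    grid_h, grid_w = -(-height // a), -(-width // a)   # ceil division: patches per axis
    grid = rng.integers(0, 2, size=(grid_h, grid_w))   # one x_ij per patch
    mask = np.kron(grid, np.ones((a, a), dtype=grid.dtype))  # expand each x_ij to an a x a patch
    return mask[:height, :width]                       # crop to the requested size

mask = make_patch_mask(256, 256, a=16)                 # pre-configured mask information
```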
S20: acquiring an original ultrasonic image.
In this embodiment, after the pre-configured mask information is obtained, an original ultrasonic image is acquired so that a corresponding reference image can be generated for each ultrasonic image. Specifically, the system may interface with other medical devices and acquire the original ultrasonic image after it has been captured by those devices.
S30: performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image.
In this embodiment, after the pre-configured mask information and the original ultrasonic image are acquired, the original ultrasonic image is multiplied by the mask information. The pixel values x_ij on each patch of the mask information obey a uniform distribution with x_ij ∈ {0, 1}, so the region of interest can be processed more accurately through the mask information. The mask information is first resized to the same size as the original ultrasonic image, and the two are then multiplied element-wise to obtain a characteristic region ultrasonic image; the characteristic region ultrasonic image is defined as source A, and the original ultrasonic image is defined as source B.
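A minimal sketch of this product processing, assuming NumPy and OpenCV for the resize; the array shapes and the helper name apply_mask are illustrative assumptions:

```python
import cv2
import numpy as np

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Resize the mask to the image size, then take the element-wise product."""
    h, w = image.shape[:2]
    resized = cv2.resize(mask.astype(np.uint8), (w, h),
                         interpolation=cv2.INTER_NEAREST)  # nearest keeps values in {0, 1}
    if image.ndim == 3:
        resized = resized[..., None]                       # broadcast over channels
    return image * resized

mask = (np.random.rand(16, 16) > 0.5).astype(np.uint8)        # stand-in patch grid
original_image = np.random.rand(256, 256).astype(np.float32)  # stand-in for source B
feature_region = apply_mask(original_image, mask)             # source A
```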
S40: inputting the characteristic region ultrasonic image into a deep learning network to obtain a first feature variable; and inputting the original ultrasonic image into a deep learning network to obtain a second feature variable.
In this embodiment, after the original ultrasonic image is acquired and multiplied with the mask information to obtain the characteristic region ultrasonic image, the characteristic region ultrasonic image is input into a deep learning network. The deep learning network extracts from it the features that can influence a reference image, and the extracted features are then encoded and converted to obtain a first feature variable. Similarly, the original ultrasonic image is input into the deep learning network, the features that can influence a reference image are extracted, and the features extracted from the original ultrasonic image are encoded and converted to obtain a second feature variable.
S50: obtaining shape features of the image according to the first feature variable, and obtaining fine-grained features of the image according to the second feature variable.
In this embodiment, after the characteristic region ultrasonic image is input into the deep learning network to obtain the first feature variable and the original ultrasonic image is input into the deep learning network to obtain the second feature variable, shape features of the image are obtained from the first feature variable and fine-grained features of the image are obtained from the second feature variable. The shape features are low-dimensional features of the image; the fine-grained features are high-dimensional features of the image, including the texture features and the color features of the image. After the first and second feature variables are obtained, they are combined as the style variable of the image, defined as y = (y_s, y_b), where y_s is the first feature variable and y_b is the second feature variable. By combining the first and second feature variables and inputting them into the convolution layers, the corresponding shape features and fine-grained features can be obtained.
S60: generating a reference image according to the shape features and the fine-grained features.
In this embodiment, after the shape features of the image are obtained from the first feature variable and the fine-grained features from the second, a reference image is generated from the shape features and the fine-grained features. The final reference image contains no effective features, only the background information and noise information of the original ultrasonic image: it has the same environment as the original ultrasonic image but provides no effective information, and is close to an ideal reference image. The background, noise, and similar information in the original ultrasonic image are thus accurately stripped out, and the generated reference image can exhibit rich environmental features, accurately representing the information in the image other than the effective features and providing an accurate reference image and an accurate interpretable explanation for the prediction results of AI decisions.
This embodiment thus provides a method for generating a reference image for an ultrasonic image by stripping out information such as background and noise: pre-configured mask information, either randomly selected from a set of mask information or randomly generated according to a configured generation rule, is resized and multiplied with the original ultrasonic image to obtain a characteristic region ultrasonic image; the characteristic region ultrasonic image and the original ultrasonic image are each passed through a deep learning network, which extracts and encodes the features that can influence a reference image, yielding the first and second feature variables; shape features (low-dimensional features) are obtained from the first feature variable and fine-grained features (high-dimensional features) from the second; and a reference image is generated from the shape features and the fine-grained features. The final reference image contains no effective features, only the background and noise information of the original ultrasonic image: it has the same environment as the original ultrasonic image but provides no effective information, and is close to an ideal reference image, so that the information other than the effective features in the ultrasonic image is accurately represented and an accurate reference image and an accurate interpretable explanation are provided for the prediction results of AI decisions.
In one embodiment, the inputting of the feature region ultrasonic image into a deep learning network to obtain a first feature variable and the inputting of the original ultrasonic image into a deep learning network to obtain a second feature variable includes:
inputting the feature region ultrasonic image into a deep learning network, and transforming potential codes of the feature region ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a first feature variable;
and inputting the original ultrasonic image into a deep learning network, and transforming the potential code of the original ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a second characteristic variable.
In this embodiment, the feature region ultrasonic image is input into a deep learning network that includes a nonlinear mapping network. The latent code of the feature region ultrasonic image is transformed based on the nonlinear mapping network and an affine transformation in the deep learning network, yielding a variable that can represent the corresponding features in the feature region ultrasonic image; this variable is defined as the first feature variable. Similarly, the original ultrasonic image is input into the deep learning network, and its latent code is transformed based on the nonlinear mapping network and the affine transformation to obtain a variable that can represent the corresponding features in the original ultrasonic image, defined as the second feature variable. The image features of the original ultrasonic image and of the feature region ultrasonic image are thus accurately extracted, improving the accuracy of reference image generation.
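The following is a minimal sketch of such a transformation, assuming PyTorch and a StyleGAN-style design in which a nonlinear mapping network followed by a learned affine layer turns a latent code into a feature variable; the layer count and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Nonlinear mapping network plus an affine transform, applied to a latent code."""
    def __init__(self, z_dim: int = 512, w_dim: int = 512, n_layers: int = 4):
        super().__init__()
        layers = []
        dim = z_dim
        for _ in range(n_layers):
            layers += [nn.Linear(dim, w_dim), nn.LeakyReLU(0.2)]
            dim = w_dim
        self.mapping = nn.Sequential(*layers)   # nonlinear mapping network
        self.affine = nn.Linear(w_dim, w_dim)   # affine transformation

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.affine(self.mapping(z))     # transformed latent code = feature variable

z = torch.randn(1, 512)     # latent code of the (feature region) ultrasonic image
y_s = MappingNetwork()(z)   # e.g. the first feature variable
```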
In one embodiment, the obtaining of the shape features of the image according to the first feature variable and the obtaining of the fine-grained features of the image according to the second feature variable includes:
inputting the first feature variable into a low-resolution convolution layer in a generation network to obtain the shape features of the image;
and inputting the second feature variable into a high-resolution convolution layer in the generation network to obtain the fine-grained features of the image.
In this embodiment, the first feature variable is input into a generation network that includes a plurality of convolution layers. The first feature variable is input into a low-resolution convolution layer in the generation network, which controls the generation of the low-dimensional features of the image, that is, the shape features. Meanwhile, the second feature variable is input into a high-resolution convolution layer in the generation network, which controls the generation of the high-dimensional features of the image, that is, the fine-grained features. By inputting the different variables that control image generation into different convolution layers, the high-dimensional and low-dimensional features of the image are obtained, and the image features of the original ultrasonic image and the feature region ultrasonic image are accurately extracted, improving the accuracy of reference image generation.
In one embodiment, the low-resolution convolution layer is a convolution layer with a resolution of 4² to 32², and the high-resolution convolution layer is a convolution layer with a resolution of 64² to 1024².
In this embodiment, by configuring convolution layers with different resolutions and inputting the different variables that control image generation into different convolution layers, the high-dimensional features and low-dimensional features of the image are obtained, and the image features of the original ultrasonic image and the feature region ultrasonic image are accurately extracted, improving the accuracy of reference image generation.
In one embodiment, after generating the reference image according to the shape features and the fine-grained features, the method further includes:
inputting the original ultrasonic image and the reference image into a classification network, and calculating the increment of the original ultrasonic image compared with the reference image based on the classification network;
an attribution map is generated according to the increments.
In this embodiment, after the reference image is generated from the shape features and the fine-grained features, the original ultrasonic image and the reference image are input into a classification network, and the increment of the original ultrasonic image compared with the reference image is calculated based on the classification network. The increment represents the contribution of each different region of the original ultrasonic image to the decision result. An attribution map is then generated from the increment; it shows the contribution of each region to the decision result, making it possible to determine whether the classification network correctly focuses on the correct image features while ignoring the environmental features of the image, which improves the interpretability of the decision result.
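A minimal sketch of the increment computation, assuming PyTorch and a trained classification network clf whose output is a vector of class logits; the softmax scoring is an illustrative assumption:

```python
import torch

@torch.no_grad()
def increment(clf, original: torch.Tensor, reference: torch.Tensor, cls: int) -> float:
    """Prediction score of the original image minus that of its reference (baseline) image."""
    p_orig = clf(original).softmax(-1)[0, cls]   # score on the original ultrasonic image
    p_ref = clf(reference).softmax(-1)[0, cls]   # score on the generated reference image
    return float(p_orig - p_ref)                 # contribution beyond the baseline
```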
In one embodiment, as shown in fig. 2, the inputting of the original ultrasound image and the reference image into a classification network and the calculating of the increment of the original ultrasound image compared with the reference image based on the classification network includes:
S61: dividing the original ultrasonic image into a plurality of region sub-images;
S62: inputting each region sub-image and the reference image into a classification network respectively, and calculating, based on the classification network, a first contribution amount of the prediction score of each region sub-image to the respective result compared with the reference image;
S63: acquiring a second contribution amount of the environmental features in the reference image;
S64: determining the increment according to the first contribution amount and the second contribution amount.
In this embodiment, the original ultrasonic image is first divided into a plurality of region sub-images, for example 9 × 9 region sub-images. Each region sub-image and the reference image are then input into the classification network respectively, and the contribution amount of the prediction score of each region sub-image to the respective result, compared with the reference image, is calculated based on the classification network; this is defined as the first contribution amount. A second contribution amount, that of the environmental features in the reference image, is then obtained, and the increment is determined from the first and second contribution amounts. Specifically, the real contribution amount of each region sub-image is obtained by subtracting the second contribution amount from the first contribution amount, and the real contribution amounts of all region sub-images together form the increment of the original ultrasonic image compared with the reference image. The contribution of each different region of the original ultrasonic image to the decision result is thus accurately represented, improving the interpretability of the decision result.
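A minimal sketch of this per-region calculation, assuming PyTorch and a trained classification network clf; pasting each region sub-image onto the reference image is an illustrative way to score a region against the baseline, and the 3 × 3 grid stands in for the 9 × 9 example above:

```python
import torch

@torch.no_grad()
def region_increments(clf, original: torch.Tensor, reference: torch.Tensor,
                      cls: int, grid: int = 3) -> torch.Tensor:
    """Per-region contributions: first contribution minus the reference's own contribution."""
    base = clf(reference).softmax(-1)[0, cls]    # second contribution (environmental features)
    _, _, h, w = original.shape
    gh, gw = h // grid, w // grid
    out = torch.zeros(grid, grid)
    for i in range(grid):
        for j in range(grid):
            probe = reference.clone()
            rows = slice(i * gh, (i + 1) * gh)
            cols = slice(j * gw, (j + 1) * gw)
            probe[:, :, rows, cols] = original[:, :, rows, cols]  # insert one region sub-image
            first = clf(probe).softmax(-1)[0, cls]                # first contribution
            out[i, j] = first - base                              # real contribution of the region
    return out  # the entries form the increment / attribution map; its argmax is the target sub-region
```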
In one embodiment, after the generating the attribution graph according to the increment, the method further comprises:
acquiring a target subregion image with the highest contribution amount in the attribution map;
and determining effective characteristic information of the reference image according to the target subregion image.
In this embodiment, after the attribution map is generated from the increment, the region with the highest contribution amount in the attribution map is acquired and defined as the target sub-region image. The effective feature information of the reference image is then determined from the target sub-region image, so that manual comparison can be performed against this effective feature information, improving the accuracy of the reference image.
Referring to fig. 3, the present application also provides an apparatus for generating a reference image of an ultrasound image, including:
a configuration information module 10, configured to obtain pre-configured mask information;
an original image module 20, configured to obtain an original ultrasound image;
an image processing module 30, configured to perform product processing on the original ultrasound image and the mask information to obtain an ultrasound image of a characteristic region;
the variable conversion module 40 is configured to input the feature region ultrasound image to a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second characteristic variable;
the feature extraction module 50 is configured to obtain shape features of the image according to the first feature variable and fine-grained features of the image according to the second feature variable;
a reference image module 60 configured to generate a reference image based on the shape features and the fine-grained features.
As described above, it is understood that the components of the apparatus for generating a reference image of an ultrasound image proposed in the present application can implement the functions of any of the methods for generating a reference image of an ultrasound image described above.
In one embodiment, the inputting of the feature region ultrasonic image into a deep learning network to obtain a first feature variable and the inputting of the original ultrasonic image into a deep learning network to obtain a second feature variable includes:
inputting the feature region ultrasonic image into a deep learning network, and transforming potential codes of the feature region ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a first feature variable;
and inputting the original ultrasonic image into a deep learning network, and transforming the potential code of the original ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a second characteristic variable.
In one embodiment, the obtaining of the shape features of the image according to the first feature variable and the obtaining of the fine-grained features of the image according to the second feature variable includes:
inputting the first feature variable into a low-resolution convolution layer in a generation network to obtain the shape features of the image;
and inputting the second feature variable into a high-resolution convolution layer in the generation network to obtain the fine-grained features of the image.
In one embodiment, the low-resolution convolution layer is a convolution layer with a resolution of 4² to 32²; the high-resolution convolution layer is a convolution layer with a resolution of 64² to 1024².
In one embodiment, after generating the reference image according to the shape features and the fine-grained features, the method further includes:
inputting the original ultrasonic image and the reference image into a classification network, and calculating the increment of the original ultrasonic image compared with the reference image based on the classification network;
and generating an attribution graph according to the increment.
In one embodiment, the inputting of said original ultrasound image and said reference image into a classification network and the calculating of an increment of said original ultrasound image compared with said reference image based on said classification network comprises:
dividing the original ultrasonic image into a plurality of regional subimages;
inputting each regional sub-image and the reference image into a classification network respectively, and calculating a first contribution amount of a prediction score of each regional sub-image to a respective result compared with the reference image based on the classification network;
acquiring a second contribution amount of the environmental feature in the reference image;
determining the increment according to the first contribution amount and the second contribution amount.
In one embodiment, after the generating the attribution graph according to the increment, the method further comprises:
acquiring a target subregion image with the highest contribution amount in the attribution map;
and determining effective characteristic information of the reference image according to the target subregion image.
Referring to fig. 4, a computer device, which may be a mobile terminal and whose internal structure may be as shown in fig. 4, is also provided in an embodiment of the present application. The computer device includes a processor, a memory, a network interface, a display device, and an input device connected through a system bus. The network interface of the computer device is used for communicating with an external terminal through a network connection. The display device of the computer device is used for displaying the offline application, and the input device is used for receiving the user's input in the offline application. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device includes a non-volatile storage medium, which stores an operating system, a computer program, and a database. The database of the computer device is used for storing the original data. The computer program, when executed by the processor, implements a method for generating a reference image of an ultrasonic image.
The processor executes the method for generating the reference image of the ultrasonic image, which comprises the following steps: acquiring pre-configured mask information; acquiring an original ultrasonic image; performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image; inputting the characteristic region ultrasonic image into a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second feature variable; obtaining shape features of the image according to the first feature variable, and obtaining fine-grained features of the image according to the second feature variable; and generating a reference image from the shape features and the fine-grained features.
The computer device thus provides a method for generating a reference image for an ultrasonic image by stripping out information such as background and noise. First, an original ultrasonic image and pre-configured mask information are acquired; the mask information is used to process the region of interest in the original ultrasonic image, and may be selected at random from a set of mask information or generated at random according to a configured generation rule. The mask information is resized to the same size as the original ultrasonic image and the two are multiplied to obtain a characteristic region ultrasonic image. The characteristic region ultrasonic image is input into a deep learning network, which extracts the features that can influence a reference image and encodes them into a first feature variable; similarly, the original ultrasonic image is input into the deep learning network to obtain a second feature variable. Shape features (low-dimensional features) of the image are obtained from the first feature variable and fine-grained features (high-dimensional features) from the second, and a reference image is generated from them. The final reference image contains no effective features, only the background and noise information of the original ultrasonic image: it has the same environment as the original ultrasonic image but provides no effective information, and is close to an ideal reference image, so that the information other than the effective features in the ultrasonic image is accurately represented and an accurate reference image and an accurate interpretable explanation are provided for the prediction results of AI decisions.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements a method for generating a reference image of an ultrasonic image, comprising the steps of: acquiring pre-configured mask information; acquiring an original ultrasonic image; performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image; inputting the characteristic region ultrasonic image into a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second feature variable; obtaining shape features of the image according to the first feature variable, and obtaining fine-grained features of the image according to the second feature variable; and generating a reference image from the shape features and the fine-grained features.
The computer-readable storage medium thus provides a method for generating a reference image for an ultrasonic image by stripping out information such as background and noise. An original ultrasonic image and pre-configured mask information are acquired; the mask information may be selected at random from a set of mask information or generated at random according to a configured generation rule, and is resized to the size of the original ultrasonic image and multiplied with it to obtain a characteristic region ultrasonic image. The characteristic region ultrasonic image and the original ultrasonic image are each input into a deep learning network, which extracts and encodes the features that can influence a reference image, yielding the first and second feature variables. Shape features (low-dimensional features) are obtained from the first feature variable and fine-grained features (high-dimensional features) from the second, and a reference image is generated from the shape features and the fine-grained features. The final reference image contains no effective features, only the background and noise information of the original ultrasonic image: it has the same environment as the original ultrasonic image but provides no effective information, and is close to an ideal reference image, so that the information other than the effective features is accurately represented and an accurate reference image and an accurate interpretable explanation are provided for the prediction results of AI decisions.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for generating a reference image of an ultrasound image, comprising:
acquiring pre-configured mask information;
obtaining an original ultrasonic image;
performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image;
inputting the feature region ultrasonic image into a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second characteristic variable;
obtaining shape features of the image according to the first feature variable, and obtaining fine-grained features of the image according to the second feature variable;
generating a reference image from the shape features and the fine-grained features.
2. The method for generating a reference image of an ultrasound image according to claim 1, wherein the inputting of the feature region ultrasonic image into a deep learning network to obtain a first feature variable and the inputting of the original ultrasonic image into a deep learning network to obtain a second feature variable comprises:
inputting the feature region ultrasonic image into a deep learning network, and transforming potential codes of the feature region ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a first feature variable;
and inputting the original ultrasonic image into a deep learning network, and transforming the potential code of the original ultrasonic image based on a nonlinear mapping network and affine transformation in the deep learning network to obtain a second characteristic variable.
3. The method for generating a reference image of an ultrasound image according to claim 1, wherein the obtaining of the shape feature of the image according to the first feature variable and the obtaining of the fine-grained feature of the image according to the second feature variable comprises:
inputting the first characteristic variable into a low-resolution convolution layer in a generation network to obtain the shape characteristic of the image;
and inputting the second characteristic variable into a high-resolution convolution layer in the generation network to obtain the fine-grained characteristic of the image.
4. The method for generating a reference image of an ultrasound image according to claim 3, wherein the low-resolution convolution layer is a convolution layer with a resolution of 4² to 32², and the high-resolution convolution layer is a convolution layer with a resolution of 64² to 1024².
5. The method for generating a reference image of an ultrasound image according to claim 1, further comprising, after generating the reference image from the shape features and the fine-grained features:
inputting the original ultrasonic image and the reference image into a classification network, and calculating the increment of the original ultrasonic image compared with the reference image based on the classification network;
an attribution map is generated according to the increments.
6. The method for generating a reference image of an ultrasound image according to claim 5, wherein the inputting of the original ultrasound image and the reference image into a classification network and the calculating of an increment of the original ultrasound image compared with the reference image based on the classification network comprises:
dividing the original ultrasonic image into a plurality of regional subimages;
inputting each regional sub-image and the reference image into a classification network respectively, and calculating a first contribution amount of a prediction score of each regional sub-image to a respective result compared with the reference image based on the classification network;
acquiring a second contribution amount of the environmental feature in the reference image;
determining the increment according to the first contribution amount and the second contribution amount.
7. The method for generating a reference image of an ultrasound image according to claim 5, further comprising, after generating the attribution map according to the increment:
acquiring a target subregion image with the highest contribution amount in the attribution map;
and determining effective characteristic information of the reference image according to the target subregion image.
8. An apparatus for generating a reference image of an ultrasound image, comprising:
the configuration information module is used for acquiring pre-configured mask information;
the original image module is used for acquiring an original ultrasonic image;
the image processing module is used for performing product processing on the original ultrasonic image and the mask information to obtain a characteristic region ultrasonic image;
the variable conversion module is used for inputting the feature region ultrasonic image into a deep learning network to obtain a first feature variable; inputting the original ultrasonic image into a deep learning network to obtain a second characteristic variable;
the feature extraction module is used for obtaining shape features of the image according to the first feature variable and fine-grained features of the image according to the second feature variable;
a reference image module used for generating a reference image from the shape features and the fine-grained features.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method for generating a reference image of an ultrasound image according to any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for generating a reference image of an ultrasound image according to any one of claims 1 to 7.
CN202210253006.4A 2022-03-15 2022-03-15 Method, device and equipment for generating reference image of ultrasonic image and storage medium Pending CN114708156A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210253006.4A CN114708156A (en) 2022-03-15 2022-03-15 Method, device and equipment for generating reference image of ultrasonic image and storage medium
PCT/CN2022/090159 WO2023173545A1 (en) 2022-03-15 2022-04-29 Method and apparatus for generating reference image of ultrasound image, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210253006.4A CN114708156A (en) 2022-03-15 2022-03-15 Method, device and equipment for generating reference image of ultrasonic image and storage medium

Publications (1)

Publication Number Publication Date
CN114708156A (en) 2022-07-05

Family

ID=82169344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210253006.4A Pending CN114708156A (en) 2022-03-15 2022-03-15 Method, device and equipment for generating reference image of ultrasonic image and storage medium

Country Status (2)

Country Link
CN (1) CN114708156A (en)
WO (1) WO2023173545A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007080895A1 (en) * 2006-01-10 2007-07-19 Kabushiki Kaisha Toshiba Ultrasonograph and ultrasonogram creating method
KR101286222B1 (en) * 2011-09-19 2013-07-15 삼성메디슨 주식회사 Method and apparatus for processing image, ultrasound diagnosis apparatus and medical imaging system
CN110032985A (en) * 2019-04-22 2019-07-19 清华大学深圳研究生院 A kind of automatic detection recognition method of haemocyte
CN112001226B (en) * 2020-07-07 2024-05-28 中科曙光(南京)计算技术有限公司 Unmanned 3D target detection method, device and storage medium
CN112102311B (en) * 2020-09-27 2023-07-18 平安科技(深圳)有限公司 Thyroid nodule image processing method and device and computer equipment

Also Published As

Publication number Publication date
WO2023173545A1 (en) 2023-09-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination