CN111292230B - Spiral transformation data amplification method, system, medium and equipment in deep learning - Google Patents

Spiral transformation data amplification method, system, medium and equipment in deep learning

Info

Publication number
CN111292230B
Authority
CN
China
Prior art keywords
spiral
transformation
dimensional image
data
image data
Prior art date
Legal status
Active
Application number
CN202010098682.XA
Other languages
Chinese (zh)
Other versions
CN111292230A
Inventor
Qian Xiaohua (钱晓华)
Chen Xiahan (陈夏晗)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Application filed by Shanghai Jiaotong University
Priority to CN202010098682.XA
Publication of CN111292230A
Application granted
Publication of CN111292230B
Status: Active

Classifications

    • G06T3/06
    • G06N3/02 Neural networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • Y02A90/30 Assessment of water resources

Abstract

The invention provides a spiral transformation data amplification method, system, medium and equipment in deep learning, wherein the spiral transformation data amplification method in deep learning comprises the following steps: acquiring three-dimensional image data; performing spiral transformation on the three-dimensional image data to convert it into an original two-dimensional image; changing the mode of the spiral transformation of the three-dimensional image data to convert it into an amplified two-dimensional image; and integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set, with different original images divided into a training set and a test set according to the specified requirements, so that the two-dimensional image set belonging to the training set is used for constructing a training model and the two-dimensional image set belonging to the test set is used for evaluating the training model. The invention preserves, to a certain extent, the correlation of features such as texture in the three-dimensional space; for one sample, the two-dimensional image obtained by spiral transformation contains more comprehensive and complete three-dimensional information than a two-dimensional image obtained from one tangent plane.

Description

Spiral transformation data amplification method, system, medium and equipment in deep learning
Technical Field
The invention belongs to the technical field of image data processing, relates to an image data transformation method, and in particular relates to a spiral transformation data amplification method, system, medium and equipment in deep learning.
Background
In the prior art, the convolutional neural network has become one of the core algorithms in the field of image recognition, and performs stably when the learning data are sufficient. For general large-scale image classification problems, convolutional neural networks can be used to construct hierarchical classifiers, or to extract discriminative image features in fine-grained recognition for learning by other classifiers. For the latter, features can be extracted by manually feeding different parts of the image into the convolutional neural network, or extracted automatically by the network itself. However, when three-dimensional data are processed directly with a three-dimensional convolutional neural network, a large amount of computing resources are occupied, so processing two-dimensional data is more feasible. Most two-dimensional convolutional neural networks use cross-sectional slices as network inputs, which contain only the two-dimensional information of one slice. However, the layers of a three-dimensional target region have strong spatial correlation, and a simple two-dimensional section ignores the correlation between layers. Meanwhile, the viewing angle of a cross section is single: it cannot comprehensively represent the image characteristics of other viewing angles, and the three-dimensional texture features are not fully expressed.
Furthermore, the most commonly used data amplification methods are geometric transformations of the image, such as horizontal flipping of a two-dimensional image, scaling within a small range of multiples (e.g., 0.8-1.15), rotation, and so on. These methods increase the amount of data to some extent, but the transformed results all come from the original data. For example, horizontal flipping changes only the viewing angle of the two-dimensional image and hardly changes the information content of the data set; the data before and after amplification are very similar, which limits the predictive power of the model.
Therefore, providing a spiral transformation data amplification method, system, medium and equipment in deep learning, so as to overcome the defects of the prior art in which a single two-dimensional image cannot retain more three-dimensional image information and effective dimension reduction cannot be realized, is a technical problem to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the present invention aims to provide a method, system, medium and equipment for amplifying spiral transformation data in deep learning, which are used for solving the problems in the prior art that a single two-dimensional image cannot retain more three-dimensional image information and cannot realize effective dimension reduction.
To achieve the above and other related objects, an aspect of the present invention provides a spiral transform data amplification method in deep learning, including: acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter; performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image; changing the mode of the spiral transformation of the three-dimensional image data to convert into an amplified two-dimensional image; and integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
In an embodiment of the invention, the three-dimensional image data comprises a magnetic resonance image showing the position of the target region of interest.
In an embodiment of the present invention, the step of performing spiral transformation on the three-dimensional image data to convert it into an original two-dimensional image includes: selecting a transformation reference point in the target region of interest as the spiral transformation midpoint; determining the spiral transformation maximum radius from the maximum distance from the spiral transformation midpoint to the edge of the target region of interest; and generating a spiral line by combining a spiral transformation radius, a transformation angle and the spiral transformation midpoint, wherein the spiral transformation radius is the distance from the spiral transformation midpoint to any point within the edge of the target region of interest and lies within the range determined by the spiral transformation maximum radius.
In one embodiment of the present invention, the transformation angle includes an azimuth angle and an elevation angle, and the step of generating a spiral by combining a spiral transformation radius, a transformation angle, and a spiral transformation midpoint includes: constructing a conversion relation between the azimuth angle and the elevation angle; and generating a spiral line by combining the conversion relation and the spiral transformation radius.
In an embodiment of the present invention, the step of constructing the conversion relation between the azimuth angle and the elevation angle includes: constructing the conversion relation by uniformly changing the azimuth angle and the elevation angle in a value range; or the conversion relation is constructed by making the surface density and the bulk density of the sampling points equal; or constructing the conversion relation through a specified preset sampling point distribution rule.
In an embodiment of the present invention, after the step of generating a spiral line by combining a spiral transformation radius, a transformation angle, and a spiral transformation midpoint, the step of performing a spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image further includes: correspondingly determining position coordinates of all points on the spiral line in the three-dimensional image data; and calculating gray values of all points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain a two-dimensional image expanded by spiral transformation.
In one embodiment of the present invention, the step of changing the mode of the spiral transformation of the three-dimensional image data to convert into the amplified two-dimensional image includes: changing the origin position of the coordinate system in the three-dimensional data and then performing the spiral transformation; changing the angle and direction of the positive directions of the coordinate axes relative to the three-dimensional data and then performing the spiral transformation; horizontally flipping the three-dimensional image data and then performing the spiral transformation; vertically flipping the three-dimensional image data and then performing the spiral transformation; enlarging, reducing or stretching the three-dimensional image data and then performing the spiral transformation; and changing the color saturation, contrast and brightness of the three-dimensional image data and then performing the spiral transformation.
In another aspect, the present invention provides a spiral transformation data amplification system in deep learning, including: the data acquisition module is used for acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter; the first transformation module is used for performing spiral transformation on the three-dimensional image data so as to convert the three-dimensional image data into an original two-dimensional image; the second transformation module is used for changing the spiral transformation mode of the three-dimensional image data so as to convert the three-dimensional image data into an amplified two-dimensional image; and the data integration module is used for integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
In yet another aspect, the present invention provides a medium having stored thereon a computer program which, when executed by a processor, implements the spiral transform data amplification method in deep learning.
In a final aspect the invention provides an apparatus comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the device executes the spiral transformation data amplification method in deep learning.
As described above, the spiral transformation data amplification method, system, medium and equipment in deep learning have the following beneficial effects:
when a two-dimensional image is generated, the data set obtained by spiral transformation is widely distributed, that is, it contains more comprehensive three-dimensional information. With the data amplification mode of spiral transformation, on the one hand, 3D information can be retained in a single 2D image; on the other hand, each time data amplification is performed, different two-dimensional image information can be obtained simply by changing the angle of a coordinate axis of the spiral transformation, so the amplified data differ each time and the amplified samples contain more information. The spiral transformation thus provides a very effective data amplification method.
Drawings
FIG. 1 is a diagram showing an exemplary data set of the spiral transform data amplification method in deep learning according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of data transformation in an embodiment of the spiral transformation data amplification method in deep learning according to the present invention.
FIG. 3 is a schematic flow chart of a spiral transform data amplification method in deep learning according to an embodiment of the invention.
FIG. 4 is a flowchart of a spiral transform in one embodiment of the method for enhancing spiral transform data in deep learning according to the present invention.
FIG. 5 is a flow chart of spiral line generation in an embodiment of the spiral transformation data amplification method in deep learning according to the present invention.
FIG. 6 is a schematic diagram showing the construction of a coordinate system in an embodiment of the method for amplifying spiral transformation data in deep learning according to the present invention.
FIG. 7 is a simulation diagram of data amplification by spiral transformation in an embodiment of the spiral transformation data amplification method in deep learning according to the present invention.
FIG. 8A is a graph showing the result of the spiral transform data amplification method in deep learning according to the present invention in one embodiment compared with another data amplification effect.
FIG. 8B is a schematic diagram showing the data distribution of the spiral transform data amplification method in deep learning according to an embodiment of the invention.
FIG. 9 is a schematic diagram showing the structure of the spiral transformation data amplification system in deep learning according to an embodiment of the present invention.
Description of element reference numerals
9. Spiral transformation data amplification system in deep learning
91. Data acquisition module
92. First conversion module
93. Second conversion module
94. Data integration module
Steps S31 to S34
Steps S321 to S325
Steps S323A to S323B
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure of this specification, which describes the embodiments of the present invention with reference to specific examples. The invention may also be practiced or applied through other different specific embodiments, and the details of this specification may be modified or varied on the basis of different viewpoints and applications without departing from the spirit of the present invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other provided there is no conflict.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention in a schematic way; the drawings show only the components related to the present invention and are not drawn according to the number, shape and size of the components in actual implementation. In actual implementation, the form, quantity and proportion of the components may be changed arbitrarily, and the component layout may also be more complicated.
The technical principles of the spiral transformation data amplification method, system, medium and equipment in deep learning are as follows: acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter; performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image; changing the mode of the spiral transformation of the three-dimensional image data to convert into an amplified two-dimensional image; and integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
Example 1
The embodiment provides a spiral transformation data amplification method in deep learning, which comprises the following steps:
acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image;
changing the mode of the spiral transformation of the three-dimensional image to convert into an amplified two-dimensional image;
and integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
The spiral transformation data amplification method in deep learning provided by this embodiment will be described in detail below with reference to the drawings.
Referring to fig. 1, an exemplary diagram of a data set of the spiral transformation data amplification method in deep learning according to an embodiment of the invention is shown. Pancreatic cancer is small in size and difficult to segment automatically, and pancreatic tumors are closely connected with the surrounding tissues and show similar intensity to those tissues, so pancreatic cancer is difficult to identify. Referring to fig. 2, a schematic diagram of data transformation in an embodiment of the spiral transformation data amplification method in deep learning according to the present invention is shown. In fig. 2, the embodiment uses the magnetic resonance images of pancreatic cancer patients as three-dimensional image data, and performs spiral transformation and data amplification on the magnetic resonance images of the pancreatic cancer based on a deep learning method to provide a two-dimensional image dataset containing more three-dimensional information for predicting pancreatic cancer, wherein the images after spiral transformation are X = [X_1, X_2, …, X_n].
Referring to fig. 3, a schematic flow chart of a spiral transformation data amplification method in deep learning according to an embodiment of the invention is shown. The spiral transformation data amplification method in the deep learning specifically comprises the following steps:
s31, acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter.
In this embodiment, the three-dimensional image data includes magnetic resonance imaging, CT and other three-dimensional imaging that presents a region of interest (ROI, Region of Interest).
Specifically, pancreatic cancer data are acquired from magnetic resonance images of patients suffering from pancreatic cancer, and the acquired data need to include image information of a plurality of imaging parameters. In this embodiment, MRI data of 64 patients in three modalities, namely ADC (Apparent Diffusion Coefficient imaging), DWI (Diffusion Weighted Imaging) and T2 (transverse relaxation time weighted imaging), are adopted; the data of the three modalities are image data corresponding to three different imaging parameters. At the same time, the location of the tumor has been determined in the image data. In this example, the data set came from pancreatic cancer patients who underwent surgery at Ruijin Hospital from January 2016 to December 2016, each of whom had pathological examination results of tumors, i.e., mutations in TP53 (a tumor suppressor gene) and KRAS (a proto-oncogene).
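For orientation, the following is a minimal sketch of how such multi-parameter data could be organized per patient. The field names, folder layout and the use of nibabel as a NIfTI reader are assumptions for illustration; the patent does not prescribe any storage format.

```python
from dataclasses import dataclass
import numpy as np
import nibabel as nib  # assumed reader for medical image volumes

@dataclass
class PatientCase:
    adc: np.ndarray         # ADC volume (apparent diffusion coefficient)
    dwi: np.ndarray         # DWI volume (diffusion weighted imaging)
    t2: np.ndarray          # T2-weighted volume
    tumor_mask: np.ndarray  # voxel mask marking the already-determined tumor location
    tp53_mutated: bool      # pathological examination result

def load_case(folder: str, tp53: bool) -> PatientCase:
    """Load the three modalities and tumor mask of one patient (hypothetical file names)."""
    read = lambda name: np.asarray(nib.load(f"{folder}/{name}.nii.gz").dataobj)
    return PatientCase(read("adc"), read("dwi"), read("t2"), read("mask"), tp53)
```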
S32, performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image.
Referring to fig. 4, a spiral transform flow chart of an embodiment of the spiral transform data amplification method in deep learning according to the present invention is shown. As shown in fig. 4, in the present embodiment, S32 includes:
s321, selecting a transformation reference point in the target region of interest as a spiral transformation midpoint. The transformation reference point is a point within the range of the target region of interest and is used as a spiral transformation midpoint.
Specifically, the target region of interest is a tumor in a magnetic resonance image, and a point in the tumor in the original three-dimensional MRI (Magnetic Resonance Imaging) is selected; for example, the center point of the tumor is taken as the midpoint O of the spiral transformation.
S322, determining the maximum radius of the spiral transformation according to the maximum distance from the midpoint of the spiral transformation to the edge of the target region of interest.
In particular, the maximum distance from the tumor margin to the point O determines the maximum radius R of the spiral transformation.
S323, generating a spiral line by combining the spiral transformation radius, the transformation angle and the spiral transformation midpoint. In this embodiment, the spiral transformation radius is the distance from the spiral transformation midpoint to any point within the edge of the target region of interest, which lies within the range determined by the spiral transformation maximum radius.
Specifically, the distance from any point within the tumor edge to the point O is denoted r, and then 0 ≤ r ≤ R.
Referring to fig. 5, a flow chart of spiral line generation in an embodiment of the spiral transformation data amplification method in deep learning according to the present invention is shown. As shown in fig. 5, in the present embodiment, the transformation angle includes an azimuth angle and an elevation angle, and S323 includes:
S323A, constructing a conversion relation between the azimuth angle and the elevation angle.
In this embodiment, the conversion relationship is constructed by uniformly changing the azimuth angle and the elevation angle within a value range; or the conversion relation is constructed by making the surface density and the bulk density of the sampling points equal; or constructing the conversion relation through a specified preset sampling point distribution rule.
Specifically, the key to the spiral transformation is to construct the relationship between the two angles Θ and ψ. Depending on the requirements, different relations can be constructed. For example, to make the sampling points evenly distributed at the two poles and the equator of the sphere, the radian between adjacent sampling points is kept constant. Suppose the circle on the equator is provided with 2N sampling points; the sampling radian, defined as the distance d between two adjacent sampling points on the equator, is calculated by formula (1):

d = 2πr / (2N) = πr / N      (1)

where d represents the sampling radian, defined as the distance between two adjacent sampling points on the equator, r represents the distance from the tumor edge to the spiral transformation midpoint O, and 2N represents the number of sampling points on the equator.
Further, according to the preset sampling point distribution rule, when the sampling points are set, the number of sampling points on the horizontal circle corresponding to the angle Θ is expressed as

n(Θ) = 2N·sin Θ

Setting Θ to be divided into N angles within its value range, if N is large enough for the specified radius, the total number of sampling points can be obtained through the integral calculation of formula (2):

S = (N/π) ∫₀^π 2N·sin Θ dΘ = 4N²/π      (2)

Thus, the surface of a sphere of the given radius is sampled with a total of approximately 4N²/π points.
Further, knowing the coordinates of a point A, the radian between two adjacent sampling points can be expressed as ψ*·sin Θ, where ψ* is the difference in the azimuth angle ψ (the angle with the positive x-axis direction) between the two adjacent points. Setting this radian equal to the sampling radian then establishes the conversion relation between Θ and ψ:

ψ*·sin Θ = π/N, i.e., ψ* = π / (N·sin Θ)

For example, in targeted sampling, Θ and ψ are made to satisfy a relation of this form at every sampling point. In this embodiment the maximum radius of the spiral transformation is 60 and N is 20, and finally a two-dimensional image of 120 × 254 is obtained.
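As a concrete illustration of steps S323A to S323B, the sketch below generates a (Θ, ψ) sequence along a spherical spiral with roughly constant arc spacing between adjacent points; it uses the standard equal-area spiral-point construction, which realizes the "surface density equal" option described above. The function name and the exact stepping scheme are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def spiral_angles(num_points: int) -> np.ndarray:
    """Return (theta, psi) pairs along a spherical spiral whose neighbouring
    sampling points are separated by a roughly constant radian, so that the
    points are evenly distributed at the poles and the equator alike."""
    d = np.sqrt(4.0 * np.pi / num_points)   # average spacing on the unit sphere
    angles = np.empty((num_points, 2))
    psi = 0.0
    for k in range(num_points):
        h = 1.0 - 2.0 * (k + 0.5) / num_points   # cos(theta) descends uniformly,
        theta = np.arccos(h)                     # giving equal area per latitude band
        psi = (psi + d / np.sin(theta)) % (2.0 * np.pi)  # psi* = d / sin(theta)
        angles[k] = (theta, psi)
    return angles

# For example, angles for one sphere of the embodiment (N = 20, ~4N^2/pi points):
sphere_angles = spiral_angles(509)
```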
S323B, generating a spiral line by combining the conversion relation and the spiral conversion radius.
Specifically, referring to fig. 6, a schematic diagram of the coordinate system construction in an embodiment of the spiral transformation data amplification method in deep learning according to the present invention is shown. In three-dimensional space, a point A on the spiral is determined by the azimuth angle ψ, the angle Θ and the distance r to the origin. According to the conversion relation of the coordinate system, the coordinates of point A are expressed by formula (3):

x = r·sin Θ·cos ψ
y = r·sin Θ·sin ψ      (3)
z = r·cos Θ
S324, position coordinates of all points on the spiral line are correspondingly determined in the three-dimensional image data.
S325, calculating gray values of all points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain a two-dimensional image expanded by spiral transformation.
Specifically, referring to fig. 7, a simulation diagram of data amplification by spiral transformation in an embodiment of the method according to the present invention is shown. The coordinates of each point of the spiral in three-dimensional space are then mapped to positions in the original matrix, and the gray value of each point is determined using tri-linear interpolation. Finally, the gray values are filled into a two-dimensional matrix to obtain the two-dimensional image expanded by the spiral transformation.
It should be noted that, the method of tri-linear interpolation is only one implementation of determining the gray value in this embodiment, and other methods for calculating the gray value besides tri-linear interpolation are also within the scope of the present invention.
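A minimal sketch of steps S324 to S325 follows: sampling a 3D volume at precomputed spiral coordinates with trilinear interpolation and filling the values into a 2D matrix. It assumes the coordinates are supplied as a (rows, cols, 3) array in voxel units; scipy.ndimage.map_coordinates with order=1 performs the trilinear interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unroll_spiral(volume: np.ndarray, coords: np.ndarray) -> np.ndarray:
    """Sample `volume` (axes Z, Y, X) at spiral `coords` of shape (rows, cols, 3)
    and return the unrolled 2D image.

    order=1 selects trilinear interpolation; as noted above, other gray-value
    interpolation schemes would serve equally well here.
    """
    rows, cols, _ = coords.shape
    flat = coords.reshape(-1, 3).T                    # one coordinate array per axis
    gray = map_coordinates(volume, flat, order=1, mode="nearest")
    return gray.reshape(rows, cols)                   # fill into the 2D matrix
```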
S33, changing the mode of the spiral transformation of the three-dimensional image data to convert into an amplified two-dimensional image.
In this embodiment, the spiral transformation is performed after changing the origin position of the coordinate system in the three-dimensional data; and/or after changing the angle and direction of the positive directions of the coordinate axes relative to the three-dimensional data, for example rotating by different angles about a coordinate axis (the x-axis, y-axis or z-axis); and/or after horizontally flipping the three-dimensional image data; and/or after vertically flipping the three-dimensional image data; and/or after enlarging, reducing or stretching the three-dimensional image data; and/or after changing the color saturation, contrast and brightness of the three-dimensional image data.
Specifically, the spiral transformation method is applied to data amplification and effective dimension reduction of three-dimensional data. The purpose of data amplification is to increase the diversity of the sample data and counter network overfitting. There are three ways of data amplification in deep learning: amplifying the training set but not the test set; amplifying the training set and the test set separately; or mixing the amplified data with the original data and randomly dividing them into a training set and a test set.
The result of the spiral transformation depends on the constructed rectangular spatial coordinate system and the parameter settings of the spiral transformation. With the same parameters, different spiral transformation results can be obtained by constructing different coordinate systems for the same three-dimensional data. To facilitate comparison of the differences between the transformation results under two coordinate systems, the same coordinate origin and positive z-axis direction are kept, and only the positive direction of the x-axis is changed. Assuming that the positive direction of the x-axis changes by Δψ, the coordinates A′ of the corresponding point A can be expressed as:

x′ = r·sin Θ·cos(ψ + Δψ)
y′ = r·sin Θ·sin(ψ + Δψ)      (4)
z′ = r·cos Θ

If A′(x′, y′, z′) = A(x, y, z), combining formula (3) and formula (4) gives:

sin Θ·cos ψ = sin Θ·cos(ψ + Δψ)
sin Θ·sin ψ = sin Θ·sin(ψ + Δψ)      (5)

which simplifies to formula (6):

cos ψ = cos ψ·cos Δψ − sin ψ·sin Δψ
sin ψ = sin ψ·cos Δψ + cos ψ·sin Δψ      (6)

Solving this system of equations gives cos Δψ = 1, so A′(x′, y′, z′) = A(x, y, z) if and only if Δψ = 2πk. This shows that different spiral transformation results can be obtained by changing the angle of the positive x-axis direction in the XOY plane of the spatial coordinate system.
Similarly, in addition to changing the positive-direction angle of a coordinate axis, different spiral transformation results can be obtained for the same three-dimensional data with other transformation modes, such as changing the origin position of the coordinate system, performing geometric transformation on the original data, changing the parameters of the spiral transformation (including the number of turns, the sampling interval, etc.), horizontal and vertical flipping, scaling within a small range of multiples (0.8-1.15 times), and so on; a sketch of such an amplification loop follows. The transformed two-dimensional image is a part of the original three-dimensional image, and the result of the spiral transformation is equivalent to a subset of the original data; therefore, for the same three-dimensional original data, the amplified data obtained under different coordinate systems have a certain complementary relationship.
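The following minimal sketch re-expands one volume under varied coordinate frames (rotation of the x-axis direction by Δψ about z, combined with flips). It reuses unroll_spiral from the sketch above; build_spiral_coords is a hypothetical helper that turns the spiral angles into voxel coordinates around the chosen midpoint, and the particular schedule of angles and flips is an assumption.

```python
from scipy.ndimage import rotate

def augment_by_spiral(volume, center, n_variants=27):
    """Spiral-transformed 2D images of one 3D volume under varied coordinate
    frames; scaling and intensity changes listed above could be added likewise."""
    images = []
    for i in range(n_variants):
        v = rotate(volume, 360.0 * i / n_variants, axes=(1, 2),
                   reshape=False, order=1)   # change x-axis direction in the XOY plane
        if i % 2 == 1:
            v = v[:, :, ::-1]                # horizontal flip
        if i % 3 == 2:
            v = v[:, ::-1, :]                # vertical flip
        coords = build_spiral_coords(v.shape, center)  # hypothetical helper: (rows, cols, 3) voxel coords
        images.append(unroll_spiral(v, coords))        # sketched at step S325 above
    return images
```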
TP53 gene prediction for pancreatic cancer is a very challenging task: the proportion of the tumor area is small and the recognition difficulty is high, and because multi-modal data are difficult to acquire, the sample size is insufficient, which further increases the difficulty of the task. This embodiment alleviates the small-sample problem to some extent. Considering that the conventional tangent plane image loses a great deal of spatial information, and that direct use of three-dimensional convolution would add a great deal of computation to a three-modality network whose own parameters are already large, a novel spiral transformation method is provided, and the image is input into a convolutional neural network for operation after spiral transformation. Compared with 3D models, the computational resources and model parameters are reduced.
In the test process, the original image is amplified to 27 times both by the spiral transformation and by the geometric amplification mode of the prior art, and the effect of data amplification is evaluated using normalized mutual information. The geometric amplification performs geometric-transformation data amplification, such as horizontal and vertical flipping, on the 2D section with the largest tumor area. Referring to fig. 8A, a comparison of the effect of the spiral transformation data amplification method in deep learning according to an embodiment of the invention with the other data amplification effect is shown. The results of the two methods for one case are shown in fig. 8A: (a) spiral transformation data amplification and (b) geometric transformation data amplification of the 2D tangent plane with the largest tumor area, where the top left corner is the original image and the other three are amplified images.
In order to compare the similarity of the images before and after amplification under the two methods, the normalized mutual information between each of the 26 amplified images in fig. 8A(a) and fig. 8A(b) and the original image is calculated; the sum over the spiral transformation group is 32.8838, while that over the tangent plane group is 38.3224. Normalized mutual information is one way to measure the similarity of two images: it measures how much one image contains the other, and the larger its value, the higher the similarity of the two images. It can be computed from the information entropy and joint information entropy of the images. In addition, a t-test was performed on the two sets of data, giving p = 6.4920 × 10⁻⁷, far below the 0.01 confidence level, which indicates that the normalized mutual information of the two groups differs significantly, i.e., the images amplified by the spiral transformation are less similar to the original.
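Normalized mutual information can be computed from the information entropy and joint information entropy, as stated above. A minimal sketch follows; the choice of normalization, NMI = (H(A) + H(B)) / H(A, B), and the histogram bin count are assumptions, since the patent does not spell out which variant it uses.

```python
import numpy as np

def normalized_mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """NMI of two equal-size gray images from a joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginal distributions

    def entropy(p: np.ndarray) -> float:
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```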
The data obtained after amplification by the spiral transformation and by the geometric transformation of the prior art are list-managed in combination with fig. 8A and compiled in table form; see the data amplification comparison table in Table 1.
Table 1: data amplification comparison table
Spiral transformation Geometric transformation
Normalizing mutual information 32.8838 38.3224
Degree of discretization 0.2709 0.0927
Euclidean distance 14.5826 7.7633
In order to observe the effect of data amplification intuitively, dimension-reduction visualization is performed on the original data and the data obtained after two amplifications (horizontal flipping and vertical flipping); please refer to fig. 8B, which shows a schematic diagram of the data distribution of the spiral transformation data amplification method in an embodiment of the present invention. As shown in fig. 8B, the first graph in (a) shows the degree of dispersion of the data distribution under geometric transformation, the second graph in (a) is an enlargement of the part of the first graph where the data are concentrated, and (b) shows the degree of dispersion of the data distribution under spiral transformation data amplification. After normalization, the degree of dispersion S of the two-dimensional discrete points can be compared: as can be seen from Table 1, the degree of dispersion of the prior-art geometric transformation mode is 0.0927, while that of the spiral transformation in one embodiment of the present invention is 0.2709, significantly higher than the prior art. Furthermore, the Euclidean distance d from each amplified point to the original point was calculated and summed, giving a distance of 7.7633 for the geometric transformation and 14.5826 for the spiral transformation. In summary, the spiral transformation mode has lower normalized mutual information, a higher degree of dispersion and a larger Euclidean distance, so the similarity between data is smaller, the distribution range is wider, and the effect of data amplification is better.
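The dispersion and distance figures of Table 1 can be reproduced along the following lines. The patent does not spell out the exact dispersion statistic, so this sketch assumes the mean distance of the min-max normalized 2D points from their centroid, which is one plausible reading:

```python
import numpy as np

def dispersion_and_distance(points_2d: np.ndarray, original_idx: int = 0):
    """points_2d: (n, 2) dimension-reduced samples, the original one first.

    Returns the assumed dispersion statistic of the normalized points and the
    summed Euclidean distance from each amplified point to the original point."""
    span = points_2d.max(axis=0) - points_2d.min(axis=0)
    p = (points_2d - points_2d.min(axis=0)) / span            # min-max normalization
    dispersion = np.linalg.norm(p - p.mean(axis=0), axis=1).mean()
    dists = np.linalg.norm(points_2d - points_2d[original_idx], axis=1)
    return dispersion, np.delete(dists, original_idx).sum()
```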
The above results demonstrate that the data set obtained by the spiral transformation is still composed of two-dimensional images and is widely distributed, i.e., contains more comprehensive three-dimensional information. On the one hand, this data amplification mode ensures that 3D information is retained in a single 2D image, reflecting the spatial distribution characteristics and spatial texture relations of the tumor area; on the other hand, each time data amplification is performed, different tumor information can be obtained by changing the angle of a coordinate axis of the spiral transformation, so the amplified data differ each time, the amplified samples contain more information, and the spiral transformation serves as a very effective data amplification method.
In addition, when spiral transformation and data amplification are applied to deep learning, the model-driven loss function adds the constraint of prior knowledge, which also helps alleviate overfitting. The backbone network is initialized with ImageNet pre-trained parameters; combined with the idea of transfer learning, the network parameters thus have a better initial distribution, so the lowest-level features (such as angles and edges) can be extracted rapidly under small-sample conditions, accelerating convergence and reducing overfitting.
S34, carrying out data integration on the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
In this embodiment, data integration is a method that collects, sorts and cleans data from different data sources and, after conversion, loads them into a new data source, thereby providing a unified data view for data consumers. In this embodiment, data integration specifically refers to sharing or merging the amplified two-dimensional images and the original two-dimensional images to form a two-dimensional image set.
It should be noted that, in this embodiment, the training set and the test set are each subjected to data amplification separately, so that corresponding two-dimensional image sets are obtained respectively; that is, the training set and the test set in this example are divided by patient, meaning the two parts of data come from different original images, with no cross-mixing. Different original images are divided into a training set and a test set according to the specified requirements, so that the two-dimensional image set belonging to the training set is used for constructing the training model and the two-dimensional image set belonging to the test set is used for evaluating the training model.
Specifically, in the data amplification process of this embodiment, the parameters of the spiral transformation and the origin and positive directions of the rectangular spatial coordinate system are fixed, and geometric transformation is performed on the original data. The three-dimensional data are rotated by different angles about the z-axis, flipped horizontally, flipped vertically and so on, and then converted into a two-dimensional space by the spiral transformation, so that the data are amplified to 27 times the original. The data set is then divided into five parts according to the ratio of positive and negative samples, four of which serve as the training set and one as the test set.
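A minimal sketch of the patient-level five-part division that preserves the ratio of positive and negative samples; using StratifiedKFold from scikit-learn is an assumption about tooling, not part of the patent:

```python
from sklearn.model_selection import StratifiedKFold

def split_five_parts(patient_ids, labels, seed=0):
    """Divide patients into five folds that keep the positive/negative ratio;
    four folds form the training set and the remaining one the test set."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    folds = [test_idx for _, test_idx in skf.split(patient_ids, labels)]
    train_idx = [i for fold in folds[:4] for i in fold]
    test_idx = list(folds[4])
    return train_idx, test_idx
```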
Based on the above characteristics, data can be amplified using the spiral transformation method, increasing the information content of training samples in deep learning. Besides the pancreatic cancer data set, the spiral transformation data amplification method is also applicable to data sets in which the target region is approximately spherical, providing a new data amplification idea for solving the problem of insufficient data in deep learning.
The pancreatic cancer TP53 gene prediction effects of different models are list-managed and compiled in table form; see the multi-modal model effect comparison table in Table 2. Accuracy, AUC (area under the receiver operating characteristic curve), Recall, Precision and F1 Score, which are widely used as evaluation indexes in the classification field, are adopted. Each index is defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
F1 Score = 2 × Precision × Recall / (Precision + Recall)

where TP, TN, FP and FN denote the numbers of true positives, true negatives, false positives and false negatives, respectively.
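A minimal sketch computing the four count-based indexes exactly as defined above (AUC is omitted because it requires the ranked prediction scores rather than the confusion counts):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, recall, precision and F1 score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"Accuracy": accuracy, "Recall": recall,
            "Precision": precision, "F1 Score": f1}
```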
Rows 3 and 5 of Table 2 are hybrid-driven multi-modal fusion models using the spiral transformation. The two-dimensional tangent plane entries are two-dimensional images obtained from 27 different tangent plane angles in three-dimensional space, and the three-dimensional entries input the three-dimensional data directly into a 3D convolutional neural network framework. Since the 3D convolutional network framework cannot incorporate a bilinear pooling structure, those experiments were performed in a framework without bilinear pooling. Under the same network framework, the model is superior to the 3D model in Accuracy, AUC, Precision and F1 Score, and compared with the two-dimensional section, the multi-modal prediction model based on spiral transformation performs better on all five indexes. The spiral-transformed image performs better than the other input forms in the pancreatic cancer gene prediction task, because, to a certain extent, it carries more comprehensive information than a 2D section image while also expressing the spatial relationships of the three-dimensional image. The spiral transformation method not only significantly improves the prediction performance on the multi-modal pancreatic cancer data set, but also has reference significance for the processing of other data sets (especially data sets whose target object is approximately spherical).
Table 2: multi-modal model effect comparison table
[Table 2 appears as an image in the original publication and is not reproduced here.]
The present embodiment also provides a computer storage medium on which a computer program is stored; when the program is executed by a processor, the spiral transformation data amplification method in deep learning is implemented.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the above method embodiments may be completed by hardware related to a computer program. The aforementioned computer program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned computer-readable storage medium includes various computer storage media that can store program code, such as ROM, RAM, magnetic disks or optical disks.
According to the spiral transformation data amplification method in the deep learning, three-dimensional information is fully utilized, a three-dimensional target area is spirally unfolded to a two-dimensional plane, correlation between original adjacent pixels is reserved in the transformation process, and then a transformed image is used for predicting gene mutation.
Example two
The embodiment provides a spiral transformation data amplification system in deep learning, including:
The data acquisition module is used for acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
the first transformation module is used for performing spiral transformation on the three-dimensional image data so as to convert the three-dimensional image data into an original two-dimensional image;
the second transformation module is used for changing the spiral transformation mode of the three-dimensional image so as to convert the three-dimensional image into an amplified two-dimensional image;
and the data integration module is used for integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
The spiral transformation data amplification system in deep learning provided by this embodiment will be described in detail below with reference to the drawings. It should be understood that the division of the modules of the following system is merely a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity or physically separated. The modules may all be implemented in the form of software called by a processing element, or all in the form of hardware, or partly in the form of software called by a processing element and partly in the form of hardware. For example, a module may be a separately established processing element, or may be integrated in a chip of the system described below. In addition, a module may be stored in the memory of the following system in the form of program code, and its functions may be called and executed by a processing element of the system. The implementation of the other modules is similar. All or part of the modules can be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method or each module below may be completed by an integrated logic circuit of hardware in a processor element or by instructions in software form.
The following modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), one or more digital signal processors (Digital Signal Processor, DSP for short), one or more field programmable gate arrays (Field Programmable Gate Array, FPGA for short), and the like. When a module is implemented in the form of a processing element calling program code, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU for short) or another processor that can call program code. These modules may be integrated together and implemented in the form of a system-on-a-chip (System-on-a-Chip, SOC for short).
Referring to fig. 9, a schematic diagram of a spiral transformation data amplification system in deep learning according to an embodiment of the invention is shown. As shown in fig. 9, the spiral transformation data amplification system 9 in deep learning includes: a data acquisition module 91, a first transformation module 92, a second transformation module 93 and a data integration module 94.
The data acquisition module 91 is configured to acquire three-dimensional image data, where the three-dimensional image data includes image data corresponding to at least one imaging parameter.
In this embodiment, the three-dimensional image data includes magnetic resonance imaging, CT and other three-dimensional imaging that presents a region of interest (ROI, Region of Interest).
The first transformation module 92 is configured to perform a spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image.
In this embodiment, the first transformation module 92 is specifically configured to select a transformation reference point in the target area of interest as a spiral transformation midpoint; determining a spiral transition maximum radius from a maximum distance from the spiral transition midpoint to the target region of interest edge; and generating a spiral line by combining a spiral transformation radius, a transformation angle and the spiral transformation midpoint, wherein the spiral transformation radius is the distance from the spiral transformation midpoint to any point in the edge of the target region of interest and is within the range determined by the maximum radius of the spiral transformation. The transformation angle comprises an azimuth angle and an elevation angle, and a transformation relation of the azimuth angle and the elevation angle is constructed; and generating a spiral line by combining the conversion relation and the spiral transformation radius. Correspondingly determining position coordinates of all points on the spiral line in the three-dimensional image data; and calculating gray values of all points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain a two-dimensional image expanded by spiral transformation.
Specifically, the first transformation module 92 is configured to construct the transformation relationship by uniformly changing the azimuth angle and the elevation angle within a value range; or the conversion relation is constructed by making the surface density and the bulk density of the sampling points equal; or constructing the conversion relation through a specified preset sampling point distribution rule.
The second transformation module 93 is used for changing the mode of the spiral transformation of the three-dimensional image so as to convert it into an amplified two-dimensional image.
In this embodiment, the second transformation module 93 is specifically configured to perform the spiral transformation after changing the origin position of the coordinate system in the three-dimensional data; and/or after changing the angle and direction of the positive directions of the coordinate axes relative to the three-dimensional data, for example rotating by different angles about a coordinate axis (the x-axis, y-axis or z-axis); and/or after horizontally flipping the three-dimensional image data; and/or after vertically flipping the three-dimensional image data; and/or after enlarging, reducing or stretching the three-dimensional image data; and/or after changing the color saturation, contrast and brightness of the three-dimensional image data.
The data integration module 94 is configured to integrate the original two-dimensional image and the amplified two-dimensional image into a two-dimensional image set.
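The four modules of fig. 9 map naturally onto a small class skeleton; the sketch below composes the functions sketched in Example 1 (load_case, build_spiral_coords, unroll_spiral, augment_by_spiral). The class and method names are assumptions for illustration, not the patent's reference code.

```python
class SpiralAugmentationSystem:
    """Modules 91-94: acquisition, first transformation, second transformation, integration."""

    def acquire(self, folders, labels):                 # data acquisition module 91
        return [load_case(f, y) for f, y in zip(folders, labels)]

    def first_transform(self, volume, center):          # first transformation module 92
        return unroll_spiral(volume, build_spiral_coords(volume.shape, center))

    def second_transform(self, volume, center, n=27):   # second transformation module 93
        return augment_by_spiral(volume, center, n)

    def integrate(self, originals, amplified_groups):   # data integration module 94
        return originals + [img for group in amplified_groups for img in group]
```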
In deep learning, the spiral transformation data amplification system preserves, to a certain extent, the correlation of features such as texture between originally adjacent pixels in the three-dimensional space; for one sample, the two-dimensional image obtained by spiral transformation contains more comprehensive and complete three-dimensional information than a two-dimensional image obtained from one tangent plane.
Example III
The present embodiment provides an apparatus including: a processor, a memory, a transceiver, a communication interface and/or a system bus. The memory and the communication interface are connected with the processor and the transceiver through the system bus and communicate with each other; the memory is used for storing a computer program, the communication interface is used for communicating with other equipment, and the processor and the transceiver are used for running the computer program so that the apparatus executes the steps of the spiral transformation data amplification method in deep learning.
The system bus mentioned above may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The system bus may be classified into an address bus, a data bus, a control bus, and the like. The communication interface is used for realizing communication between the database access device and other devices (such as a client, a read-write library and a read-only library). The memory may comprise random access memory (Random Access Memory, RAM) and may also comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP for short), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field programmable gate array (Field Programmable Gate Array, FPGA for short) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
The protection scope of the spiral transformation data amplification method in deep learning is not limited to the execution sequence of the steps listed in this embodiment; all schemes realized by adding, removing or replacing steps of the prior art according to the principles of the present invention are included in the protection scope of the present invention.
The invention also provides a spiral transformation data amplification system in deep learning, which can realize the spiral transformation data amplification method in deep learning of the invention; however, the device for realizing the spiral transformation data amplification method in deep learning includes, but is not limited to, the structure of the spiral transformation data amplification system in deep learning enumerated in this embodiment, and all structural deformations and replacements of the prior art made according to the principles of the invention are included in the protection scope of the invention. It should be noted that the spiral transformation data amplification method and system in deep learning are also applicable to content in other multimedia forms, such as video and social-feed messages (e.g., WeChat Moments), which are included in the protection scope of the present invention.
In summary, with the spiral transformation data amplification method, system, medium and equipment in deep learning of the invention, the data set obtained by spiral transformation is more widely distributed, that is, it contains more comprehensive three-dimensional information when two-dimensional images are generated. With the data amplification mode of spiral transformation, on the one hand, 3D information can be retained in a single 2D image; on the other hand, each time data amplification is performed, different two-dimensional image information can be obtained simply by changing the angle of a coordinate axis of the spiral transformation, so the amplified data differ each time, the amplified samples contain more information, and the spiral transformation provides a very effective data amplification method. The invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (7)

1. The spiral transformation data amplification method in the deep learning is characterized by comprising the following steps of:
acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image: selecting a transformation reference point in the target region of interest as a spiral transformation midpoint; determining a spiral transformation maximum radius from the maximum distance from the spiral transformation midpoint to the edge of the target region of interest; and generating a spiral line by combining a spiral transformation radius, a transformation angle and the spiral transformation midpoint, wherein the spiral transformation radius is the distance from the spiral transformation midpoint to any point within the edge of the target region of interest and lies within the range determined by the spiral transformation maximum radius; wherein the transformation angle comprises an azimuth angle and an elevation angle, and the step of generating a spiral line by combining the spiral transformation radius, the transformation angle and the spiral transformation midpoint comprises: constructing a conversion relation between the azimuth angle and the elevation angle; and generating the spiral line by combining the conversion relation and the spiral transformation radius;
changing the mode of the spiral transformation of the three-dimensional image data to convert it into an amplified two-dimensional image: changing the position of the origin of the coordinate system of the three-dimensional data and then performing the spiral transformation; changing the angle and direction of the positive coordinate axes relative to the three-dimensional data and then performing the spiral transformation; horizontally flipping the three-dimensional image data and then performing the spiral transformation; vertically flipping the three-dimensional image data and then performing the spiral transformation; enlarging, shrinking or stretching the three-dimensional image data and then performing the spiral transformation; and changing the color saturation, contrast and brightness of the three-dimensional image data and then performing the spiral transformation;
and integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
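By way of illustration only, the following sketch generates such a spiral line under the assumption of spherical coordinates with a linear conversion relation between the elevation and azimuth angles; the function name, point count and turn count are hypothetical and not recited in the claim:

    import numpy as np

    def spherical_spiral(midpoint, r_max, n_points=224 * 224, turns=64):
        # Elevation sweeps 0..pi while azimuth follows a linear conversion
        # relation (theta proportional to phi); the radius ramps up but
        # never exceeds the maximum radius r_max derived from the ROI edge.
        t = np.linspace(0.0, 1.0, n_points)
        phi = np.pi * t                      # elevation angle
        theta = turns * np.pi * t            # azimuth angle, tied to phi
        r = r_max * t                        # spiral transformation radius
        cx, cy, cz = midpoint
        x = cx + r * np.sin(phi) * np.cos(theta)
        y = cy + r * np.sin(phi) * np.sin(theta)
        z = cz + r * np.cos(phi)
        return np.stack([x, y, z])           # (3, n_points) spiral line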
2. The spiral transformation data amplification method in deep learning according to claim 1, wherein the three-dimensional image data comprises magnetic resonance images presenting a target region of interest.
3. The spiral transformation data amplification method in deep learning according to claim 1, wherein the step of constructing the conversion relation between the azimuth angle and the elevation angle comprises:
constructing the conversion relation by uniformly changing the azimuth angle and the elevation angle within their value ranges; or
constructing the conversion relation by making the surface density and the volume density of the sampling points equal; or
constructing the conversion relation according to a specified preset sampling-point distribution rule.
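As a non-authoritative illustration of the first two options, the sketch below contrasts a uniformly swept relation with an equal-surface-density construction; the golden-angle spacing is a standard way to equalize the surface density of points on a sphere and is an assumption of this reading, not something the claim specifies:

    import numpy as np

    def uniform_relation(n, turns=32):
        # Option (a): azimuth and elevation both change uniformly
        # (linearly) over their value ranges.
        phi = np.linspace(0.0, np.pi, n)           # elevation
        theta = np.linspace(0.0, turns * np.pi, n) # azimuth
        return theta, phi

    def equal_area_relation(n):
        # Option (b), one plausible reading: spacing cos(phi) uniformly
        # places the same number of sampling points on every equal-area
        # band of the sphere; azimuth advances by the golden angle.
        z = np.linspace(1.0, -1.0, n)
        phi = np.arccos(z)                          # elevation
        golden = np.pi * (3.0 - np.sqrt(5.0))       # golden angle
        theta = golden * np.arange(n)               # azimuth
        return theta, phi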
4. The spiral transformation data amplification method in deep learning according to claim 1, wherein after the step of generating a spiral line by combining the spiral transformation radius, the transformation angle and the spiral transformation midpoint, the step of performing spiral transformation on the three-dimensional image data to convert it into an original two-dimensional image further comprises:
correspondingly determining the position coordinates, in the three-dimensional image data, of all points on the spiral line;
and calculating the gray values of all points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain the two-dimensional image unfolded by the spiral transformation.
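A hedged sketch of these two steps follows: the spiral's position coordinates index into the volume, gray values are obtained by trilinear interpolation (one plausible choice; the claim does not fix the interpolation scheme), and the values fill a two-dimensional matrix row by row. The function name and output size are assumptions of this illustration:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def spiral_to_image(volume, spiral_coords, out_shape=(224, 224)):
        # spiral_coords has shape (3, H*W): the position coordinates of
        # the spiral points inside the volume. Gray values come from
        # trilinear interpolation (order=1) and fill the 2D matrix.
        gray = map_coordinates(volume.astype(np.float32), spiral_coords,
                               order=1, mode='nearest')
        return gray.reshape(out_shape)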
5. A spiral transformation data amplification system in deep learning, comprising:
the data acquisition module is used for acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
the first transformation module, used for performing spiral transformation on the three-dimensional image data to convert it into an original two-dimensional image: selecting a transformation reference point in the target region of interest as a spiral transformation midpoint; determining a spiral transformation maximum radius from the maximum distance from the spiral transformation midpoint to the edge of the target region of interest; and generating a spiral line by combining a spiral transformation radius, a transformation angle and the spiral transformation midpoint, wherein the spiral transformation radius is the distance from the spiral transformation midpoint to any point within the edge of the target region of interest and lies within the range determined by the spiral transformation maximum radius; wherein the transformation angle comprises an azimuth angle and an elevation angle, and generating the spiral line by combining the spiral transformation radius, the transformation angle and the spiral transformation midpoint comprises: constructing a conversion relation between the azimuth angle and the elevation angle, and generating the spiral line by combining the conversion relation and the spiral transformation radius;
the second transformation module is used for changing the spiral transformation mode of the three-dimensional image data so as to convert the three-dimensional image data into an amplified two-dimensional image; changing the original point position of the coordinate system in the three-dimensional data in the spiral transformation to perform the spiral transformation; changing the angle and direction of the positive direction of the coordinate axis relative to the three-dimensional data in the spiral transformation to perform the spiral transformation; the three-dimensional image data is horizontally turned over, and then spiral transformation is carried out; the three-dimensional image data is vertically overturned, and then spiral transformation is carried out; the three-dimensional image data is amplified, reduced or stretched, and then spiral transformation is carried out; changing the color saturation, contrast and brightness of the three-dimensional image data, and performing spiral transformation;
and the data integration module, used for integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
6. A medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the spiral transformation data amplification method in deep learning according to any one of claims 1 to 4.
7. A computer device, comprising: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the computer device executes the spiral transformation data amplification method in deep learning according to any one of claims 1 to 4.
CN202010098682.XA 2020-02-18 2020-02-18 Spiral transformation data amplification method, system, medium and equipment in deep learning Active CN111292230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098682.XA CN111292230B (en) 2020-02-18 2020-02-18 Spiral transformation data amplification method, system, medium and equipment in deep learning


Publications (2)

Publication Number Publication Date
CN111292230A CN111292230A (en) 2020-06-16
CN111292230B true CN111292230B (en) 2023-04-28

Family

ID=71029285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010098682.XA Active CN111292230B (en) 2020-02-18 2020-02-18 Spiral transformation data amplification method, system, medium and equipment in deep learning

Country Status (1)

Country Link
CN (1) CN111292230B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699952B (en) * 2021-01-06 2021-08-24 哈尔滨市科佳通用机电股份有限公司 Train fault image amplification method and system based on deep learning
CN116512254B (en) * 2023-04-11 2024-01-23 中国人民解放军军事科学院国防科技创新研究院 Direction-based intelligent control method and system for mechanical arm, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055996A (en) * 2011-02-23 2011-05-11 南京航空航天大学 Real three-dimensional display system and method based on space layer-by-layer scanning
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
WO2018196371A1 (en) * 2017-04-26 2018-11-01 华南理工大学 Three-dimensional finger vein recognition method and system
WO2019057190A1 (en) * 2017-09-25 2019-03-28 腾讯科技(深圳)有限公司 Method and apparatus for displaying knowledge graph, terminal device, and readable storage medium
CN109767440A (en) * 2019-01-11 2019-05-17 南京信息工程大学 A kind of imaged image data extending method towards deep learning model training and study


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tao Yan et al. "Depth Estimation From a Light Field Image Pair With a Generative Model." IEEE Access, 2019, pp. 12768-12778. *

Also Published As

Publication number Publication date
CN111292230A (en) 2020-06-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant