CN111292230A - Method, system, medium, and apparatus for spiral transform data augmentation in deep learning - Google Patents


Info

Publication number
CN111292230A
Authority
CN
China
Prior art keywords
dimensional image
spiral
data
transformation
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010098682.XA
Other languages
Chinese (zh)
Other versions
CN111292230B (en)
Inventor
钱晓华
陈夏晗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202010098682.XA
Publication of CN111292230A
Application granted
Publication of CN111292230B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources


Abstract

The invention provides a method, a system, a medium, and an apparatus for spiral transformation data amplification in deep learning. The method comprises the following steps: acquiring three-dimensional image data; performing spiral transformation on the three-dimensional image data to convert it into an original two-dimensional image; changing the manner of the spiral transformation to convert the three-dimensional image into an amplified two-dimensional image; and integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set, dividing the different original images into a training set and a test set according to specified requirements, so that the two-dimensional image set belonging to the training set is used for constructing a training model and the two-dimensional image set belonging to the test set is used for evaluating the training model. The invention preserves, to a certain extent, the correlation of features such as texture in three-dimensional space; for a given sample, the two-dimensional image obtained by the spiral transformation contains more comprehensive and complete three-dimensional information than a two-dimensional image obtained from a section.

Description

Method, system, medium, and apparatus for spiral transform data augmentation in deep learning
Technical Field
The invention belongs to the technical field of image data processing, relates to an image data transformation method, and particularly relates to a method, a system, a medium, and a device for spiral transformation data amplification in deep learning.
Background
In the prior art, the convolutional neural network has become one of the core algorithms in the field of image recognition, with stable performance when the training data are sufficient. For a general large-scale image classification problem, a convolutional neural network can be used to construct a hierarchical classifier, or to extract discriminative features of an image in fine-grained recognition for other classifiers to learn from. For the latter, features can be extracted by manually feeding different parts of the image into the convolutional neural network, or extracted by the network itself. However, directly processing three-dimensional data with a three-dimensional convolutional neural network occupies a large amount of computing resources, so processing two-dimensional data is more feasible. Most two-dimensional convolutional neural networks use cross-sectional slices as network input, which contain only the two-dimensional information of one section. However, the layers of a three-dimensional target region are strongly correlated in space, and a simple two-dimensional section ignores this inter-layer relationship. Meanwhile, the viewing angle of a cross-section is single: it cannot comprehensively represent the image characteristics of other viewing angles, and its representation of texture features in three-dimensional space is insufficient.
Furthermore, the most common data amplification methods are geometric transformation of the image, such as horizontal flipping of the two-dimensional image, scaling within a small range of multiples (e.g., 0.8-1.15), rotation, etc. These methods increase the amount of data to some extent, but the transformation results are all from the original data. For example, horizontal flipping changes only the view angle of the two-dimensional image, hardly changes the information content of the data set, and the data before and after augmentation are very similar, thereby limiting the effect of model prediction.
Therefore, how to provide a method, a system, a medium, and a device for spiral transformation data amplification in deep learning that overcome the defects of the prior art, in which a single two-dimensional image cannot retain more three-dimensional image information and effective dimension reduction cannot be realized, has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the foregoing disadvantages of the prior art, an object of the present invention is to provide a method, a system, a medium, and an apparatus for expanding spiral transform data in deep learning, which are used to solve the problem that the prior art cannot make a single two-dimensional image retain more three-dimensional image information and achieve effective dimensionality reduction.
To achieve the above and other related objects, an aspect of the present invention provides a method for amplifying spiral transform data in deep learning, including: acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter; performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image; changing the mode of the spiral transformation of the three-dimensional image data to convert the three-dimensional image data into an amplified two-dimensional image; and integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
In an embodiment of the invention, the three-dimensional image data includes a magnetic resonance image showing a location of the target region of interest.
In an embodiment of the present invention, the step of performing a spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image includes: selecting a transformation reference point in the interested target region as a spiral transformation midpoint; determining the maximum radius of the spiral transformation according to the maximum distance from the midpoint of the spiral transformation to the edge of the interested target region; and generating a spiral line by combining a spiral transformation radius, a transformation angle and the spiral transformation midpoint, wherein the spiral transformation radius is the distance from the spiral transformation midpoint to any point in the edge of the interested target region and is within the range determined by the maximum radius of the spiral transformation.
In an embodiment of the present invention, the transformation angle includes an azimuth angle and an elevation angle, and the step of generating a spiral line by combining the spiral transformation radius, the transformation angle, and the spiral transformation midpoint includes: constructing a conversion relation between the azimuth angle and the elevation angle; and combining the conversion relation and the spiral conversion radius to generate a spiral line.
In an embodiment of the invention, the step of constructing the conversion relationship between the azimuth angle and the elevation angle includes: constructing the conversion relation through uniform change of the azimuth angle and the elevation angle in a value range; or the conversion relation is constructed by making the surface density and the bulk density of the sampling points equal; or constructing the conversion relation through a specified preset sampling point distribution rule.
In an embodiment of the present invention, after the step of generating a spiral line by combining the spiral transformation radius, the transformation angle and the spiral transformation midpoint, the step of performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image further includes: correspondingly determining the position coordinates of all points on the spiral line in the three-dimensional image data; and calculating gray values of all points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain a two-dimensional image expanded by spiral transformation.
In an embodiment of the present invention, the step of changing the way of the spiral transformation of the three-dimensional image data to convert into the augmented two-dimensional image includes: changing the position of the origin of the coordinate system in the three-dimensional data in the spiral transformation to perform spiral transformation; changing the angle and direction of the positive direction of the coordinate axis relative to the three-dimensional data in the spiral transformation to perform spiral transformation; horizontally turning the three-dimensional image data, and then carrying out spiral transformation; vertically overturning the three-dimensional image data, and then carrying out spiral transformation; the three-dimensional image data is amplified, reduced or stretched, and then spiral transformation is carried out; and changing the color saturation, the contrast and the brightness of the three-dimensional image data, and then carrying out spiral transformation.
In another aspect, the present invention provides a system for amplifying data obtained by performing deep learning with helical transformation, including: the data acquisition module is used for acquiring three-dimensional image data, and the three-dimensional image data comprises image data corresponding to at least one imaging parameter; the first transformation module is used for carrying out spiral transformation on the three-dimensional image data so as to convert the three-dimensional image data into an original two-dimensional image; the second transformation module is used for changing the spiral transformation mode of the three-dimensional image data so as to convert the three-dimensional image data into an amplified two-dimensional image; and the data integration module is used for integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
Yet another aspect of the present invention provides a medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the method for augmenting spiral transformed data in deep learning.
A final aspect of the invention provides an apparatus comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored by the memory so as to enable the equipment to execute the spiral transformation data amplification method in deep learning.
As described above, the method, system, medium, and apparatus for amplifying helical transform data in deep learning according to the present invention have the following advantages:
when a two-dimensional image is generated, the data set obtained by the spiral transformation is more widely distributed, namely, more comprehensive three-dimensional information is contained. According to the data amplification method of the spiral transformation, on one hand, a single 2D image can keep 3D information, on the other hand, when data amplification is carried out each time, different two-dimensional image information can be obtained only by changing the coordinate axis angle of the spiral transformation, so that the data amplified each time are different, the amplified sample contains more information, and a very effective data amplification method is provided through the spiral transformation.
Drawings
FIG. 1 is a diagram illustrating an example data set of a method for augmenting spiral transformed data in deep learning according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of data transformation in an embodiment of the method for augmenting spiral transformed data in deep learning according to the present invention.
FIG. 3 is a schematic flow chart diagram illustrating a method for augmenting spiral transformed data in deep learning according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a spiral transformation method for augmenting spiral transformation data in deep learning according to an embodiment of the present invention.
FIG. 5 is a flowchart illustrating a spiral generation process of the method for amplifying spiral transform data in deep learning according to an embodiment of the present invention.
FIG. 6 is a schematic diagram of a coordinate system construction of the method for augmenting spiral transformed data in deep learning according to an embodiment of the present invention.
FIG. 7 is a data amplification simulation diagram of the spiral transformation in an embodiment of the method for amplifying spiral transformation data in deep learning according to the present invention.
FIG. 8A is a graph showing the comparison between the result of the method for amplifying data by spiral transformation in deep learning according to an embodiment of the present invention and another data amplification effect.
FIG. 8B is a schematic diagram illustrating data distribution of the method for amplifying helical transform data in deep learning according to an embodiment of the present invention.
FIG. 9 is a schematic diagram illustrating the structure of the spiral transform data amplification system in deep learning according to an embodiment of the present invention.
Description of the element reference numerals
9            Spiral transformation data amplification system in deep learning
91           Data acquisition module
92           First transformation module
93           Second transformation module
94           Data integration module
S31–S34      Steps
S321–S325    Steps
S323A–S323B  Steps
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The technical principle of the method, the system, the medium and the equipment for amplifying the spiral transformation data in the deep learning is as follows: acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter; performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image; changing the mode of the spiral transformation of the three-dimensional image data to convert the three-dimensional image data into an amplified two-dimensional image; and integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
Example one
The embodiment provides a method for amplifying spiral transformation data in deep learning, which includes:
acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image;
changing the spiral transformation mode of the three-dimensional image to convert the three-dimensional image into an amplified two-dimensional image;
and integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
The method for amplifying the spiral transform data in the deep learning provided by the present embodiment will be described in detail below with reference to the drawings.
Referring to fig. 1, a data set illustration of the spiral transformation data amplification method in deep learning according to an embodiment of the present invention is shown. Pancreatic tumors are small and difficult to segment automatically; they are closely connected with the surrounding tissues, display tissue-like intensity, and are difficult to identify on their own. Please refer to fig. 2, which is a schematic diagram of data transformation in an embodiment of the method for amplifying spiral transformation data in deep learning according to the present invention. In this embodiment, FIG. 2 takes the magnetic resonance images of a patient with pancreatic cancer as the three-dimensional image data: a magnetic resonance image of pancreatic cancer is subjected to spiral transformation and data amplification by the deep learning method to provide a two-dimensional image data set containing more three-dimensional information for predicting pancreatic cancer, and the images after the spiral transformation are X = [X₁, X₂, …, Xₙ].
Please refer to fig. 3, which is a schematic flowchart illustrating a method for amplifying spiral transform data in deep learning according to an embodiment of the present invention. The method for amplifying the spiral transformation data in the deep learning specifically comprises the following steps:
s31, three-dimensional image data is obtained, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter.
In the present embodiment, the three-dimensional image data includes magnetic resonance imaging, CT, and other three-dimensional imaging, and the three-dimensional image data is presented with a Region of Interest (ROI).
Specifically, the pancreatic cancer data are acquired from magnetic resonance images of pancreatic cancer patients, and the acquired data should include image information for a plurality of imaging parameters. In this embodiment, three modalities, i.e., ADC (Apparent Diffusion Coefficient imaging), DWI (Diffusion-Weighted Imaging), and T2 (T2-weighted imaging), are used to obtain MRI data of 64 patients; the data of the three modalities are image data corresponding to three different imaging parameters. At the same time, the location of the tumor has already been determined in the image data. In this example, the data set was from patients with pancreatic cancer treated by surgery at Rekin Hospital from January 2016 to December 2016, with each case including a pathological examination of the tumor, i.e., mutations in TP53 (a tumor suppressor gene) and KRAS (a proto-oncogene).
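To make the acquisition step of S31 concrete, the following is a minimal sketch of loading co-registered multi-modal volumes for one patient. The NIfTI file format, the nibabel library, and the file names are assumptions for illustration; the patent does not specify how the data are stored.

```python
# Illustrative sketch (not from the patent): loading the three assumed
# modalities (ADC, DWI, T2) for one patient as float32 numpy volumes.
import numpy as np
import nibabel as nib  # assumed I/O library for NIfTI volumes


def load_patient_volumes(patient_dir):
    """Return a dict modality-name -> 3-D numpy array for one patient."""
    modalities = {}
    for name in ("adc", "dwi", "t2"):
        img = nib.load(f"{patient_dir}/{name}.nii.gz")  # hypothetical file names
        modalities[name] = np.asarray(img.get_fdata(), dtype=np.float32)
    return modalities
```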
S32, performing spiral transformation on the three-dimensional image data to convert into an original two-dimensional image.
Please refer to fig. 4, which is a flowchart illustrating a spiral transformation process of the method for amplifying spiral transformation data in deep learning according to an embodiment of the present invention. As shown in fig. 4, in the present embodiment, S32 includes:
s321, selecting a transformation reference point in the interested target region as a spiral transformation midpoint. The transformation reference point is a point within the range of the target region of interest and is used as a spiral transformation midpoint.
Specifically, the target region of interest is a tumor in a Magnetic Resonance image, and a point located in the tumor in an original three-dimensional MRI (Magnetic Resonance Imaging), for example, a center point of the tumor, is selected as a midpoint O of the spiral transformation.
S322, determining the maximum radius of the spiral transformation according to the maximum distance from the midpoint of the spiral transformation to the edge of the interested target region.
In particular, the maximum distance of the tumor margin to the point O determines the maximum radius R of the helical transformation.
And S323, combining the spiral transformation radius, the transformation angle and the spiral transformation midpoint to generate a spiral line. In this embodiment, the spiral transformation radius is a distance from a midpoint of the spiral transformation to any one point of the edges of the target region of interest, and is within a range determined by a maximum radius of the spiral transformation.
Specifically, the distance from any point of the tumor margin to the point O is defined as r; then r ≤ R.
Please refer to fig. 5, which is a flowchart illustrating a spiral generation process of the spiral transformation data amplification method in deep learning according to an embodiment of the present invention. As shown in fig. 5, in the present embodiment, the transformation angle includes an azimuth angle and an elevation angle, and S323 includes:
and S323A, constructing a conversion relation between the azimuth angle and the elevation angle.
In the embodiment, the conversion relation is constructed by uniformly changing the azimuth angle and the elevation angle within a value range; or the conversion relation is constructed by making the surface density and the bulk density of the sampling points equal; or constructing the conversion relation through a specified preset sampling point distribution rule.
In particular, the key to the spiral transformation is to construct a relationship between the two angles Θ and Ψ. Different relationships can be constructed according to different requirements. For example, to distribute the sampling points evenly from the poles to the equator of the sphere, the arc length between adjacent sampling points is kept constant. Let the circle on the equator carry 2N sampling points and define the sampling arc as the distance d between two adjacent points on the equator; d is then calculated by formula (1):

d = 2πr/(2N) = πr/N    (1)

where d denotes the sampling arc, defined as the distance between two adjacent points on the equator, r denotes the distance from the tumor edge to the spiral transformation midpoint O, and 2N denotes the number of sampling points on the equator.

Further, according to the preset sampling point distribution rule, the number of sampling points on the horizontal circle corresponding to the angle Θ is

n(Θ) = 2πr·sinΘ/d = 2N·sinΘ

Let Θ be divided into N angles over its value range. For a given radius, if N is large enough, the total number of sampling points can be obtained by the integral of formula (2):

S = ∫₀^π 2N·sinΘ · (N/π) dΘ = 4N²/π    (2)

Therefore, the total number of sampling points on the surface of a sphere of given radius is approximately 4N²/π.

Further, knowing the coordinates of a point A, the arc between two adjacent points on the same horizontal circle can be expressed as r·Ψ*·sinΘ, where Ψ* is the difference between the angles Ψ of the two adjacent coordinate points with respect to the positive X axis. Setting this arc equal to the sampling arc d gives

Ψ* = d/(r·sinΘ) = π/(N·sinΘ)

which establishes the conversion relationship between Θ and Ψ. For example, in targeted sampling, Θ and Ψ satisfy the recurrence

Ψₖ₊₁ = Ψₖ + π/(N·sinΘₖ)

In this embodiment, the maximum radius of the spiral transformation is 60 and N is 20, resulting at most in a 120 × 254 two-dimensional image.
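As one possible reading of the relations reconstructed above (uniform elevation steps, with the azimuth increment Ψ* = π/(N·sinΘ) keeping the arc between neighbors constant), the following sketch generates the spiral sampling directions and the Cartesian coordinates of formula (3). The inventors' exact sampling rule may differ; parameter defaults are illustrative.

```python
# Minimal sketch of spiral-line generation under the reconstructed relations.
import numpy as np


def spiral_directions(n_turns=20, n_points=509):
    """Sample directions (theta, psi) along a spherical spiral.

    n_points ~ 4*N^2/pi for N = 20 (one sphere shell); assumption only.
    """
    thetas = np.linspace(1e-3, np.pi - 1e-3, n_points)  # avoid the poles
    psis = np.zeros(n_points)
    for k in range(1, n_points):
        # constant-arc azimuth increment on the current horizontal circle
        psis[k] = psis[k - 1] + np.pi / (n_turns * np.sin(thetas[k - 1]))
    return thetas, psis


def spiral_points(center, radius, thetas, psis):
    """Cartesian coordinates of the spiral on one sphere shell (formula (3))."""
    cx, cy, cz = center
    x = cx + radius * np.sin(thetas) * np.cos(psis)
    y = cy + radius * np.sin(thetas) * np.sin(psis)
    z = cz + radius * np.cos(thetas)
    return np.stack([x, y, z], axis=-1)
```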
And S323B, combining the conversion relation and the spiral transformation radius to generate a spiral line.
Specifically, please refer to fig. 6, which is a schematic diagram of the coordinate system construction of the spiral transformation data amplification method in deep learning according to an embodiment of the present invention. In three-dimensional space, a point A on the spiral line is determined by the azimuth angle Ψ, the elevation angle Θ, and the distance r from the origin. According to the conversion relationship between spherical and Cartesian coordinates, the coordinates of point A are expressed by formula (3):

x = r·sinΘ·cosΨ, y = r·sinΘ·sinΨ, z = r·cosΘ    (3)
And S324, correspondingly determining the position coordinates of all points on the spiral line in the three-dimensional image data.
S325, calculating the gray value of all the points on the spiral line according to the position coordinates, and filling the gray value into a two-dimensional matrix to obtain a two-dimensional image which is expanded by spiral transformation.
Specifically, please refer to fig. 7, which shows a data amplification simulation diagram of the spiral transformation in an embodiment of the method for amplifying spiral transformation data in deep learning according to the present invention. The coordinates in three-dimensional space are then mapped to positions in the original matrix, and the gray value of each point is determined by trilinear interpolation. Finally, the gray values are filled into a two-dimensional matrix to obtain the two-dimensional image expanded by the spiral transformation.
It should be noted that the method of trilinear interpolation is only one embodiment of determining the gray scale value in this embodiment, and other methods for calculating the gray scale value besides trilinear interpolation are also within the scope of the present invention.
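A sketch of the unrolling described in S324–S325 follows, under the assumption that each row of the two-dimensional image corresponds to one spherical shell of radius r sampled along the spiral; the trilinear interpolation of S325 is written out explicitly. `spiral_points` refers to the sketch above; row count and radius spacing are assumptions.

```python
# Sketch: sample the volume along the spiral with trilinear interpolation
# and stack one row per radius shell into a 2-D image.
import numpy as np


def trilinear(volume, pts):
    """Trilinear interpolation of `volume` at float coordinates `pts` (M, 3)."""
    pts = np.clip(pts, 0, np.asarray(volume.shape, dtype=float) - 1.001)
    i0 = np.floor(pts).astype(int)
    f = pts - i0
    out = np.zeros(len(pts))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0]) *
                     np.where(dy, f[:, 1], 1 - f[:, 1]) *
                     np.where(dz, f[:, 2], 1 - f[:, 2]))
                out += w * volume[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out


def spiral_unroll(volume, center, max_radius, thetas, psis):
    """Unroll the sphere around `center` into a 2-D matrix (shells x samples)."""
    rows = []
    for r in np.linspace(0.5, max_radius, 2 * int(max_radius)):  # e.g. 120 rows
        pts = spiral_points(center, r, thetas, psis)  # from the sketch above
        rows.append(trilinear(volume, pts))
    return np.stack(rows)
```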
S33, changing the spiral transformation mode of the three-dimensional image to convert the three-dimensional image into an amplified two-dimensional image.
In the embodiment, the position of the origin of the coordinate system in the three-dimensional data in the spiral transformation is changed to carry out the spiral transformation; and/or changing the positive direction of the coordinate axis in the spiral transformation relative to the angle and the direction of the three-dimensional data to perform spiral transformation, for example, rotating different angles along the coordinate axis (x axis, y axis or z axis); and/or horizontally turning the three-dimensional image data, and then carrying out spiral transformation; and/or vertically turning the three-dimensional image data, and then carrying out spiral transformation; and/or the three-dimensional image data is amplified, reduced or stretched and then is subjected to spiral transformation; and/or changing the color saturation, the contrast and the brightness of the three-dimensional image data, and then carrying out spiral transformation.
In particular, the method of the spiral transformation is applied to data amplification and effective dimension reduction of three-dimensional data. The purpose of data amplification is to increase the diversity of data in the sample, to combat overfitting of the network. There are three ways of data amplification in deep learning: performing data amplification on the training set, and not performing data amplification on the test set; respectively carrying out data amplification on the training set and the test set; and mixing the amplified data with the original data, and randomly dividing the data into a training set and a testing set.
The result of the spiral transformation depends on the constructed rectangular spatial coordinate system and on the parameter settings of the transformation. Under the same parameters, constructing different coordinate systems for the same three-dimensional data yields different spiral transformation results. To facilitate comparison of the transformation results under two coordinate systems, the same coordinate origin and the same positive z-axis direction are set, and only the positive direction of the x axis is changed. Assuming that the positive direction of the x axis is rotated by ΔΨ, the corresponding coordinates A' of point A can be expressed as formula (4):

x' = r·sinΘ·cos(Ψ + ΔΨ), y' = r·sinΘ·sin(Ψ + ΔΨ), z' = r·cosΘ    (4)

If A'(x', y', z') = A(x, y, z), combining formula (3) and formula (4) yields formula (5):

cos(Ψ + ΔΨ) = cosΨ, sin(Ψ + ΔΨ) = sinΨ    (5)

which simplifies to formula (6):

cosΨ·(cosΔΨ − 1) = sinΨ·sinΔΨ, sinΨ·(cosΔΨ − 1) = −cosΨ·sinΔΨ    (6)

Solving this system of equations yields cosΔΨ = 1, so A'(x', y', z') = A(x, y, z) if and only if ΔΨ = 2πk. Different spiral transformation results can therefore be obtained simply by changing the angle of the positive x axis within the XOY plane of the spatial coordinate system.
Similarly, besides changing the angle of the positive direction of a coordinate axis, other transformation modes can also produce different spiral transformation results from the same three-dimensional data, such as changing the position of the origin of the coordinate system, geometrically transforming the original data, changing the parameters of the spiral transformation (including the number of rotations, the sampling interval, etc.), horizontal and vertical flipping, and scaling within a small range of multiples (0.8–1.15 times). The transformed two-dimensional image is a part of the original three-dimensional image, and the result of one spiral transformation is equivalent to a subset of the original data, so the amplified data obtained under different coordinate systems are, to a certain extent, complementary with respect to the same three-dimensional original data.
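The following sketch illustrates this augmentation logic: since rotating the positive x axis by ΔΨ ≠ 2πk changes the result, a few ΔΨ values combined with a horizontal flip already yield six distinct two-dimensional unrollings of one volume. The specific angles are illustrative, not prescribed by the patent; `spiral_unroll` refers to the earlier sketch.

```python
# Sketch: amplified views of one volume by varying the spiral transformation.
import numpy as np


def augmented_views(volume, center, max_radius, thetas, psis):
    """Six distinct unrollings: three x-axis rotations x optional flip."""
    views = []
    cx, cy, cz = center
    for d_psi in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):  # rotate positive x axis
        for flipped in (False, True):                  # horizontal flip
            vol = np.flip(volume, axis=0) if flipped else volume
            # mirror the midpoint coordinate when the volume is flipped
            c = (vol.shape[0] - 1 - cx, cy, cz) if flipped else (cx, cy, cz)
            views.append(spiral_unroll(vol, c, max_radius,
                                       thetas, psis + d_psi))
    return views
```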
TP53 gene prediction for pancreatic cancer is a very challenging task: the tumor region occupies a small proportion of the image and is difficult to identify, and the difficulty of obtaining multi-modal data leads to an insufficient sample size, which further increases the difficulty of the task. This embodiment alleviates the small-sample problem to some extent. Considering that a conventional section image loses a large amount of spatial information, and that directly using three-dimensional convolution adds a large amount of computation to a three-modality network that already has many parameters, a novel spiral transformation method is provided in which an image is spirally transformed before being input into the convolutional neural network for processing. Compared with 3D models, computational resources and model parameters are reduced.
During the experiment, the original data are amplified 27-fold both by the spiral transformation and by prior-art geometric amplification, and the effect of data amplification is evaluated using normalized mutual information. Here the geometric amplification applies geometric-transformation data amplification, such as horizontal and vertical flipping, to the 2D section with the largest tumor area. Please refer to fig. 8A, which compares the result of the spiral transformation data amplification method in deep learning according to an embodiment of the present invention with the other data amplification effect. The results of the two methods for one case are shown in fig. 8A: (a) shows amplification of the spiral transformation data, and (b) shows amplification of the geometric-transformation data on the 2D slice with the largest tumor area, where the top left corner is the original image and the other three are amplified images.
To compare the similarity of the images before and after amplification under the two methods, the normalized mutual information between each of the 26 amplified images and the original image is calculated for fig. 8A(a) and fig. 8A(b) respectively; summing each group gives 32.8838 for the spiral transformation and 38.3224 for the section images. Normalized mutual information is a way of measuring the similarity of two images, i.e., a measure of how much one image contains the other; the larger its value, the higher the similarity of the two images. It can be computed from the information entropy and joint information entropy of the images. In addition, a t-test on the two groups of data gives p = 6.4920 × 10⁻⁷, far less than the 0.01 confidence level, which shows that the normalized mutual information of the two groups differs significantly, i.e., the images amplified by the spiral transformation are less similar to one another.
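For reference, below is a sketch of computing normalized mutual information from histogram entropies. The patent does not state the exact normalization, so the form (H(A)+H(B))/H(A,B) and the bin count are assumptions; the two images must contain the same number of pixels.

```python
# Sketch: normalized mutual information between two grey-level images.
import numpy as np


def _entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    return (_entropy(p_a) + _entropy(p_b)) / _entropy(p_ab.ravel())
```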
The statistics for spiral transformation amplification and for prior-art geometric transformation amplification, combined with fig. 8A, are compiled in tabular form in the data amplification comparison table of Table 1.
Table 1: data amplification comparison table
Helical transformation Geometric transformation
Normalized mutual information 32.8838 38.3224
Degree of dispersion 0.2709 0.0927
Euclidean distance 14.5826 7.7633
To facilitate visual observation of the data amplification effect, dimension-reduction visualization is performed only on the original data and on the data obtained by doubling it (horizontal flipping and vertical flipping); please refer to fig. 8B, which shows the data distribution of the spiral transformation data amplification method in deep learning according to an embodiment of the present invention. As shown in fig. 8B, the first graph in (a) shows the degree of dispersion of the data distribution under the geometric transformation, the second graph in (a) is an enlarged view of the portion of the first graph where the data are concentrated, and (b) shows the degree of dispersion of the data distribution under the spiral transformation and data amplification. After normalizing the data, the degree of dispersion S of the two-dimensional discrete points is calculated: as shown in Table 1, the degree of dispersion of the prior-art geometric transformation is 0.0927, while that of the spiral transformation in an embodiment of the present invention is 0.2709, which is significantly higher. In addition, the Euclidean distances d from each amplified point to the original point are calculated and summed, giving 7.7633 for the geometric transformation and 14.5826 for the spiral transformation. In conclusion, the spiral transformation exhibits lower normalized mutual information, a higher degree of dispersion, and a larger Euclidean distance, so the similarity between data items is smaller, the distribution range is wider, and the data amplification effect is better.
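A sketch of the two comparison statistics used here follows, under the assumption that the degree of dispersion S is the mean distance of the normalized two-dimensional points from their centroid; the patent does not give the exact formula, so this is illustrative only.

```python
# Sketch: dispersion S of the augmented 2-D points and summed Euclidean
# distance d from each augmented point to the original point, after
# normalizing all points jointly to [0, 1].
import numpy as np


def dispersion_and_distance(aug_points, original_point):
    pts = np.vstack([original_point, aug_points]).astype(float)
    pts = (pts - pts.min(0)) / (pts.max(0) - pts.min(0) + 1e-12)
    orig, aug = pts[0], pts[1:]
    s = np.mean(np.linalg.norm(aug - aug.mean(axis=0), axis=1))  # dispersion S
    d = np.sum(np.linalg.norm(aug - orig, axis=1))               # summed distance
    return s, d
```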
The above results show that, also in the case of a two-dimensional image, the data set obtained by the spiral transformation is more widely distributed, i.e. contains more comprehensive three-dimensional information. The data amplification method of the method is proved to be capable of enabling a single 2D image to keep 3D information and reflecting the spatial distribution characteristics and spatial texture relation of a tumor region on one hand; on the other hand, when data amplification is carried out each time, different tumor information can be obtained only by changing the coordinate axis angle of the spiral transformation, so that the data amplified each time are different, the amplified sample contains more information, and the spiral transformation is a very effective data amplification method.
In addition, when the spiral transformation and data amplification are applied to deep learning, the constraint of prior knowledge is added into a model-driven loss function, and overfitting is also facilitated to be relieved; the main network is initialized by using the parameters of the image network pre-training, and the idea of transfer learning is combined, so that the network parameters have better initialization distribution, the features (such as angles, edges and the like) of the lowest level can be quickly extracted under the condition of a small sample, the convergence speed is accelerated, and overfitting is reduced.
And S34, integrating the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
In this embodiment, data integration is a data integration mode in which data from different data sources is collected, sorted, cleaned, converted, and loaded to a new data source, thereby providing a unified data view for data consumers. In this embodiment, data integration specifically refers to sharing or combining data from the augmented two-dimensional image and the original two-dimensional image to form a two-dimensional image set.
It should be noted that, in this embodiment, data amplification is performed separately on the training set and the test set to obtain the corresponding two-dimensional image sets; that is, as indicated in the example, the data are divided by patient, so that the training set and the test set come from different original images without cross-mixing. The different original images are divided into a training set and a test set according to specified requirements, so that the two-dimensional image set belonging to the training set is used for constructing a training model and the two-dimensional image set belonging to the test set is used for evaluating the training model.
Specifically, in the data amplification process, this embodiment fixes the parameters of the spiral transformation and the origin and positive directions of the spatial rectangular coordinate system, and performs geometric transformation on the original data. The three-dimensional data are rotated by different angles along the z axis, flipped horizontally and vertically, and so on, and then converted into two-dimensional space through the spiral transformation, expanding the data to 27 times the original amount. The data are then divided according to a specified requirement, namely that the data set is divided into five parts according to the proportion of positive and negative samples, four of which form the training set and the remaining one the test set.
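A minimal sketch of this patient-level split follows: patients are stratified by label so that each part keeps the positive/negative ratio, and one of five parts is held out for testing. The grouping keys and seed are illustrative assumptions.

```python
# Sketch: stratified patient-level train/test split (4:1), so that augmented
# images from one patient never appear in both sets.
import random
from collections import defaultdict


def split_by_patient(labels, n_folds=5, seed=0):
    """labels: dict patient_id -> 0/1. Returns (train_ids, test_ids)."""
    by_label = defaultdict(list)
    for pid, y in labels.items():
        by_label[y].append(pid)
    rng = random.Random(seed)
    train, test = [], []
    for pids in by_label.values():   # keep positive/negative ratio per part
        rng.shuffle(pids)
        cut = len(pids) // n_folds
        test += pids[:cut]
        train += pids[cut:]
    return train, test
```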
Based on the characteristics, the data amplification can be carried out by using a spiral transformation method, and the information content of a training sample in a deep learning method is increased. Besides being applied to a pancreatic cancer data set, the data amplification method of the spiral transformation is also applicable to other data sets of which the target areas are similar to spheres, and a new data amplification idea is provided for solving the problem of insufficient data volume in deep learning.
The prediction performance for the pancreatic cancer TP53 gene under different models is compiled in tabular form in the multi-modal model effect comparison table of Table 2. The evaluation indices are Accuracy, AUC (area under the receiver operating characteristic curve), Recall, Precision, and F1 Score, all widely used in the classification field. Each index is defined as follows:
Accuracy = (TP + TN)/(TP + TN + FP + FN)

Recall = TP/(TP + FN)

Precision = TP/(TP + FP)

F1 Score = 2 × Precision × Recall/(Precision + Recall)
where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively.
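These four indices translate directly into code; a straightforward sketch from the confusion-matrix counts follows.

```python
# Sketch: the four classification indices defined above.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)


def recall(tp, fn):
    return tp / (tp + fn)


def precision(tp, fp):
    return tp / (tp + fp)


def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```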
The models in rows 3 and 5 of Table 2 are hybrid-driven multi-modal fusion models using the spiral transformation. The two-dimensional sections are two-dimensional images obtained from 27 different section angles in three-dimensional space, and the three-dimensional model inputs the three-dimensional data directly into a 3D convolutional neural network framework. Since a 3D convolutional network framework cannot incorporate a bilinear pooling structure, the experiments were performed in a framework without bilinear pooling. Compared with models under the same network framework, the proposed model outperforms the 3D model in Accuracy, AUC, Precision, and F1 Score, and the multi-modal prediction model based on the spiral transformation performs better than the two-dimensional section on all five indices. This fully shows that the spirally transformed image performs better than other input images in the pancreatic cancer gene prediction task, and also shows that it carries more comprehensive information than a 2D section image and expresses the spatial relationships of the three-dimensional image to a certain extent. Besides clearly improving prediction performance on the multi-modal pancreatic cancer data set, the spiral transformation method also has reference significance for processing other data sets, particularly those whose target object is approximately spherical.
Table 2: multi-modal model effect comparison table
[Table 2, the multi-modal model effect comparison table, is presented as an image in the original publication; its numeric contents are not reproduced here.]
The present embodiment provides a computer storage medium having a computer program stored thereon, which when executed by a processor implements the method for augmenting spiral transformed data in deep learning.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned computer-readable storage media comprise: various computer storage media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The method for amplifying the spiral transformation data in the deep learning fully utilizes three-dimensional information, spirally expands a three-dimensional target area to a two-dimensional plane, retains the correlation between original adjacent pixels in the transformation process, and then uses the transformed image for prediction of gene mutation.
Example two
The present embodiment provides a system for amplifying data obtained by performing spiral transformation in deep learning, where the system for amplifying data obtained by performing spiral transformation in deep learning includes:
the data acquisition module is used for acquiring three-dimensional image data, and the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
the first transformation module is used for carrying out spiral transformation on the three-dimensional image data so as to convert the three-dimensional image data into an original two-dimensional image;
the second transformation module is used for changing the spiral transformation mode of the three-dimensional image so as to convert the three-dimensional image into an amplified two-dimensional image;
and the data integration module is used for integrating the data of the original two-dimensional image and the amplified two-dimensional image to form a two-dimensional image set.
The system for amplifying the spiral transformation data in deep learning provided by the present embodiment will be described in detail with reference to the drawings. It should be noted that the division of the modules of the following system is only a logical division; in actual implementation, the modules may be wholly or partially integrated into one physical entity or may be physically separated. The modules may all be implemented as software called by a processing element, all as hardware, or partly as software called by a processing element and partly as hardware. For example, a module may be a separately established processing element, or may be integrated into a chip of the system described below. A module may also be stored in the memory of the system in the form of program code, with a processing element of the system calling and executing its function. The other modules are implemented similarly. All or part of the modules can be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, the steps of the above method or the following modules may be completed by hardware integrated logic circuits in a processor element or by instructions in the form of software.
The following modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), and the like. When some of the following modules are implemented in the form of a program code called by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or other processor capable of calling the program code. These modules may be integrated together and implemented in the form of a System-on-a-chip (SOC).
Please refer to fig. 9, which is a schematic structural diagram of a spiral transform data amplification system in deep learning according to an embodiment of the present invention. As shown in fig. 9, the system 9 for amplifying spiral transform data in deep learning includes: a data acquisition module 91, a first transformation module 92, a second transformation module 93, and a data integration module 94.
The data acquiring module 91 is configured to acquire three-dimensional image data, where the three-dimensional image data includes image data corresponding to at least one imaging parameter.
In the present embodiment, the three-dimensional image data includes magnetic resonance imaging, CT, and other three-dimensional imaging, and the three-dimensional image data is presented with a Region of Interest (ROI).
The first transformation module 92 is configured to perform a spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image.
In this embodiment, the first transformation module 92 is specifically configured to select a transformation reference point in the target region of interest as a spiral transformation midpoint; determining the maximum radius of the spiral transformation according to the maximum distance from the midpoint of the spiral transformation to the edge of the interested target region; and generating a spiral line by combining a spiral transformation radius, a transformation angle and the spiral transformation midpoint, wherein the spiral transformation radius is the distance from the spiral transformation midpoint to any point in the edge of the interested target region and is within the range determined by the maximum radius of the spiral transformation. The transformation angle comprises an azimuth angle and an elevation angle, and a transformation relation between the azimuth angle and the elevation angle is constructed; and combining the conversion relation and the spiral conversion radius to generate a spiral line. Correspondingly determining the position coordinates of all points on the spiral line in the three-dimensional image data; and calculating gray values of all points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain a two-dimensional image expanded by spiral transformation.
Specifically, the first transformation module 92 is configured to construct the transformation relationship by uniformly changing the azimuth angle and the elevation angle within a value range; or the conversion relation is constructed by making the surface density and the bulk density of the sampling points equal; or constructing the conversion relation through a specified preset sampling point distribution rule.
The second transformation module 93 is used for changing the way of the spiral transformation of the three-dimensional image to convert the three-dimensional image into an amplified two-dimensional image.
In this embodiment, the second transformation module 93 is specifically configured to change an origin position of a coordinate system in the three-dimensional data in the spiral transformation to perform the spiral transformation; and/or changing the positive direction of the coordinate axis in the spiral transformation relative to the angle and the direction of the three-dimensional data to perform spiral transformation, for example, rotating different angles along the coordinate axis (x axis, y axis or z axis); and/or horizontally flipping the three-dimensional image data; and/or vertically flipping the three-dimensional image data; and/or enlarging, reducing or stretching the three-dimensional image data; and/or changing color saturation, contrast, brightness of the three-dimensional image data.
The data integration module 94 is configured to integrate the original two-dimensional image and the amplified two-dimensional image into a two-dimensional image set.
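A minimal sketch of how the four modules (91–94) could be wired together is shown below; class and method names are illustrative and not the patent's reference implementation.

```python
# Illustrative wiring of the four modules described above; the callables are
# placeholders for the data acquisition (91), first transformation (92),
# second transformation (93), and data integration (94) modules.
class SpiralAugmentationSystem:
    def __init__(self, acquire, spiral_transform, vary_transform, integrate):
        self.acquire = acquire
        self.spiral_transform = spiral_transform
        self.vary_transform = vary_transform
        self.integrate = integrate

    def run(self, source):
        volume = self.acquire(source)                # module 91
        original_2d = self.spiral_transform(volume)  # module 92
        augmented_2d = self.vary_transform(volume)   # module 93: varied views
        return self.integrate([original_2d] + list(augmented_2d))  # module 94
```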
In the embodiment, the spiral transformation data amplification system in the deep learning retains the correlation of the characteristics such as textures and the like between original adjacent pixels on a three-dimensional space to a certain extent, and for a sample, a two-dimensional image obtained by spiral transformation contains more comprehensive and complete three-dimensional information than a two-dimensional image obtained by a section.
EXAMPLE III
This embodiment provides an apparatus, the apparatus comprising: a processor, memory, transceiver, communication interface, or/and system bus; the memory and the communication interface are connected with the processor and the transceiver through a system bus and are used for completing mutual communication, the memory is used for storing the computer program, the communication interface is used for communicating with other equipment, and the processor and the transceiver are used for running the computer program to enable the equipment to execute all steps of the spiral transformation data amplification method in the deep learning.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. The communication interface is used for realizing communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The Memory may include a Random Access Memory (RAM), and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components.
The protection scope of the method for amplifying the spiral transform data in the deep learning is not limited to the execution sequence of the steps listed in this embodiment, and all the schemes of adding, subtracting, and replacing the steps in the prior art according to the principles of the present invention are included in the protection scope of the present invention.
The invention also provides a system for spiral transformation data amplification in deep learning that can implement the described method; however, the apparatus for implementing the method includes, but is not limited to, the structure of the system enumerated in this embodiment, and all structural variations and replacements of the prior art made according to the principles of the invention are included in the protection scope of the invention. It should be noted that the method and system for spiral transformation data amplification in deep learning are also applicable to other multimedia content, such as videos and friend-circle posts, which is likewise included in the protection scope of the invention.
In summary, with the spiral-transform data augmentation method, system, medium, and device in deep learning, the two-dimensional data set generated by the spiral transformation is more widely distributed, that is, it contains more comprehensive three-dimensional information. With this augmentation method, a single 2D image retains 3D information, and each augmentation pass yields different two-dimensional image information simply by changing the coordinate-axis angle of the spiral transformation, so that the data produced in each pass differ and the augmented samples carry more information; the spiral transformation thus provides a highly effective means of data augmentation. The invention effectively overcomes various shortcomings of the prior art and has high value for industrial application.
The foregoing embodiments merely illustrate the principles and utility of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (10)

1. A spiral-transform data augmentation method in deep learning, characterized by comprising the following steps:
acquiring three-dimensional image data, wherein the three-dimensional image data comprises image data corresponding to at least one imaging parameter;
performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image;
changing the manner of the spiral transformation of the three-dimensional image data to convert the three-dimensional image data into an augmented two-dimensional image;
and integrating the original two-dimensional image and the augmented two-dimensional image to form a two-dimensional image set.
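By way of illustration only (this sketch is not part of the claimed subject matter), the four steps of claim 1 might be composed as follows in Python; the function names, the NumPy dependency, and the representation of a changed transformation as a callable are assumptions of this sketch:

    import numpy as np

    def augment_by_spiral(volume, transform, variations):
        # Step 2: spiral-transform the 3D volume into the original 2D image.
        original = transform(volume)
        # Step 3: one augmented 2D image per changed application of the transform.
        augmented = [transform(vary(volume)) for vary in variations]
        # Step 4: integrate original and augmented images into one 2D image set.
        return np.stack([original] + augmented)

    # Step 1 acquires the three-dimensional image data, e.g.:
    # volume = np.load("mri_volume.npy")  # hypothetical file name
    # image_set = augment_by_spiral(volume, spiral_transform, [np.flipud])

Here `spiral_transform` stands for any concrete 3D-to-2D spiral transformation, such as the one sketched under claim 6 below.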
2. The spiral-transform data augmentation method in deep learning according to claim 1, wherein the three-dimensional image data comprises a magnetic resonance image presenting a target region of interest.
3. The spiral-transform data augmentation method in deep learning according to claim 2, wherein the step of performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image comprises:
selecting a transformation reference point within the target region of interest as the center point of the spiral transformation;
determining the maximum radius of the spiral transformation according to the maximum distance from the center point of the spiral transformation to the edge of the target region of interest;
and generating a spiral line by combining a spiral transformation radius, a transformation angle, and the center point of the spiral transformation, wherein the spiral transformation radius is the distance from the center point of the spiral transformation to any point on the edge of the target region of interest and lies within the range determined by the maximum radius of the spiral transformation.
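For illustration (the claim fixes no particular parameterization), one spiral line on a single sphere of a given radius around the chosen center point could be generated as follows; the turn count and sample count are arbitrary assumptions of this sketch:

    import numpy as np

    def spiral_shell(center, radius, n_turns=16, n_points=256):
        # Curve parameter t runs once along the spiral line.
        t = np.linspace(0.0, 1.0, n_points)
        elevation = np.pi * t                 # pole-to-pole sweep
        azimuth = 2.0 * np.pi * n_turns * t   # n_turns full revolutions
        offsets = radius * np.stack([np.sin(elevation) * np.cos(azimuth),
                                     np.sin(elevation) * np.sin(azimuth),
                                     np.cos(elevation)], axis=1)
        return np.asarray(center, dtype=float) + offsets  # (n_points, 3) coordinates

Sweeping the radius from zero up to the maximum radius of claim 3 then covers the whole target region of interest shell by shell.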
4. The spiral-transform data augmentation method in deep learning according to claim 3, wherein the transformation angle comprises an azimuth angle and an elevation angle, and the step of generating a spiral line by combining the spiral transformation radius, the transformation angle, and the center point of the spiral transformation comprises:
constructing a conversion relation between the azimuth angle and the elevation angle;
and combining the conversion relation and the spiral transformation radius to generate the spiral line.
5. The spiral-transform data augmentation method in deep learning according to claim 4, wherein the step of constructing the conversion relation between the azimuth angle and the elevation angle comprises:
constructing the conversion relation through a uniform change of the azimuth angle and the elevation angle within their value ranges; or
constructing the conversion relation by making the surface density and the volume density of the sampling points equal; or
constructing the conversion relation through a specified preset sampling-point distribution rule.
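The three alternatives of claim 5 leave the exact formulas open; as a non-limiting sketch, the first two could be realized as follows (reading the equal-density option as the standard equal-area mapping on the sphere is our interpretation, not a statement of the claim):

    import numpy as np

    def elevation_uniform_angles(t):
        # First alternative: azimuth and elevation both change uniformly
        # over their value ranges, i.e. linearly in the curve parameter t.
        return np.pi * t

    def elevation_equal_area(t):
        # Second alternative (as interpreted here): choose the elevation so
        # that sampling points fall with equal density on the sphere surface;
        # making cos(elevation) uniform is the standard equal-area mapping.
        return np.arccos(1.0 - 2.0 * t)

    # Either relation pairs with a uniformly increasing azimuth, e.g.
    # azimuth = 2 * np.pi * n_turns * t, to yield the spiral line of claim 4.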
6. The spiral-transform data augmentation method in deep learning according to claim 3, wherein, after the step of generating a spiral line by combining the spiral transformation radius, the transformation angle, and the center point of the spiral transformation, the step of performing spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image further comprises:
correspondingly determining the position coordinates of all points on the spiral line in the three-dimensional image data;
and calculating the gray values of all the points on the spiral line according to the position coordinates, and filling the gray values into a two-dimensional matrix to obtain the two-dimensional image unfolded by the spiral transformation.
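Claims 3 and 6 together amount to sampling the volume along the spiral and writing the gray values into a matrix. A possible end-to-end sketch follows; the one-row-per-radius-shell layout, the SciPy trilinear interpolation, and all default sizes are assumptions of this sketch, not requirements of the claims:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def spiral_transform(volume, center, r_max, n_radii=64, n_points=256,
                         n_turns=16):
        t = np.linspace(0.0, 1.0, n_points)
        elevation = np.pi * t
        azimuth = 2.0 * np.pi * n_turns * t
        # Unit directions along one spherical spiral (cf. the claim-3 sketch).
        direction = np.stack([np.sin(elevation) * np.cos(azimuth),
                              np.sin(elevation) * np.sin(azimuth),
                              np.cos(elevation)])           # (3, n_points)
        radii = np.linspace(0.0, r_max, n_radii)            # one matrix row per shell
        # Position coordinates of every spiral point in the volume: (3, n_radii, n_points).
        coords = (np.asarray(center, dtype=float)[:, None, None]
                  + radii[None, :, None] * direction[:, None, :])
        # Gray value of each point by trilinear interpolation (order=1),
        # filled into an (n_radii, n_points) two-dimensional matrix.
        return map_coordinates(volume, coords, order=1, mode="nearest")

    # img2d = spiral_transform(volume, center=(32, 32, 32), r_max=30.0)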
7. The spiral-transform data augmentation method in deep learning according to claim 1, wherein the step of changing the manner of the spiral transformation of the three-dimensional image data to convert the three-dimensional image data into an augmented two-dimensional image comprises:
changing the position of the origin of the coordinate system relative to the three-dimensional data and then performing the spiral transformation;
changing the angle and direction of the positive coordinate-axis directions relative to the three-dimensional data and then performing the spiral transformation;
horizontally flipping the three-dimensional image data and then performing the spiral transformation;
vertically flipping the three-dimensional image data and then performing the spiral transformation;
enlarging, reducing, or stretching the three-dimensional image data and then performing the spiral transformation;
and changing the color saturation, contrast, and brightness of the three-dimensional image data and then performing the spiral transformation.
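Each item of claim 7 changes the volume or the coordinate frame and then re-runs the transform. A sketch of several of these variations, reusing `spiral_transform` from the claim-6 sketch (all angles, offsets, and intensity factors below are illustrative assumptions; intensities are assumed normalized to [0, 1]):

    import numpy as np
    from scipy import ndimage

    def augmented_views(volume, center, r_max):
        views = []
        # Change the coordinate-axis angle: rotate the volume, then transform.
        for angle in (30, 60, 90):
            rotated = ndimage.rotate(volume, angle, axes=(0, 1), reshape=False)
            views.append(spiral_transform(rotated, center, r_max))
        # Horizontal and vertical flips before the transformation.
        views.append(spiral_transform(np.flip(volume, axis=2), center, r_max))
        views.append(spiral_transform(np.flip(volume, axis=1), center, r_max))
        # Change the position of the origin of the spiral coordinate system.
        shifted = tuple(c + 2.0 for c in center)
        views.append(spiral_transform(volume, shifted, r_max))
        # Simple brightness/contrast change before the transformation.
        brightened = np.clip(1.2 * volume + 0.05, 0.0, 1.0)
        views.append(spiral_transform(brightened, center, r_max))
        return views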
8. A spiral-transform data augmentation system in deep learning, characterized by comprising:
a data acquisition module, configured to acquire three-dimensional image data, the three-dimensional image data comprising image data corresponding to at least one imaging parameter;
a first transformation module, configured to perform spiral transformation on the three-dimensional image data to convert the three-dimensional image data into an original two-dimensional image;
a second transformation module, configured to change the manner of the spiral transformation of the three-dimensional image data to convert the three-dimensional image data into an augmented two-dimensional image;
and a data integration module, configured to integrate the original two-dimensional image and the augmented two-dimensional image to form a two-dimensional image set.
9. A medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the spiral-transform data augmentation method in deep learning according to any one of claims 1 to 7.
10. An apparatus, comprising: a processor and a memory;
the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory, so as to cause the apparatus to perform the spiral-transform data augmentation method in deep learning according to any one of claims 1 to 7.
CN202010098682.XA 2020-02-18 2020-02-18 Spiral transformation data amplification method, system, medium and equipment in deep learning Active CN111292230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098682.XA CN111292230B (en) 2020-02-18 2020-02-18 Spiral transformation data amplification method, system, medium and equipment in deep learning


Publications (2)

Publication Number Publication Date
CN111292230A 2020-06-16
CN111292230B 2023-04-28

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055996A (en) * 2011-02-23 2011-05-11 南京航空航天大学 Real three-dimensional display system and method based on space layer-by-layer scanning
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
WO2018196371A1 (en) * 2017-04-26 2018-11-01 华南理工大学 Three-dimensional finger vein recognition method and system
WO2019057190A1 (en) * 2017-09-25 2019-03-28 腾讯科技(深圳)有限公司 Method and apparatus for displaying knowledge graph, terminal device, and readable storage medium
CN109767440A (en) * 2019-01-11 2019-05-17 南京信息工程大学 A kind of imaged image data extending method towards deep learning model training and study

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAO YAN et al.: "Depth Estimation From a Light Field Image Pair With a Generative Model" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699952A (en) * 2021-01-06 2021-04-23 哈尔滨市科佳通用机电股份有限公司 Train fault image amplification method and system based on deep learning
CN112699952B (en) * 2021-01-06 2021-08-24 哈尔滨市科佳通用机电股份有限公司 Train fault image amplification method and system based on deep learning
CN116512254A (en) * 2023-04-11 2023-08-01 中国人民解放军军事科学院国防科技创新研究院 Direction-based intelligent control method and system for mechanical arm, equipment and storage medium
CN116512254B (en) * 2023-04-11 2024-01-23 中国人民解放军军事科学院国防科技创新研究院 Direction-based intelligent control method and system for mechanical arm, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant