CN112748089A - Phase unwrapping method and device in Doppler optical coherence tomography - Google Patents


Info

Publication number
CN112748089A
CN112748089A (application CN201911050545.2A)
Authority
CN
China
Prior art keywords
phase
winding
image
residual
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911050545.2A
Other languages
Chinese (zh)
Other versions
CN112748089B (en)
Inventor
黄勇
吴传超
艾丹妮
杨健
王涌天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201911050545.2A priority Critical patent/CN112748089B/en
Publication of CN112748089A publication Critical patent/CN112748089A/en
Application granted granted Critical
Publication of CN112748089B publication Critical patent/CN112748089B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/41 Refractivity; Phase-affecting properties, e.g. optical path length
    • G01N 21/45 Refractivity; Phase-affecting properties, e.g. optical path length using interferometric methods; using Schlieren methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Multimedia (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The embodiment of the invention provides a phase unwrapping method and a phase unwrapping device in Doppler optical coherence tomography, wherein the method comprises the following steps: based on a winding image in Doppler optical coherence tomography, acquiring a winding number corresponding to a winding phase in the winding image by utilizing a pre-established phase unwrapping classification model based on a deep learning residual error network; calculating a real phase corresponding to the winding image based on the winding phase and the winding number corresponding to the winding phase; the phase unwrapping classification model based on the deep learning residual error network is obtained by setting the deviation between the wrapped phase and the real phase to be an integral multiple of 2 pi in advance, constructing a classification model based on the deep learning residual error network based on the setting, converting the phase unwrapping problem into a semantic segmentation problem, and then training by using the constructed training sample. The embodiment of the invention can effectively shorten the time consumption of the operation process, realize real-time unwrapping and is more beneficial to realizing multi-module integration.

Description

Phase unwrapping method and device in Doppler optical coherence tomography
Technical Field
The invention relates to the technical field of photoelectric imaging, in particular to a phase unwrapping method and a phase unwrapping device in Doppler optical coherence tomography.
Background
Doppler optical coherence tomography is widely used in biomedical imaging because of its high resolution, non-invasiveness, and real-time three-dimensional imaging capability. For fluid studies (e.g., vascular imaging), accurate phase information can provide valuable potential biophysical information. In practical imaging systems, however, the phase map suffers from a wrap-around phenomenon when the measured length exceeds one wavelength.
The phase winding problem can be solved by phase unwrapping, which recovers the real phase information from the wound phase confined to the range (-π, π). Various phase unwrapping methods have been proposed in previous studies, such as path-tracking methods (e.g., mask segmentation, graph segmentation, the minimum discontinuity algorithm, etc.) and minimum-norm methods (e.g., least squares, network flow, etc.).
However, the path-tracking methods are not robust to noise, so the image needs to be preprocessed in advance; the minimum-norm methods depend on the calculation path and easily introduce image distortion in noise-free areas. In addition, in Doppler optical coherence tomography, only a network flow method has so far been proposed for unwrapping plastic tube images; specifically, the phase unwrapping problem is formulated as an optimization problem under the assumption that the error between the wrapped phase map and the true phase map can take any value. However, this method involves a complex and time-consuming operation process, cannot achieve real-time unwrapping, and is not conducive to multi-module integration.
Disclosure of Invention
In order to overcome the above problems or at least partially solve the above problems, embodiments of the present invention provide a phase unwrapping method and apparatus in doppler optical coherence tomography, so as to effectively shorten the time consumption of the operation process, achieve real-time unwrapping, and be more beneficial to achieve multi-module integration.
In a first aspect, an embodiment of the present invention provides a phase unwrapping method in doppler optical coherence tomography, including:
acquiring a winding image in Doppler optical coherence tomography, and acquiring a winding number corresponding to a winding phase in the winding image by utilizing a pre-established phase unwrapping classification model based on a deep learning residual error network based on the winding image;
calculating and acquiring a real phase corresponding to the winding image based on the winding phase and the winding number corresponding to the winding phase;
the phase unwrapping classification model based on the deep learning residual error network is obtained by setting the deviation between the wrapped phase and the real phase to be an integral multiple of 2 pi in advance, constructing the classification model based on the deep learning residual error network based on the setting, converting the phase unwrapping problem in Doppler optical coherence tomography into a semantic segmentation problem, forming a training sample by taking a wrapped image sample as input and a wrapping number corresponding to the wrapped image sample as a label, and training the initialized classification model based on the deep learning residual error network by using the training sample.
Optionally, the phase unwrapping classification model based on the deep learning residual error network specifically includes a first convolution layer, a first residual module layer, a pooling layer, a full-resolution residual module layer, a second residual module layer, a second convolution layer, and a classifier layer, which are connected in sequence, and is implemented by a symmetric encoding and decoding structure; the method fuses the high-level semantic features of the wound image with the low-level semantic features used for representing position information by using the first residual module layer, the pooling layer, the full-resolution residual module layer, and the second residual module layer.
Optionally, the step of obtaining the winding number corresponding to the winding phase in the winding image by using a pre-established phase unwrapping classification model based on a deep learning residual error network specifically includes:
inputting the winding image into the first convolution layer to obtain a first convolution output, and obtaining a first residual output from the first convolution output by using the first residual module layer;
based on the first residual output, fusing the high-level semantic features and the low-level semantic features respectively through a pooling thread formed by the pooling layer and the full-resolution residual module layer and a residual thread formed by the first residual module layer and the second residual module layer to obtain a fused output;
based on the fused output, obtaining a second residual output by using the second residual module layer, and inputting the second residual output into the second convolution layer to obtain a second convolution output;
and acquiring the winding number corresponding to the winding phase by utilizing the classifier layer based on the second convolution output.
Further, before the step of acquiring the winding number corresponding to the winding phase in the winding image, the method of the embodiment of the present invention further includes:
acquiring a given amount of winding image samples, and determining the winding number corresponding to each winding image sample according to the winding phase and the real phase of each winding image sample to be used as a label of the corresponding winding image sample;
constructing a training sample set based on each winding image sample and a label corresponding to each winding image sample, initializing and constructing a classification model based on a deep learning residual error network by setting the deviation between the winding phase and the real phase under the condition of integral multiple of 2 pi, and determining a residual error loss function;
and iteratively training the initialized classification model by using the data in the training sample set until the value of the residual loss function reaches a preset standard, and outputting the trained classification model as the phase unwrapping classification model based on the deep learning residual network.
Optionally, the step of acquiring a given amount of winding image samples specifically includes: generating the given number of initial real phase images by continuously varying the mean and variance of a Gaussian function and using the varied Gaussian function; and extracting noise data from a background image of a transparent tubule image in Doppler optical coherence tomography, superimposing the noise data on the initial real phase images, and performing a modulo-2π operation on the superimposed images to obtain the winding image samples.
Optionally, the first convolution layer specifically includes a batch normalization unit, a linear rectification unit, and a 3 × 3 convolution layer unit, which are connected in sequence, and the second convolution layer specifically is a 1 × 1 convolution layer unit;
the residual module specifically comprises a first residual layer formed by a batch normalization unit, a linear rectification unit, and a 1 × 1 convolution layer unit, a second residual layer formed by a batch normalization unit, a linear rectification unit, and a 3 × 3 convolution layer unit, and a third residual layer formed by a batch normalization unit, a linear rectification unit, and a 1 × 1 convolution layer unit, which are sequentially connected;
the full-resolution residual module layer specifically comprises a pooling layer, two fourth residual layers each consisting of a batch normalization unit, a linear rectification unit, and a 3 × 3 convolution layer unit, a 1 × 1 convolution layer, and an up-pooling layer, which are sequentially connected.
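For illustration only, the building blocks just described can be sketched in PyTorch roughly as follows. This is not the patent's reference implementation: the class names, channel choices, use of max pooling, and nearest-neighbour up-pooling are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

def bn_relu_conv(in_ch, out_ch, kernel_size):
    # Batch normalization -> linear rectification (ReLU) -> convolution,
    # the ordering described for the convolution and residual layers above.
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
    )

class ResidualModule(nn.Module):
    # Three stacked BN-ReLU-conv layers (1x1, 3x3, 1x1) with a skip connection.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            bn_relu_conv(channels, channels, 1),
            bn_relu_conv(channels, channels, 3),
            bn_relu_conv(channels, channels, 1),
        )

    def forward(self, x):
        return x + self.body(x)

class FullResolutionResidualModule(nn.Module):
    # Pooling -> two BN-ReLU-3x3-conv layers -> 1x1 conv -> up-pooling, taking
    # features from both the pooling thread and the residual thread (assumed wiring).
    def __init__(self, pool_ch, res_ch):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.body = nn.Sequential(
            bn_relu_conv(pool_ch + res_ch, pool_ch, 3),
            bn_relu_conv(pool_ch, pool_ch, 3),
        )
        self.to_res = nn.Conv2d(pool_ch, res_ch, kernel_size=1)   # 1x1 conv back to residual thread
        self.up_pool = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, pool_in, res_in):
        # res_in is full resolution; pool it to match pool_in, then process jointly.
        z = self.body(torch.cat([pool_in, self.pool(res_in)], dim=1))
        return z, self.up_pool(self.to_res(z))   # pooling-thread out, residual-thread contribution

# Quick shape check with assumed sizes (16-channel residual thread, 48-channel pooling thread).
rb = ResidualModule(16)
frrb = FullResolutionResidualModule(pool_ch=48, res_ch=16)
res_feat = torch.randn(1, 16, 128, 128)
pool_feat = torch.randn(1, 48, 64, 64)
pool_out, res_delta = frrb(pool_feat, rb(res_feat))
print(pool_out.shape, res_delta.shape)   # (1, 48, 64, 64), (1, 16, 128, 128)
```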
Optionally, the step of determining the residual loss function specifically includes: determining that the residual loss function is specifically a cross-entropy loss function.
In a second aspect, an embodiment of the present invention provides a phase unwrapping device in doppler optical coherence tomography, including:
the device comprises a first processing module, a second processing module and a third processing module, wherein the first processing module is used for acquiring a winding image in Doppler optical coherence tomography, and acquiring the winding number corresponding to a winding phase in the winding image by utilizing a pre-established phase unwrapping classification model based on a deep learning residual error network based on the winding image;
the second processing module is used for calculating and acquiring a real phase corresponding to the winding image based on the winding phase and the winding number corresponding to the winding phase;
the phase unwrapping classification model based on the deep learning residual error network is obtained by setting the deviation between the wrapped phase and the real phase to be an integral multiple of 2 pi in advance, initializing and constructing the classification model based on the deep learning residual error network based on the setting, converting the phase unwrapping problem in Doppler optical coherence tomography into a semantic segmentation problem, forming a training sample by taking a wrapped image sample as input and a wrapping number corresponding to the wrapped image sample as a label, and training the initialized classification model based on the deep learning residual error network by using the training sample.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the phase unwrapping method in doppler optical coherence tomography as described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer readable storage medium, on which computer instructions are stored, and when the computer instructions are executed by a computer, the steps of the phase unwrapping method in doppler optical coherence tomography as described in the first aspect above are implemented.
According to the phase unwrapping method and device in Doppler optical coherence tomography provided by the embodiments of the invention, the deviation between the wrapped phase and the real phase in Doppler optical coherence tomography is assumed to be an integral multiple of 2π, and a classification model based on a deep learning residual error network is constructed, so that the phase unwrapping problem is converted into a semantic segmentation classification problem and phase unwrapping in Doppler optical coherence tomography is realized on this basis. This can effectively shorten the time consumption of the operation process, realize real-time unwrapping, recover the continuity of the image phase, and is more favorable for realizing multi-module integration.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a phase unwrapping method in doppler optical coherence tomography according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a residual module and a full-resolution residual module in a phase unwrapping method in doppler optical coherence tomography according to an embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating a process of acquiring a winding number corresponding to a winding phase in a phase unwrapping method in doppler optical coherence tomography according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a phase unwrapping device in Doppler optical coherence tomography according to an embodiment of the present invention;
fig. 5 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without any creative efforts belong to the protection scope of the embodiments of the present invention.
Aiming at the problems in the prior art that the operation process is complex and time-consuming and real-time unwrapping cannot be realized, the embodiments of the invention construct a classification model based on a deep learning residual error network by assuming that the deviation between the wrapped phase and the real phase in Doppler optical coherence tomography is an integral multiple of 2π, so that the phase unwrapping problem is converted into a semantic segmentation classification problem, and phase unwrapping in Doppler optical coherence tomography is realized on this basis. This can effectively shorten the time consumption of the operation process, realize real-time unwrapping, recover the continuity of the image phase, and is more conducive to multi-module integration. Embodiments of the present invention will be described and illustrated below with reference to various embodiments.
Fig. 1 is a schematic flow chart of a phase unwrapping method in doppler optical coherence tomography according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s101, acquiring a winding image in Doppler optical coherence tomography, and acquiring the winding number corresponding to a winding phase in the winding image by utilizing a pre-established phase unwrapping classification model based on a deep learning residual error network based on the winding image. The phase unwrapping classification model based on the deep learning residual error network is obtained by setting the deviation between an wrapped phase and a real phase in integral multiple of 2 pi in advance, constructing the classification model based on the deep learning residual error network based on the setting, converting the phase unwrapping problem in Doppler optical coherence tomography into a semantic segmentation problem, forming a training sample by taking a wrapped image sample as input and a wrapping number corresponding to the wrapped image sample as a label, and training the initialized classification model based on the deep learning residual error network by using the training sample.
In Doppler optical coherence tomography, an imaged winding image is first acquired, the winding image is then input into the pre-established phase unwrapping classification model based on the deep learning residual error network, and finally the winding number corresponding to the winding phase in the winding image is output through the internal operation of the classification model. The winding number is the integer by which the winding phase in the winding image and the true phase ultimately corresponding to it differ in multiples of 2π. For example, if the winding phase differs from its corresponding real phase by 4π, the winding number corresponding to that winding phase is 2.
It can be appreciated that, before the phase unwrapping classification model based on the deep learning residual error network is applied, the embodiments of the present invention need to establish the classification model in advance. That is, the deviation between the winding phase and the true phase of the winding image needs to be set in advance to be an integral multiple of 2π; that is, the true phase φ(x, y) of the winding image and its winding phase φ_w(x, y) are assumed to satisfy

φ(x, y) = φ_w(x, y) + 2πk(x, y)

where (x, y) denotes the spatial coordinate position of an image pixel, 2πk(x, y) denotes an integer multiple of 2π, and k(x, y) is an integer. The winding number corresponding to the winding phase φ_w(x, y) can therefore be expressed as

k(x, y) = [φ(x, y) − φ_w(x, y)] / (2π).
According to this assumption, an encoding and decoding residual error network classification model based on deep learning is initialized and designed, and the phase unwrapping problem is converted into a semantic segmentation classification problem. Then, a given number of winding image samples are acquired, the winding number corresponding to each winding image sample is determined, and training samples are formed by taking the winding image samples as model input and the corresponding winding numbers as labels. The initialized classification model based on the deep learning residual error network is then iteratively trained with these training samples to finally obtain the trained network model, namely the phase unwrapping classification model based on the deep learning residual error network. This classification model takes the winding image as input and outputs the winding number corresponding to the winding image, namely the integer k such that the winding phase and the real phase of the winding image differ by 2πk.
And S102, calculating and acquiring a real phase corresponding to the winding image based on the winding phase and the winding number corresponding to the winding phase.
It can be understood that, on the basis of the above assumption, the phase difference between the winding phase and the true phase of the winding image is an integer multiple of 2π, and the obtained winding number is exactly that integer. Therefore, the obtained winding number is multiplied by 2π and added to the winding phase to calculate the true phase corresponding to the winding image, thereby implementing unwrapping.
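As a minimal sketch of this recovery step, assuming the wrapped phase map and the predicted winding-number map are available as NumPy arrays:

```python
import numpy as np

def recover_true_phase(wrapped_phase: np.ndarray, winding_number: np.ndarray) -> np.ndarray:
    """Add 2*pi times the predicted winding number to the wrapped phase."""
    return wrapped_phase + 2.0 * np.pi * winding_number

# Toy check: a true phase of 4.5*pi wraps to 0.5*pi with winding number 2.
true_phase = np.array([4.5 * np.pi])
wrapped = np.mod(true_phase + np.pi, 2 * np.pi) - np.pi    # wrap into [-pi, pi)
k = np.round((true_phase - wrapped) / (2 * np.pi))          # winding number (here: 2)
assert np.allclose(recover_true_phase(wrapped, k), true_phase)
```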
According to the phase unwrapping method in Doppler optical coherence tomography provided by the embodiments of the invention, the deviation between the wrapped phase and the real phase in Doppler optical coherence tomography is assumed to be an integral multiple of 2π and a classification model based on a deep learning residual error network is constructed, so that the phase unwrapping problem is converted into a semantic segmentation classification problem and phase unwrapping in Doppler optical coherence tomography is realized on this basis. This effectively shortens the time consumption of the operation process, realizes real-time unwrapping, recovers the continuity of the image phase, and is more favorable for realizing multi-module integration.
It can be understood that the phase unwrapping classification model based on the deep learning residual network needs to be established before being applied. Therefore, on the basis of the foregoing embodiments, before the step of acquiring the winding number corresponding to the winding phase in the winding image, the method of the embodiment of the present invention further includes:
acquiring a given amount of winding image samples, and determining the winding number corresponding to each winding image sample according to the winding phase and the real phase of each winding image sample to be used as a label of the corresponding winding image sample; constructing a training sample set based on each winding image sample and a label corresponding to each winding image sample, initializing and constructing a classification model based on a deep learning residual error network by setting the deviation between a winding phase and a real phase under the condition of integral multiple of 2 pi, and determining a residual error loss function; and iteratively training the initialized classification model by using data in the training sample set until the value of the residual loss function reaches a preset standard, and outputting the trained classification model as a phase unwrapping classification model based on the deep learning residual network.
It can be understood that the embodiment of the present invention specifically realizes the establishment of the phase unwrapping classification model based on the deep learning residual error network. In order to obtain a classification model meeting a certain precision requirement, a certain number of training samples are required to train the initially established model. Therefore, the embodiment of the present invention first needs to acquire a given amount of known winding images as winding image samples, the winding phases and the true phases of which are known, so that for each winding image sample, the corresponding winding number can be calculated according to the winding phase and the true phase thereof as the sample label thereof.
Then, each winding image sample is correspondingly paired with the corresponding sample label to form a group of training samples, and all the training samples can form a training sample set containing a large number of training samples. Meanwhile, a classification model based on the deep learning residual error network can be initialized and constructed by assuming that the deviation between the winding phase and the real phase is an integral multiple of 2 pi, and the classification model takes the winding image as input and outputs the winding number corresponding to the winding image. In addition, in order to evaluate the accuracy of the classification model, a residual loss function is determined, so that the residual loss of the model is calculated after the model is trained by using each group of training samples, the completion degree of the model training is judged according to the residual loss function, and meanwhile, an updating strategy for model parameters is determined. Optionally, the step of determining the residual loss function specifically includes: the residual loss function is determined to be a cross-entropy loss function.
And then, each group of training samples in the training sample set is used to iteratively train the initialized classification model. Specifically, in the network training process, an SGD optimizer is adopted with the momentum set to 0.9 and the learning rate set to 0.0001, and a cross-entropy loss function is used to train the model. Meanwhile, to reduce overfitting, dropout layers are used in the model with the dropout ratio set to 0.25.
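A hedged sketch of that training configuration in PyTorch is shown below; the tiny stand-in network, the dummy data, and the assumed number of winding-number classes are placeholders for illustration only, not the patent's REDN model.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the classification network; only the training configuration
# (SGD, momentum 0.9, learning rate 1e-4, cross-entropy loss, dropout 0.25) is the point here.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.25),                 # dropout ratio 0.25 to reduce overfitting
    nn.Conv2d(16, 3, 1),                  # 3 = assumed number of winding-number classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()         # cross-entropy over per-pixel winding-number classes

wrapped = torch.randn(4, 1, 64, 64)                 # dummy wrapped-phase batch
labels = torch.randint(0, 3, (4, 64, 64))           # dummy per-pixel winding-number labels

for step in range(10):                    # short illustrative loop, not a full training schedule
    optimizer.zero_grad()
    logits = model(wrapped)               # (N, classes, H, W)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```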
Optionally, after the trained network model is obtained, in order to verify the feasibility of the network, the model may be tested and verified by using a test sample, so as to obtain a wrapped phase diagram, an unwrapped result diagram predicted by the network model, a real phase diagram, and a phase residual error image. By observing and comparing the verification result graphs, the similarity between the unwrapping result graph obtained by the network model prediction and the real phase graph of the wrapped image is high.
In addition, in order to further verify the effectiveness of the network, Doppler optical coherence imaging images of transparent tubules (in which milk flows) are tested, and a phase diagram of the Doppler optical coherence imaged transparent tubule and a result diagram of prediction and unwrapping with the trained model are obtained respectively. Meanwhile, the test result is described in detail with one-dimensional data: the curve corresponding to row 125 of the winding image, the curve corresponding to the result predicted by the network model, and the winding number obtained from the difference between the two curves are obtained.
The verification result shows that the unwrapping image obtained by the method of the embodiment of the invention is basically consistent with the wrapping image in the non-wrapping area, and the trend of the unwrapping image is consistent with that of the wrapping phase in the wrapping area. Therefore, the method can well solve the problem of discontinuous phase in the Doppler optical coherence tomography system and expand the application range of phase information.
Optionally, according to the above embodiments, the phase unwrapping classification model based on the deep learning residual network specifically includes a first convolution layer, a first residual module layer, a pooling layer, a full-resolution residual module layer, a second residual module layer, a second convolution layer, and a classifier layer, which are connected in sequence, and is implemented by a symmetric encoding and decoding structure; the method fuses the high-level semantic features of the wound image with the low-level semantic features used for representing position information by using the first residual module layer, the pooling layer, the full-resolution residual module layer, and the second residual module layer.
It can be understood that the phase unwrapping classification model based on the deep learning residual error network adopted in the embodiment of the present invention is specifically a deep learning based residual encoder-decoder network (REDN) model. The network architecture of the model is mainly realized through a symmetric encoding and decoding structure, mainly comprises residual blocks (RB) and full-resolution residual blocks (FRRB), and fuses the high-level semantic features of the wound image with the low-level semantic features used for representing position information through the combination of the residual blocks (RB) and the full-resolution residual blocks (FRRB).
Optionally, fig. 2 is a schematic structural diagram of the residual module and the full-resolution residual module in the phase unwrapping method in Doppler optical coherence tomography provided by the embodiment of the present invention, where (a) is the structural schematic diagram of the residual module, which specifically includes a first residual layer formed by a batch normalization unit, a linear rectification unit, and a 1 × 1 convolution layer unit, a second residual layer formed by a batch normalization unit, a linear rectification unit, and a 3 × 3 convolution layer unit, and a third residual layer formed by a batch normalization unit, a linear rectification unit, and a 1 × 1 convolution layer unit, which are sequentially connected. (b) is the structural schematic diagram of the full-resolution residual module, which specifically includes a pooling layer, two fourth residual layers each consisting of a batch normalization unit, a linear rectification unit, and a 3 × 3 convolution layer unit, a 1 × 1 convolution layer, and an up-pooling layer, which are connected in sequence. The full-resolution residual module has two inputs and two outputs: one input comes from the residual thread, and the other from the output of the preceding FRRB module in the pooling thread. The module integrates these two kinds of information, that is, it combines the high-level semantic features used for image recognition with the low-level semantic features used for representing position information.
Meanwhile, in order to adapt the numbers of input and output features, a first convolution layer is arranged at the input end of the model and a second convolution layer at the output end. Optionally, the first convolution layer specifically includes a batch normalization unit, a linear rectification unit, and a 3 × 3 convolution layer unit connected in sequence, and the second convolution layer is specifically a 1 × 1 convolution layer unit. To achieve classification, the last layer of the model is set as the classifier.
Compared with the convolutional neural network, the method and the device provided by the embodiment of the invention can be more beneficial to the optimization of the network model in the training process by adopting the residual error network.
Optionally, according to the above embodiments, the step of obtaining the winding number corresponding to the winding phase in the winding image by using the pre-established phase unwrapping classification model based on the deep learning residual error network specifically includes: inputting the winding image into the first convolution layer to obtain a first convolution output, and obtaining a first residual output from the first convolution output by using the first residual module layer; based on the first residual output, fusing the high-level semantic features and the low-level semantic features through the pooling thread formed by the pooling layer and the full-resolution residual module layer and the residual thread formed by the first residual module layer and the second residual module layer, respectively, to obtain a fused output; based on the fused output, obtaining a second residual output by using the second residual module layer, and inputting the second residual output into the second convolution layer to obtain a second convolution output; and obtaining, based on the second convolution output, the winding number corresponding to the winding phase by using the classifier layer.
It can be understood that, as shown in fig. 3, there is a schematic flow chart of acquiring the winding number corresponding to the winding phase in the phase unwrapping method in doppler optical coherence tomography provided in the embodiment of the present invention, where:
firstly, in the encoding process, a winding image firstly passes through a first convolution layer formed by a batch standardization Linear rectification Unit (ReLU) and a convolution layer, then passes through two residual modules, then passes through the pooling layer to be used as the input of an FRRB module of the pooling thread and the input of the residual thread, the output of the image after the pooling thread is processed by the FRRB module is converged with the residual thread, and the other path continues to enter the next FRRB module through the pooling layer. Among them, RELU is also called modified linear unit, which is a commonly used activation function in artificial neural network.
Symmetrical to the encoding is the decoding process, which is realized by up-pooling. The number of feature channels of the residual thread remains 16. In the pooling thread, the numbers of feature channels of the FRRB modules are 48, 96, 384, 192, 96, and 48 in sequence. The information superposition of the residual thread and the pooling thread is realized each time by a 1 × 1 convolution layer, and finally the superposition result is fused and output.
Finally, after the fused output passes through two residual modules, it passes through a 1 × 1 convolution layer to reduce the number of output features, and then through the classifier to complete one round of the training process. The parameter c of the classifier represents the number of classes in the training process.
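To illustrate only the thread-superposition step described above, the following sketch assumes a 16-channel residual thread at full resolution and a 48-channel pooling-thread feature map at half resolution; apart from the 16 and 48 channel counts stated in the text, the sizes and nearest-neighbour up-pooling are assumptions.

```python
import torch
import torch.nn as nn

residual_thread = torch.randn(1, 16, 128, 128)    # 16-channel full-resolution features
pooling_thread = torch.randn(1, 48, 64, 64)       # low-resolution pooling-thread features (48 channels assumed)

to_residual = nn.Conv2d(48, 16, kernel_size=1)    # 1x1 convolution adapts the channel count
up_pooled = nn.functional.interpolate(pooling_thread, scale_factor=2, mode="nearest")
fused = residual_thread + to_residual(up_pooled)  # information superposition of the two threads
print(fused.shape)                                # torch.Size([1, 16, 128, 128])
```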
The embodiment of the invention can well solve the problem of phase winding in the Doppler optical coherent imaging system, can show the continuity of the phase and can obtain the effect of no winding.
Optionally, according to the above embodiments, the step of acquiring a given amount of winding image samples specifically includes: generating a given number of initial real phase images by continuously varying the mean and variance of a Gaussian function and using the varied Gaussian function; and extracting noise data from a background image of the transparent tubule image in Doppler optical coherence tomography, superimposing the noise data on the initial real phase images, and performing a modulo-2π operation on the superimposed images to obtain the winding image samples.
It can be understood that, because a large amount of data is required in the deep learning network model training process, clinical medical images are limited and tens of thousands of samples are difficult to obtain. Therefore, in the network training process, the training data set is generated by Gaussian-function simulation according to the characteristics of the Doppler optical coherence tomography system. Then, the winding phase is taken as the network input and the winding number as the network label, and repeated iterative training is performed so that the loss function of the network converges and the prediction output of the network gradually approaches the label set, thereby obtaining a usable network model for phase unwrapping.
That is, when training the network, the data set is generated in a simulated manner. Specifically, a Gaussian function is adopted as the prototype, and images of different shapes are generated as the material of the training network by changing its mean and variance. Meanwhile, in order to obtain images with characteristics similar to those of the Doppler optical coherence tomography system, noise is extracted from a background image of the transparent tubule obtained by the Doppler optical coherence tomography system and added to the simulated Gaussian-function images, and the result is then taken modulo 2π to obtain the winding images. These winding images are used as the input sample set for network training, and the winding number maps obtained from the relationship between the winding phase and the real phase are used as the label set for network training.
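A minimal simulation sketch along these lines is given below; the image size, Gaussian parameter ranges, and the additive Gaussian noise (standing in for noise extracted from measured background images) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sample(size=256, noise_std=0.3):
    """Simulate one (wrapped image, winding-number label) training pair."""
    y, x = np.mgrid[0:size, 0:size]
    # Random 2-D Gaussian surface as the "true" phase, scaled so that it exceeds 2*pi and wraps.
    cx, cy = rng.uniform(0.3 * size, 0.7 * size, 2)
    sx, sy = rng.uniform(0.1 * size, 0.3 * size, 2)
    amplitude = rng.uniform(4 * np.pi, 12 * np.pi)
    true_phase = amplitude * np.exp(-((x - cx) ** 2 / (2 * sx**2) + (y - cy) ** 2 / (2 * sy**2)))
    # Additive noise standing in for the background noise extracted from real OCT images.
    noisy_phase = true_phase + rng.normal(0.0, noise_std, true_phase.shape)
    wrapped = np.mod(noisy_phase, 2 * np.pi)                                   # modulo-2*pi wrapped image
    label = np.round((noisy_phase - wrapped) / (2 * np.pi)).astype(np.int64)   # winding-number map
    return wrapped, label

wrapped, label = simulate_sample()
print(wrapped.shape, label.min(), label.max())
```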
The embodiment of the invention overcomes the problem of the limitation of clinical medical data, and applies the deep learning to the phase unwrapping problem of the Doppler optical coherent imaging system.
Based on the same inventive concept, the embodiments of the present invention provide a phase unwrapping apparatus in doppler optical coherence tomography according to the above-mentioned embodiments, and the apparatus is used for implementing phase unwrapping in doppler optical coherence tomography in the above-mentioned embodiments. Therefore, the description and definition of the phase unwrapping method in the doppler optical coherence tomography in the above embodiments can be used for understanding each execution module in the embodiments of the present invention, and reference may be made to the above embodiments specifically, and details are not repeated here.
According to an embodiment of the present invention, a structure of a phase unwrapping device in doppler optical coherence tomography is shown in fig. 4, which is a schematic structural diagram of a phase unwrapping device in doppler optical coherence tomography provided in an embodiment of the present invention, and the device can be used for implementing phase unwrapping in doppler optical coherence tomography in the above-mentioned method embodiments, and the device includes: a first processing module 401 and a second processing module 402. Wherein:
the first processing module 401 is configured to obtain a wrapped image in doppler optical coherence tomography, and obtain, based on the wrapped image, a wrapping number corresponding to a wrapping phase in the wrapped image by using a pre-established phase unwrapping classification model based on a deep learning residual error network; the second processing module 402 is configured to calculate and obtain a true phase corresponding to the winding image based on the winding phase and the winding number corresponding to the winding phase; the phase unwrapping classification model based on the deep learning residual error network is obtained by setting the deviation between an wrapped phase and a real phase in integral multiple of 2 pi in advance, initializing and constructing the classification model based on the deep learning residual error network based on the setting, converting the phase unwrapping problem in Doppler optical coherence tomography into a semantic segmentation problem, forming a training sample by taking a wrapped image sample as input and a wrapping number corresponding to the wrapped image sample as a label, and training the initialized classification model based on the deep learning residual error network by using the training sample.
Specifically, in the doppler optical coherence tomography, the first processing module 401 first obtains an imaged winding image, inputs the winding image into a pre-established phase unwrapping classification model based on a deep learning residual error network, and finally outputs a winding number corresponding to a winding phase in the winding image through an internal operation process of the classification model.
It is understood that the apparatus of the embodiment of the present invention further includes a model building module to build the classification model in advance before the phase unwrapping classification model based on the deep learning residual error network is applied. That is, the model building module needs to set, in advance, the deviation between the winding phase and the true phase of the winding image to be an integral multiple of 2π, that is, to assume that the true phase φ(x, y) of the winding image and its winding phase φ_w(x, y) differ by an integral multiple of 2π. According to this assumption, an encoding and decoding residual error network classification model based on deep learning is initialized and designed, and the phase unwrapping problem is converted into a semantic segmentation classification problem. The model building module then acquires a given number of winding image samples, determines the winding number corresponding to each winding image sample, and forms training samples by taking the winding image samples as model input and the corresponding winding numbers as labels. Using these training samples, the model building module iteratively trains the initialized classification model based on the deep learning residual error network to finally obtain the trained network model, namely the phase unwrapping classification model based on the deep learning residual error network. This classification model takes the winding image as input and outputs the winding number corresponding to the winding image, namely the integer k such that the winding phase and the real phase of the winding image differ by 2πk.
Then, on the basis of the above assumption, the phase difference between the winding phase of the winding image and the true phase is an integer multiple of 2π, and the obtained winding number is exactly that integer. Therefore, the second processing module 402 multiplies the obtained winding number by 2π and adds it to the winding phase to calculate the true phase corresponding to the winding image, thereby implementing unwrapping.
According to the phase unwrapping device in Doppler optical coherence tomography provided by the embodiments of the invention, by arranging the corresponding execution modules, assuming that the deviation between the wrapped phase and the real phase in Doppler optical coherence tomography is an integral multiple of 2π, and constructing a classification model based on a deep learning residual error network, the phase unwrapping problem is converted into a semantic segmentation classification problem and phase unwrapping in Doppler optical coherence tomography is realized on this basis. This can effectively shorten the time consumption of the operation process, realize real-time unwrapping, recover the continuity of the image phase, and is more favorable for realizing multi-module integration.
It is understood that, in the embodiment of the present invention, each relevant program module in the apparatus of each of the above embodiments may be implemented by a hardware processor (hardware processor). Moreover, the phase unwrapping device in the doppler optical coherence tomography in the embodiments of the present invention can implement the phase unwrapping procedure in the doppler optical coherence tomography in the embodiments of the methods described above by using the program modules described above, and when the phase unwrapping device is used to implement the phase unwrapping in the doppler optical coherence tomography in the embodiments of the methods described above, the beneficial effects produced by the device in the embodiments of the present invention are the same as those in the corresponding embodiments of the methods described above, and reference may be made to the embodiments of the methods described above, and no further description is given here.
As a further aspect of the embodiments of the present invention, the present embodiment provides an electronic device according to the above embodiments, the electronic device includes a memory, a processor and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the steps of the phase unwrapping method in doppler optical coherence tomography as described in the above embodiments.
Further, the electronic device of the embodiment of the present invention may further include a communication interface and a bus. Referring to fig. 5, an entity structure diagram of an electronic device provided in an embodiment of the present invention includes: at least one memory 501, at least one processor 502, a communication interface 503, and a bus 504.
The memory 501, the processor 502 and the communication interface 503 are used for completing mutual communication through the bus 504, and the communication interface 503 is used for information transmission between the electronic device and the image winding device; the memory 501 stores a computer program operable on the processor 502, and the processor 502 executes the computer program to implement the steps of the phase unwrapping method in doppler optical coherence tomography as described in the above embodiments.
It is understood that the electronic device at least comprises a memory 501, a processor 502, a communication interface 503 and a bus 504, and the memory 501, the processor 502 and the communication interface 503 are connected in communication with each other through the bus 504, and can complete communication with each other, for example, the processor 502 reads program instructions of a phase unwrapping method in doppler optical coherence tomography from the memory 501. In addition, the communication interface 503 can also implement communication connection between the electronic device and the wrapped image device, and can complete mutual information transmission, such as reading the wrapped image in doppler optical coherence tomography, etc. by the communication interface 503.
When the electronic device is running, the processor 502 calls the program instructions in the memory 501 to perform the methods provided by the above-described method embodiments, including for example: acquiring a winding image in Doppler optical coherence tomography, and acquiring the winding number corresponding to a winding phase in the winding image by utilizing a pre-established phase unwrapping classification model based on a deep learning residual error network based on the winding image; and calculating and acquiring a real phase corresponding to the winding image and the like based on the winding phase and the winding number corresponding to the winding phase.
The program instructions in the memory 501 may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Alternatively, all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, where the program may be stored in a computer-readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention further provide a non-transitory computer readable storage medium according to the above embodiments, on which computer instructions are stored, and when the computer instructions are executed by a computer, the steps of the phase unwrapping method in doppler optical coherence tomography according to the above embodiments are implemented, for example, the steps include: acquiring a winding image in Doppler optical coherence tomography, and acquiring the winding number corresponding to a winding phase in the winding image by utilizing a pre-established phase unwrapping classification model based on a deep learning residual error network based on the winding image; and calculating and acquiring a real phase corresponding to the winding image and the like based on the winding phase and the winding number corresponding to the winding phase.
According to the electronic device and the non-transitory computer readable storage medium provided by the embodiments of the present invention, by executing the steps of the phase unwrapping method in the doppler optical coherence tomography described in each of the above embodiments, assuming that the deviation between the wrapped phase and the true phase in the doppler optical coherence tomography is an integral multiple of 2 pi, a classification model based on a deep learning residual error network is constructed, so as to convert the phase unwrapping problem into a classification problem of semantic segmentation, and implement the phase unwrapping in the doppler optical coherence tomography on the basis, which can effectively shorten the time consumption of the operation process, implement real-time unwrapping, recover the continuity of the image phase, and is more beneficial to implement multi-module integration.
It is to be understood that the above-described embodiments of the apparatus, the electronic device and the storage medium are merely illustrative, and that elements described as separate components may or may not be physically separate, may be located in one place, or may be distributed on different network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the technical solutions mentioned above may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a usb disk, a removable hard disk, a ROM, a RAM, a magnetic or optical disk, etc., and includes several instructions for causing a computer device (such as a personal computer, a server, or a network device, etc.) to execute the methods described in the method embodiments or some parts of the method embodiments.
In addition, it should be understood by those skilled in the art that in the specification of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the embodiments of the invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of phase unwrapping in Doppler optical coherence tomography, comprising:
acquiring a wrapped image in Doppler optical coherence tomography, and, based on the wrapped image, acquiring the wrapping number corresponding to the wrapped phase in the wrapped image by using a pre-established phase unwrapping classification model based on a deep learning residual network;
calculating the true phase corresponding to the wrapped image based on the wrapped phase and the wrapping number corresponding to the wrapped phase;
wherein the phase unwrapping classification model based on the deep learning residual network is obtained by setting, in advance, the deviation between the wrapped phase and the true phase to be an integer multiple of 2π, initializing and constructing a classification model based on a deep learning residual network under this setting so as to convert the phase unwrapping problem in Doppler optical coherence tomography into a semantic segmentation problem, forming training samples by taking wrapped image samples as input and the wrapping numbers corresponding to the wrapped image samples as labels, and training the initialized classification model based on the deep learning residual network with the training samples.
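By way of illustration and not limitation, claim 1 amounts to a per-pixel relation between the wrapped phase, the predicted wrapping number k and the true phase, namely true = wrapped + 2πk. A minimal NumPy sketch of this final reconstruction step is given below; the function and variable names are hypothetical, and the wrapped-phase convention ([0, 2π) rather than (-π, π]) is an assumption, not something fixed by the claim.

import numpy as np

def unwrap_from_counts(wrapped_phase: np.ndarray, wrap_counts: np.ndarray) -> np.ndarray:
    # wrapped_phase: wrapped phase map, assumed here to lie in [0, 2*pi)
    # wrap_counts:   integer wrapping number k predicted per pixel by the classification model
    # returns the true phase according to true = wrapped + 2*pi*k
    return wrapped_phase + 2.0 * np.pi * wrap_counts

Because the wrapping number is predicted as a class label rather than integrated from spatial phase gradients, this reconstruction is a purely element-wise operation and does not propagate errors along an unwrapping path.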
2. The phase unwrapping method in Doppler optical coherence tomography according to claim 1, wherein the phase unwrapping classification model based on the deep learning residual network specifically comprises a first convolutional layer, a first residual module layer, a pooling layer, a full-resolution residual module layer, a second residual module layer, a second convolutional layer, and a classifier layer connected in sequence, implemented as a symmetric encoder-decoder structure;
the method fuses the high-level semantic features of the wrapped image with the low-level semantic features representing position information by using the first residual module layer, the pooling layer, the full-resolution residual module layer, and the second residual module layer.
3. The phase unwrapping method in Doppler optical coherence tomography according to claim 2, wherein the step of acquiring the wrapping number corresponding to the wrapped phase in the wrapped image by using the pre-established phase unwrapping classification model based on the deep learning residual network specifically comprises:
passing the wrapped image through the first convolutional layer to obtain a first convolution output, and obtaining a first residual output from the first convolution output by using the first residual module layer;
based on the first residual output, fusing the high-level semantic features and the low-level semantic features through a pooling stream formed by the pooling layer and the full-resolution residual module layer and a residual stream formed by the first residual module layer and the second residual module layer, respectively, to obtain a fused output;
obtaining a second residual output from the fused output by using the second residual module layer, and obtaining a second convolution output by using the second convolutional layer;
and obtaining the wrapping number corresponding to the wrapped phase from the second convolution output by using the classifier layer.
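By way of illustration and not limitation, the layer sequence and the two-stream feature fusion described in claims 2 and 3 can be sketched roughly as follows in PyTorch. The channel width, pooling factor, number of wrap-count classes and the use of bilinear interpolation for upsampling are assumptions made only for this sketch, and the residual and full-resolution residual modules detailed in claim 6 are reduced here to single BN-ReLU-conv blocks.

import torch
import torch.nn as nn
import torch.nn.functional as F

def bn_relu_conv(in_ch, out_ch, k):
    # batch normalization -> linear rectification -> k x k convolution (pre-activation order)
    return nn.Sequential(nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
                         nn.Conv2d(in_ch, out_ch, k, padding=k // 2))

class PhaseUnwrapNet(nn.Module):
    # Rough sketch of the encoder-decoder of claims 2-3; module and class names are hypothetical.
    def __init__(self, ch=64, num_classes=7):
        super().__init__()
        self.first_conv = bn_relu_conv(1, ch, 3)          # first convolutional layer
        self.first_res = bn_relu_conv(ch, ch, 3)          # stand-in for the first residual module layer
        self.pool = nn.MaxPool2d(2)                       # pooling layer
        self.frr = bn_relu_conv(ch, ch, 3)                # stand-in for the full-resolution residual module layer
        self.second_res = bn_relu_conv(ch, ch, 3)         # stand-in for the second residual module layer
        self.second_conv = nn.Conv2d(ch, num_classes, 1)  # second convolutional layer (1 x 1)

    def forward(self, wrapped):                           # wrapped: (N, 1, H, W) wrapped-phase image
        x = self.first_conv(wrapped)
        res = self.first_res(x)                           # residual stream: low-level, position-preserving features
        pooled = self.frr(self.pool(res))                 # pooling stream: high-level semantic features
        up = F.interpolate(pooled, size=res.shape[-2:], mode="bilinear", align_corners=False)
        fused = res + up                                  # fuse the two streams
        return self.second_conv(self.second_res(fused))   # per-pixel scores over wrapping numbers

The per-pixel wrapping number would then be read off from the classifier output, for example as logits.argmax(dim=1).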
4. The phase unwrapping method in Doppler optical coherence tomography according to any one of claims 1 to 3, further comprising, before the step of acquiring the wrapping number corresponding to the wrapped phase in the wrapped image:
acquiring a given number of wrapped image samples, and determining the wrapping number corresponding to each wrapped image sample from the wrapped phase and the true phase of that sample, to serve as the label of the corresponding wrapped image sample;
constructing a training sample set from the wrapped image samples and their corresponding labels, initializing and constructing a classification model based on a deep learning residual network under the setting that the deviation between the wrapped phase and the true phase is an integer multiple of 2π, and determining a residual loss function;
and iteratively training the initialized classification model with the data in the training sample set until the value of the residual loss function reaches a preset criterion, and outputting the trained classification model as the phase unwrapping classification model based on the deep learning residual network.
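By way of illustration and not limitation, the training procedure of claims 4 and 7 reduces to ordinary supervised training of a per-pixel classifier with a cross-entropy loss. The sketch below assumes a PyTorch DataLoader yielding (wrapped image, wrap-count label map) pairs; the optimizer, learning rate and fixed epoch count are assumptions, since the claims only require iterating until the loss meets a preset criterion.

import torch
import torch.nn as nn

def train_model(model, loader, epochs=50, lr=1e-3, device="cpu"):
    criterion = nn.CrossEntropyLoss()                 # cross-entropy residual loss of claims 4 and 7
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):                           # stand-in for "until the loss reaches a preset criterion"
        for wrapped, labels in loader:
            wrapped = wrapped.to(device)              # (N, 1, H, W) wrapped image samples
            labels = labels.to(device).long()         # (N, H, W) integer wrapping numbers used as labels
            logits = model(wrapped)                   # (N, num_classes, H, W) class scores
            loss = criterion(logits, labels)          # per-pixel cross-entropy
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model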
5. The method according to claim 4, wherein the step of acquiring a given number of wrapped image samples comprises:
generating the given number of initial true-phase images by continuously varying the mean and variance of a Gaussian function and using the varied Gaussian function;
and extracting noise data from the background of a transparent tubule image in Doppler optical coherence tomography, superposing the noise data on the initial true-phase images, and taking the superposed images modulo 2π to obtain the wrapped image samples.
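By way of illustration and not limitation, one wrapped training sample in the spirit of claim 5 can be generated as follows. The ranges chosen for the Gaussian centre, spread and peak phase are assumptions for this sketch only, and the noise array stands in for data extracted from a real background region.

import numpy as np

def make_wrapped_sample(shape=(256, 256), noise=None, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    # vary the Gaussian's centre (mean) and spread (variance) from sample to sample
    cx, cy = rng.uniform(0.3 * w, 0.7 * w), rng.uniform(0.3 * h, 0.7 * h)
    sigma = rng.uniform(0.1, 0.3) * min(h, w)
    amplitude = rng.uniform(4, 12) * np.pi                              # peak true phase spanning several wraps
    true_phase = amplitude * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    if noise is not None:                                               # noise taken from a background region
        true_phase = true_phase + noise
    wrapped = np.mod(true_phase, 2 * np.pi)                             # modulo operation with 2*pi
    wrap_counts = np.floor(true_phase / (2 * np.pi)).astype(np.int64)   # per-pixel wrapping-number label
    return wrapped, wrap_counts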
6. The phase unwrapping method in Doppler optical coherence tomography according to claim 2 or 3, wherein the first convolutional layer comprises a batch normalization unit, a linear rectification unit and a 3 × 3 convolution unit connected in sequence, and the second convolutional layer is a 1 × 1 convolution unit;
the residual module layer specifically comprises, connected in sequence, a first residual layer formed by a batch normalization unit, a linear rectification unit and a 1 × 1 convolution unit, a second residual layer formed by a batch normalization unit, a linear rectification unit and a 3 × 3 convolution unit, and a third residual layer formed by a batch normalization unit, a linear rectification unit and a 1 × 1 convolution unit;
the full-resolution residual module layer specifically comprises, connected in sequence, a pooling layer, two fourth residual layers each consisting of a batch normalization unit, a linear rectification unit and a 3 × 3 convolution unit, a 1 × 1 convolution layer, and an unpooling layer.
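By way of illustration and not limitation, the bottleneck residual module listed in claim 6 can be sketched as below; the bn_relu_conv helper is repeated so that the snippet stands alone. The bottleneck width and the identity skip connection are assumptions, since the claim names only the three BN-ReLU-conv residual layers.

import torch
import torch.nn as nn

def bn_relu_conv(in_ch, out_ch, k):
    # batch normalization unit -> linear rectification unit -> k x k convolution unit
    return nn.Sequential(nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
                         nn.Conv2d(in_ch, out_ch, k, padding=k // 2))

class ResidualModule(nn.Module):
    def __init__(self, ch, mid=None):
        super().__init__()
        mid = mid or max(ch // 2, 1)                  # assumed bottleneck width
        self.body = nn.Sequential(
            bn_relu_conv(ch, mid, 1),                 # first residual layer (1 x 1)
            bn_relu_conv(mid, mid, 3),                # second residual layer (3 x 3)
            bn_relu_conv(mid, ch, 1))                 # third residual layer (1 x 1)

    def forward(self, x):
        return x + self.body(x)                       # assumed identity skip connection

The full-resolution residual module layer of claim 6 would follow the same BN-ReLU-conv pattern, with its two 3 × 3 residual layers placed between the pooling and unpooling operations.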
7. The method according to claim 4, wherein the step of determining the residual loss function comprises: determining the residual loss function to be a cross-entropy loss function.
8. A phase unwrapping device in Doppler optical coherence tomography, comprising:
a first processing module, configured to acquire a wrapped image in Doppler optical coherence tomography and, based on the wrapped image, to acquire the wrapping number corresponding to the wrapped phase in the wrapped image by using a pre-established phase unwrapping classification model based on a deep learning residual network;
a second processing module, configured to calculate the true phase corresponding to the wrapped image based on the wrapped phase and the wrapping number corresponding to the wrapped phase;
wherein the phase unwrapping classification model based on the deep learning residual network is obtained by setting, in advance, the deviation between the wrapped phase and the true phase to be an integer multiple of 2π, initializing and constructing a classification model based on a deep learning residual network under this setting so as to convert the phase unwrapping problem in Doppler optical coherence tomography into a semantic segmentation problem, forming training samples by taking wrapped image samples as input and the wrapping numbers corresponding to the wrapped image samples as labels, and training the initialized classification model based on the deep learning residual network with the training samples.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the phase unwrapping method in Doppler optical coherence tomography according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a computer, implement the steps of the phase unwrapping method in Doppler optical coherence tomography according to any one of claims 1 to 7.
CN201911050545.2A 2019-10-31 2019-10-31 Phase unwrapping method and device in Doppler optical coherence tomography Active CN112748089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911050545.2A CN112748089B (en) 2019-10-31 2019-10-31 Phase unwrapping method and device in Doppler optical coherence tomography

Publications (2)

Publication Number Publication Date
CN112748089A true CN112748089A (en) 2021-05-04
CN112748089B CN112748089B (en) 2022-02-01

Family

ID=75641226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911050545.2A Active CN112748089B (en) 2019-10-31 2019-10-31 Phase unwrapping method and device in Doppler optical coherence tomography

Country Status (1)

Country Link
CN (1) CN112748089B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238227A (en) * 2021-05-10 2021-08-10 电子科技大学 Improved least square phase unwrapping method and system combined with deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100150467A1 (en) * 2008-07-21 2010-06-17 Mingtao Zhao Methods, systems, and computer readable media for synthetic wavelength-based phase unwrapping in optical coherence tomography and spectral domain phase microscopy
CN106846426A (en) * 2016-09-06 2017-06-13 北京理工大学 A kind of method of phase unwrapping in optical coherence tomography system
CN109297595A (en) * 2018-09-04 2019-02-01 东北大学秦皇岛分校 A kind of optical coherence tomography phase unwrapping around method and device
CN109886880A (en) * 2019-01-03 2019-06-14 杭州电子科技大学 A kind of optical imagery phase unwrapping winding method based on U-Net segmentation network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHAOHUA PI ET AL.: "Automated phase unwrapping in Doppler optical coherence tomography", Journal of Biomedical Optics *
SHAOYAN XIA ET AL.: "Robust phase unwrapping for phase images in Fourier domain Doppler optical coherence tomography", Journal of Biomedical Optics *
YIMIN WANG ET AL.: "Two-dimensional phase unwrapping in Doppler Fourier domain optical coherence tomography", Optics Express *

Also Published As

Publication number Publication date
CN112748089B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN112734634B (en) Face changing method and device, electronic equipment and storage medium
CN113850916A (en) Model training and point cloud missing completion method, device, equipment and medium
CN111476719B (en) Image processing method, device, computer equipment and storage medium
CN109325430B (en) Real-time behavior identification method and system
CN109633502B (en) Magnetic resonance rapid parameter imaging method and device
US20210150347A1 (en) Guided training of machine learning models with convolution layer feature data fusion
CN108665055B (en) Method and device for generating graphic description
US11727584B2 (en) Shape supplementation device, shape supplementation learning device, method, and program
US11836572B2 (en) Quantum inspired convolutional kernels for convolutional neural networks
CN111860528B (en) Image segmentation model based on improved U-Net network and training method
CN109741335A Method and device for segmenting the vessel wall and blood flow area in blood vessel OCT images
CN112132770A (en) Image restoration method and device, computer readable medium and electronic equipment
CN112748089B (en) Phase unwrapping method and device in Doppler optical coherence tomography
CN111539349A (en) Training method and device of gesture recognition model, gesture recognition method and device thereof
CN110599444B (en) Device, system and non-transitory readable storage medium for predicting fractional flow reserve of a vessel tree
CN114240779A (en) Point cloud denoising method, device, equipment and storage medium
CN116882469B (en) Impulse neural network deployment method, device and equipment for emotion recognition
CN115439179A (en) Method for training fitting model, virtual fitting method and related device
CN116342385A (en) Training method and device for text image super-resolution network and storage medium
CN111783936B (en) Convolutional neural network construction method, device, equipment and medium
CN114708353A (en) Image reconstruction method and device, electronic equipment and storage medium
CN113989283A (en) 3D human body posture estimation method and device, electronic equipment and storage medium
CN117635418B (en) Training method for generating countermeasure network, bidirectional image style conversion method and device
CN113487622B (en) Head-neck organ image segmentation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant