CN117876279B - Method and system for removing motion artifact based on scanned light field sequence image - Google Patents

Method and system for removing motion artifact based on scanned light field sequence image

Info

Publication number
CN117876279B
CN117876279B
Authority
CN
China
Prior art keywords
image
sub
sequence
light field
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410270167.3A
Other languages
Chinese (zh)
Other versions
CN117876279A
Inventor
卢志
金满昌
李琦
杨懿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Hehu Technology Co ltd
Original Assignee
Zhejiang Hehu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Hehu Technology Co ltd filed Critical Zhejiang Hehu Technology Co ltd
Priority to CN202410270167.3A priority Critical patent/CN117876279B/en
Publication of CN117876279A publication Critical patent/CN117876279A/en
Application granted granted Critical
Publication of CN117876279B publication Critical patent/CN117876279B/en


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method and a system for removing motion artifacts from scanned light field sequence images, belonging to the technical field of image processing. The method comprises: rearranging the acquired scanning light field microscopy data to obtain a multi-view rearranged image containing artifacts; splitting the multi-view rearranged image into a time-sequence sub-image sequence and normalizing it to obtain a normalized scanning light field sub-image sequence; sequentially performing noise removal, inter-frame consistency alignment, bidirectional spatio-temporal feature extraction, and spatial up-sampling on the normalized scanning light field sub-image sequence with a convolutional neural network to obtain a sub-image sequence free of motion artifacts; performing inverse normalization on the obtained artifact-free sub-image sequence to restore the intensity distribution of the original scanning light field microscopy data; and performing motion compensation on the inverse-normalized sub-image sequence according to the scanning pattern. The method and the system improve the efficiency and robustness of motion artifact removal.

Description

Method and system for removing motion artifact based on scanned light field sequence image
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for removing motion artifacts based on a scanned light field sequence image.
Background
Two-dimensional imaging is widely used in astronomical observation, natural environment photography, and biomedical imaging as a main method and technology in the field of optical imaging. However, conventional two-dimensional imaging is limited by the camera's pixel resolution and by aberrations in the optical system, so its resolution cannot be further improved.
In recent years, a novel imaging technology based on multi-angle image acquisition, meta-imaging, has attracted great attention in the field of optical research. Meta-imaging captures not only spatial information but also angular information, so it actually contains four-dimensional information.
The light field imaging technology belongs to the meta imaging technology, and can record four-dimensional light field information of a system and reconstruct the three-dimensional morphology of a target shot object according to the information.
The scanning light field system builds on light field imaging by adding high-frequency, sub-pixel-level scanning along a preset trajectory and synthesizing the multiple scanned images into one high-resolution image. However, because of the scanning scheme, when the system photographs a dynamic object, the object's motion causes the relative displacement between the images of successive scan points to exceed the scanning sub-pixel interval; motion artifacts then appear in the high-resolution scanned light field image and affect the subsequent three-dimensional reconstruction process.
The existing scanning light field shooting dynamic scene scheme is as follows: firstly, transforming an original light field image to obtain a three-dimensional image stack; then, the images in the stack are processed independently and combined to obtain an image pair; registering the image pairs, and calculating to obtain a coordinate transformation relation of the image pairs; and carrying out scattered point interpolation according to the obtained coordinate transformation relation to obtain a high-resolution artifact-free scanning light field single-view image.
The existing scheme eliminates the motion artifacts of a scanned image sequence by traditional manual registration and scattered-point interpolation. This approach requires a carefully designed processing pipeline, and aligning the image pairs is computationally expensive. During image-pair registration, the accuracy of the computed coordinate transformation directly affects the quality of the single-view image produced by the final scattered-point interpolation; any error or inaccurate matching can distort or degrade the final image. The scheme therefore has low fault tolerance and poor robustness.
Disclosure of Invention
In view of the above, the invention provides a method and a system for removing motion artifacts based on images of a scanned light field sequence.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
In one aspect, the invention discloses a method for removing motion artifacts based on scanned light field sequence images, which comprises the following steps:
S1, acquiring microscopic data of a scanning light field;
S2, rearranging the scanning light field microscopic data to obtain a multi-view rearranged image containing artifacts;
S3, splitting the multi-view rearranged image into a time-sequence sub-image sequence;
S4, carrying out normalization processing on the obtained time sequence sub-image sequence to obtain a normalized scanning light field sub-image sequence;
S5, sequentially performing noise removal, inter-frame consistency alignment, bidirectional space-time extraction and spatial up-sampling operation on the normalized scanning light field sub-image sequence by using a convolutional neural network to obtain a sub-image sequence without motion artifacts;
S6, performing inverse normalization processing on the obtained sub-image sequence without the motion artifact to obtain a sub-image sequence of original scanning light field microscopic data intensity distribution;
S7, performing motion compensation on the inverse-normalized sub-image sequence according to the scanning pattern to eliminate scanning jitter.
Further, the step S1 specifically includes the following steps:
Light field microscopy data with 4-dimensional information are acquired using a scanning light field microscopy system.
Further, in step S2, the multi-view rearranged image containing artifacts has shape v × H × W;
where v is the number of views, H is the pixel height of the rearranged image, and W is the pixel width of the rearranged image.
Further, the step S3 specifically includes the following steps:
Scanning the multi-view rearranged image containing artifacts and decoupling it, according to the scanning pattern, into t sub-images of shape v_i × h × w, where v_i denotes the i-th view in the multi-view rearranged image and h and w are the pixel height and width of each sub-image;
stacking the scan point images of the different scan points along the time sequence to obtain a time-sequence sub-image sequence of shape v × t̂ × h × w, where t̂ is the number of scan points in one scan period; t̂ is numerically equal to the number t of decoupled sub-images.
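The decoupling of a rearranged image into a time-sequence sub-image stack can be sketched in numpy for a single view. This is an illustrative sketch only, assuming a simple s × s raster scan in which scan point (i, j) contributes every s-th pixel of the rearranged image; the function name `split_to_subimages` is hypothetical and the actual scan trajectory follows the system's scanning pattern.

```python
import numpy as np

def split_to_subimages(rearranged, s):
    """Decouple an H x W rearranged image into t = s*s sub-images of shape
    (H//s) x (W//s), assuming an s x s raster scan: scan point (i, j)
    owns pixels rearranged[i::s, j::s]. Returns a (t, h, w) stack in scan order."""
    subs = [rearranged[i::s, j::s] for i in range(s) for j in range(s)]
    return np.stack(subs)  # time-sequence sub-image stack

# Toy example: a 4x4 "rearranged" image with a 2x2 scan (t = 4 sub-images of 2x2)
img = np.arange(16, dtype=float).reshape(4, 4)
seq = split_to_subimages(img, 2)
print(seq.shape)  # (4, 2, 2)
```

Stacking the reverse mapping (interleaving the t sub-images back onto the H × W grid) would invert this operation for a static scene.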
Further, the step S4 specifically includes the following steps:
Taking the global energy maximum Vmax of the time sequence sub-image sequence as a normalization factor, dividing the energy values of all pixel positions in the time sequence sub-image sequence by Vmax, and normalizing the energy distribution of the sub-image sequence to be between 0 and 1.
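The normalization of S4 and its inverse in S6 amount to scaling by the global maximum. A minimal sketch (function names `normalize`/`denormalize` are illustrative, not from the disclosure):

```python
import numpy as np

def normalize(seq):
    """Divide every pixel by the global energy maximum Vmax, mapping the
    energy distribution of the sub-image sequence into [0, 1]."""
    vmax = seq.max()  # Vmax: the maximum pixel value over the whole sequence
    return seq / vmax, vmax

def denormalize(seq, vmax):
    """Inverse normalization: restore the original intensity distribution."""
    return seq * vmax

seq = np.array([[0.0, 50.0], [100.0, 25.0]])
norm, vmax = normalize(seq)
restored = denormalize(norm, vmax)
```

Keeping Vmax from the forward pass is what allows S6 to restore the original scanning light field data's intensity distribution exactly.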
Further, the step S5 specifically includes the following steps:
S51, removing significant noise of each scanning point position image in the normalized scanning light field sub-image sequence by using an image noise cleaning module;
S52, dividing the noise-cleaned scanning light field sub-image sequence into two sub-sequences lrs1 and lrs2 along the time sequence; where lrs1 comprises all frames of the scanned light field sub-sequence except the last scan point image, and lrs2 comprises all frames except the first scan point image;
S53, inputting the two sub-sequences lrs1 and lrs2 into an optical flow estimation module and computing the forward optical flow from lrs1 to lrs2 and the backward optical flow from lrs2 to lrs1, respectively;
S54, guiding and correcting each noise-cleaned scan point image according to the forward and backward optical flows, respectively, to obtain corrected forward and corrected backward images;
S55, connecting the corrected forward and corrected backward images along the time sequence and sending them respectively into a residual convolution module for feature fusion to obtain forward propagation feature information and backward propagation feature information;
S56, jointly inputting the forward and backward propagation feature information into a spatial up-sampling module for fusion to obtain a high-resolution sub-image sequence free of motion artifacts.
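The sub-sequence split feeding the two optical-flow branches can be sketched as follows; a minimal numpy illustration, where the function name `make_flow_pairs` is hypothetical.

```python
import numpy as np

def make_flow_pairs(seq):
    """Split a (t, h, w) sequence into lrs1 (all frames but the last) and
    lrs2 (all frames but the first). Pairing frame i of lrs1 with frame i
    of lrs2 yields the inputs for forward (lrs1 -> lrs2) and backward
    (lrs2 -> lrs1) optical flow estimation."""
    lrs1 = seq[:-1]  # frames 0 .. t-2
    lrs2 = seq[1:]   # frames 1 .. t-1
    return lrs1, lrs2

seq = np.arange(5 * 2 * 2, dtype=float).reshape(5, 2, 2)
lrs1, lrs2 = make_flow_pairs(seq)
```

Each pair (lrs1[i], lrs2[i]) is a consecutive frame pair, so the two branches see the same frame pairs in opposite directions.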
Further, in step S6, the inverse normalization specifically includes:
multiplying the energy value at every pixel position in the artifact-free time-sequence sub-image sequence by the global energy maximum Vmax of the original time-sequence sub-image sequence.
Further, step S7 specifically includes:
calculating the coordinate offset of each original scan point of the scanning light field microscopy system relative to the central scan point, and spatially subtracting H/h times that offset from each scan point sub-image in the inverse-normalized sub-image sequence, where H denotes the pixel height of the rearranged image and h denotes the pixel height of the sub-image.
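The offset-subtraction step can be sketched as below. This is a simplified illustration, not the disclosed implementation: integer shifts via `np.roll` stand in for the sub-pixel shifts a real system would apply by interpolation, and the function name `compensate` is hypothetical.

```python
import numpy as np

def compensate(sub_seq, offsets, scale):
    """Shift each sub-image opposite to its scan offset (dy, dx), scaled
    by H/h (`scale`), to erase the network-magnified scan jitter.
    np.roll with rounded integer shifts is used purely for illustration."""
    out = np.empty_like(sub_seq)
    for k, (dy, dx) in enumerate(offsets):
        sy, sx = int(round(dy * scale)), int(round(dx * scale))
        out[k] = np.roll(sub_seq[k], shift=(-sy, -sx), axis=(0, 1))
    return out

# Two 3x3 frames; frame 1 was scanned one pixel to the right of the center point
frames = np.stack([np.arange(9.0).reshape(3, 3)] * 2)
comp = compensate(frames, offsets=[(0, 0), (0, 1)], scale=1.0)
```

After compensation, all frames share the central scan point's coordinate frame, so no jitter remains when the sequence is viewed in time order.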
In another aspect, the invention discloses a motion artifact removal system based on a scanned light field sequence image, comprising:
Data acquisition module: used for acquiring scanning light field microscopy data;
Data rearrangement preprocessing module: used for rearranging the scanning light field microscopy data to obtain a multi-view rearranged image containing artifacts, and for splitting the multi-view rearranged image into a time-sequence sub-image sequence and normalizing it to obtain a normalized scanning light field sub-image sequence;
Motion artifact removal module: used for sequentially performing noise removal, inter-frame consistency alignment, bidirectional spatio-temporal feature extraction, and spatial up-sampling on the normalized scanning light field sub-image sequence with a convolutional neural network to obtain a sub-image sequence free of motion artifacts;
Inverse normalization processing module: used for performing inverse normalization on the obtained artifact-free sub-image sequence to restore the intensity distribution of the original scanning light field microscopy data;
Motion compensation module: used for performing motion compensation on the inverse-normalized sub-image sequence according to the scanning pattern to eliminate scanning jitter.
Preferably, the motion artifact removal module includes:
Noise removal module: used for removing significant noise from each scan point image in the normalized scanning light field sub-image sequence;
Inter-frame consistency alignment module: used for dividing the noise-cleaned scanning light field sub-image sequence into two sub-sequences lrs1 and lrs2 along the time sequence and computing, with an optical flow estimation module, the forward optical flow from lrs1 to lrs2 and the backward optical flow from lrs2 to lrs1, where lrs1 comprises all frames of the sequence except the last scan point image and lrs2 comprises all frames except the first scan point image;
Bidirectional feature extraction module: used for guiding and correcting each noise-cleaned scan point image according to the forward and backward optical flows, respectively, to obtain corrected forward and corrected backward images, and for connecting the corrected forward and backward images along the time sequence and sending them respectively into a residual convolution module for feature fusion to obtain forward and backward propagation feature information;
Spatial up-sampling module: used for fusing the forward and backward propagation feature information to obtain a high-resolution sub-image sequence free of motion artifacts.
Compared with the prior art, the invention discloses a method and a system for removing motion artifacts based on a scanned light field sequence image, which have the following beneficial effects:
The convolutional neural network structure uses its excellent feature extraction capability to acquire the low-dimensional spatio-temporal features and high-dimensional semantic features of the input image sequence, and removes motion artifacts based on the extracted information without damaging the light field structure of the original input images. The process involves only forward inference, with no additional training required for each run, which saves a large amount of time. Meanwhile, the network's built-in image cleaning module makes it robust to noise, and because the views of a multi-view scanning light field image sequence are processed independently of the main flow, they can be processed in parallel, saving a further large amount of time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a motion artifact removal method provided by the present invention.
Fig. 2 is a structural framework diagram of a convolutional neural network for removing motion artifacts according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Embodiment 1 discloses a method for removing motion artifacts based on a scanned light field sequence image, as shown in fig. 1, comprising the following steps:
S1, image acquisition: multi-view light field microscopy data with 4-dimensional information are obtained through a scanning light field microscopy system.
S2, light field image rearrangement: the multi-view light field data obtained in the previous step are rearranged to obtain a multi-view rearranged image with artifacts and shape v × H × W, where v is the number of views and H and W are the pixel height and width of the rearranged image.
S3, sub-image sequence splitting: the multi-view rearranged image of the scanning light field microscopy system obtained in the previous step is decoupled, according to the scanning pattern, into t sub-images of shape v_i × h × w, where v_i denotes the i-th view in the multi-view rearranged image and h and w are the pixel height and width of each sub-image; the scan point images of the different scan points are stacked along the time sequence to obtain a time-sequence sub-image sequence of shape v × t̂ × h × w, where t̂ is the number of scan points in one scan period.
In the embodiment of the invention, the number t of decoupled sub-images and the number t̂ of scan points in one scan period are numerically identical; the time-sequence sub-image sequence formed by stacking the sub-images along the time axis is equivalent to a video, whose frame count t̂ is also numerically equal to the number t of decoupled sub-images.
S4, normalization preprocessing, implemented as follows: the global energy maximum Vmax of the time-sequence sub-image sequence v × t̂ × h × w is taken as the normalization factor, and the energy value at every pixel position in the sequence is divided by Vmax, so that the energy distribution of the sub-image sequence is normalized to between 0 and 1 (the energy value here is the pixel value; the global energy maximum Vmax, reflected in the image, is the maximum pixel value in the sequence).
The normalization preprocessing step is carried out per batch: one group of data is input at a time for normalization, and the number of image channels c in this step is 1 (the multi-view light field microscopy data acquired by the scanning light field microscopy system in the invention are single-channel grayscale images).
S5, motion artifact removal: the normalized scanning light field sub-image sequence is fed into a deep convolutional neural network to remove the motion artifacts of the original data. The network sequentially performs noise removal, inter-frame consistency alignment, bidirectional spatio-temporal feature extraction, and spatial up-sampling; its structural framework is shown in fig. 2. The input Xi is one scan point image (time-sequence sub-image) of the normalized scanning light field sub-image sequence, Xi-1 is the scan point image at the previous scan position, and Xi+1 is the scan point image at the next scan position. Significant noise is first removed by the noise cleaning module, after which the time-sequence sub-images are divided along the time axis into two sub-sequences lrs1 and lrs2: lrs1 comprises all frames except the last scan point image, i.e. the first t̂ - 1 frames of the t̂-frame video, and lrs2 comprises all frames except the first scan point image, i.e. the last t̂ - 1 frames. lrs1 and lrs2 are sent pairwise, in time order, to the optical flow estimation module, whose computation has two branches: one computes the forward optical flow from lrs1 to lrs2, the other the backward optical flow from lrs2 to lrs1.
After the two groups of optical flows are obtained, each noise-cleaned scan point image is warped by interpolation, guided by its corresponding backward and forward optical flows, to obtain the corresponding corrected backward and forward images. Each noise-cleaned scan point image and its corrected backward and forward images are then concatenated along the time sequence and sent respectively into the residual convolution module to fuse the spatio-temporal feature information between scan points. The feature information from this bidirectional propagation process is fed jointly into the spatial up-sampling module, which fuses the bidirectional features and produces a high-resolution sub-image sequence of shape n × c × H × W. This sequence has both the high temporal resolution of the scan sub-images and the high spatial resolution of the rearranged image, so any one of its frames can equivalently serve as a motion-artifact-free rearranged light field image;
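The flow-guided correction step is, at its core, a backward warp of each frame by its estimated flow field. A minimal numpy sketch under simplifying assumptions (single-channel image, edge-clamped bilinear interpolation; `warp` is a hypothetical stand-in, not the module of fig. 2):

```python
import numpy as np

def warp(img, flow):
    """Backward-warp a (h, w) image by a dense flow field, where
    flow[..., 0] = dy and flow[..., 1] = dx give, per output pixel, the
    source displacement. Bilinear interpolation, edge-clamped."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(9.0).reshape(3, 3)
same = warp(img, np.zeros((3, 3, 2)))      # zero flow: identity
flow = np.zeros((3, 3, 2)); flow[..., 1] = 1.0
shifted = warp(img, flow)                  # sample one pixel to the right
```

Applying `warp` with the forward flow produces the corrected forward image, and with the backward flow the corrected backward image, which the residual convolution module then fuses.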
S6, inverse normalization post-processing: to keep all properties of the image sequence other than the motion artifacts as consistent as possible with the original artifact-containing high-resolution image, the artifact-free image sequence is inverse-normalized to the intensity distribution of the original data. Specifically, the artifact-free time-sequence sub-image sequence of shape n × t × c × H × W is multiplied by the normalization factor Vmax from S4, which restores the output to the energy distribution of the original data;
S7, motion compensation: because of the scanning pattern, the sub-images of the original artifact-containing high-resolution rearranged light field image carry spatial sub-pixel offsets relative to one another. After the high-resolution images are produced by the deep convolutional neural network, these inter-sub-image pixel offsets are magnified as well, and obvious scanning jitter is visible when the high-resolution sub-images are viewed in time order. To replace the rearranged light field image with the output high-resolution sub-images, motion compensation must therefore be applied to the data obtained in the previous step according to the scanning pattern. Specifically: the coordinate offset of each original scan point relative to the central scan point is calculated; since the neural network magnifies this offset by a factor of H/h, each scan point sub-image output by the network is spatially shifted by H/h times the offset, in the opposite direction, to erase it.
The invention addresses the problem that, when a scanning light field microscopy system photographs a moving sample, the high-resolution image obtained by direct rearrangement contains motion artifacts caused by the combined effect of object motion and scanning. Without scanning, the resulting image has no motion artifacts but a lower spatial resolution; such an image is equivalent to taking out the sub-image of a single scan point in the scanning system. To obtain high resolution without motion artifacts, the solution is to take the image of a single scan point and raise its spatial resolution with a deep convolutional neural network, so that the artifact-free image can replace the original artifact-containing high-resolution image. To keep all properties of the image other than the motion artifacts as consistent as possible with the original, the network output must be inverse-normalized to match the energy distribution of the original artifact-containing high-resolution image.
Example 2
Embodiment 2 discloses a motion artifact removal system based on scanned light field sequence images, the system comprising:
Data acquisition module: used for acquiring scanning light field microscopy data.
Data rearrangement preprocessing module: used for rearranging the scanning light field microscopy data to obtain a multi-view rearranged image containing artifacts, and for splitting the multi-view rearranged image into a time-sequence sub-image sequence and normalizing it to obtain a normalized scanning light field sub-image sequence.
Motion artifact removal module: used for sequentially performing noise removal, inter-frame consistency alignment, bidirectional spatio-temporal feature extraction, and spatial up-sampling on the normalized scanning light field sub-image sequence with a convolutional neural network to obtain a sub-image sequence free of motion artifacts.
Inverse normalization processing module: used for performing inverse normalization on the obtained artifact-free sub-image sequence to restore the intensity distribution of the original scanning light field microscopy data.
Motion compensation module: used for performing motion compensation on the inverse-normalized sub-image sequence according to the scanning pattern to eliminate scanning jitter.
Wherein the motion artifact removal module comprises:
Noise removal module: used for removing significant noise from each scan point image in the normalized scanning light field sub-image sequence.
Inter-frame consistency alignment module: used for dividing the noise-cleaned scanning light field sub-image sequence into two sub-sequences lrs1 and lrs2 along the time sequence and computing, with an optical flow estimation module, the forward optical flow from lrs1 to lrs2 and the backward optical flow from lrs2 to lrs1, where lrs1 comprises all frames of the sequence except the last scan point image and lrs2 comprises all frames except the first scan point image.
Bidirectional feature extraction module: used for guiding and correcting each noise-cleaned scan point image according to the forward and backward optical flows, respectively, to obtain corrected forward and corrected backward images; the corrected forward and backward images are then connected along the time sequence and sent respectively into a residual convolution module for feature fusion to obtain forward and backward propagation feature information.
Spatial up-sampling module: used for fusing the forward and backward propagation feature information to obtain a high-resolution sub-image sequence free of motion artifacts.
More specifically, the data operation process in the motion artifact removal module can also be described by the following steps:
Firstly, noise is removed from each scan point image in the normalized scanning light field time-sequence sub-image sequence by the image noise cleaning module.
In the embodiment of the invention, the image noise cleaning module is trained on clean, noise-free images to which various kinds of random noise and degradation processes have been added manually; it can thus simulate real-world noise and degradation, remove the significant noise in the time-sequence sub-image sequence of the scanned images, prevent that noise from affecting subsequent modules, and enhance the robustness of the network. The image noise cleaning module mainly comprises the following parts: (1) a two-dimensional convolution layer followed by a LeakyReLU activation function to raise the feature dimension of the original input image; (2) several two-dimensional convolution layers with residual links, each followed by a ReLU activation function, to process the high-dimensional information output by the previous step; (3) a post-processing convolution to aggregate the high-dimensional information and reduce it back to the feature dimension of the original input image.
Secondly, after all scan point images of the time-sequence sub-image sequence have passed through the noise cleaning module, the inter-frame consistency alignment module ensures consistency between scan points through optical flow estimation (that is, it extracts the inter-frame information of the video generated from the scanned light field image sequence and keeps its spatio-temporal information consistent). The module has two branches, one running from the starting frames toward the ending frames and one from the ending frames toward the starting frames; each branch mainly consists of several two-dimensional convolution layers, each followed by a ReLU activation function, its input being the current frame and its neighboring frame and its output being an optical flow residual.
Then, the bidirectional feature extraction module takes the optical flow residual information of each scan point image produced by the two branches in the previous step; each scan point image and its corresponding optical flow residual are concatenated and sent into several two-dimensional convolution layers with residual links for bidirectional feature fusion and correction.
Finally, the spatial up-sampling module uses the PixelShuffle method to reduce the channel dimension and enlarge the scale of the high-dimensional fused features output by the previous step, obtaining a clear, high-resolution, artifact-free sub-image sequence.
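The PixelShuffle rearrangement named above can be illustrated in plain numpy. This sketch assumes a channel-first layout as in common deep learning frameworks; it is an equivalent of the standard operation, not the embodiment's actual layer.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (c*r*r, h, w) feature map into (c, h*r, w*r): channel
    groups of size r*r are redistributed onto an r x r spatial neighborhood,
    trading channel depth for spatial resolution."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # (c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)     # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# 4 channels at 1x1 become 1 channel at 2x2
x = np.arange(4.0).reshape(4, 1, 1)
y = pixel_shuffle(x, 2)
```

Because it only permutes values, PixelShuffle adds no parameters of its own; the preceding convolutions learn the content of the r*r channel groups.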
Although the scanning light field system can greatly improve the spatial resolution of light field data in a static scene, motion artifacts inevitably occur when a dynamic scene is photographed, owing to the scanning characteristic; this forfeits the advantage of the scanning scheme, and the artifacts affect the subsequent reconstruction process. Conventional methods address the problem with traditional image registration and scattered-point interpolation, but they involve many steps and a large amount of computation, and during registration the accuracy of the computed coordinate transformation may affect the quality of the single-view image generated by the final scattered-point interpolation; any error or inaccurate matching may distort or degrade the final image. The present method eliminates the motion artifacts caused by moving objects, overcoming the scanning light field system's weakness with respect to the motion of the photographed object, and creatively replaces the prior methods with a deep convolutional neural network that processes the input image sequence end to end, simplifying the processing flow. Compared with manual design, the convolutional network has stronger feature extraction capability and better generalization and robustness; in addition, the built-in image cleaning module removes part of the noise, making the subsequent reconstruction clearer and more accurate. Meanwhile, because the processing of images from different views is mutually independent, the views can be processed in parallel, accelerating the whole procedure.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be cross-referenced. Since the device disclosed in an embodiment corresponds to the method disclosed in the same embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for removing motion artifacts based on scanned light field sequence images, characterized by comprising the following steps:
S1, acquiring scanning light field microscopic data;
S2, rearranging the scanning light field microscopic data to obtain a multi-view rearranged image containing artifacts;
S3, splitting the multi-view rearranged image into a time sequence sub-image sequence;
S4, normalizing the obtained time sequence sub-image sequence to obtain a normalized scanning light field sub-image sequence;
S5, sequentially performing noise removal, inter-frame consistency alignment, bidirectional spatio-temporal feature extraction and spatial up-sampling on the normalized scanning light field sub-image sequence by using a convolutional neural network, to obtain a sub-image sequence free of motion artifacts;
the step S5 specifically comprises the following steps:
S51, removing significant noise from each scan point image in the normalized scanning light field sub-image sequence by using an image noise cleaning module;
S52, dividing the noise-removed scanning light field sub-image sequence into two sub-sequences Irs1 and Irs2 along the time sequence, wherein Irs1 consists of all scan point images except the last one in the time sequence, and Irs2 consists of all scan point images except the first one in the time sequence;
S53, inputting the two sub-sequences Irs1 and Irs2 into an optical flow estimation module, and calculating the forward optical flow from Irs1 to Irs2 and the backward optical flow from Irs2 to Irs1, respectively;
S54, correcting each noise-removed scan point image under the guidance of the forward optical flow and the backward optical flow, respectively, to obtain corrected forward images and corrected backward images;
S55, concatenating the corrected forward images and the corrected backward images along the time sequence, and feeding each into a residual convolution module for feature fusion, to obtain forward-propagation feature information and backward-propagation feature information;
S56, inputting the forward-propagation feature information and the backward-propagation feature information together into a spatial up-sampling module for fusion, obtaining a high-resolution sub-image sequence free of motion artifacts;
S6, performing inverse normalization on the obtained motion-artifact-free sub-image sequence to obtain a sub-image sequence with the intensity distribution of the original scanning light field microscopic data;
S7, performing motion compensation on the inverse-normalized sub-image sequence according to the scanning mode, so as to eliminate scanning jitter.
2. The method for removing motion artifacts based on images of a scanned light field sequence according to claim 1, wherein step S1 specifically comprises the steps of:
acquiring light field microscopic data carrying 4-dimensional information by using a scanning light field microscope system.
3. The method according to claim 1, wherein in step S2 the multi-view rearranged image containing artifacts has the shape v×H×W,
where v is the number of views, H is the pixel height of the rearranged image, and W is the pixel width of the rearranged image.
4. The method for removing motion artifacts based on images of a scanned light field sequence according to claim 1, wherein step S3 specifically comprises the steps of:
decoupling the artifact-containing multi-view rearranged image into t sub-images of shape v×h×w according to the scanning mode, wherein v_i denotes the i-th view in the multi-view rearranged image, and h and w are respectively the pixel height and pixel width of each sub-image;
and stacking the scan point images of the different scan points along the time sequence to obtain a time sequence sub-image sequence of shape v×t'×h×w, wherein t' is the number of scan point positions in one scanning period and is numerically equal to the number t of decoupled sub-images.
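For illustration only: if one assumes the scan positions form a regular s×s grid whose sub-images are pixel-interleaved in the rearranged image (an assumption about the scanning mode, not stated by the claims), the decoupling and stacking above reduce to array reshapes. All names and sizes below are hypothetical:

```python
import numpy as np

# Assumed regular s x s scan grid, so H = s*h and W = s*w; the real
# decoupling follows whatever scan pattern the system actually used.
v, s, h, w = 13, 3, 4, 4                      # 13 views, 3x3 = 9 scan points
rearranged = np.random.rand(v, s * h, s * w)  # multi-view image, (v, H, W)

# (v, H, W) -> (v, h, s, w, s) -> (v, s, s, h, w) -> (v, t', h, w)
# sub_seq[view, i*s + j, y, x] == rearranged[view, y*s + i, x*s + j]
sub_seq = (rearranged
           .reshape(v, h, s, w, s)
           .transpose(0, 2, 4, 1, 3)
           .reshape(v, s * s, h, w))
print(sub_seq.shape)  # (13, 9, 4, 4)
```

The resulting axis order (views, scan point positions, sub-image height, sub-image width) matches the v×t'×h×w layout described above.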
5. The method for removing motion artifacts based on images of a scanned light field sequence according to claim 1, wherein step S4 specifically comprises the steps of:
taking the global energy maximum Vmax of the time sequence sub-image sequence as the normalization factor, and dividing the energy value at every pixel position in the time sequence sub-image sequence by Vmax, thereby normalizing the energy distribution of the sub-image sequence to the range 0 to 1.
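A one-line sketch of this normalization by the global maximum Vmax, shown together with the inverse operation of step S6 that restores the original intensity scale (toy values, for illustration only):

```python
import numpy as np

# Toy time sequence sub-image sequence; the values stand in for energies.
seq = np.array([[0.0, 2.0],
                [4.0, 8.0]])

vmax = seq.max()        # global energy maximum Vmax (assumed > 0)
norm = seq / vmax       # step S4: energy distribution mapped into [0, 1]
restored = norm * vmax  # step S6: inverse normalization restores intensities

print(norm.max(), bool(np.allclose(restored, seq)))  # 1.0 True
```

Because a single global factor is used for the whole sequence, relative intensities between frames are preserved, and multiplying by the same Vmax recovers the original distribution exactly.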
6. The method for removing motion artifacts based on a scanned light field sequence image according to claim 1, wherein in step S6, performing inverse normalization processing specifically comprises:
multiplying the energy value at every pixel position in the artifact-free time sequence sub-image sequence by the global energy maximum Vmax of the time sequence sub-image sequence.
7. The method for removing motion artifacts based on images of a scanned light field sequence according to claim 1, wherein step S7 specifically comprises:
calculating the coordinate offset of each original scan point of the scanning light field microscope system relative to the central scan point, and spatially subtracting H/h times the offset from each scan point sub-image in the inverse-normalized sub-image sequence, wherein H denotes the pixel height of the rearranged image and h denotes the pixel height of the sub-image.
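For illustration only: the motion compensation above amounts to translating each scan point sub-image opposite to its scaled offset from the central scan point. The NumPy sketch below uses integer shifts and an assumed scale factor of 2; real scan offsets may be fractional and require interpolation, and the 3×3 grid is hypothetical:

```python
import numpy as np

def shift(img, dy, dx):
    """Translate `img` by integer (dy, dx), zero-filling exposed borders."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, ye = max(dy, 0), min(h + dy, h)
    xs, xe = max(dx, 0), min(w + dx, w)
    out[ys:ye, xs:xe] = img[ys - dy:ye - dy, xs - dx:xe - dx]
    return out

# Toy 3x3 scan grid: offsets of each scan point from the central point,
# scaled (here by an assumed factor of 2, standing in for H/h) and then
# subtracted from each sub-image, i.e. applied with opposite sign.
scale = 2
offsets = [(i - 1, j - 1) for i in range(3) for j in range(3)]
sub = np.eye(4)  # stand-in for one inverse-normalized sub-image
compensated = [shift(sub, -scale * dy, -scale * dx) for dy, dx in offsets]
print(len(compensated))  # 9
```

Applying the opposite of each scan point's offset brings all sub-images into the frame of the central scan point, which is what removes the scan-induced jitter.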
8. A system for removing motion artifacts based on scanned light field sequence images, characterized by comprising:
a data acquisition module, configured to acquire scanning light field microscopic data;
a data rearrangement preprocessing module, configured to rearrange the scanning light field microscopic data to obtain a multi-view rearranged image containing artifacts, split the multi-view rearranged image into a time sequence sub-image sequence, and normalize the split time sequence sub-image sequence to obtain a normalized scanning light field sub-image sequence;
a motion artifact removal module, configured to sequentially perform noise removal, inter-frame consistency alignment, bidirectional spatio-temporal feature extraction and spatial up-sampling on the normalized scanning light field sub-image sequence by using a convolutional neural network, to obtain a sub-image sequence free of motion artifacts;
an inverse normalization processing module, configured to perform inverse normalization on the obtained motion-artifact-free sub-image sequence to obtain a sub-image sequence with the intensity distribution of the original scanning light field microscopic data;
a motion compensation module, configured to perform motion compensation on the inverse-normalized sub-image sequence according to the scanning mode so as to eliminate scanning jitter;
wherein the motion artifact removal module comprises:
a noise removal module, configured to remove significant noise from each scan point image in the normalized scanning light field sub-image sequence;
an inter-frame consistency alignment module, configured to divide the noise-removed scanning light field sub-image sequence into two sub-sequences Irs1 and Irs2 along the time sequence, and to calculate, with an optical flow estimation module, the forward optical flow from Irs1 to Irs2 and the backward optical flow from Irs2 to Irs1, wherein Irs1 consists of all scan point images except the last one in the time sequence, and Irs2 consists of all scan point images except the first one in the time sequence;
a bidirectional feature extraction module, configured to correct each noise-removed scan point image under the guidance of the forward optical flow and the backward optical flow, respectively, to obtain corrected forward images and corrected backward images, and, after concatenating the corrected forward images and the corrected backward images along the time sequence, to feed each into a residual convolution module for feature fusion, obtaining forward-propagation feature information and backward-propagation feature information;
a spatial up-sampling module, configured to fuse the forward-propagation feature information and the backward-propagation feature information to obtain a high-resolution sub-image sequence free of motion artifacts.
CN202410270167.3A 2024-03-11 2024-03-11 Method and system for removing motion artifact based on scanned light field sequence image Active CN117876279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410270167.3A CN117876279B (en) 2024-03-11 2024-03-11 Method and system for removing motion artifact based on scanned light field sequence image

Publications (2)

Publication Number Publication Date
CN117876279A CN117876279A (en) 2024-04-12
CN117876279B true CN117876279B (en) 2024-05-28

Family

ID=90595214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410270167.3A Active CN117876279B (en) 2024-03-11 2024-03-11 Method and system for removing motion artifact based on scanned light field sequence image

Country Status (1)

Country Link
CN (1) CN117876279B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118154430A (en) * 2024-05-10 2024-06-07 清华大学 Space-time-angle fusion dynamic light field intelligent imaging method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP1963830A4 (en) * 2005-12-21 2013-04-24 Yeda Res & Dev Method and apparatus for acquiring high resolution spectral data or high definition images in inhomogeneous environments

Patent Citations (17)

Publication number Priority date Publication date Assignee Title
US4607286A (en) * 1985-01-04 1986-08-19 Rca Corporation Removal of line selection artifacts from trace portions of line transfer CCD imager video output signals
US4841555A (en) * 1987-08-03 1989-06-20 University Of Chicago Method and system for removing scatter and veiling glare and other artifacts in digital radiography
CN104597419A (en) * 2015-01-04 2015-05-06 华东师范大学 Method for correcting motion artifacts in combination of navigation echoes and compressed sensing
CN107945132A (en) * 2017-11-29 2018-04-20 Shenzhen Anke High-tech Co., Ltd. Artifact correction method and device for CT images based on a neural network
CN109741409A (en) * 2018-11-30 2019-05-10 Xiamen University Reference-scan-free correction method for eddy current artifacts in echo-planar imaging
CN110706346A (en) * 2019-09-17 2020-01-17 北京优科核动科技发展有限公司 Space-time joint optimization reconstruction method and system
CN111968112A (en) * 2020-09-02 2020-11-20 广州海兆印丰信息科技有限公司 CT three-dimensional positioning image acquisition method and device and computer equipment
WO2023280292A1 (en) * 2021-07-08 2023-01-12 清华大学 Fast-scanning and three-dimensional imaging method and device for large-volume scattered sample
WO2023000244A1 (en) * 2021-07-22 2023-01-26 深圳高性能医疗器械国家研究院有限公司 Image processing method and system, and application of image processing method
WO2023029520A1 (en) * 2021-08-31 2023-03-09 清华大学 Method and apparatus for light-field-scanning imaging system to photograph dynamic scene
CN113487658A (en) * 2021-08-31 2021-10-08 清华大学 Dynamic scene shooting method and device for scanning light field imaging system
CN113781461A (en) * 2021-09-16 2021-12-10 人工智能与数字经济广东省实验室(广州) Intelligent patient monitoring and sequencing method
WO2023183486A1 (en) * 2022-03-23 2023-09-28 University Of Southern California Deep-learning-driven accelerated mr vessel wall imaging
CN115209119A (en) * 2022-06-15 2022-10-18 华南理工大学 Video automatic coloring method based on deep neural network
CN115797487A (en) * 2022-11-28 2023-03-14 首都师范大学 CT image ring artifact self-adaptive structure-preserving correction method and device and imaging equipment
CN116843779A (en) * 2023-03-24 2023-10-03 哈尔滨工业大学 Linear scanning detector differential BPF reconstructed image sparse artifact correction method
CN116630178A (en) * 2023-04-13 2023-08-22 中国科学院上海微系统与信息技术研究所 U-Net-based power frequency artifact suppression method for ultra-low field magnetic resonance image

Non-Patent Citations (4)

Title
Application of a shimming auxiliary device in brachial plexus imaging; Li Peng; Lv Fajin; Ledou Xiaolan; Wang Xiaoxuan; Chinese Journal of Medical Imaging Technology; 20121020 (No. 10); full text *
Video compression artifact removal algorithm based on adaptive separable convolution kernels; Nie Kehui; Liu Wenzhe; Tong Tong; Du Min; Gao Qinquan; Journal of Computer Applications; 20190510 (No. 05); full text *
Three-dimensional micro-CT image dataset of the head of adult poplar leaf beetles; Ren Jing; Ge Siqin; China Scientific Data (Chinese and English online edition); 20171215 (No. 04); full text *
Study on the correlation between the magnetic resonance PROPELLER technique and image quality; Ni Ping; Chen Ziqian; Xiao Hui; Qian Gennian; Chen Jinghua; Medical Equipment Information; 20070615 (No. 06); full text *

Similar Documents

Publication Publication Date Title
CN117876279B (en) Method and system for removing motion artifact based on scanned light field sequence image
CN110599400B (en) EPI-based light field image super-resolution method
CN109146787B (en) Real-time reconstruction method of dual-camera spectral imaging system based on interpolation
CN113160380B (en) Three-dimensional magnetic resonance image super-resolution reconstruction method, electronic equipment and storage medium
CN111583113A (en) Infrared image super-resolution reconstruction method based on generation countermeasure network
CN111626927A (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
CN111369443B (en) Zero-order learning super-resolution method of light field cross-scale
Xue et al. Research on gan-based image super-resolution method
CN104574338A (en) Remote sensing image super-resolution reconstruction method based on multi-angle linear array CCD sensors
CN111815690B (en) Method, system and computer equipment for real-time splicing of microscopic images
CN111986102B (en) Digital pathological image deblurring method
Chen et al. Guided dual networks for single image super-resolution
CN112435165A (en) Two-stage video super-resolution reconstruction method based on generation countermeasure network
CN109615584B (en) SAR image sequence MAP super-resolution reconstruction method based on homography constraint
Sun et al. A lightweight dual-domain attention framework for sparse-view CT reconstruction
CN117078514A (en) Training method, system and product of multistage light field super-resolution network
Shin et al. LoGSRN: Deep super resolution network for digital elevation model
Lyn Multi-level feature fusion mechanism for single image super-resolution
CN116309066A (en) Super-resolution imaging method and device
CN116208812A (en) Video frame inserting method and system based on stereo event and intensity camera
CN115601237A (en) Light field image super-resolution reconstruction network with enhanced inter-view difference
CN114998405A (en) Digital human body model construction method based on image drive
CN111951159B (en) Processing method for super-resolution of light field EPI image under strong noise condition
Wang et al. Reconstructed densenets for image super-resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant