CN107292865A - A kind of stereo display method based on two dimensional image processing - Google Patents
A kind of stereo display method based on two dimensional image processing
- Publication number
- CN107292865A CN107292865A CN201710344783.9A CN201710344783A CN107292865A CN 107292865 A CN107292865 A CN 107292865A CN 201710344783 A CN201710344783 A CN 201710344783A CN 107292865 A CN107292865 A CN 107292865A
- Authority
- CN
- China
- Prior art keywords
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The present invention discloses a stereo display method based on two-dimensional image processing, comprising the following steps: S1: virtual view synthesis for a CT image display system: using the image sequence generated by a CT three-dimensional reconstruction model, adjacent pairs of images are processed and virtual viewpoint images are obtained by an image warping method; S2: super-resolution reconstruction for a CT image display system: by introducing manifold learning theory, super-resolution reconstruction is performed on the images so that richer details are presented to the user, enabling the original image sequence input to the system and the newly generated virtual viewpoint images to adapt to different display devices and to meet the demands of zoom operations.
Description
Technical field
The present invention relates to the technical field of medical image processing, and in particular to a stereo display method based on two-dimensional image processing.
Background technology
An intracranial aneurysm is an abnormal bulge of an intracranial arterial wall and a common cause of spontaneous subarachnoid hemorrhage. Its etiology is unclear, but congenital aneurysms account for the majority of cases. Once an aneurysm ruptures and bleeds, the clinical manifestation is severe subarachnoid hemorrhage, with abrupt onset and even coma. Correct diagnosis of intracranial aneurysms is therefore particularly important, and CT imaging, as important evidence for diagnosing intracranial aneurysms, is a routine means of clinical diagnosis.
A typical CT image processing system includes functions such as image recognition, image enhancement, feature extraction, three-dimensional reconstruction, geometric transformation and parameter measurement, so that doctors can clearly observe and effectively analyze CT images, obtain more complete information about the focal area in the image, and complete the diagnosis accurately. Among these functions, three-dimensional reconstruction processes the tomographic images, constructs a three-dimensional model, and projects the model for display in different directions. Current commercial CT image processing systems, such as the Advantage Workstation system, can convert a two-dimensional CT image sequence into a three-dimensional model with volume information and save the reconstruction result as a series of two-dimensional images in common formats. These two-dimensional images can be viewed on an ordinary PC without special parsing software, are easy to transmit, and are convenient for clinicians. However, owing to the limits of resolution and the data loss in the three-dimensional-to-two-dimensional conversion, clinicians can only browse and observe a limited number of non-interactive planar images and must judge the three-dimensional features in the images subjectively from experience, so the accuracy of diagnosis is difficult to guarantee.
Content of the invention
The object of the present invention is to provide a two-dimensional image processing method that, without directly using a CT workstation, raises the resolution of the original CT images so as to display CT images rich in three-dimensional feature information.
To achieve the above object, the present invention adopts the following technical scheme:
A stereo display method based on two-dimensional image processing comprises the following steps:
S1: virtual view synthesis for a diagnosis-enhanced CT image display system: using the image sequence generated by a CT three-dimensional reconstruction model, adjacent pairs of images are processed and virtual viewpoint images are obtained by an image warping method;
S2: super-resolution reconstruction for a diagnosis-enhanced CT image display system: by introducing manifold learning theory, super-resolution reconstruction is performed on the images so that richer details are presented to the user, enabling the original image sequence input to the system and the newly generated virtual viewpoint images to adapt to different display devices and to meet the demands of zoom operations.
According to the CT image processing method proposed by the present invention, step S1 includes:
S11: feature extraction of the CT images for the diagnosis-enhanced CT image display system;
S12: feature point matching of the CT images for the diagnosis-enhanced CT image display system;
S13: a plane subdivision and interpolation algorithm for CT images based on computational geometry.
Step S2 includes:
S21: selecting a specified number of images from the virtual viewpoint images obtained in step S1;
S22: passing the chosen images through a Gaussian degradation model to obtain low-resolution images f1;
S23: establishing an image training set;
S24: matching and reconstruction;
S25: obtaining the reconstructed high-resolution image.
According to the stereo display method proposed by the present invention, step S11 includes:
The Harris corner detection method computes, for every pixel of the image, the derivatives in the vertical and horizontal directions, denoted fx and fy; multiplying fx by fy gives fxfy, making three matrices in total. Gaussian filtering is applied to the three matrices, and the interest value of each point is calculated by the following formulas:

M = G(s̃) ⊗ | fx²    fx·fy |
            | fx·fy  fy²   |

I = det(M) − k·tr(M), k = 0.04

where fx and fy denote the gradients in the x and y directions, G(s̃) is the Gaussian filtering matrix, det is the determinant, k = 0.04 is the weight coefficient, and tr is the trace;
Each element of the matrix I corresponds to the interest value of the corresponding point of the original intracranial aneurysm image;
The eigenvalues of the matrix M are the first-order curvatures of the autocorrelation function. All local interest values are extracted from the original intracranial aneurysm image, and within each selected local range the pixel corresponding to the maximal interest value is taken as a feature point.
According to the stereo display method proposed by the present invention, step S12 includes:
S1201: using the corners obtained in step S11, a square direction field of size 7 × 7 is set with the corner position at its center. The color values at the four vertex positions of the square are inspected; if one vertex is white and the other three are black, the direction from the corner position toward the white vertex is defined as the direction of the corner. The direction vectors pointing to the upper left, lower left, lower right and upper right are denoted 1, 2, 3 and 4, respectively. If the vertex colors do not satisfy the one-white-three-black condition, the corner is deleted; every corner thus retained carries a direction vector;
S1202: from the candidate corner coordinates of the two intracranial aneurysm images, 5 pairs of matching point coordinates are chosen; they must evenly cover the common region of the images. The standard deviation σ̃ and the mean Ec of the coordinate differences between the two images are calculated:

Ec = Σ_{i=1..5} (Pi − Pi′) / 5

where Pi denotes a corner coordinate contained in view 1 and Pi′ the corresponding corner coordinate contained in view 2;
S1203: for each corner coordinate Pi of view 1, the corner coordinates Pi′ of view 2 are searched for a matching point coordinate;
abs(E − Ec) < Threshold together with identical direction vectors is taken as the condition of a successful match, where Threshold may be set equal to σ̃ and E is the coordinate difference of the candidate corner pair.
If the search fails, the corner Pi of view 1 is deemed to have no suitable matching point in view 2.
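The matching criterion of S1202–S1203 can be sketched as follows (a sketch; the tuple layout of corners with their direction codes and the first-acceptable-match strategy are assumptions):

```python
import numpy as np

def match_corners(corners1, corners2, calib_pairs):
    """Match corners between two views.

    corners1/corners2: lists of (x, y, direction) with direction in {1, 2, 3, 4}.
    calib_pairs: five hand-picked ((x1, y1), (x2, y2)) pairs covering the common
    region; their coordinate differences give the mean Ec and the standard
    deviation used as Threshold.
    """
    diffs = np.array([np.subtract(p1, p2) for p1, p2 in calib_pairs], float)
    Ec = diffs.mean(axis=0)          # mean coordinate difference between the views
    threshold = diffs.std()          # Threshold = sigma of the calibration differences
    matches = []
    for x1, y1, d1 in corners1:
        for x2, y2, d2 in corners2:
            E = np.array([x1 - x2, y1 - y2], float)
            # match condition: abs(E - Ec) < Threshold and identical direction
            if d1 == d2 and np.abs(E - Ec).max() < threshold:
                matches.append(((x1, y1), (x2, y2)))
                break                # first acceptable candidate wins
    return matches

calib = [((10, 10), (8, 9)), ((20, 15), (18, 14)), ((30, 30), (28, 29)),
         ((5, 25), (3, 24)), ((40, 8), (38, 8))]
found = match_corners([(12, 12, 1)], [(10, 11, 1), (12, 12, 2)], calib)
```
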
According to the stereo display method proposed by the present invention, step S13 includes:
S1301: subdivision using a Delaunay triangular mesh, which comprises the following steps:
S13011: according to the extent of the point set, a large triangle containing all the points is created and put into the triangle linked list;
S13012: the points of the set are inserted one by one; for each insertion the triangle linked list is traversed to find the two triangles whose circumscribed circles contain the insertion point, called the influence triangles of the point. The shared edge of the two influence triangles is deleted, and the insertion point is connected to all vertices of the influence triangles, thereby inserting one vertex into the triangulation;
S13013: two triangles sharing an edge are merged into a quadrilateral, and the circumscribed circle of one of the triangles is taken as the reference circle; according to the largest-empty-circle criterion, the position of the fourth point is checked, and if it lies inside the circle, the diagonal of the quadrilateral is swapped, which completes the local optimization procedure (LOP);
S13014: the triangles processed by the LOP are put back into the triangle linked list, and steps S13012 and S13013 are repeated until all points of the vertex set have been inserted into the triangulation;
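The incremental insertion with local optimization described in steps S13011–S13014 is what standard Delaunay libraries implement; a minimal sketch using `scipy.spatial.Delaunay` (the corner coordinates and the 512 × 512 frame are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import Delaunay

# Matched corner coordinates plus the four frame vertices of an assumed 512x512 image
points = np.array([
    [0, 0], [511, 0], [0, 511], [511, 511],         # frame vertices (convex hull)
    [120, 80], [300, 150], [200, 400], [400, 330],  # interior matched corners
], float)

tri = Delaunay(points)          # Qhull performs incremental insertion + local optimisation
n_triangles = len(tri.simplices)
# Euler's relation for a planar triangulation: T = 2n - 2 - h (here n=8 points, h=4 hull)
```

Every input point ends up as a vertex of the mesh, and each simplex row of `tri.simplices` is one triangle of the subdivision.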
S1302: figure deformation by linear interpolation, which comprises the following steps:
S13021: the target and source intracranial aneurysm images (the left and right images) are defined as Is and Id. The matched corner pairs obtained in step S12 are taken as control point pairs; together with the 4 vertex pairs of the source and target CT images they are used to construct two Delaunay triangulations on the left and right CT images, the left mesh being denoted Ms and the right mesh Md. The same spatial transformation acts on the left and right control meshes, so every point of Is can be mapped onto Id. Since the corners and the triangles of the subdivided meshes are in one-to-one correspondence, the deformation between the two CT images can be converted into the deformation of all the triangle pairs;
S13022: using inverse warping, Ts is deformed into Td, where Ts denotes a triangle before deformation and Td the triangle after deformation. The corresponding vertices of Ts and Td are Ps1, Ps2, Ps3 and P1, P2, P3, and these six points uniquely determine an affine transformation, formula (5).
In formula (5), Pdx and Pdy are the x and y coordinates of a point Pd in Td, and Psx and Psy are the x and y coordinates of the corresponding point Ps in Ts. In matrix form the transformation is written with a coefficient matrix A, which is solved from the six vertex coordinates.
Step S13023: the predicted image is obtained by bilinear interpolation. Let (Psx0, Psy0) be the integer part of (Psx, Psy), and let dx = Psx − Psx0, dy = Psy − Psy0. The image value at (Pdx, Pdy) in Td is then determined by the following formula:
(Pdx, Pdy) = (1 − dx)(1 − dy)(Psx0, Psy0) + (1 − dx)·dy·(Psx0, Psy0 + 1) + dx(1 − dy)(Psx0 + 1, Psy0) + dx·dy·(Psx0 + 1, Psy0 + 1).
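Steps S13022–S13023 can be sketched as follows (a simplified sketch; solving the affine coefficients by least squares and the row/column indexing convention `img[y, x]` are assumptions, and image-boundary handling is omitted):

```python
import numpy as np

def affine_from_triangles(tri_d, tri_s):
    """2x3 affine A mapping a target-triangle point (xd, yd) to its source
    position (xs, ys), determined by the three vertex pairs (inverse warping)."""
    tri_d = np.asarray(tri_d, float)
    tri_s = np.asarray(tri_s, float)
    P = np.hstack([tri_d, np.ones((3, 1))])        # rows [xd, yd, 1]
    A, *_ = np.linalg.lstsq(P, tri_s, rcond=None)  # solve P @ A = tri_s
    return A.T                                      # shape (2, 3)

def bilinear_sample(img, xs, ys):
    """Value at fractional source coordinates (xs, ys): the (1-dx)(1-dy)...
    weighted sum of the four integer neighbours, as in the patent's formula."""
    x0, y0 = int(np.floor(xs)), int(np.floor(ys))
    dx, dy = xs - x0, ys - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

# A pure translation by (+2, +3): each target vertex maps back to a source vertex
A = affine_from_triangles([(0, 0), (1, 0), (0, 1)], [(2, 3), (3, 3), (2, 4)])
img = np.arange(16, dtype=float).reshape(4, 4)
v = bilinear_sample(img, 1.5, 1.5)
```
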
According to the stereo display method proposed by the present invention, step S23 includes:
Gradient, brightness and edges are selected as the feature vector, and features are extracted from f1;
The low-resolution image after feature extraction is denoted f2; brightness information is chosen as the feature of the high-resolution image, and the high-resolution image after feature extraction is denoted f3;
f2 and f3 are partitioned into blocks, producing a low-resolution feature image block training set H2 and a corresponding high-resolution feature image block training set H3;
The low-resolution and high-resolution feature image blocks at corresponding positions are put into H2 and H3 respectively, completing the establishment of the image training set Ht.
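The block pairing of S23 can be sketched as follows (a sketch; the scale factor between f2 and f3 and the non-overlapping tiling are assumptions):

```python
import numpy as np

def build_patch_training_set(f2, f3, patch=3, scale=2):
    """Pair each patch x patch block of the low-res feature image f2 with the
    co-located (scale*patch) x (scale*patch) block of the high-res feature
    image f3; blocks at the same index in H2 and H3 correspond."""
    H2, H3 = [], []
    h, w = f2.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            H2.append(f2[i:i + patch, j:j + patch])
            H3.append(f3[i * scale:(i + patch) * scale,
                         j * scale:(j + patch) * scale])
    return np.array(H2), np.array(H3)

f2 = np.arange(36, dtype=float).reshape(6, 6)      # toy low-res feature image
f3 = np.arange(144, dtype=float).reshape(12, 12)   # toy high-res feature image
H2, H3 = build_patch_training_set(f2, f3)
```
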
According to the stereo display method proposed by the present invention, step S24 includes:
Based on Ht, let Lt be the input low-resolution image to be reconstructed and Wt the reconstructed high-resolution image;
Features are first extracted from Lt; the chosen feature quantities must be consistent with those selected in the first step, when the image training set was established;
The feature-extracted Lt is partitioned into blocks; each 3 × 3 block is a low-resolution image block to be reconstructed, and matching reconstruction is carried out for each low-resolution feature image block fi;
After feature extraction, a matching search is performed: the corresponding 7 × 7 block is taken as the match block, and the search proceeds within the corresponding 21 × 21 search window. The 3 image blocks nearest to fi in Euclidean distance are found in the low-resolution feature image block training set;
Since the local manifolds of the low-resolution and high-resolution feature image blocks are similar, the 3 high-resolution feature image blocks hj used for the linear combination are obtained.
According to the stereo display method proposed by the present invention, step S25 includes:
Using the 3 low-resolution neighbour blocks found by the matching search, the linear representation of the low-resolution feature image block to be reconstructed is obtained, which yields the reconstruction weight coefficients;
where fi denotes the feature of the i-th low-resolution image block to be reconstructed, dj denotes the j-th neighbour block found in the low-resolution training set, Ni is the set of all K low-resolution neighbour blocks found, and wij are the reconstruction weight coefficients. The above formula is solved so that the representation error is minimal, subject to the constraints that the weights wij sum to 1 and that wij = 0 for every block not belonging to the set Ni;
Using these 3 reconstruction weight coefficients and the corresponding high-resolution feature image blocks, the linear combination gives the reconstructed high-resolution feature image block yi, where hj are the high-resolution feature image blocks of the K neighbours found and wij are the reconstruction weight coefficients;
After the reconstructed high-resolution feature image blocks are obtained, the reconstructed high-resolution image is produced by stitching the image blocks together.
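The constrained least-squares step of S25 can be sketched as follows (a sketch of the LLE-style weight solve via the local Gram matrix; the regularization term is an assumption to keep the system well-conditioned):

```python
import numpy as np

def lle_weights(f, neighbours):
    """Weights w minimising || f - sum_j w_j d_j ||^2 subject to sum(w) = 1,
    solved in the standard LLE way through the local Gram matrix."""
    D = np.array([n.ravel() for n in neighbours], float)   # K x d neighbour matrix
    diff = D - f.ravel()                                   # shift into the local frame
    G = diff @ diff.T                                      # K x K Gram matrix
    G = G + np.eye(len(D)) * 1e-8 * np.trace(G)            # regularise (assumption)
    w = np.linalg.solve(G, np.ones(len(D)))
    return w / w.sum()                                     # enforce sum(w) = 1

def reconstruct_block(weights, high_blocks):
    """Linear-combine the paired high-resolution blocks with the same weights."""
    return sum(w * h for w, h in zip(weights, high_blocks))

f = np.array([[1.0, 2.0], [3.0, 4.0]])
neighbours = [f + 1, f - 1, f + 3]        # f is exactly 0.5*(f+1) + 0.5*(f-1)
w = lle_weights(f, neighbours)
rec = reconstruct_block(w, neighbours)    # toy case: high blocks = low blocks
```
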
Compared with the prior art, the present invention provides a stereo display method based on two-dimensional image processing. A virtual viewpoint generation algorithm processes the input images, providing the conditions for image-based rendering, and a super-resolution reconstruction method increases the detail of each viewpoint image without changing the image input. Applied to multi-viewpoint stereoscopic display, this gives two-dimensional CT images the characteristics of three-dimensional objects again. The image processing and display method of the present invention not only provides an effective means of CT image enhancement, but also has theoretical significance and application value for research on processing systems for nuclear magnetic resonance and ultrasound images.
Brief description of the drawings
Fig. 1 is a flow chart of an embodiment of the stereo display method of the present invention;
Fig. 2 is a flow chart of the virtual view synthesis step in the stereo display method of the present invention;
Fig. 3 is a flow chart of the super-resolution reconstruction step in the stereo display method of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by persons of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Referring to Figs. 1 to 3, the stereo display method based on two-dimensional image processing according to the present invention comprises the following steps:
Step S1: CT image feature extraction for the diagnosis-enhanced CT image display system.
The Harris corner detection method computes, for every pixel of the image, the derivatives in the vertical and horizontal directions, denoted fx and fy; multiplying fx by fy gives fxfy, making three values in total. Gaussian filtering is applied to the three matrices, and the interest value of each point is calculated by formulas (1) and (2):

M = G(s̃) ⊗ | fx²    fx·fy |      (1)
            | fx·fy  fy²   |

I = det(M) − k·tr(M), k = 0.04      (2)

where fx and fy denote the gradients in the x and y directions, G(s̃) is the Gaussian filtering matrix, det is the determinant, k = 0.04 is the weight coefficient, and tr is the trace.
Each element of the matrix I corresponds to the interest value of the corresponding point of the original intracranial aneurysm image.
The eigenvalues of the matrix M are the first-order curvatures of the autocorrelation function; if both curvature values are sufficiently high, the point is considered a corner. All local interest values are extracted from the original intracranial aneurysm image, and within each selected local range the pixel corresponding to the maximal interest value is taken as a feature point.
Step S2: CT image feature point matching for the diagnosis-enhanced intracranial aneurysm CT image display system.
Step S201: deleting falsely detected corners. Using the corners obtained in step S1, a square direction field of size 7 × 7 is set with the corner position at its center. The color values at the four vertex positions of the square are inspected; if one vertex is white and the other three are black, the direction from the corner position toward the white vertex is defined as the direction of the corner. The direction vectors pointing to the upper left, lower left, lower right and upper right are denoted 1, 2, 3 and 4, respectively. If the vertex colors do not satisfy the one-white-three-black condition, the corner is deleted. Every corner thus retained carries a direction vector.
Step S202: calculating the standard deviation σ̃ and the mean Ec of the coordinate differences. From the candidate corner coordinates of the two intracranial aneurysm images, 5 pairs of matching point coordinates are chosen; they must evenly cover the common region of the images, and Ec between the two images is calculated.
Step S203: corner matching using the mean coordinate difference and the direction vectors. For each corner coordinate Pi of view 1, the corner coordinates Pi′ of view 2 are searched to find a matching point coordinate. abs(E − Ec) < Threshold together with identical direction vectors is taken as the condition of a successful match, where Threshold may be set equal to σ̃. If the search fails, the corner Pi of view 1 is deemed to have no suitable matching point in view 2.
Step S3: plane subdivision and interpolation algorithm for intracranial aneurysm CT images based on computational geometry.
Step S301: subdivision using a Delaunay triangular mesh.
Step S3011: according to the extent of the point set, a large triangle containing all the points is created and put into the triangle linked list.
Step S3012: the points of the set are inserted one by one; for each insertion the triangle linked list is traversed to find the two triangles whose circumscribed circles contain the insertion point, called the influence triangles of the point. The shared edge of the two influence triangles is deleted, and the insertion point is connected to all vertices of the influence triangles, thereby inserting one vertex into the triangulation.
Step S3013: local optimization. Two triangles sharing an edge are merged into a quadrilateral, and the circumscribed circle of one of the triangles is taken as the reference circle; according to the largest-empty-circle criterion, the position of the fourth point is checked, and if it lies inside the circle, the diagonal of the quadrilateral is swapped, which completes the local optimization procedure.
Step S3014: the triangles processed by the LOP are put back into the triangle linked list, and steps S3012 and S3013 are repeated until all points of the vertex set have been inserted into the triangulation.
Step S302: figure deformation by linear interpolation.
Step S3021: the target and source intracranial aneurysm images (the left and right images) are defined as Is and Id. The matched corner pairs obtained in step S2 are taken as control point pairs; together with the 4 vertex pairs of the source and target intracranial aneurysm images they are used to construct two Delaunay triangulations on the left and right intracranial aneurysm images, the left mesh being denoted Ms and the right mesh Md. The same spatial transformation acts on the left and right control meshes, so every point of Is can be mapped onto Id. Since the corners and the triangles of the subdivided meshes are in one-to-one correspondence, the deformation between the two intracranial aneurysm images can be converted into the deformation of all the triangle pairs.
Step S3022: using inverse warping, Ts is deformed into Td; the corresponding vertices of Ts and Td are Ps1, Ps2, Ps3 and P1, P2, P3, and these six points uniquely determine an affine transformation, formula (5).
In formula (5), Pdx and Pdy are the x and y coordinates of a point Pd in Td, and Psx and Psy are the x and y coordinates of the corresponding point Ps in Ts.
The matrix A in the formula may or may not be solvable: as long as Td does not degenerate to a point or a straight line, A can be solved. Even if Td does degenerate, then, because the boundary of every triangle in the mesh coincides with that of its adjacent triangles, an unsolvable case does not affect the deformation result of the intracranial aneurysm image.
Step S3023: the predicted image is obtained by bilinear interpolation.
(1) Let (Psx0, Psy0) be the integer part of (Psx, Psy), and let dx = Psx − Psx0, dy = Psy − Psy0.
(2) The predicted intracranial aneurysm image value of Td is determined by:
(Pdx, Pdy) = (1 − dx)(1 − dy)(Psx0, Psy0) + (1 − dx)·dy·(Psx0, Psy0 + 1) + dx(1 − dy)(Psx0 + 1, Psy0) + dx·dy·(Psx0 + 1, Psy0 + 1).
Step 2: high-resolution reconstruction of diagnosis-enhanced intracranial aneurysm CT images.
Step S401: from the virtual viewpoint intracranial aneurysm CT images obtained in step 1, 4 images are chosen in total.
Step S402: the chosen images are passed through the Gaussian degradation model to obtain the low-resolution image f1.
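The Gaussian degradation model of step S402 can be sketched as blur-then-subsample (a sketch; the Gaussian width σ and the downsampling factor are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauss_degrade(img, sigma=1.0, factor=2):
    """Gaussian degradation model: low-pass the image with a Gaussian,
    then subsample by `factor` to obtain the low-resolution image f1."""
    blurred = gaussian_filter(img.astype(float), sigma)
    return blurred[::factor, ::factor]

f1 = gauss_degrade(np.ones((8, 8)))
```
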
Step S403: establishing the image training set. Gradient, brightness and edges are selected as the feature vector, and features are extracted from f1. The low-resolution image after feature extraction is denoted f2. Brightness information is chosen as the feature of the high-resolution image, and the high-resolution image after feature extraction is denoted f3. f2 and f3 are partitioned into blocks, producing a low-resolution feature image block training set H2 and a corresponding high-resolution feature image block training set H3. The low-resolution and high-resolution feature image blocks at corresponding positions are put into H2 and H3 respectively, completing the establishment of the image training set Ht.
Step S404: matching and reconstruction. Based on Ht, let Lt be the input low-resolution image to be reconstructed and Wt the reconstructed high-resolution image. Features are first extracted from Lt; the chosen feature quantities must be consistent with those selected in the first step, when the image training set was established. The feature-extracted Lt is partitioned into blocks; each 3 × 3 block is a low-resolution image block to be reconstructed, and matching reconstruction is carried out for each low-resolution feature image block fi. After feature extraction, a matching search is performed: the corresponding 7 × 7 block is taken as the match block, and the search proceeds within the corresponding 21 × 21 search window. The 3 image blocks nearest to fi in Euclidean distance are found in the low-resolution feature image block training set. Since the local manifolds of the low-resolution and high-resolution feature image blocks are similar, the 3 high-resolution feature image blocks hj used for the linear combination are obtained.
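The Euclidean nearest-neighbour search of step S404 can be sketched as follows (a sketch; an exhaustive search over the training blocks stands in for the 21 × 21 windowed search):

```python
import numpy as np

def knn_match(block, training_blocks, k=3):
    """Indices and contents of the k training blocks nearest to `block`
    in Euclidean distance over the flattened pixels."""
    q = block.ravel().astype(float)
    dist = [np.linalg.norm(t.ravel().astype(float) - q) for t in training_blocks]
    order = np.argsort(dist)[:k]
    return list(order), [training_blocks[i] for i in order]

training = [np.full((3, 3), v) for v in (0.0, 5.0, 10.0, 11.0)]
idx, nearest = knn_match(np.full((3, 3), 10.2), training)
```
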
Step S405: obtaining the reconstructed high-resolution image. Using the 3 low-resolution neighbour blocks found by the matching search, the linear representation of the low-resolution feature image block to be reconstructed is obtained, which yields the reconstruction weight coefficients.
where fi denotes the feature of the i-th low-resolution image block to be reconstructed, dj denotes the j-th neighbour block found in the low-resolution training set, Ni is the set of all K low-resolution neighbour blocks found, and wij are the reconstruction weight coefficients. The above formula is solved so that the representation error is minimal, subject to the constraints that the weights wij sum to 1 and that wij = 0 for every block not belonging to the set Ni.
Using these 3 reconstruction weight coefficients and the corresponding high-resolution feature image blocks, the linear combination gives the reconstructed high-resolution feature image block yi, where hj are the high-resolution feature image blocks of the K neighbours found and wij are the reconstruction weight coefficients.
After the reconstructed high-resolution feature image blocks are obtained, the reconstructed high-resolution image is produced by stitching the image blocks together.
The virtual viewpoint module and the super-resolution reconstruction module are combined to form the intracranial aneurysm CT image function module, which gives two-dimensional CT intracranial aneurysm images a three-dimensional visualization and displays the visualized images on the display module of the post-processing workstation.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, without causing the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A stereo display method based on two-dimensional image processing, characterized in that it comprises the following steps:
S1: virtual view synthesis for CT image display: using the image sequence generated by a CT three-dimensional reconstruction model, adjacent pairs of images are processed and virtual viewpoint images are obtained by an image warping method;
S2: super-resolution reconstruction for CT image display: by introducing manifold learning theory, super-resolution reconstruction is performed on the images so that richer details are presented to the user, enabling the original image sequence input to the system and the newly generated virtual viewpoint images to adapt to different display devices and to meet the demands of zoom operations.
2. The CT image processing method according to claim 1, characterized in that step S1 includes:
S11: feature extraction of the CT images;
S12: feature point matching of the CT images;
S13: a plane subdivision and interpolation algorithm for CT images based on computational geometry;
and step S2 includes:
S21: selecting a specified number of images from the virtual viewpoint images obtained in step S1;
S22: passing the chosen images through a Gaussian degradation model to obtain low-resolution images f1;
S23: establishing an image training set;
S24: matching and reconstruction;
S25: obtaining the reconstructed high-resolution image.
3. The CT image processing method according to claim 2, characterized in that step S11 includes:
the Harris corner detection method computes, for every pixel of the image, the derivatives in the vertical and horizontal directions, denoted fx and fy; multiplying fx by fy gives fxfy, making three matrices in total; Gaussian filtering is applied to the three matrices, and the interest value of each point is calculated by the following formulas:
M = G(s̃) ⊗ | fx²    fx·fy |
            | fx·fy  fy²   |

I = det(M) − k·tr(M), k = 0.04
where fx and fy denote the gradients in the x and y directions, G(s̃) is the Gaussian filtering matrix, det is the determinant, k = 0.04 is the weight coefficient, and tr is the trace;
Each element of the matrix I corresponds to the interest value of the corresponding point of the original CT image;
The eigenvalues of the matrix M are the first-order curvatures of the autocorrelation function; all local interest values are extracted from the original CT image, and within each selected local range the pixel corresponding to the maximal interest value is taken as a feature point.
4. The CT image processing method according to claim 3, characterized in that step S12 comprises:
S1201: for each corner obtained in step S11, set up a direction field of size 7 × 7 centred at the corner position; query the colour values at the four vertex positions of the square; if one colour is white and the other three are black, the direction from the corner position toward the white vertex is defined as the direction of the corner; the direction vectors pointing to the upper-left, lower-left, lower-right and upper-right are denoted 1, 2, 3 and 4 respectively; if the vertex colours do not satisfy the one-white/three-black condition, the corner is deleted; every corner thus obtained carries a direction vector;
S1202: from the candidate corner coordinates of the two intracranial-aneurysm images, choose 5 pairs of matching point coordinates, which must uniformly cover the common region of the images, and compute the offset Ec between the two images:
$$E_c = \sum_{i=1}^{5} \frac{P_i - P_i'}{5}$$
where Pi denotes a corner coordinate in view 1 and Pi' the corresponding corner coordinate in view 2;
S1203: for each corner coordinate Pi of view 1, search among the corner coordinates Pi' of view 2 for a matching point coordinate;
abs(E − Ec) < Threshold together with an identical direction vector is the condition for a successful match, where Threshold can be set equal to E, and E is the average of the matching point coordinates;
if the search fails, the corner Pi of view 1 is considered to have no suitable match point in view 2.
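The matching rule of S1202/S1203 can be sketched as follows; this is a hypothetical implementation in which the seed-pair handling, threshold value and tie-breaking are assumptions not fixed by the claim:

```python
import numpy as np

def match_corners(pts1, dirs1, pts2, dirs2, seed_pairs, threshold):
    """Ec is the mean offset over the 5 chosen seed pairs; a pair (Pi, P'j)
    matches when its offset E satisfies |E - Ec| < threshold and the two
    direction vectors (coded 1..4) are identical."""
    seeds1, seeds2 = seed_pairs
    Ec = np.mean(np.asarray(seeds1, float) - np.asarray(seeds2, float), axis=0)
    matches = {}
    for i, (p, d) in enumerate(zip(pts1, dirs1)):
        best, best_err = None, threshold
        for j, (q, e) in enumerate(zip(pts2, dirs2)):
            if d != e:                       # direction vectors must agree
                continue
            E = np.asarray(p, float) - np.asarray(q, float)
            err = np.linalg.norm(E - Ec)
            if err < best_err:               # keep the candidate closest to Ec
                best, best_err = j, err
        if best is not None:
            matches[i] = best                # a failed search leaves Pi unmatched
    return matches
```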
5. The CT image processing method according to claim 4, characterized in that step S13 comprises:
S1301: use the Delaunay triangulation method, which specifically comprises the following steps:
S13011: from the extent of the point set, construct a big triangle containing all the points, and put this big triangle into the triangle list;
S13012: insert the points of the set one by one; while traversing the triangle list, find the two triangles whose circumscribed circles contain the insertion point, called the influence triangles of that point; delete the shared edge of the two influence triangles and connect the insertion point to all vertices of the influence triangles; one vertex is thus inserted into the triangulation;
S13013: merge two triangles sharing an edge into a quadrilateral, take the circumscribed circle of one triangle as the reference circle and, following the largest-empty-circle criterion, inspect the position of the fourth point; if it lies inside, swap the diagonal of the quadrilateral; this completes the local optimization (LOP) processing;
S13014: put the LOP-processed triangles back into the triangle list, and repeat steps S13012 and S13013 until all points are inserted into the vertex set of the triangulation;
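The largest-empty-circle check behind the LOP step S13013 is the standard circumcircle determinant predicate; a sketch, with the triangle vertices assumed to be in counter-clockwise order:

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """Returns True when point d lies strictly inside the circumcircle of
    the counter-clockwise triangle (a, b, c) - i.e. the empty-circle
    criterion is violated and the shared diagonal should be swapped."""
    ax, ay = a; bx, by = b; cx, cy = c; dx, dy = d
    m = np.array([
        [ax - dx, ay - dy, (ax - dx) ** 2 + (ay - dy) ** 2],
        [bx - dx, by - dy, (bx - dx) ** 2 + (by - dy) ** 2],
        [cx - dx, cy - dy, (cx - dx) ** 2 + (cy - dy) ** 2],
    ])
    # positive determinant <=> d inside; small tolerance keeps
    # cocircular points classified as "not inside"
    return float(np.linalg.det(m)) > 1e-9
```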
S1302: perform the image deformation by linear interpolation, which specifically comprises the following steps:
S13021: define the source and target CT images, i.e. the left and right images, as Is and Id; the corner pairs obtained in step S101 are taken as control point pairs; using the 4 vertex pairs of the source and target CT images together with the control point pairs, construct two Delaunay triangulations on the left and right CT images, denoting the left one Ms and the right one Md; apply the same spatial transformation to the left and right control nets, so that every point in Is can be mapped onto Id; since the corners and the triangles of the subdivided meshes correspond one to one, the deformation between the two CT images can be converted into the deformation of all the triangles;
S13022: use inverse warping to deform Ts into Td, where Ts denotes a triangle before deformation and Td the triangle after deformation; let the corresponding vertices of Ts and Td be Ps1, Ps2, Ps3 and P1, P2, P3; the affine transformation is then uniquely determined by these 6 points:
$$\begin{bmatrix} P_{sx} \\ P_{sy} \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} P_{dx} \\ P_{dy} \\ 1 \end{bmatrix} \qquad (5)$$
In formula (5), Pdx and Pdy are the x, y coordinates of a point Pd in Td, and Psx and Psy are the x, y coordinates of the corresponding point Ps in Ts.
Let
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \qquad (6)$$
Then
$$A = \begin{bmatrix} P_{s1x} & P_{s2x} & P_{s3x} \\ P_{s1y} & P_{s2y} & P_{s3y} \end{bmatrix} \begin{bmatrix} P_{1x} & P_{2x} & P_{3x} \\ P_{1y} & P_{2y} & P_{3y} \\ 1 & 1 & 1 \end{bmatrix}^{-1};$$
Step S13023: obtain the predicted image by bilinear interpolation; (Pdx, Pdy) lies on the integer grid, while the mapped source point (Psx, Psy) is in general not integer;
let (Psx0, Psy0) be the integer part of (Psx, Psy), so that dx = Psx − Psx0 and dy = Psy − Psy0;
the Td image can then be determined by the following formula:
I(Pdx, Pdy) = (1 − dx)(1 − dy) I(Psx0, Psy0) + (1 − dx) dy I(Psx0, Psy0 + 1) + dx (1 − dy) I(Psx0 + 1, Psy0) + dx dy I(Psx0 + 1, Psy0 + 1).
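Formulas (5)-(7) and the bilinear rule of step S13023 can be sketched as follows, assuming A is recovered as Ps · Pd_aug⁻¹ (Pd_aug being the destination vertices with a bottom row of ones) and images are indexed img[y, x]:

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve the 2x3 affine A of formula (6) that maps destination-triangle
    vertices (with homogeneous 1) to source-triangle vertices, as in (5)."""
    Ps = np.asarray(src_tri, float).T                       # 2x3, columns Ps1..Ps3
    Pd = np.vstack([np.asarray(dst_tri, float).T, np.ones(3)])  # 3x3 augmented
    return Ps @ np.linalg.inv(Pd)

def bilinear_sample(img, x, y):
    """Bilinear interpolation at a non-integer source point; assumes the
    point's 2x2 neighbourhood lies inside the image (no border handling)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            dx * dy * img[y0 + 1, x0 + 1])
```

Inverse warping then iterates over the integer target pixels of each Td triangle, maps each through A to its source location, and samples Is bilinearly.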
6. The CT image processing method according to claim 5, characterized in that step S23 comprises:
selecting gradient, brightness and border as the feature vector and performing feature extraction on f1;
letting f2 be the low-resolution image after feature extraction; choosing brightness information as the feature of the high-resolution image and letting f3 be the high-resolution image after feature extraction;
extracting blocks from f2 and f3 to generate a low-resolution feature image-block training set H2 and a corresponding high-resolution feature image-block training set H3;
putting the low-resolution and high-resolution feature image blocks at corresponding positions into H2 and H3 respectively, which completes the construction of the image training set Ht.
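A minimal sketch of building the paired training set H2/H3; the patch size and dense overlapping extraction are assumptions, since the claim only fixes that blocks at corresponding positions are paired:

```python
import numpy as np

def make_training_set(f2, f3, patch=3):
    """Build paired patch sets (H2, H3) from the low-resolution feature
    image f2 and high-resolution feature image f3, assumed to share the
    same grid so that patches at the same position correspond."""
    H2, H3 = [], []
    h, w = f2.shape
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            H2.append(f2[y:y + patch, x:x + patch].ravel())
            H3.append(f3[y:y + patch, x:x + patch].ravel())
    return np.asarray(H2), np.asarray(H3)
```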
7. The CT image processing method according to claim 6, characterized in that step S24 comprises:
based on Ht, letting Lt be the input low-resolution image to be reconstructed and Wt the reconstructed high-resolution image;
first performing feature extraction on Lt, where the chosen feature quantities must stay consistent with those selected when the image training set was built in the first step;
dividing the feature-extracted Lt into blocks, taking 3 × 3 blocks as the low-resolution image blocks to be reconstructed, and performing matching reconstruction on each low-resolution feature image block fi therein;
after feature extraction, carrying out the matching search: the corresponding 7 × 7 block is taken as the match block, and the search proceeds within the 21 × 21 search window corresponding to the match block; the 3 image blocks with the smallest Euclidean distance to fi are found in the low-resolution feature image-block training set;
since the local manifolds of low-resolution and high-resolution feature image blocks are similar, this yields the 3 high-resolution feature image blocks hj used for the linear combination.
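The Euclidean matching search over the training set can be sketched as a brute-force nearest-neighbour lookup; the 7 × 7 match-block and 21 × 21 search-window bookkeeping is omitted here for clarity:

```python
import numpy as np

def k_nearest_patches(f_i, H2, k=3):
    """Return indices of the k training patches in H2 with the smallest
    Euclidean distance to the query block f_i (k = 3 in the claim)."""
    dists = np.linalg.norm(H2 - f_i, axis=1)
    return np.argsort(dists)[:k]
```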
8. The CT image processing method according to claim 7, characterized in that step S25 comprises:
using the 3 low-resolution neighbour blocks found by the matching search to obtain the linear representation of the low-resolution feature image block to be reconstructed, giving the reconstruction weight coefficients:
$$W_i = \arg\min_{w_i} \left\| f_i - \sum_{d_j \in N_i} w_{ij} d_j \right\|^2 \qquad \text{s.t.} \quad \sum_j w_{ij} = 1$$
where fi denotes the feature of the i-th low-resolution image block to be reconstructed, dj the j-th neighbour block found in the low-resolution training set, Ni the set of all K low-resolution neighbour blocks found, and wi the reconstruction weight coefficients; solving the above formula minimizes the representation error while requiring the wij to sum to 1, with wij = 0 for every block not belonging to the set Ni;
using these 3 reconstruction weight coefficients and the one-to-one corresponding high-resolution feature image blocks, linear combination yields the reconstructed high-resolution feature image block yi, where hj are the K matched high-resolution feature image blocks and wij the reconstruction weight coefficients:
$$y_i = \sum_{h_j \in N_i} w_{ij} h_j$$
The reconstructed high-resolution feature image blocks are thus obtained, and stitching the image blocks together yields the reconstructed high-resolution image.
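The constrained least-squares weights and the final linear combination can be sketched with the standard neighbour-embedding closed form; the regularizer is an added assumption to handle singular local Gram matrices:

```python
import numpy as np

def reconstruction_weights(f_i, neighbors, reg=1e-8):
    """Solve  w = argmin ||f_i - sum_j w_j d_j||^2  s.t.  sum_j w_j = 1
    in closed form via the local Gram matrix of the neighbour differences."""
    D = f_i - np.asarray(neighbors, dtype=float)   # k x dim differences
    G = D @ D.T                                    # local Gram matrix
    G = G + reg * max(np.trace(G), 1.0) * np.eye(len(D))  # stabiliser
    w = np.linalg.solve(G, np.ones(len(D)))
    return w / w.sum()                             # enforce sum-to-one

def combine_highres(weights, hr_neighbors):
    """y_i = sum_j w_ij h_j: apply the same weights to the matched
    high-resolution feature blocks."""
    return np.tensordot(weights, np.asarray(hr_neighbors, dtype=float), axes=1)
```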
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710344783.9A CN107292865B (en) | 2017-05-16 | 2017-05-16 | Three-dimensional display method based on two-dimensional image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107292865A true CN107292865A (en) | 2017-10-24 |
CN107292865B CN107292865B (en) | 2021-01-26 |
Family
ID=60094062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710344783.9A Active CN107292865B (en) | 2017-05-16 | 2017-05-16 | Three-dimensional display method based on two-dimensional image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107292865B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510443A (en) * | 2018-03-30 | 2018-09-07 | 河北北方学院 | A kind of medical image rebuilds localization method offline |
CN108506170A (en) * | 2018-03-08 | 2018-09-07 | 上海扩博智能技术有限公司 | Fan blade detection method, system, equipment and storage medium |
CN111161137A (en) * | 2019-12-31 | 2020-05-15 | 四川大学 | Multi-style Chinese painting flower generation method based on neural network |
CN112541963A (en) * | 2020-11-09 | 2021-03-23 | 北京百度网讯科技有限公司 | Three-dimensional virtual image generation method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3524147B2 (en) * | 1994-04-28 | 2004-05-10 | キヤノン株式会社 | 3D image display device |
CN102521810A (en) * | 2011-12-16 | 2012-06-27 | 武汉大学 | Face super-resolution reconstruction method based on local constraint representation |
CN103106685A (en) * | 2013-01-16 | 2013-05-15 | 东北大学 | Abdominal viscera three dimension visualization method based on graphic processing unit (GPU) |
CN103108208A (en) * | 2013-01-23 | 2013-05-15 | 哈尔滨医科大学 | Method and system of enhancing display of computed tomography (CT) postprocessing image |
CN104008542A (en) * | 2014-05-07 | 2014-08-27 | 华南理工大学 | Fast angle point matching method for specific plane figure |
Non-Patent Citations (2)
Title |
---|
Sun Xiaonan: "Research on arbitrary virtual viewpoint image synthesis methods based on two viewpoints", China Master's Theses Full-text Database, Information Science and Technology (monthly) * |
Guo Zhongbin: "Generation of intermediate virtual viewpoints of multi-view stereo images based on image warping and depth maps", China Master's Theses Full-text Database, Information Science and Technology (monthly) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||