CN105812772B - Medical image three-dimensional display system and method - Google Patents


Info

Publication number
CN105812772B
Authority
CN
China
Prior art keywords
mark point
image
unit
spatial position
picture
Prior art date
Legal status
Active
Application number
CN201410837180.9A
Other languages
Chinese (zh)
Other versions
CN105812772A (en)
Inventor
姚劲
简培云
包瑞
宫晓达
Current Assignee
Shenzhen Super Technology Co Ltd
Original Assignee
Shenzhen Super Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Super Technology Co Ltd
Priority to CN201410837180.9A
Publication of CN105812772A
Application granted
Publication of CN105812772B


Abstract

The invention belongs to the field of medical technology and provides a medical image three-dimensional display system and method. The system includes a display unit, a light-splitting unit, a tracking device, and an image capturing unit. The light-splitting unit spatially divides the image shown on the display unit into a left view and a right view; the tracking device obtains the position information of a first target object; and the image capturing unit photographs a second target object. The medical image three-dimensional display system further includes an image playback processing unit, which processes the received captured stereoscopic images in real time according to the position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit, and after processing sends them to the display unit for real-time display. Compared with the prior art, the image playback speed of the invention is greatly improved; the requirements of real-time stereoscopic display can be met, surgery is facilitated, and doctors are assisted in improving the operation success rate.

Description

Medical image three-dimensional display system and method
Technical field
The present invention relates to the technical field of medical equipment, and in particular to a medical image three-dimensional display system and display method applied in clinical medicine.
Background technique
Recently, more and more display technologies have been applied to medical equipment, advancing medical technology. For example, medical devices such as diagnostic endoscopy, surgical endoscopy, robotic surgical systems, medical electron microscopes, computed tomography (CT), and magnetic resonance imaging (MRI) all require image display. Therefore, when a doctor operates on a patient, the image of the treated area output on the screen of the medical display can be watched at any time during the operation. Take, for example, parapelvic cyst resection. For the doctor, this kind of operation carries the hidden danger of damage to the kidney, blood vessels, and renal pelvis; heavy bleeding easily occurs during the operation, and the risk of postoperative urine leakage is also high. If laparoscopic surgery is adopted, the many anatomical layers, the complex blood vessels, the great difficulty of the operation, and the lack of depth perception in a two-dimensional display picture all require the operating doctor to have very high laparoscopic skill.
At present, the industry applies glasses-type stereoscopic display to the above operations, and the doctor must wear glasses to watch the stereoscopically displayed images. Since the doctor wears a mask during surgery, fog may form on the 3D glasses as the doctor breathes, obscuring the field of view; moreover, the image seen through 3D glasses appears very dark. Misjudgment then easily occurs, leading to the problems of "mis-cutting" and "over-cutting", and possibly to damage to the patient's internal organs, blood vessels, and nerves.
Therefore, how to overcome the above problems has become a major technical challenge currently facing the medical field.
Summary of the invention
The purpose of the present invention is to provide a medical image three-dimensional display system, aiming to solve one or more of the above technical problems caused by the limitations and disadvantages of the prior art.
A medical image three-dimensional display system provided by the present invention comprises a display unit, a light-splitting unit, a tracking device, and an image capturing unit. The light-splitting unit is located on the display side of the display unit and is used to spatially divide the image shown on the display unit into a left view and a right view; the tracking device is used to obtain the position information of a first target object; and the image capturing unit is used to photograph a second target object. The medical image three-dimensional display system further comprises an image playback processing unit, connected respectively to the tracking device, the display unit, and the image capturing unit. According to the position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit, the image playback processing unit processes in real time the stereoscopic images captured by the image capturing unit and, after processing, sends them to the display unit for real-time display.
The present invention also provides a medical image stereoscopic display method, comprising the following steps: S0, photographing a stereoscopic image of the second target object and sending the information of the stereoscopic image in real time; S1, obtaining the position information of the first target object; S2, obtaining the grating parameters of the light-splitting unit of the display device and the display parameters of the display unit of the display device; S3, processing in real time, according to the position information, the grating parameters, and the display parameters, the received stereoscopic image captured by the image capturing unit; S4, displaying the image to be played.
With the medical image three-dimensional display system and method provided by the present invention, the image playback speed is greatly improved compared with the prior art, the requirements of real-time stereoscopic display can be met, surgery is facilitated, and doctors are assisted in improving the operation success rate.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the three-dimensional display system of Embodiment one of the present invention.
Fig. 2 is a structural schematic diagram of the image playback processing unit in Fig. 1.
Fig. 3 is a structural schematic diagram of the bonding of the light-splitting unit and the display unit in the three-dimensional display system of Embodiment one.
Fig. 4 is a structural schematic diagram of a preferred embodiment of the tracking device in the three-dimensional display system of Embodiment one.
Fig. 5 is a schematic diagram of the specific structure of the acquiring unit in Fig. 4.
Fig. 6 is a schematic diagram of the specific structure of a first variation of the reconstruction unit in Fig. 4.
Fig. 7 is a schematic diagram of the specific structure of a second variation of the reconstruction unit in Fig. 4.
Fig. 8 is a schematic diagram of the specific structure of a third variation of the reconstruction unit in Fig. 4.
Fig. 9 is a structural schematic diagram of a positioning bracket with marker points set corresponding to the first target object in the tracking device of Fig. 4.
Fig. 10 is a flow diagram of the medical image stereoscopic display method of Embodiment two of the present invention.
Fig. 11 is a detailed flow diagram of S1 in Fig. 10.
Fig. 12 is a detailed flow diagram of S12 in Fig. 11.
Fig. 13 is a detailed flow diagram of a first variation of S13 in Fig. 10.
Fig. 14 is a detailed flow diagram of a second variation of S13 in Fig. 10.
Fig. 15 is a detailed flow diagram of a third variation of S13 in Fig. 10.
Fig. 16 is a detailed flow diagram of S3 in Fig. 10.
Specific embodiment
In order that the objects, features, and advantages of the present invention may be better understood, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, in the absence of conflict, the embodiments of the application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention; however, the present invention may also be implemented in ways other than those described here, and therefore the protection scope of the present invention is not limited by the specific embodiments described below.
Embodiment one
Referring to Fig. 1, Fig. 1 is a structural schematic diagram of the medical image three-dimensional display system of the present invention. As shown in Fig. 1, the medical image three-dimensional display system of the present invention includes an image capturing unit 10, a tracking device 30, a light-splitting unit 50, and a display unit 40. The image capturing unit 10 is used to photograph the second target object and send the captured image of the second target object to the image playback processing unit in real time. The tracking device 30 is used to obtain the position information of the first target object. The light-splitting unit 50 is located on the display side of the display unit 40 and spatially divides the image shown on the display unit 40 into a left view and a right view. The medical image three-dimensional display system further includes an image playback processing unit 20, connected respectively to the tracking device 30 and the display unit 40. The image playback processing unit 20 processes the image to be played in real time according to the position information of the first target object, the grating parameters of the light-splitting unit 50, and the display parameters of the display unit 40, and after processing sends it to the display unit 40 for display. The medical image three-dimensional display system is mainly used in at least one of diagnostic endoscopy, surgical endoscopy, robotic surgical systems, medical electron microscopy, CT, and MRI.
Since the tracking device 30 and the display unit 40 are directly connected to the image playback processing unit 20, the image playback processing unit 20 obtains the position information of the first target object, the grating parameters, and the display parameters in time and performs the image processing accordingly, eliminating the processing step through a central processing unit required in the prior art. The image playback speed is thus greatly improved over the prior art, satisfying the requirements of real-time stereoscopic display, facilitating surgery, and assisting doctors in improving the operation success rate: the doctor can obtain an accurate stereoscopic image in time during the operation and operate promptly, so the problems mentioned in the background art do not arise. The above grating parameters mainly include the pitch of the grating, the tilt angle of the grating relative to the display panel, and the placement distance of the grating relative to the display panel. These grating parameters can be stored directly in a memory in the image playback processing unit, or another detection device can detect the grating parameters of the light-splitting unit in real time and send the values to the image playback processing unit 20. The above display parameters include the size of the display unit, the screen resolution of the display unit, and the sub-pixel ordering and arrangement structure of the pixel units of the display unit. The sub-pixel ordering is whether the sub-pixels are arranged in RGB order, RBG order, BGR order, or some other order; the sub-pixel arrangement structure is whether the sub-pixels are arranged vertically or horizontally, for example a periodic RGB arrangement from top to bottom, or a periodic RGB arrangement from left to right.
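As a minimal sketch of how these grating and display parameters might be grouped before being handed to the playback unit, the following uses Python dataclasses; all field names and example values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical containers for the parameters the playback unit consumes;
# field names and values are illustrative, not the patent's.
from dataclasses import dataclass

@dataclass
class GratingParams:
    pitch_mm: float        # pitch of the grating
    tilt_deg: float        # tilt angle relative to the display panel
    distance_mm: float     # placement distance from the display panel

@dataclass
class DisplayParams:
    width_px: int
    height_px: int
    subpixel_order: str    # e.g. "RGB", "RBG", "BGR"
    subpixel_layout: str   # "vertical" or "horizontal" stripes

g = GratingParams(pitch_mm=0.3, tilt_deg=80.0, distance_mm=1.6)
d = DisplayParams(width_px=3840, height_px=2160,
                  subpixel_order="RGB", subpixel_layout="vertical")
```

Keeping these values in plain records mirrors the text's two options: they can sit in the playback unit's own memory, or be filled in at run time from a detection device.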
The image capturing unit 10 is used to photograph the second target object and send the captured image of the second target object to the image playback processing unit in real time. The second target object here mainly refers to the various scenes recorded by the camera, such as the scene of a ball game, the scene of an operation, or images of the interior of a patient's body. The stereoscopic image is captured in real time by the image capturing unit 10 and displayed in real time on the display unit; no additional image processing is needed, the captured scenes are shown promptly and faithfully, the user's demand for real-time display is met, and the user experience is improved. The image capturing unit 10 may include at least one of a monocular camera, a binocular camera, or a multi-lens camera.
When the image capturing unit 10 includes a monocular camera, the stereoscopic image of the second target object is obtained from the monocular camera's shots. Preferably, the monocular camera may use a liquid crystal lens imaging device or a liquid crystal microlens array imaging device. In a specific embodiment, the monocular camera obtains two digital images of the measured object from different angles at different moments, recovers the three-dimensional geometric information of the object based on the parallax principle, and reconstructs the object's three-dimensional contour and position.
When the image capturing unit 10 includes a binocular camera — either two cameras, or one camera with two lenses — the second target object is photographed by the binocular camera to form a stereoscopic image. Specifically, a binocular camera obtains the three-dimensional geometric information of an object from multiple images based mainly on the parallax principle. A binocular stereo vision system generally obtains two digital images of the measured object (the second target object) simultaneously from different angles with twin cameras, recovers the three-dimensional geometric information of the object based on the parallax principle, and reconstructs the object's three-dimensional contour and position.
When the image capturing unit 10 includes a multi-lens camera, i.e., three or more cameras arranged in an array, it is used to obtain stereoscopic images. Several digital images of the second target object are obtained simultaneously from different angles by the three or more cameras; the three-dimensional geometric information of the object is recovered based on the parallax principle, and the object's three-dimensional contour and position are reconstructed.
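The parallax principle invoked in the paragraphs above can be sketched numerically. The following uses the standard pinhole stereo model — depth Z = f·B/d for focal length f, baseline B, and disparity d — which is textbook stereo vision, assumed here rather than quoted from the patent; the function name and numbers are illustrative.

```python
# Depth from disparity under the standard pinhole stereo model (assumed):
#   Z = f * B / d
# f: focal length in pixels, B: baseline between the two cameras (mm),
# d: disparity in pixels of a point matched between left and right images.
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_mm / disparity_px

# A point seen at x_left=320 and x_right=300, with f=800 px and B=60 mm:
z = depth_from_disparity(800.0, 60.0, 320 - 300)  # 2400.0 mm
```

Repeating this over all matched points yields the dense geometric information from which the object's three-dimensional contour and position are reconstructed.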
The image capturing unit 10 further includes an acquisition unit, used to acquire the stereoscopic image of the second target object and extract left-view information and right-view information from the stereoscopic image. One end of the acquisition unit is connected to the above monocular, binocular, or multi-lens camera, and the other end is connected to the image playback processing unit 20. By extracting the left-view and right-view information of the stereoscopic image while it is being shot, the acquisition unit increases the image processing speed and guarantees the display effect of real-time stereoscopic display.
The above tracking device 30 can be a camera and/or an infrared sensor and is mainly used to track the position of the first target object, such as the position of a person's eyes, head, face, or upper body. The number of cameras or infrared sensors is not limited; there can be one or several. The cameras or infrared sensors can be installed on the frame of the display unit, or placed separately at a position convenient for tracking the first target object. In addition, if an infrared sensor is used as the tracking device, an infrared transmitter can also be arranged at a position corresponding to the first target object; by receiving the infrared positioning signal sent by the infrared transmitter, the position information of the first target object is calculated from the relative positional relationship between the infrared transmitter and the first target object.
The above light-splitting unit 50 is arranged on the light-emitting side of the display unit 40 and sends the left view and right view with parallax shown by the display unit 40 separately to the viewer's left and right eyes, from which the viewer synthesizes a stereoscopic image and perceives the effect of stereoscopic display. Preferably, the above light-splitting unit can be a parallax barrier or a lenticular grating. The parallax barrier can be a liquid crystal slit, a solid slit grating sheet, or an electrochromic slit grating sheet, among others; the lenticular grating can be a liquid crystal lens or a solidified liquid crystal lens grating. A solidified liquid crystal lens grating is mainly made by curing liquid crystal onto a sheet with ultraviolet light to form solid lenses, which split the light and direct it to the viewer's left and right eyes. Preferably, the above display unit 40 and light-splitting unit 50 are made into one integrated display device 60, which is the display portion of the entire medical image three-dimensional display system; it can be assembled together with the aforementioned image playback processing unit and tracking device, or exist independently as a separate part. For example, according to viewing needs, the display device 60 alone can be placed at a position convenient for viewing, while the image playback processing unit 20 and the tracking device 30 can each be devices with standalone functions; these devices are assembled at the time of use to realize the real-time three-dimensional display function of the present invention. For example, the image playback processing unit 20 can be a VMR 3D playback device, which itself has a 3D playback processing function and establishes connections with the other devices so as to be assembled into the medical image three-dimensional display system of the present invention.
The above image playback processing unit 20 processes the image to be played in real time according to the position information of the first target object tracked by the tracking device 30, the grating parameters of the light-splitting unit 50, and the display parameters of the display unit. Referring to Fig. 2, the image playback processing unit 20 further comprises:
an arrangement parameter determining module 201, which calculates the pixel arrangement parameters on the display unit according to the obtained position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit;
a parallax image arrangement module 202, for arranging the parallax image on the display unit according to the arrangement parameters, the parallax image being generated by spatially dividing the left-eye image and the right-eye image; and
a parallax image playing module 203, which plays the parallax image: after the arranged parallax image is received, it is played, and the viewer sees the displayed stereoscopic image on the display unit in real time.
Further, the image playback processing unit 20 also includes a stereoscopic image obtaining module 204, which obtains the stereoscopic image information shot by the image capturing unit 10, i.e., the left-view and right-view information of the stereoscopic image. A stereoscopic image includes a left view and a right view; therefore, for a stereoscopic image to be played, the image information of the left view and the right view must first be obtained before the arrangement processing can be carried out.
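The arrangement step performed by modules 201–203 can be caricatured as interleaving left-view and right-view columns, with the pattern shifted as the tracked viewer moves. The whole-column scheme and the `phase` parameter are illustrative assumptions — a real system arranges at sub-pixel granularity along the tilted grating — but the sketch shows the idea of position-dependent view arrangement.

```python
def arrange_views(left, right, phase=0):
    """Column-interleave two equal-size views (lists of rows) into one frame.
    Columns where (c + phase) is even take the left view, the others the
    right view; the grating then steers each set to the matching eye.
    Illustrative only: real arrangement works per sub-pixel."""
    assert len(left) == len(right) and len(left[0]) == len(right[0])
    frame = []
    for lrow, rrow in zip(left, right):
        frame.append([lrow[c] if (c + phase) % 2 == 0 else rrow[c]
                      for c in range(len(lrow))])
    return frame

L = [[0] * 6 for _ in range(2)]     # left view: all 0
R = [[255] * 6 for _ in range(2)]   # right view: all 255
frame = arrange_views(L, R, phase=0)
# each frame row is [0, 255, 0, 255, 0, 255]
```

When the tracker reports that the viewer has moved by half a viewing zone, module 201 would recompute `phase` so that each eye keeps receiving its own view.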
Embodiment 1
In Embodiment 1 of the present invention, to obtain a good real-time stereoscopic display effect, the light-splitting unit and the display unit must be optically designed according to the grating parameters of the light-splitting unit and the display parameters of the display unit. The optical design follows the formula:
(3) m*p=p-pitch
In the above formula, F is the distance between the light-splitting unit and the display unit (i.e., the placement distance of the grating relative to the display panel among the grating parameters mentioned above); L is the distance between the viewer and the display unit; IPD is the matching pupil distance, i.e., the usual distance between a person's two eyes, with a typical value of 62.5 mm; l-pitch is the pitch of the light-splitting unit; p-pitch is the pixel arrangement pitch on the display unit; n is the number of stereoscopic views; m is the number of pixels covered by the light-splitting unit; and p is the dot pitch of the display unit. The dot pitch here mainly refers to the size of one pixel unit (one of the display parameters), a pixel unit usually including the three sub-pixels R, G, and B. To eliminate moiré fringes, the light-splitting unit is generally rotated by a certain angle when attached (i.e., the light-splitting unit has a certain tilt angle relative to the display unit); therefore, the actual pitch of the light-splitting unit is given by the following formula:
(4) Wlens=l-pitch*sin θ
where Wlens is the actual pitch of the light-splitting unit, and θ is the tilt angle of the light-splitting unit relative to the display panel (one of the grating parameters mentioned above).
As described above, for the distance F between the light-splitting unit and the display unit: when the medium between the display unit and the light-splitting unit is air, F equals the actual distance between the light-splitting unit and the display unit; when the medium between them is a transparent medium with refractive index n (n greater than 1), F equals the actual distance between the light-splitting unit and the display unit divided by the refractive index n; and when there are several different media between the display unit and the light-splitting unit, with refractive indices n1, n2, n3 (each greater than or equal to 1), then F = s1/n1 + s2/n2 + s3/n3, where s1, s2, s3 are the thicknesses of the respective media.
By configuring the light-splitting unit and the display unit according to the above optical formulas, moiré fringes can be reduced and the real-time stereoscopic display effect improved.
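As a rough numerical sketch of how these quantities interact, the following combines formulas (3) and (4) with the textbook similar-triangle relations of lenticular/barrier displays linking F, L, IPD, l-pitch, and p-pitch. Those two relations (`F = L*p_pitch/IPD` and the lens-pitch formula), along with all function names and example values, are assumptions for illustration, not the patent's verbatim design equations.

```python
import math

def design(L_mm, IPD_mm, p_mm, n_views, m_pixels, theta_deg):
    """Sketch of the optical design quantities for a tilted-grating display.
    Assumed textbook geometry, not the patent's verbatim formulas:
      p_pitch = m * p                       (eq. 3)
      F       = L * p_pitch / IPD           (assumed view-separation relation)
      l_pitch = n * p_pitch * L / (L + F)   (assumed viewpoint-repeat relation)
      W_lens  = l_pitch * sin(theta)        (eq. 4)
    """
    p_pitch = m_pixels * p_mm
    F = L_mm * p_pitch / IPD_mm
    l_pitch = n_views * p_pitch * L_mm / (L_mm + F)
    w_lens = l_pitch * math.sin(math.radians(theta_deg))
    return {"F": F, "p_pitch": p_pitch, "l_pitch": l_pitch, "W_lens": w_lens}

def effective_F(layers):
    """Effective optical distance through stacked media: F = sum(s_i / n_i),
    with layers given as (thickness_mm, refractive_index) pairs."""
    return sum(s / n for s, n in layers)

if __name__ == "__main__":
    d = design(L_mm=500.0, IPD_mm=62.5, p_mm=0.1,
               n_views=2, m_pixels=2, theta_deg=80.0)
    print(d)
    print(effective_F([(0.5, 1.0), (0.7, 1.5)]))  # air gap + glass layer
```

`effective_F` corresponds directly to the multi-media rule F = s1/n1 + s2/n2 + s3/n3 stated above; the `design` helper merely shows that all the listed parameters feed one consistent geometric model.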
In addition, in a variant embodiment, a bonding unit is arranged between the light-splitting unit and the display unit. Referring to Fig. 3, Fig. 3 is a schematic diagram of the bonding structure of the light-splitting unit and the display unit in the medical image three-dimensional display system of Embodiment one of the present invention. As shown in Fig. 3, a bonding unit is arranged between the light-splitting unit 50 and the display unit 40, the three resembling a "sandwich" structure. The bonding unit includes a first substrate 42, a second substrate 43, and an air layer 41 between the first substrate 42 and the second substrate 43. The air layer 41 is sealed between the first substrate 42 and the second substrate 43 to prevent the air from escaping. The first substrate 42 is bonded to the display panel and can be made of transparent glass, transparent resin, or similar materials. The second substrate 43 is arranged opposite the first substrate 42, and its side facing away from the first substrate 42 is used for bonding the light-splitting unit 50. Arranging a bonding unit of the above structure between the light-splitting unit 50 and the display unit 40 both guarantees the flatness of the grating bonding for a large-screen stereoscopic display device and reduces the weight of the whole three-dimensional display device, preventing the risk of the screen cracking in a fall because it is overweight, as can happen when pure glass is used.
Embodiment 2
In the present Embodiment 2, the tracking device 30 includes a camera, which photographs the first target object. There can be one or more cameras, and they can be arranged on the display unit or set up separately. In addition, the camera can be a monocular camera, a binocular camera, or a multi-lens camera.
In addition, the tracking device 30 can also include an infrared receiver; correspondingly, an infrared transmitter is provided for the first target object. The infrared transmitter may be arranged at a position corresponding to the first target object, or on another object whose position is relatively fixed with respect to the first target object. The infrared receiver receives the infrared signal sent by the infrared transmitter corresponding to the first target object, and the first target object is located by a common infrared positioning method.
In addition, the above tracking device 30 can also use a GPS positioning module, which sends the position information to the image playback processing unit 20.
Embodiment 3
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of a preferred embodiment of the tracking device in the medical image three-dimensional display system of Embodiment one of the present invention. As shown in Fig. 4, Embodiment 3 of the present invention also proposes another tracking device 30, which includes:
a marker point setting unit 1, for setting marker points corresponding to the spatial position of the first target object. The marker points here can be set on the first target object, or not on the first target object but on an object that has a relative positional relationship with the first target object and can move synchronously with it. For example, if the first target object is the human eyes, marker points can be set around the eye sockets; or glasses can be fitted around the eyes and the marker points placed on the frame of the glasses; or a marker point can be placed on the ear of the person, whose position is relatively fixed with respect to the eyes. The marker point can be a signal-transmitting component such as an infrared emission sensor, an LED lamp, a GPS sensor, or a laser positioning sensor, or another physical marker that can be captured by a camera, e.g., an object with shape features and/or color features. Preferably, to avoid interference from ambient stray light and improve the robustness of marker point tracking, an infrared LED lamp with a narrow spectrum is used as the marker point, together with an infrared camera that can only capture the spectrum used by the infrared LED marker point. Considering that ambient stray light is mostly irregular in shape and uneven in luminance distribution, the marker point can be arranged to emit a light spot of regular shape, with high luminous intensity and uniform brightness. In addition, several marker points can be arranged so that their light spots form a regular geometric shape, such as a triangle or a quadrangle, making the marker points easy to track, their spatial position information easy to obtain, and the light-spot extraction more accurate;
an acquiring unit 2, for obtaining the position information of the marker points. This can be done by receiving the signals emitted by the marker points to determine their positions, or by shooting images containing the marker points with a camera, extracting the marker points from the images, and obtaining their position information with an image processing algorithm; and
a reconstruction unit 3, for reconstructing the spatial position of the first target object according to the position information of the marker points. After the position information of the marker points is acquired, the spatial position of the marker points is reconstructed; then, according to the relative positional relationship between the marker points and the first target object, the spatial position of the marker points is transformed into the spatial position of the first target object (for example, the spatial positions of a person's left and right eyes).
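Under the assumption that the offset between a marker and the tracked target is fixed and measured in advance (e.g., glasses-frame marker to the midpoint between the eyes), the transformation step of the reconstruction unit can be sketched as a simple translation; names and numbers are illustrative.

```python
def reconstruct_target(marker_xyz, offset_xyz):
    """Translate a reconstructed marker position to the tracked target
    (e.g. glasses-frame marker -> midpoint between the viewer's eyes)
    using a fixed, pre-measured offset. Illustrative sketch only."""
    return tuple(m + o for m, o in zip(marker_xyz, offset_xyz))

# Marker at (100, 50, 600) mm; eyes sit 30 mm below and 10 mm behind it:
eyes = reconstruct_target((100.0, 50.0, 600.0), (0.0, -30.0, 10.0))
# eyes == (100.0, 20.0, 610.0)
```

A rigid-body version would apply a full rotation-plus-translation; the pure translation above is the simplest case of the "relative positional relationship" the text describes.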
The tracking device 30 of the embodiment of the present invention obtains the position information of the marker points corresponding to the first target object and reconstructs the spatial position of the first target object from that information. Compared with prior-art eye-tracking devices that use a camera and require two-dimensional image feature analysis to obtain the eye position, or other eye-tracking devices that obtain the eye position using the reflection of the human iris, it has the advantages of good stability, high accuracy, and low cost, and places no requirement on the distance between the tracking device and the first target object.
Referring to Fig. 5, Fig. 5 is a schematic diagram of the specific structure of the acquiring unit in Fig. 4. The aforementioned acquiring unit further includes:
a presetting module 21, for presetting a standard image in which reference marker points are set, and obtaining the space coordinates and plane coordinates of the reference marker points. The standard image can, for example, be acquired by an image capture device to obtain the image coordinates of the reference marker points, while other accurate spatial measurement equipment, such as laser scanners or structured-light scanners (e.g., Kinect), obtains the space coordinates and plane coordinates of the reference marker points in the standard image;
an obtaining module 22, for obtaining a current image containing the first target object and the marker points, together with the plane coordinates of the marker points in the current image; and
a matching module 23, for matching the marker points in the current image with the reference marker points in the standard image. Here, a correspondence is first established between the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image, and then the marker points are matched with the reference marker points.
By setting a standard image and reference marker points, a reference is available when obtaining the spatial position from the current image, which further ensures the stability and accuracy of the target tracking device of the embodiment of the present invention.
Further, the tracking device 30 also comprises:

A collecting unit, configured to collect the mark points;

A screening unit, configured to screen target mark points from the collected mark points.
Specifically, when there are multiple mark points, the camera collects all mark points corresponding to the first target object, and the mark points most relevant to the first target object are selected from among them; a corresponding image-processing algorithm then extracts the mark points from the image, the extraction being performed according to the features of the mark points. In general, the feature-extraction method applies a feature-extraction function H to the image I to obtain a feature score for each point in the image, and filters out the mark points whose feature scores are sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))

F = {arg_(x, y) (S(x, y) > s0)}
In the above formulas, H is the feature-extraction function; I(x, y) is the image value of each pixel (x, y), which may be a gray value, a three-channel color energy value, or the like; S(x, y) is the feature score of pixel (x, y) after feature extraction; s0 is a feature-score threshold, any S(x, y) greater than s0 being regarded as a mark point; and F is the set of mark points. Preferably, this embodiment of the present invention uses infrared mark points and an infrared camera, whose energy features in the image are more distinct. Because narrow-band LED infrared lamps and a matching infrared camera are used, most pixels of the camera image have very low energy, and only the pixels corresponding to the mark points have high energy. The corresponding function H(x, y) can therefore perform region growing on the threshold-segmented image B(x, y) to obtain several sub-images, and extract the centroid of each sub-image. Meanwhile, since ambient light can produce stray-light images in the infrared camera, constraints such as the spot area occupied by a mark point and the positional relationships of the mark points in the two-dimensional image can be added during infrared mark-point extraction to screen the extracted mark points.
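The extraction pipeline described above (threshold segmentation, region growing into sub-images, centroid extraction, and spot-area screening) can be sketched as follows. The function name, parameter values, and the 4-connected region growing are illustrative assumptions, not the patent's specified implementation:

```python
import numpy as np

def extract_marker_centroids(image, s0=200, min_area=2, max_area=500):
    """Threshold the image at s0, region-grow connected bright pixels into
    blobs (sub-images), screen blobs by spot area, and return the centroid
    of each surviving blob as an (x, y) tuple."""
    binary = image > s0                      # threshold segmentation B(x, y)
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    centroids = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                # region growing (4-connected flood fill) collects one blob
                stack, blob = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] \
                                and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                # spot-area constraint screens out stray-light artifacts
                if min_area <= len(blob) <= max_area:
                    ys, xs = zip(*blob)
                    centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

With a narrow-band infrared camera, almost all background pixels fall below s0, so the surviving blobs correspond to the LED mark points; the area bounds reject single-pixel noise and large stray-light patches.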
When the number of cameras is greater than 1, the mark points in the images obtained by different cameras at the same moment (or nearly the same moment) need to be matched, to provide the conditions for the subsequent three-dimensional reconstruction of the mark points. The matching method depends on the feature-extraction function H. Classical gradient- and grayscale-based feature-point extraction operators such as Harris, SIFT, or FAST, together with the matching methods that accompany them, can be used to obtain and match mark points. The epipolar constraint, prior conditions on the mark points, and similar means can also be used for matching. The matching-screening method using the epipolar constraint is as follows: according to the principle that the projections of the same spatial point on the images of two different cameras lie in the same plane, for a mark point p0 in some camera c0 an epipolar-line equation can be computed in the other camera c1, and the mark point p1 on the other camera c1 corresponding to p0 must satisfy the following relation:
[p1; 1]^T F [p0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. By using this relation, the number of candidates for mark point p1 can be greatly reduced and the matching accuracy improved.
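The epipolar screening can be sketched as follows, assuming the fundamental matrix F is already known from calibration; the tolerance value and the normalized point-to-line residual test are illustrative choices:

```python
import numpy as np

def prune_by_epipolar(p0, candidates, F, tol=1e-3):
    """Keep only candidate points p1 in camera c1 whose homogeneous
    coordinates satisfy [p1;1]^T F [p0;1] ~ 0 for the point p0 in c0."""
    x0 = np.array([p0[0], p0[1], 1.0])
    line = F @ x0                       # epipolar line of p0 in camera c1
    kept = []
    for p1 in candidates:
        x1 = np.array([p1[0], p1[1], 1.0])
        # distance of p1 to the epipolar line, normalized by the line normal
        residual = abs(x1 @ line) / np.hypot(line[0], line[1])
        if residual < tol:
            kept.append(p1)
    return kept
```

For a rectified pair the fundamental matrix reduces to the skew form enforcing equal y-coordinates, so this check degenerates into the row-matching constraint described below.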
In addition, the prior conditions on the mark points that can be used include their spatial order, their size, and the like. For example, according to the mutual positions of the two cameras, the images they capture can be warped so that the two pixels corresponding to each same spatial point are equal in some dimension, such as the y-axis; this process is also called image rectification. Mark-point matching can then be performed by x-axis order: the smallest x corresponds to the smallest x, and so on, with the largest x corresponding to the largest x.
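Under the stated assumptions (rectified images and the same number of detected mark points in both views), the x-order matching described above reduces to a sort-and-pair step:

```python
def match_by_x_order(pts_left, pts_right):
    """After rectification, corresponding points share a y-coordinate, so
    matching reduces to pairing by x-order: smallest x with smallest x,
    and so on up to largest x with largest x."""
    # tuples sort by x first, so sorting orders each view's points along x
    return list(zip(sorted(pts_left), sorted(pts_right)))
```

This only holds when no mark point is occluded in either view; otherwise the epipolar or prior-condition screening above must resolve the correspondence first.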
The target tracking device of the present invention is discussed in detail below according to the number of tracking cameras.
Refer to Fig. 6, which shows a detailed structural diagram of the reconstruction unit in Fig. 4. As shown in Fig. 6, in this embodiment, when the first target object tracked by the tracking device 30 corresponds to no more than four mark points and a monocular camera is used to obtain the position information of the mark points, the reconstruction unit further comprises:

A first computing module 31, configured to compute the homography between the current image and the standard image according to the plane coordinates of the mark points in the current image, the plane coordinates of the reference mark points in the standard image, and an assumed condition on the scene in which the first target object is located. The mark points of the current image are matched with the reference mark points of the standard image, and the homography between the current image and the standard image is computed from the plane coordinates of each. The homography here is the homography of projective geometry, a transform commonly applied in the field of computer vision.

A first reconstruction module 32, configured to compute, according to the homography, the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current moment, then compute the spatial positions of the mark points at the current moment, and compute the current spatial position of the first target object from the spatial positions of the mark points at the current moment.
Specifically, the assumed condition on the scene may be that some dimension remains constant under the rigid transformation of the mark points. For example, in a three-dimensional scene with space coordinates x, y, z, where x and y are respectively parallel to the x-axis and y-axis of the camera's image (plane) coordinates and z is perpendicular to the camera's image, the assumed condition may be that the mark points' coordinates on the z-axis are constant, or that their coordinates on the x-axis and/or y-axis are constant. Different scene assumptions call for somewhat different estimation methods. As another example, under the assumption that the rotation angle between the orientation of the first target object and the orientation of the camera remains constant during use, the current spatial position of the first target object can be inferred from the ratio between the mutual distances of the mark points in the current image and their mutual distances in the standard image.
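The last assumed condition (constant rotation between target and camera) can be sketched under a pinhole model, where the image spacing of a rigid marker pattern scales inversely with depth; the function name and the pairwise-distance averaging scheme are illustrative assumptions:

```python
import numpy as np

def depth_from_marker_spacing(std_pts, cur_pts, z_std):
    """Infer the current depth of a rigid marker pattern from the ratio of
    its mutual image distances in the standard image (taken at known depth
    z_std) and the current image: spacing halves when depth doubles."""
    def mean_spacing(pts):
        pts = np.asarray(pts, dtype=float)
        d = [np.linalg.norm(pts[i] - pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
        return np.mean(d)
    # pinhole model: z_cur = z_std * d_std / d_cur
    return z_std * mean_spacing(std_pts) / mean_spacing(cur_pts)
```

The lateral position of the target then follows from the pattern's image centroid and the standard pinhole back-projection at the inferred depth.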
With the above computation method, the spatial position of the first target object can be reconstructed with a monocular camera when there are no more than four mark points; the operation is simple and the tracking result fairly accurate, and because only a monocular camera is used, the cost of tracking the first target object is reduced.

In the above method of recovering the three-dimensional coordinates of an object from images acquired by a single camera, the acquired images carry relatively little information, so the number of mark points needs to be increased to provide more image information for computing the three-dimensional coordinates of the object. According to machine-vision theory, inferring the stereo information of a scene from a single image requires at least five calibration points to be determined in the image. The monocular scheme therefore increases the number of mark points and with it the complexity of the design; at the same time, however, only one camera is required, which reduces the complexity of image acquisition and lowers the cost.
Refer to Fig. 7, which shows a detailed structural diagram of a second variant embodiment of the reconstruction unit in Fig. 4. As shown in Fig. 7, in this embodiment, when there are five or more mark points and a monocular camera is used to obtain the position information of the mark points, the reconstruction unit further comprises:

A second computing module 33, configured to compute the homography between the current image and the standard image according to the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points in the standard image.

A second reconstruction module 34, configured to compute, according to the homography, the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current moment, then compute the spatial positions of the mark points at the current moment, and compute the current spatial position of the first target object from the spatial positions of the mark points at the current moment.
First, a standard image is acquired, the spatial positions of the reference mark points are measured with a device such as a precise depth camera or laser scanner, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference mark points at that moment are obtained.

In use, the camera continuously captures the two-dimensional image coordinates of all mark points in the current image containing the first target object, and the rigid transformation between the mark points in the current state and the mark points at the moment the standard image was shot is computed from these two-dimensional image coordinates and the two-dimensional coordinates of the reference mark points of the standard image, on the assumption that the relative positions between the mark points do not change. The transformation of the mark points' spatial positions relative to the standard image is thereby obtained, from which the current spatial positions of the mark points are computed.

Here, five or more points can be used to compute the rigid transformation [R | T] between the spatial positions of the current mark points and of the mark points at the moment the standard image was shot. Preferably, the five or more points are not all in one plane, and the projection matrix P of the camera is calibrated in advance. The concrete way of computing [R | T] is as follows:
Let X0 and Xi be the homogeneous coordinates of each mark point in the standard image and the current image respectively. The two satisfy the epipolar constraint X0 P^{-1} [R | T] P = Xi. All the mark points together form a system of equations whose unknown parameter is [R | T]. When the number of mark points is greater than 5, [R | T] can be solved; when the number of mark points is greater than 6, an optimal solution for [R | T] can be sought, for example by singular value decomposition (SVD) and/or by computing a nonlinear optimal solution iteratively. After the spatial positions of the mark points are computed, the spatial position of the first target object (e.g. the human eyes) can be inferred from the pre-calibrated mutual positions between the mark points and the first target object.
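The SVD route mentioned above can be sketched with the Kabsch algorithm, which recovers the rigid transformation [R | T] between two sets of corresponding mark-point positions. This is a stand-in under stated assumptions (the mark points' spatial positions at both moments are already known), not the patent's exact solver:

```python
import numpy as np

def rigid_transform_svd(X_std, X_cur):
    """Given spatial positions of the same mark points at the standard-image
    moment (X_std, Nx3) and now (X_cur, Nx3), recover R (3x3) and T (3,)
    with X_cur ~ R @ X_std + T, via SVD of the cross-covariance (Kabsch)."""
    X_std = np.asarray(X_std, float)
    X_cur = np.asarray(X_cur, float)
    c0, c1 = X_std.mean(axis=0), X_cur.mean(axis=0)
    H = (X_std - c0).T @ (X_cur - c1)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # exclude reflections
    R = Vt.T @ D @ U.T
    T = c1 - R @ c0
    return R, T
```

Non-coplanar points, as the text prefers, keep the cross-covariance full-rank so the rotation is uniquely determined.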
This embodiment uses only one camera together with five or more mark points, yet can accurately reconstruct the spatial position of the first target object; it is both easy to operate and low in cost.
Refer to Fig. 8, which shows a detailed structural diagram of a third variant embodiment of the reconstruction unit in Fig. 4. As shown in Fig. 8, this embodiment uses two or more cameras and one or more mark points. When a binocular camera or multi-lens camera is used to obtain the position information of the mark points, the reconstruction unit further comprises:

A third computing module 35, configured to compute the spatial position of each mark point at the current moment using binocular or multi-view stereo reconstruction principles. The binocular or trinocular reconstruction principle can use, for example, the disparity between mark points matched across the left and right cameras to compute the spatial position of each mark point at the current moment, or it can be realized with other existing common methods.

A third reconstruction module 36, configured to compute the current spatial position of the first target object according to the spatial positions of the mark points at the current moment.

Specifically, the mutual positions between the cameras are first calibrated using a multi-camera calibration method. In use, the mark-point coordinates are extracted from the image obtained by each camera, the mark points are matched to obtain each mark point's correspondence across the cameras, and the spatial positions of the mark points are then computed from the matched mark points and the mutual positions of the cameras.
In a specific example, the mark points are shot with a multi-lens camera (i.e. the number of cameras is greater than or equal to 2) to realize stereo reconstruction. Knowing the coordinate u of a mark point on the image shot by a certain camera and that camera's parameter matrix M, a ray can be computed on which this mark point lies in space:
α_j u_j = M_j X,  j = 1 … n  (where n is a natural number greater than or equal to 2)
Similarly, according to the above formula, this mark point also yields a corresponding ray for each of the other cameras. In theory these rays converge at one point, namely the spatial position of this mark point. In practice, because of the digitization error of the cameras and errors in the calibration of the intrinsic and extrinsic parameters, the rays do not converge at a single point, so the spatial position of the mark point must be approximated by the method of triangulation. For example, a least-squares criterion can be used to determine the point nearest to all the rays as the object point.
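The least-squares criterion can be sketched as the point minimizing the summed squared distance to all rays; the closed-form normal equations below are one common way to realize it, assuming the ray origins (camera centers) and directions are known:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Find the point X minimizing the summed squared distance to a set of
    rays (origin o_j, direction d_j), by solving the normal equations
    sum_j (I - d_j d_j^T) X = sum_j (I - d_j d_j^T) o_j."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

When the rays happen to intersect exactly, the solution is their common point; otherwise it is the least-squares compromise described in the text.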
After the spatial positions of the mark points are computed, the spatial position of the first target object (e.g. the human eyes) can be deduced from the pre-calibrated mutual positions between the mark points and the first target object.

Among the above methods of realizing stereo reconstruction with a multi-lens camera, a preferable method is computation with a binocular camera. Its principle is the same as the multi-lens reconstruction principle above: the spatial positions of the mark points are computed from the mutual positions of the two cameras and the two-dimensional coordinates of the mark points in the two camera images. The slight difference is that the binocular cameras are placed in parallel and, after a simple calibration, the images of the two cameras are rectified as described earlier, so that two mutually matched two-dimensional mark points are equal on the y-axis (or x-axis); the depth of a mark point from the cameras can then be computed from the gap between the rectified two-dimensional mark points on the x-axis (or y-axis). This method can be regarded as the specialization of multi-view stereo reconstruction to the binocular case; it simplifies the steps of stereo reconstruction and is easier to realize in device hardware.
Embodiment 4
Refer to Fig. 9, which shows a structural diagram of the locating bracket on which mark points are set corresponding to the first target object in the tracking device of Fig. 4. As shown in Fig. 9, the present invention provides a locating bracket that sits in front of the human eyes (the first target object); its structure resembles glasses and it is worn like glasses. It comprises a crossbeam 11, fixing portions 12, a support portion 13, and a control portion 14. The crossbeam 11 is provided with mark points 111; the support portion 13 is set on the crossbeam 11; the fixing portions 12 are pivotally connected to the ends of the crossbeam 11. The positions at which the mark points 111 are set correspond to the position of the human eyes (the first target object): the spatial position information of the mark points 111 is obtained, and the spatial position information of the eyes is computed from it accordingly. When the person's head moves, the mark points 111 corresponding to the eyes move with it; the camera tracks the movement of the mark points 111, the spatial position information of the mark points 111 is obtained using the target-object tracking scheme of the aforementioned Embodiment One, and the spatial position of the human eyes (the first target object), i.e. their three-dimensional coordinates in space, is reconstructed from the relative spatial positions of the mark points 111 and the eyes.
In this embodiment, the crossbeam 11 is an elongated strip with a certain curvature, close to the curvature of a person's forehead, for convenience of use. The crossbeam 11 includes an upper surface 112, a lower surface corresponding to it, and a first surface 114 and a second surface set between the upper surface 112 and the lower surface.

In this embodiment, the mark points 111 are three LED lamps, evenly spaced on the first surface 114 of the crossbeam 11. It can be understood that there may also be one, two, or more than three mark points 111, and that any light source can be used, including LED lamps, infrared lamps, ultraviolet lamps, and the like. The arrangement and positions of the mark points 111 can also be adjusted as needed.

It can be understood that the crossbeam 11 can also be designed to be straight or of other shapes as needed.
In this embodiment, there are two fixing portions 12, pivotally connected to the two ends of the crossbeam 11 respectively. The two fixing portions 12 can fold inward toward each other, and each can unfold outward from the crossbeam 11 to an interior angle of about 100°; specifically, the size of the interior angle can be adjusted according to practical operating demands. It should be understood that there may also be only one fixing portion 12.

The end of each fixing portion 12 far from the crossbeam 11 is bent along the extending direction of the support portion 13, so that the ends of the fixing portions 12 can be fixed on a person's ears.

In this embodiment, the support portion 13 is strip-shaped, set at the middle of the lower surface 113 of the crossbeam 11 and extending downward. Further, the end of the support portion 13 far from the crossbeam 11 is provided with a nose pad 131, used to fit the locating device to the bridge of the nose and set the locating device above the eyes. It should be understood that in other embodiments, if no nose pad 131 is provided, the support portion 13 can be made Y-shaped, extending downward from the middle of the crossbeam 11, so as to fit the locating device to the bridge of the nose and set it above the eyes.

The control portion 14 is a rounded cuboid set on a fixing portion 12. The control portion 14 is used to supply power to the LED lamps, infrared lamps, or ultraviolet lamps and/or to control the use state of the LED lamps, infrared lamps, or ultraviolet lamps, and comprises a power switch 141, a power indicator lamp, and a charging indicator lamp. It can be understood that the shape of the control portion 14 is not limited: it can have any shape, or be an integrated chip, and it can also be set at other positions, for example on the crossbeam 11.

In use, when the power switch 141 is turned on, the power indicator lamp shows that the LEDs are powered and the LED lamps light up; when the battery is low, the charging indicator lamp prompts that power is insufficient; when the power switch is turned off, the power indicator lamp goes out, indicating that the LEDs are off, and the LED lamps are extinguished.

Since the interpupillary distance of people ranges from 58 mm to 64 mm, it can be approximated as a fixed value. The locating bracket provided by the present invention resembles a spectacle frame and is fixed above the eyes; the mark points are set at predetermined positions of the locating device as needed, so that the position of the eyes can be determined simply and conveniently from the positions of the mark points. The locating device is simple in structure and easy to design and use.
Embodiment two
Refer to Figure 10 to Figure 13. Figure 10 is a flow diagram of the medical image stereoscopic display method of Embodiment Two of the present invention; Figure 11 is a detailed flow diagram of S1 in Figure 10; Figure 12 is a detailed flow diagram of S12 in Figure 11; and Figure 13 is a detailed flow diagram of S3 in Figure 10. As shown in Figures 10 to 13, the medical image stereoscopic display method of Embodiment Two of the present invention mainly comprises the following steps:
S0, an image shooting step: shoot the stereo image of the second target object and send, in real time, the information of the stereo image of the second target object that was shot, the information including left-view information and right-view information.

S1, obtain the position information of the first target object; for example, track the position of the first target object, such as the position of the viewer, with the tracking device.

S2, obtain the grating parameters of the light-splitting unit of the stereoscopic display device and the display parameters of the display unit. The grating parameters of the light-splitting unit mainly include parameters such as the pitch of the grating, the tilt angle of the grating relative to the display panel, and the placement distance of the grating relative to the display panel.

S3, process in real time, according to the position information, the grating parameters, and the display parameters, the received stereo image shot by the image shooting unit. Before a stereo image is played, it needs to be processed in combination with the position information of the eyes, the grating parameters, and the display parameters of the display unit, in order to provide the viewer with the best stereoscopic display effect.

S4, display the image to be played.
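As a hedged illustration of the processing in step S3: for a slanted-grating light-splitting unit, each subpixel can be assigned to the left or right view from the grating pitch, tilt, and a phase offset that is shifted as the tracked eye position changes. This is the common autostereoscopic interleaving formula, stated under assumed parameter names, not the patent's own algorithm:

```python
def view_for_subpixel(x, y, pitch_sub, tilt, x_offset, n_views=2):
    """Assign subpixel column x in row y to a view index: the grating phase
    at the subpixel is (x + x_offset - y*tilt) mod pitch_sub, where
    pitch_sub is the grating pitch in subpixels, tilt its slant per row,
    and x_offset a phase shift derived from the tracked eye position."""
    phase = (x + x_offset - y * tilt) % pitch_sub
    return int(phase * n_views / pitch_sub)   # 0 = left view, 1 = right view
```

A real-time player would evaluate this per subpixel (typically on the GPU) to interleave the received left-view and right-view information, updating x_offset every frame from the tracker's eye position.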
The medical image stereoscopic display method of the present invention obtains the position information of the first target object and the grating parameters in time and performs the image processing directly from them, which raises the speed of image playback, meets the requirement of real-time stereoscopic display, and has the advantages of facilitating surgical operations and helping doctors improve the success rate of surgery.

In addition, the second target object here refers mainly to the various scenes shot by the camera: it can be an actual person, a ball game being broadcast live, images inside a patient's body shot by some device, and so on. By shooting stereo images in real time and displaying the shot stereo images on the display unit in real time, without additional image processing, the various shot scenes are shown timely and truthfully, meeting the user's demand for real-time display and improving the user experience.

In a specific variant embodiment, the above step S0 further includes an image collection step: collect the stereo image of the second target object and extract the left-view information and right-view information from the stereo image. By extracting the left-view and right-view information of the stereo image while it is being shot, the speed of image processing is raised, guaranteeing the display effect of real-time stereoscopic display.
Embodiment 5
Refer to Figure 11. Embodiment 5 of the present invention mainly describes in detail how S1 obtains the position information of the first target object. The first target object is, for example, a position relevant to a person's viewing, such as the human eyes, a person's head, a person's face, or the upper body. The above "S1, obtain the position information of the first target object" mainly comprises the following steps:

S11, set mark points corresponding to the spatial position of the first target object. The mark points here can be set on the first target object, or not on the first target object but on an object that has a fixed relative position to the first target object and moves synchronously with it. For example, if the target object is the human eyes, mark points can be set around the eye sockets; or a locating bracket can be arranged around the eyes with the mark points on its frame; or the mark points can be set on the person's ears, whose position relative to the eyes is fixed. The mark points can be signal-emitting components such as infrared emission sensors, LED lamps, GPS sensors, or laser positioning sensors, or other physical markers that can be captured by a camera, e.g. objects with shape features and/or color features. Preferably, to avoid interference from ambient stray light and raise the robustness of mark-point tracking, narrow-spectrum infrared LED lamps are used as mark points, together with infrared cameras matched to that spectrum so that only the infrared LED mark points are captured. Considering that ambient stray light mostly has irregular shapes and uneven luminance distributions, the mark points can be arranged to emit spots of regular shape, high luminous intensity, and uniform brightness. In addition, multiple mark points can be set, each corresponding to one spot, together forming a regular geometric shape such as a triangle or quadrilateral, so that the mark points are easy to track, their spatial position information is easy to obtain, and the accuracy of spot extraction is raised.

S12, obtain the position information of the mark points. This can be done by receiving the signals emitted by the mark points to determine their position information, or by shooting an image containing the mark points with a camera and extracting the mark points from the image, the position information of the mark points being obtained by an image-processing algorithm.

S13, reconstruct the spatial position of the first target object according to the position information of the mark points. After the position information of the mark points is collected, the spatial positions of the mark points are reconstructed, and then, according to the relative positions of the mark points and the first target object, the spatial positions of the mark points are transformed into the spatial position of the first target object (e.g. the spatial positions of a person's two eyes).

Embodiment Two of the present invention obtains the position information of the mark points corresponding to the first target object and reconstructs the spatial position of the first target object from that position information. Compared with the prior art, which uses a camera as an eye-capturing device and must perform two-dimensional image feature analysis to obtain the eye position, or uses other eye-capturing devices that exploit the reflective effect of the human iris, it has the advantages of good stability, high accuracy in capturing the position information of the eyes, low cost, and no requirement on the distance between the tracking device and the first target object.
Referring to Figure 12, the above step S12 further comprises:

S121, preset a standard image in which reference mark points are provided, and obtain the space coordinates and plane coordinates of the reference mark points. The standard image may, for example, be acquired by an image capture device, from which the image coordinates of the reference mark points are obtained, while precise spatial measurement devices such as laser scanners or structured-light scanners (e.g. Kinect) are used to obtain the space coordinates and plane coordinates of the reference mark points in the standard image.

S122, obtain a current image containing the target object and the mark points, together with the plane coordinates of the mark points in the current image;

S123, match the mark points in the current image with the reference mark points of the standard image. A correspondence is first established between the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points in the standard image, and the mark points are then matched with the reference mark points.

By presetting a standard image and reference mark points, a reference is available when the spatial position corresponding to the current image is computed, which further ensures the stability and accuracy of the target tracking method of this embodiment of the present invention.
Further, before the above step S11 there is also: S10, calibrate the camera used to obtain the position information of the mark points.

The above calibration can be done in the following ways:

(1) When the camera of S10 is a monocular camera, the common Zhang's checkerboard calibration algorithm can be used, for example calibrating with the following formula:
s m′ = A [R | t] M′   (1)

In formula (1), A is the intrinsic parameter matrix, R is the rotation matrix of the extrinsic parameters, t is the translation vector, m′ is the coordinate of the image point in the image, and M′ is the space coordinate of the object point (i.e. its three-dimensional coordinate in space). A, R, and t respectively take the standard forms

A = | α  γ  u0 |
    | 0  β  v0 |
    | 0  0   1 |,   R = [r1 r2 r3],   t = [t1, t2, t3]^T,

where (u0, v0) is the principal point, α and β are the scale factors on the two image axes, and γ is the skew of the two image axes.
Of course, there are many kinds of camera calibration algorithms, and other calibration algorithms common in the industry can also be used; the present invention does not limit this. A calibration algorithm is used mainly to improve the accuracy of the first-target-object tracking method of the present invention.

(2) When the cameras of S10 are binocular cameras or multi-lens cameras, calibration is done with the following steps:

S101, first calibrate any one lens camera of the binocular cameras or multi-lens cameras, using the common Zhang's checkerboard calibration algorithm, for example with the following formula:
s m′ = A [R | t] M′   (1)

In formula (1), A is the intrinsic parameter matrix, R is the rotation matrix of the extrinsic parameters, t is the translation vector, m′ is the coordinate of the image point in the image, and M′ is the space coordinate of the object point; A, R, and t respectively take the standard forms

A = | α  γ  u0 |
    | 0  β  v0 |
    | 0  0   1 |,   R = [r1 r2 r3],   t = [t1, t2, t3]^T.
S102, compute the relative rotation matrix and the relative translation amount between the binocular cameras or the multi-lens cameras, using the following formula:
Relative rotation matrix R = R2 R1^{-1} and relative translation amount t = t2 − R t1, where (R1, t1) and (R2, t2) are the extrinsic parameters of the two cameras obtained in S101.
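Step S102 can be sketched as follows. The relation R = R2 R1^{-1}, t = t2 − R t1 is the standard way to combine two cameras' extrinsics into a relative pose, used here as an assumed reading of the formula:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Given each camera's extrinsics (R_i, t_i) mapping world points into
    camera-i coordinates, return the pose of camera 2 relative to camera 1:
    x2 = R @ x1 + t for any point x1 in camera-1 coordinates."""
    R = R2 @ R1.T            # R1 is a rotation, so R1^{-1} = R1^T
    t = t2 - R @ t1
    return R, t
```

The returned (R, t) is exactly the mutual position between the cameras that the mark-point triangulation of the third variant embodiment requires.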
Of course, the above calibration algorithm for binocular cameras or multi-lens cameras is one typical algorithm among several; other calibration algorithms common in the industry can also be used, and the present invention does not limit this. A calibration algorithm is used mainly to improve the accuracy of the first-target-object tracking method of the present invention.

Further, between the above S11 and S12 there are also:

S14, collect the mark points;

S15, screen target mark points from the collected mark points.
Specifically, when there are multiple mark points, the camera collects all mark points corresponding to the first target object, and the mark points most relevant to the first target object are selected from among them; a corresponding image-processing algorithm then extracts the mark points from the image, the extraction being performed according to the features of the mark points. In general, the feature-extraction method applies a feature-extraction function H to the image I to obtain a feature score for each point in the image, and filters out the mark points whose feature scores are sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))
F = {(x, y) : S(x, y) > s0}
In the above formulas, H is the feature extraction function; I(x, y) is the image value at each pixel (x, y), which may be a gray value, a three-channel color energy value, or the like; S(x, y) is the feature score of each pixel (x, y) after feature extraction; s0 is a feature score threshold; pixels with S(x, y) greater than s0 are considered mark points; and F is the mark point set. Preferably, the embodiment of the present invention uses infrared mark points and an infrared camera, whose energy features in the image are more obvious. Because narrow-band LED infrared lamps and a matching infrared camera are used, most pixel energies in the camera image are very low, and only the pixels corresponding to the mark points have high energy. Therefore, the corresponding function H(x, y) can perform region growing on the image B(x, y) obtained after a threshold segmentation operator to obtain several sub-images, and extract the center of gravity of each acquired sub-image. The feature extraction function H(x, y) can be a feature point function such as Harris, SIFT or FAST, or an image processing function such as circular light spot extraction. Meanwhile, since stray light in the ambient light can be imaged by the infrared camera, constraints such as the spot area formed by a mark point and the positional relationship of the mark points in the two-dimensional image can be added during infrared mark point extraction to screen the extracted mark points.
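The threshold-then-region-grow-then-centroid pipeline described above can be sketched as follows. This is an illustration under simplifying assumptions (a tiny synthetic frame, 4-connectivity, plain Python flood fill), not the patent's implementation:

```python
import numpy as np
from collections import deque

def extract_marker_centroids(img, s0):
    """Threshold-segment the image, region-grow each connected bright blob,
    and return the center of gravity (x, y) of every blob."""
    B = img > s0                              # threshold segmentation B(x, y)
    seen = np.zeros_like(B, dtype=bool)
    centroids = []
    h, w = B.shape
    for y in range(h):
        for x in range(w):
            if B[y, x] and not seen[y, x]:
                q, pix = deque([(y, x)]), []
                seen[y, x] = True
                while q:                      # region growing, 4-connectivity
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and B[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids

# Synthetic infrared-style frame: dark background, two bright 2x2 spots
img = np.zeros((10, 10))
img[2:4, 2:4] = 255.0
img[6:8, 7:9] = 255.0
spots = extract_marker_centroids(img, s0=128.0)
```

On this toy frame the two recovered centroids are (2.5, 2.5) and (7.5, 6.5); a real extractor would add the spot-area and positional-relationship constraints mentioned above to reject stray-light blobs.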
When the number of cameras is greater than 1, the mark points in the images obtained by different cameras at the same time (or close to the same time) need to be matched, to provide the conditions for the subsequent three-dimensional reconstruction of the mark points. The mark point matching method depends on the feature extraction function H. Classic gradient- and grayscale-based feature point extraction operators such as Harris, SIFT and FAST, together with their matching methods, can be used to obtain and match mark points. Matching can also be performed using the epipolar constraint, prior conditions on the mark points, and so on. The method of matching and screening using the epipolar constraint is as follows: according to the principle that the projections of the same spatial point onto the images of two different cameras lie in the same (epipolar) plane, for a mark point p0 in one camera c0, an epipolar line equation can be calculated in the other camera c1, and the mark point p1 on camera c1 corresponding to mark point p0 must satisfy the following relationship:
[p1; 1]^T F [p0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. By using this relationship, the number of candidates for mark point p1 can be greatly reduced and the matching accuracy improved.
In addition, prior conditions on the mark points can be used, such as their spatial order and size. For example, according to the mutual positional relationship of the two cameras, the images they capture can be transformed so that the two pixels of each pair corresponding to the same spatial point are equal in some dimension, such as the y-axis; this process is called image rectification. The mark points can then be matched according to their x-axis order: the smallest x corresponds to the smallest x, and so on, up to the largest x corresponding to the largest x.
Based on the number of tracking cameras, the target tracking method of the present invention is discussed in detail below.
Referring to Figure 13, which is a specific flow diagram of the first variation of S13 in Figure 10. As shown in Figure 13, in this embodiment, when the first target object tracked by the first target object tracking method corresponds to no more than four mark points, and a monocular camera is used to obtain the location information of the mark points, the aforementioned step S13 further comprises:
S131: According to the plane coordinates of the mark points in the current image, the plane coordinates of the reference mark points of the standard image, and an assumed condition of the scene where the first target object is located, calculate the homography transformation relationship between the current image and the standard image. The mark points of the current image are matched with the reference mark points in the standard image, and the homography transformation between the current image and the standard image is calculated from their respective plane coordinates. The so-called homography transformation corresponds to homography in geometry, a common transformation method in the computer vision field.
S132: According to the homography relationship, calculate the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculate the spatial positions of the mark points at the current time, and calculate the current spatial position of the first target object from the spatial positions of the mark points at the current time.
Specifically, regarding the assumed condition of the scene, it can be assumed that some dimension is unchanged under the rigid transformation of the mark points in the scene. For example, in a three-dimensional scene with space coordinates x, y, z, where x and y are parallel to the x-axis and y-axis of the camera's image coordinates (plane coordinates) and the z-axis is perpendicular to the camera's image, the assumed condition can be that the coordinates of the mark points on the z-axis are constant, or that their coordinates on the x-axis and/or y-axis are constant. Different assumed scene conditions call for somewhat different estimation methods. As another example, under another assumed condition, the rotation angle between the orientation of the first target object and the orientation of the camera remains constant during use; the current spatial position of the first target object can then be inferred from the ratio between the mutual distances of the mark points in the current image and the mutual distances of the mark points in the standard image.
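The homography between the standard image and the current image (steps S131/S133) can be estimated from point correspondences. The following is a minimal Direct Linear Transform sketch under made-up coordinates (the four point pairs are invented for illustration; a production system would use a library solver and more points):

```python
import numpy as np

def find_homography(src, dst):
    """Direct Linear Transform: solve the 3x3 homography H (up to scale)
    mapping src points to dst points; needs at least 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector = stacked entries of H
    return H / H[2, 2]

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]            # perspective division

# Hypothetical reference mark points in the standard image...
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
# ...and where the same mark points appear in the current image
dst = [(10, 20), (30, 22), (28, 45), (8, 41)]
H = find_homography(src, dst)
```

With exactly four correspondences in general position the homography is exact, so H maps every src point onto its dst partner.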
With the above calculation method, the spatial position of the first target object can be reconstructed using a monocular camera when there are no more than four mark points. It is simple to operate, the tracking result is relatively accurate, and because a monocular camera is used, the cost of tracking the first target object is reduced.
In the above method of restoring the three-dimensional coordinates of an object from images acquired by a single camera, the acquired images carry less information, so the number of mark points needs to be increased to provide more image information for calculating the three-dimensional coordinates of the object. According to machine vision theory, to infer the stereo information of a scene from a single image, at least five calibrated mark points in the image must be determined. Therefore, the monocular scheme increases the number of mark points, which also increases the design complexity; but at the same time, only one camera is needed, which reduces the complexity of image acquisition and reduces cost.
Referring to Figure 14, which is a specific flow diagram of the second variation of S13 in Figure 10. As shown in Figure 14, in this embodiment, when the number of mark points is five or more and a monocular camera is used to obtain the location information of the mark points, S13 further comprises:
S133: According to the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points of the standard image, calculate the homography transformation relationship between the current image and the standard image;
S134: According to the homography relationship, calculate the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculate the spatial positions of the mark points at the current time, and calculate the current spatial position of the first target object from the spatial positions of the mark points at the current time.
Specifically, a standard image is acquired first; the spatial positions of the reference mark points are measured with a device such as a precise depth camera or laser scanner, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference mark points are obtained at the same time.
In use, the camera continuously captures the two-dimensional image coordinates of all mark points in the current image containing the first target object, and the rigid transformation between the mark points in the current state and the mark points at the time the standard image was shot is calculated from these two-dimensional image coordinates and the two-dimensional coordinates of the reference mark points of the standard image, assuming the relative positions between the mark points do not change. The transformation of the spatial positions of the mark points relative to the standard image is then calculated, so as to obtain the spatial positions of the current mark points.
Here, five or more points can be used to calculate the rigid transformation [R|T] between the spatial positions of the current mark points and the mark points at the time the standard image was shot. Preferably, the five or more points are not in one plane, and the projection matrix P of the camera has been calibrated in advance. The specific way to calculate [R|T] is as follows:
The homogeneous coordinates of each mark point in the standard image and the current image are X0 and Xi respectively. The two satisfy the epipolar constraint, i.e. X0·P^(-1)[R|T]P = Xi. All mark points together form an equation system whose unknown parameter is [R|T]. When the number of mark points is greater than 5, [R|T] can be solved; when the number of mark points is greater than 6, an optimal solution for [R|T] can be sought, for example using singular value decomposition (SVD) and/or calculating a nonlinear optimal solution iteratively. After the spatial positions of the mark points are calculated, the spatial position of the first target object (such as the human eye) can be inferred from the previously calibrated mutual positional relationship between the mark points and the first target object.
This embodiment uses only one camera and five or more mark points, so that the spatial position of the first target object can be accurately constructed; it is both simple to operate and low in cost.
Referring to Figure 15, which is a specific flow diagram of the third variation of S13 in Figure 10. As shown in Figure 15, this embodiment uses two or more cameras and one or more mark points. When a binocular camera or multi-lens camera is used to obtain the location information of the mark points, S13 further comprises:
S135: Using the binocular or multi-view stereo reconstruction principle, calculate the spatial position of each mark point at the current time. The so-called binocular or trinocular reconstruction principle can, for example, use the disparity between mark points matched across the left and right cameras to calculate the spatial position of each mark point at the current time, or can be implemented with other existing common methods.
S136: Calculate the current spatial position of the first target object from the spatial positions of the mark points at the current time.
Specifically, the mutual positional relationship between the cameras is first calibrated using the multi-lens camera calibration method. Then, in use, mark point coordinates are extracted from the image obtained by each camera, the mark points are matched to find the corresponding mark point in each camera, and the spatial positions of the mark points are calculated from the matched mark points and the mutual positional relationship between the cameras.
In a specific example, a multi-lens camera (i.e. the number of cameras is greater than or equal to 2) is used to shoot the mark points and realize stereo reconstruction. Given the coordinate u of a mark point on an image shot by a certain camera and that camera's parameter matrix M, a ray can be calculated; in space, the mark point lies on this ray:
α_j·u_j = M_j·X, j = 1, …, n (where n is a natural number greater than or equal to 2)
Similarly, according to the above formula, the rays corresponding to the other cameras can be calculated for this mark point. Theoretically these rays converge at one point, i.e. the spatial position of the mark point. In practice, due to the digitization error of the cameras and errors in the calibration of the cameras' intrinsic and extrinsic parameters, the rays do not converge at one point, so the spatial position of the mark point must be approximated by triangulation. For example, a least squares criterion can be used to determine the point nearest to all the rays as the object point.
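The least-squares "point nearest to all rays" can be written in closed form. The sketch below is an illustration with invented camera centers and an invented marker position, not the patent's solver; it minimizes the summed squared perpendicular distance to each ray:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point nearest to all rays X = o_j + s*d_j:
    solve sum_j (I - d_j d_j^T)(X - o_j) = 0 for X."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Two hypothetical camera centers and exact rays toward a marker at (1, 2, 5)
X_true = np.array([1.0, 2.0, 5.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([0.6, 0.0, 0.0])]
directions = [X_true - o for o in origins]
X = triangulate(origins, directions)
```

With exact rays the recovered point equals the true marker position; with the calibration and digitization errors described above, the same formula returns the point closest to all the (non-intersecting) rays.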
After the spatial positions of the mark points are calculated, the spatial position of the first target object (such as the human eye) can be deduced from the previously calibrated mutual positional relationship between the mark points and the first target object.
Among the above methods of realizing stereo reconstruction with a multi-lens camera, a preferable method is calculation with a binocular camera. Its reconstruction principle is the same as the aforementioned multi-lens camera principle: the spatial position of a mark point is calculated from the mutual positional relationship of the two cameras and the two-dimensional coordinates of the mark point in the two camera images. The slight difference is that the binocular cameras are placed in parallel; after a simple calibration, image rectification as described above is applied to the images of the two cameras, so that two mutually matched two-dimensional mark points are equal on the y (or x) axis. The depth of the mark point from the cameras can then be calculated from the gap between the rectified two-dimensional mark points on the x (or y) axis. This method can be regarded as the specific form that multi-view stereo reconstruction takes in the binocular case; it simplifies the steps of stereo reconstruction and is easier to realize in device hardware.
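For the rectified binocular case, the depth-from-column-gap relation reduces to one formula. The numbers below (focal length, baseline, matched columns) are made up purely for illustration:

```python
def depth_from_disparity(f_px, baseline, x_left, x_right):
    """After rectification, matched mark points share a row, and depth
    follows from the column gap (disparity): Z = f * b / (x_left - x_right)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("a matched point must shift left in the right image")
    return f_px * baseline / disparity

# Hypothetical rectified rig: 800 px focal length, 60 mm baseline,
# mark point seen at column 420 (left) and 400 (right)
Z = depth_from_disparity(f_px=800.0, baseline=0.06, x_left=420.0, x_right=400.0)
```

Here the 20-pixel gap puts the mark point 2.4 units (metres, with this baseline) from the cameras; larger gaps mean closer points, which is why this reduction is cheap enough for device hardware.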
Embodiment 6
Referring to Figure 16, which is a specific flow diagram of S3 in Figure 10. As shown in Figure 16, based on the aforementioned embodiment 2 and the previous embodiments, the medical image stereo display method of the present invention further comprises:
S301, an arrangement parameter determining step: according to the obtained location information of the first target object, the grating parameters of the light splitting unit, and the display parameters of the display unit, calculate the arrangement parameters on the display unit;
S302, a parallax image arrangement step: arrange the parallax images on the display unit according to the arrangement parameters;
S303, a parallax image playing step: play the parallax images.
Through the above steps, the stereo image to be played is rearranged, improving the stereo display effect.
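The patent does not disclose the arrangement math, so the following is only a naive column-interleaving sketch of S302; the `phase` argument is a hypothetical stand-in for the viewer-position-dependent arrangement parameter computed in S301, and the toy views are invented:

```python
import numpy as np

def interleave_columns(left, right, phase=0):
    """Naive parallax-image arrangement: alternate columns of the left and
    right views; `phase` (0 or 1) stands in for the arrangement parameter
    that S301 derives from the viewer position and grating parameters."""
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[:, (0 + phase) % 2::2] = left[:, (0 + phase) % 2::2]
    out[:, (1 + phase) % 2::2] = right[:, (1 + phase) % 2::2]
    return out

left = np.full((4, 6), 1)    # toy left view (all pixels = 1)
right = np.full((4, 6), 2)   # toy right view (all pixels = 2)
frame = interleave_columns(left, right, phase=0)
```

Flipping `phase` swaps which columns carry which eye's view, mimicking how the arrangement shifts as the tracked viewer moves; a real light-splitting display arranges at sub-pixel granularity rather than whole columns.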
Further, before step S301 the method also includes: S304, a stereo image obtaining step, which obtains the information of the stereo image captured in real time. While the parallax images are being played, the stereo image information captured in real time is obtained, which improves the efficiency of image processing. This not only ensures real-time playing, but also avoids the large data storage and large memory otherwise required for stereoscopic display images, reducing cost.
The foregoing is merely a preferred embodiment of the present invention and is not intended to restrict the invention; for those skilled in the art, the invention may be variously modified and varied. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (20)

1. A medical image stereoscopic display system, comprising: a display unit, a light splitting unit, a tracking device and an image shooting unit, wherein the light splitting unit is located on the display side of the display unit and is used to split the image space displayed by the display unit into a left view and a right view, the tracking device is used to obtain the location information of a first target object, and the image shooting unit is used to shoot a second target object, characterized in that the medical image stereoscopic display system further includes an image playing processing unit connected respectively with the tracking device, the display unit and the image shooting unit, wherein the image playing processing unit processes in real time the received stereo image shot by the image shooting unit according to the location information of the first target object, the grating parameters of the light splitting unit and the display parameters of the display unit, and sends the processed image to the display unit for real-time display;
wherein the tracking device includes:
a mark point setting unit, which sets mark points corresponding to the spatial position of the first target object;
an acquiring unit, which obtains the location information of the mark points;
a rebuilding unit, which reconstructs the spatial position of the first target object according to the location information of the mark points; the first target object includes a human eye;
wherein the mark points are arranged on an object having a relative positional relationship with the first target object, or on an object moving synchronously with the first target object;
the light splitting unit is a solid slit grating sheet or a liquid crystal lens grating;
the acquiring unit further comprises:
a presetting module, which presets a standard image provided with reference mark points, and obtains the space coordinates of the reference mark points and their plane coordinates in the standard image;
an obtaining module, which obtains a current image containing the first target object and the mark points, and the plane coordinates of the mark points in the current image;
a matching module, which matches the mark points in the current image with the reference mark points of the standard image;
when the number of the mark points is no more than four and a monocular camera is used to obtain the location information of the mark points, the rebuilding unit further includes:
a first computing module, for calculating the homography transformation relationship between the current image and the standard image according to the plane coordinates of the mark points in the current image, the plane coordinates of the reference mark points of the standard image, and an assumed condition of the scene where the first target object is located;
a first reconstruction module, for calculating, according to the homography relationship, the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculating the spatial positions of the mark points at the current time, and calculating the current spatial position of the first target object from the spatial positions of the mark points at the current time;
when the number of the mark points is five or more and a monocular camera is used to obtain the location information of the mark points, the rebuilding unit further includes:
a second computing module, for calculating the homography transformation relationship between the current image and the standard image according to the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points of the standard image;
a second reconstruction module, for calculating, according to the homography relationship, the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculating the spatial positions of the mark points at the current time, and calculating the current spatial position of the first target object from the spatial positions of the mark points at the current time.
2. The medical image stereoscopic display system as described in claim 1, characterized in that the medical image stereoscopic display system is applied in endoscopic diagnosis, endoscopic treatment operations, robotic surgical systems, medical electron microscopes, computed tomography scanning or magnetic resonance imaging.
3. The medical image stereoscopic display system as described in claim 1, characterized in that the image shooting unit includes a single-lens camera, which shoots with one camera and obtains the stereo image of the second target object.
4. The medical image stereoscopic display system as described in claim 1, characterized in that the image shooting unit includes a binocular camera, which shoots with two cameras and obtains the stereo image of the second target object.
5. The medical image stereoscopic display system as described in claim 1, characterized in that the image shooting unit includes a multi-lens camera, which shoots with three or more cameras arranged in an array and obtains the stereo image of the second target object.
6. The medical image stereoscopic display system as described in any one of claims 2 to 5, characterized in that the image shooting unit further comprises an acquisition unit, which is used to acquire the stereo image of the second target object and extract left view information and right view information from the stereo image.
7. The medical image stereoscopic display system as described in claim 1, characterized in that the tracking device includes a camera, which tracks the change in location of the first target object.
8. The medical image stereoscopic display system as described in claim 1, characterized in that the tracking device includes an infrared receiver, which receives an infrared positioning signal sent by an infrared transmitter arranged corresponding to the first target object.
9. The medical image stereoscopic display system as described in claim 1, characterized in that, when a binocular camera or multi-lens camera is used to obtain the location information of the mark points, the rebuilding unit further includes:
a third computing module, for calculating the spatial position of each mark point at the current time using the binocular or multi-view stereo reconstruction principle;
a third reconstruction module, for calculating the current spatial position of the first target object from the spatial positions of the mark points at the current time.
10. The medical image stereoscopic display system as described in any one of claims 2-5 and 7-9, characterized in that the image playing processing unit includes:
an arrangement parameter determining module, which calculates the arrangement parameters on the display unit according to the obtained location information of the first target object and the grating parameters of the light splitting unit;
a parallax image arrangement module, for arranging the parallax images on the display unit according to the arrangement parameters;
a parallax image playing module, which plays the parallax images.
11. The medical image stereoscopic display system as claimed in claim 10, characterized in that the image playing processing unit includes:
a stereo image obtaining module, which obtains the information of the stereo image shot by the image shooting unit.
12. The medical image stereoscopic display system as described in claim 1, characterized in that the light splitting unit is a parallax barrier or a lenticular grating.
13. The medical image stereoscopic display system as claimed in claim 12, characterized in that the lenticular grating is a liquid crystal lens grating.
14. The medical image stereoscopic display system as described in any one of claims 1-5, 7-9 and 11-13, characterized in that a bonding unit is arranged between the light splitting unit and the display unit, and the light splitting unit is bonded onto the display unit by the bonding unit.
15. The medical image stereoscopic display system as claimed in claim 14, characterized in that the bonding unit includes a first substrate, a second substrate, and an air layer between the first substrate and the second substrate.
16. A medical image stereo display method, characterized in that the method comprises the following steps:
S0: shooting a stereo image of a second target object, and sending the information of the stereo image in real time;
S1: obtaining the location information of a first target object;
S2: obtaining the grating parameters of the light splitting unit of a display device and the display parameters of the display unit of the display device;
S3: processing in real time the received stereo image shot by the image shooting unit according to the location information, the grating parameters and the display parameters;
S4: displaying the processed stereo image;
wherein S1 further includes:
S11: setting mark points corresponding to the spatial position of the first target object;
S12: obtaining the location information of the mark points;
S13: reconstructing the spatial position of the first target object according to the location information of the mark points; the first target object includes a human eye;
wherein the mark points are arranged on an object having a relative positional relationship with the first target object, or on an object moving synchronously with the first target object;
the light splitting unit is a solid slit grating sheet or a liquid crystal lens grating;
S12 further comprises:
S121: presetting a standard image provided with reference mark points, and obtaining the space coordinates of the reference mark points and their plane coordinates in the standard image;
S122: obtaining a current image containing the first target object and the mark points, and the plane coordinates of the mark points in the current image;
S123: matching the mark points in the current image with the reference mark points of the standard image;
when the number of the mark points is no more than four and a monocular camera is used to obtain the location information of the mark points, S13 further comprises:
S131: calculating the homography transformation relationship between the current image and the standard image according to the plane coordinates of the mark points in the current image, the plane coordinates of the reference mark points of the standard image, and an assumed condition of the scene where the first target object is located;
S132: according to the homography relationship, calculating the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculating the spatial positions of the mark points at the current time, and calculating the current spatial position of the first target object from the spatial positions of the mark points at the current time;
when the number of the mark points is five or more and a monocular camera is used to obtain the location information of the mark points, S13 further comprises:
S133: calculating the homography transformation relationship between the current image and the standard image according to the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points of the standard image;
S134: according to the homography relationship, calculating the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculating the spatial positions of the mark points at the current time, and calculating the current spatial position of the first target object from the spatial positions of the mark points at the current time.
17. The medical image stereo display method as claimed in claim 16, characterized in that S0 further includes:
an image acquisition step, of acquiring the stereo image of the second target object and extracting left view information and right view information from the stereo image.
18. The medical image stereo display method as claimed in claim 16, characterized in that, when a binocular camera or multi-lens camera is used to obtain the location information of the mark points, S13 further comprises:
S135: using the binocular or multi-view stereo reconstruction principle, calculating the spatial position of each mark point at the current time;
S136: calculating the current spatial position of the target object from the spatial positions of the mark points at the current time.
19. The medical image stereo display method as described in any one of claims 16 to 18, characterized in that S3 further comprises:
S301, an arrangement parameter determining step: calculating the arrangement parameters on the display unit according to the obtained location information of the first target object, the grating parameters of the light splitting unit, and the display parameters of the display unit;
S302, a parallax image arrangement step: arranging the parallax images on the display unit according to the arrangement parameters;
S303, a parallax image playing step: playing the parallax images.
20. The medical image stereo display method as claimed in claim 19, characterized in that S3 further includes:
S304, a stereo image obtaining step: obtaining the information of the stereo image captured in real time.
CN201410837180.9A 2014-12-29 2014-12-29 Medical image three-dimensional display system and method Active CN105812772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410837180.9A CN105812772B (en) 2014-12-29 2014-12-29 Medical image three-dimensional display system and method


Publications (2)

Publication Number Publication Date
CN105812772A CN105812772A (en) 2016-07-27
CN105812772B true CN105812772B (en) 2019-06-18

Family

ID=56980775


Country Status (1)

Country Link
CN (1) CN105812772B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107770510A (en) * 2017-10-16 2018-03-06 广州音视控通科技有限公司 A kind of 3D solids visual area collecting device
CN107861079B (en) * 2017-11-03 2020-04-24 上海联影医疗科技有限公司 Method for local coil localization, magnetic resonance system and computer-readable storage medium
CN109961477A (en) * 2017-12-25 2019-07-02 深圳超多维科技有限公司 A kind of space-location method, device and equipment
CN109961478A (en) * 2017-12-25 2019-07-02 深圳超多维科技有限公司 A kind of Nakedness-yet stereoscopic display method, device and equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1514300A (en) * 2002-12-31 2004-07-21 �廪��ѧ Method of multi viewing angle x-ray stereo imaging and system
CN101505433A (en) * 2009-03-13 2009-08-12 四川大学 Real acquisition real display multi-lens digital stereo system
CN101562756A (en) * 2009-05-07 2009-10-21 昆山龙腾光电有限公司 Stereo display device as well as display method and stereo display jointing wall thereof
CN101984670A (en) * 2010-11-16 2011-03-09 深圳超多维光电子有限公司 Stereoscopic displaying method, tracking stereoscopic display and image processing device
CN102098524A (en) * 2010-12-17 2011-06-15 深圳超多维光电子有限公司 Tracking type stereo display device and method
CN201937772U (en) * 2010-12-14 2011-08-17 深圳超多维光电子有限公司 Tracking mode stereoscopic display device, computer and mobile terminal
CN102208012A (en) * 2010-03-31 2011-10-05 爱信艾达株式会社 Scene matching reference data generation system and position measurement system
CN102833562A (en) * 2011-06-15 2012-12-19 株式会社东芝 Image processing system and method
CN104155767A (en) * 2014-07-09 2014-11-19 深圳市亿思达显示科技有限公司 Self-adaptive tracking stereoscopic display device and display method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101112735B1 (en) * 2005-04-08 2012-03-13 삼성전자주식회사 3D display apparatus using hybrid tracking system
JP5573379B2 (en) * 2010-06-07 2014-08-20 ソニー株式会社 Information display device and display image control method
KR101371387B1 (en) * 2013-01-18 2014-03-10 경북대학교 산학협력단 Tracking system and method for tracking using the same

Also Published As

Publication number Publication date
CN105812772A (en) 2016-07-27

Similar Documents

Publication Publication Date Title
CN105791800B (en) Three-dimensional display system and stereo display method
CN105809654B Target object tracking method and device, and stereoscopic display device and method
US11612307B2 (en) Light field capture and rendering for head-mounted displays
CN106773080B (en) Stereoscopic display device and display method
CN114895471A (en) Head mounted display for virtual reality and mixed reality with inside-outside position tracking, user body tracking, and environment tracking
US20120002014A1 (en) 3D Graphic Insertion For Live Action Stereoscopic Video
CN109491087A (en) Modularized dismounting formula wearable device for AR/VR/MR
CN108307675A (en) More baseline camera array system architectures of depth enhancing in being applied for VR/AR
CN105812772B (en) Medical image three-dimensional display system and method
CN105812774B Stereoscopic display system and method based on an intubation scope
CN103119946A (en) Double stacked projection
CN204578692U (en) Three-dimensional display system
CN103281507B (en) Video-phone system based on true Three-dimensional Display and method
JP2023511670A (en) A method and system for augmenting depth data from a depth sensor, such as by using data from a multi-view camera system
CN107545537A A method for generating 3D panoramic images from a dense point cloud
CN204377059U Stereoscopic display system based on a flexible endoscope (soft lens)
US20140125779A1 (en) Capturing and visualization of images and video for autostereoscopic display
CN104887316A Virtual three-dimensional endoscope display method based on active three-dimensional display technology
CN109068035A An intelligent micro-camera-array endoscopic imaging system
Fontana et al. Closed-loop calibration for optical see-through near eye display with infinity focus
CN204539353U (en) Medical image three-dimensional display system
CN204377058U Stereoscopic display system based on a rigid endoscope (hard lens)
CN105812776A Stereoscopic display system and method based on a flexible endoscope (soft lens)
CN204377057U Stereoscopic display system based on an intubation scope
CN113206991A (en) Holographic display method, system, computer program product and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180723

Address after: 518054 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Applicant after: Shenzhen Super Technology Co., Ltd.

Address before: 518053 East H-1 101, Overseas Chinese Town East, Nanshan District, Shenzhen, Guangdong

Applicant before: Shenzhen SuperD Photoelectronic Co., Ltd.

GR01 Patent grant