CN105812774B - Three-dimensional display system and method based on intubation mirror - Google Patents
- Publication number
- CN105812774B (application CN201410848782.4A / CN201410848782A)
- Authority
- CN
- China
- Prior art keywords
- unit
- mark point
- image
- spatial position
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention belongs to the field of medical technology and provides a stereoscopic display system and method based on an intubation endoscope. The system includes a display unit, a light-splitting unit, a tracking device and an image capturing unit. The light-splitting unit spatially divides the image displayed by the display unit into a left view and a right view; the tracking device obtains the position information of a first target object; the image capturing unit shoots a second target object. The system further includes an image playback processing device which, according to the position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit, processes in real time the stereoscopic image received from the image capturing unit and, after processing, sends it to the display unit for real-time display. Compared with the prior art, the image playback speed of the invention is greatly improved and meets the requirements of real-time stereoscopic display, with the advantages of facilitating surgery and helping the doctor improve the success rate of operations.
Description
Technical field
The present invention relates to the technical field of medical equipment, and in particular to a stereoscopic display system and display method based on an intubation endoscope for use in clinical medicine.
Background art
Tracheal intubation is the technique of inserting a special endotracheal tube through the glottis into the trachea. It provides the best conditions for keeping the airway patent, for ventilation and oxygen supply, for airway suction and for preventing aspiration. Emergency tracheal intubation has become an important measure in cardiopulmonary resuscitation and in the rescue of critically ill patients with respiratory dysfunction. It is a common and important rescue technique, one of the most widely used, most effective and most efficient means of airway management, and a basic skill that medical workers must master; it plays a vital role in saving patients' lives and reducing mortality. It allows tracheal secretions or foreign bodies to be sucked out in time, prevents foreign bodies from entering the respiratory tract, keeps the airway unobstructed, supports manual or mechanical ventilation, and prevents hypoxia and carbon dioxide retention in the patient. Whether intubation is performed in time bears directly on the success of the rescue, the safe transfer of the patient and the patient's prognosis. An endoscope is a tube equipped with a light source; it can enter the stomach through the mouth or enter the body through other natural ducts, and belongs to the family of intubation endoscopes. With an endoscope a doctor can see lesions that X-rays cannot show, which makes it highly useful: for example, a doctor can examine an ulcer or tumour in the stomach through an endoscope and devise the best treatment plan accordingly. An endoscope is an optical instrument composed of a cold-light-source lens, a fiber-optic light guide, an image transmission system, an on-screen display system and so on, and it can enlarge the surgical field of view. Its outstanding advantages are a small surgical incision, an inconspicuous scar, a mild postoperative reaction, greatly reduced bleeding, bruising and swelling, and a recovery faster than that of traditional surgery, which well meets the scar-free requirements of aesthetic surgery.
The earliest endoscopes, invented more than one hundred years ago, were made as rigid tubes. Although they gradually improved, they were never widely used. In the 1950s endoscopes began to be made with flexible tubes, so that they could bend easily at turns inside the human body; most endoscopes today are made of soft materials.
A few medical endoscopes currently on the market offer a 3D display function, but only glasses-assisted 3D that requires the wearer to put on 3D glasses. The technique delivers differently polarized light to the left and right eyes to produce parallax and a stereoscopic impression. Its shortcoming is the need to wear polarized filter glasses. On the one hand, the polarized filters cut the light entering the doctor's eyes to less than half, wasting the most valuable light information from inside the body cavity and reducing the doctor's ability to discriminate detail in the relatively dark cavity. On the other hand, for doctors who do not normally wear glasses, wearing polarized glasses during an operation easily causes discomfort, and because a surgical mask is worn at the same time, the lenses easily fog up with each breath, seriously affecting the safety of the operation.
How to overcome these problems has therefore become a major technical challenge currently facing the medical field.
Summary of the invention
The purpose of the present invention is to provide a stereoscopic display system and display method based on an intubation endoscope, intended to solve one or more of the technical problems caused by the limitations and disadvantages of the prior art described above.
The stereoscopic display system based on an intubation endoscope provided by the invention comprises a display unit, a light-splitting unit, a tracking device and an image capturing unit. The light-splitting unit is located on the display side of the display unit and spatially divides the image shown by the display unit into a left view and a right view; the tracking device obtains the position information of a first target object; the image capturing unit shoots a second target object. The system further comprises an image playback processing device connected respectively to the tracking device, the display unit and the image capturing unit, which processes in real time the stereoscopic image received from the image capturing unit according to the position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit, and after processing sends it to the display unit for real-time display.
The invention also provides a stereoscopic display method based on an intubation endoscope, comprising the following steps: S0, shoot a stereoscopic image of the second target object and send the image information in real time; S1, obtain the position information of the first target object; S2, obtain the grating parameters of the light-splitting unit of the display device and the display parameters of its display unit; S3, process in real time the captured stereoscopic image according to the position information, the grating parameters and the display parameters; S4, display the image to be played.
With the stereoscopic display system and display method based on an intubation endoscope provided by the invention, the image playback speed is greatly improved over the prior art and the requirement of real-time stereoscopic display can be met, with the advantages of facilitating surgery and helping the doctor improve the success rate of the operation.
Description of the drawings
Fig. 1 is a structural schematic diagram of the stereoscopic display system based on an intubation endoscope of Embodiment One of the present invention.
Fig. 2 is a structural schematic diagram of a specific implementation of the stereoscopic display system of Embodiment One.
Fig. 3 is a structural schematic diagram of the image playback processing unit in Fig. 2.
Fig. 4 is a structural schematic diagram of the bonding of the light-splitting unit and the display unit in the system of Embodiment One.
Fig. 5 is a structural schematic diagram of a preferred implementation of the tracking device in the system of Embodiment One.
Fig. 6 is a schematic diagram of the concrete structure of the acquiring unit in Fig. 4.
Fig. 7 is a schematic diagram of the concrete structure of a first variant of the rebuilding unit in Fig. 4.
Fig. 8 is a schematic diagram of the concrete structure of a second variant of the rebuilding unit in Fig. 4.
Fig. 9 is a schematic diagram of the concrete structure of a third variant of the rebuilding unit in Fig. 4.
Fig. 10 is a structural schematic diagram of the positioning bracket on which marker points are set corresponding to the first target object, in the tracking device of Fig. 4.
Fig. 11 is a flow diagram of the stereoscopic display method based on an intubation endoscope of Embodiment Two of the present invention.
Fig. 12 is a detailed flow diagram of S1 in Fig. 11.
Fig. 13 is a detailed flow diagram of S12 in Fig. 12.
Fig. 14 is a detailed flow diagram of a first variant of S13 in Fig. 11.
Fig. 15 is a detailed flow diagram of a second variant of S13 in Fig. 11.
Fig. 16 is a detailed flow diagram of a third variant of S13 in Fig. 11.
Fig. 17 is a detailed flow diagram of S3 in Fig. 11.
Detailed description of the embodiments
To make the objects, features and advantages of the present invention easier to understand, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the application and the features within them may be combined with each other. Many specific details are set forth in the following description to facilitate a full understanding of the invention; however, the invention may also be implemented in ways other than those described here, so the scope of protection of the invention is not limited by the specific embodiments disclosed below.
Embodiment one
Referring to Fig. 1, Fig. 1 is a structural schematic diagram of the stereoscopic display system based on an intubation endoscope of the present invention. As shown in Fig. 1, the system comprises an image capturing unit 10, a tracking device 30, a light-splitting unit 50 and a display unit 40. The image capturing unit 10 shoots the second target object and sends the captured image of the second target object to the image playback processing device in real time. The tracking device 30 obtains the position information of the first target object. The light-splitting unit 50 is located on the display side of the display unit 40 and spatially divides the image shown by the display unit 40 into a left view and a right view. The system further comprises an image playback processing device 20, connected respectively to the tracking device 30 and the display unit 40, which processes the image to be played in real time according to the position information of the first target object, the grating parameters of the light-splitting unit 50 and the display parameters of the display unit 40, and after processing sends it to the display unit 40 for display. In addition, the image capturing unit 10 is mounted on the intubation endoscope: when the endoscope enters a human or animal body, it captures images of the interior, making it convenient for the doctor to observe in real time and operate promptly.
Because the tracking device 30 and the display unit 40 are directly connected to the image playback processing device 20, the device 20 obtains the position information of the first target object, the grating parameters and the display parameters promptly and processes the image accordingly, omitting the pass through a central processing unit required in the prior art. The image playback speed is therefore greatly improved over the prior art and can satisfy the requirement of real-time stereoscopic display, facilitating surgery and helping the doctor improve the success rate of the operation: the doctor obtains an accurate stereoscopic image in real time during surgery and can operate promptly, so the problems mentioned in the background do not arise. The grating parameters mainly include the pitch of the grating, the tilt angle of the grating relative to the display panel, and the placement distance of the grating from the display panel. These parameters may be stored directly in a memory in the image playback processing device, or a separate detection device may detect the grating parameters of the light-splitting unit in real time and send the values to the image playback processing device 20. The display unit parameters include the size of the display unit, its screen resolution, and the ordering and arrangement structure of the sub-pixels within a pixel unit. The sub-pixel ordering is whether the sub-pixels are arranged as RGB, RBG, BGR or in some other sequence; the arrangement structure is whether the sub-pixels are arranged vertically or horizontally, for example cyclically as RGB from top to bottom, or cyclically as RGB from left to right.
The image capturing unit 10 shoots the second target object and sends the captured image to the image playback processing device in real time. The second target object here mainly refers to the various scenes recorded by the camera, such as the operation scene or images of the patient's interior. The stereoscopic image is shot in real time by the image capturing unit 10 and displayed in real time on the display unit without additional image processing, so the captured scenes are shown promptly and faithfully, meeting the user's demand for real-time display and improving the user experience. The image capturing unit 10 may include at least one of a monocular camera, a binocular camera or a multi-camera array.
When the image capturing unit 10 includes a monocular camera, the stereoscopic image of the second target object is obtained from the monocular camera's shots. Preferably, the monocular camera may use a liquid-crystal-lens imaging device or a liquid-crystal micro-lens-array imaging device. In one specific embodiment, the monocular camera obtains two digital images of the measured object at different moments from different angles, recovers the three-dimensional geometric information of the object on the basis of the parallax principle, and reconstructs the object's three-dimensional contour and position.
When the image capturing unit 10 includes a binocular camera, either two video cameras or one video camera with two lenses, the second target object is shot by the binocular camera to form the stereoscopic image. Specifically, a binocular camera obtains the three-dimensional geometric information of an object from multiple images on the basis of the parallax principle. A binocular stereo vision system generally obtains two digital images of the measured object (the second target object) simultaneously from different angles with twin cameras, recovers the object's three-dimensional geometry from the parallax, and reconstructs its three-dimensional contour and position.
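The parallax principle the passage relies on can be stated numerically: for a rectified binocular pair under a pinhole camera model, a point's depth is Z = f·B/d, with focal length f in pixels, baseline B, and disparity d in pixels. A minimal sketch under those assumptions; the numbers are illustrative, not from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth (same unit as the baseline) of a point seen in both rectified views."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: no valid correspondence")
    return focal_px * baseline_mm / disparity_px

# A feature shifted 80 px between the two views, f = 800 px, 60 mm baseline:
z = depth_from_disparity(800.0, 60.0, 80.0)  # -> 600.0 mm
```

Repeating this for every matched feature yields the three-dimensional geometry from which the object's contour and position are reconstructed.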
When the image capturing unit 10 includes a multi-camera array, i.e. three or more video cameras arranged in an array for obtaining the stereoscopic image, the cameras obtain several digital images of the second target object simultaneously from different angles, recover the object's three-dimensional geometric information on the basis of the parallax principle, and reconstruct its three-dimensional contour and position.
The image capturing unit 10 further includes an acquisition unit, which collects the stereoscopic image of the second target object and extracts the left-view and right-view information from it. One end of the acquisition unit is connected to the monocular camera, binocular camera or multi-camera array described above, and the other end is connected to the image playback processing device 20. Extracting the left-view and right-view information while the stereoscopic image is being shot increases the speed of image processing and guarantees the display quality of real-time stereoscopic display.
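If the camera delivers each stereo frame in side-by-side format, one common transport layout, the acquisition unit's extraction step amounts to splitting the frame down the middle. A sketch assuming that layout; the patent does not specify the actual transport format.

```python
import numpy as np

def split_side_by_side(frame: np.ndarray):
    """Split one side-by-side stereo frame (H x 2W x 3) into left and right views."""
    _, w, _ = frame.shape
    half = w // 2
    return frame[:, :half], frame[:, half:]

frame = np.zeros((4, 8, 3), dtype=np.uint8)
frame[:, 4:] = 255                        # mark the right half
left, right = split_side_by_side(frame)   # left is all 0, right is all 255
```

Top-and-bottom or interleaved formats would need a different but equally cheap slicing, which is why the extraction can run at capture time without slowing the pipeline.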
The tracking device 30 may be a camera and/or an infrared sensor and is mainly used to track the position of the first target object, such as the position of a person's eyes, head, face or upper body. The number of cameras or infrared sensors is not limited: there may be one or several, mounted on the frame of the display unit or placed separately wherever the first target object is easy to track. In addition, if an infrared sensor is used as the tracking device, an infrared transmitter may be placed at a position corresponding to the first target object; by receiving the infrared positioning signal sent by the transmitter and using the relative position of the transmitter and the first target object, the position information of the first target object is calculated.
Specifically, the tracking device 30 may include a video camera that shoots the first target object. There may be one or more cameras, mounted on the display unit or placed separately. The tracking device 30 may also include an infrared receiver, with an infrared transmitter correspondingly provided for the first target object. The transmitter may be placed at a position corresponding to the first target object, or on another object whose position is fixed relative to it; the receiver receives the infrared signal sent by the transmitter, and the first target object is located by a common infrared positioning method.
In addition, the tracking device 30 may use a GPS positioning module, which sends the position information to the image playback processing device 20.
The light-splitting unit 50 is arranged on the light-emitting side of the display unit 40 and sends the left view and right view with parallax shown by the display unit 40 separately to the viewer's left and right eyes; the viewer synthesizes a stereoscopic image from the two views and thus perceives a stereoscopic display. Preferably, the light-splitting unit may be a parallax barrier or a lenticular grating. The parallax barrier may be a liquid crystal slit, a solid slit grating sheet or an electrochromic slit grating sheet; the lenticular grating may be a liquid crystal lens or a solidified liquid crystal lens grating. A solidified liquid crystal lens grating is made mainly by curing liquid crystal onto a sheet with ultraviolet light, forming solid lenses that split the light before it reaches the viewer's left and right eyes. Preferably, the display unit 40 and the light-splitting unit 50 are combined into one integrated display device 60, which is the display portion of the whole system based on the intubation endoscope; it may be assembled together with the aforementioned image playback processing device and tracking device, or exist as an independent part. For example, according to viewing needs, the display device 60 alone may be placed at a position convenient for viewing, while the image playback processing device 20 and the tracking device 30 may each be devices with standalone functions, assembled at the time of use to realize the real-time stereoscopic display of the invention. For instance, the image playback processing device 20 may be a VMR 3D playback device that itself has 3D playback processing functions and establishes connections with the other devices when assembled into the system.
Referring to Fig. 2, Fig. 2 is a structural schematic diagram of a specific implementation of the stereoscopic display system of Embodiment One. As shown in Fig. 2, the image playback processing device 20 further comprises an image playback processing unit 22 and a storage unit 23. The image playback processing unit 22 mainly performs real-time pixel arrangement (view interleaving) on the received stereoscopic image according to the position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit, and after processing sends it to the display unit for real-time display. The storage unit 23 stores the images transmitted by the image capturing unit 10; when a stereoscopic image needs to be played, the image playback processing unit 22 calls the stereoscopic image stored in the storage unit 23 and arranges it. Further, the image playback processing device 20 may comprise a signal processing unit 21, connected respectively to the storage unit 23 and the image playback processing unit 22. The signal processing unit 21 mainly processes the signal of the stereoscopic image received from the image capturing unit 10, including image format conversion and image compression, and stores the compressed image in the storage unit 23. The signal processed by the signal processing unit 21 can be output as separate left-view and right-view image signals, or output together to the image playback processing unit 22. The image playback processing unit 22 arranges, in real time, the stereoscopic image processed by the signal processing unit 21 according to the position information of the first target object and the grating parameters of the light-splitting unit, and after processing sends it to the display unit for real-time display. Here the image playback processing unit 22 may directly call and decompress the stereoscopic image in the storage unit 23, or directly receive the processed stereoscopic image from the signal processing unit 21, and then perform the arrangement.
Referring to Fig. 3, the image playback processing unit 22 further comprises:
an arrangement parameter determining module 201, which calculates the arrangement parameters on the display unit according to the obtained position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit;
a parallax image arrangement module 202, which arranges the parallax image on the display unit according to the arrangement parameters, the parallax image being generated by spatially dividing the left-eye image and the right-eye image;
a parallax image playing module 203, which plays the parallax image: after the arranged parallax image is received, it is played, and the viewer sees the displayed stereoscopic image on the display unit in real time.
Further, the image playback processing unit 22 also includes a stereoscopic image obtaining module 204, which obtains the stereoscopic image information shot by the image capturing unit 10, i.e. the left-view and right-view information of the stereoscopic image. A stereoscopic image consists of a left view and a right view, so before an image to be played can be arranged, the image information of the two views must first be obtained.
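The arrangement step performed by the parallax image arrangement module can be illustrated with a toy sub-pixel interleaver: each R, G or B sub-pixel of the panel is assigned to the left or right view according to its position. This is a simplified sketch using the generic slanted-grating mapping with an integer slope, not the patent's actual algorithm.

```python
import numpy as np

def interleave_views(left: np.ndarray, right: np.ndarray, slope: int = 1) -> np.ndarray:
    """Arrange two views into one panel image, one sub-pixel at a time."""
    views = (left, right)
    out = np.empty_like(left)
    h, w, _ = left.shape
    for y in range(h):
        for x in range(w):
            for c in range(3):              # R, G, B sub-pixel columns
                sub_x = 3 * x + c           # sub-pixel index along the row
                view = (sub_x + slope * y) % len(views)
                out[y, x, c] = views[view][y, x, c]
    return out

left = np.zeros((2, 2, 3), dtype=np.uint8)
right = np.full((2, 2, 3), 255, dtype=np.uint8)
panel = interleave_views(left, right)       # alternating 0/255 sub-pixels
```

In the real module, the slope and phase of this mapping would come from the arrangement parameters computed by module 201 from the viewer position and the grating parameters.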
Continuing with Fig. 2, the tracking device 30 in the present invention further comprises a tracking and positioning processing unit 31 and a tracking unit 32. The tracking unit 32 tracks the real-time image of the first target object and mainly refers to equipment, such as a camera or an infrared receiver, that can accurately capture a video signal of the human eyes or head. The tracking and positioning processing unit 31 extracts feature points of the first target object from the real-time image tracked by the tracking unit 32 and calculates the spatial coordinates of the first target object. Specifically, for example, a face may be recorded in real time by a camera, facial feature points extracted by the tracking and positioning processing unit 31, and the spatial coordinates of the eyes calculated; or feature points may be added, for instance an infrared emitting device on the person's head, with the camera capturing the real-time image of this feature point and the tracking and positioning processing unit 31 finally calculating the spatial coordinates of the eyes.
In addition, when the eye position moves, the tracking and positioning processing unit 31 can quickly track it in real time, give the spatial coordinates of the eyes, and supply these coordinates to the image playback processing unit 22.
The tracking unit 32 may include a video camera or an infrared receiver. When the tracking unit 32 includes a video camera, the position change of the feature point corresponding to the first target object is tracked by the camera. When it includes an infrared receiver, the receiver receives the infrared positioning signal sent from the infrared transmitter serving as the feature point on the first target object.
The tracking and positioning processing unit 31 and tracking unit 32 improve the viewing quality of the stereoscopic display, letting it adjust automatically as the viewer moves and provide the best stereoscopic effect in real time.
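Once the tracking unit has located an eye in the camera image, the step from pixel position to spatial coordinates is a pinhole back-projection, provided the depth is known (for example from a stereo pair or from the infrared positioning described above). A sketch under that assumption; the camera intrinsics here are illustrative, not from the patent.

```python
def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float):
    """Back-project a pixel (u, v) at a known depth to camera-space coordinates."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

# Eye detected at pixel (1000, 500), 600 mm away; f = 800 px, principal point (960, 540):
eye_xyz = backproject(1000, 500, 600.0, 800.0, 800.0, 960.0, 540.0)
# -> (30.0, -30.0, 600.0)
```

The resulting coordinates are what the tracking and positioning processing unit would hand to the image playback processing unit 22 for recomputing the arrangement parameters.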
Embodiment 1
In Embodiment 1 of the present invention, to obtain a good real-time stereoscopic display, the light-splitting unit and the display unit must be optically designed according to the grating parameters of the light-splitting unit and the display parameters of the display unit. The optical design follows, among others, the formula:

(3) m × p = p-pitch

In the formulas, F is the distance between the light-splitting unit and the display unit (the placement distance of the grating relative to the display panel among the grating parameters above), L is the distance between the viewer and the display unit, IPD is the matched interpupillary distance, i.e. the usual distance between a person's two eyes, generally taken as 62.5 mm, l-pitch is the pitch of the light-splitting unit, p-pitch is the pixel arrangement pitch on the display unit, n is the number of stereoscopic views, m is the number of pixels covered by one element of the light-splitting unit, and p is the dot pitch of the display unit. The dot pitch here mainly refers to the size of one pixel unit (one of the display parameters), a pixel unit usually comprising the three sub-pixels R, G and B. To eliminate moiré fringes, the light-splitting unit is generally rotated by a certain angle when it is bonded (i.e. the light-splitting unit is tilted relative to the display unit); the actual pitch of the light-splitting unit is therefore given by:

(4) W_lens = l-pitch × sin θ

where W_lens is the actual pitch of the light-splitting unit and θ is the tilt angle of the light-splitting unit relative to the display panel (one of the grating parameters above).
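Formulas (3) and (4) can be checked numerically. A minimal sketch, under the reading that m pixels of dot pitch p span one arrangement pitch, with illustrative values rather than values from the patent:

```python
import math

def arranged_picture_pitch(m: int, dot_pitch: float) -> float:
    """Formula (3): p-pitch = m * p."""
    return m * dot_pitch

def actual_lens_pitch(l_pitch: float, theta_deg: float) -> float:
    """Formula (4): W_lens = l-pitch * sin(theta)."""
    return l_pitch * math.sin(math.radians(theta_deg))

p_pitch = arranged_picture_pitch(3, 0.06)    # 3 pixels of 0.06 mm -> 0.18 mm
w_lens = actual_lens_pitch(0.18, 90.0)       # untilted reading: sin(90°) = 1
```

In practice the tilt angle would be small, shrinking W_lens slightly below l-pitch and shifting the sub-pixel-to-view mapping row by row, which is what suppresses the moiré fringes.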
As described above, for the distance F between the light-splitting unit and the display unit: when the medium between them is air, F equals the actual distance between the two; when the medium is a transparent medium with refractive index n (n greater than 1), F equals the actual distance divided by n; and when several different media lie between them, with refractive indices n1, n2, n3 (each greater than or equal to 1), F = s1/n1 + s2/n2 + s3/n3, where s1, s2 and s3 are the thicknesses of the respective media.
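The three cases above collapse into one rule: the optical gap is the sum of each layer's thickness divided by its refractive index. A minimal sketch with illustrative thicknesses:

```python
def effective_gap(layers) -> float:
    """Optical distance F through stacked media: F = sum(s_i / n_i)."""
    return sum(s / n for s, n in layers)

# Air only (n = 1): F equals the physical distance.
f_air = effective_gap([(1.2, 1.0)])                 # -> 1.2
# 0.9 mm of glass (n = 1.5) plus 0.6 mm of air:
f_mixed = effective_gap([(0.9, 1.5), (0.6, 1.0)])   # ≈ 1.2
```

This is why inserting a glass bonding layer does not change the design formulas: only the effective F, not the physical stack-up, enters the optical calculation.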
Configuring the light-splitting unit and the display unit according to the optical formulas above reduces moiré fringes and improves the quality of the real-time stereoscopic display.
In addition, in a variant embodiment a bonding unit is arranged between the light-splitting unit and the display unit. Referring to Fig. 4, Fig. 4 is a structural schematic diagram of the bonding of the light-splitting unit and the display unit in the system of Embodiment One. As shown in Fig. 4, a bonding unit is provided between the light-splitting unit 50 and the display unit 40, the three forming a sandwich-like structure. The bonding unit comprises a first substrate 42, a second substrate 43, and an air layer 41 between the first substrate 42 and the second substrate 43; the air layer 41 is sealed between the two substrates so that no air escapes. The first substrate 42 is bonded to the display panel and may be made of transparent glass or of transparent resin or similar materials. The second substrate 43 faces the first substrate 42, and its side away from the first substrate 42 is used for bonding the light-splitting unit 50. Because a bonding unit of this structure is arranged between the light-splitting unit 50 and the display unit 40, for a large-screen stereoscopic display device it both guarantees the flatness of the grating bond and lightens the whole device, avoiding the risk, present when pure glass is used, of the screen breaking away under its own weight.
Embodiment 2
In Embodiment 2, the tracking device includes a video camera that shoots the first target object. There may be one or more cameras, which may be mounted on the display unit or arranged separately. The camera may be a monocular, binocular, or multi-lens camera.
Alternatively, the tracking device 30 may include an infrared receiver; correspondingly, an infrared transmitter is provided for the first target object. The infrared transmitter may be placed at a position corresponding to the first target object, or on another object whose position is fixed relative to the first target object. The infrared receiver receives the infrared signal sent by the transmitter associated with the first target object, and the first target object is located by a common infrared positioning method.
The tracking device 30 may also use a GPS positioning module, which sends the location information to the image playback processing unit 20.
Embodiment 3
Refer to Fig. 5, which is a schematic diagram of a preferred structure of the tracking device in the intubation-mirror-based stereoscopic display system of Embodiment 1 of the present invention. As shown in Fig. 5, Embodiment 3 of the present invention proposes another tracking device 30, which includes:
A mark point setting unit 1, for setting a mark point corresponding to the spatial position of the first target object. The mark point may be set on the first target object itself, or not on it but on another object that has a fixed relative position to the first target object and moves synchronously with it. For example, if the first target object is the human eye, mark points may be set around the eye socket; or glasses may be arranged around the eye and the mark points placed on the frame, or on the ear, whose position relative to the eye is fixed. The mark point may be any of various signal-transmitting components, such as an infrared emitter, an LED lamp, a GPS sensor, or a laser positioning sensor; it may also be another physical marker that can be captured by a camera, for example an object with a distinctive shape feature and/or color feature. Preferably, to avoid interference from ambient stray light and improve the robustness of mark point tracking, a narrow-spectrum infrared LED is used as the mark point, together with an infrared camera matched to that spectrum so that only the infrared LED mark point is captured. Since ambient stray light is mostly irregular in shape and uneven in luminance, the mark point can be arranged to emit a regularly shaped light spot of high intensity and uniform brightness. Furthermore, multiple mark points may be provided, each corresponding to one spot, the mark points forming a regular geometric shape such as a triangle or quadrangle, so that the mark points are easy to track, their spatial position information is easy to obtain, and spot extraction is more accurate.
An acquiring unit 2, for obtaining the location information of the mark point. This may be done by receiving the signal emitted by the mark point to determine its location, or by shooting an image containing the mark point with a camera, extracting the mark point from the image, and obtaining its location information through an image processing algorithm.
A reconstruction unit 3, for reconstructing the spatial position of the first target object according to the location information of the mark point. After the location information of the mark point is acquired, the spatial position of the mark point is reconstructed; then, using the relative positional relationship between the mark point and the first target object, the spatial position of the mark point is transformed into the spatial position of the first target object (for example, the spatial positions of a person's left and right eyes).
The tracking device 30 of this embodiment of the present invention obtains the location information of the mark point corresponding to the first target object and reconstructs the spatial position of the first target object from that information. Compared with prior-art eye-capturing devices that use a camera and two-dimensional image feature analysis to obtain the eye position, or other devices that obtain the eye position by exploiting the reflective effect of the human iris, it offers good stability, high accuracy, low cost, and no requirement on the distance between the tracking device and the first target object.
Refer to Fig. 6, which shows the concrete structure of the acquiring unit in Fig. 5. The aforementioned acquiring unit further includes:
A presetting module 21, for presetting a standard image in which reference mark points are provided, and obtaining the space coordinates and plane coordinates of the reference mark points. The standard image may, for example, be acquired by an image capture device, yielding the image coordinates of the reference mark points, while other precise spatial measurement equipment, such as a laser scanner or a structured-light scanner (e.g. Kinect), obtains the space coordinates and plane coordinates of the reference mark points in the standard image.
An obtaining module 22, for obtaining a current image containing the first target object and the mark point, and the plane coordinates of the mark point in the current image;
A matching module 23, for matching the mark point in the current image with the reference mark point in the standard image. Here a correspondence is first established between the plane coordinates of the mark point in the current image and the plane coordinates of the reference mark point in the standard image, and the mark point is then matched to the reference mark point.
By setting a standard image and reference mark points, a ready reference is available when the spatial position of the current image is obtained, which further ensures the stability and accuracy of the target tracking device of this embodiment of the present invention.
Further, the tracking device 30 also includes:
An acquisition unit, for acquiring the mark points;
A screening unit, for screening target mark points from the mark points.
Specifically, when there are multiple mark points, the camera acquires all mark points corresponding to the first target object, the mark points most relevant to the first target object are selected from among them, and a corresponding image processing algorithm then extracts the mark points from the image, the extraction being carried out according to the features of the mark points. In general, the feature extraction method applies a feature extraction function H to the image I to obtain a feature score for each pixel, and filters out the mark points whose score is sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))
F = { arg_(x,y) ( S(x, y) > s0 ) }
In the above formulas, H is the feature extraction function; I(x, y) is the image value at each pixel (x, y), which may be a gray value, a three-channel color energy value, or the like; S(x, y) is the feature score of pixel (x, y) after feature extraction; s0 is a feature score threshold. Pixels with S(x, y) greater than s0 are considered mark points, and F is the mark point set. Preferably, the embodiment of the present invention uses infrared mark points and an infrared camera, whose energy features in the image are more distinct. Since a narrow-band infrared LED and a matching infrared camera are used, most pixels of the image have very low energy, and only the pixels corresponding to the mark points have high energy. The corresponding function H(x, y) can therefore be: apply a thresholding operator to obtain a binary image B(x, y), perform region growing to obtain several sub-images, and extract the centroid of each sub-image. Meanwhile, to handle ambient stray light that may still be imaged by the infrared camera, constraints such as the spot area of a mark point and the positional relationships of the mark points in the two-dimensional image can be added during infrared mark point extraction to screen the extracted mark points.
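The extraction pipeline described above (threshold, region growing, centroid, area constraint) can be sketched as follows; this is an illustrative reading of the patent's description, with a synthetic image and threshold chosen for demonstration.

```python
import numpy as np

# Sketch of H: threshold the image into B(x, y), grow 4-connected regions,
# apply a minimum-area constraint, and take each region's centroid as a
# candidate mark point. The image and threshold s0 are assumed test data.

def extract_mark_points(img, s0, min_area=1):
    B = img > s0                          # binary image after thresholding
    visited = np.zeros_like(B, dtype=bool)
    points = []
    h, w = B.shape
    for sy in range(h):
        for sx in range(w):
            if B[sy, sx] and not visited[sy, sx]:
                stack, region = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:              # region growing from the seed pixel
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and B[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(region) >= min_area:   # spot-area screening constraint
                    ys, xs = zip(*region)
                    points.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return points

# Two bright 2x2 spots on a dark background:
img = np.zeros((8, 8))
img[1:3, 1:3] = 200.0
img[5:7, 4:6] = 220.0
print(extract_mark_points(img, s0=100))   # [(1.5, 1.5), (4.5, 5.5)]
```

Raising `min_area` implements the spot-area constraint mentioned above for rejecting small stray-light speckles.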
When the number of cameras is greater than 1, the mark points in the images obtained by different cameras at the same moment (or nearly the same moment) need to be matched, to provide the conditions for subsequent three-dimensional reconstruction of the mark points. The matching method depends on the feature extraction function H. Classic gradient- and gray-level-based feature point extraction operators such as Harris, SIFT, or FAST, together with the matching procedures that accompany them, can be used to obtain and match mark points. Matching can also use the epipolar constraint, prior conditions on the mark points, and the like. The method of screening matches using the epipolar constraint is as follows:
According to the principle that the projections of the same spatial point on the images of two different cameras lie in the same plane, for a mark point p0 in one camera c0 we can compute an epipolar line in the other camera c1, and the mark point p1 in camera c1 corresponding to p0 must satisfy the following relationship:
[p1; 1]^T F [p0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. Using this relationship, the number of candidates for the mark point p1 can be greatly reduced and the matching accuracy improved.
In addition, the prior conditions on the mark points that can be used include their spatial ordering, their size, and so on. For example, according to the mutual positional relationship of the two cameras, the images they capture can be warped so that the two pixels of every pair corresponding to the same spatial point are equal in some dimension, e.g. on the y-axis; this process is known as image rectification. Matching of the mark points can then be performed according to their x-axis order: the smallest x corresponds to the smallest x, and so on, up to the largest x corresponding to the largest x.
Based on the number of tracking cameras, the target tracking device of the present invention is discussed in detail below.
Refer to Fig. 7, which shows the concrete structure of the reconstruction unit in Fig. 5. As shown in Fig. 7, in this embodiment, when the tracking device 30 tracks no more than four mark points corresponding to the first target object and a monocular camera is used to obtain the location information of the mark points, the reconstruction unit further includes:
A first computing module 31, for calculating the homography between the current image and the standard image according to the plane coordinates of the mark point in the current image, the plane coordinates of the reference mark point in the standard image, and an assumed condition on the scene where the first target object is located. The mark point in the current image is matched with the reference mark point in the standard image, and the homography between the current image and the standard image is computed from their respective plane coordinates. The homography here is the homography of projective geometry, a transformation commonly applied in the computer vision field.
A first reconstruction module 32, for calculating, according to the homography, the rigid transformation from the spatial position of the mark point at the moment the standard image was shot to its spatial position at the current time, then calculating the spatial position of the mark point at the current time, and finally calculating the current spatial position of the first target object from the current spatial position of the mark point.
Specifically, as the assumed scene condition, we may assume that during the rigid transformation of the mark point the value of some dimension remains constant. For example, in a three-dimensional scene with space coordinates x, y, z, where x and y are parallel to the x- and y-axes of the camera image coordinates (plane coordinates) and z is perpendicular to the camera image, the assumed condition may be that the coordinate of the mark point on the z-axis is constant, or that its coordinate on the x-axis and/or y-axis is constant. Different scene assumptions call for somewhat different estimation methods. As another example, under an assumed condition in which the rotation angle between the orientation of the first target object and the orientation of the camera remains constant during use, the current spatial position of the first target object can be inferred from the ratio between the mutual distances of the mark points in the current image and their mutual distances in the standard image.
With the above calculation method, the spatial position of the first target object can be reconstructed with a monocular camera when there are no more than four mark points. The operation is simple and the tracking result fairly accurate, and, since only a monocular camera is used, the cost of tracking the first target object is reduced.
In the above method of recovering the three-dimensional coordinates of an object from acquired images, a single camera acquires relatively little image information, so the number of mark points needs to be increased to provide more image information for calculating the three-dimensional coordinates of the object. According to machine vision theory, inferring the stereo information of a scene from a single image requires determining at least five calibration points in the image. The monocular scheme therefore increases the number of mark points and the complexity of the design; at the same time, only one camera is needed, which reduces the complexity of image acquisition and lowers the cost.
Refer to Fig. 8, which shows the concrete structure of a second variant embodiment of the reconstruction unit in Fig. 5. As shown in Fig. 8, in this embodiment, when there are five or more mark points and a monocular camera is used to obtain the location information of the mark points, the reconstruction unit further includes:
A second computing module 33, for calculating the homography between the current image and the standard image according to the plane coordinates of the mark point in the current image and the plane coordinates of the reference mark point in the standard image.
A second reconstruction module 34, for calculating, according to the homography, the rigid transformation from the spatial position of the mark point at the moment the standard image was shot to its spatial position at the current time, then calculating the spatial position of the mark point at the current time, and calculating the current spatial position of the first target object from the current spatial position of the mark point.
First a standard image is acquired; the spatial positions of the reference mark points are measured with a device such as a precise depth camera or laser scanner, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference mark points are obtained at the same time.
In use, the camera continuously captures the two-dimensional image coordinates of all mark points in the current image containing the first target object and, assuming the relative positions between the mark points do not change, computes the rigid transformation between the mark points in the current state and the mark points at the time the standard image was shot, from the current two-dimensional image coordinates and the two-dimensional coordinates of the reference mark points in the standard image. From this, the transformation of the spatial positions of the mark points relative to the standard image is computed, yielding the current spatial positions of the mark points.
Here, with five or more points, the rigid transformation [R | T] between the spatial positions of the current mark points and their spatial positions at the time the standard image was shot can be computed. Preferably, the five or more points are not coplanar, and the projection matrix P of the camera has been calibrated in advance. The concrete way of calculating [R | T] is as follows:
The homogeneous coordinates of each mark point in the standard image and the current image are X0 and Xi respectively. The two satisfy the constraint X0 P^(-1) [R | T] P = Xi. All mark points together form a system of equations whose unknown parameter is [R | T]. When the number of mark points is greater than 5, [R | T] can be solved; when it is greater than 6, an optimal solution for [R | T] can be sought, for example by singular value decomposition (SVD) and/or by computing a nonlinear optimal solution iteratively. After the spatial positions of the mark points are computed, the spatial position of the first target object (e.g. the human eye) can be inferred from the previously calibrated mutual positional relationship between the mark points and the first target object.
This embodiment uses only one camera and five or more mark points, yet can accurately reconstruct the spatial position of the first target object; it is both simple to operate and low in cost.
Refer to Fig. 9, which shows the concrete structure of a third variant embodiment of the reconstruction unit in Fig. 5. As shown in Fig. 9, this embodiment uses two or more cameras and one or more mark points. When a binocular or multi-lens camera is used to obtain the location information of the mark points, the reconstruction unit further includes:
A third computing module 35, which calculates the spatial position of each mark point at the current time using the binocular or multi-view stereo reconstruction principle. The binocular or trinocular reconstruction principle may, for example, use the parallax between mark points matched across the left and right cameras to calculate the spatial position of each mark point at the current time, or be realized by other existing common methods.
A third reconstruction module 36, which calculates the current spatial position of the first target object from the spatial positions of the mark points at the current time.
Specifically, the mutual positional relationships among the cameras are first calibrated using a multi-camera calibration method. Then, in use, the mark point coordinates are extracted from the image acquired by each camera, the mark points are matched to obtain their correspondences across the cameras, and the spatial positions of the mark points are then calculated from the matched mark points and the mutual positional relationships of the cameras.
In a specific example, a multi-lens camera (i.e. the number of cameras is greater than or equal to 2) shoots the mark points to realize stereo reconstruction. Given the coordinate u of a mark point in the image shot by a certain camera and that camera's parameter matrix M, we can compute a ray on which this mark point lies in space:
α_j u_j = M_j X,  j = 1 … n (where n is a natural number greater than or equal to 2)
Similarly, according to the above formula, the rays corresponding to the other cameras can be computed for this mark point. In theory these rays converge at one point, the spatial position of the mark point. In practice, owing to the digitization error of the cameras and errors in the calibration of the intrinsic and extrinsic parameters, the rays do not converge at a single point, so the spatial position of the mark point must be approximated by triangulation; for example, a least-squares criterion may determine the point closest to all the rays as the object point.
After the spatial positions of the mark points are computed, the spatial position of the first target object (e.g. the human eye) can be inferred from the previously calibrated mutual positional relationship between the mark points and the first target object.
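The least-squares criterion just described, the point nearest to all rays, has a small closed form; the sketch below is a generic illustration on assumed camera positions, not the patent's exact procedure.

```python
import numpy as np

# Sketch: triangulate a mark point as the point minimizing the sum of squared
# distances to a set of rays (camera center o_j, direction d_j). For each ray
# the distance uses the projector (I - d d^T) orthogonal to the ray.

def triangulate(origins, directions):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)           # normal equations of the LS problem

# Assumed test rig: cameras at the origin and at (1, 0, 0), both looking at
# the same point, so the rays intersect exactly.
target = np.array([0.5, 0.5, 2.0])
origins = [np.array([0., 0., 0.]), np.array([1., 0., 0.])]
directions = [target - o for o in origins]
X = triangulate(origins, directions)
print(np.allclose(X, target))   # True
```

With noisy, non-intersecting rays the same solve returns the least-squares point; the system is singular only when all rays are parallel.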
Among the above methods of realizing stereo reconstruction with a multi-lens camera, a preferable method is computation with a binocular camera. Its principle is the same as the multi-lens reconstruction principle above: the spatial position of a mark point is calculated from the mutual positional relationship of the two cameras and the two-dimensional coordinates of the mark point in the two camera images. The slight difference is that the binocular cameras are placed in parallel; after a simple calibration, image rectification as described above is applied to the images of the two cameras so that two mutually matched two-dimensional mark points are equal on the y- (or x-) axis, and the depth of the mark point from the cameras can then be calculated from the gap between the rectified two-dimensional mark points on the x- (or y-) axis. This method can be regarded as the specialization of multi-view stereo reconstruction to the binocular case; it simplifies the steps of stereo reconstruction and is easier to implement in hardware.
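For a rectified parallel rig, the depth computation reduces to one line; the focal length and baseline below are assumed example values, not parameters from the patent.

```python
# Sketch: depth of a mark point from a rectified parallel binocular rig,
# Z = f * B / d, where d = xL - xR is the x-axis disparity after rectification.

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    d = x_left - x_right          # disparity on the x-axis
    if d <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_m / d

# A mark point at x = 400 px in the left image and x = 380 px in the right,
# with an assumed focal length of 800 px and a 65 mm baseline:
Z = depth_from_disparity(400.0, 380.0, focal_px=800.0, baseline_m=0.065)
print(round(Z, 3))   # 2.6
```

The 65 mm baseline echoes the interpupillary-distance range (58 mm to 64 mm) cited later for the positioning bracket, though any calibrated baseline works.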
Embodiment 4
Refer to Fig. 10, which shows the structure of a positioning bracket on which mark points corresponding to the first target object are set, for use in the tracking device of Fig. 5. As shown in Fig. 10, the present invention provides a positioning bracket located in front of the human eye (the first target object). Its structure resembles glasses and it is worn like glasses, comprising: a crossbeam 11, fixing portions 12, a support portion 13, and a control unit 14. Mark points 111 are provided on the crossbeam 11; the support portion 13 is set on the crossbeam 11; the ends of the fixing portions 12 are pivotally connected to the crossbeam 11. The positions at which the mark points 111 are set correspond to the position of the human eye (the first target object): the spatial position information of the mark points 111 is obtained, and the spatial position information of the eye is then calculated accordingly. When the person's head moves, the mark points 111 corresponding to the eyes move with it; the camera tracks the movement of the mark points 111, the target object tracking scheme of the foregoing Embodiment 1 obtains the spatial position information of the mark points 111, and the spatial position of the human eye (the first target object), i.e. its three-dimensional coordinates in space, is reconstructed using the relative spatial positional relationship between the mark points 111 and the eye.
In this embodiment, the crossbeam 11 is a strip with a certain curvature, close to the curvature of a person's forehead, for ease of use. The crossbeam 11 includes an upper surface 112, a lower surface opposite it, and a first surface 114 and a second surface set between the upper and lower surfaces.
In this embodiment, the mark points 111 are three LED lamps, evenly spaced on the first surface 114 of the crossbeam 11. It is understood that there may instead be one, two, or more mark points 111, and that they may be any light source, including LED lamps, infrared lamps, ultraviolet lamps, etc. The arrangement and positions of the mark points 111 may also be adjusted as needed.
It is understood that the crossbeam 11 may also be designed to be straight or of other shapes as needed.
In this embodiment, there are two fixing portions 12, pivotally connected to the two ends of the crossbeam 11. The two fixing portions 12 can fold inward toward each other, and can each be unfolded outward to an interior angle of about 100° with the crossbeam 11; the size of the interior angle can be adjusted according to practical needs. It should be understood that there may also be a single fixing portion 12.
The end of the fixing portion 12 away from the crossbeam 11 is bent along the extending direction of the support portion 13, so that the end of the fixing portion 12 can be fixed on a person's ear.
In this embodiment, the support portion 13 is strip-shaped, set at the middle of the lower surface 113 of the crossbeam 11 and extending downward. Further, the end of the support portion 13 away from the crossbeam 11 is provided with a nose pad 131, which fits the bridge of the nose and positions the positioning device above the human eyes. It should be understood that in other embodiments, if the nose pad 131 is not provided, the support portion 13 may be made Y-shaped, extending downward from the middle of the crossbeam 11, so that the positioning device fits the bridge of the nose and is set above the human eyes.
The control unit 14 is a rounded cuboid set on the fixing portion 12. The control unit 14 supplies power to the LED lamps, infrared lamps, or ultraviolet lamps and/or controls their operating state, and comprises a power switch 141, a power indicator, and a charging indicator. It is understood that the shape of the control unit 14 is not limited; it may take any shape, or be an integrated chip. The control unit 14 may also be set at other positions, for example on the crossbeam 11.
In use, the power switch 141 is turned on, the power indicator shows that the LEDs are powered, and the LED lamps light up; when the battery is low, the charging indicator prompts that the power is insufficient; when the power switch is turned off, the power indicator goes out, indicating that the LEDs are off, and the LED lamps are extinguished.
Since the interpupillary distance of people ranges from 58 mm to 64 mm, it can be approximately regarded as a constant. The positioning bracket provided by the present invention resembles a spectacle frame and is fixed above the human eyes; the mark points are set at predetermined positions of the positioning device as needed, so that the position of the eyes can be determined simply and conveniently from the positions of the mark points. The positioning device is simple in structure and easy to design and use.
Embodiment Two
Refer to Figs. 11 to 14. Fig. 11 is a flow diagram of the intubation-mirror-based stereoscopic display method of Embodiment Two of the present invention; Fig. 12 is a detailed flow diagram of S1 in Fig. 11; Fig. 13 is a detailed flow diagram of S12 in Fig. 12; Fig. 14 is a detailed flow diagram of S3 in Fig. 11. As shown in Figs. 11 to 14, the intubation-mirror-based stereoscopic display method of Embodiment Two of the present invention mainly comprises the following steps:
S0 Image shooting step: shoot the stereo image of the second target object, and send in real time the information of the shot stereo image of the second target object, the information including left view information and right view information.
S1 Obtain the location information of the first target object, using the tracking device to track the position of the first target object, for example the location of the viewer.
S2 Obtain the grating parameters of the spectrophotometric unit of the stereoscopic display device and the display parameters of the display unit. The grating parameters of the spectrophotometric unit mainly include the pitch of the grating, the tilt angle of the grating relative to the display panel, the placement distance of the grating relative to the display panel, and the like.
S3 According to the location information, the grating parameters, and the display parameters, process in real time the received stereo image shot by the image shooting unit. Before the stereo image is played, it needs to be processed in advance in combination with the position information of the human eye, the grating parameters, and the display parameters of the display unit, so as to provide the viewer with the best stereoscopic display effect.
S4 Display the image to be played.
With the intubation-mirror-based stereoscopic display method of the present invention, by obtaining the location information of the first target object and the grating parameters in time and performing image processing directly on that basis, the speed of image playback is increased, the requirement of real-time stereoscopic display can be met, and surgical operation is facilitated, helping doctors improve the operation success rate.
In addition, the second target object here mainly refers to the various scenes shot by the camera, which may be an actual person, a ball game being broadcast live, an image inside a patient's body shot by some device, and so on. By shooting the stereo image in real time and displaying it on the display unit in real time, without additional image processing, the various scenes shot are shown promptly and faithfully, meeting the user's demand for real-time display and improving the user experience.
In a specific variant embodiment, the above step S0 further includes an image acquisition step: acquire the stereo image of the second target object and extract the left view information and right view information from the stereo image. By extracting the left and right view information of the stereo image while it is being shot, the speed of image processing is increased and the display effect of real-time stereoscopic display is guaranteed.
Embodiment 5
Referring to Fig. 12, Embodiment 5 of the present invention mainly describes in detail how S1 obtains the location information of the first target object. The first target object is, for example, a position related to a person's viewing, such as the human eye, the person's head, face, or upper body. The above "S1 Obtain the location information of the first target object" mainly comprises the following steps:
S11 Set a mark point corresponding to the spatial position of the first target object. The mark point may be set on the first target object itself, or not on it but on another object that has a fixed relative position to the first target object and moves synchronously with it. For example, if the target object is the human eye, mark points may be set around the eye socket; or a positioning bracket may be arranged around the eye and the mark points placed on the frame of the positioning bracket, or on the ear, whose position relative to the eye is fixed. The mark point may be any of various signal-transmitting components, such as an infrared emitter, an LED lamp, a GPS sensor, or a laser positioning sensor; it may also be another physical marker that can be captured by a camera, for example an object with a distinctive shape feature and/or color feature. Preferably, to avoid interference from ambient stray light and improve the robustness of mark point tracking, a narrow-spectrum infrared LED is used as the mark point, together with an infrared camera matched to that spectrum so that only the infrared LED mark point is captured. Since ambient stray light is mostly irregular in shape and uneven in luminance, the mark point can be arranged to emit a regularly shaped light spot of high intensity and uniform brightness. Furthermore, multiple mark points may be provided, each corresponding to one spot, the mark points forming a regular geometric shape such as a triangle or quadrangle, so that the mark points are easy to track, their spatial position information is easy to obtain, and spot extraction is more accurate.
S12: obtaining the position information of the mark point. This can be done by receiving the signal emitted by the mark point to determine its position, or by shooting an image containing the mark point with a camera and extracting the mark point from the image, the position information of the mark point then being obtained by an image processing algorithm.
S13: reconstructing the spatial position of the first target object according to the position information of the mark point. After the position information of the mark point has been acquired, the spatial position of the mark point is reconstructed, and then, according to the relative positional relationship between the mark point and the first target object, the spatial position of the mark point is transformed into the spatial position of the first target object (for example, the spatial positions of a person's left and right eyes).
In this embodiment of the present invention, the position information of the mark point corresponding to the first target object is obtained, and the spatial position of the first target object is reconstructed from that position information. Compared with prior-art eye-capturing devices that use a camera and two-dimensional image feature analysis to obtain the eye position, or that use the iris reflex or other effects to capture the eyes, this approach has good stability, high accuracy of the captured eye position information, and low cost, and it places no particular requirement on the distance between the tracking device and the first target object.
Referring to Figure 13, the above step S12 further comprises:
S121: presetting a standard image in which reference mark points are provided, and obtaining the space coordinates and plane coordinates of the reference mark points. The standard image can, for example, be acquired by the image capture device; the image coordinates of the reference mark points are obtained, and accurate spatial measurement equipment such as a laser scanner or a structured-light scanner (e.g. Kinect) is used to obtain the space coordinates and plane coordinates of the reference mark points in the standard image.
S122: obtaining a current image containing the target object and the mark point, together with the plane coordinates of the mark point in the current image;
S123: matching the mark point in the current image with the reference mark point of the standard image. Here a correspondence is first established between the plane coordinates of the mark point in the current image and the plane coordinates of the reference mark point in the standard image, and then the mark point is matched with the reference mark point.
By setting a standard image and reference mark points, a reference is readily available when obtaining the spatial position from the current image, which further guarantees the stability and accuracy of the target tracking method of this embodiment of the present invention.
Further, before the above step S11 the method further includes: S10, calibrating the camera used to obtain the position information of the mark point.
The above calibration can be carried out in the following ways:
(1) When the camera of S10 is a monocular camera, the common Zhang chessboard calibration algorithm can be used, for example calibrating with the following formula:

s·m' = A[R|t]M'  (1)

In formula (1), A is the intrinsic parameter matrix, R is the extrinsic rotation matrix, t is the translation vector, m' is the coordinate of the image point in the image, and M' is the space coordinate of the object point (i.e. its three-dimensional coordinate in space); A, R and t take the standard forms of the Zhang calibration method, namely the 3×3 intrinsic matrix A = [α γ u0; 0 β v0; 0 0 1], a 3×3 rotation matrix R, and the translation vector t = (t1, t2, t3)ᵀ.

Of course there are many kinds of camera calibration algorithms, and other calibration algorithms common in the industry can also be used; the present invention is not limited in this respect. A calibration algorithm is used mainly to improve the accuracy of the first-target-object tracking method of the invention.
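As a concrete illustration of the projection relation s·m' = A[R|t]M' used by the calibration above, the following sketch (Python with NumPy; the intrinsic and extrinsic values are illustrative, not taken from the patent) projects an object point to pixel coordinates:

```python
import numpy as np

# Intrinsic matrix A (focal lengths and principal point are illustrative values).
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

# Extrinsic parameters: identity rotation R and a small translation t.
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])

def project(A, R, t, M):
    """Project a 3-D object point M (shape (3,)) to pixel coordinates m'
    according to s*m' = A[R|t]M', with M' in homogeneous form."""
    Rt = np.hstack([R, t])        # 3x4 matrix [R|t]
    M_h = np.append(M, 1.0)       # homogeneous object point M'
    s_m = A @ Rt @ M_h            # s * m'
    return s_m[:2] / s_m[2]       # divide out the scale s

m = project(A, R, t, np.array([0.0, 0.0, 2.0]))
# The point lies 2 m in front of the camera and is offset 0.1 m by t,
# so it projects to the right of the principal point.
```

During calibration, A, R and t are the unknowns estimated from many such point correspondences on the chessboard; here they are given, which lets the equation itself be checked directly.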
(2) When the camera of S10 is a binocular camera or a multi-lens camera, calibration is carried out with the following steps:

S101: first calibrate any one lens of the binocular or multi-lens camera, using the common Zhang chessboard calibration algorithm, for example with the following formula:

s·m' = A[R|t]M'  (1)

In formula (1), A is the intrinsic parameter matrix, R is the extrinsic rotation matrix, t is the translation vector, m' is the coordinate of the image point in the image, and M' is the space coordinate of the object point; A, R and t take the same standard forms as in case (1) above.
S102: calculate the relative rotation matrix and the relative translation amount between the cameras of the binocular or multi-lens camera, for example with the relative rotation matrix R₂₁ = R₂·R₁⁻¹ and the relative translation amount T₂₁ = T₂ − R₂₁·T₁, where (Rᵢ, Tᵢ) are the extrinsic parameters of camera i obtained in S101.
Of course, the above calibration algorithm for binocular or multi-lens cameras is only one typical kind; other calibration algorithms common in the industry can also be used, and the present invention is not limited in this respect. A calibration algorithm is used mainly to improve the accuracy of the first-target-object tracking method of the invention.
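The relative pose of step S102 can be sketched as follows (a minimal NumPy example, assuming world-to-camera extrinsics Xᵢ = Rᵢ·X + tᵢ from S101; the baseline value is illustrative):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Given world-to-camera extrinsics (Ri, ti) of two calibrated cameras,
    return the rotation and translation mapping camera-1 coordinates to
    camera-2 coordinates: X2 = R_rel @ X1 + T_rel."""
    R_rel = R2 @ R1.T          # relative rotation matrix
    T_rel = t2 - R_rel @ t1    # relative translation amount
    return R_rel, T_rel

# Example: camera 1 at the origin, camera 2 shifted along x
# (a typical stereo baseline; the numbers are illustrative).
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.06, 0.0, 0.0])
R_rel, T_rel = relative_pose(R1, t1, R2, t2)
```

Any world point must then map consistently through either route, which is what makes the pair (R_rel, T_rel) usable for the later stereo reconstruction steps.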
Further, between the above S11 and S12 the method further includes:

S14: capturing the mark points;

S15: screening target mark points from the captured mark points.
Specifically, when there are multiple mark points, the camera captures all the mark points corresponding to the first target object, the mark points most relevant to the first target object are selected from among them, and a corresponding image processing algorithm is then used to extract the mark points from the image, the extraction being carried out according to the features of the mark points. Generally, the method of extracting the features of the mark points is to apply a feature extraction function H to the image I to obtain the feature score of each point in the image, and to filter out the mark points whose feature value is sufficiently high. This can be summarized with the following formulas:

S(x, y) = H(I(x, y))

F = {arg₍ₓ,ᵧ₎(S(x, y) > s0)}

In the above formulas, H is the feature extraction function; I(x, y) is the image value of each pixel (x, y), which can be a gray value, a three-channel color energy value, etc.; S(x, y) is the feature score of each pixel (x, y) after feature extraction; s0 is a feature-score threshold; points with S(x, y) greater than s0 are regarded as mark points; and F is the set of mark points. Preferably, this embodiment of the invention uses infrared mark points and an infrared camera, which makes the energy feature in the image more obvious. Since a narrow-band infrared LED lamp and a matching infrared camera are used, most pixels of the image have very low energy, and only the pixels corresponding to the mark points have high energy. The corresponding function H(x, y) can therefore apply a threshold segmentation operator to obtain a binary image B(x, y), perform region growing to obtain several sub-images, and extract the center of gravity of each sub-image. The feature extraction function H(x, y) can be a feature-point function such as Harris, SIFT or FAST, or an image processing function such as circular-spot extraction. Meanwhile, since stray light in the ambient illumination may also be imaged by the infrared camera, constraints such as the spot area produced by a mark point and the positional relationship of the mark points in the two-dimensional image can be added in the infrared mark-point extraction process to screen the extracted mark points.
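The threshold-plus-region-growing form of H described above can be sketched as follows (Python with NumPy; the image and threshold are illustrative, and real spot extraction would add the area and position constraints mentioned in the text):

```python
import numpy as np
from collections import deque

def extract_spots(I, s0):
    """A sketch of the feature function H for narrow-band infrared markers:
    pixels with S(x, y) = I(x, y) > s0 are kept (threshold segmentation),
    grown into connected regions, and the center of gravity of each
    region is returned as a marker candidate."""
    B = I > s0                       # binary image B(x, y)
    visited = np.zeros_like(B, dtype=bool)
    spots = []
    h, w = B.shape
    for y in range(h):
        for x in range(w):
            if B[y, x] and not visited[y, x]:
                # region growing (4-neighbourhood BFS) from this seed pixel
                queue, pixels = deque([(y, x)]), []
                visited[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and B[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                spots.append((sum(xs) / len(xs), sum(ys) / len(ys)))  # centroid (x, y)
    return spots

# A mostly dark image with one bright 2x2 spot, as produced by an
# infrared LED marker seen through a matching narrow-band camera.
img = np.zeros((8, 8))
img[3:5, 4:6] = 255.0
spots = extract_spots(img, s0=128)
```

Because the narrow-band filter suppresses almost everything except the LED spots, this simple thresholding is usually sufficient before the later geometric screening.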
When the number of cameras is greater than 1, the mark points in the images obtained by different cameras at the same moment (or close to the same moment) need to be matched, to provide the conditions for the subsequent three-dimensional reconstruction of the mark points. The method of mark-point matching depends on the feature extraction function H. Classic gradient- and grayscale-based feature-point extraction operators such as Harris, SIFT or FAST, together with their corresponding matching methods, can be used to obtain and match the mark points. Matching can also be carried out using the epipolar constraint, prior conditions on the mark points, and the like. The method of matching and screening using the epipolar constraint is as follows: according to the principle that the projections of the same spatial point on the images of two different cameras all lie in the same (epipolar) plane, for a mark point p0 in one camera c0 we can compute an epipolar line equation in the other camera c1, and the mark point p1 on camera c1 corresponding to mark point p0 must satisfy the following relationship:

[p1; 1]ᵀ F [p0; 1] = 0

In the above formula, F is the fundamental matrix from camera c0 to camera c1. By using this relationship, we can greatly reduce the number of candidates for the mark point p1 and improve the matching accuracy.
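The epipolar screening can be sketched numerically (NumPy; the toy setup assumes identity intrinsics and no rotation, so the fundamental matrix reduces to the skew matrix of the baseline, which is an illustrative simplification):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(F, p0, p1):
    """|[p1;1]^T F [p0;1]|: near zero when p1 in camera c1 can correspond
    to p0 in camera c0, large when the pair violates the constraint."""
    return abs(np.append(p1, 1.0) @ F @ np.append(p0, 1.0))

# Two cameras with identity intrinsics, no rotation, baseline t.
t = np.array([1.0, 0.0, 0.0])
F = skew(t)

X = np.array([0.5, -0.2, 4.0])      # a marker point in space
p0 = X[:2] / X[2]                   # projection in camera c0
p1 = (X - t)[:2] / X[2]             # projection in camera c1

res_true = epipolar_residual(F, p0, p1)                            # consistent pair
res_false = epipolar_residual(F, p0, p1 + np.array([0.0, 0.1]))    # off the epipolar line
```

Thresholding the residual keeps only candidate matches near the epipolar line, which is exactly the candidate reduction described above.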
In addition, prior conditions on the mark points can be used, such as the spatial order of the mark points and their sizes. For example, according to the mutual positional relationship of the two cameras, the captured images can be warped so that the two pixels of every pair corresponding to the same spatial point are equal in some dimension, such as the y-axis; this process is also called image rectification. The mark points can then be matched according to their x-axis order: the smallest x corresponds to the smallest x, and so on, up to the largest x corresponding to the largest x.
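For rectified images the ordering rule is a one-line pairing; the following sketch (illustrative coordinates, assuming rectification has already equalized the y values) pairs the spots and reads off the disparities:

```python
def match_by_x_order(left_pts, right_pts):
    """After rectification, matched points share the same y, so pairing
    reduces to sorting both point sets by x: smallest x with smallest x,
    and so on up to the largest x."""
    return list(zip(sorted(left_pts), sorted(right_pts)))

# Marker spots detected in a rectified stereo pair (illustrative values);
# each right-image x is smaller than its left-image x by the disparity.
left = [(240.0, 100.0), (120.0, 100.0), (300.0, 100.0)]
right = [(110.0, 100.0), (225.0, 100.0), (280.0, 100.0)]
pairs = match_by_x_order(left, right)
disparities = [l[0] - r[0] for l, r in pairs]
```

The disparities obtained here feed directly into the depth computation of the binocular case described later.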
The target tracking method of the invention is discussed in detail below according to the number of tracking cameras.
Referring to Figure 14, which is a schematic flowchart of the first variation of S13 in Figure 11. As shown in Figure 14, in this embodiment, when the first target object tracked by the first-target-object tracking method corresponds to no more than four mark points, and a monocular camera is used to obtain the position information of the mark points, the aforementioned step S13 further comprises:
S131: calculating the homography relationship between the current image and the standard image according to the plane coordinates of the mark point in the current image, the plane coordinates of the reference mark point of the standard image, and an assumed condition about the scene in which the first target object is located. The mark point of the current image is matched with the reference mark point in the standard image, and the homography between the current image and the standard image is computed from their respective plane coordinates. The so-called homography transform corresponds to the homography of projective geometry and is a transform commonly used in the field of computer vision.
S132: according to the homography relationship, calculating the rigid transformation from the spatial position of the mark point at the moment the standard image was shot to its spatial position at the current time, then calculating the spatial position of the mark point at the current time, and calculating the current spatial position of the first target object from the spatial position of the mark point at the current time.
Specifically, as the assumed condition about the scene, we can assume that during the rigid transformation of the mark points in the scene the value of some dimension remains constant. For example, in a three-dimensional scene with space coordinates x, y, z, where x and y are parallel to the x-axis and y-axis of the camera's image coordinates (plane coordinates) and z is perpendicular to the camera image, the assumed condition can be that the coordinate of the mark point on the z-axis is constant, or that its coordinate on the x-axis and/or y-axis is constant. Different assumed scene conditions lead to somewhat different estimation methods. As another example, under a different assumed condition in which the rotation angle between the orientation of the first target object and the orientation of the camera remains constant during use, the current spatial position of the first target object can be inferred from the ratio between the mutual distances of the mark points in the current image and their mutual distances in the standard image.
With the above calculation method, the spatial position of the first target object can be reconstructed with a monocular camera when the number of mark points is no more than four. The operation is simple and the tracking result is relatively accurate, and since only a monocular camera is used, the cost of first-target-object tracking is reduced.
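The homography between the standard image and the current image can be estimated from the matched mark points by the direct linear transform; the sketch below (NumPy SVD; the point data are illustrative) shows the core computation:

```python
import numpy as np

def find_homography(src, dst):
    """Direct linear transform: estimate H with dst ~ H @ src from at
    least 4 point correspondences, via the SVD of the stacked 2n x 9
    design matrix (the null vector gives the 9 entries of H)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    """Apply a homography to a 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Reference mark points in the standard image and the same markers in
# the current image, here related by a pure shift (illustrative data).
std = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
cur = [(2.0, 3.0), (3.0, 3.0), (3.0, 4.0), (2.0, 4.0)]
H = find_homography(std, cur)
```

Given H and the assumed scene condition, the rigid transformation of the mark points is then derived as described in S132.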
In the above method of restoring the three-dimensional coordinates of an object from images acquired with a single camera, the acquired image information is comparatively sparse, so the number of mark points needs to be increased to provide more image information for computing the three-dimensional coordinates of the object. According to machine vision theory, inferring the stereo information of a scene from a single image requires at least five calibration points to be determined in the image. The monocular scheme therefore increases the number of mark points, which also increases the design complexity; at the same time, however, only one camera is needed, which reduces the complexity of image acquisition and reduces cost.
Referring to Figure 15, which is a schematic flowchart of the second variation of S13 in Figure 11. As shown in Figure 15, in this embodiment, when the number of mark points is five or more, and a monocular camera is used to obtain the position information of the mark points, S13 further comprises:
S133: calculating the homography relationship between the current image and the standard image according to the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points of the standard image;

S134: according to the homography relationship, calculating the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculating the spatial positions of the mark points at the current time, and calculating the current spatial position of the first target object from the spatial positions of the mark points at the current time.
Specifically, a standard image is acquired first, the spatial positions of the reference mark points are measured with a device such as an accurate depth camera or a laser scanner, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference mark points are obtained at the same time. In use, the camera continuously captures the two-dimensional image coordinates of all mark points in the current image containing the first target object, and the rigid transformation between the mark points in the current state and the mark points at the time the standard image was shot is computed from these current two-dimensional image coordinates and the two-dimensional coordinates of the reference mark points of the standard image, under the assumption that the relative positions of the mark points do not change. The transformation of the spatial positions of the mark points relative to the standard image is then derived, so as to compute the spatial positions of the current mark points.
Here, the rigid transformation [R|T] between the spatial positions of the current mark points and the mark points at the time the standard image was shot can be computed using five or more points; preferably the five or more points do not lie in one plane, and the projection matrix P of the camera has been calibrated in advance. The concrete way of computing [R|T] is as follows: let X0 and Xi be the homogeneous coordinates of each mark point in the standard image and the current image respectively. The two satisfy the constraint X0·P⁻¹[R|T]P = Xi. All the mark points together form a system of equations whose unknown parameter is [R|T]. When the number of mark points is greater than 5, [R|T] can be solved; when the number of mark points is greater than 6, an optimal solution for [R|T] can be sought, for example by singular value decomposition (SVD) and/or by computing a nonlinear optimal solution iteratively. After the spatial positions of the mark points have been computed, the spatial position of the first target object (e.g. the human eyes) can be inferred from the previously calibrated mutual positional relationship between the mark points and the first target object.
This embodiment uses only one camera and five or more mark points, and can accurately reconstruct the spatial position of the first target object; it is both simple to operate and low in cost.
Referring to Figure 16, which is a schematic flowchart of the third variation of S13 in Figure 11. As shown in Figure 16, this embodiment uses two or more cameras and one or more mark points. When a binocular camera or a multi-lens camera is used to obtain the position information of the mark points, S13 further comprises:
S135: using the binocular or multi-view three-dimensional reconstruction principle, calculating the spatial position of each mark point at the current time. The so-called binocular or trinocular reconstruction principle can, for example, use the disparity between the mark points matched across the left and right cameras to calculate the spatial position of each mark point at the current time, or it can be realized with other existing common methods.
S136: calculating the current spatial position of the first target object according to the spatial positions of the mark points at the current time.
Specifically, the mutual positional relationship between the cameras is first calibrated using the multi-lens camera calibration method. Then, in use, mark-point coordinates are extracted from the image captured by each camera, the mark points are matched to obtain the corresponding mark point in each camera, and the spatial position of each mark point is then calculated from the matched mark points and the mutual positional relationship between the cameras.
In a specific example, a multi-lens camera (i.e. the number of cameras is greater than or equal to 2) is used to shoot the mark points and realize stereo reconstruction. Knowing the coordinate u of a mark point on the image shot by a certain camera and that camera's parameter matrix M, we can compute a ray on which this mark point lies in space:

αⱼ·uⱼ = Mⱼ·X,  j = 1…n (where n is a natural number greater than or equal to 2)

Similarly, according to the above formula, the corresponding rays of this mark point on the other cameras can also be computed. Theoretically, these rays converge at one point, namely the spatial position of the mark point. In practice, owing to the digitization error of the cameras and the errors in the calibration of the intrinsic and extrinsic parameters, the rays do not converge at a single point, so the spatial position of the mark point must be approximated by triangulation. For example, a least-squares criterion can be used to determine the point closest to all the rays as the object point.
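The ray equations αⱼ·uⱼ = Mⱼ·X can be solved jointly in the least-squares sense by eliminating the scales αⱼ and taking the SVD of the stacked constraints (linear triangulation). A minimal sketch, with illustrative projection matrices:

```python
import numpy as np

def triangulate(Ms, us):
    """Linear triangulation: given each camera's 3x4 projection matrix M_j
    and the marker's pixel u_j = (x, y), solve alpha_j * u_j = M_j @ X in
    the least-squares sense via the SVD of the stacked constraints."""
    rows = []
    for M, (x, y) in zip(Ms, us):
        rows.append(x * M[2] - M[0])   # eliminate the scale alpha_j
        rows.append(y * M[2] - M[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    X = Vt[-1]                          # null vector = homogeneous point
    return X[:3] / X[3]

# Two cameras with identity intrinsics: one at the origin, one shifted
# along x (illustrative projection matrices).
M0 = np.hstack([np.eye(3), np.zeros((3, 1))])
M1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0])
u0 = (M0 @ np.append(X_true, 1))[:2] / X_true[2]
u1 = (M1 @ np.append(X_true, 1))[:2] / X_true[2]
X_est = triangulate([M0, M1], [u0, u1])
```

With noisy pixel coordinates the same SVD returns the least-squares compromise point, playing the role of the least-squares criterion described above.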
After the spatial positions of the mark points have been computed, the spatial position of the first target object (e.g. the human eyes) can be deduced from the previously calibrated mutual positional relationship between the mark points and the first target object.
Among the above methods of realizing stereo reconstruction with a multi-lens camera, a preferable method is computation with a binocular camera. Its reconstruction principle is the same as that of the multi-lens camera above: the spatial position of a mark point is computed from the mutual positional relationship of the two cameras and the two-dimensional coordinates of the mark point in the two camera images. The slight difference is that the binocular cameras are laid out in parallel: after a simple calibration, image rectification as described above is applied to the images of the two cameras, so that two mutually matched two-dimensional mark points are equal on the y-axis (or x-axis); the depth of a mark point from the cameras can then be computed from the gap between the rectified two-dimensional mark points on the x-axis (or y-axis). This method can be regarded as the specialization of multi-view stereo reconstruction to the binocular case; it simplifies the steps of stereo reconstruction and is easier to realize in device hardware.
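For the parallel binocular case, the depth computation reduces to the standard relation Z = f·B/d, where f is the focal length, B the baseline and d the x-axis gap (disparity) of a rectified pair. A minimal sketch with illustrative values:

```python
def depth_from_disparity(f, baseline, x_left, x_right):
    """For parallel (rectified) binocular cameras, the depth of a marker
    is Z = f * B / d, where d = x_left - x_right is the gap between the
    matched rectified points on the x-axis."""
    d = x_left - x_right
    return f * baseline / d

# Focal length in pixels and baseline in metres (illustrative values).
Z = depth_from_disparity(f=800.0, baseline=0.06, x_left=420.0, x_right=404.0)
```

Larger disparities mean closer markers; the formula is why rectification (equalizing the y coordinates) is done first, since it makes d a pure x-axis gap.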
Embodiment 6
Referring to Figure 17, which is a schematic flowchart of S3 in Figure 11. As shown in Figure 17, based on the step S3 of the aforementioned Embodiment 2 and the previous embodiment, the stereo display method based on an intubation mirror of the present invention further comprises:
S301: an arrangement-parameter determination step, calculating the arrangement parameters on the display unit according to the obtained position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit;

S302: a parallax-image arrangement step, arranging the parallax images on the display unit according to the arrangement parameters;

S303: a parallax-image playing step, playing the parallax images.
Through the above steps, the stereo image to be played is rearranged, improving the stereoscopic display effect.
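At its simplest, the arrangement of S301 to S303 decides, per display column (or sub-pixel), whether it should carry the left or the right view; the following is a hypothetical minimal sketch in which a single 'phase' parameter stands in for the arrangement parameter derived from the viewer position and the grating parameters (a deliberate simplification, not the patent's actual arrangement algorithm):

```python
import numpy as np

def interleave(left, right, phase=0):
    """Minimal parallax-image arrangement for a lenticular or barrier
    display: alternate columns of the left and right views; 'phase'
    selects which column parity goes to which eye."""
    out = np.empty_like(left)
    out[:, phase::2] = left[:, phase::2]           # columns routed to the left eye
    out[:, 1 - phase::2] = right[:, 1 - phase::2]  # columns routed to the right eye
    return out

# Tiny single-channel left/right views (illustrative data).
L = np.full((2, 4), 1.0)
R = np.full((2, 4), 2.0)
frame = interleave(L, R, phase=0)
```

As the tracked viewer moves, the arrangement parameter (here the phase) is updated so that the grating keeps steering the correct columns to each eye, which is what makes the arrangement real-time.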
Further, before step S301 the method further includes: S304, a stereo-image obtaining step, obtaining the information of the stereo image shot in real time. Obtaining the stereo image information shot in real time while the parallax images are being played improves the efficiency of image processing: it not only guarantees real-time playing but also avoids the large data storage and large memory otherwise required by stereoscopic display images, thereby reducing cost.
The foregoing is merely the preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may be modified and varied in various ways. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (18)
1. A three-dimensional display system based on an intubation mirror, comprising: a display unit, a light-splitting unit, a tracking device and a monocular camera, the light-splitting unit being located on the display side of the display unit and being used to split the image displayed by the display unit spatially into a left view and a right view, the tracking device being used to obtain the position information of a first target object, and the monocular camera being used to shoot a second target object, characterized in that the three-dimensional display system based on an intubation mirror further includes an image playback processing device connected respectively with the tracking device, the display unit and the monocular camera; the image playback processing device includes an image playback processing unit which, according to the position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit, performs real-time arrangement processing on the received stereo image shot by the image shooting unit and, after processing, sends it to the display unit for real-time display;

the tracking device includes: a mark-point setting unit, an acquiring unit and a reconstruction unit, the acquiring unit including a presetting module, an obtaining module and a matching module, and the reconstruction unit including a first computing module and a first reconstruction module;

the mark-point setting unit sets a mark point corresponding to the spatial position of the first target object;

the presetting module presets a standard image in which reference mark points are provided, and obtains the space coordinates and plane coordinates of the reference mark points in the standard image;

the obtaining module obtains a current image containing the first target object and the mark point, together with the plane coordinates of the mark point in the current image;

the matching module matches the mark point in the current image with the reference mark point of the standard image;

the first computing module is used to calculate the homography relationship between the current image and the standard image according to the plane coordinates of the mark point in the current image, the plane coordinates of the reference mark point of the standard image, and an assumed condition about the scene in which the first target object is located;

the first reconstruction module is used to calculate, according to the homography relationship, the rigid transformation from the spatial position of the mark point at the moment the standard image was shot to its spatial position at the current time, then calculate the spatial position of the mark point at the current time, and calculate the current spatial position of the first target object from the spatial position of the mark point at the current time;

when the number of the mark points is five or more, and a monocular camera is used to obtain the position information of the mark points, the reconstruction unit further includes:

a second computing module, used to calculate the homography relationship between the current image and the standard image according to the plane coordinates of the mark points in the current image and the plane coordinates of the reference mark points of the standard image;

a second reconstruction module, used to calculate, according to the homography relationship, the rigid transformation from the spatial positions of the mark points at the moment the standard image was shot to their spatial positions at the current time, then calculate the spatial positions of the mark points at the current time, and calculate the current spatial position of the first target object from the spatial positions of the mark points at the current time.
2. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that the image shooting unit is arranged on the intubation mirror.
3. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that the image shooting unit includes a binocular camera, which shoots and obtains the stereo image of the second target object with two cameras.
4. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that the image shooting unit includes a multi-lens camera, which shoots and obtains the stereo image of the second target object with three or more cameras arranged in an array.
5. The three-dimensional display system based on an intubation mirror according to any one of claims 2 to 4, characterized in that the image shooting unit further comprises an acquisition unit, which is used to acquire the stereo image of the second target object and to extract left-view information and right-view information from the stereo image.
6. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that the tracking device includes a camera, which tracks the position changes of the first target object.
7. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that the tracking device includes an infrared receiver, which receives the infrared positioning signal sent by an infrared transmitter arranged in correspondence with the first target object.
8. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that, when a binocular camera or a multi-lens camera is used to obtain the position information of the mark points, the reconstruction unit further includes:

a third computing module, used to calculate the spatial position of each mark point at the current time using the binocular or multi-view three-dimensional reconstruction principle;

a third reconstruction module, used to calculate the current spatial position of the first target object according to the spatial positions of the mark points at the current time.
9. The three-dimensional display system based on an intubation mirror according to any one of claims 6 to 8, characterized in that the image playback processing device includes a storage unit; the storage unit is used to store the images transmitted by the image shooting unit, wherein the image playback processing unit is connected with the storage unit.
10. The three-dimensional display system based on an intubation mirror according to claim 9, characterized in that the image playback processing unit includes:

a stereo-image obtaining module, which obtains the information of the stereo image shot by the image shooting unit;

an arrangement-parameter determining module, which calculates the arrangement parameters on the display unit according to the obtained position information of the first target object and the grating parameters of the light-splitting unit;

a parallax-image arrangement module, used to arrange the parallax images of the stereo image on the display unit according to the arrangement parameters;

a parallax-image playing module, which plays the parallax images.
11. The three-dimensional display system based on an intubation mirror according to claim 1, characterized in that the light-splitting unit is a parallax barrier or a lenticular grating.
12. The three-dimensional display system based on an intubation mirror according to claim 11, characterized in that the lenticular grating is a liquid crystal lenticular grating.
13. The three-dimensional display system based on an intubation mirror according to any one of claims 1 to 4, characterized in that a bonding unit is provided between the light-splitting unit and the display unit, and the light-splitting unit is bonded to the display unit by the bonding unit.
14. The three-dimensional display system based on an intubation mirror according to claim 13, characterized in that the bonding unit includes a first substrate, a second substrate, and an air layer located between the first substrate and the second substrate.
15. A stereo display method based on an intubation mirror, wherein the method comprises the following steps:
S0: capturing a stereo image of a second target object, and sending the information of the stereo image in real time;
S1: acquiring the location information of a first target object;
S2: acquiring the grating parameters of the spectrophotometric unit of a display device and the display parameters of the display unit of the display device;
S3: performing real-time image-arrangement processing, according to the location information, the grating parameters, and the display parameters, on the received stereo image captured by the image capturing unit;
S4: displaying the image to be played;
wherein S1 further comprises:
S11: setting marker points corresponding to the spatial position of the first target object;
S12: acquiring the location information of the marker points;
S13: reconstructing the spatial position of the first target object according to the location information of the marker points;
and S12 further comprises:
S121: presetting a standard image in which reference marker points are provided, and acquiring the spatial coordinates and the plane coordinates of the reference marker points in the standard image;
S122: acquiring a current image containing the first target object and the marker points, together with the plane coordinates of the marker points in the current image;
S123: matching the marker points in the current image with the reference marker points of the standard image;
when the number of marker points is no more than four and the location information of the marker points is acquired with a monocular camera, S13 further comprises:
S131: calculating the homography relationship between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points in the standard image, and an assumed condition on the scene where the first target object is located;
S132: calculating, according to the homography relationship, the rigid transformation from the spatial position of the marker points at the moment the standard image was captured to their spatial position at the current moment, then calculating the spatial position of the marker points at the current moment, and calculating the current spatial position of the first target object according to the spatial position of the marker points at the current moment;
when the number of marker points is five or more and the location information of the marker points is acquired with a monocular camera, S13 further comprises:
S133: calculating the homography relationship between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image;
S134: calculating, according to the homography relationship, the rigid transformation from the spatial position of the marker points at the moment the standard image was captured to their spatial position at the current moment, then calculating the spatial position of the marker points at the current moment, and calculating the current spatial position of the first target object according to the spatial position of the marker points at the current moment.
16. The stereo display method based on an intubation mirror as claimed in claim 15, wherein S0 further comprises an image acquisition step: acquiring the stereo image of the second target object, and extracting left-view information and right-view information from the stereo image.
17. The stereo display method based on an intubation mirror as claimed in claim 15, wherein, when a binocular camera or a multi-view camera is used to acquire the location information of the marker points, S13 further comprises:
S135: calculating the spatial position of each marker point at the current moment using the binocular or multi-view three-dimensional reconstruction principle;
S136: calculating the current spatial position of the target object according to the spatial positions of the marker points at the current moment.
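The "binocular three-dimensional reconstruction principle" of S135 is not spelled out in the claims; a common concrete instance is linear triangulation from two calibrated views. A sketch under that assumption (projection matrices `P1`, `P2` and the function name are illustrative):

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear triangulation: recover the 3-D point observed at pixel pt1
    in camera 1 and pt2 in camera 2, given 3x4 projection matrices."""
    u1, v1 = pt1
    u2, v2 = pt2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3-D coordinates
```

Applying this to every matched marker point yields the per-point spatial positions of S135; S136 can then, for instance, fit a rigid transform of the marker constellation to locate the target object.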
18. The stereo display method based on an intubation mirror as claimed in any one of claims 15 to 17, wherein S3 further comprises:
S304, a stereo-image acquisition step: acquiring the information of the stereo image captured in real time;
S301, an image-arrangement-parameter determining step: calculating the image-arrangement parameters on the display unit according to the acquired location information of the first target object, the grating parameters of the spectrophotometric unit, and the display parameters of the display unit;
S302, a parallax-image arrangement step: arranging the parallax images of the stereo image on the display unit according to the image-arrangement parameters;
S303, a parallax-image playing step: playing the parallax images.
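The arrangement of S302 is not detailed in the claims; for a two-view parallax-barrier panel it is commonly a column interleave of the left and right views, with a phase offset derived from the tracked viewer position and the grating pitch. A minimal sketch under that assumption (the binary `offset` parameter stands in for the image-arrangement parameter of S301; names are illustrative):

```python
import numpy as np

def arrange_views(left, right, offset):
    """Interleave left/right views column-by-column for a two-view
    parallax-barrier display. `offset` (0 or 1) selects which columns
    carry the left view, shifting the arrangement as the viewer moves."""
    h, w = left.shape[:2]
    out = np.empty_like(left)
    cols = np.arange(w)
    left_mask = (cols + offset) % 2 == 0
    out[:, left_mask] = left[:, left_mask]     # even-phase columns: left view
    out[:, ~left_mask] = right[:, ~left_mask]  # odd-phase columns: right view
    return out
```

A real slanted-lenticular arrangement works per subpixel with a fractional pitch, but the phase-shift idea is the same: recomputing `offset` from the tracked eye position keeps each eye on its own view.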
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410848782.4A CN105812774B (en) | 2014-12-29 | 2014-12-29 | Three-dimensional display system and method based on intubation mirror |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105812774A CN105812774A (en) | 2016-07-27 |
CN105812774B true CN105812774B (en) | 2019-05-21 |
Family
ID=56420403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410848782.4A Expired - Fee Related CN105812774B (en) | 2014-12-29 | 2014-12-29 | Three-dimensional display system and method based on intubation mirror |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105812774B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109410277B (en) * | 2018-11-15 | 2022-03-08 | 广州科莱瑞迪医疗器材股份有限公司 | Virtual mark point filtering method and system |
EP4033758A4 (en) * | 2019-09-30 | 2024-01-17 | Beijing Ivisual 3D Tech Co Ltd | Method and apparatus for realizing 3d display, and 3d display terminal |
CN111009009A (en) * | 2019-12-11 | 2020-04-14 | 苏州新光维医疗科技有限公司 | Endoscope 3D image adjusting method |
CN110995997A (en) * | 2019-12-11 | 2020-04-10 | 苏州新光维医疗科技有限公司 | Image processing conversion method for single-lens endoscope |
CN113947652A (en) * | 2020-07-15 | 2022-01-18 | 北京芯海视界三维科技有限公司 | Method and device for realizing target object positioning and display device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101518438A (en) * | 2009-03-27 | 2009-09-02 | 南开大学 | Binocular endoscope operation visual system |
CN102075776A (en) * | 2011-01-18 | 2011-05-25 | 青岛海信电器股份有限公司 | Stereo display controlling method and device |
CN102596000A (en) * | 2010-07-09 | 2012-07-18 | 奥林巴斯医疗株式会社 | Stereoscopic endoscope |
CN103139592A (en) * | 2011-11-23 | 2013-06-05 | 韩国科学技术研究院 | 3d display system |
CN103234514A (en) * | 2013-04-03 | 2013-08-07 | 广州市佳思信息科技有限公司 | Methods for distance measurement and offset measurement with single camera and for realizing naked eye 3D grating adjustment |
CN103875243A (en) * | 2011-10-14 | 2014-06-18 | 奥林巴斯株式会社 | 3d endoscope device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1264062C (en) * | 2002-12-31 | 2006-07-12 | 清华大学 | Method of multi viewing angle x-ray stereo imaging and system |
US20050004919A1 (en) * | 2003-07-03 | 2005-01-06 | Sabre, Inc. | Systems, methods, and computer program products providing a generalized inventory system |
CN101562756A (en) * | 2009-05-07 | 2009-10-21 | 昆山龙腾光电有限公司 | Stereo display device as well as display method and stereo display jointing wall thereof |
CN101984670B (en) * | 2010-11-16 | 2013-01-23 | 深圳超多维光电子有限公司 | Stereoscopic displaying method, tracking stereoscopic display and image processing device |
KR101371387B1 (en) * | 2013-01-18 | 2014-03-10 | 경북대학교 산학협력단 | Tracking system and method for tracking using the same |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105791800B (en) | Three-dimensional display system and stereo display method | |
CN105812774B (en) | Three-dimensional display system and method based on intubation mirror | |
CN105809654B (en) | Target object tracking, device and stereoscopic display device and method | |
US9699438B2 (en) | 3D graphic insertion for live action stereoscopic video | |
CN106773080B (en) | Stereoscopic display device and display method | |
CN114895471A (en) | Head mounted display for virtual reality and mixed reality with inside-outside position tracking, user body tracking, and environment tracking | |
CN205758500U (en) | Capsule endoscope system | |
CN109491087A (en) | Modularized dismounting formula wearable device for AR/VR/MR | |
CN105942959A (en) | Capsule endoscope system and three-dimensional imaging method thereof | |
CN106840398A (en) | A kind of multispectral light-field imaging method | |
TW201503865A (en) | Information processing apparatus, information processing method, and information processing system | |
CN103995352A (en) | Head mounted display and control method for head mounted display | |
CN204578692U (en) | Three-dimensional display system | |
CN105812772B (en) | Medical image three-dimensional display system and method | |
CN103281507B (en) | Video-phone system based on true Three-dimensional Display and method | |
CN204636276U (en) | A kind of wearing die cavity mirror virtual display system | |
CN204377059U (en) | Based on the three-dimensional display system of soft lens | |
CN108778143A (en) | Computing device for laparoscopic image and ultrasonoscopy to be overlapped | |
WO2023029848A1 (en) | Dual input 3d near-eye imaging system | |
CN109068035A (en) | A kind of micro- camera array endoscopic imaging system of intelligence | |
CN102376207B (en) | LED three-dimensional display screen and manufacturing method thereof, display system and method | |
CN204377057U (en) | Based on the three-dimensional display system of intubate mirror | |
CN204377058U (en) | Based on the three-dimensional display system of hard mirror | |
CN104887316A (en) | Virtual three-dimensional endoscope displaying method based on active three-dimensional displaying technology | |
CN204539353U (en) | Medical image three-dimensional display system |
Legal Events
Date | Code | Title | Description
---|---|---|---
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190521; Termination date: 20211229 |