CN105812776A - Stereoscopic display system based on soft lens and method - Google Patents


Publication number
CN105812776A
CN105812776A (Application CN201410852264.XA)
Authority
CN
China
Prior art keywords
labelling point
unit
image
locus
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410852264.XA
Other languages
Chinese (zh)
Inventor
何建行
刘君
包瑞
邵文龙
郁树达
邢旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Mingyi Medical Charitable Foundation
SuperD Co Ltd
First Affiliated Hospital of Guangzhou Medical University
Original Assignee
Guangdong Mingyi Medical Charitable Foundation
SuperD Co Ltd
First Affiliated Hospital of Guangzhou Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Guangdong Mingyi Medical Charitable Foundation, SuperD Co Ltd, and First Affiliated Hospital of Guangzhou Medical University
Priority: CN201410852264.XA
Publication: CN105812776A
Legal status: Pending

Abstract

The invention belongs to the field of medical technology and provides a stereoscopic display system and method based on a soft lens (flexible endoscope). The system comprises a display unit, a light-splitting unit, a tracking device, and an image-capturing unit. The light-splitting unit spatially divides the image displayed by the display unit into a left view and a right view; the tracking device acquires the position information of a first target object; and the image-capturing unit photographs a second target object. The system further comprises image playback processing equipment, which processes the received stereoscopic image captured by the image-capturing unit in real time according to the position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit, and then sends the result to the display unit for display. Compared with the prior art, the system greatly increases image playback speed and can satisfy the requirements of real-time stereoscopic display; it is also convenient for doctors to operate and helps them improve the success rate of surgery.

Description

Three-dimensional display system and method based on soft lens
Technical field
The present invention relates to the field of medical equipment, and in particular to a soft-lens-based (flexible-endoscope-based) three-dimensional display system and display method for use in clinical medicine.
Background technology
An endoscope is a tube fitted with a light source; it can enter the stomach through the mouth or enter the body through other natural ducts. With an endoscope, a doctor can see lesions that X-rays cannot reveal, which makes it extremely useful: for example, a doctor can observe an ulcer or tumor in the stomach through the endoscope and devise the best treatment plan accordingly. An endoscope is an optical instrument composed of a cold-light-source lens, optical fibers, an image transmission system, a screen display system, and so on, and it can enlarge the surgical field of view. Its outstanding advantages are a small incision, an inconspicuous scar, a mild postoperative reaction, greatly reduced bleeding, bruising, and swelling, and a faster recovery than traditional surgery, which well satisfies the cosmetic requirement of leaving no visible trace.
Endoscopes are made in two kinds of material, hard and soft, called the medical hard endoscope and the soft lens. The earliest endoscopes, invented more than 100 years ago, were made with rigid tubes; although they gradually improved, they were never widely adopted. In the 1950s endoscopes began to be made with flexible tubes, which can bend easily around corners inside the human body.
Medical soft lenses include the gastroscope, colonoscope, duodenoscope, bronchoscope, nasopharyngolaryngoscope, choledochoscope, and so on.
At present a few medical endoscopes on the market can display 3D images, but only as glasses-assisted 3D that requires wearing 3D spectacles. The technique injects differently polarized light into the left and right eyes, producing parallax and a stereoscopic impression. Its drawback is the need to wear polarized filter glasses. On the one hand, the glasses reduce the light reaching the doctor's eyes to less than half of the original, wasting the light information that is most valuable inside a body cavity and reducing the doctor's ability to discern detail in the relatively dark cavity. On the other hand, for doctors who do not normally wear glasses, wearing polarized filter glasses during surgery easily causes discomfort; moreover, because a surgical mask is worn at the same time, breathing easily fogs the lens surface, seriously affecting surgical safety.
How to overcome these problems has therefore become a major technical challenge facing the medical community.
Summary of the invention
It is an object of the present invention to provide a soft-lens-based three-dimensional display system and display method, so as to solve one or more of the technical problems caused by the limitations and shortcomings of the prior art.
The invention provides a soft-lens-based three-dimensional display system comprising a display unit, a light-splitting unit, a tracking device, and an image-capturing unit. The light-splitting unit is located on the display side of the display unit and spatially divides the image displayed by the display unit into a left view and a right view. The tracking device acquires the position information of a first target object, and the image-capturing unit photographs a second target object. The system further includes image playback processing equipment connected to the tracking device, the display unit, and the image-capturing unit. According to the position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit, this equipment processes the received stereoscopic image captured by the image-capturing unit in real time, then sends the result to the display unit for real-time display.
The invention also provides a soft-lens-based stereoscopic display method comprising the following steps: S0, capture a stereoscopic image of the second target object and transmit its information in real time; S1, acquire the position information of the first target object; S2, acquire the grating parameters of the light-splitting unit of the display device and the display parameters of its display unit; S3, process the received stereoscopic image in real time according to the position information, the grating parameters, and the display parameters; S4, display the processed image.
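The S0-S4 flow above is a straightforward capture, track, layout, display loop. A minimal schematic sketch follows; the callable interface and stage names are illustrative assumptions, not part of the patent:

```python
def run_pipeline(capture, locate, layout, show):
    """One iteration of the S0-S4 flow; each stage is passed in as a callable."""
    stereo = capture()          # S0: shoot the stereo image of the second target object
    pos = locate()              # S1: position of the first target object (the viewer)
    plan = layout(pos)          # S2-S3: layout plan from position + grating/display parameters
    return show(stereo, plan)   # S4: display the processed image
```

In a real system this loop would run once per captured frame, with the layout stage re-run whenever the tracked position changes.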
With the soft-lens-based three-dimensional display system and method provided by the invention, image playback is greatly accelerated compared with the prior art, the requirements of real-time stereoscopic display can be met, and surgical operation is made more convenient, helping doctors improve the success rate of surgery.
Brief description of the drawings
Fig. 1 is a structural diagram of the soft-lens-based three-dimensional display system of Embodiment One of the present invention.
Fig. 2 is a structural diagram of a specific implementation of the soft-lens-based three-dimensional display system of Embodiment One.
Fig. 3 is a structural diagram of the image playback processing unit in Fig. 2.
Fig. 4 is a structural diagram of the light-splitting unit bonded to the display unit in the soft-lens-based three-dimensional display system of Embodiment One.
Fig. 5 is a structural diagram of a preferred implementation of the tracking device in the soft-lens-based three-dimensional display system of Embodiment One.
Fig. 6 is a detailed structural diagram of the acquisition unit in Fig. 4.
Fig. 7 is a detailed structural diagram of the first variation of the reconstruction unit in Fig. 4.
Fig. 8 is a detailed structural diagram of the second variation of the reconstruction unit in Fig. 4.
Fig. 9 is a detailed structural diagram of the third variation of the reconstruction unit in Fig. 4.
Fig. 10 is a structural diagram of the positioning bracket on which marker points are arranged for the first target object in the tracking device of Fig. 4.
Fig. 11 is a flow chart of the soft-lens-based stereoscopic display method of Embodiment Two of the present invention.
Fig. 12 is a detailed flow chart of S1 in Fig. 11.
Fig. 13 is a detailed flow chart of S12 in Fig. 12.
Fig. 14 is a detailed flow chart of the first variation of S13 in Fig. 11.
Fig. 15 is a detailed flow chart of the second variation of S13 in Fig. 11.
Fig. 16 is a detailed flow chart of the third variation of S13 in Fig. 11.
Fig. 17 is a detailed flow chart of S3 in Fig. 11.
Detailed description of the invention
In order that the above objects, features, and advantages of the present invention may be understood more clearly, the invention is described in further detail below with reference to the drawings and specific embodiments. Note that, where no conflict arises, the embodiments of the present application and the features within them may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; however, the invention can also be implemented in other ways different from those described here, and the scope of protection of the invention is therefore not limited by the specific embodiments disclosed below.
Embodiment one
Referring to Fig. 1, Fig. 1 is a structural diagram of the soft-lens-based three-dimensional display system of the present invention. As shown in Fig. 1, the system includes an image-capturing unit 10, a tracking device 30, a light-splitting unit 50, and a display unit 40. The image-capturing unit 10 photographs the second target object and sends the captured image to the image playback equipment in real time. The tracking device 30 acquires the position information of the first target object, and the light-splitting unit 50, located on the display side of the display unit 40, spatially divides the image displayed by the display unit 40 into a left view and a right view. The system also includes image playback processing equipment 20, connected to the tracking device 30 and the display unit 40, which processes the image to be played in real time according to the position information of the first target object, the grating parameters of the light-splitting unit 50, and the display parameters of the display unit 40, and then sends the result to the display unit 40 for display. In addition, the image-capturing unit 10 is mounted on the soft lens; when the soft lens enters a human or animal body, it captures images of the interior, allowing the doctor to observe in real time and operate promptly.
Because the tracking device 30 and the display unit 40 are connected directly to the image playback processing equipment 20, the equipment obtains the position information of the first target object, the grating parameters, and the display parameters promptly and processes the image accordingly, eliminating the pass through a central processor required in the prior art. Image playback is therefore much faster than in the prior art, the requirements of real-time stereoscopic display can be met, and surgery is made more convenient, helping doctors improve the success rate: during an operation the doctor obtains an accurate stereoscopic image in real time and can act promptly, avoiding the problems described in the background section. The grating parameters mainly include the pitch of the grating, its tilt angle relative to the display panel, and its placement distance from the display panel. These parameters can be stored directly in a memory inside the image playback processing equipment, or another detection device can measure the grating parameters of the light-splitting unit in real time and send the values to the image playback processing equipment 20. The display parameters include the size of the display unit, its screen resolution, and the ordering and arrangement structure of the sub-pixels of its pixel cells: the sub-pixel order may be RGB, RBG, BGR, or some other order, and the sub-pixels may be arranged vertically or horizontally, for example cycling through RGB from top to bottom, or cycling through RGB from left to right.
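The grating and display parameters enumerated above amount to two small records that the playback equipment must hold or receive. A hedged sketch of how they might be grouped; the field names are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class GratingParams:
    pitch_mm: float         # grating pitch (l-pitch)
    tilt_deg: float         # tilt angle relative to the display panel
    gap_mm: float           # placement distance from the display panel

@dataclass
class DisplayParams:
    width_px: int           # screen resolution, horizontal
    height_px: int          # screen resolution, vertical
    dot_pitch_mm: float     # size of one pixel cell (R, G, B sub-pixels)
    subpixel_order: str     # "RGB", "RBG", "BGR", ...
    vertical_stripes: bool  # True: sub-pixels arranged vertically; False: horizontally
```

Either record could be stored in the equipment's memory or filled in from a live measurement, matching the two options described above.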
The image-capturing unit 10 photographs the second target object and sends the captured image to the image playback equipment in real time. The second target object mainly refers to the scenes recorded by the camera, such as the operation scene or images of the patient's interior. The image-capturing unit 10 captures the stereoscopic image in real time, and the captured image is shown on the display unit in real time without additional image processing, so the various scenes are presented promptly and faithfully, meeting the user's need for real-time display and improving the user experience. The image-capturing unit 10 may include at least one of a monocular camera, a binocular camera, or a multi-lens camera.
When the image-capturing unit 10 includes a monocular camera, the stereoscopic image of the second target object is obtained from that single camera. Preferably, the monocular camera uses a liquid-crystal-lens or liquid-crystal-microlens-array imaging device. In one specific implementation, the monocular camera obtains two digital images of the measured object at different times and from different angles, recovers the three-dimensional geometric information of the object from the parallax between them, and reconstructs the object's three-dimensional contour and position.
When the image-capturing unit 10 includes a binocular camera, that is, two cameras or one camera with two lenses, the second target object is photographed by the binocular camera to form a stereoscopic image. Binocular stereo vision is based mainly on the parallax principle: the two cameras simultaneously obtain two digital images of the measured object (the second target object) from different angles, the three-dimensional geometric information of the object is recovered from the parallax, and the object's three-dimensional contour and position are reconstructed.
When the image-capturing unit 10 includes a multi-lens camera, that is, three or more cameras arranged in an array, several digital images of the second target object are obtained simultaneously from different angles, the three-dimensional geometric information is recovered from the parallax, and the object's three-dimensional contour and position are reconstructed.
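The parallax principle invoked for the two-camera and multi-camera cases reduces, in the rectified pinhole model, to the classical relation Z = f·B/d. A minimal sketch of that relation follows; it is an illustration of the principle, not the patent's own reconstruction code, and the sample numbers are assumptions:

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between two rectified views: Z = f*B/d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_mm / disparity_px

def backproject(u_px: float, v_px: float, cx: float, cy: float,
                focal_px: float, z_mm: float) -> tuple:
    """Recover the X, Y coordinates from image coordinates once depth Z is known."""
    return ((u_px - cx) * z_mm / focal_px,
            (v_px - cy) * z_mm / focal_px)
```

For example, a feature seen 12 px apart by cameras 60 mm apart with a 1000 px focal length lies 5000 mm from the rig.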
The image-capturing unit 10 also includes an acquisition unit, which collects the stereoscopic image of the second target object and extracts the left-view and right-view information from it. One end of the acquisition unit is connected to the monocular, binocular, or multi-lens camera described above, and the other end to the image playback processing equipment 20. Extracting the left-view and right-view information while the stereoscopic image is being shot increases the speed of image processing and guarantees the display quality of real-time stereoscopic display.
The tracking device 30 may be a camera and/or an infrared sensor and is mainly used to track the position of the first target object, for example the position of a person's eyes, face, head, or upper body. The number of cameras or infrared sensors is not limited: there may be one or several, mounted on the frame of the display unit or placed separately at a position from which the first target object is easy to track. In addition, if an infrared sensor is used for tracking, an infrared emitter can also be placed at a position corresponding to the first target object; by receiving the positioning signal sent by the emitter and using the known relative position of the emitter and the first target object, the position information of the first target object is computed.
Specifically, the tracking device 30 may include one or more cameras that photograph the first target object; they may be mounted on the display unit or placed separately.
Alternatively, the tracking device 30 includes an infrared receiver, and correspondingly an infrared emitter is arranged for the first target object, either at a corresponding position on the object itself or on another object whose position is fixed relative to it. The infrared receiver receives the infrared signal sent by the emitter, and the first target object is located by ordinary infrared positioning methods.
In addition, the tracking device 30 may use a GPS positioning module, which sends the position information to the image playback processing equipment 20.
The light-splitting unit 50 is arranged on the light-emitting side of the display unit 40 and sends the left and right views (which have parallax) displayed by the display unit 40 separately to the viewer's left and right eyes; the two eyes fuse them into a stereoscopic image, producing the effect of stereoscopic display. Preferably, the light-splitting unit is a parallax barrier or a lenticular grating: the parallax barrier may be a liquid-crystal slit, a fixed slit-grating sheet, or an electrochromic slit-grating sheet, and the lenticular grating may be a liquid-crystal lens or a liquid-crystal lens grating. A liquid-crystal lens grating is made by curing liquid crystal onto a sheet, mainly with ultraviolet light, to form fixed lenses that split the light toward the viewer's left and right eyes. Preferably, the display unit 40 and the light-splitting unit 50 are integrated into a display device 60, which is the display part of the whole soft-lens-based three-dimensional display system. It can be assembled together with the playback processing equipment and tracking device described above, or exist as an independent part. For example, the display device 60 can be placed alone wherever viewing is convenient, while the image playback processing equipment 20 and the tracking device 30 are devices with independent functions of their own, assembled at the time of use to realize the real-time stereoscopic display of the present invention. For instance, the image playback processing equipment 20 can be a VMR3D playback device, which itself has 3D playback processing capability and is connected with the other devices when assembled into the soft-lens-based three-dimensional display system of the present invention.
Referring to Fig. 2, Fig. 2 is a structural diagram of a specific implementation of the soft-lens-based three-dimensional display system of Embodiment One. As shown in Fig. 2, the image playback processing equipment 20 further includes an image playback processing unit 22 and a storage unit 23. The image playback processing unit 22 performs real-time layout (view interleaving) on the received stereoscopic image according to the position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit, and then sends the result to the display unit for real-time display. The storage unit 23 stores the images transmitted by the image-capturing unit 10; when a stereoscopic image needs to be played, the image playback processing unit 22 retrieves it from the storage unit 23 and performs the layout processing.
Further, the image playback processing equipment 20 may include a signal processing unit 21, connected to the storage unit 23 and the image playback processing unit 22. The signal processing unit 21 mainly processes the signal of the stereoscopic image captured by the image-capturing unit 10, including image format conversion and image compression; the compressed image is stored in the storage unit 23. The processed signal can be output as separate left-view and right-view image signals, or passed as a whole to the image playback processing unit 22. The image playback processing unit 22 then performs real-time layout on the processed stereoscopic image according to the position information of the first target object and the grating parameters of the light-splitting unit, and sends the result to the display unit for real-time display. The image playback processing unit 22 can either retrieve the stereoscopic image from the storage unit 23 and decompress it, or receive the image directly from the signal processing unit 21, before performing the layout processing.
Referring to Fig. 3, the image playback processing unit 22 further includes:
a layout-parameter determination module 201, which computes the layout parameters for the display unit from the acquired position information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit;
a parallax-image arrangement module 202, which arranges the parallax image on the display unit according to the layout parameters; the parallax image is generated by spatially dividing the left-eye and right-eye images;
a parallax-image playback module 203, which plays the parallax image: once the arranged parallax image is received, it is played, and the viewer sees the stereoscopic image displayed in real time on the display unit.
Further, the image playback processing unit 22 also includes a stereoscopic-image acquisition module 204, which obtains the stereoscopic image information captured by the image-capturing unit 10, i.e. the left-view and right-view information. A stereoscopic image consists of a left view and a right view, so for an image to be played, the image information of both views must first be obtained before the layout processing can be performed.
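At its core, the layout processing performed by these modules interleaves the left and right views across the panel according to the layout parameters. A much-simplified sketch for two views with per-column assignment follows; a real implementation works at sub-pixel granularity and follows the grating tilt, and the `offset` parameter standing in for the viewer-position-dependent phase is an illustrative assumption:

```python
def interleave_two_views(left, right, offset=0):
    """Column-interleave two equal-sized views (lists of rows).

    Even columns (after shifting by `offset`) come from the left view,
    odd columns from the right view.
    """
    out = []
    for lrow, rrow in zip(left, right):
        out.append([lrow[c] if (c + offset) % 2 == 0 else rrow[c]
                    for c in range(len(lrow))])
    return out
```

Tracking the viewer then amounts to updating `offset` frame by frame so that left-view columns stay aimed at the left eye.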
Continuing with Fig. 2, the tracking device 30 of the present invention further includes a tracking-and-positioning processing unit 31 and a tracking unit 32. The tracking unit 32 tracks real-time images of the first target object; it mainly refers to the class of devices, such as cameras and infrared receivers, that can accurately capture a video signal of the human eyes or head. From the real-time image of the first target object captured by the tracking unit 32, the tracking-and-positioning processing unit 31 extracts feature points of the first target object and computes its spatial coordinates. For example, a camera can record the face in real time, and the processing unit 31 extracts facial feature points and computes the spatial coordinates of the eyes; alternatively, feature points can be added, for example an infrared emitter attached to the head, whose real-time image is captured by the camera so that the processing unit 31 can compute the spatial coordinates of the eyes.
Moreover, when the eyes move, the tracking-and-positioning processing unit 31 can follow their position quickly in real time, produce the spatial coordinates of the eyes, and supply the coordinates to the image playback processing unit 22.
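One common way such a processing unit can turn two detected pupil positions into a spatial coordinate is to assume a pinhole camera and a known interpupillary distance (the 62.5 mm figure used elsewhere in this document). The sketch below illustrates that idea and is not the patent's own algorithm; the pinhole and fronto-parallel assumptions are ours:

```python
def eye_midpoint_3d(left_px, right_px, focal_px, ipd_mm=62.5):
    """Estimate the 3D midpoint of the eyes from their pixel positions.

    Viewing distance follows from the apparent pupil separation:
    z = f * IPD / separation_px (pinhole model, eyes roughly fronto-parallel).
    Returns (cx, cy, z): image-plane midpoint plus estimated distance in mm.
    """
    dx = right_px[0] - left_px[0]
    dy = right_px[1] - left_px[1]
    sep = (dx * dx + dy * dy) ** 0.5
    if sep == 0:
        raise ValueError("pupil detections coincide")
    z = focal_px * ipd_mm / sep
    cx = (left_px[0] + right_px[0]) / 2.0
    cy = (left_px[1] + right_px[1]) / 2.0
    return cx, cy, z
```

With an 800 px focal length and pupils detected 100 px apart, the viewer sits about half a metre from the camera.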
The tracking unit 32 may include a camera or an infrared receiver. When it includes a camera, the camera tracks the change in position of the feature points corresponding to the first target object; when it includes an infrared receiver, the receiver picks up the infrared positioning signal sent by the infrared emitter that serves as the feature point on the first target object.
The tracking-and-positioning processing unit 31 and the tracking unit 32 improve the viewing quality of the stereoscopic display: the display adjusts automatically as the viewer moves, providing the optimal stereoscopic effect in real time.
Embodiment 1
In Embodiment 1 of the present invention, to obtain a good real-time stereoscopic display effect, the light-splitting unit and the display unit must be optically designed according to the grating parameters of the light-splitting unit and the display parameters of the display unit, using the following equations:
(1) n·IPD / (m·t) = L / F
(2) l-pitch / p-pitch = L / (L + F)
(3) m·t = p-pitch
In these equations, F is the distance between the light-splitting unit and the display unit (the placement distance of the grating relative to the display panel in the grating parameters above); L is the distance between the viewer and the display unit; IPD is the design interpupillary distance, the usual distance between a person's eyes, commonly taken as 62.5 mm; l-pitch is the pitch of the light-splitting unit; p-pitch is the layout pitch of the pixels on the display unit; n is the number of stereoscopic views; m is the number of pixels covered by one period of the light-splitting unit; and t is the dot pitch of the display unit, that is, the size of one pixel cell (one of the display parameters), a pixel cell normally consisting of R, G, and B sub-pixels. To eliminate moiré fringes, the light-splitting unit is usually rotated by a certain angle when it is bonded (i.e. the light-splitting unit is tilted relative to the display unit); the actual pitch of the light-splitting unit is then given by:
(4) W_lens = l-pitch · sin θ
where W_lens is the actual pitch of the light-splitting unit and θ is the tilt angle of the light-splitting unit relative to the display panel (one of the grating parameters above).
As for the distance F between the light-splitting unit and the display unit: when the medium between them is air, F equals their actual physical distance; when the medium is a transparent material of refractive index n (n greater than 1), F equals the actual distance divided by n; and when several different media lie between them, with refractive indices n1, n2, n3 (each greater than or equal to 1) and thicknesses s1, s2, s3, then F = s1/n1 + s2/n2 + s3/n3.
Configuring the light-splitting unit and the display unit according to the optical formulas above reduces moiré fringes and improves the quality of real-time stereoscopic viewing.
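Equations (2) and (4) and the layered-media rule for F can be checked numerically. The sketch below follows the definitions above; the sample thicknesses, indices, and pitches are illustrative, not values from the patent:

```python
import math

def effective_gap_mm(layers):
    """F = sum(thickness / refractive_index) over the media between panel and grating."""
    return sum(thickness / n for thickness, n in layers)

def lens_pitch_mm(p_pitch_mm, viewing_distance_mm, gap_mm):
    """Equation (2): l-pitch / p-pitch = L / (L + F), solved for l-pitch."""
    return p_pitch_mm * viewing_distance_mm / (viewing_distance_mm + gap_mm)

def actual_pitch_mm(l_pitch_mm, tilt_deg):
    """Equation (4): W_lens = l-pitch * sin(theta)."""
    return l_pitch_mm * math.sin(math.radians(tilt_deg))
```

Note that equation (2) always makes the lens pitch slightly smaller than the pixel layout pitch, which is what steers each view toward the correct eye at distance L.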
In a variant implementation, a bonding unit is arranged between the light-splitting unit and the display unit. Referring to Fig. 4, Fig. 4 is a structural diagram of the light-splitting unit bonded to the display unit in the soft-lens-based three-dimensional display system of Embodiment One. As shown in Fig. 4, a bonding unit is arranged between the light-splitting unit 50 and the display unit 40, the three forming a sandwich-like structure. The bonding unit comprises a first substrate 42, a second substrate 43, and an air layer 41 between them; the air layer 41 is sealed between the two substrates to prevent the air from escaping. The first substrate 42 is bonded to the display panel and may be made of transparent glass or transparent resin. The second substrate 43 faces the first substrate 42, and the side facing away from the first substrate 42 is bonded to the light-splitting unit 50. Because the bonding unit between the light-splitting unit 50 and the display unit 40 adopts this structure, for a large-screen 3D display device both the flatness of the bonded grating is guaranteed and the weight of the whole device is reduced, avoiding the risk of the screen cracking under its own weight that arises when pure glass is used.
Embodiment 2
In the present Embodiment 2, the tracking device 30 includes a camera that photographs the first target object. There may be one or more cameras, which may be arranged on the display unit or provided separately. Further, the camera may be a monocular camera, a binocular camera, or a multi-lens camera.
Alternatively, the tracking device 30 may include an infrared receiver, with a corresponding infrared transmitter provided for the first target object. The transmitter may be mounted at a suitable position on the first target object itself, or on another object whose position is fixed relative to the first target object. The infrared receiver receives the infrared signal sent by the transmitter arranged for the first target object, and the first target object is located by a conventional infrared positioning method.
The tracking device 30 may also use a GPS positioning module, which transmits the location information to the image playback processing device 20.
Embodiment 3
Refer to Fig. 5, which is a schematic structural diagram of a preferred embodiment of the tracking device in the soft-lens-based stereoscopic display system of Embodiment 1 of the present invention. As shown in Fig. 5, Embodiment 3 of the present invention proposes another tracking device 30, which includes:
A marker-point setting unit 1 for arranging marker points at spatial positions corresponding to the first target object. The marker points may be placed on the first target object itself, or instead on another object that has a fixed positional relationship with the first target object and moves synchronously with it. For example, if the first target object is the human eyes, marker points may be arranged around the eye sockets; or glasses may be placed in front of the eyes with marker points on the frame; or marker points may be attached to the ears, whose position relative to the eyes is fixed. A marker point may be any signal-emitting component such as an infrared transmitter, an LED, a GPS sensor, or a laser positioning sensor, or it may be a physical mark that a camera can capture, for instance an object with a distinctive shape and/or color. Preferably, to avoid interference from stray ambient light and improve the robustness of marker tracking, narrow-spectrum infrared LED lamps are used as marker points, together with infrared cameras that pass only the spectrum used by the LEDs. Since stray ambient light is mostly irregular in shape and uneven in brightness, the marker points can be configured to emit spots of regular shape, high luminous intensity, and uniform brightness. Multiple marker points may also be arranged, each producing one spot, so that together they form a regular geometric figure such as a triangle or quadrilateral; this makes the markers easy to track, yields their spatial position information, and improves the accuracy of spot extraction.
An acquiring unit 2 for obtaining the position information of the marker points. This may be done by receiving signals emitted by the marker points, or by capturing images containing the marker points with a camera, extracting the marker points from the images, and obtaining their position information with an image processing algorithm.
A reconstruction unit 3 for reconstructing the spatial position of the first target object from the position information of the marker points. Once the position information of the marker points has been acquired, the spatial positions of the marker points are reconstructed, and then, using the relative positions of the marker points and the first target object, converted into the spatial position of the first target object (for example, the spatial positions of a person's left and right eyes).
The tracking device 30 of the embodiment of the present invention obtains the position information of the marker points corresponding to the first target object and reconstructs the spatial position of the first target object from it. Compared with prior-art approaches that analyze features of a two-dimensional camera image to obtain the eye position, or that use other eye-capture equipment exploiting the reflective properties of the iris, it has the advantages of good stability, high accuracy, low cost, and no requirement on the distance between the tracking device and the first target object.
Refer to Fig. 6, which shows the concrete structure of the acquiring unit in Fig. 5. The aforementioned acquiring unit further includes:
A presetting module 21 for presetting a standard image in which reference marker points are arranged, and for obtaining the spatial coordinates and plane coordinates of said reference marker points. The standard image may, for example, be captured by an image acquisition device, which yields the image (plane) coordinates of the reference marker points, while an accurate spatial measurement device such as a laser scanner or a structured-light scanner (e.g. Kinect) yields the spatial coordinates of the reference marker points in the standard image.
An acquisition module 22 for obtaining a current image containing said first target object and said marker points, together with the plane coordinates of said marker points in said current image;
A matching module 23 for matching the marker points in said current image with the reference marker points of said standard image. A correspondence is first established between the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image, and the marker points are then matched to the reference marker points.
Setting a standard image with reference marker points provides a convenient reference, which further ensures the stability and accuracy of the target tracking device of the embodiment of the present invention when obtaining spatial positions from the current image.
Further, the tracking device 30 also includes:
A collecting unit for collecting said marker points;
A screening unit for screening target marker points out of said marker points.
Specifically, when there are multiple marker points, a camera collects all marker points corresponding to the first target object, the marker points most relevant to the first target object are chosen from among them, and a corresponding image processing algorithm then extracts the marker points from the image according to their features. In general, feature-based extraction applies a feature extraction function H to the image I, obtains a feature score for each point in the image, and keeps the marker points whose scores are sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))
F = {arg_(x, y) (S(x, y) > s0)}
In the above formulas, H is the feature extraction function; I(x, y) is the image value at pixel (x, y), which may be a gray value or a three-channel color energy value; S(x, y) is the feature score of pixel (x, y) after feature extraction; s0 is a score threshold, pixels with S(x, y) greater than s0 being regarded as marker points; and F is the set of marker points. Preferably, the embodiment of the present invention uses infrared marker points, whose energy signature in the image formed by an infrared camera is quite distinct. Because narrow-band infrared LED lamps and matching infrared cameras are used, most pixels of the captured image have very low energy, and only the pixels corresponding to marker points have high energy. The corresponding function H can therefore apply a threshold segmentation operator to obtain a binary image B(x, y), perform region growing on B(x, y) to obtain several sub-images, and extract the center of gravity of each sub-image. At the same time, to cope with stray ambient light that can still form an image in the infrared camera, constraints such as the expected spot area of a marker point and the positional relationships of the marker points in the two-dimensional image can be added during extraction to filter the extracted marker points.
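The threshold-segmentation, region-growing and center-of-gravity steps described above can be sketched as follows. This is a minimal illustration on a synthetic frame, with a plain threshold standing in for H; it is not the embodiment's actual implementation:

```python
import numpy as np

def extract_markers(img, s0):
    """Threshold the image (a simple stand-in for H), grow each bright region
    (4-connected flood fill), and return each region's centre of gravity
    as a candidate marker point (x, y)."""
    mask = img > s0                      # pixels with S(x, y) > s0
    visited = np.zeros_like(mask, dtype=bool)
    markers = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                stack, pixels = [(y, x)], []
                visited[y, x] = True
                while stack:             # region growing
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                markers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return markers

# Synthetic infrared frame: two bright square spots on a dark background.
frame = np.zeros((20, 20))
frame[2:4, 2:4] = 255        # centroid (2.5, 2.5)
frame[10:13, 14:17] = 255    # centroid (15.0, 11.0)
print(extract_markers(frame, s0=128))  # [(2.5, 2.5), (15.0, 11.0)]
```

A spot-area filter (rejecting regions whose pixel count falls outside the expected marker size) would slot in naturally before the centroid is appended.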
When more than one camera is used, the marker points in the images captured by the different cameras at the same moment (or at moments close to each other) must be matched, providing the conditions for subsequent three-dimensional reconstruction of the marker points. The matching method depends on the feature extraction function H. Classic gradient- and grayscale-based feature point extraction operators such as Harris, SIFT or FAST, together with their associated matching procedures, can be used to obtain and match marker points. Epipolar constraints or prior knowledge about the marker points can also be used for matching. Matching by epipolar constraint works as follows: because a spatial point and its projections onto two different camera images all lie in one plane, for a marker point p0 in some camera c0 an epipolar line can be computed in another camera c1, and the marker point p1 in camera c1 corresponding to p0 satisfies the relation:
[p1; 1]^T F [p0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. By using this relation, the number of candidates for the marker point p1 can be reduced significantly, improving matching accuracy.
In addition, prior knowledge about the marker points can be used, such as their spatial order or their size. For example, given the relative poses of the two cameras, the two pixels that each pair of corresponding image points of the same spatial point produce can be made equal in some dimension, such as the y axis; this process is known as image rectification. Marker matching can then simply follow the x-axis order of the marker points: the smallest x corresponds to the smallest x, and so on, with the largest x corresponding to the largest x.
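After rectification, the order-based matching described above becomes a one-line pairing by ascending x. A minimal sketch, with illustrative pixel coordinates:

```python
def match_by_x_order(points_left, points_right):
    """After rectification, corresponding marker points share the same y
    coordinate, so matching reduces to pairing the points of the two images
    by ascending x: smallest x with smallest x, largest x with largest x."""
    left = sorted(points_left)
    right = sorted(points_right)
    assert len(left) == len(right), "each marker must be visible to both cameras"
    return list(zip(left, right))

pairs = match_by_x_order([(210, 50), (40, 50), (120, 50)],
                         [(190, 50), (25, 50), (101, 50)])
print(pairs)
# [((40, 50), (25, 50)), ((120, 50), (101, 50)), ((210, 50), (190, 50))]
```

This only holds when every marker is detected in both views and no two markers swap x-order between the cameras, which the regular marker geometry described earlier helps guarantee.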
The target tracking device of the present invention is discussed in detail below according to the number of cameras used for tracking.
Refer to Fig. 7, which shows the concrete structure of the reconstruction unit in Fig. 5. As shown in Fig. 7, in the present embodiment, when fewer than four marker points correspond to the first target object tracked by the tracking device 30, and a monocular camera is used to obtain the position information of the marker points, the reconstruction unit further includes:
A first computing module 31 for calculating the homography between said current image and said standard image from the plane coordinates of the marker points in said current image, the plane coordinates of said reference marker points in said standard image, and assumed conditions about the scene containing said first target object. The marker points of the current image are matched with the reference marker points in the standard image, and the homography between the current image and the standard image is calculated from their respective plane coordinates. A homography here is the homography of projective geometry, a transformation frequently applied in the field of computer vision.
A first reconstruction module 32 for calculating, from said homography, the rigid transformation of said marker points from their spatial positions at the moment said standard image was shot to their spatial positions at the current moment, then calculating the spatial positions of said marker points at the current moment, and from these calculating the current spatial position of said first target object.
Specifically, as an assumed condition on the scene, some dimension of the marker points may be taken as invariant under the rigid transformation. For example, with spatial coordinates x, y, z in a three-dimensional scene such that x and y are parallel to the x and y axes of the camera's image (plane) coordinates and z is perpendicular to the camera image, the assumption may be that the z coordinates of the marker points are constant, or that their x and/or y coordinates are constant. Different scene assumptions call for somewhat different estimation methods. As another example, under a different assumption, if the orientation of the first target object toward the camera is assumed to remain constant during use, the current spatial position of the first target object can be inferred from the ratio between the mutual distances of the marker points in the current image and the mutual distances of the marker points in the standard image.
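The distance-ratio inference in the last example can be sketched as follows. Under a pinhole model with the target facing the camera at a constant orientation, apparent marker spacing scales inversely with distance, so the current distance follows from the spacing ratio. The coordinates and reference distance below are illustrative assumptions, not values from the embodiment:

```python
import math

def estimate_distance(std_markers, cur_markers, z_standard):
    """Assuming the target keeps a constant orientation toward the camera,
    apparent marker spacing is inversely proportional to distance (pinhole
    model), so z_current = z_standard * (spacing_standard / spacing_current)."""
    def spacing(pts):
        (x0, y0), (x1, y1) = pts
        return math.hypot(x1 - x0, y1 - y0)
    return z_standard * spacing(std_markers) / spacing(cur_markers)

# Markers appear twice as far apart as in the standard image taken at 600 mm,
# so the target has moved to half that distance.
print(estimate_distance([(0, 0), (100, 0)], [(0, 0), (200, 0)], z_standard=600.0))
# 300.0
```

This gives only the depth along the viewing axis; the lateral (x, y) position comes directly from the marker positions in the image.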
With the above calculation method, the spatial position of the first target object can be reconstructed with a monocular camera when there are fewer than four marker points. It is simple to operate, the tracking result is comparatively accurate, and because a monocular camera is used, the cost of tracking the first target object is reduced.
In the above method of recovering an object's three-dimensional coordinates from images collected by a single camera, less image information is available, so the number of marker points must be increased to provide more image information for computing the object's three-dimensional coordinates. According to machine vision theory, to infer the stereo information of a scene from a single image, at least five fixed points in the image must be determined. The monocular scheme therefore increases the number of markers and the complexity of the design, but at the same time, since only one camera is needed, it reduces the complexity of image acquisition and lowers cost.
Refer to Fig. 8, which shows the concrete structure of a second variant embodiment of the reconstruction unit in Fig. 5. As shown in Fig. 8, in the present embodiment, when there are five or more marker points and a monocular camera is used to obtain the position information of said marker points, said reconstruction unit further includes:
A second computing module 33 for calculating the homography between said current image and said standard image from the plane coordinates of the marker points in said current image and the plane coordinates of said reference marker points in said standard image.
A second reconstruction module 34 for calculating, from said homography, the rigid transformation of said marker points from their spatial positions at the moment said standard image was shot to their spatial positions at the current moment, then calculating the spatial positions of said marker points at the current moment, and from these calculating the current spatial position of the first target object.
First, a standard image is collected; the spatial positions of the reference marker points are measured with a device such as an accurate depth camera or a laser scanner, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference marker points at that moment are obtained.
In use, the camera continuously captures the two-dimensional image coordinates of all marker points in the current image containing the first target object. From these coordinates and the two-dimensional coordinates of the reference marker points in the standard image, and assuming the relative positions between marker points remain constant, the rigid transformation between the marker points in the current state and the marker points at the time the standard image was shot is calculated. The spatial transformation of the current marker points relative to the standard image is thereby determined, and from it the spatial positions of the current marker points.
Here, five or more points are used to calculate the rigid spatial transformation [R | T] between the current marker points and the marker points at the time the standard image was shot; preferably these five or more points are not coplanar, and the camera's projection matrix P has been calibrated in advance. The concrete way of calculating [R | T] is as follows:
Let X0 and Xi be the homogeneous coordinates of each marker point in the standard image and the current image respectively. They satisfy the constraint X0 P^{-1} [R | T] P = Xi. All the marker points together form a system of equations in the unknown [R | T]. With five or more markers, [R | T] can be solved; with six or more, an optimal solution can be sought, for instance by singular value decomposition (SVD) and/or by iterative computation of a nonlinear optimal solution. Once the spatial positions of the marker points have been calculated, the spatial position of the first target object (such as the human eyes) can be deduced from the pre-calibrated relative positions between the marker points and the first target object (such as the human eyes).
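To illustrate the SVD route mentioned above in a simplified setting: when paired 3-D marker positions are available (rather than the projective correspondences of the embodiment), the least-squares rigid transform [R | T] has a closed-form SVD solution (the Kabsch/Procrustes method). This is a related sketch, not the embodiment's exact equation system:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform with Q ≈ R @ p + T for paired 3-D
    marker positions P (standard moment) and Q (current moment), via SVD."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    T = cq - R @ cp
    return R, T

# Six non-coplanar markers, rotated 90 degrees about z and shifted: verify recovery.
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = P @ Rz.T + np.array([2.0, 3.0, 4.0])
R, T = rigid_transform(P, Q)
print(np.allclose(R, Rz), np.allclose(T, [2, 3, 4]))  # True True
```

The determinant correction keeps R a proper rotation, echoing the text's preference for non-coplanar marker configurations, which make the problem well conditioned.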
The present embodiment uses only one camera and five or more marker points, yet can accurately reconstruct the spatial position of the first target object; it is both simple to operate and low in cost.
Refer to Fig. 9, which shows the concrete structure of a third variant embodiment of the reconstruction unit in Fig. 5. As shown in Fig. 9, the present embodiment uses two or more cameras and one or more marker points. When a binocular camera or multi-lens camera is used to obtain the position information of said marker points, said reconstruction unit further includes:
A third computing module 35, which uses the principle of binocular or multi-view stereo reconstruction to calculate the spatial position of each marker point at the current moment. Binocular or multi-view reconstruction may, for example, use the parallax between marker points matched across the left and right cameras to compute each marker point's spatial position at the current moment, or any other existing common method.
A third reconstruction module 36, which calculates the current spatial position of the first target object from the spatial positions of said marker points at the current moment.
Specifically, the relative poses between the cameras are first calibrated by a multi-camera calibration method. Then, in use, marker point coordinates are extracted from the image captured by each camera and matched across cameras, i.e. the marker point corresponding to each camera is obtained; the spatial positions of the marker points are then calculated from the matched marker points and the relative poses between the cameras.
In a specific example, a multi-lens camera setup (i.e. the number of cameras is greater than or equal to 2) shoots the marker points to achieve stereo reconstruction. Given a marker point's known coordinate u in an image shot by a certain camera and that camera's parameter matrix M, a ray can be computed on which the marker point lies in space:
α_j u_j = M_j X, j = 1, …, n (where n is a natural number greater than or equal to 2)
Similarly, by the above formula, the rays corresponding to this marker point in the other cameras can also be calculated. In theory these rays converge at one point, namely the spatial position of the marker point. In practice, because of the digitization error of the cameras and errors in the calibration of the cameras' intrinsic and extrinsic parameters, the rays do not converge at a single point, so the spatial position of the marker point must be approximated by triangulation. For example, a least-squares criterion can be used to take the point nearest to all the rays as the object point:
X′ = arg min_X Σ_(j=1..m) [(m1j·X / m3j·X − u1j)² + (m2j·X / m3j·X − u2j)²], where m1j, m2j, m3j are the rows of camera j's projection matrix and (u1j, u2j) is the observed marker coordinate in camera j.
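A common linear approximation to this least-squares criterion is DLT triangulation: each camera contributes two homogeneous equations, and the point is recovered from the null space via SVD. The two toy camera matrices below are illustrative assumptions:

```python
import numpy as np

def triangulate(Ms, us):
    """Linear (DLT) triangulation: find the 3-D point whose reprojections
    best match the observed marker coordinates us[j] in each camera Ms[j].
    Each observation contributes the rows u*m3 - m1 = 0 and v*m3 - m2 = 0."""
    A = []
    for M, (u, v) in zip(Ms, us):
        A.append(u * M[2] - M[0])
        A.append(v * M[2] - M[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                            # null-space vector (homogeneous point)
    return X[:3] / X[3]                   # dehomogenise

def project(M, X):
    x = M @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a unit translation along x.
M0 = np.hstack([np.eye(3), np.zeros((3, 1))])
M1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xtrue = np.array([0.5, 0.2, 4.0])
X = triangulate([M0, M1], [project(M0, Xtrue), project(M1, Xtrue)])
print(np.allclose(X, Xtrue))  # True
```

With noisy observations the DLT answer minimises an algebraic rather than the reprojection error, which is why the text also mentions iterative nonlinear refinement.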
Once the spatial positions of the marker points have been calculated, the spatial position of the first target object (the human eyes) can be deduced from the pre-calibrated relative positions between the marker points and the first target object (such as the human eyes).
Among the above methods of achieving stereo reconstruction with a multi-lens camera, the preferred method is calculation with a binocular camera. Its principle is the same as the multi-view reconstruction principle above: the spatial position of a marker point is calculated from the relative poses of the two cameras and the point's two-dimensional coordinates in the two camera images. The slight difference is that the binocular cameras are laid out in parallel; after a simple calibration, the images of the two cameras are rectified as described earlier so that each pair of matched two-dimensional marker points is equal on the y (or x) axis, and the depth of a marker point from the cameras can then be calculated from the gap between the rectified marker points along the x (or y) axis. The method can be regarded as a specialization of multi-view stereo reconstruction to the binocular case, which simplifies the steps of stereo reconstruction and is easier to implement in device hardware.
Embodiment 4
Refer to Fig. 10, which shows the structure of a positioning bracket for arranging marker points corresponding to the first target object in the tracking device of Fig. 5. As shown in Fig. 10, the present invention provides a positioning bracket that sits in front of the human eyes (the first target object) and is structured and worn like a pair of glasses. It includes a crossbeam 11, fixing parts 12, a support portion 13 and a control portion 14; marker points 111 are provided on the crossbeam 11; the support portion 13 is arranged on the crossbeam 11; and the fixing parts 12 are pivotally connected to the ends of the crossbeam 11. The positions at which the marker points 111 are arranged correspond to the position of the human eyes (the first target object): the spatial position information of the marker points 111 is obtained, and the spatial position information of the eyes is then calculated from it. When the head moves, the marker points 111 corresponding to the eyes move with it; the cameras track the movement of the marker points 111, the target tracking scheme of the aforementioned Embodiment 1 obtains their spatial position information, and from the relative spatial positions of the marker points 111 and the eyes, the spatial position of the eyes (the first target object), i.e. their three-dimensional coordinates in space, is reconstructed.
In the present embodiment, the crossbeam 11 is an elongated strip with a slight curvature approximating that of a person's forehead, for convenient use. The crossbeam 11 includes an upper surface 112, a lower surface opposite it, and a first surface 114 and a second surface arranged between the upper surface 112 and the lower surface.
In the present embodiment, the marker points 111 are three LEDs, evenly spaced on the first surface 114 of the crossbeam 11. It will be understood that there may also be one, two or more marker points 111, and that any light source may be used, including LEDs, infrared lamps or ultraviolet lamps. Further, the arrangement and positions of the marker points 111 may be adjusted as required.
It will be understood that the crossbeam 11 may also be made straight or given other shapes as required.
In the present embodiment, there are two fixing parts 12, each pivotally connected to one end of the crossbeam 11. The two fixing parts 12 can fold inward toward each other, and can each be expanded outward to an interior angle of about 100° with the crossbeam 11; concretely, the size of the interior angle can be adjusted to practical operating requirements. It should be understood that there may also be a single fixing part 12.
The end of each fixing part 12 remote from the crossbeam 11 is bent along the extension direction of the support portion 13, so that the end of the fixing part 12 can rest on the wearer's ear.
In the present embodiment, the support portion 13 is strip-shaped, arranged at the middle of the lower surface 113 of the crossbeam 11 and extending downward. Further, the end of the support portion 13 remote from the crossbeam 11 is provided with a nose pad 131, by which the positioning device rests on the bridge of the nose and is positioned above the eyes. It should be understood that in other embodiments, if no nose pad 131 is provided, the support portion 13 may instead be made in an inverted Y shape extending downward from the middle of the crossbeam 11, so that the positioning device still rests on the bridge of the nose and is positioned above the eyes.
The control portion 14 is a rounded cuboid arranged on a fixing part 12. The control portion 14 supplies power to said LEDs, infrared lamps or ultraviolet lamps and/or controls their operating state, and includes a power switch 141, a power indicator light and a charging indicator light. It will be understood that the control portion 14 is not limited in shape, may take any shape, and may also be an integrated chip. Further, the control portion 14 may be arranged at other positions, for example on the crossbeam 11.
In use, the power switch 141 is turned on, the power indicator shows that the LEDs are powered, and the LEDs light up; when power runs low, the charging indicator prompts that the battery is insufficient; when the power switch is turned off, the power indicator goes out, indicating that the LEDs are switched off, and the LEDs are extinguished.
Since the human interpupillary distance ranges from 58 mm to 64 mm, it can be treated approximately as a fixed value. The positioning bracket provided by the present invention is similar to a spectacle frame and is fixed above the eyes; marker points are arranged at predetermined positions on the positioning device as required, so the position of the eyes can be determined simply and conveniently from the positions of the marker points. The positioning device is simple in structure, and its design makes it easy to use.
Embodiment two
Refer to Figs. 11 to 14. Fig. 11 is a flow diagram of the soft-lens-based stereoscopic display method of Embodiment 2 of the present invention; Fig. 12 is a detailed flow diagram of S1 in Fig. 11; Fig. 13 is a detailed flow diagram of S12 in Fig. 12; and Fig. 14 is a detailed flow diagram of S3 in Fig. 11. As shown in Figs. 11 to 14, the soft-lens-based stereoscopic display method of Embodiment 2 of the present invention mainly comprises the following steps:
S0, an image capturing step: shoot a stereo image of the second target object and send, in real time, the information of the captured stereo image of said second target object, this information including left view information and right view information.
S1, obtain the position information of the first target object. A tracking device is used to track the position of the first target object, for instance the position of the viewer.
S2, obtain the grating parameters of the light-splitting unit of the stereoscopic display device and the display parameters of the display unit. The grating parameters of the light-splitting unit mainly include the pitch of the grating, the inclination angle of the grating relative to the display panel, and the placement distance of the grating relative to the display panel.
S3, process in real time the stereo image received from said image capturing unit, according to said position information, said grating parameters and said display parameters. Before a stereo image is played, it must first be processed in combination with the position information of the eyes, the grating parameters, and the display parameters of the display unit, in order to give the viewer the best stereoscopic display effect.
S4, display the image to be played.
The soft-lens-based stereoscopic display method of the present invention obtains the position information of the first target object and the grating parameters promptly and performs the corresponding image processing directly, which improves the speed of image playback, satisfies the requirements of real-time stereoscopic display, facilitates surgical operation, and assists physicians in improving the surgical success rate.
In addition, the second target object here mainly refers to whatever scene the camera shoots: an actual person, a live ball game, or images of a patient's body captured by some device. By shooting stereo images in real time and displaying them on the display unit in real time, without extra image processing, the captured scenes are shown promptly and faithfully, meeting the user's demand for real-time display and improving the user experience.
In a concrete variant embodiment, the above step S0 also includes an image acquisition step: collect the stereo image of said second target object and extract the left view information and right view information from said stereo image. Extracting the left and right view information of the stereo image while it is being shot improves the speed of image processing and guarantees the display effect of real-time stereoscopic display.
Embodiment 5
Referring to Figure 12, how S1 is mainly obtained the positional information of first object object by the embodiment of the present invention 5 is described in detail.These first object objects such as watch relevant position for the upper part of the body of human eye, the head of people, the face of people or human body etc. to people.Above-mentioned " positional information of S1 acquisition first object object " mainly comprises the steps that
The locus of S11 correspondence first object object arranges labelling point;Here labelling point can be arranged on first object object, it is also possible to is not provided with on first object object, and is provided in there is a relative position relation with first object object, and with the object of first object object synchronous motion on also may be used.Such as, destination object is human eye, then can be arranged around labelling point at the eye socket of human eye;Or around human eye configure locating support, labelling point is located on the frame of locating support, or labelling point is located at on the ear of the relatively-stationary people of position of human eye relation.This labelling point can be the various parts such as the infrared emission sensor sending signal, LED, GPS sensor, laser positioning sensor, it is also possible to is that other can by the physical label of cameras capture, for instance be the object with shape facility and/or color characteristic.It is preferred that for the interference avoiding extraneous veiling glare, improve the robustness of labelling point tracking, it is preferred to use the comparatively narrow infrared LED lamp of frequency spectrum is as labelling point, and uses the corresponding thermal camera only by the used frequency spectrum of infrared LED that labelling point is caught.Consider that extraneous veiling glare mostly is irregular shape and Luminance Distribution is uneven, it is possible to being arranged to send the hot spot of regular shape by labelling point, luminous intensity is higher, brightness uniformity.It can in addition contain arrange multiple labelling point, the corresponding hot spot of each labelling point, the geometry of each labelling point composition rule, such as triangle, tetragon etc., thus being prone to trace into labelling point, obtain the spatial positional information of labelling point, and improve the accuracy that hot spot extracts.
S12, obtaining the positional information of the marker point. This may be done by receiving a signal sent by the marker point and determining its position from that signal, or by using a camera to capture an image containing the marker point, extracting the marker point from the image, and obtaining its positional information through an image processing algorithm.
S13, reconstructing the spatial position of the first target object according to the positional information of the marker point. After the positional information of the marker point is acquired, the spatial position of the marker point is reconstructed; then, according to the relative position relationship between the marker point and the first target object, the spatial position of the marker point is transformed into the spatial position of the first target object (for example, the spatial positions of the viewer's left and right eyes).
In embodiment 2 of the present invention, the positional information of the marker point corresponding to the first target object is obtained, and the spatial position of the first target object is reconstructed from it. Compared with prior-art eye-tracking schemes, which either perform feature analysis on a two-dimensional camera image to obtain the eye position or use other eye-capture devices that exploit the reflection of the iris, this approach has good stability, high accuracy in capturing the positional information of the eyes, low cost, and no particular requirement on the distance between the tracking device and the first target object.
Referring to Figure 13, the above step S12 further includes:
S121, presetting a standard image in which reference marker points are arranged, and obtaining the space coordinates and plane coordinates of the reference marker points in the standard image. The standard image may, for example, be acquired by an image capture device, from which the image coordinates of the reference marker points are obtained, while a precise spatial measurement device such as a laser scanner or a structured-light scanner (e.g. Kinect) is used to obtain the space coordinates and plane coordinates of the reference marker points in the standard image.
S122, obtaining a current image that contains the target object and the marker point, together with the plane coordinates of the marker point in the current image;
S123, matching the marker points in the current image with the reference marker points of the standard image. Here, a correspondence is first established between the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image; the marker points are then matched with the reference marker points.
Setting a standard image with reference marker points provides a convenient frame of reference when obtaining the spatial position from the current image, which further ensures the stability and accuracy of the target tracking method of this embodiment of the present invention.
Further, before the above step S11, the method may also include: S10, calibrating the camera used to obtain the positional information of the marker point.
The above calibration can be performed in the following ways:
(1) When the camera of step S10 is a monocular camera, the common Zhang chessboard calibration algorithm may be adopted, for example using the following equation:
s·m′ = A[R | t]M′ (1)
In equation (1), A is the intrinsic parameter matrix, R is the extrinsic rotation matrix, t is the translation vector, m′ is the coordinate of the image point in the image, and M′ is the space coordinate of the object point (i.e. its three-dimensional coordinate in space); A, R and t are given respectively by:
A = [fx 0 cx; 0 fy cy; 0 0 1], R = [r11 r12 r13; r21 r22 r23; r31 r32 r33], and the translation vector t = [t1; t2; t3].
Of course, there are many calibration algorithms for cameras, and other algorithms commonly used in the industry may also be adopted; the present invention is not limited in this respect. The point of using a calibration algorithm is to improve the accuracy of the first-target-object tracking method of the present invention.
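As an illustration of equation (1), the sketch below projects an object point M′ into pixel coordinates with NumPy. The intrinsic values (fx, fy, cx, cy) and the point coordinates are hypothetical, chosen only to show the projection model, and are not values from the patent:

```python
import numpy as np

# Hypothetical intrinsic parameters: fx, fy are focal lengths in pixels,
# (cx, cy) is the principal point.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # extrinsic rotation (camera axes = world axes)
t = np.array([[0.0], [0.0], [0.0]])  # extrinsic translation (camera at the origin)

M = np.array([[0.1], [0.2], [2.0]])  # object point M' in world coordinates (metres)

# s * m' = A [R | t] M'  -- equation (1)
sm = A @ (np.hstack([R, t]) @ np.vstack([M, [[1.0]]]))
m = (sm / sm[2]).ravel()[:2]         # divide out the scale s to get pixel coordinates
```

With these assumed values the point lands at pixel (360, 320); Zhang's method estimates A, R and t from several chessboard views rather than assuming them as done here.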
(2) When the camera of step S10 is a binocular camera or a multi-view camera, the following steps are adopted for calibration:
S101, first calibrating any one lens of the binocular or multi-view camera, again using the common Zhang chessboard calibration algorithm, for example with the following equation:
s·m′ = A[R | t]M′ (1)
In equation (1), A is the intrinsic parameter matrix, R is the extrinsic rotation matrix, t is the translation vector, m′ is the coordinate of the image point in the image, and M′ is the space coordinate of the object point; A, R and t are given respectively by:
A = [fx 0 cx; 0 fy cy; 0 0 1], R = [r11 r12 r13; r21 r22 r23; r31 r32 r33], and the translation vector t = [t1; t2; t3];
S102, calculating the relative rotation matrix and the relative translation between the cameras of the binocular or multi-view camera:
relative rotation matrix R′ = [r11 r12 r13; r21 r22 r23; r31 r32 r33] and relative translation T = [T1; T2; T3].
Of course, the above calibration algorithm for binocular or multi-view cameras is merely a typical one; other algorithms commonly used in the industry may also be adopted, and the present invention is not limited in this respect. The point of using a calibration algorithm is to improve the accuracy of the first-target-object tracking method of the present invention.
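Step S102 can be sketched as follows, assuming each lens has already been calibrated against a common world frame in step S101. The composition formulas are the standard ones for chaining extrinsics; the camera poses used are hypothetical:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Relative rotation R' and translation T taking camera-1 coordinates
    to camera-2 coordinates, given each camera's world extrinsics
    (x_cam = R @ X_world + t)."""
    R_rel = R2 @ R1.T
    T_rel = t2 - R_rel @ t1
    return R_rel, T_rel

# Hypothetical rig: camera 1 at the world origin, camera 2 shifted 0.1 m along x
# (its extrinsic translation is therefore -0.1 along x).
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.1, 0.0, 0.0])
R_rel, T_rel = relative_pose(R1, t1, R2, t2)
```

For a parallel binocular rig like this, R′ reduces to the identity and T carries only the baseline, which is what the later rectified-disparity computation relies on.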
Further, between the above steps S11 and S12 the method may also include:
S14, acquiring the marker points;
S15, screening target marker points from the acquired marker points.
Specifically, when there are multiple marker points, all marker points corresponding to the first target object are acquired with the camera, the marker points most relevant to the first target object are chosen from among them, and the marker points are then extracted from the image using a suitable image processing algorithm; this extraction is performed according to the features of the marker points. Generally, the feature-based extraction applies a feature extraction function H to the image I, obtains a feature score for each point in the image, and filters out the marker points whose feature value is sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))
F = {(x, y) : S(x, y) > s0}
In the above formulas, H is the feature extraction function; I(x, y) is the image value at pixel (x, y), which may be a gray value or a three-channel color energy value; S(x, y) is the feature score of pixel (x, y) after feature extraction; s0 is a feature-score threshold, and pixels with S(x, y) greater than s0 are regarded as marker points; F is the set of marker points. Preferably, the embodiment of the present invention uses infrared marker points, whose energy signature in the image formed by an infrared camera is quite distinct. Because a narrow-band infrared LED and a matching infrared camera are used, most pixels of the captured image have very low energy, and only the pixels corresponding to marker points have high energy. The function H corresponding to this case can therefore perform region growing on the thresholded image B(x, y) to obtain several sub-images, and then extract the center of gravity of each sub-image. The feature extraction function H(x, y) may alternatively be a feature-point function such as Harris, SIFT or FAST, or an image processing function such as circular-spot extraction. At the same time, according to the stray ambient light that can form an image in the infrared camera, constraints such as the expected spot area of a marker point and the positional relationship of the marker points in the two-dimensional image can be added during the extraction process to screen the extracted marker points.
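A minimal sketch of the threshold-and-region-growing variant of H described above: threshold the image at s0, grow 4-connected regions, and take each region's center of gravity as a marker-point candidate. The synthetic image and threshold value are assumptions for illustration only:

```python
import numpy as np

def extract_markers(img, s0):
    """Keep pixels with value > s0 (the feature-score threshold), grow
    4-connected regions, and return each region's centroid as (x, y)."""
    mask = img > s0
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = img.shape
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0, x0] and not seen[y0, x0]:
                stack, pix = [(y0, x0)], []
                seen[y0, x0] = True
                while stack:                       # region growing (flood fill)
                    y, x = stack.pop()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))  # center of gravity
    return centroids

# Synthetic "infrared" image: two bright spots on a dark background.
img = np.zeros((20, 20))
img[2:4, 2:4] = 255        # spot centred at (2.5, 2.5)
img[10:13, 14:17] = 255    # spot centred at (15.0, 11.0)
spots = sorted(extract_markers(img, s0=128))
```

Screening constraints such as minimum/maximum spot area could be added by filtering on `len(pix)` before appending the centroid.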
When the number of cameras is greater than one, the marker points in images acquired by different cameras at the same moment (or very close moments) must be matched, providing the conditions for the subsequent three-dimensional reconstruction of the marker points. The matching method is determined by the feature extraction function H. Classic gradient- and grayscale-based feature extraction operators such as Harris, SIFT and FAST, together with the matching procedures associated with them, can be used to obtain and match the marker points. Epipolar constraints or prior knowledge of the marker points can also be used for matching. Matching with the epipolar constraint works as follows: since the projections of the same spatial point onto the images of two different cameras all lie in one common plane, for a marker point p0 in one camera c0 an epipolar line can be computed in the other camera c1, and the marker point p1 in c1 corresponding to p0 satisfies the following relation:
[p1; 1]ᵀ F [p0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. Using this relation, the number of candidates for marker point p1 can be reduced significantly, improving the matching accuracy.
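The constraint [p1; 1]ᵀF[p0; 1] = 0 can be sketched as below. The fundamental matrix shown is the well-known one for a rectified horizontal stereo pair (corresponding points then share the same y coordinate); it is an assumed example, not a value from the patent:

```python
import numpy as np

def epipolar_residual(F, p0, p1):
    """[p1; 1]^T F [p0; 1] -- zero when p1 lies on the epipolar line of p0."""
    return float(np.append(p1, 1.0) @ F @ np.append(p0, 1.0))

# Fundamental matrix of a rectified horizontal stereo pair (pure x-translation).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

good = epipolar_residual(F, p0=np.array([100.0, 50.0]), p1=np.array([90.0, 50.0]))
bad  = epipolar_residual(F, p0=np.array([100.0, 50.0]), p1=np.array([90.0, 60.0]))
```

In practice one would keep candidates whose absolute residual (or point-to-epipolar-line distance) falls below a small tolerance rather than requiring exactly zero.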
In addition, the prior knowledge of the marker points that can be used includes their spatial order, their size, and so on. For example, according to the mutual position relationship of the two cameras, the two pixels of every pair corresponding to the same spatial point can be made equal in some dimension, such as the y axis, in the captured images; this process is also called image rectification. The marker points can then simply be matched in x-axis order: the smallest x corresponds to the smallest x, and so on, up to the largest x corresponding to the largest x.
Based on the number of cameras used for tracking, the target tracking method of the present invention is discussed in detail below.
Referring to Figure 14, which is a detailed flow diagram of the first variation of step S13 in Figure 11: in this embodiment, there are fewer than four marker points corresponding to the tracked first target object, and a monocular camera is adopted to obtain their positional information. Step S13 then further includes:
S131, calculating the homography between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points in the standard image, and assumed conditions about the scene containing the first target object. The marker points of the current image are matched with the reference marker points of the standard image, and the homography between the two images is calculated from their respective plane coordinates. The homography here is the homography of projective geometry, a transformation commonly used in the field of computer vision.
S132, calculating, from the homography, the rigid transformation of the marker points from their spatial position at the moment the standard image was captured to their spatial position at the current moment; then calculating the current spatial position of the marker points; and finally calculating the current spatial position of the first target object from the current spatial position of the marker points.
Specifically, as the assumed scene condition, the value of some dimension can be assumed invariant under the rigid transformation of the marker points in the scene. For example, with space coordinates x, y, z in the three-dimensional scene, where x and y are parallel to the x and y axes of the camera image (the plane coordinates) and z is perpendicular to the camera image, the assumption may be that the z coordinate of each marker point is constant, or that its x and/or y coordinate is constant. Different scene assumptions lead to somewhat different estimation methods. As another example, under a different assumption, the orientation of the first target object relative to the camera is assumed to remain constant during use; the current spatial position of the first target object can then be inferred from the ratio between the mutual distances of the marker points in the current image and those in the standard image.
With the above calculation, the spatial position of the first target object can be reconstructed by a monocular camera when there are fewer than four marker points. This is simple to operate, the tracking result is comparatively accurate, and the use of a single camera reduces the cost of tracking the first target object.
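The homography of step S131 can be estimated from four or more matched marker points by the direct linear transform (DLT), a standard computer-vision procedure. The point coordinates below are hypothetical values chosen so the expected homography is easy to verify:

```python
import numpy as np

def find_homography(src, dst):
    """Direct linear transform: H such that dst ~ H @ src in homogeneous
    coordinates, from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    # The solution is the null vector of the stacked system (smallest
    # right singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Reference marker points in the standard image and in the current image
# (hypothetical: the current view is scaled by 2 and shifted by (2, 1)).
std = [(0, 0), (1, 0), (1, 1), (0, 1)]
cur = [(2, 1), (4, 1), (4, 3), (2, 3)]
H = find_homography(std, cur)
```

The recovered H should equal [[2, 0, 2], [0, 2, 1], [0, 0, 1]] up to numerical precision; the subsequent step S132 then interprets such a homography, together with the scene assumptions, as a rigid motion of the marker points.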
In the above method of recovering three-dimensional coordinates from images acquired by a single camera, less image information is available, so the number of marker points must be increased to provide more information from which the three-dimensional coordinates of the object can be calculated. According to machine vision theory, to infer the stereoscopic information of a scene from a single image, at least five fixed points in the image must be determined. The monocular scheme therefore increases the number of marker points and the complexity of the design; but at the same time it requires only one camera, which reduces the complexity of image acquisition and lowers the cost.
Referring to Figure 15, which is a detailed flow diagram of the second variation of step S13 in Figure 11: in this embodiment, the number of marker points is five or more, and a monocular camera is adopted to obtain their positional information. Step S13 then further includes:
S133, calculating the homography between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image;
S134, calculating, from the homography, the rigid transformation of the marker points from their spatial position at the moment the standard image was captured to their spatial position at the current moment; then calculating the current spatial position of the marker points; and finally calculating the current spatial position of the first target object from the current spatial position of the marker points.
Specifically, a standard image is first acquired; a device such as a precise depth camera or laser scanner is used to measure the spatial positions of the reference marker points, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference marker points at that moment are obtained.
In use, the camera continuously captures the two-dimensional image coordinates of all marker points in the current image containing the first target object. From these coordinates and the two-dimensional coordinates of the reference marker points in the standard image, and under the assumption that the relative positions of the marker points are constant, the rigid transformation between the current marker points and the marker points at the moment the standard image was captured is calculated; the spatial transformation of the current marker points relative to the standard image is thereby derived, and the current spatial positions of the marker points are calculated.
Here, five or more points are needed to compute the rigid spatial transformation [R | T] between the current marker points and the marker points at the standard-image moment; preferably these five or more points are not coplanar, and the projection matrix P of the camera has been calibrated in advance. [R | T] is computed as follows:
Let X0 and Xi be the homogeneous coordinates of each marker point in the standard image and the current image respectively. They satisfy the constraint X0·P⁻¹[R | T]P = Xi, and all marker points together form a system of equations in the unknown [R | T]. When the number of marker points exceeds five, [R | T] can be solved; when it exceeds six, an optimal solution for [R | T] can be sought, for example by singular value decomposition (SVD) and/or by iterative computation of a nonlinear optimal solution. After the spatial positions of the marker points are calculated, the spatial position of the first target object (such as a human eye) can be deduced from the pre-calibrated mutual position relationship between the marker points and the first target object.
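One common SVD-based way to solve for a rigid transformation [R | T] between two sets of corresponding marker positions is the Kabsch method. The patent leaves the specific solver open, so the sketch below is an assumption for illustration, with hypothetical marker coordinates:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion with Q ≈ R @ P + T, via SVD (Kabsch),
    for two sets of corresponding 3-D points given as rows."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    return R, T

# Five hypothetical, non-coplanar marker positions at the standard-image moment...
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
# ...and the same markers now: rotated 90 degrees about z and shifted.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
Q = P @ Rz.T + np.array([0.5, 0.0, 2.0])
R, T = rigid_transform(P, Q)
```

With more than the minimum number of markers, the same routine returns the least-squares optimum, which matches the text's note that extra points allow an optimal solution.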
This embodiment uses only one camera and five or more marker points, and can nevertheless reconstruct the spatial position of the first target object accurately; it is not only simple to operate but also low in cost.
Referring to Figure 16, which is a detailed flow diagram of the third variation of step S13 in Figure 11: as shown in Figure 16, embodiment 3 uses two or more cameras and one or more marker points. When a binocular camera or multi-view camera is adopted to obtain the positional information of the marker points, step S13 further includes:
S35, calculating the current spatial position of each marker point by the principle of binocular or multi-view stereo reconstruction. Binocular or multi-view reconstruction may, for example, use the parallax between marker points matched across the left and right cameras to calculate the current spatial position of each marker point, or be realized by other existing common methods.
S36, calculating the current spatial position of the first target object from the current spatial positions of the marker points.
Specifically, the mutual position relationship between the cameras is first calibrated by the multi-view camera calibration method. In use, the marker-point coordinates are then extracted from the image captured by each camera, the marker points are matched so that each marker point's correspondence in every camera is obtained, and the spatial positions of the marker points are calculated from the matched marker points and the mutual position relationship between the cameras.
In a specific example, a multi-view camera (i.e. the number of cameras is greater than or equal to 2) is used to capture the marker points and realize stereo reconstruction. Knowing the coordinate u of a marker point in the image captured by a certain camera, and that camera's parameter matrix M, a ray can be calculated on which the marker point must lie in space:
αⱼuⱼ = MⱼX, j = 1 … n (where n is a natural number greater than or equal to 2)
Likewise, according to the above formula, a corresponding ray can be calculated for this marker point in each of the other cameras. In theory these rays converge at one point, namely the spatial position of the marker point. In practice, because of the digitization error of the cameras and the errors in calibrating the intrinsic and extrinsic parameters, the rays do not converge exactly at one point, so the spatial position of the marker point must be approximated by triangulation, for example using a least-squares criterion to determine the point nearest to all the rays as the object point:
X′ = argmin_X Σⱼ₌₁..m [ (m₁ⱼX/m₃ⱼX − u₁ⱼ)² + (m₂ⱼX/m₃ⱼX − u₂ⱼ)² ]
where mᵢⱼ denotes the i-th row of the projection matrix Mⱼ and (u₁ⱼ, u₂ⱼ) is the image coordinate of the marker point in camera j.
After the spatial positions of the marker points are calculated, the spatial position of the first target object (such as a human eye) can be deduced from the pre-calibrated mutual position relationship between the marker points and the first target object.
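The triangulation above can be sketched in its linear form: each camera j contributes the equations u₁ⱼ·(m₃ⱼX) = m₁ⱼX and u₂ⱼ·(m₃ⱼX) = m₂ⱼX, and the stacked system is solved by SVD. The camera matrices and the test point below are hypothetical:

```python
import numpy as np

def triangulate(Ps, us):
    """Linear least-squares triangulation: the 3-D point X whose projections
    through the 3x4 matrices Ps best match the image points us."""
    rows = []
    for P, (u1, u2) in zip(Ps, us):
        rows.append(u1 * P[2] - P[0])   # from u1 = (m1 . X) / (m3 . X)
        rows.append(u2 * P[2] - P[1])   # from u2 = (m2 . X) / (m3 . X)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]                 # de-homogenise

# Two hypothetical cameras: same intrinsics, second one shifted 0.2 m along x.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.1, -0.05, 2.0])
X = triangulate([P0, P1], [proj(P0, X_true), proj(P1, X_true)])
```

With noisy measurements the same stacked system returns the algebraic least-squares point; the geometric criterion quoted in the text refines this further, typically by iteration.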
Among the above methods of stereo reconstruction with a multi-view camera, the preferred method is calculation with a binocular camera. Its principle is the same as the multi-view reconstruction principle described above: the spatial position of a marker point is calculated from the mutual position relationship of the two cameras and the two-dimensional coordinates of the marker point in the two camera images. The minor difference is that the binocular cameras are laid out in parallel; after a simple calibration, the images of the two cameras are rectified as described above, so that each pair of matched two-dimensional marker points is equal on the y (or x) axis, and the depth of a marker point from the cameras can then be calculated from the gap along the x (or y) axis between the rectified two-dimensional marker points. This method can be regarded as the specialization of multi-view stereo reconstruction to the binocular case; it simplifies the reconstruction steps and is easier to realize in device hardware.
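After rectification, the binocular depth computation mentioned above reduces to the standard relation z = f·B/d, where f is the focal length in pixels, B the baseline, and d the x-axis gap (disparity) between the matched points. A minimal sketch with assumed rig parameters:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched marker pair on a rectified (parallel) binocular rig:
    z = f * B / d, where d is the x-axis gap between the two image points."""
    d = x_left - x_right
    return focal_px * baseline_m / d

# Hypothetical rig: 500 px focal length, 0.1 m baseline.
z = depth_from_disparity(x_left=340.0, x_right=315.0, focal_px=500.0, baseline_m=0.1)
```

The disparity of 25 px here gives a depth of 2.0 m; larger disparities correspond to nearer markers, which is why the gap shrinks toward zero for distant points.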
Embodiment 6
Referring to Figure 17, which is a detailed flow diagram of step S3 in Figure 11: as shown in Figure 17, based on embodiment 2 and the preceding embodiments, step S3 of the soft-lens-based stereo display method of the present invention further includes:
S301, an arrangement-parameter determining step: calculating the pixel arrangement parameters on the display unit according to the obtained positional information of the first target object, the grating parameters of the light splitting unit, and the display parameters of the display unit;
S302, a parallax image arrangement step: arranging the parallax images on the display unit according to the arrangement parameters;
S303, a parallax image playing step: playing the parallax images.
Through the above steps, the stereo image to be played is rearranged, which improves the stereo display effect.
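Steps S301–S303 can be illustrated with a toy column-interleaving scheme: the light splitting unit steers alternating pixel columns to the two eyes, and the arrangement parameter decides which view occupies which columns. Real arrangement parameters depend on the grating parameters and viewer position; the `phase` parameter and views below are purely hypothetical:

```python
import numpy as np

def interleave_views(left, right, phase):
    """Column-wise interleaving of the left and right views. `phase` (a toy
    arrangement parameter, here 0 or 1) decides which view occupies the
    even pixel columns."""
    out = np.empty_like(left)
    out[:, phase::2] = left[:, phase::2]
    out[:, 1 - phase::2] = right[:, 1 - phase::2]
    return out

L = np.full((2, 4), 1)   # left view (all 1s)
R = np.full((2, 4), 2)   # right view (all 2s)
frame = interleave_views(L, R, phase=0)
```

As the tracked viewer moves, a tracking-driven system would update `phase` (and, in practice, a fractional offset and period) so that each eye keeps receiving its own view.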
Further, before step S301 the method also includes: S304, a stereo image obtaining step, which obtains the information of the stereo image captured in real time. Acquiring the stereo image information captured in real time while the parallax images are being played improves the efficiency of image processing; it not only guarantees real-time playback but also reduces the large memory footprint otherwise required for stereoscopic display, thereby lowering cost.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (25)

  1. A stereoscopic display system based on a soft lens, including: a display unit, a light splitting unit, a tracking device and an image capturing unit, wherein the light splitting unit is located on the display side of the display unit and divides the image space displayed by the display unit into a left view and a right view, the tracking device is used to obtain the positional information of a first target object, and the image capturing unit is used to shoot a second target object; characterized in that the stereoscopic display system based on a soft lens further includes an image playing and processing device, connected respectively with the tracking device, the display unit and the image capturing unit, wherein the image playing and processing device processes in real time the stereo image received from the image capturing unit according to the positional information of the first target object, the grating parameters of the light splitting unit and the display parameters of the display unit, and after processing sends the result to the display unit for real-time display.
  2. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the image capturing unit is arranged on the soft lens.
  3. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the image capturing unit includes a monocular camera, which shoots and obtains the stereo image of the second target object with one camera head.
  4. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the image capturing unit includes a binocular camera, which shoots and obtains the stereo image of the second target object with two camera heads.
  5. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the image capturing unit includes a multi-view camera, which shoots and obtains the stereo image of the second target object with three or more camera heads arranged in an array.
  6. The stereoscopic display system based on a soft lens as claimed in any one of claims 2 to 5, characterized in that the image capturing unit further includes a collecting unit, which collects the stereo image of the second target object and extracts left view information and right view information from the stereo image.
  7. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the tracking device includes a video camera, which tracks the position changes of the first target object.
  8. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the tracking device includes an infrared receiver, which receives the infrared positioning signal sent by an infrared transmitter arranged corresponding to the first target object.
  9. The stereoscopic display system based on a soft lens as claimed in claim 1, characterized in that the tracking device includes:
    a marker point setting unit, which sets marker points corresponding to the spatial position of the first target object;
    an acquiring unit, which obtains the positional information of the marker points; and
    a reconstruction unit, which reconstructs the spatial position of the first target object according to the positional information of the marker points.
  10. The stereoscopic display system based on a soft lens as claimed in claim 9, characterized in that the acquiring unit further includes:
    a presetting module, which presets a standard image in which reference marker points are arranged, and obtains the space coordinates and plane coordinates of the reference marker points in the standard image;
    an acquisition module, which obtains a current image containing the first target object and the marker points, together with the plane coordinates of the marker points in the current image; and
    a matching module, which matches the marker points in the current image with the reference marker points of the standard image.
  11. The stereoscopic display system based on a soft lens as claimed in claim 10, characterized in that, when the number of marker points is fewer than four and a monocular camera is adopted to obtain their positional information, the reconstruction unit further includes:
    a first computing module, for calculating the homography between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points in the standard image, and assumed conditions about the scene containing the first target object; and
    a first reconstruction module, for calculating from the homography the rigid transformation of the marker points from their spatial position at the moment the standard image was captured to their spatial position at the current moment, then calculating the current spatial position of the marker points, and calculating the current spatial position of the first target object from the current spatial position of the marker points.
  12. The stereoscopic display system based on a soft lens as claimed in claim 10, characterized in that, when the number of marker points is five or more and a monocular camera is adopted to obtain their positional information, the reconstruction unit further includes:
    a second computing module, for calculating the homography between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image; and
    a second reconstruction module, for calculating from the homography the rigid transformation of the marker points from their spatial position at the moment the standard image was captured to their spatial position at the current moment, then calculating the current spatial position of the marker points, and calculating the current spatial position of the first target object from the current spatial position of the marker points.
  13. The stereoscopic display system based on a soft lens as claimed in claim 10, characterized in that, when a binocular camera or multi-view camera is adopted to obtain the positional information of the marker points, the reconstruction unit further includes:
    a third computing module, for calculating the current spatial position of each marker point by the principle of binocular or multi-view stereo reconstruction; and
    a third reconstruction module, for calculating the current spatial position of the first target object from the current spatial positions of the marker points.
  14. The stereoscopic display system based on a soft lens as claimed in any one of claims 2 to 13, characterized in that the image playing and processing device includes an image playing processing unit and a memory unit, wherein the image playing processing unit performs real-time arrangement processing on the received stereo image according to the positional information of the first target object, the grating parameters of the light splitting unit and the display parameters of the display unit, and after processing sends the result to the display unit for real-time display; the memory unit stores the images transmitted by the image capturing unit; and the image playing processing unit is connected with the memory unit.
  15. The stereoscopic display system based on a soft lens as claimed in claim 14, characterized in that the image playing processing unit includes:
    a stereo image acquisition module, which obtains the information of the stereo image shot by the image capturing unit;
    an arrangement-parameter determining module, which calculates the arrangement parameters on the display unit according to the obtained positional information of the first target object and the grating parameters of the light splitting unit;
    a parallax image arrangement module, for arranging the parallax images of the stereo image on the display unit according to the arrangement parameters; and
    a parallax image playing module, which plays the parallax images.
  16. The stereoscopic display system based on a soft lens as claimed in any one of claims 1 to 15, characterised in that a bonding unit is provided between the light-splitting unit and the display unit, and the light-splitting unit is bonded to the display unit by the bonding unit.
  17. The stereoscopic display system based on a soft lens as claimed in claim 16, characterised in that the bonding unit includes a first substrate, a second substrate and an air layer between the first substrate and the second substrate.
  18. A stereoscopic display method based on a soft lens, characterised in that the method comprises the following steps:
    S0: capturing a stereoscopic image of a second target object, and sending the information of the stereoscopic image in real time;
    S1: obtaining the position information of a first target object;
    S2: obtaining the grating parameters of the light-splitting unit of a display device and the display parameters of the display unit of the display device;
    S3: processing the received stereoscopic image captured by the image capturing unit in real time according to the position information, the grating parameters and the display parameters;
    S4: displaying the image to be played.
  19. The stereoscopic display method based on a soft lens as claimed in claim 18, characterised in that S0 further includes:
    an image acquisition step of collecting the stereoscopic image of the second target object and extracting left-view information and right-view information from the stereoscopic image.
  20. The stereoscopic display method based on a soft lens as claimed in claim 19, characterised in that S1 further includes:
    S11: setting marker points at spatial positions corresponding to the first target object;
    S12: obtaining the position information of the marker points;
    S13: reconstructing the spatial position of the first target object according to the position information of the marker points.
  21. The stereoscopic display method based on a soft lens as claimed in claim 20, characterised in that S12 further includes:
    S121: presetting a standard image provided with reference marker points, and obtaining the spatial coordinates of the reference marker points and their plane coordinates in the standard image;
    S122: obtaining a current image containing the first target object and the marker points, together with the plane coordinates of the marker points in the current image;
    S123: matching the marker points in the current image with the reference marker points of the standard image.
  22. The stereoscopic display method based on a soft lens as claimed in claim 21, characterised in that, when the number of marker points is less than four and a monocular camera is used to obtain the position information of the marker points, S13 further includes:
    S131: calculating the homography relation between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points of the standard image, and assumed conditions of the scene in which the first target object is located;
    S132: calculating, according to the homography relation, the rigid transformation from the spatial position of the marker points at the moment the standard image was captured to their spatial position at the current time, then calculating the spatial position of the marker points at the current time, and calculating the current spatial position of the first target object from the spatial position of the marker points at the current time.
  23. The stereoscopic display method based on a soft lens as claimed in claim 21, characterised in that, when the number of marker points is more than five and a monocular camera is used to obtain the position information of the marker points, S13 further includes:
    S133: calculating the homography relation between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points of the standard image;
    S134: calculating, according to the homography relation, the rigid transformation from the spatial position of the marker points at the moment the standard image was captured to their spatial position at the current time, then calculating the spatial position of the marker points at the current time, and calculating the current spatial position of the first target object from the spatial position of the marker points at the current time.
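The homography step that claims 22 and 23 describe can be sketched with a standard direct linear transform (DLT). This is an illustrative reconstruction of the named technique, not the patent's implementation; the marker coordinates below are invented for the example.

```python
import numpy as np

def estimate_homography(pts_std, pts_cur):
    """Estimate the 3x3 homography H mapping standard-image plane
    coordinates to current-image plane coordinates via DLT.
    DLT needs at least 4 point pairs; claim 23's five-plus markers
    give an over-determined system solved in least squares via SVD."""
    rows = []
    for (x, y), (u, v) in zip(pts_std, pts_cur):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # Solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so that H[2, 2] == 1

# Invented marker coordinates: the current view is the standard view
# scaled by 20 and translated by (10, 20).
std = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
cur = [(10, 20), (30, 20), (30, 40), (10, 40), (20, 30)]
H = estimate_homography(std, cur)
```

With fewer than four markers (claim 22) the system above is under-determined, which is why that claim additionally relies on assumed conditions of the scene.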
  24. The stereoscopic display method based on a soft lens as claimed in claim 21, characterised in that, when a binocular camera or a multi-lens camera is used to obtain the position information of the marker points, S13 further includes:
    S135: calculating the spatial position of each marker point at the current time using the binocular or multi-view three-dimensional reconstruction principle;
    S136: calculating the current spatial position of the target object according to the spatial position of the marker points at the current time.
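A minimal sketch of step S135 for the rectified binocular case: depth from disparity, then back-projection. The calibration values (focal length, baseline, principal point) are invented for illustration; a real multi-view system would triangulate from calibrated projection matrices.

```python
import numpy as np

def triangulate_marker(u_left, u_right, v, focal, baseline, cx, cy):
    """Rectified binocular triangulation of one marker point:
    depth from disparity, Z = f * B / (uL - uR), then back-project
    the left-image pixel to recover X and Y."""
    disparity = u_left - u_right
    Z = focal * baseline / disparity
    X = (u_left - cx) * Z / focal
    Y = (v - cy) * Z / focal
    return np.array([X, Y, Z])

# Hypothetical calibration: f = 500 px, baseline = 0.1 m, centre (320, 240)
P = triangulate_marker(420.0, 370.0, 240.0, 500.0, 0.1, 320.0, 240.0)
# → marker at X = 0.2 m, Y = 0.0 m, Z = 1.0 m
```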
  25. The stereoscopic display method based on a soft lens as claimed in any one of claims 18 to 24, characterised in that S3 further includes:
    S304: a stereoscopic image acquisition step of obtaining the information of the stereoscopic image captured in real time;
    S301: an arrangement parameter determining step of calculating the pixel arrangement parameters on the display unit according to the obtained position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit;
    S302: a parallax image arrangement step of arranging the parallax images of the stereoscopic image on the display unit according to the arrangement parameters;
    S303: a parallax image playback step of playing the parallax images.
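Steps S301 to S303 amount to interleaving the left and right parallax images according to an arrangement parameter. The two-view, column-interleaved sketch below is an assumption for illustration; a real system derives the offset from viewer position, grating pitch and display parameters, as the claims state.

```python
import numpy as np

def interleave_views(left, right, offset=0):
    """Column-interleave a left/right pair for a two-view barrier or
    lenticular panel. `offset` (0 or 1) selects which columns carry
    the left view; it stands in for the arrangement parameter that
    the claims derive from viewer position and grating parameters."""
    out = np.empty_like(left)
    out[:, offset::2] = left[:, offset::2]
    out[:, 1 - offset::2] = right[:, 1 - offset::2]
    return out

# Toy frames: left view all 1s, right view all 0s, 2 rows x 4 columns
left = np.ones((2, 4), dtype=np.uint8)
right = np.zeros((2, 4), dtype=np.uint8)
frame = interleave_views(left, right, offset=0)
```

When tracking reports that the viewer has moved by one subpixel period, flipping `offset` swaps which columns carry each eye's view, which is the effect the arrangement-parameter step produces.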
CN201410852264.XA 2014-12-29 2014-12-29 Stereoscopic display system based on soft lens and method Pending CN105812776A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410852264.XA CN105812776A (en) 2014-12-29 2014-12-29 Stereoscopic display system based on soft lens and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410852264.XA CN105812776A (en) 2014-12-29 2014-12-29 Stereoscopic display system based on soft lens and method

Publications (1)

Publication Number Publication Date
CN105812776A true CN105812776A (en) 2016-07-27

Family

ID=56421459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410852264.XA Pending CN105812776A (en) 2014-12-29 2014-12-29 Stereoscopic display system based on soft lens and method

Country Status (1)

Country Link
CN (1) CN105812776A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1514300A * 2002-12-31 2004-07-21 Tsinghua University Method and system of multi-viewing-angle X-ray stereo imaging
CN101562756A (en) * 2009-05-07 2009-10-21 昆山龙腾光电有限公司 Stereo display device as well as display method and stereo display jointing wall thereof
CN101984670A (en) * 2010-11-16 2011-03-09 深圳超多维光电子有限公司 Stereoscopic displaying method, tracking stereoscopic display and image processing device
CN102208012A (en) * 2010-03-31 2011-10-05 爱信艾达株式会社 Scene matching reference data generation system and position measurement system
CN103139592A (en) * 2011-11-23 2013-06-05 韩国科学技术研究院 3d display system
CN103875243A (en) * 2011-10-14 2014-06-18 奥林巴斯株式会社 3d endoscope device
WO2014112782A1 * 2013-01-18 2014-07-24 Koh Young Technology Inc. Tracking system and tracking method using same
CN204377059U * 2014-12-29 2015-06-03 Guangdong Mingyi Medical Charitable Foundation Three-dimensional display system based on soft lens


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021063321A1 (en) * 2019-09-30 2021-04-08 北京芯海视界三维科技有限公司 Method and apparatus for realizing 3d display, and 3d display terminal
WO2022012455A1 (en) * 2020-07-15 2022-01-20 北京芯海视界三维科技有限公司 Method and apparatus for implementing target object positioning, and display component
TWI816153B (en) * 2020-07-15 2023-09-21 大陸商北京芯海視界三維科技有限公司 Method, device and display device for achieving target object positioning

Similar Documents

Publication Publication Date Title
CN205610834U (en) Stereo display system
US11199706B2 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
CN101072366B (en) Free stereo display system based on light field and binocular vision technology
CN105809654A (en) Target object tracking method and device, and stereo display equipment and method
CN106101689B Method for performing augmented reality with virtual reality glasses using a mobile phone monocular camera
CN109491087A (en) Modularized dismounting formula wearable device for AR/VR/MR
CN102098524B (en) Tracking type stereo display device and method
CN106773080B (en) Stereoscopic display device and display method
CN204578692U (en) Three-dimensional display system
CN108307675A (en) More baseline camera array system architectures of depth enhancing in being applied for VR/AR
CN105812774A (en) Three-dimensional display system based on endoscope and method thereof
CN204377059U Three-dimensional display system based on soft lens
CN105651384A (en) Full-light information collection system
CN105812772A (en) Stereo display system and method for medical images
AU2019221088A1 (en) Method and system for calibrating a plenoptic camera system
CN101702056B (en) Stereo image displaying method based on stereo image pairs
CN109660786A Naked-eye 3D stereoscopic imaging and observation method
CN204539353U (en) Medical image three-dimensional display system
CN204377058U Three-dimensional display system based on hard lens
CN109068035A Intelligent micro-camera-array endoscopic imaging system
CN204377057U Three-dimensional display system based on intubation endoscope
CN105812776A (en) Stereoscopic display system based on soft lens and method
CN103731653A (en) System and method for integrated 3D camera shooting
CN104887316A (en) Virtual three-dimensional endoscope displaying method based on active three-dimensional displaying technology
CN104469340A (en) Stereoscopic video co-optical-center imaging system and imaging method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160727