CN205610834U - Stereo display system - Google Patents

Stereo display system

Info

Publication number
CN205610834U
CN205610834U (application CN201521106711.3U)
Authority
CN
China
Prior art keywords
unit
image
labelling point
point
dimensional display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201521106711.3U
Other languages
Chinese (zh)
Inventor
包瑞
李统福
周峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Super Technology Co Ltd
Original Assignee
深圳超多维光电子有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳超多维光电子有限公司
Application granted
Publication of CN205610834U
Legal status: Active (current)
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/225Image signal generators using stereoscopic image cameras using a single 2D image sensor using parallax barriers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The utility model belongs to the field of stereoscopic display technology and provides a stereo display system comprising a display unit, a tracking device and a light-splitting unit. The tracking device acquires the position information of a first target object, and the light-splitting unit is arranged on the display side of the display unit and spatially separates the image shown by the display unit into a left view and a right view. The stereo display system further comprises an image playback processing unit connected to the tracking device and to the display unit. According to the position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit, the image playback processing unit processes the stereoscopic image to be played in real time and sends the processed image to the display unit for display. The system thereby displays stereoscopic images in real time and improves the user experience.

Description

Three-dimensional display system
Technical field
The utility model relates to the field of stereoscopic display technology, and in particular to a stereoscopic display system.
Background
In recent years, stereoscopic display technology has developed rapidly and become a focus of research. It is increasingly used in fields such as medicine, advertising, military applications, exhibitions, gaming and vehicle-mounted displays. Stereoscopic display technology falls into two categories: glasses-type stereoscopic display and naked-eye (autostereoscopic) display, which requires no glasses. Glasses-type technology was developed earlier, is relatively mature, and is still used in many fields. Naked-eye technology started later and is technically more difficult; although it is already used in related fields, its display effect still cannot meet users' needs. In particular, when current naked-eye technology is applied to real-time playback scenarios such as live sports broadcasts or live surgery, the playback performance is poor and cannot satisfy viewing requirements. For this reason, such fields still mostly use glasses-type stereoscopic display, and naked-eye stereoscopic display has not yet been adopted there.
In addition, current naked-eye stereoscopic display systems generally use eye-tracking equipment such as a camera to capture the position of the viewer's eyes, and then adaptively adjust the light-splitting unit or the pixel arrangement of the display panel according to the positions of the left and right eyes, so that the viewer can move freely within a certain range without degrading the displayed stereoscopic image. However, existing eye-tracking equipment such as cameras must perform feature analysis on a captured two-dimensional image containing the eyes in order to obtain the eye position information. With this approach it is difficult to guarantee stability and accuracy. If accurate eye position information is not obtained, the stereoscopic display effect is severely affected and the user experience suffers. This is especially critical in fields that require real-time playback of stereoscopic images. In the medical field, for example, a surgeon operating under real-time stereoscopic guidance must frequently watch the live stereoscopic image; if the tracked eye position is inaccurate, the operation is affected and, in severe cases, the success of the surgery may be compromised. Likewise, live sports broadcasting has high real-time requirements: if delays occur in image transmission and processing, real-time relay becomes impossible and the user experience is very poor.
Therefore, how to achieve real-time display on a naked-eye stereoscopic display device has become an urgent problem to be solved.
Summary of the utility model
The purpose of the utility model is to provide a stereo display system, so as to solve one or more of the technical problems caused by the limitations and shortcomings of the prior art.
The utility model proposes a stereo display system comprising a display unit, a light-splitting unit and a tracking device. The tracking device is used to obtain the position information of a first target object. The light-splitting unit is located on the display side of the display unit and spatially separates the image shown by the display unit into a left view and a right view. The stereo display system further comprises an image playback processing unit connected to the tracking device and to the display unit; according to the position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit, the image playback processing unit processes the image to be played in real time and sends it to the display unit for display.
Compared with the prior art, the beneficial effect of the utility model is that the system and method can obtain the position information of the first target object, the grating parameters and the display parameters in a timely manner and process the image accordingly, eliminating the pass through a central processor required in the prior art. The speed of image playback is therefore greatly improved, and the requirement of real-time stereoscopic display can be met.
Brief description of the drawings
Fig. 1 is a structural diagram of the stereo display system of embodiment one of the utility model;
Fig. 2 is a structural diagram of the image playback processing unit in Fig. 1;
Fig. 3 is a structural diagram of the lamination of the light-splitting unit and the display unit in the stereo display system of embodiment one;
Fig. 4 is a structural diagram of a preferred embodiment of the tracking device in the stereo display system of embodiment one;
Fig. 5 is a detailed structural diagram of the acquiring unit in Fig. 4;
Fig. 6 is a detailed structural diagram of the first variation of the reconstruction unit in Fig. 4;
Fig. 7 is a detailed structural diagram of the second variation of the reconstruction unit in Fig. 4;
Fig. 8 is a detailed structural diagram of the third variation of the reconstruction unit in Fig. 4;
Fig. 9 is a structural diagram of the locating support on which marker points are arranged for the first target object in the tracking device of Fig. 4;
Figure 10 is a flow diagram of the stereo display method of embodiment two of the utility model;
Figure 11 is a detailed flow diagram of S1 in Figure 10;
Figure 12 is a detailed flow diagram of S12 in Figure 11;
Figure 13 is a detailed flow diagram of the first variation of S13 in Figure 10;
Figure 14 is a detailed flow diagram of the second variation of S13 in Figure 10;
Figure 15 is a detailed flow diagram of the third variation of S13 in Figure 10;
Figure 16 is a detailed flow diagram of S3 in Figure 10;
Figure 17 is a structural diagram of one form of the image playback processing unit of the utility model;
Figure 18 is a structural diagram of another form of the image playback processing unit of the utility model;
Figure 19 is a structural diagram of one form of the image acquisition unit of the utility model;
Figure 20 is a structural diagram of another form of the image acquisition unit of the utility model;
Figure 21 is a structural diagram of one form of the tracking device of the utility model;
Figure 22 is a structural diagram of another form of the tracking device of the utility model.
Detailed description of the embodiments
The foregoing and other technical content, features and effects of the utility model will be clearly presented in the following detailed description of preferred embodiments with reference to the drawings. Through the description of specific embodiments, the technical means adopted by the utility model to achieve the intended purpose, and their effects, can be understood more deeply and concretely. The accompanying drawings, however, are provided for reference and illustration only and are not intended to limit the utility model.
Embodiment one
Referring to Fig. 1, Fig. 1 is a structural diagram of the stereo display system of the utility model. As shown in Fig. 1, the stereo display system of the utility model comprises a tracking device 30, a light-splitting unit 50 and a display unit 40. The tracking device 30 is used to obtain the position information of a first target object. The light-splitting unit 50 is located on the display side of the display unit 40 and spatially separates the image shown by the display unit 40 into a left view and a right view. The stereo display system further comprises an image playback processing unit 20 connected to the tracking device 30 and to the display unit 40; according to the position information of the first target object, the grating parameters of the light-splitting unit 50 and the display parameters of the display unit 40, the image playback processing unit 20 processes the image to be played in real time and sends it to the display unit 40 for display.
Because the tracking device 30 and the display unit 40 are directly connected to the image playback processing unit 20, the image playback processing unit 20 obtains the position information of the first target object, the grating parameters and the display parameters in a timely manner and processes the image accordingly, eliminating the pass through a central processor required in the prior art. The speed of image playback is thus greatly improved and the requirement of real-time stereoscopic display can be met. The grating parameters mainly include the pitch of the grating, the tilt angle of the grating relative to the display panel, and the placement distance of the grating relative to the display panel. These grating parameters may be stored directly in a memory inside the image playback processing unit, or another detection device may measure the grating parameters of the light-splitting unit in real time and send the values to the image playback processing unit 20. The display parameters include the size of the display unit, its screen resolution, and the ordering and arrangement structure of the sub-pixels of its pixel units. The sub-pixel ordering is whether the sub-pixels are arranged as RGB, RBG, BGR or in some other order; the sub-pixel arrangement structure is whether the sub-pixels are arranged vertically or horizontally, e.g. repeating RGB cyclically from top to bottom, or repeating RGB cyclically from left to right.
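As an illustration only, the two parameter groups described above can be collected as in the following Python sketch; the field names are hypothetical and not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class GratingParams:          # parameters of the light-splitting unit
    pitch_mm: float           # grating pitch (l-pitch)
    tilt_deg: float           # tilt angle relative to the display panel
    distance_mm: float        # placement distance from the display panel

@dataclass
class DisplayParams:          # parameters of the display unit
    width_px: int             # screen resolution, horizontal
    height_px: int            # screen resolution, vertical
    dot_pitch_mm: float       # size of one pixel unit (R, G, B sub-pixels)
    subpixel_order: str       # e.g. "RGB", "RBG" or "BGR"
    subpixel_layout: str      # "vertical" or "horizontal" arrangement
```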
The tracking device 30 may be a camera and/or an infrared sensor and is mainly used to track the position of the first target object, such as a person's eyes, head, face or upper body. The number of cameras or infrared sensors is not limited; there may be one or several. The camera or infrared sensor may be mounted on the frame of the display unit, or placed separately at a position from which the first target object is easy to track. In addition, if an infrared sensor is used as the tracking device, an infrared transmitter can also be arranged at the position corresponding to the first target object; by receiving the infrared positioning signal sent by the transmitter and using the relative position relationship between the transmitter and the first target object, the position information of the first target object is calculated.
The light-splitting unit 50 is arranged on the light-exit side of the display unit 40 and sends the parallax left view and right view shown by the display unit 40 separately to the viewer's left eye and right eye; the two eyes synthesize a stereoscopic image, giving the viewer the stereoscopic display effect. Preferably, the light-splitting unit may be a parallax barrier or a lenticular grating. The parallax barrier may be a liquid crystal slit, a solid slit grating sheet or an electrochromic slit grating sheet; the lenticular grating may be a liquid crystal lens, a resin lens grating or a solid liquid crystal lens grating. Resin and liquid crystal lens gratings are mainly formed by curing resin or liquid crystal onto a sheet with ultraviolet light, forming solid lenses that split the light towards the viewer's left and right eyes. Preferably, the display unit 40 and the light-splitting unit 50 are integrated as a display device 60, which is the display part of the whole stereo display system; it can be assembled together with the aforementioned image playback processing unit and tracking device, or exist as an independent part. For example, depending on viewing needs, the display device 60 alone may be placed at a convenient viewing position, while the image playback processing unit 20 and the tracking device 30 may be devices with independent functions, assembled when needed to realize the real-time stereoscopic display function of the utility model. For instance, the image playback processing unit 20 may be a VMR 3D playback device that itself has 3D playback processing capability and is connected to the other devices when assembled into the stereo display system of the utility model.
The image playback processing unit 20 processes the image to be played in real time according to the position information of the first target object traced by the tracking device 30, the grating parameters of the light-splitting unit 50 and the display parameters of the display unit. Referring to Fig. 2, the image playback processing unit 20 further comprises:
a stereo image acquisition module 204, which obtains the information of the stereo image shot by the image acquisition unit 10;
a layout parameter determination module 201, which calculates the layout parameters on the display unit according to the obtained position information of the first target object, the grating parameters of the light-splitting unit and the display parameters of the display unit;
a parallax image arrangement module 202, which arranges the parallax image on the display unit according to the layout parameters; the parallax image is generated by spatially dividing the left-eye image and the right-eye image;
a parallax image playback module 203, which plays the parallax image. After the arranged parallax image is received and played, the viewer sees the displayed stereo image on the display unit in real time.
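For illustration, the per-frame flow through these modules can be sketched as below. This is a hypothetical outline under the patent's module names; all helper functions and parameters are made up for this example, not the actual implementation.

```python
def play_frame(stereo_source, tracker, grating, display, panel):
    """One iteration of the real-time playback loop (illustrative only)."""
    left, right = stereo_source.next_frame()      # module 204: fetch stereo image
    eye_pos = tracker.first_target_position()     # tracking device 30: viewer position
    layout = compute_layout(eye_pos, grating, display)   # module 201: layout parameters
    interleaved = arrange_parallax(left, right, layout)  # module 202: interleave views
    panel.show(interleaved)                       # module 203: play on display unit 40
```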
Embodiment 1
The image playback processing unit 20 can process the image to be played in software; alternatively, it can process the image to be played in hardware.
Hardware processing means that the image playback processing unit 20 includes a hardware processing module rather than only software function modules. The hardware processing module may be, for example, an FPGA (Field-Programmable Gate Array) module or an ASIC (Application-Specific Integrated Circuit) module. Referring to Figure 17 and Figure 18, the image playback processing unit 20 in Figure 17 includes an FPGA module 205, and the image playback processing unit 20 in Figure 18 includes an ASIC module 206. A hardware processing module has much stronger parallel processing capability; compared with software processing, hardware processing speeds up computation and reduces signal delay.
It can be understood that all or part of the functions realized by the stereo image acquisition module 204, the layout parameter determination module 201, the parallax image arrangement module 202 and the parallax image playback module 203 in Fig. 2 can be completed by the hardware processing module; with the parallel processing capability of the hardware processing module, image information can be processed more quickly, which greatly improves the layout efficiency and the real-time performance of stereoscopic imaging.
Embodiment 2
In embodiment 2 of the utility model, to obtain a good real-time stereoscopic display effect, the light-splitting unit and the display unit must be optically designed according to the grating parameters of the light-splitting unit and the display parameters of the display unit. The optical design follows the formulas below:
(1) n * IPD / (m * t) = L / F
(2) l-pitch / p-pitch = L / (L + F)
(3) m * t = p-pitch
In the above formulas, F is the distance between the light-splitting unit and the display unit (the placement distance of the grating relative to the display panel among the grating parameters above); L is the distance between the viewer and the display unit; IPD is the matched interpupillary distance, i.e. the typical distance between a person's two eyes, commonly taken as 62.5 mm; l-pitch is the pitch of the light-splitting unit; p-pitch is the layout pitch of the pixels on the display unit; n is the number of stereoscopic views; m is the number of pixels covered by the light-splitting unit; and t is the dot pitch of the display unit, i.e. the size of one pixel unit (a display parameter), a pixel unit generally comprising R, G and B sub-pixels. To eliminate moiré fringes, the light-splitting unit is usually rotated by a certain angle when laminated (i.e. the light-splitting unit is tilted relative to the display unit); the actual pitch of the light-splitting unit is therefore given by:
(4) Wlens = l-pitch * sin θ
where Wlens is the actual pitch of the light-splitting unit and θ is the tilt angle of the light-splitting unit relative to the display panel (one of the grating parameters above).
As for the distance F between the light-splitting unit and the display unit: when the medium between them is air, F equals the actual distance between the light-splitting unit and the display unit; when the medium between them is a transparent medium of refractive index n (n greater than 1), F equals the actual distance divided by the refractive index n; when different media lie between the display unit and the light-splitting unit, with refractive indices n1, n2, n3 (each greater than or equal to 1), then F = s1/n1 + s2/n2 + s3/n3, where s1, s2, s3 are the thicknesses of the respective media.
Configuring the light-splitting unit and the display unit according to the above optical formulas reduces moiré fringes and improves the real-time stereoscopic viewing effect.
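A minimal numeric sketch of these design formulas follows, under illustrative input values chosen here for demonstration; the variable names mirror the patent's symbols.

```python
import math

def optical_design(n_views, ipd_mm, t_mm, m, L_mm, tilt_deg):
    """Evaluate design formulas (1)-(4) for a given viewing distance L."""
    p_pitch = m * t_mm                          # (3) layout pitch on the display
    F = L_mm * m * t_mm / (n_views * ipd_mm)    # (1) optical grating-panel distance
    l_pitch = p_pitch * L_mm / (L_mm + F)       # (2) pitch of the light-splitting unit
    w_lens = l_pitch * math.sin(math.radians(tilt_deg))  # (4) actual pitch when tilted
    return p_pitch, F, l_pitch, w_lens

def effective_distance(layers):
    """Optical distance F through stacked media: F = s1/n1 + s2/n2 + ..."""
    return sum(s_mm / n_ri for s_mm, n_ri in layers)

# Illustrative values only: 2 views, IPD 62.5 mm, dot pitch 0.1 mm,
# grating covering 2 pixels, viewing distance 600 mm, 10 degree tilt.
print(optical_design(2, 62.5, 0.1, 2, 600.0, 10.0))
print(effective_distance([(0.7, 1.5), (0.3, 1.0)]))  # glass + air stack
```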
In addition, in a variant embodiment, a lamination unit is arranged between the light-splitting unit and the display unit. Referring to Fig. 3, Fig. 3 is a structural diagram of the lamination of the light-splitting unit and the display unit in the stereo display system of embodiment one of the utility model. As shown in Fig. 3, a lamination unit is arranged between the light-splitting unit 50 and the display unit 40, the three forming a sandwich-like structure. The lamination unit comprises a first substrate 42, a second substrate 43, and an air layer 41 between the first substrate 42 and the second substrate 43. The air layer 41 is sealed between the first substrate 42 and the second substrate 43 to prevent the air from escaping. The first substrate 42 is laminated to the display panel and may be made of transparent glass or transparent resin. The second substrate 43 is arranged opposite the first substrate 42, and its side facing away from the first substrate 42 is used to laminate the light-splitting unit 50. Because a lamination unit of this structure is arranged between the light-splitting unit 50 and the display unit 40, for a large-screen stereoscopic display device the flatness of the grating lamination is guaranteed while the weight of the whole device is reduced, avoiding the risk of the screen cracking under its own weight if pure glass were used. It should be noted that, for different screen sizes and viewing distances and depending on the lamination process, the first substrate 42, the second substrate 43 and the air layer 41 may also be integrated into a single substrate whose side facing away from the display unit 40 is laminated to the light-splitting unit 50 and whose side near the display unit 40 is laminated to the display unit 40; in other words, the lamination unit may also be made of a single piece of transparent material such as glass or resin.
Embodiment 3
Continuing with Fig. 1, on the basis of the foregoing embodiments, the stereo display system further comprises an image acquisition unit 10, which shoots a second target object and sends the captured image of the second target object to the image playback processing unit 20 in real time. The second target object here mainly refers to the scenes recorded by the camera, such as the scene of a ball game, the scene of an operation, or internal images of a patient. The stereo image is shot in real time by the image acquisition unit 10 and displayed on the display unit in real time, so the captured scenes are shown promptly and faithfully, meeting the user's demand for real-time display and improving the user experience. The image acquisition unit 10 may include at least one of a monocular camera, a binocular camera or a multi-view camera.
When the image acquisition unit 10 includes a monocular camera, the stereo image of the second target object is obtained by shooting with the monocular camera. Preferably, the monocular camera may use a liquid crystal lens imaging device or a liquid crystal micro-lens array imaging device. In a specific embodiment, the monocular camera obtains two digital images of the measured object from different angles and, based on the parallax principle, recovers the three-dimensional geometric information of the object, reconstructing its three-dimensional contour and position.
When the image acquisition unit 10 includes a binocular camera, it comprises two cameras, or one camera with two lenses, and the stereo image is formed by binocular shooting of the second target object. Specifically, the binocular camera obtains the three-dimensional geometric information of the object from multiple images based mainly on the parallax principle. A binocular stereo vision system generally uses two cameras to obtain two digital images of the measured object (the second target object) simultaneously from different angles, recovers the three-dimensional geometric information of the object based on the parallax principle, and reconstructs its three-dimensional contour and position.
When the image acquisition unit 10 includes a multi-view camera, i.e. three or more cameras arranged in an array, these cameras are used to obtain the stereo image. The three or more cameras obtain several digital images of the second target object simultaneously from different angles, recover the three-dimensional geometric information of the object based on the parallax principle, and reconstruct its three-dimensional contour and position.
The image acquisition unit 10 also includes a collecting unit, which collects the stereo image of the second target object and extracts left view information and right view information from it. One end of the collecting unit is connected to the above monocular, binocular or multi-view camera, and the other end is connected to the image playback processing unit 20. Extracting the left view and right view information while the stereo image is being shot improves the speed of image processing and guarantees the real-time stereoscopic display effect.
Accordingly, the stereo image acquisition module 204 included in the image playback processing unit 20 obtains the stereo image information shot by the image acquisition unit 10, i.e. the left view and right view information of the stereo image. A stereo image comprises a left view and a right view; therefore, for a stereo image to be played, the image information of the left view and right view must be obtained first before layout processing can proceed.
Embodiment 4
The image acquisition unit 10 can process the captured left and right view images in software. For example, the two captured views can be imported into the system as separate stereo images through an image capture card, combined by software into a single stereo image containing both the left and right views, and the combined stereo image is then output through the graphics card. In the combined stereo image the left and right view contents may be arranged side by side, interlaced, top-and-bottom, etc.
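As a toy illustration of this software path, the sketch below packs a left and a right view into one side-by-side frame with NumPy; the function name is made up for this example.

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two H x W x 3 views into one H x 2W x 3 left-right frame."""
    assert left.shape == right.shape, "views must have identical dimensions"
    return np.concatenate([left, right], axis=1)  # axis=0 would give top-bottom

# An interlaced layout is analogous: e.g. take even rows from the left view
# and odd rows from the right view.
```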
Alternatively, the image acquisition unit 10 can combine the stereo image of the second target object in hardware. Specifically, the two independent left-view and right-view video signals collected by the image acquisition unit 10 can be combined by a hardware module, such as an FPGA module or an ASIC module, into one video signal containing the left and right view information. Hardware processing has much stronger parallel processing capability; compared with software processing, it speeds up computation, improves conversion speed and reduces signal delay.
Referring to Figure 19 and Figure 20, the image acquisition unit 10 depicted in Figure 19 includes an FPGA module 103, and the image acquisition unit 10 depicted in Figure 20 includes an ASIC module 104. The FPGA module 103 in Figure 19 and the ASIC module 104 in Figure 20 both use their own hardware processing capability to combine the stereo image.
Embodiment 5
In this embodiment 5, the tracking device 30 includes a camera, which shoots the first target object. There may be one or more cameras; they may be arranged on the display unit or placed separately. Furthermore, the camera may be a monocular, binocular or multi-view camera.
Alternatively, the tracking device 30 may include an infrared receiver; correspondingly, an infrared transmitter is arranged for the first target object, either at a corresponding position on the first target object itself or on another object whose position is fixed relative to the first target object. The infrared receiver receives the infrared signal sent by the transmitter arranged for the first target object, and the first target object is located by common infrared positioning methods.
In addition, the tracking device 30 may also use a GPS positioning module, which sends the position information to the image playback processing unit 20.
Embodiment 6
Referring to Fig. 4, Fig. 4 shows a preferred structure of the tracking device in the stereo display system of embodiment one of the utility model. As shown in Fig. 4, the embodiment of the utility model also proposes another tracking device 30, which includes:
a marker point setting unit 1, which arranges marker points corresponding to the spatial position of the first target object. The marker points may be arranged on the first target object itself, or not on the first target object but on another object that has a fixed relative position to it and moves synchronously with it. For example, if the first target object is the human eyes, marker points can be arranged around the eye sockets; or glasses can be arranged around the eyes with the marker points on the frame; or a marker point can be placed on the ear, whose position relative to the eyes is fixed. The marker point may be any of various components that emit a signal, such as an infrared transmitting sensor, an LED lamp, a GPS sensor or a laser positioning sensor; it may also be a physical mark that can be captured by a camera, e.g. an object with shape and/or color features. Preferably, to avoid interference from ambient stray light and improve the robustness of marker tracking, infrared LED lamps with a very narrow spectrum are used as marker points, together with an infrared camera that only responds to the spectrum of the infrared LEDs used. Considering that ambient stray light is mostly irregular in shape and uneven in brightness, the marker points can be arranged to produce light spots of regular shape, high intensity and uniform brightness. Several marker points can also be arranged, each corresponding to one light spot, together forming a regular geometric figure such as a triangle or a quadrangle; this makes the marker points easy to track, yields their spatial position information and improves the accuracy of spot extraction;
an acquiring unit 2, which obtains the position information of the marker points. The position of a marker point can be determined by receiving the signal it sends, or a camera can shoot an image containing the marker points and the marker points are extracted from the image, their position information being obtained by image processing algorithms;
a reconstruction unit 3, which reconstructs the spatial position of the first target object according to the position information of the marker points. After the position information of the marker points is acquired, the spatial positions of the marker points are reconstructed; then, according to the relative position relationship between the marker points and the first target object, the spatial positions of the marker points are transformed into the spatial position of the first target object (e.g. the spatial positions of a person's left and right eyes).
The tracking device 30 of the embodiment of the utility model obtains the position information of the marker points corresponding to the first target object and, from that information, reconstructs the spatial position of the first target object. Compared with the prior art, in which a camera used as an eye-tracking device must perform feature analysis on a two-dimensional image to obtain the eye position, or other devices exploit the reflection effect of the iris to capture the eye position, it has the advantages of good stability, high accuracy and low cost, and places no requirement on the distance between the tracking device and the first target object.
Referring to Fig. 5, Fig. 5 shows the detailed structure of the acquiring unit in Fig. 4. The aforementioned acquiring unit further includes:
a presetting module 21, which presets a standard image in which reference marker points are set, and obtains the spatial coordinates and plane coordinates of the reference marker points. The standard image may be, for example, an image gathered by the image capture device; the image coordinates of the reference marker points are obtained, and accurate spatial measurement equipment such as a laser scanner or a structured-light scanner (e.g. Kinect) is used to obtain the spatial coordinates and plane coordinates of the reference marker points in the standard image;
an acquisition module 22, which obtains a current image containing the first target object and the marker points, together with the plane coordinates of the marker points in the current image;
a matching module 23, which matches the marker points in the current image with the reference marker points in the standard image. A correspondence is first established between the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image, and the marker points are then matched with the reference marker points.
Setting a standard image with reference marker points provides a reference object when the spatial positions in the current image are obtained, which further guarantees the stability and accuracy of the target tracking device of the embodiment of the utility model.
Further, the tracking device 30 also includes:
a collecting unit, which gathers the marker points;
a screening unit, which screens target marker points out of the gathered marker points.
Specifically, when there are several marker points, the camera gathers all marker points corresponding to the first target object, and the marker points most relevant to the first target object are chosen from them; a corresponding image processing algorithm then extracts the marker points from the image, the extraction being based on the features of the marker points. In general, the extraction method applies a feature extraction function H to the image I, obtains a feature score for every point of the image, and filters out the marker points whose feature value is sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))
F = { (x, y) : S(x, y) > s0 }
In the above formulas, H is the feature extraction function; I(x, y) is the value of each pixel (x, y) of the image, which may be a gray value or a three-channel color energy value; S(x, y) is the feature score of each pixel (x, y) after feature extraction; s0 is a feature score threshold, and points with S(x, y) greater than s0 are regarded as marker points; F is the set of marker points. Preferably, the embodiment of the utility model uses infrared marker points, whose energy feature in the image formed by an infrared camera is very distinct. Because narrow-band LED infrared lamps and a matching infrared camera are used, most pixels of the captured image have very low energy and only the pixels corresponding to marker points have high energy. The corresponding function H can therefore apply a threshold segmentation operator; region growing is then performed on the segmented image B(x, y) to obtain several sub-images, and the centroid of each acquired sub-image is extracted. Meanwhile, considering the ambient stray light that can form images in the infrared camera, constraints such as the spot area formed by a marker point and the position relationship of the marker points in the two-dimensional image can be added during infrared marker extraction to filter the extracted marker points.
When there is more than one camera, the images obtained by different cameras at the same moment, or at moments close to each other, must be matched marker point to marker point, which provides the conditions for the subsequent three-dimensional reconstruction of the marker points. The matching method depends on the feature extraction function H. Classical gradient- and grayscale-based feature point extraction operators and their matching methods, such as Harris, SIFT or FAST, can be used to obtain and match marker points. Matching can also be performed using the epipolar constraint, prior conditions on the marker points, and similar means. The method of screening matches with the epipolar constraint is as follows: because the projections of the same spatial point on the images of two different cameras lie in the same plane, for a marker point p0 in one camera c0 an epipolar line equation can be computed in another camera c1, and the marker point p1 on that camera c1 corresponding to p0 satisfies the relation:
[p1; 1]^T F [p0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. By using this relation, the number of candidates for the marker point p1 is greatly reduced and the matching accuracy is improved.
In addition, prior conditions on the marker points, such as their spatial order and size, can be used. For example, according to the mutual position relationship of the two cameras, the captured images can be transformed so that every pair of pixels corresponding to the same spatial point is equal in some dimension, e.g. on the y-axis; this process is also called image rectification. The matching of the marker points can then simply follow their x-axis order: the smallest x corresponds to the smallest x, and so on, up to the largest x corresponding to the largest x. Both screening rules are sketched in code after this paragraph.
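The sketch below illustrates the two rules just described; F is the fundamental matrix and the point lists are pixel coordinates (illustrative code, not from the patent).

```python
import numpy as np

def epipolar_distance(F: np.ndarray, p0, p1) -> float:
    """|[p1;1]^T F [p0;1]| normalized by the epipolar line; near 0 for a true match."""
    x0 = np.array([p0[0], p0[1], 1.0])
    x1 = np.array([p1[0], p1[1], 1.0])
    line = F @ x0                               # epipolar line of p0 in camera c1
    return abs(x1 @ line) / np.hypot(line[0], line[1])

def match_rectified(points_c0, points_c1):
    """After rectification, matched points share y; pair them in x order."""
    left = sorted(points_c0)                    # tuples sort by x first
    right = sorted(points_c1)
    return list(zip(left, right))               # min x with min x, and so on
```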
The target tracking device of the utility model is described in detail below according to the number of tracking cameras.
Referring to Fig. 6, Fig. 6 shows the detailed structure of the reconstruction unit in Fig. 4. As shown in Fig. 6, in this embodiment, when the first target object tracked by the tracking device 30 has fewer than four corresponding marker points, and a monocular camera is used to obtain the position information of the marker points, the reconstruction unit further includes:
a first computing module 31, which calculates the homographic transformation between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points in the standard image, and assumed conditions about the scene containing the first target object. The marker points of the current image are matched with the reference marker points of the standard image, and the homographic transformation between the current image and the standard image is calculated from their respective plane coordinates. The homographic transformation is the homography of projective geometry, a transformation frequently applied in the field of computer vision;
a first reconstruction module 32, which calculates from the homographic transformation the rigid transformation from the spatial positions of the marker points at the moment the standard image was shot to their spatial positions at the current moment, then computes the spatial positions of the marker points at the current moment, and from those positions calculates the current spatial position of the first target object.
Specifically, as the assumed scene condition we may assume that some dimension of the marker points remains constant under the rigid transformation. For example, in a three-dimensional scene with spatial coordinates x, y, z, where x and y are parallel to the x-axis and y-axis of the camera image coordinates (plane coordinates) and the z-axis is perpendicular to the camera image, the assumed condition may be that the z coordinates of the marker points are constant, or that their x and/or y coordinates are constant. Different scene assumptions lead to somewhat different estimation methods. As another example, under a different assumption the rotation angle between the orientation of the first target object and the orientation of the camera is taken to remain constant during use; the current spatial position of the first target object can then be inferred from the ratio between the mutual distances of the marker points in the current image and the mutual distances of the marker points in the standard image.
With the above calculation method, a monocular camera can reconstruct the spatial position of the first target object with fewer than four marker points. It is simple to operate, the tracking result is fairly accurate, and using a single camera reduces the cost of tracking the first target object.
In the above method of recovering three-dimensional coordinates from images gathered by a single camera, less image information is available, so the number of marker points must be increased to provide more image information for computing the three-dimensional coordinates of the object. According to machine vision theory, to infer stereoscopic information of a scene from a single image, at least five fixed points in the image must be determined. The monocular scheme therefore increases the number of marker points and the design complexity, but needs only one camera, which reduces the complexity of image acquisition and the cost.
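As a toy sketch of the second assumed condition described above (fixed orientation relative to the camera), the depth can be scaled by the spread of the marker points in the image; all names and the exact scaling rule are illustrative, not the patent's algorithm.

```python
import numpy as np

def estimate_position_fixed_orientation(std_pts, cur_pts, z_std):
    """Infer target depth from the marker spread, assuming fixed orientation.

    std_pts / cur_pts: marker plane coordinates in the standard / current image;
    z_std: calibrated target depth at the moment the standard image was shot.
    """
    std = np.asarray(std_pts, float)
    cur = np.asarray(cur_pts, float)
    spread_std = np.linalg.norm(std - std.mean(axis=0), axis=1).mean()
    spread_cur = np.linalg.norm(cur - cur.mean(axis=0), axis=1).mean()
    z_cur = z_std * spread_std / spread_cur   # farther target -> smaller spread
    center = cur.mean(axis=0)                 # image position of the marker cluster
    return center, z_cur
```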
Referring to Fig. 7, Fig. 7 shows the detailed structure of the second variant embodiment of the reconstruction unit in Fig. 4. As shown in Fig. 7, in this embodiment, when there are five or more marker points and a monocular camera is used to obtain their position information, the reconstruction unit further includes:
a second computing module 33, which calculates the homographic transformation between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image;
a second reconstruction module 34, which calculates from the homographic transformation the rigid transformation from the spatial positions of the marker points at the moment the standard image was shot to their spatial positions at the current moment, then computes the spatial positions of the marker points at the current moment, and from the spatial positions of the marker points at the current moment calculates the current spatial position of the first target object.
First, a standard image is gathered, the spatial positions of the reference marker points are measured with a device such as an accurate depth camera or a laser scanner, and the two-dimensional image coordinates (i.e. plane coordinates) of the reference marker points are obtained at the same time.
In use, the camera continuously captures the two-dimensional image coordinates of all marker points in the current image containing the first target object. From these coordinates and the two-dimensional coordinates of the reference marker points of the standard image, the rigid transformation between the marker points in the current state and those at the moment the standard image was shot is calculated, under the assumption that the relative positions of the marker points are invariant; the spatial transformation of the current marker points relative to the standard image is then computed, yielding the current spatial positions of the marker points.
Here, with five or more points, the rigid transformation [R|T] between the current marker points and the marker points at the moment the standard image was shot can be calculated; preferably these five or more points are not coplanar, and the projection matrix P of the camera is calibrated in advance. [R|T] is calculated as follows:
the homogeneous coordinates of each marker point in the standard image and the current image are X0 and Xi respectively, and the two satisfy the epipolar constraint X0 P^(-1) [R|T] P = Xi. All marker points together form a system of equations in the unknown [R|T]. When the number of marker points is 5 or more, [R|T] can be solved; when it is 6 or more, an optimal solution of [R|T] can be sought, e.g. by singular value decomposition (SVD) and/or by iterative methods computing a nonlinear optimal solution. After the spatial positions of the marker points are calculated, the spatial position of the first target object (e.g. the human eyes) can be deduced from the pre-calibrated mutual position relationship between the marker points and the first target object.
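In modern tooling this pose-recovery step is commonly done with a PnP solver; the sketch below uses OpenCV's solvePnP as a stand-in for the [R|T] solution described above (an illustrative substitute, not the patent's own solver).

```python
import numpy as np
import cv2

def recover_rigid_transform(marker_pts_3d, image_pts_2d, camera_matrix):
    """Solve [R|T] from known 3D marker positions and their current 2D images."""
    obj = np.asarray(marker_pts_3d, dtype=np.float64)   # 3D points from the standard setup
    img = np.asarray(image_pts_2d, dtype=np.float64)    # current-image pixel coordinates
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                           # rotation vector -> 3x3 matrix
    return R, tvec

# Eye positions then follow from the pre-calibrated marker-to-eye offsets:
# eye_world = R @ eye_offset_in_marker_frame + tvec
```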
This embodiment uses only one camera and five or more marker points, yet can accurately reconstruct the spatial position of the first target object; it is simple to operate and low in cost.
Referring to Fig. 8, Fig. 8 shows the detailed structure of the third variant embodiment of the reconstruction unit in Fig. 4. As shown in Fig. 8, this embodiment uses two or more cameras and one or more marker points. When a binocular or multi-view camera is used to obtain the position information of the marker points, the reconstruction unit further includes:
a third computing module 35, which uses the binocular or multi-view stereo reconstruction principle to calculate the spatial position of each marker point at the current moment. The binocular or trinocular reconstruction principle may, for example, use the parallax between the marker points matched in the left and right cameras to calculate the spatial position of each marker point at the current moment, or use other existing common methods;
a third reconstruction module 36, which calculates the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
Specifically, the mutual position relationship between the cameras is first calibrated by a multi-camera calibration method. Then, in use, the marker point coordinates are extracted from the image each camera captures, the marker points are matched so that the marker points corresponding to each camera are obtained, and the spatial positions of the marker points are calculated from the matched marker points and the mutual position relationship of the cameras.
In a specific example, a multi-view camera (i.e. the number of cameras is greater than or equal to 2) shoots the marker points to achieve stereo reconstruction. Knowing the coordinate u of a marker point on the image shot by one camera and that camera's parameter matrix M, a ray can be calculated on which the marker point lies in space:
α_j u_j = M_j X, j = 1, ..., n (where n is a natural number greater than or equal to 2)
Similarly, according to the above formula, the marker point yields a corresponding ray for every other camera. In theory these rays converge at one point, namely the spatial position of the marker point. In practice, because of the digitization error of the cameras and the calibration errors of the intrinsic and extrinsic camera parameters, the rays do not converge at a single point, so a triangulation method is used to approximately calculate the spatial position of the marker point; for example, a least squares criterion can be used to take the point closest to all the rays as the object point.
After the spatial positions of the marker points are calculated, the spatial position of the first target object (e.g. the human eyes) can be deduced from the pre-calibrated mutual position relationship between the marker points and the first target object.
Among the above methods of stereo reconstruction with a multi-view camera, the preferred method is calculation with a binocular camera. Its principle is the same as the multi-view reconstruction above: the spatial position of a marker point is calculated from the mutual position relationship of the two cameras and the two-dimensional coordinates of the marker point in the two camera images. The small difference is that the binocular cameras are placed in parallel and, after a simple calibration, the images of the two cameras are rectified as described earlier so that two matching two-dimensional marker points are equal on the y (or x) axis; the depth of a marker point from the cameras can then be calculated from the gap of the rectified two-dimensional marker points on the x (or y) axis. The method can be regarded as a special case of multi-view stereo reconstruction in the binocular situation, which simplifies the stereo reconstruction steps and is easier to realize in device hardware.
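For the rectified binocular case, the depth computation reduces to the usual disparity relation; the sketch below assumes a focal length f (in pixels) and a baseline b (in mm), both obtained from calibration.

```python
def depth_from_disparity(x_left: float, x_right: float,
                         f_px: float, baseline_mm: float) -> float:
    """Rectified pair: matching points share y, so depth = f * b / disparity."""
    disparity = x_left - x_right          # gap on the x-axis after rectification
    if disparity <= 0:
        raise ValueError("matched points must have positive disparity")
    return f_px * baseline_mm / disparity # distance of the marker from the cameras
```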
Embodiment 7
The tracking device 30 can obtain the position information of the first target object in software, or in hardware.
Hardware processing means that the tracking device 30 can include a hardware processing module, e.g. an FPGA module or an ASIC module. Referring to Figure 21 and Figure 22, the tracking device 30 in Figure 21 includes an FPGA module 301, and the tracking device 30 in Figure 22 includes an ASIC module 302. A hardware processing module has much stronger parallel processing capability; compared with software processing, hardware processing speeds up computation and reduces signal delay.
It can be understood that the functions realized by the functional modules or functional units of the tracking device 30 in the previous embodiments can all be completed by the hardware processing module; with the parallel processing capability of the hardware processing module, information can be processed more quickly, which greatly improves the efficiency of obtaining the position information of the first target object and the real-time performance of stereoscopic imaging.
Embodiment 8
Referring to Fig. 9, Fig. 9 shows the structure of the locating support on which marker points are arranged for the first target object in the tracking device of Fig. 4. As shown in Fig. 9, the utility model provides a locating support which is positioned in front of the human eyes (the first target object) and whose structure and manner of wearing are similar to glasses. It includes a crossbeam 11, fixing parts 12, a supporting part 13 and a control part 14. The crossbeam 11 is provided with marker points 111; the supporting part 13 is arranged on the crossbeam 11; the fixing parts 12 are pivotally connected to the ends of the crossbeam 11. The positions of the marker points 111 correspond to the position of the human eyes (the first target object), so by obtaining the spatial position information of the marker points 111, the spatial position information of the eyes is calculated accordingly. When the head moves, the marker points 111 corresponding to the eyes move with it; the camera tracks the movement of the marker points 111, the spatial position information of the marker points 111 is obtained with the target tracking scheme of the foregoing embodiment one, and the relative spatial position relationship between the marker points 111 and the eyes is used to reconstruct the spatial position of the eyes (the first target object), i.e. their three-dimensional coordinates in space.
In the present embodiment, crossbeam 11 is a strip, and has certain radian, its radian and people Forehead radian approximation, to facilitate use.Crossbeam 11 includes upper surface 112, following table corresponding thereto Face, the first surface 114 being arranged between upper surface 112 and lower surface and second surface.
In the present embodiment, the marker points 111 are three LED lamps, evenly spaced on the first surface 114 of the crossbeam 11. It can be understood that there can also be one, two, or more marker points 111, and that they can be any light source, including LED lamps, infrared lamps, ultraviolet lamps, and the like. Furthermore, the arrangement and positions of the marker points 111 can be adjusted as required.
It can be understood that the crossbeam 11 can also be designed to be straight or to take other shapes as required.
In the present embodiment, there are two fixing portions 12, pivotally connected to the two ends of the crossbeam 11 respectively. The two fixing portions 12 can be folded inward toward each other, and they can also be unfolded outward until they form an interior angle of about 100° with the crossbeam 11; specifically, the size of the interior angle can be adjusted according to practical operating requirements. It should be understood that there can also be a single fixing portion 12.
The end of the fixing portion 12 away from the crossbeam 11 is bent along the extending direction of the supporting portion 13, so that the end of the fixing portion 12 can be fixed on a person's ear.
In the present embodiment, the supporting portion 13 is strip-shaped, arranged at the middle of the lower surface 113 of the crossbeam 11 and extending downward. Furthermore, the end of the supporting portion 13 away from the crossbeam 11 is provided with a nose pad 131, so that the locating support can be fitted on the bridge of the nose and arranged above the eyes. It should be understood that, in other embodiments, if the nose pad 131 is not provided, the supporting portion 13 can be made in an inverted-Y shape extending downward from the middle of the crossbeam 11, likewise so that the locating support can be fitted on the bridge of the nose and arranged above the eyes.
The control portion 14 is a rounded cuboid arranged on the fixing portion 12. The control portion 14 supplies power to the LED lamps, infrared lamps, or ultraviolet lamps and/or controls the operating state of the LED lamps, infrared lamps, or ultraviolet lamps; it includes a power switch 141, a power indicator lamp, and a charging indicator lamp. It can be understood that the control portion 14 is not limited in shape and can take any shape; it can also be an integrated chip. Furthermore, the control portion 14 can also be arranged at other positions, for example on the crossbeam 11.
In use, when the power switch 141 is turned on, the power indicator lamp shows that the LED lamps are powered, and the LED lamps are lit; when the battery runs low, the charging indicator lamp signals that the power is insufficient; when the power switch is turned off, the power indicator lamp goes out, indicating that the LED lamps are switched off, and the LED lamps are extinguished.
Since the interpupillary distance of a person ranges from 58 mm to 64 mm, it can be regarded approximately as a constant. The locating support provided by the utility model is similar to a spectacle frame and is fixed above the eyes; the marker points are arranged at predetermined positions on the locating support as required, so that the position of the eyes can be determined simply and conveniently from the positions of the marker points. The locating support is simple in structure and convenient to design and use.
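By way of illustration only, the conversion from reconstructed marker positions to eye positions described above can be sketched as follows (Python/NumPy). The offset vector and the 62 mm interpupillary distance are illustrative assumptions of the sketch, not values fixed by the utility model:

```python
import numpy as np

# Assumed geometry of the locating support (illustrative values):
# the markers sit on the crossbeam, a fixed offset above the midpoint
# between the two eyes, and the eyes are ~62 mm apart.
MARKER_TO_EYE_MIDPOINT = np.array([0.0, -35.0, 10.0])  # mm, assumed rigid offset
IPD_MM = 62.0

def eyes_from_markers(marker_positions):
    """Estimate left/right eye positions from reconstructed marker points.

    marker_positions: (N, 3) array of marker coordinates in camera space (mm).
    Returns (left_eye, right_eye) as 3-vectors.
    """
    markers = np.asarray(marker_positions, dtype=float)
    center = markers.mean(axis=0)          # centroid of the marker points
    # Direction along the crossbeam, from the two outermost markers;
    # this approximates the left-right axis of the head.
    axis = markers[-1] - markers[0]
    axis /= np.linalg.norm(axis)
    midpoint = center + MARKER_TO_EYE_MIDPOINT
    half = 0.5 * IPD_MM * axis
    return midpoint - half, midpoint + half
```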
Embodiment Two
Referring to Figures 10 to 13: Figure 10 is a schematic flowchart of the stereoscopic display method of Embodiment Two of the utility model, Figure 11 is a detailed flowchart of S1 in Figure 10, Figure 12 is a detailed flowchart of S12 in Figure 11, and Figure 13 is a detailed flowchart of S3 in Figure 10. As shown in Figures 10 to 13, the stereoscopic display method of Embodiment Two of the utility model mainly includes the following steps:
S1: obtain the positional information of the first target object. A tracking device is used to track the position of the first target object, for example the position of the viewer. In particular, the positional information of the first target object can be obtained by hardware processing, for example by an FPGA module or an ASIC module. Specifically, the tracking device can shoot an image of the first target object, or receive a signal sent from the first target object, and the FPGA module or ASIC module processes these data or signals to calculate the positional information of the first target object. Since a hardware processing module has stronger parallel processing capability than software processing, hardware processing reduces signal delay and speeds up processing; the information can be processed more quickly, which greatly improves the efficiency of obtaining the positional information of the first target object and improves the real-time performance of stereoscopic imaging.
S2: obtain the grating parameters of the light-splitting unit of the stereoscopic display device and the display parameters of the display unit. The grating parameters of the light-splitting unit mainly include the pitch of the grating, the inclination angle of the grating relative to the display panel, and the placement distance of the grating relative to the display panel.
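For illustration, the parameters named in S2 can be grouped into a small configuration record. A minimal sketch; the field names and sample values are assumptions of the example, and real values come from the device itself:

```python
from dataclasses import dataclass

@dataclass
class GratingParams:
    pitch_mm: float   # pitch of the parallax barrier / lenticular grating
    tilt_deg: float   # inclination of the grating relative to the panel
    gap_mm: float     # placement distance between grating and panel

@dataclass
class DisplayParams:
    width_px: int
    height_px: int
    subpixel_pitch_mm: float

# Illustrative values only; real values come from the device data sheet.
grating = GratingParams(pitch_mm=0.18, tilt_deg=9.5, gap_mm=0.6)
display = DisplayParams(width_px=1920, height_px=1080, subpixel_pitch_mm=0.0598)
```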
S3: process the image to be played in real time according to the positional information, the grating parameters, and the display parameters of the display unit. Before a stereoscopic image is played, the image needs to be processed in combination with the positional information of the eyes, the grating parameters, and the display parameters of the display unit, so as to give the viewer the best stereoscopic display effect. In particular, the image to be played can be processed in real time by hardware, for example by an FPGA module or an ASIC module. Since a hardware processing module has stronger parallel processing capability than software processing, hardware processing reduces signal delay and speeds up processing; the image information can be processed more quickly, which greatly improves the efficiency of pixel arrangement and improves the real-time performance of stereoscopic imaging.
S4: display the image to be played.
In the stereoscopic display method of the utility model, the positional information of the first target object and the grating parameters are obtained in time and image processing is carried out directly, which increases the speed of image playback and can fully satisfy the real-time requirements of stereoscopic display.
Furthermore, before S1 the method also includes S0, an image shooting step: shoot a stereoscopic image of a second target object, and send the information of the captured stereoscopic image of the second target object in real time, including left-view information and right-view information. Here the second target object mainly refers to the various scenes captured by the camera: actual people, a ball game being broadcast live, images inside a patient's body captured by some device, and so on. By shooting the stereoscopic image in real time and displaying the captured stereoscopic image on the display unit in real time, without extra image processing, the various captured scenes are displayed promptly and faithfully, meeting the user's demand for real-time display and improving user experience.
In a specific variant embodiment, the above step S0 also includes an image acquisition step: collect the stereoscopic image of the second target object, and extract left-view information and right-view information from the stereoscopic image. By extracting the left-view and right-view information of the stereoscopic image while it is being shot, the speed of image processing is increased, ensuring the display effect of real-time stereoscopic display.
Furthermore, in the image acquisition step, the stereoscopic image of the second target object can be synthesized by hardware processing. For example, the collecting unit can use an FPGA module or an ASIC module to synthesize the stereoscopic image. Hardware processing has stronger parallel processing capability; compared with software processing, it speeds up processing, increases conversion speed, and reduces signal delay.
Embodiment 9
Referring to Figure 11, how this utility model embodiment mainly obtains first object object to S1 Positional information is described in detail.These first object object for example, human eye, the head of people, faces of people The upper part of the body of portion or human body etc. watch relevant position to people.It is above-mentioned that " S1 obtains first object object Positional information " mainly comprise the steps that
S11: arrange marker points corresponding to the spatial position of the first target object. The marker points here can be arranged on the first target object, or not on the first target object but on an object that has a fixed relative position to the first target object and moves synchronously with it. For example, if the target object is the eyes, marker points can be arranged around the eye sockets; or a locating support is arranged around the eyes and the marker points are provided on the frame of the locating support; or the marker points are provided on the ears, whose position relative to the eyes is fixed. The marker points can be signal-emitting components such as infrared emission sensors, LED lamps, GPS sensors, or laser positioning sensors, or they can be physical markers capturable by a camera, e.g., objects with shape features and/or color features. Preferably, to avoid interference from ambient stray light and to improve the robustness of marker tracking, narrow-spectrum infrared LED lamps are used as the marker points, together with an infrared camera that only passes the spectrum used by the infrared LEDs to capture the marker points. Considering that ambient stray light mostly has irregular shapes and uneven luminance distribution, the marker points can be arranged to emit light spots of regular shape, with higher luminous intensity and uniform brightness. Moreover, multiple marker points can be arranged, each corresponding to one light spot, with the marker points forming a regular geometric shape such as a triangle or a quadrilateral; this makes the marker points easy to track, allows the spatial position information of the marker points to be obtained, and improves the accuracy of light-spot extraction.
S12: obtain the positional information of the marker points. This can be done by receiving the signals sent by the marker points to determine their positional information, or by using a camera to shoot an image containing the marker points, extracting the marker points from the image, and obtaining the positional information of the marker points through image processing algorithms.
S13: reconstruct the spatial position of the first target object according to the positional information of the marker points. After the positional information of the marker points is acquired and the spatial positions of the marker points are reconstructed, the spatial positions of the marker points are converted into the spatial position of the first target object (for example, the spatial positions of a person's left and right eyes) according to the relative positional relation between the marker points and the first target object.
In Embodiment Two of the utility model, the positional information of the marker points corresponding to the first target object is obtained, and the spatial position of the first target object is reconstructed from this positional information. Compared with the prior art, which uses a camera as an eye-capturing device and performs feature analysis on two-dimensional images to obtain the eye position, or uses other eye-capturing devices that rely on the iris reflection effect of the human eye, this approach has the advantages of good stability, high accuracy of the captured eye positional information, low cost, and no requirement on the distance between the tracking device and the first target object.
Referring to Figure 12, the above step S12 further includes:
S121: preset a standard image in which reference marker points are arranged, and obtain the spatial coordinates and plane coordinates of the reference marker points in the standard image. The standard image can be, for example, an image acquired by the image acquisition device; the image coordinates of the reference marker points are obtained, and other accurate spatial measurement equipment, such as a laser scanner or a structured-light scanner (such as Kinect), is used to obtain the spatial coordinates and plane coordinates of the reference marker points in the standard image.
S122: obtain a current image containing the target object and the marker points, together with the plane coordinates of the marker points in the current image;
S123: match the marker points in the current image with the reference marker points of the standard image. Here, a correspondence is first established between the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points in the standard image, and then the marker points are matched with the reference marker points.
By arranging the standard image and the reference marker points, a reference is available when the spatial position in the current image is obtained, which further ensures the stability and accuracy of the target tracking method of this embodiment of the utility model.
Furthermore, before the above step S11, the method also includes S10: calibrate the camera used for obtaining the positional information of the marker points.
The above calibration can be carried out in the following ways:
(1) When the camera of S10 is a monocular camera, the common Zhang's checkerboard calibration algorithm can be used, for example with the following equation:
s·m′ = A[R | t]·M′   (1)
In equation (1), A is the intrinsic parameter matrix, R is the extrinsic rotation matrix, t is the translation vector, m′ is the coordinate of an image point in the image, and M′ is the spatial coordinate of the object point (i.e., its three-dimensional coordinate in space); A, R, and t are determined respectively by:
A = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}, R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}, and the translation vector t = \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix}.
Of course, there are many calibration algorithms for cameras; other calibration algorithms commonly used in the industry can also be adopted, and the utility model is not limited in this respect. The point is that a calibration algorithm is used, so as to improve the accuracy of the first-target-object tracking method of the utility model.
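As a concrete illustration of such a calibration, the intrinsic matrix A and the per-view [R|t] can be estimated with a standard routine such as OpenCV's implementation of Zhang's method. A minimal sketch, assuming a 9×6 inner-corner checkerboard and a directory of calibration images (both assumptions of the example):

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per row/column (assumed board geometry)
# 3D checkerboard corners in the board's own plane (z = 0), unit square size.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # hypothetical image directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# A is returned as the camera matrix; rvecs/tvecs give [R|t] per view.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nintrinsics A:\n", A)
```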
(2) When the camera of S10 is a binocular camera or a multi-lens camera, the following steps are used for calibration:
S101: first calibrate any one lens camera of the binocular camera or multi-lens camera, likewise using the common Zhang's checkerboard calibration algorithm, for example with the following equation:
s·m′ = A[R | t]·M′   (1)
In equation (1), A is the intrinsic parameter matrix, R is the extrinsic rotation matrix, t is the translation vector, m′ is the coordinate of an image point in the image, and M′ is the spatial coordinate of the object point; A, R, and t are determined respectively by:
A = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}, R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}, and the translation vector t = \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix};
S102: calculate the relative rotation matrix and the relative translation between the cameras of the binocular camera or multi-lens camera, namely:
the relative rotation matrix R′ = \begin{pmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{pmatrix} and the relative translation T = \begin{pmatrix} T_1 \\ T_2 \\ T_3 \end{pmatrix}.
The above calibration algorithm for binocular or multi-lens cameras is likewise only a typical one; other calibration algorithms commonly used in the industry can also be adopted, and the utility model is not limited in this respect. The point is that a calibration algorithm is used, so as to improve the accuracy of the first-target-object tracking method of the utility model.
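For illustration, the relative rotation and translation of step S102 can be computed with a standard stereo-calibration routine once each camera has been calibrated as in S101. A minimal sketch using OpenCV; the argument names are assumptions of the example:

```python
import cv2

def calibrate_pair(obj_points, img_points_l, img_points_r,
                   A_l, dist_l, A_r, dist_r, image_size):
    """Relative rotation/translation between two pre-calibrated cameras (S102).

    obj_points: checkerboard corners in board coordinates, per view.
    img_points_l / img_points_r: matched detections in left/right images.
    A_l, dist_l, A_r, dist_r: per-camera intrinsics from S101.
    """
    rms, _, _, _, _, R_rel, T_rel, E, F = cv2.stereoCalibrate(
        obj_points, img_points_l, img_points_r,
        A_l, dist_l, A_r, dist_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)  # keep per-camera intrinsics fixed
    # F is the fundamental matrix reused later in the epipolar matching step.
    return R_rel, T_rel, F
```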
Furthermore, between the above S11 and S12, the method also includes:
S14: capture the marker points;
S15: screen target marker points from the captured marker points.
Specifically, when there are multiple marker points, a camera captures all marker points corresponding to the first target object, and the marker points most relevant to the first target object are chosen from them. A corresponding image processing algorithm is then used to extract the marker points from the image; this extraction is carried out according to the features of the marker points. Generally, the method of extracting the features of the marker points is to apply a feature extraction function H to the image I, obtain the feature score of each point in the image, and screen out the marker points whose feature value is sufficiently high. This can be summarized by the following formulas:
S(x, y) = H(I(x, y))
F = { arg_(x, y) ( S(x, y) > s_0 ) }
In the above formulas, H is the feature extraction function; I(x, y) is the value of each pixel (x, y) of the image, which can be a gray value or a three-channel color energy value, etc.; S(x, y) is the feature score of each pixel (x, y) after feature extraction; and s_0 is a feature score threshold, points whose S(x, y) exceeds s_0 being regarded as marker points. F is the set of marker points. Preferably, this embodiment of the utility model uses infrared marker points, whose energy feature in the image formed by the infrared camera is the most distinct. Since narrow-band infrared LED lamps and a matching infrared camera are used, most pixels of the image formed by the camera have very low energy, and only the pixels corresponding to the marker points have high energy. The corresponding function H can therefore apply a threshold segmentation operator to the image, perform region growing on the segmented image B(x, y) to obtain several sub-images, and extract the centroid of each acquired sub-image. The feature extraction function H(x, y) can be a feature point function such as Harris, SIFT, or FAST, or an image processing function such as circular light-spot extraction. Meanwhile, considering the ambient stray light that can form an image in the infrared camera, constraints such as the area of the light spot formed by a marker point and the positional relation of the marker points in the two-dimensional image can be added during infrared marker extraction, so as to screen the extracted marker points.
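A minimal sketch of this extraction pipeline (threshold segmentation, region growing via connected components, centroid extraction) is given below in Python with OpenCV; the threshold s0 and the area bounds are illustrative constraints to be tuned per device:

```python
import cv2
import numpy as np

def extract_marker_spots(gray, s0=200, min_area=4, max_area=400):
    """Threshold segmentation + region growing + centroid extraction.

    gray: 8-bit image from the infrared camera, where only marker
    pixels are expected to carry high energy. s0 and the area bounds
    are illustrative constraints, to be tuned per device.
    """
    _, binary = cv2.threshold(gray, s0, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    spots = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:   # spot-area constraint
            spots.append(centroids[i])     # (x, y) centre of gravity
    return np.array(spots)
```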
When the number of cameras is greater than 1, the images obtained by different cameras at the same moment, or close to the same moment, need to be matched for marker points, thereby providing the condition for the subsequent three-dimensional reconstruction of the marker points. The method of marker point matching depends on the feature extraction function H. Classical gradient- and grayscale-based feature point extraction operators and the matching methods that go with them, such as Harris, SIFT, or FAST, can be used to obtain and match the marker points. Marker point matching can also be done by means of the epipolar constraint, prior conditions on the marker points, and the like. The method of matching and screening with the epipolar constraint is as follows: according to the principle that the projections of the same point on the images of two different cameras lie in the same plane, for a marker point p0 in one camera c0, an epipolar line equation can be calculated in the other camera c1, and the marker point p1 on the other camera c1 corresponding to marker point p0 satisfies the following relation:
[p_1; 1]^T · F · [p_0; 1] = 0
In the above formula, F is the fundamental matrix from camera c0 to camera c1. By using this relation, the number of candidates for marker point p1 can be reduced significantly, improving matching accuracy.
In addition, the prior conditions of the marker points that can be used include the spatial order of the marker points, the size of the marker points, and so on. For example, according to the mutual positional relation of the two cameras, the two pixels of each pair corresponding to the same spatial point on the captured images can be made equal in some dimension, e.g., on the y axis; this process is also called image rectification. The matching of marker points can then simply be performed according to the x-axis order of the points, i.e., the minimum x corresponds to the minimum x, and so on, with the maximum x corresponding to the maximum x.
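For illustration, the epipolar relation above can be applied directly as a matching filter: candidate points in camera c1 whose residual against the epipolar line of p0 is too large are discarded. A minimal sketch; the pixel tolerance is an assumption of the example:

```python
import numpy as np

def epipolar_candidates(p0, candidates, F, tol=2.0):
    """Keep only the points in camera c1 consistent with marker p0 in c0.

    F: 3x3 fundamental matrix from camera c0 to camera c1.
    tol: illustrative threshold on |[p1;1]^T F [p0;1]|, normalized by
    the epipolar line so it behaves like a point-line pixel distance.
    """
    h0 = np.array([p0[0], p0[1], 1.0])
    line = F @ h0                      # epipolar line in image c1
    norm = np.hypot(line[0], line[1])  # normalize to point-line distance
    keep = []
    for p1 in candidates:
        residual = abs(line[0] * p1[0] + line[1] * p1[1] + line[2]) / norm
        if residual < tol:
            keep.append(p1)
    return keep
```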
Based on the number of cameras used for tracking, the target tracking method of the utility model is discussed in detail below.
Referring to Figure 13, which is a detailed flowchart of the first variation of S13 in Figure 10. As shown in Figure 13, in the present embodiment, when the marker points corresponding to the first target object tracked by the first-target-object tracking method number fewer than four, and a monocular camera is used to obtain the positional information of the marker points, the aforementioned step S13 further includes:
S131: calculate the homography transformation relation between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points in the standard image, and the assumed conditions of the scene in which the first target object is located. The marker points of the current image are matched with the reference marker points in the standard image, and the homography between the current image and the standard image is calculated from their respective plane coordinates. The so-called homography transformation is the homography of projective geometry, a transformation method commonly used in the field of computer vision.
S132: according to the homography relation, calculate the rigid transformation of the marker points from their spatial positions at the moment the standard image was shot to their spatial positions at the current moment, then calculate the spatial positions of the marker points at the current moment, and calculate the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
Specifically, as the assumed condition of the scene, it can be assumed that the value of a certain dimension remains constant during the rigid transformation of the marker points. For example, in a three-dimensional scene with spatial coordinates x, y, z, where x and y are parallel to the x and y axes of the camera's image coordinates (plane coordinates) and the z axis is perpendicular to the camera image, the assumed condition can be that the z coordinates of the marker points are constant, or that the x and/or y coordinates of the marker points are constant. Different scene assumptions call for somewhat different estimation methods. For another example, under a different assumption, the rotation angle between the orientation of the first target object and the orientation of the camera remains constant during use; the current spatial position of the first target object can then be inferred from the ratio between the mutual distances of the marker points in the current image and the mutual distances of the marker points in the standard image.
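Under the last-mentioned assumption (unchanged orientation), the inference reduces to the pinhole relation that inter-marker pixel distance is inversely proportional to depth. A minimal sketch; z_standard is assumed to have been measured when the standard image was taken:

```python
import numpy as np

def depth_from_marker_spacing(markers_now, markers_std, z_standard):
    """Infer current depth from the growth/shrink of marker spacing.

    Assumes the head orientation relative to the camera is unchanged
    since the standard image, so the inter-marker pixel distance is
    inversely proportional to depth (pinhole model).
    """
    d_now = np.linalg.norm(np.asarray(markers_now[0]) - np.asarray(markers_now[1]))
    d_std = np.linalg.norm(np.asarray(markers_std[0]) - np.asarray(markers_std[1]))
    return z_standard * d_std / d_now  # larger spacing -> closer to camera

# e.g. markers drawn 10% farther apart than in a standard image taken
# at 600 mm imply the viewer is now at about 545 mm.
```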
Through the above calculation method, the spatial position of the first target object can be reconstructed with a monocular camera when the number of marker points is fewer than four. It is simple to operate and the tracking result is fairly accurate; and since a monocular camera is used, the cost of tracking the first target object is reduced.
In the above method of acquiring images with a single camera and recovering the three-dimensional coordinates of an object, less image information is obtained; it is therefore necessary to increase the number of marker points to provide more image information for calculating the three-dimensional coordinates of the object. According to machine vision theory, to infer the stereoscopic information of a scene from a single image, at least five fixed points in the image must be determined. The monocular scheme therefore increases the number of marker points and the complexity of the design; at the same time, only one camera is needed, which reduces the complexity of image acquisition and reduces cost.
Referring to Figure 14, which is a detailed flowchart of the second variation of S13 in Figure 10. As shown in Figure 14, in the present embodiment, when the number of marker points is more than five and a monocular camera is used to obtain the positional information of the marker points, S13 further includes:
S133: calculate the homography transformation relation between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points of the standard image;
S134: according to the homography relation, calculate the rigid transformation of the marker points from their spatial positions at the moment the standard image was shot to their spatial positions at the current moment, then calculate the spatial positions of the marker points at the current moment, and calculate the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
Specifically, a standard image is first acquired; an accurate depth camera, laser scanner, or similar device is used to measure the spatial positions of the reference marker points, and the two-dimensional image coordinates (i.e., plane coordinates) of the reference marker points at that time are obtained.
In use, the camera continuously captures the two-dimensional image coordinates of all marker points in the current image containing the first target object. The rigid transformation between the marker points in the current state and the marker points at the time the standard image was shot is calculated from the current two-dimensional image coordinates and the two-dimensional coordinates of the reference marker points of the standard image, under the assumption that the relative positions between the marker points are invariant. The spatial transformation of the marker points relative to the standard image is thereby obtained, and the current spatial positions of the marker points are calculated.
Here, using five or more points, the rigid transformation [R|T] between the spatial positions of the current marker points and the marker points at the time the standard image was shot can be calculated. Preferably, these five or more points do not lie in one plane, and the projection matrix P of the camera is calibrated in advance. The specific way of calculating [R|T] is as follows:
The homogeneous coordinates of each marker point in the standard image and the current image are X_0 and X_i respectively. The two satisfy the epipolar constraint, i.e., X_0·P^{-1}[R|T]·P = X_i. All marker points together form a system of equations in the unknown parameter [R|T]. When the number of marker points is greater than 5, [R|T] can be solved; when the number of marker points is greater than 6, an optimal solution of [R|T] can be sought, for example by singular value decomposition (SVD) and/or by an iterative method computing the nonlinear optimal solution. After the spatial positions of the marker points are calculated, the spatial position of the first target object (such as the eyes) can be deduced from the pre-calibrated mutual positional relation between the marker points and the first target object (such as the eyes).
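When the spatial coordinates of the five or more marker points are known from the standard-image measurement and the camera has been calibrated, recovering [R|T] is the classical pose-estimation problem. A minimal sketch using OpenCV's PnP solver as a stand-in for the SVD/iterative solution described above; the solver choice is an assumption of the example, not the method fixed by the utility model:

```python
import cv2
import numpy as np

def marker_pose(marker_points_3d, marker_points_2d, A, dist):
    """Rigid transform [R|T] from the standard-image marker frame
    to the current camera view.

    marker_points_3d: (N>=5, 3) marker coordinates measured for the
    standard image (e.g. with a laser or structured-light scanner).
    marker_points_2d: (N, 2) coordinates in the current image.
    A, dist: calibrated intrinsics and distortion coefficients.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, np.float32),
        np.asarray(marker_points_2d, np.float32), A, dist,
        flags=cv2.SOLVEPNP_EPNP)   # handles 5+ points in general position
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 matrix
    return R, tvec
```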
The present embodiment uses only one camera and five or more marker points, and can accurately reconstruct the spatial position of the first target object; it is simple to operate and low in cost.
Referring to Figure 15, which is a detailed flowchart of the third variation of S13 in Figure 10. As shown in Figure 15, the present embodiment uses two or more cameras and one or more marker points. When a binocular camera or a multi-lens camera is used to obtain the positional information of the marker points, S13 further includes:
S135: use the binocular or multi-view three-dimensional reconstruction principle to calculate the spatial position of each marker point at the current moment. The so-called binocular or trinocular reconstruction principle can use, for example, the following method: the spatial position of each marker point at the current moment is calculated from the parallax between the matched marker points of the left and right cameras; other existing common methods can also be used.
S136: calculate the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
Specifically, the mutual positional relations between the cameras are first calibrated by a multi-camera calibration method. In use, marker point coordinates are then extracted from the image obtained by each camera, and the marker points are matched, i.e., the marker point corresponding to each camera is obtained; the spatial positions of the marker points are then calculated from the matched marker points and the mutual positional relations between the cameras.
In a specific example, a multi-lens camera (i.e., the number of cameras is greater than or equal to 2) shoots the marker points to realize stereo reconstruction. Given the coordinate u of a marker point on the image shot by a certain camera, and the parameter matrix M of that camera, a ray can be calculated, and the marker point lies on this ray in space:
α_i·u_i = M_i·X,  i = 1, …, n (where n is a natural number greater than or equal to 2)
Similarly, according to the above formula, the ray corresponding to another camera can also be calculated for this marker point. Theoretically, these two rays converge at one point, namely the spatial position of the marker point. In practice, due to the digitization error of the cameras, the calibration errors of the intrinsic and extrinsic camera parameters, and so on, the rays do not converge at a single point; a triangulation method must therefore be used to approximately calculate the spatial position of the marker point. For example, a least-squares criterion can be used to take the point nearest to all the rays as the object point.
After the spatial positions of the marker points are calculated, the spatial position of the first target object (such as the eyes) can be deduced from the pre-calibrated mutual positional relation between the marker points and the first target object.
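The least-squares triangulation just described is available as a standard routine; a minimal sketch for the two-camera case, assuming calibrated 3×4 projection matrices P0 and P1 and a matched pair of marker pixels:

```python
import cv2
import numpy as np

def triangulate_marker(P0, P1, u0, u1):
    """Approximate the marker's spatial position from two rays.

    P0, P1: 3x4 projection matrices of cameras c0, c1 (from calibration).
    u0, u1: matched 2D marker coordinates in the two images.
    Returns the 3D point nearest (in the least-squares sense) to both rays.
    """
    pts0 = np.asarray(u0, np.float64).reshape(2, 1)
    pts1 = np.asarray(u1, np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P0, P1, pts0, pts1)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                # de-homogenize
```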
In the above-described methods of realizing stereo reconstruction with a multi-lens camera, the preferred method is computation with a binocular camera. Its principle is the same as the multi-lens reconstruction principle described above: the spatial position of a marker point is calculated from the mutual positional relation of the two cameras and the two-dimensional coordinates of the marker point in the two camera images. The minor difference is that the binocular cameras are laid out in parallel and, after a simple calibration, the images of the two cameras are rectified as described above, so that two matched two-dimensional marker points are equal on the y (or x) axis; the depth of the marker point from the cameras can then be calculated from the gap between the rectified two-dimensional marker points on the x (or y) axis. The method can be regarded as a special case of multi-view stereo reconstruction under the binocular condition; it simplifies the steps of stereo reconstruction and is easier to implement in device hardware.
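In the rectified binocular case this reduces to the familiar disparity relation z = f·B/d; a minimal sketch, where the focal length in pixels and the baseline are assumed to come from the binocular calibration:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Depth of a rectified marker pair from its x-axis gap.

    After rectification the matched marker points agree on the y axis,
    so only the x gap (disparity) carries depth: z = f * B / d.
    focal_px and baseline_mm come from the binocular calibration.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_mm / disparity

# e.g. f = 1200 px, B = 60 mm, d = 24 px  ->  z = 3000 mm
```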
Embodiment 10
Referring to Figure 16, which is a detailed flowchart of S3 in Figure 10. As shown in Figure 16, on the basis of the aforementioned Embodiment Two and the preceding embodiments, step S3 of the stereoscopic display method of the utility model further includes:
S301: a pixel-arrangement parameter determination step, calculating the pixel-arrangement parameters on the display unit according to the obtained positional information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit;
S302: a parallax image arrangement step, arranging the parallax images on the display unit according to the pixel-arrangement parameters;
S303: a parallax image playing step, playing the parallax images.
Through the above steps, the stereoscopic image to be played is rearranged, improving the effect of the stereoscopic display.
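By way of illustration, one common way to realize the parallax image arrangement of S302 for a slanted grating is to compute, for every subpixel, which view it falls under from the grating pitch, the tilt, and a phase offset derived from the tracked viewer position. The sketch below uses one standard form of such interleaving; it is not necessarily the exact arrangement used by the utility model, and the phase term is where the tracked eye position enters:

```python
import numpy as np

def interleave_views(views, pitch_subpx, tan_tilt, phase):
    """Arrange N parallax views onto one panel image.

    views: list of N images of identical shape (H, W, 3).
    pitch_subpx: grating pitch measured in subpixel columns.
    tan_tilt: tangent of the grating tilt relative to the panel.
    phase: offset (in subpixels) derived from the viewer position,
    updated in real time from the tracking device.
    """
    n_views = len(views)
    h, w, _ = views[0].shape
    out = np.empty_like(views[0])
    ys = np.arange(h)[:, None]                  # row index
    for c in range(3):                          # R, G, B subpixel columns
        xs = 3 * np.arange(w)[None, :] + c      # subpixel column index
        u = (xs - ys * tan_tilt + phase) % pitch_subpx
        view_idx = (u / pitch_subpx * n_views).astype(int) % n_views
        for v in range(n_views):
            mask = view_idx == v
            out[..., c][mask] = views[v][..., c][mask]
    return out
```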
Furthermore, before step S301 the method also includes S304, a stereoscopic image obtaining step: obtain the information of the stereoscopic image captured in real time. By obtaining the captured stereoscopic image information in real time while the parallax images are being played, the efficiency of image processing is improved; this not only ensures real-time playback but also reduces the memory footprint of the stereoscopically displayed images and the demand for large memory, reducing cost.
Through the above description of the embodiments, those skilled in the art can clearly understand that the embodiments of the utility model can be realized by hardware, or by software together with the necessary common hardware platform. Based on this understanding, the technical solution of the embodiments of the utility model can be embodied in the form of a software product; the software product can be stored in a non-volatile storage medium (which can be a CD-ROM, USB flash drive, portable hard drive, etc.) and includes a number of instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute the methods described in each implementation scenario of the embodiments of the utility model.
The above are only preferred embodiments of the utility model and do not restrict the utility model in any form. Although the utility model has been disclosed above through preferred embodiments, these are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the utility model, use the technical content disclosed above to make slight changes or modifications into equivalent embodiments; any simple amendment, equivalent change, or modification made to the above embodiments according to the technical essence of the utility model, provided its content does not depart from the technical solution of the utility model, still falls within the scope of the technical solution of the utility model.

Claims (28)

1. A stereoscopic display system, comprising a display unit, a light-splitting unit, and a tracking device, the tracking device being configured to obtain the positional information of a first target object, the light-splitting unit being located on the display side of the display unit and spatially dividing the image displayed by the display unit into a left view and a right view, characterized in that the stereoscopic display system further comprises an image playback processing unit connected respectively with the tracking device and the display unit, the image playback processing unit processing an image to be played in real time according to the positional information of the first target object, the grating parameters of the light-splitting unit, and the display parameters of the display unit, and sending the processed image to the display unit for display; the tracking device further comprises a hardware processing module for obtaining the positional information of the first target object by hardware processing; or, the image playback processing unit comprises a hardware processing module for processing the image to be played in real time by hardware processing.
2. The stereoscopic display system as claimed in claim 1, characterized in that the stereoscopic display system further comprises an image acquisition unit, the image acquisition unit being configured to shoot a second target object and to send the captured stereoscopic image information of the second target object to the image playback processing unit in real time.
3. The stereoscopic display system as claimed in claim 2, characterized in that the image acquisition unit comprises a monocular camera, shooting and obtaining the stereoscopic image of the second target object with one camera.
4. The stereoscopic display system as claimed in claim 2, characterized in that the image acquisition unit comprises a binocular camera, shooting and obtaining the stereoscopic image of the second target object with two cameras.
5. The stereoscopic display system as claimed in claim 2, characterized in that the image acquisition unit comprises a multi-lens camera, shooting and obtaining the stereoscopic image of the second target object with three or more cameras arranged in an array.
6. The stereoscopic display system as claimed in any one of claims 2 to 5, characterized in that the image acquisition unit further comprises a collecting unit, the collecting unit being configured to collect the stereoscopic image of the second target object and to extract left-view information and right-view information from the stereoscopic image.
7. The stereoscopic display system as claimed in claim 6, characterized in that the image acquisition unit synthesizes the stereoscopic image of the second target object by hardware processing, the single synthesized stereoscopic image containing both left-view and right-view information.
8. The stereoscopic display system as claimed in claim 7, characterized in that the module realizing the stereoscopic image synthesis function in the image acquisition unit is a field-programmable gate array processing module.
9. The stereoscopic display system as claimed in claim 7, characterized in that the module realizing the stereoscopic image synthesis function in the image acquisition unit is an application-specific integrated circuit processing module.
10. The stereoscopic display system as claimed in claim 1, characterized in that the tracking device comprises a camera, the camera tracking the change in position of the first target object.
11. The stereoscopic display system as claimed in claim 1, characterized in that the tracking device comprises an infrared receiver, the infrared receiver receiving the infrared positioning signal sent by an infrared transmitter arranged corresponding to the first target object.
12. The stereoscopic display system as claimed in claim 1, characterized in that the tracking device comprises:
a marker point arrangement unit, arranging marker points corresponding to the spatial position of the first target object;
an acquiring unit, obtaining the positional information of the marker points;
a reconstruction unit, reconstructing the spatial position of the first target object according to the positional information of the marker points.
13. The stereoscopic display system as claimed in claim 12, characterized in that the acquiring unit further comprises:
a presetting module, presetting a standard image in which reference marker points are arranged, and obtaining the spatial coordinates and plane coordinates of the reference marker points in the standard image;
an acquisition module, obtaining a current image containing the first target object and the marker points, together with the plane coordinates of the marker points in the current image;
a matching module, matching the marker points in the current image with the reference marker points of the standard image.
14. The stereoscopic display system as claimed in claim 13, characterized in that, when the number of marker points is fewer than four and a monocular camera is used to obtain the positional information of the marker points, the reconstruction unit further comprises:
a first calculation module, for calculating the homography transformation relation between the current image and the standard image according to the plane coordinates of the marker points in the current image, the plane coordinates of the reference marker points of the standard image, and the assumed conditions of the scene in which the first target object is located;
a first reconstruction module, for calculating, according to the homography relation, the rigid transformation of the marker points from their spatial positions at the moment the standard image was shot to their spatial positions at the current moment, then calculating the spatial positions of the marker points at the current moment, and calculating the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
15. The stereoscopic display system as claimed in claim 13, characterized in that, when the number of marker points is more than five and a monocular camera is used to obtain the positional information of the marker points, the reconstruction unit further comprises:
a second calculation module, for calculating the homography transformation relation between the current image and the standard image according to the plane coordinates of the marker points in the current image and the plane coordinates of the reference marker points of the standard image;
a second reconstruction module, for calculating, according to the homography relation, the rigid transformation of the marker points from their spatial positions at the moment the standard image was shot to their spatial positions at the current moment, then calculating the spatial positions of the marker points at the current moment, and calculating the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
16. The stereoscopic display system as claimed in claim 13, characterized in that, when a binocular camera or a multi-lens camera is used to obtain the positional information of the marker points, the reconstruction unit further comprises:
a third calculation module, for calculating the spatial position of each marker point at the current moment using the binocular or multi-view three-dimensional reconstruction principle;
a third reconstruction module, for calculating the current spatial position of the first target object from the spatial positions of the marker points at the current moment.
17. The stereoscopic display system as claimed in claim 1, characterized in that the hardware processing module is a field-programmable gate array processing module.
18. The stereoscopic display system as claimed in claim 17, characterized in that the hardware processing module is an application-specific integrated circuit processing module.
19. The stereoscopic display system as claimed in any one of claims 2 to 5, characterized in that the image playback processing unit comprises:
a pixel-arrangement parameter determination module, calculating the pixel-arrangement parameters on the display unit according to the obtained positional information of the first target object and the grating parameters of the light-splitting unit;
a parallax image arrangement module, for arranging the parallax images on the display unit according to the pixel-arrangement parameters;
a parallax image playing module, playing the parallax images.
20. The stereoscopic display system as claimed in claim 19, characterized in that the image playback processing unit comprises:
a stereoscopic image obtaining module, obtaining the information of the stereoscopic image shot by the image acquisition unit.
21. The stereoscopic display system as claimed in claim 1, characterized in that the light-splitting unit is a parallax barrier or a lenticular grating.
22. The stereoscopic display system as claimed in claim 21, characterized in that the lenticular grating is a liquid crystal lens grating.
23. The stereoscopic display system as claimed in any one of claims 1 to 5, 10 to 18, and 21 to 22, characterized in that a bonding unit is arranged between the light-splitting unit and the display unit, the light-splitting unit being bonded onto the display unit by the bonding unit.
24. The stereoscopic display system as claimed in claim 23, characterized in that the bonding unit is made of a single piece of transparent material.
25. The stereoscopic display system as claimed in claim 23, characterized in that the bonding unit comprises a first substrate, a second substrate, and an air layer between the first substrate and the second substrate.
26. The stereoscopic display system as claimed in claim 12, characterized in that the tracking device further comprises a locating support, the locating support being provided with the marker points.
27. The stereoscopic display system as claimed in claim 26, characterized in that the locating support comprises a crossbeam, a fixing portion, and a supporting portion, the crossbeam being provided with the marker points, the supporting portion being arranged on the crossbeam and supporting the crossbeam, and the fixing portion being pivotally connected with the end of the crossbeam.
28. The stereoscopic display system as claimed in claim 26 or 27, characterized in that the marker points are light sources capable of emitting light.
CN201521106711.3U 2014-12-29 2015-12-25 Stereo display system Active CN205610834U (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410837210 2014-12-29
CN2014108372106 2014-12-29

Publications (1)

Publication Number Publication Date
CN205610834U true CN205610834U (en) 2016-09-28

Family

ID=56390324

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201521106711.3U Active CN205610834U (en) 2014-12-29 2015-12-25 Stereo display system
CN201510991868.7A Active CN105791800B (en) 2014-12-29 2015-12-25 Three-dimensional display system and stereo display method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201510991868.7A Active CN105791800B (en) 2014-12-29 2015-12-25 Three-dimensional display system and stereo display method

Country Status (1)

Country Link
CN (2) CN205610834U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105791800A (en) * 2014-12-29 2016-07-20 深圳超多维光电子有限公司 Three-dimensional display system and three-dimensional display method
CN114374784A (en) * 2022-01-11 2022-04-19 深圳市普朗信息技术有限公司 Intelligent medical live broadcast control method, system and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018019217A1 (en) * 2016-07-26 2018-02-01 Hanson Robotics Limited Apparatus and method for displaying a three-dimensional image
CN108696742A (en) * 2017-03-07 2018-10-23 深圳超多维科技有限公司 Display methods, device, equipment and computer readable storage medium
CN107289247B (en) * 2017-08-04 2020-05-05 南京管科智能科技有限公司 Double-camera three-dimensional imaging device and imaging method thereof
CN109089106A (en) * 2018-08-30 2018-12-25 宁波视睿迪光电有限公司 Naked eye 3D display system and naked eye 3D display adjusting method
CN109246419A (en) * 2018-09-17 2019-01-18 广州狄卡视觉科技有限公司 Surgical operation microscope doubleway output micro-pattern three-dimensional imaging display system and method
CN110378969B (en) * 2019-06-24 2021-05-18 浙江大学 Convergent binocular camera calibration method based on 3D geometric constraint
CN115862539B (en) * 2023-03-02 2023-05-05 深圳市柯达科电子科技有限公司 Luminous light source adjusting method of OLED display panel

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5973707B2 (en) * 2011-10-14 2016-08-23 オリンパス株式会社 3D endoscope device
US9055289B2 (en) * 2011-11-23 2015-06-09 Korea Institute Of Science And Technology 3D display system
CN103780896A (en) * 2012-10-22 2014-05-07 韩国电子通信研究院 No-glass three-dimensional display device and method for moving view area
KR101371387B1 (en) * 2013-01-18 2014-03-10 경북대학교 산학협력단 Tracking system and method for tracking using the same
JP5942129B2 (en) * 2013-03-14 2016-06-29 株式会社ジャパンディスプレイ Display device
CN204578692U (en) * 2014-12-29 2015-08-19 深圳超多维光电子有限公司 Three-dimensional display system
CN205610834U (en) * 2014-12-29 2016-09-28 深圳超多维光电子有限公司 Stereo display system

Also Published As

Publication number Publication date
CN105791800A (en) 2016-07-20
CN105791800B (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN205610834U (en) Stereo display system
US10560687B2 (en) LED-based integral imaging display system as well as its control method and device
CN101072366B (en) Free stereo display system based on light field and binocular vision technology
CN106840398B (en) A kind of multispectral light-field imaging method
CN105704479B (en) The method and system and display equipment of the measurement human eye interpupillary distance of 3D display system
CN102098524B (en) Tracking type stereo display device and method
CN106101689B (en) The method that using mobile phone monocular cam virtual reality glasses are carried out with augmented reality
CN105809654B (en) Target object tracking, device and stereoscopic display device and method
CN108307675A (en) More baseline camera array system architectures of depth enhancing in being applied for VR/AR
CN204578692U (en) Three-dimensional display system
CN104599317B (en) A kind of mobile terminal and method for realizing 3D scanning modeling functions
CN101729920B (en) Method for displaying stereoscopic video with free visual angles
CN106773080B (en) Stereoscopic display device and display method
CN110390719A (en) Based on flight time point cloud reconstructing apparatus
CN105651384A (en) Full-light information collection system
CN104007556A (en) Low crosstalk integrated imaging three-dimensional display method based on microlens array group
CN105812774B (en) Three-dimensional display system and method based on intubation mirror
CN109660786A (en) A kind of naked eye 3D three-dimensional imaging and observation method
CN104104939A (en) Wide viewing angle integrated imaging three-dimensional display system
CN105812772B (en) Medical image three-dimensional display system and method
CN109084679B (en) A kind of 3D measurement and acquisition device based on spatial light modulator
CN204539353U (en) Medical image three-dimensional display system
CN105812776A (en) Stereoscopic display system based on soft lens and method
CN204377057U (en) Based on the three-dimensional display system of intubate mirror
WO2021115298A1 (en) Glasses matching design device

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180727

Address after: 518057 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Patentee after: Shenzhen super Technology Co., Ltd.

Address before: 518053 East Guangdong H-1 101, overseas Chinese Town Road, Nanshan District, Shenzhen.

Patentee before: Shenzhen SuperD Photoelectronic Co., Ltd.

TR01 Transfer of patent right