CN106780606A - Quad-camera positioning device and method - Google Patents
Quad-camera positioning device and method
- Publication number: CN106780606A
- Application number: CN201611267494.5A
- Authority: CN (China)
- Prior art keywords
- camera
- binocular camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Abstract
The present invention provides a quad-camera positioning device comprising an upper-left camera, a lower-left camera, an upper-right camera and a lower-right camera. The upper-left camera and the upper-right camera form an upper binocular camera, the lower-left camera and the lower-right camera form a lower binocular camera; the upper binocular camera is tilted upward and the lower binocular camera is tilted downward. Compared with the prior art, the invention combines four cameras into a quad-camera rig and, by carefully designing the pitch angle and the spacing of the cameras, keeps the measurement blind zone as small as possible while enlarging the vertical field of view of the quad-camera rig. The imaging range of the quad-camera rig is thereby greatly increased and the user experience improved.
Description
Technical field
The present invention relates to the field of near-eye display devices, and more specifically to a quad-camera positioning device and method.
Background art
Spatial localization of a near-eye display device or of a handheld controller is a core technology of virtual reality. A practical scheme is to mount light sources on the helmet and then capture the projections of these light sources in images with a binocular camera. If the correspondence between at least three light sources and their projections is known, the spatial pose of the helmet can be obtained by invoking a PnP (Perspective-n-Point) algorithm. The camera generally used is a binocular camera: as shown in Fig. 1, two cameras with identical parameters are placed side by side, parallel to the horizontal. Such a binocular camera has the drawback of a small vertical field of view for positioning (Fig. 2); when the user moves vertically beyond the coverage of the binocular camera, tracking is often lost, which limits usability and degrades the experience.
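As an illustration of the PnP step mentioned above, the following Python sketch recovers the helmet pose from known LED positions and their detected image projections using OpenCV's solvePnP; the LED coordinates, image points and camera intrinsics are placeholder values assumed for the example, not data from the patent.

```python
# Minimal PnP sketch: recover the helmet pose from LED/image correspondences.
# All numeric values below are illustrative placeholders.
import numpy as np
import cv2

# Known 3D positions of the IR LEDs in the helmet's own frame, in metres.
led_points_helmet = np.array([
    [0.00, 0.05, 0.00],
    [0.06, 0.00, 0.01],
    [-0.06, 0.00, 0.01],
    [0.00, -0.05, 0.03],
], dtype=np.float64)

# Their detected projections in one camera image, in pixels (already matched).
image_points = np.array([
    [312.4, 198.7],
    [401.2, 260.3],
    [228.9, 262.1],
    [315.0, 330.6],
], dtype=np.float64)

# Camera intrinsics (placeholder values); assume an undistorted image.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(
    led_points_helmet, image_points, camera_matrix, dist_coeffs,
    flags=cv2.SOLVEPNP_EPNP)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)  # helmet-to-camera rotation matrix
    print("helmet position in camera frame:", tvec.ravel())
```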
Summary of the invention
To overcome the drawback that current binocular cameras have a small vertical field of view, the present invention provides a quad-camera positioning device and method with a large vertical field of view.
The technical solution adopted by the present invention to solve this technical problem is as follows. A quad-camera positioning device is provided, comprising an upper-left camera, a lower-left camera, an upper-right camera and a lower-right camera; the upper-left camera and the upper-right camera form an upper binocular camera, the lower-left camera and the lower-right camera form a lower binocular camera, the upper binocular camera is tilted upward, and the lower binocular camera is tilted downward.
Preferably, the angle between the optical axes of the upper binocular camera and the lower binocular camera is 10°.
Preferably, the spacing between the upper binocular camera and the lower binocular camera is 20 mm.
A localization method is also provided, comprising the following steps:
S1: the upper binocular camera and the lower binocular camera each identify the calibration points;
S2: when all calibration points lie within the imaging range of the upper binocular camera only, or of the lower binocular camera only, the calibration points are localized using the computation of binocular camera positioning;
S3: when a calibration point lies in the overlap region of the upper binocular camera and the lower binocular camera, both binocular cameras identify it simultaneously.
Preferably, a threshold r is set; when the spatial distance between the points found by the upper binocular camera and the lower binocular camera in the overlap region is smaller than the threshold r, the two calibration points found by the two binocular cameras are judged to correspond to the same point in space, and the midpoint of the two spatial positions is taken as the spatial coordinate of that calibration point.
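A minimal sketch of this merging rule, assuming the two camera pairs each return a 3D point in a common reference frame; the threshold value and the coordinates below are illustrative placeholders, since the patent does not specify a value for r.

```python
# Threshold-r merging rule: if the 3D points reconstructed by the upper and
# lower camera pairs are closer than r, treat them as the same calibration
# point and use their midpoint.
import numpy as np

def merge_overlap_points(p_upper, p_lower, r=0.01):
    """p_upper, p_lower: 3D points (metres) from the upper and lower pairs."""
    p_upper = np.asarray(p_upper, dtype=float)
    p_lower = np.asarray(p_lower, dtype=float)
    if np.linalg.norm(p_upper - p_lower) < r:
        return (p_upper + p_lower) / 2.0   # same physical point: take the midpoint
    return None                            # otherwise treat them as distinct points

print(merge_overlap_points([0.10, 0.52, 1.01], [0.104, 0.522, 1.012]))
```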
Compared with the prior art, the present invention combines four cameras into a quad-camera rig; by carefully designing the pitch angle and the spacing of the cameras, the measurement blind zone is kept as small as possible while the vertical field of view of the quad-camera rig is enlarged, which greatly increases the imaging range of the quad-camera rig and improves the user experience. Setting the threshold r in the overlap region effectively prevents the quad-camera rig from identifying the same point twice in the overlap region, making the positioning more accurate.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a first structural schematic of a prior-art binocular camera;
Fig. 2 is a second structural schematic of a prior-art binocular camera;
Fig. 3 is a structural schematic of the quad-camera rig of the invention;
Fig. 4 is a schematic of the positioning principle of the quad-camera rig of the invention.
Detailed description of the embodiments
To overcome the drawback that current binocular cameras have a small vertical field of view, the present invention provides a quad-camera positioning device and method with a large vertical field of view. For a clearer understanding of the technical features, objects and effects of the invention, specific embodiments are now described in detail with reference to the drawings.
Referring to Figs. 3 and 4, the quad-camera positioning device of the invention includes a quad-camera rig 10 comprising four cameras located at its four corners: an upper-left camera 101, a lower-left camera 102, an upper-right camera 103 and a lower-right camera 104. The upper-left camera 101 and the upper-right camera 103 form one binocular pair, referred to as the upper binocular camera; the lower-left camera 102 and the lower-right camera 104 form another binocular pair, referred to as the lower binocular camera. In the vertical direction the optical axes of the two pairs are not parallel: the binocular camera formed by the upper-left camera 101 and the upper-right camera 103 is tilted upward, the binocular camera formed by the lower-left camera 102 and the lower-right camera 104 is tilted downward, and the angle between the optical axes of the two binocular cameras is α. As can be seen from Fig. 4, the larger the angle α between the optical axes, the longer the blind zone L, and a large blind zone affects spatial measurement to some extent. The blind zone L therefore needs to be reduced as far as possible while the field of view of the quad-camera rig 10 remains sufficiently large, which requires the angle α to be as small as possible while the positioning field of view is guaranteed.
When a near-eye display device (not shown) is used, the user is typically within a range of about 1 m from the cameras, and the relevant positioning data are calculated for this range. When the user stretches an arm upward or downward, the arm span is about 1.5 m; in normal use the user rarely holds the controller fully upward or stretches an arm fully downward, so the practical design range can be taken as approximately 1.5 m. To keep the blind zone L sufficiently small while still being able to image a height of 1.5 m at 1 m in front of the cameras, the angle α and the spacing d between the upper and lower cameras must be designed together. Here α is taken as 10° and d = 20 mm; the blind zone L is then 24.14 mm long, which is negligible relative to the 1 m measurement distance. With d = 20 mm, the quad-camera rig 10 can image a height of 1.475 m, which satisfies the design condition of approximately 1.5 m. The effective vertical field of view γ of the combined cameras thus increases from the single-camera field of view β = 55° to 72.78°, greatly enlarging the vertical coverage of the quad-camera rig 10 and making spatial positioning more convenient.
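A rough numerical sketch of this design follows, under the assumption that each binocular pair is tilted by α/2 from the horizontal and that the blind zone ends where the inner field-of-view edges of the two pairs cross; the patent does not spell out its derivation of γ, so the effective angle is computed here from the quoted coverage of 1.475 m at a 1 m working distance.

```python
# Rough sketch of the design numbers quoted above. Assumptions (not stated
# explicitly in the patent): each pair is tilted alpha/2 from the horizontal,
# and the blind zone ends where the inner FOV edges of the two pairs cross.
import math

alpha = math.radians(10.0)   # angle between the optical axes of the two pairs
beta = math.radians(55.0)    # vertical field of view of a single camera
d = 20.0                     # vertical spacing between the pairs, in mm

# Each inner FOV edge makes (beta/2 - alpha/2) with the horizontal and starts
# d/2 above/below the rig centre; the edges cross at distance L.
L = (d / 2.0) / math.tan(beta / 2.0 - alpha / 2.0)
print(f"blind zone L ~= {L:.2f} mm")        # ~24.14 mm, as quoted

# Effective vertical FOV gamma implied by covering a 1.475 m height at 1 m
# (the figures quoted in the description).
covered_height_m = 1.475
distance_m = 1.0
gamma = 2.0 * math.degrees(math.atan((covered_height_m / 2.0) / distance_m))
print(f"combined vertical FOV gamma ~= {gamma:.2f} deg")   # ~72.8 deg
```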
For positioning, the quad-camera rig 10 is placed in front of the user, with the near-eye display device and the controller both within the field of view of the quad-camera rig 10. Infrared lamps that emit infrared light are mounted on the near-eye display device and on the controller and serve as the calibration points for position tracking. When positioning starts, the upper binocular camera and the lower binocular camera each identify the infrared points. When the calibration points of the near-eye display device and the controller all lie only within the range of the upper binocular camera, they are localized according to the localization method of a binocular camera; likewise, when they all lie only within the range of the lower binocular camera, they are localized according to the localization method of a binocular camera. When a calibration point of the near-eye display device or the controller lies in the region where the two ranges overlap, both binocular cameras identify it simultaneously. In that case the upper binocular camera and the lower binocular camera may both report the same identification point in the overlap region, and this duplicate identification could confuse the positioning. A threshold r is therefore set: when the spatial distance between the points found by the upper binocular camera and the lower binocular camera in the overlap region is smaller than the threshold r, the two calibration points found by the two binocular cameras are considered to correspond to the same point in space, and the midpoint of the two spatial positions is taken as the spatial coordinate of that calibration point.
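The binocular localization referred to above amounts to standard stereo triangulation within one camera pair; the sketch below uses OpenCV's triangulatePoints with placeholder projection matrices and matched pixel coordinates, none of which are values from the patent.

```python
# Sketch of localizing one calibration point with a single binocular pair via
# stereo triangulation. Projection matrices and pixel coordinates are
# placeholders; a real system would obtain them from stereo calibration.
import numpy as np
import cv2

# 3x4 projection matrices of the left and right cameras of one pair
# (identical intrinsics, right camera shifted 60 mm along x as an example).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])

# Matched pixel coordinates of the same IR calibration point in both images.
pt_left = np.array([[300.0], [250.0]])
pt_right = np.array([[252.0], [250.0]])

# Triangulate and convert from homogeneous to 3D coordinates (metres).
point_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("calibration point in the pair's reference frame:", point_3d)
```

With these placeholder values the disparity of 48 px and 60 mm baseline put the point roughly 1 m in front of the pair, consistent with the working distance assumed in the description.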
Compared with the prior art, the present invention combines four cameras into the quad-camera rig 10; by carefully designing the pitch angle and the spacing of the cameras, the measurement blind zone is kept as small as possible while the vertical field of view of the quad-camera rig 10 is enlarged, which greatly increases the imaging range of the quad-camera rig 10 and improves the user experience. Setting the threshold r in the overlap region effectively prevents the quad-camera rig 10 from identifying the same point twice in the overlap region, making the positioning more accurate.
Embodiments of the invention have been described above with reference to the drawings, but the invention is not limited to the specific embodiments described; these embodiments are illustrative rather than restrictive. Under the teaching of the invention, one of ordinary skill in the art can devise many further forms without departing from the concept of the invention and the scope of the claims, and all of these fall within the protection of the invention.
Claims (5)
1. A quad-camera positioning device, characterized by comprising an upper-left camera, a lower-left camera, an upper-right camera and a lower-right camera, wherein the upper-left camera and the upper-right camera form an upper binocular camera, the lower-left camera and the lower-right camera form a lower binocular camera, the upper binocular camera is tilted upward, and the lower binocular camera is tilted downward.
2. The quad-camera positioning device according to claim 1, characterized in that the angle between the optical axes of the upper binocular camera and the lower binocular camera is 10°.
3. The quad-camera positioning device according to claim 2, characterized in that the spacing between the upper binocular camera and the lower binocular camera is 20 mm.
4. A localization method using the quad-camera positioning device according to claim 1, characterized by comprising the following steps:
S1: the upper binocular camera and the lower binocular camera each identify calibration points;
S2: when all calibration points lie within the imaging range of the upper binocular camera only, or of the lower binocular camera only, the calibration points are localized using the computation of binocular camera positioning;
S3: when a calibration point lies in the overlap region of the upper binocular camera and the lower binocular camera, both binocular cameras identify it simultaneously.
5. The localization method according to claim 4, characterized in that a threshold r is set, and when the spatial distance between the points found by the upper binocular camera and the lower binocular camera in the overlap region is smaller than the threshold r, the two calibration points found by the two binocular cameras are judged to correspond to the same point in space, and the midpoint of the two spatial positions is taken as the spatial coordinate of that calibration point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611267494.5A CN106780606A (en) | 2016-12-31 | 2016-12-31 | Quad-camera positioning device and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611267494.5A CN106780606A (en) | 2016-12-31 | 2016-12-31 | Quad-camera positioning device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106780606A (en) | 2017-05-31 |
Family
ID=58952459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611267494.5A (CN106780606A, Pending) | Quad-camera positioning device and method | 2016-12-31 | 2016-12-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106780606A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103455144A (en) * | 2013-08-22 | 2013-12-18 | 深圳先进技术研究院 | Vehicle-mounted man-machine interaction system and method |
CN105321160A (en) * | 2014-05-27 | 2016-02-10 | 穆阳 | Multi-camera calibration method for 3D panoramic parking |
CN105069784A (en) * | 2015-07-29 | 2015-11-18 | 杭州晨安视讯数字技术有限公司 | Double-camera target positioning mutual authentication nonparametric method |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170531 |