CN206300653U - A space positioning apparatus in a virtual reality system - Google Patents

A space positioning apparatus in a virtual reality system

Info

Publication number
CN206300653U
CN206300653U (application CN201621440089.4U)
Authority
CN
China
Prior art keywords
camera
image
attitude
pattern
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201621440089.4U
Other languages
Chinese (zh)
Inventor
弭强
王礼辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dynamic (Beijing) Technology Co Ltd
Original Assignee
Dynamic (Beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dynamic (Beijing) Technology Co Ltd
Priority to CN201621440089.4U priority Critical patent/CN206300653U/en
Application granted granted Critical
Publication of CN206300653U publication Critical patent/CN206300653U/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The utility model discloses a space positioning apparatus for use in a virtual reality system, comprising a camera attitude calibration module, a camera attitude fixing module, an initialization image acquisition and processing module, and a sequential image processing module. The camera attitude calibration module includes an imaging unit; the imaging unit includes one or more cameras. The camera attitude fixing module includes a stabilizer or gimbal. The initialization image acquisition and processing module includes an initialization reference pattern and a laser rangefinder or ultrasonic rangefinder. By wearing the imaging unit on the human body and applying image processing to the images acquired continuously during body movement, the displacement of the body is obtained, realizing spatial positioning within the virtual reality system. The technical solution of the utility model simplifies the arrangement of cameras and other peripheral equipment in a VR positioning scene and is more convenient to use.

Description

A space positioning apparatus in a virtual reality system
Technical field
The utility model relates to spatial positioning in virtual reality and augmented reality, and in particular to a space positioning apparatus for a virtual reality system.
Background technology
Virtual reality (VR) and augmented reality (AR) technology (collectively referred to in the utility model as VR technology) use computer technology and various sensor technologies to produce, within a particular range, a virtual environment of sight, hearing, and touch, or to overlay such content onto the real environment.
In VR systems, many application scenarios involve indoor spatial positioning; with indoor positioning technology, better interaction between the real world and the virtual world can be achieved. However, existing indoor positioning technologies mainly take two forms. In the first, reflective markers are attached to the human body, multiple cameras arranged around the scene capture images of these markers, and spatial positions are then determined by image processing. This method requires setting up multiple cameras, which is costly and cumbersome to arrange; moreover, the markers can be occluded during movement, which degrades positioning, so accuracy is not high. In the second, laser emitters arranged outside the scene scan the entire usable region; the human body wears laser receivers, and the current position is computed after a receiver picks up the laser signal. Like the first method, the second also requires deploying laser emitters around the scene and is likewise affected by occlusion. Additionally, both approaches strictly limit the user's range of activity: positioning fails once the user moves out of range. Furthermore, existing inertial motion-capture equipment and technology cannot provide accurate spatial positioning.
Utility model content
To overcome the above deficiencies of the prior art, the utility model provides a space positioning apparatus for use in a virtual reality system. One or more cameras are worn on the human body, each fixed at a certain position; image information is acquired continuously by the cameras and analyzed by image processing algorithms to determine the displacement of the body. This simplifies the arrangement of cameras and other peripheral equipment in a VR positioning scene and is more convenient to use.
The principle of the utility model is as follows. In a virtual reality system, the movement trajectory of the human body in the real world must be located and tracked so that the human model in the virtual world moves along the same trajectory, realizing spatial positioning within the virtual reality system and keeping the real body and the virtual model synchronized. In the utility model, cameras mounted on the body point perpendicular to the roof and/or walls. Because a camera moves with the body, the image it captures also moves (the positions of objects within the image change). By continuously capturing images of surrounding objects such as walls and analyzing those images, the continuous change of position within the picture is obtained, thereby realizing spatial positioning in the virtual reality system.
The technical scheme provided by the utility model is:
A space positioning apparatus in a virtual reality system, comprising a camera attitude calibration module, a camera attitude fixing module, an initialization image acquisition and processing module, and a sequential image processing module. The camera attitude calibration module includes an imaging unit, a calibration plane, and a reference figure, or alternatively an imaging unit and an IMU unit (an inertial measurement unit, IMU, is a device that measures an object's three-axis attitude). The imaging unit includes one or more cameras. The camera attitude calibration module performs initial calibration of the camera attitude. The camera attitude fixing module includes a stabilizer or gimbal, which fixes the cameras of the imaging unit to the human body so that each camera remains perpendicular to the calibration plane throughout subsequent use. The initialization image acquisition and processing module uses an initialization reference pattern or a laser rangefinder (or ultrasonic rangefinder) to acquire images through the camera and obtain the spatial proportion (numerical relation) between the acquired image and the real objects it depicts. The sequential image processing module applies image processing to the images acquired continuously during body movement, computing the camera's movement along its axial direction and its displacement and direction in the plane perpendicular to that axis, thereby obtaining the displacement of the body and realizing spatial positioning in the virtual reality system.
The one or more cameras of the imaging unit may be mounted at the same or different positions on the body, and each camera may be a monocular, binocular, or multi-lens camera. The calibration plane may be the roof and/or a wall; a camera fixed on the body is perpendicular to the roof and/or wall.
In a specific implementation, the camera attitude calibration module may include the imaging unit, the calibration plane, and a reference figure. In this case a reference marker figure (for example a cross, circle, square, or regular polygon) is placed on the calibration plane, and initial calibration of the camera attitude is performed using the reference marker figure, making the camera axis perpendicular to the calibration plane. Alternatively, the camera attitude calibration module may include the imaging unit and an IMU unit; in this case a device that outputs attitude information (the IMU unit) is placed on the camera to perform the initial calibration. When the attitude-output device is installed parallel to the camera lens plane and perpendicular to the camera axis, the attitude Euler angles it outputs are exactly the attitude Euler angles of the lens plane.
In a specific implementation, the initialization image acquisition and processing module may place a reference pattern on the calibration plane, or connect a laser rangefinder (or ultrasonic rangefinder) to the camera. The camera can then be calibrated using a reference pattern of known size, or initialization image acquisition can be performed using laser or ultrasonic ranging.
The above space positioning apparatus can be used together with inertial motion-capture equipment. In that case the camera attitude calibration module includes the imaging unit and IMU units: one IMU unit of the motion-capture equipment and the camera form an integrated device, for example with the IMU unit mounted at the camera's base, parallel to the lens plane. Because the motion-capture equipment includes multiple IMU units, installing the space positioning apparatus on the body makes the camera perpendicular to the roof or wall, or perpendicular to both. Images acquired by the apparatus's camera are processed to determine the spatial coordinates of the real body; the relative coordinate system established by the motion-capture equipment is then used to compute the spatial coordinates of any node of the equipment. The camera-jitter readings reflected by the IMU unit in the integrated device can also assist the camera in image stabilization. In the utility model embodiment, combining the camera with one IMU unit of the motion-capture equipment solves the problem that inertial motion capture alone cannot provide exact spatial positioning: when the photographed surface is strongly reflective and the continuous change between successively photographed images cannot be obtained, displacement can be computed from the IMU unit as a supplement. Meanwhile, through the IMU unit integrated with the camera, camera shake can be measured to assist image stabilization.
When the space positioning apparatus in the above virtual reality system operates, the cameras mounted on the body perpendicular to the roof and/or walls capture images that move as the body moves; the objects in the image, or the feature points and object edges in the image, move with the body. By continuously capturing images of surrounding objects and analyzing them, the continuous change of position within the picture is obtained, realizing spatial positioning in the virtual reality system. The workflow mainly comprises an initial camera attitude calibration process, a camera attitude fixing process, an initialization image acquisition and processing process, and a sequential image processing process:
1) Initial camera attitude calibration: make the camera axis perpendicular to the calibration plane.
2) Camera attitude fixing: keep the camera in its initially calibrated attitude throughout use.
3) Initialization image acquisition: acquire an image through the camera and, from a known quantity, obtain the numerical relation (spatial proportion) between the acquired image and the real objects it depicts.
4) Sequential image processing: acquire images sequentially, extract image contours and feature points, and analyze the change of feature points between the current frame and the previous frame. From the changes in contours and feature points, compute the camera's movement along its axial direction and in the plane perpendicular to that axis; the distance and direction of movement realize spatial positioning in the virtual reality system.
For the above positioning method, step 1) may perform initial calibration of the camera attitude using the reference marker figure method, or using the method of placing an attitude-output device on the camera.
When the reference-marker calibration method is used, the initial calibration process places a pattern on the calibration plane as the reference marker figure, compares the image obtained when the camera photographs the marker against the marker figure itself, and judges whether distortion has occurred (distortion in the utility model means deformation; uniform scaling is not distortion). If the image is distorted, the lens plane is not parallel to the photographed surface; if there is no distortion, the two are parallel. The calibration process therefore adjusts the camera until its image of the marker pattern is undistorted. The initial camera attitude calibration process comprises the following steps:
11) Place a cross or other shape (for example a circle, square, or regular polygon) on the calibration plane as the reference marker figure.
12) Capture an image of the reference marker pattern with the camera, obtaining the imaged pattern.
13) Analyze whether the imaged pattern is distorted relative to the reference marker pattern; adjust the camera attitude until the image shows no distortion relative to the marker, completing the initial calibration of the camera attitude.
When calibrating with a reference marker figure, whether the pattern is distorted is identified by judging whether the shape in the photographed image is still the original shape (size change alone is not distortion). For example, when the reference marker used for calibration is a cross, distortion can be identified by comparing whether the two arm lengths of the imaged cross are equal and whether their angle is a right angle.
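The arm-length and right-angle check described above can be sketched as follows; this is a minimal illustration assuming the four arm endpoints of the imaged cross have already been located in pixel coordinates (the function name and tolerance are hypothetical, not part of the utility model):

```python
import math

def cross_undistorted(p_top, p_bottom, p_left, p_right, tol=0.02):
    """Judge whether an imaged cross marker shows distortion.

    A cross whose arms have equal length and meet at a right angle in
    the reference figure must keep those properties in the image:
    uniform scaling is allowed, deformation is not.
    """
    v_vert = (p_top[0] - p_bottom[0], p_top[1] - p_bottom[1])
    v_horz = (p_right[0] - p_left[0], p_right[1] - p_left[1])
    len_v = math.hypot(*v_vert)
    len_h = math.hypot(*v_horz)
    # Arm lengths must match within tolerance (overall scale cancels in the ratio).
    if abs(len_v - len_h) / max(len_v, len_h) > tol:
        return False
    # The arms must remain perpendicular: normalized dot product near zero.
    cos_angle = (v_vert[0] * v_horz[0] + v_vert[1] * v_horz[1]) / (len_v * len_h)
    return abs(cos_angle) <= tol

# An axis-aligned, equal-arm cross: no distortion detected.
assert cross_undistorted((0, 10), (0, -10), (-10, 0), (10, 0))
# A sheared cross, as seen by a tilted camera: distortion detected.
assert not cross_undistorted((3, 10), (-3, -10), (-10, 0), (10, 0))
```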
When the initial calibration of the camera attitude is realized by placing an attitude-output device on the camera, a device that outputs attitude information (for example an attitude sensor or IMU) is attached to the camera, and the camera attitude is adjusted according to the information the device outputs. The specific calibration steps are as follows:
21) Fix the attitude-output device on the camera.
22) Obtain the attitude information output by the device.
23) From that attitude information and the fixed positional relation between the device and the camera, compute the attitude of the camera.
Specifically, when the attitude-output device is installed parallel to the lens plane and perpendicular to the camera axis, the attitude Euler angles it outputs are exactly the attitude Euler angles of the lens plane.
24) Adjust the camera attitude until the camera is perpendicular to the calibration surface (wall or roof).
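The relation in step 23) can be illustrated with a minimal sketch: assuming the attitude-output device is mounted parallel to the lens plane, its Euler angles are taken directly as the lens-plane attitude, and a camera aimed straight up at the roof is aligned when roll and pitch are near zero. The function name and tolerance here are illustrative assumptions only:

```python
def attitude_aligned(roll_deg, pitch_deg, tol_deg=0.5):
    """Step 24) acceptance check under the stated mounting assumption.

    With the IMU parallel to the lens plane, its output Euler angles
    equal the lens-plane attitude (step 23); for a camera pointing
    vertically at the roof, roll and pitch should both read ~0.
    """
    return abs(roll_deg) <= tol_deg and abs(pitch_deg) <= tol_deg

# Small residual angles: calibration complete.
assert attitude_aligned(0.2, -0.3)
# A 5-degree tilt: keep adjusting the camera attitude.
assert not attitude_aligned(5.0, 0.0)
```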
For the above positioning method, further, in step 2) the camera attitude fixing process uses a stabilizer or gimbal so that the camera remains perpendicular to the calibration plane throughout subsequent use, after which the initialization image acquisition process begins.
For the above positioning method, further, the purpose of step 3), the initialization image acquisition process, is to obtain from a known quantity the numerical relation (the imaging proportion) between the image acquired by the camera and the real objects it depicts. In a specific implementation this can be realized with the reference pattern method, or with laser or ultrasonic ranging.
With the reference pattern method, the camera is calibrated using a cross or other shape of known size (this can be the pattern used during initial attitude calibration, or a new pattern). After the camera captures the shape, the known picture-size information together with the focal length and view angle used when photographing yields the current camera-to-surface distance and the number of pixels corresponding to a unit length, thereby giving the numerical relation between the acquired image and the real objects it depicts. The specific steps are as follows:
31) Place a reference pattern of known size on the calibration plane.
32) Capture an image of the reference pattern of known size.
33) From the number of pixels the pattern occupies in the photo obtained in 32), obtain the real size corresponding to each pixel.
Specifically, suppose the reference pattern has known length x and is imaged over y pixels in the photo; then the real size corresponding to each pixel is x/y.
34) From the pixel dimensions of the whole picture, obtain the real size W of the whole photographed picture.
The real size of the whole picture is length and width (z1·x/y) and (z2·x/y) respectively, where x/y is the real size corresponding to each pixel and the known resolution of the photo is z1×z2 pixels.
35) From the camera's view angle α when shooting, compute by trigonometry the distance L between the lens and the photographed object:
L = W / (2 × tan α) (formula 1)
In formula 1, α is the view angle when the camera shoots; W is the real size of the whole photographed picture. This gives the distance between the camera and the photographed object.
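Steps 31) to 35) can be sketched numerically as follows, with a hypothetical reference pattern, resolution, and view angle; the code only restates formula 1 and the pixel-proportion relations above:

```python
import math

# Assumed setup (all numbers hypothetical): a reference bar of known
# real length x that spans y pixels in the photo.
x = 0.50             # real length of the reference pattern, metres
y = 400              # its length in the photo, pixels
z1, z2 = 1920, 1080  # photo resolution, pixels

scale = x / y        # real size covered by one pixel (step 33)
W = z1 * scale       # real width of the whole photographed picture (step 34)
H = z2 * scale       # real height of the whole photographed picture

# Step 35 / formula 1: L = W / (2 * tan(alpha)).
alpha = math.radians(30)       # assumed view angle
L = W / (2 * math.tan(alpha))  # camera-to-surface distance

assert abs(scale - 0.00125) < 1e-9   # 1.25 mm of roof per pixel
assert abs(W - 2.4) < 1e-9           # the picture spans 2.4 m of roof
assert round(L, 4) == 2.0785         # lens-to-roof distance, metres
```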
When initialization image acquisition is performed with laser or ultrasonic ranging, the distance L between the camera and the real objects in the acquired image is measured directly by laser ranging, ultrasonic ranging, or similar means (or ranging and image acquisition are realized directly with a binocular camera); the view angle used when photographing then yields the correspondence between a pixel in the image and the real size it depicts. The specific steps are as follows:
41) Perform image acquisition.
42) Using a laser or ultrasonic rangefinder, measure the distance L between the camera and the real objects in the acquired image.
43) From the camera-to-object distance L and the view angle α when shooting, obtain by trigonometry the real size of the area the image depicts:
W = 2 × L × tan α (formula 2)
44) From the pixel dimensions of the whole picture, the real size corresponding to each pixel is then known.
The real size obtained in step 43) can also be computed from the size W′ of the imaging sensor and the lens-to-sensor distance L′, giving the correspondence between image pixels and the real size they depict. Specifically, from the camera-to-object distance L and the focal distance L′ when shooting, the similar-triangles proportion gives the real size of the area the image depicts:
W = W′ × L / L′
Similarly, from the known size of an object in the photographed picture and the number of pixels that object occupies, the real size corresponding to the full picture can be obtained; then, with the known sensor size W′ and focal distance L′, the similar-triangles proportion W / L = W′ / L′ gives the distance L from the photographed object to the lens.
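The similar-triangles relation W / L = W′ / L′, used in both directions above, can be illustrated with assumed sensor and lens numbers (all values hypothetical):

```python
# The sensor of width W' sits at focal distance L' behind the lens,
# the photographed area of width W at distance L in front of it, so
# W / L = W' / L'.
W_sensor = 0.0062   # sensor width, metres (assumed)
focal = 0.004       # focal distance L', metres (assumed)
L = 2.5             # measured camera-to-roof distance, metres

# Forward direction: real width of the imaged area from the distance.
W = W_sensor * L / focal
assert abs(W - 3.875) < 1e-9

# Reverse direction: a known object of size w_obj covering p of z1
# pixels recovers the full-picture size, and then the distance.
z1 = 1280
w_obj, p = 0.30, 120
W_from_obj = w_obj / p * z1
L_back = W_from_obj * focal / W_sensor
assert abs(W_from_obj - 3.2) < 1e-9
assert abs(L_back - 2.064516129) < 1e-6
```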
In initialization image acquisition, the existing wall or roof can be used for the computation. To simplify the process, visible markers can also be added to the wall or roof to simplify image analysis, for example by projecting a mark onto the roof or wall with visible or invisible light; the camera is then chosen to match the characteristics of the visible or invisible light used. For example, when projecting with infrared light of a certain wavelength, a camera that can image that same infrared wavelength is selected.
For the above positioning method, further, the subsequent continuous computation of step 4) first extracts image contours and feature points (using edge-detection operators and feature-point-detection operators). After initialization image acquisition and attitude fixing, the camera acquires images sequentially; contours and feature points are extracted, the change of feature points between the current frame and the previous frame is analyzed, and from the changes in contours and feature points the camera's movement along its axial direction or in the plane perpendicular to that axis is computed. The specific flow is as follows:
51) From the pictures the camera acquires continuously, extract image contours and/or feature points by edge-detection, feature-point-detection, or other algorithms.
52) Compare the contours and/or feature points extracted from the current frame with those of the previous frame.
53) If the size information of the contours or feature points is unchanged, the camera has not moved axially; if it has changed, the camera has moved along its axis.
54) When the camera has moved axially, obtain the new correspondence between pixels and real size and then, from the current view angle, the new distance between the camera lens and the photographed surface. When the camera has not moved axially, judge whether the position of the contours or feature points within the picture has changed; if so, obtain the actual displacement and direction of the camera from the imaging proportion established during initialization, and determine the camera's real direction of movement from the angle between the imaged and real directions recorded during initialization.
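The decision logic of steps 53) and 54) can be sketched as follows for a single tracked feature; the representation of a feature as (centre x, centre y, apparent size) and the thresholds are illustrative assumptions:

```python
def step_displacement(prev, curr, scale):
    """Decision logic of steps 53)-54) for one tracked feature.

    prev/curr: (cx, cy, size_px) of the same feature in the previous
    and current frame; scale: metres per pixel from initialization.
    Returns an in-plane displacement (dx, dy) in metres, or None when
    the apparent size changed, meaning the camera moved axially and
    the pixel/size correspondence must be re-established.
    """
    (px, py, ps), (cx, cy, cs) = prev, curr
    if abs(cs - ps) / ps > 0.01:
        return None  # size changed: axial motion (step 53)
    # Unchanged size: motion parallel to the imaged plane.  The image
    # shifts opposite to the camera, so the pixel offset is negated.
    return (-(cx - px) * scale, -(cy - py) * scale)

# Feature drifts 40 px left at 1.25 mm/px: the camera moved 5 cm right.
d = step_displacement((400, 300, 50), (360, 300, 50), 0.00125)
assert d is not None and abs(d[0] - 0.05) < 1e-9 and d[1] == 0.0
# Feature grows by 20%: axial motion, no in-plane displacement reported.
assert step_displacement((400, 300, 50), (400, 300, 60), 0.00125) is None
```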
To simplify the computation of axial displacement, a second camera can be added perpendicular to the first and perpendicular to a wall or the roof, so that only movement in the plane parallel to each imaging plane need be computed, yielding the camera's spatial trajectory. Three cameras can also form an orthogonal system, perpendicular respectively to the roof and to two mutually perpendicular walls, so that only each camera's axial displacement need be processed to determine the spatial displacement. When cameras are added, the horizontal and vertical displacements can also all be computed, increasing system redundancy and strengthening reliability and precision.
In the above continuous computation, the determination in step 54) of displacement in the plane parallel to the imaging plane from the continuous frame-to-frame movement can be replaced by the following method: encode digits or symbols (for example a matrix code) with some coding scheme in visible or invisible light, present the codes so that they cover the wall or roof, and after imaging through the camera, perform code recognition; the recognized code determines the camera's position in the plane parallel to the imaging plane.
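The coded-surface alternative can be illustrated with a toy encoding; the grid pitch and the row/column packing of the code value are assumptions for illustration only:

```python
# Sketch of the coded-roof alternative: the roof is covered with a
# grid of unique codes, so decoding whichever code the camera
# currently images yields an absolute in-plane position directly,
# with no frame-to-frame accumulation of displacement.
GRID_PITCH = 0.25  # metres between code centres (assumed)

def code_to_position(code):
    # Assumed packing for this sketch: code = row * 1000 + col.
    row, col = divmod(code, 1000)
    return (col * GRID_PITCH, row * GRID_PITCH)

# The code at the origin of the roof grid.
assert code_to_position(0) == (0.0, 0.0)
# Row 3, column 7 sits 1.75 m along and 0.75 m across the roof.
assert code_to_position(3007) == (1.75, 0.75)
```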
Compared with the prior art, the beneficial effects of the utility model are:
The utility model provides a space positioning apparatus and positioning method for a virtual reality system. One or more cameras are worn on the human body, each fixed at a certain position; image information is acquired continuously and analyzed by image processing algorithms to determine the displacement of the body. This simplifies the arrangement of cameras and other peripheral equipment in a VR positioning scene and is more convenient to use.
Brief description of the drawings
Fig. 1 is a schematic diagram of the camera imaging principle;
In Fig. 1, α is the view angle when the camera shoots; W is the real size of the whole photographed picture; L is the distance between the camera and the photographed object; L′ is the focal distance when shooting; W′ is the size of the imaging sensor; V is the camera lens.
Fig. 2 is a structural block diagram of the space positioning apparatus in the virtual reality system provided by the utility model.
Fig. 3 is a structural block diagram of the integrated device formed by the camera of the imaging unit and the inertial motion-capture equipment in the utility model embodiment.
Fig. 4 is a schematic diagram of the mounting positions on the human body when the imaging unit of the space positioning apparatus includes two mutually perpendicular cameras.
Fig. 5 is a workflow block diagram of the space positioning apparatus provided by the utility model.
Specific embodiment
The utility model is further described below through embodiments with reference to the drawings, without in any way limiting its scope.
The utility model provides a space positioning apparatus in a virtual reality system. One or more cameras are worn on the human body and fixed to it by a stabilizer or gimbal; image information is acquired continuously and analyzed by an image processor to determine the displacement of the body. This simplifies the arrangement of cameras and other peripheral equipment in a VR positioning scene and is more convenient to use.
Fig. 1 is a schematic diagram of the camera imaging principle: a real object is imaged through the lens onto the sensor. In Fig. 1, α is the view angle when the camera shoots; W is the real size of the whole photographed picture; L is the distance between the camera and the photographed object; L′ is the focal distance when shooting; W′ is the size of the imaging sensor; V is the camera lens.
The structure of the space positioning apparatus in the virtual reality system provided by the utility model is shown in Fig. 2. The apparatus comprises a camera attitude calibration module, a camera attitude fixing module, an initialization image acquisition and processing module, and a sequential image processing module. The camera attitude calibration module includes the imaging unit and the calibration plane; the imaging unit includes one or more cameras, and the calibration module makes the camera axis perpendicular to the calibration plane. The camera attitude fixing module includes a stabilizer or gimbal that fixes the cameras of the imaging unit to the human body. The initialization image acquisition and processing module acquires an image through the camera and obtains the spatial proportion between the acquired image and the real objects it depicts. The sequential image processing module applies image processing to the images acquired continuously during body movement, computing the camera's axial movement and its displacement and direction in the plane perpendicular to the axis, obtaining the displacement of the body and thereby realizing spatial positioning in the virtual reality system.
The space positioning apparatus provided by the utility model can be used together with inertial motion-capture equipment; Fig. 3 is a structural block diagram of the integrated device formed by the camera of the imaging unit and the motion-capture equipment. The camera is combined with one IMU unit of the motion-capture equipment; the spatial coordinates determined by the camera are combined with the relative coordinate system established by the motion-capture equipment, so the spatial coordinates of any node of the equipment can be computed. This solves the problem that inertial motion capture alone cannot provide exact spatial positioning. When the photographed surface is strongly reflective and the continuous change of successively photographed images cannot be obtained, displacement can be computed from the IMU unit as a supplement; meanwhile, the IMU unit integrated with the camera can measure camera shake to assist image stabilization.
In the following embodiment a narrow-view-angle camera is mounted overhead, facing upward. The camera of the imaging unit is connected by USB cable or another connection to a VR backpack host, an external host, or another processor (including a graphics processing unit). A laser rangefinder measures the direct distance between the camera and the roof, and the camera acquires image information. From the focal length and view angle used when photographing, the real size corresponding to the whole photo (the acquired image) is computed; because the camera's pixel dimensions are known, the correspondence between real size and pixels is obtained. The processor then analyzes the consecutive image information the camera acquires and extracts image features, for example by grayscale processing and contrast enhancement to obtain contours or feature points; from the continuous variation of these features, the camera's change of position, that is, the wearer's change of position, can be computed.
Two or more cameras can also be mounted on the body (multiple cameras at different positions; each may be monocular, binocular, or multi-lens). This avoids the situation where a single camera fails to capture suitable image features or only photographs objects that are themselves moving, since the cameras can complement one another and their processing results can be fused. For example, as shown in Fig. 4, when two cameras are mutually perpendicular, the Z-axis direction (camera axis) of one camera is also the X or Y axis of the other, so in image processing each camera need only analyze two axial directions, reducing the complexity of the processing routine.
When performing space positioning with the space positioning apparatus in the above virtual reality system, the camera is mounted on the human body perpendicular to the roof and/or wall, so that the image acquired by the camera moves as the human body moves; the objects in the image, and the feature points or object edges in the image, move with the movement of the human body. The camera continuously acquires images of surrounding objects, the images are analysed and processed, and the information on the continuous position change within the picture is obtained, thereby realizing space positioning in the virtual reality system. The method mainly comprises initial calibration of the camera attitude, fixing the camera attitude, initialisation image acquisition and processing, and sequential image processing:
1) Initial camera attitude calibration: making the camera axis perpendicular to the calibration plane;
2) Fixing the camera attitude, so that the camera always maintains the initially calibrated attitude during use;
3) Initialisation image acquisition: acquiring an image with the camera and, from a known quantity, obtaining the numerical relation (space proportion relation) between the acquired image and the real objects it corresponds to;
4) Sequential image processing: acquiring images continuously, extracting image contours and feature points, and analysing the change of feature points between the current frame and the previous frame; from the change of contours and feature points, the movement of the camera along its axial direction and in the plane perpendicular to the axis is calculated, and space positioning in the virtual reality system is realized from the moving distance and direction.
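Assuming steps 1) and 2) have already been completed (the camera axis stays perpendicular to the roof), steps 3) and 4) can be sketched as the following illustrative loop. The frame summary format, function names, and numbers here are assumptions for illustration, not part of the patent:

```python
def track(frames, scale, size_tol=0.01):
    """Each frame is summarised as (feature_x_px, feature_y_px,
    contour_size_px); `scale` is the metres-per-pixel factor from
    step 3. Accumulates the in-plane path of the wearer (step 4),
    skipping frame pairs whose contour size change indicates axial
    motion (those would require re-deriving the scale)."""
    path = [(0.0, 0.0)]
    for prev, curr in zip(frames, frames[1:]):
        if abs(curr[2] - prev[2]) / prev[2] > size_tol:
            continue  # axial motion between these frames
        du, dv = curr[0] - prev[0], curr[1] - prev[1]
        x, y = path[-1]
        # the wearer moves opposite to the apparent image shift
        path.append((x - du * scale, y - dv * scale))
    return path

frames = [(320, 240, 200), (330, 240, 200), (330, 250, 200)]
path = track(frames, 0.002)  # 2 mm per pixel
```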
The purpose of the initial camera attitude calibration is to make the camera axis perpendicular to the calibration plane, that is, to make the camera lens plane parallel to the photographed surface; the calibration plane can be the roof or a wall.
In specific implementation, the initial calibration of the camera attitude can be carried out with a reference marker pattern; it can also be realized by placing a device that outputs attitude information on the camera.
The reference marker pattern method is as follows: a cross or a pattern of another shape (such as a circle, square, or regular polygon) is set on the calibration plane as the reference marker pattern. The image obtained when the camera photographs the reference marker pattern is compared with the reference marker pattern itself to judge whether there is deformation distortion (distortion in the utility model means a change of shape; equal-proportion scaling is not distortion). If there is distortion, the lens plane is not parallel to the photographed surface; if there is no distortion, the two are parallel. The calibration process therefore aims to make the imaging of the marker pattern in the camera distortion-free.
When this reference marker pattern calibration is used, the initial calibration of the camera attitude comprises the following steps:
11) Setting a cross or a pattern of another shape (such as a circle, square, or regular polygon) on the calibration plane as the reference marker pattern;
12) Acquiring an image of the above reference marker pattern with the camera, obtaining the imaged pattern in the camera;
13) Analysing whether the imaged pattern in the camera is distorted relative to the reference marker pattern, and adjusting the camera attitude until the imaging obtained in the camera shows no distortion of the reference marker pattern, which completes the initial calibration of the camera attitude.
During calibration, whether the pattern is distorted is identified by judging whether the imaged shape on the photo is still the original shape (a pure size change is not distortion). For example, when the reference marker pattern used for calibration is a cross, distortion can be identified by comparing whether the two imaged arm lengths are equal and whether the included angle is a right angle.
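The cross check above can be sketched as follows (tolerances and pixel values are illustrative assumptions, not from the patent):

```python
def cross_is_undistorted(arm_a_px, arm_b_px, angle_deg,
                         len_tol=0.02, ang_tol=1.0):
    """A correctly aligned camera images the cross with equal arm
    lengths and a right angle between them; a uniform size change
    alone (equal-proportion scaling) is not distortion."""
    arms_equal = abs(arm_a_px - arm_b_px) / max(arm_a_px, arm_b_px) <= len_tol
    right_angle = abs(angle_deg - 90.0) <= ang_tol
    return arms_equal and right_angle

aligned = cross_is_undistorted(200, 200, 90.0)  # lens plane parallel
tilted = cross_is_undistorted(200, 170, 84.0)   # foreshortened arm, skewed angle
```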
In specific implementation, the initial calibration of the camera attitude can also be realized by placing a device that outputs attitude information on the camera. Specifically, a device capable of outputting attitude information (such as an attitude sensor) is attached to the camera, and the camera attitude is adjusted according to the attitude information output by the device until the camera is perpendicular to the calibration plane. The specific calibration steps are as follows:
21) Fixing the attitude-output device on the camera;
22) Obtaining the attitude information from the attitude-output device;
23) Calculating the attitude of the camera from the attitude information and the fixed positional relation between the attitude-output device and the camera;
Specifically, when the attitude-output device is installed parallel to the lens plane and perpendicular to the camera axis, the attitude Euler angles output by the device are exactly the attitude Euler angles of the lens plane.
24) Adjusting the attitude of the camera until the camera is perpendicular to the calibration surface (wall or roof).
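A minimal sketch of the attitude check in steps 22) to 24), assuming a horizontal roof so that the camera axis is vertical when the lens-plane roll and pitch are near zero (the tolerance and angle values are illustrative assumptions):

```python
def lens_plane_level(roll_deg, pitch_deg, tol_deg=0.5):
    """With the attitude sensor mounted parallel to the lens plane and
    perpendicular to the camera axis, its roll/pitch Euler angles are
    the lens-plane angles; the axis is perpendicular to a horizontal
    roof when both are (near) zero."""
    return abs(roll_deg) <= tol_deg and abs(pitch_deg) <= tol_deg

def correction(roll_deg, pitch_deg):
    """Step 24: the adjustment to apply is simply the negated angles."""
    return (-roll_deg, -pitch_deg)

ok = lens_plane_level(0.2, -0.3)   # within tolerance: calibrated
fix = correction(3.5, -1.0)        # out of tolerance: rotate back
```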
For the space positioning method in the above virtual reality system, further, after the above calibration process is completed, a stabilizer or gimbal (pan-tilt head) keeps the camera perpendicular to the calibration plane throughout subsequent use, and the method then enters the initialisation image acquisition process.
The purpose of the initialisation image acquisition process is to obtain, from a known quantity, the numerical relation between the image acquired by the camera and the real objects it corresponds to (the proportional relation between the two during imaging). In specific implementation, this can be realized with a reference pattern method, laser ranging, or ultrasonic ranging.
When the reference pattern method is used for the initialisation image acquisition process, the camera is calibrated with a cross or other shape of known size (which can be the pattern used when calibrating the camera attitude, or a new pattern). After the camera captures the shape, the distance between the camera and the known shape, and the number of pixels corresponding to a unit length, are deduced from the known size information of the picture and from the focal length and viewing angle at the time of shooting. The numerical relation between the image acquired by the camera and the corresponding real objects is thereby obtained. The specific steps are as follows:
31) Placing a reference pattern of known size on the calibration plane;
32) Acquiring an image of the above reference pattern of known size;
33) From the number of pixels that the image obtained in step 32) occupies on the photo, obtaining the actual object size corresponding to each pixel;
Specifically, assuming the known length of the reference pattern is x and its imaged length on the photo is y pixels, the actual object size corresponding to each pixel is x/y.
34) From the pixel count of the entire picture, obtaining the actual size W of the entire photographed picture;
The actual size of the entire picture is its length and width, respectively (z1*x/y) and (z2*x/y), where x/y is the actual object size corresponding to each pixel and the picture resolution is z1*z2 pixels;
35) From the size of the viewing angle α when the camera shoots, the distance L between the camera lens and the actual photographed object is calculated by the trigonometric relation:

L=W/(2 × tan α) (formula 1)

In formula 1, α is the viewing angle when the camera shoots; W is the actual size of the entire photographed picture.
The distance between the camera and the actual photographed object is thereby obtained.
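Steps 33) to 35) can be sketched numerically as follows, following the patent's convention in formulas 1 and 2 (the reference length, pixel counts, and angle below are illustrative assumptions):

```python
import math

def frame_actual_size(ref_len_m, ref_len_px, res_px):
    """Steps 33-34: each pixel covers ref_len_m/ref_len_px metres,
    so a frame of res_px pixels spans W = res_px * ref_len_m / ref_len_px."""
    return res_px * ref_len_m / ref_len_px

def lens_distance(W, alpha_deg):
    """Formula 1 (the inverse of formula 2): L = W / (2 * tan(alpha))."""
    return W / (2.0 * math.tan(math.radians(alpha_deg)))

# 0.30 m reference arm imaged as 150 px, 1920 px wide frame, alpha = 30 deg
W = frame_actual_size(0.30, 150, 1920)  # 3.84 m of roof across the frame
L = lens_distance(W, 30.0)
```

Note that with this convention, substituting L back into formula 2 recovers W, which makes the round trip a convenient self-check.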
When laser ranging or ultrasonic ranging is used for the initialisation image acquisition process, specifically: the direct distance L between the camera and the real objects in the acquired image is measured by laser ranging, ultrasonic ranging, or a similar means, or ranging and image acquisition are realized directly with a binocular camera; the correspondence between a pixel in the image and the actual size of the real object is then obtained from the size of the viewing angle when shooting. The specific steps are as follows:
41) Acquiring an image;
42) Measuring the distance L between the camera and the real objects in the acquired image with a laser range finder or ultrasonic range finder;
43) From the above distance L between the camera and the photographed objects and the viewing angle α when shooting, obtaining the size of the actual object corresponding to the captured image by the trigonometric relation:
W=2 × L × tan α (formula 2)
44) Then, since the pixel count of the entire picture is known, the actual size corresponding to each pixel is obtained.
The size of the actual object corresponding to the captured image obtained in step 43) can also be calculated from the size W' of the imaging photosensitive element and the distance L' from the lens to the photosensitive element, giving the correspondence between a pixel in the image and the actual size of the real object. Specifically, from the above distance L between the camera and the photographed objects and the focal distance L' when shooting, the similar-triangle proportional relation shows that the size of the actual object corresponding to the captured image is:

W=W' × L/L'
Similarly, from the known size of an object in the photographed picture and the number of pixels that object occupies in the picture, the actual size corresponding to the full picture can be obtained; then, with the known photosensitive-element size W' and focal distance L', the similar-triangle proportional relation L=L' × W/W' gives the distance L from the photographed object to the lens.
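The similar-triangle relation and its rearrangement can be sketched as below (the sensor size, focal distance, and roof distance are illustrative assumptions):

```python
def object_plane_width(sensor_w_m, focal_m, distance_m):
    """Similar triangles: W / L = W' / L', so W = W' * L / L'
    (W' = photosensitive-element size, L' = lens-to-element distance)."""
    return sensor_w_m * distance_m / focal_m

def object_distance(sensor_w_m, focal_m, frame_w_m):
    """The same relation rearranged: L = L' * W / W'."""
    return focal_m * frame_w_m / sensor_w_m

W = object_plane_width(0.0064, 0.004, 2.5)  # 6.4 mm sensor, 4 mm focal, 2.5 m roof
L = object_distance(0.0064, 0.004, W)       # recovers the 2.5 m distance
```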
In the initialisation image acquisition and imaging, the existing wall or roof can be used for the calculation. To simplify this process and the image analysis, a visible marker can also be added on the wall or roof, for example by projecting a mark onto the roof or wall with visible or non-visible light; in that case, a camera suited to the characteristics of the visible or non-visible light must be selected accordingly. For example, when infrared light of a certain wavelength is used for projection, a camera that can image infrared light of the same wavelength is selected.
The subsequent continuous calculation process first extracts the image contours and feature points (using an edge detection operator and a feature point detection algorithm). After the initialisation image acquisition and the fixing of the camera attitude, the camera acquires images sequentially; the image contours and feature points are extracted and the change of feature points between the current frame and the previous frame is analysed. From the change of contours and feature points, the movement of the camera along its axial direction or in the plane perpendicular to the axis is calculated. The specific flow is as follows:
51) Extracting the image contours and feature points from the pictures continuously acquired by the camera, using an edge detection algorithm or other algorithms;
52) Comparing the image contours or feature points extracted from the current frame with those of the previous frame;
53) When the direct size information of the image contours or feature points is unchanged, the camera has not moved in the axial direction; when it has changed, the camera has been displaced in the axial direction;
54) When the camera has been displaced in the axial direction, obtaining the changed correspondence between pixels and actual size, and then, from the current viewing angle, the distance between the current camera lens surface and the photographed surface. When the camera has not moved in the axial direction, judging whether the position of the image contours or feature points within the picture has changed; if it has, the displacement and direction of the actual camera are obtained from the imaging scale relation of the initialisation image acquisition process, and the real moving direction of the camera is then determined from the angle between the imaging and the real direction in the initialisation process.
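The decision in steps 53) and 54) can be sketched as follows. This is an illustrative assumption of how the frame comparison could be organised (the frame summary format, tolerance, and numbers are not from the patent):

```python
import math

def classify_motion(prev, curr, alpha_deg, frame_w_px, scale, size_tol=0.01):
    """prev/curr = (feature_x_px, feature_y_px, contour_size_px).
    If the contour size changed, the camera moved axially: the new
    metres-per-pixel scale follows from the size ratio, and the new
    lens-to-surface distance from formula 1. Otherwise any feature
    shift is an in-plane move, opposite to the image shift."""
    ratio = curr[2] / prev[2]
    if abs(ratio - 1.0) > size_tol:
        new_scale = scale / ratio   # closer surface -> bigger contour -> finer scale
        W = frame_w_px * new_scale  # frame width on the surface
        L = W / (2.0 * math.tan(math.radians(alpha_deg)))  # formula 1
        return ("axial", new_scale, L)
    du, dv = curr[0] - prev[0], curr[1] - prev[1]
    return ("in-plane", -du * scale, -dv * scale)

kind, sx, sy = classify_motion((320, 240, 200), (325, 240, 200),
                               30.0, 1920, 0.002)
```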
To simplify the calculation of the camera's axial displacement, a camera perpendicular to the original camera and perpendicular to the wall or roof can be added, so that each camera only needs to calculate movement in the plane parallel to its imaging plane, and the spatial motion track of the camera can be obtained. An orthogonal system can also be formed by three cameras, respectively perpendicular to the roof and to two mutually perpendicular walls, so that only the axial displacement of each camera needs to be processed and the spatial displacement of the camera can be determined. When cameras are added, or when the horizontal and vertical displacements are all calculated, the redundancy of the system is increased and its reliability and precision are strengthened.
During above-mentioned Continuous plus, step 54) according to the displacement in the continuous mobile determination horizontal direction per two field picture Can adopt and substitute with the following method:With in kind, visible ray or non-visible light by certain coded system it is encoded obtain numeral or Symbol (such as matrix), coding is presented on metope metope is covered, through camera imaging after, carry out code identification, The horizontal plane position where coding determination camera according to identification.
In specific implementation, the above space positioning apparatus can be applied to motion capture equipment. In operation, the camera is combined with one node of the motion capture equipment, and the motion capture equipment itself establishes a coordinate system in which each captured node has its coordinate position. Once the node combined with the camera has determined space coordinates, the coordinates of every node of the motion capture equipment can be converted into the space coordinate system, realizing space positioning of each node. Marks can be made on the roof or wall, and the position can be calibrated by aiming the camera at the mark points to reduce error; the mark points can be light spots or specific patterns. A mark point can use visible or non-visible light of a special frequency band, matched with a camera for the same band. Ranging can also be realized with a binocular camera.
For example, a regular character matrix is irradiated onto the roof by laser light. A picture is then shot directly by the camera and digit recognition is carried out on the digits in the picture; the horizontal position of the current camera is determined from the correspondence between the matrix and the actual room size, and the vertical position is then calculated from the distance between the digits of the matrix or the size of the digits analysed above, thereby realizing space positioning.
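The horizontal lookup from a recognised digit can be sketched as below; the grid layout and cell sizes are hypothetical illustrations, since the patent does not fix a specific matrix:

```python
def locate_by_code(grid, digit, cell_w_m, cell_h_m):
    """Looks up a recognised digit in the projected matrix and returns
    the centre of its cell in room coordinates; the cell sizes come
    from the correspondence between the matrix and the actual room size."""
    for r, row in enumerate(grid):
        for c, d in enumerate(row):
            if d == digit:
                return ((c + 0.5) * cell_w_m, (r + 0.5) * cell_h_m)
    return None  # digit not in the code matrix

grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # hypothetical 3x3 code
pos = locate_by_code(grid, 5, 1.0, 1.0)   # digit 5 -> centre of a 3 m room
```

The vertical position would then come separately, from the apparent spacing or size of the digits as described above.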
It should be noted that the purpose of disclosing the embodiments is to help further understand the utility model, but those skilled in the art will appreciate that various replacements and modifications are possible without departing from the spirit and scope of the utility model and the appended claims. Therefore, the utility model should not be limited to the content disclosed in the embodiments; the scope of protection claimed is defined by the scope of the claims.

Claims (6)

1. A space positioning apparatus in a virtual reality system, comprising a camera attitude calibration module, a camera attitude fixing module, an initialisation image acquisition and processing module, and a sequential image processing module; the camera attitude calibration module comprises an imaging unit; the imaging unit comprises one or more cameras; the camera attitude calibration module is used for initially calibrating the camera attitude; the camera attitude fixing module comprises a stabilizer or gimbal, by which the camera in the imaging unit is fixedly attached to the human body so that the camera always remains perpendicular to the calibration plane during subsequent use; the initialisation image acquisition and processing module comprises an initialisation image reference pattern, a laser range finder, or an ultrasonic range finder, and acquires images with the camera to obtain the space proportion numerical relation between the acquired image and the real objects it corresponds to; the sequential image processing module performs image processing on the images continuously acquired during human motion, calculates the movement of the camera along its axial direction and the displacement and direction in the plane perpendicular to the axis, and obtains the displacement of the human body, thereby realizing space positioning in the virtual reality system.
2. The space positioning apparatus as claimed in claim 1, characterized in that the one or more cameras in the imaging unit are mounted at the same position or at different positions on the human body; each camera is a monocular, binocular, or multi-lens camera.
3. The space positioning apparatus as claimed in claim 1, characterized in that the camera attitude calibration module comprises the imaging unit, a calibration plane, and a reference marker pattern, the reference marker pattern being arranged on the calibration plane; or the camera attitude calibration module comprises the imaging unit and an IMU unit, the IMU unit being installed parallel to the lens plane and perpendicular to the camera axis, so that the attitude Euler angles output by the attitude-output device in the IMU unit are the attitude Euler angles of the lens plane, whereby the camera attitude is initially calibrated.
4. The space positioning apparatus as claimed in claim 3, characterized in that the attitude-output device is an attitude sensor.
5. The space positioning apparatus as claimed in claim 3, characterized in that the calibration plane is one or both of a roof and a wall; the camera is fixedly connected to the human body perpendicular to the calibration plane; the reference marker pattern is a cross pattern, circular pattern, square pattern, or regular polygon pattern.
6. The space positioning apparatus as claimed in claim 1, characterized in that the initialisation image acquisition and processing module calibrates the camera by placing a reference pattern of known size on the calibration plane; or performs the initialisation image acquisition by connecting a laser range finder or ultrasonic range finder with the camera.
CN201621440089.4U 2016-12-26 2016-12-26 A kind of space positioning apparatus in virtual reality system Expired - Fee Related CN206300653U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201621440089.4U CN206300653U (en) 2016-12-26 2016-12-26 A kind of space positioning apparatus in virtual reality system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201621440089.4U CN206300653U (en) 2016-12-26 2016-12-26 A kind of space positioning apparatus in virtual reality system

Publications (1)

Publication Number Publication Date
CN206300653U true CN206300653U (en) 2017-07-04

Family

ID=59205318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201621440089.4U Expired - Fee Related CN206300653U (en) 2016-12-26 2016-12-26 A kind of space positioning apparatus in virtual reality system

Country Status (1)

Country Link
CN (1) CN206300653U (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106643699A (en) * 2016-12-26 2017-05-10 影动(北京)科技有限公司 Space positioning device and positioning method in VR (virtual reality) system
CN106643699B (en) * 2016-12-26 2023-08-04 北京互易科技有限公司 Space positioning device and positioning method in virtual reality system
CN108196258A (en) * 2017-12-26 2018-06-22 青岛小鸟看看科技有限公司 Method for determining position and device, the virtual reality device and system of external equipment
CN110288650A (en) * 2019-05-27 2019-09-27 盎锐(上海)信息科技有限公司 Data processing method and end of scan for VSLAM
CN110288650B (en) * 2019-05-27 2023-02-10 上海盎维信息技术有限公司 Data processing method and scanning terminal for VSLAM

Similar Documents

Publication Publication Date Title
CN106643699A (en) Space positioning device and positioning method in VR (virtual reality) system
CN110462686B (en) Apparatus and method for obtaining depth information from a scene
JP6484729B2 (en) Unmanned aircraft depth image acquisition method, acquisition device, and unmanned aircraft
CN108022302B (en) Stereo display device of Inside-Out space orientation's AR
CN106168853B (en) A kind of free space wear-type gaze tracking system
CA2875820C (en) 3-d scanning and positioning system
US20140218281A1 (en) Systems and methods for eye gaze determination
CN106840112B (en) A kind of space geometry measuring method measured using free space eye gaze point
US20070076090A1 (en) Device for generating three dimensional surface models of moving objects
CN111353355B (en) Motion tracking system and method
CN206300653U (en) A kind of space positioning apparatus in virtual reality system
CN111899276A (en) SLAM method and system based on binocular event camera
US20070116457A1 (en) Method for obtaining enhanced photography and device therefor
CN105387847A (en) Non-contact measurement method, measurement equipment and measurement system thereof
CN111445528A (en) Multi-camera common calibration method in 3D modeling
CN104732586B (en) A kind of dynamic body of 3 D human body and three-dimensional motion light stream fast reconstructing method
CN107449403B (en) Time-space four-dimensional joint imaging model and application
CN113487674A (en) Human body pose estimation system and method
Chen et al. Spatial localization of EEG electrodes in a TOF+ CCD camera system
US7839490B2 (en) Single-aperture passive rangefinder and method of determining a range
CN108981690A (en) A kind of light is used to fusion and positioning method, equipment and system
CN114004880B (en) Point cloud and strong reflection target real-time positioning method of binocular camera
CN116295327A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN107884930A (en) Wear-type device and control method
CN110120062B (en) Image processing method and device

Legal Events

Date Code Title Description
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170704

Termination date: 20201226