CN106643699B - Space positioning device and positioning method in virtual reality system - Google Patents


Info

Publication number
CN106643699B
CN106643699B (application CN201611215923.4A)
Authority
CN
China
Prior art keywords
camera
image
calibration
plane
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611215923.4A
Other languages
Chinese (zh)
Other versions
CN106643699A (en)
Inventor
弭强 (Mi Qiang)
王礼辉 (Wang Lihui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Reciprocity Technology Co., Ltd.
Original Assignee
Beijing Reciprocity Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Reciprocity Technology Co., Ltd.
Priority to CN201611215923.4A
Publication of CN106643699A
Application granted
Publication of CN106643699B
Legal status: Active

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a spatial positioning device and positioning method for a virtual reality system. The spatial positioning device comprises a camera pose calibration module, a camera pose fixing module, an initialization image acquisition and processing module, and a continuous image processing module. One or more cameras are worn on the human body, each fixed to a body part; as the body moves, the continuously acquired images are processed to calculate the camera's movement along its axis and the distance and direction of its movement in the plane perpendicular to that axis. This yields the displacement of the human body and thereby realizes spatial positioning in the virtual reality system. The scheme dispenses with peripheral equipment such as cameras installed around the VR positioning scene, making the system more convenient to use.

Description

Space positioning device and positioning method in virtual reality system
Technical Field
The present invention relates to virtual reality and augmented reality technologies, and in particular, to a spatial positioning device and a positioning method in a virtual reality system.
Background
Virtual Reality (VR) and Augmented Reality (AR) technologies (collectively referred to herein as VR technology) use computer technology and various sensor technologies to generate, within a defined space, virtual environments with visual, auditory, and tactile effects, or to superimpose such effects on the real environment.
In VR systems, many application scenarios involve indoor spatial positioning, which enables better interaction between the real world and the virtual world. Existing indoor positioning technology is mainly implemented in two ways. In the first, reflective markers are placed on the human body, several cameras arranged around the periphery of the scene capture images of the markers, and the spatial positions are determined by image processing. This approach requires erecting multiple cameras, which is costly and cumbersome to set up, and the markers are easily occluded as the body moves, so accuracy is low. In the second, laser transmitters placed outside the scene scan the entire usable area, and laser receivers worn on the body compute the current position from the received signals. Like the first approach, it requires equipment at the periphery of the scene and is likewise subject to occlusion. In addition, both approaches strictly limit the user's range of movement: positioning fails once the user steps outside it. Moreover, existing inertial motion capture technology cannot by itself provide accurate spatial positioning.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a spatial positioning method and device for a virtual reality system. One or more cameras are worn on the human body and fixed to particular body parts; image information is continuously acquired by the cameras and analyzed by an image processing algorithm to determine the displacement of the body. This dispenses with peripheral equipment such as cameras installed around the VR positioning scene and makes the system more convenient to use.
The principle of the invention is as follows. In a virtual reality system, spatial positioning requires tracking the movement of the human body in the real world so that the body model in the virtual world moves along the same trajectory, keeping the real body and the virtual body model synchronized. The invention mounts on the human body a camera aimed perpendicular to the roof and/or a wall. The camera moves with the body, so the image it captures moves with it (the position of an object in the image shifts as the camera shifts). By continuously capturing images of the surrounding walls and other objects and analyzing the continuous change of their positions in the picture, spatial positioning in the virtual reality system is realized.
The technical scheme provided by the invention is as follows:
a space positioning device in a virtual reality system comprises a camera gesture calibration module, a camera gesture fixing module, an initialization image acquisition processing module and a continuous image processing module; the camera pose calibration module comprises an imaging unit, a calibration plane and a reference graph, or comprises an imaging unit and an IMU unit (the inertial measurement unit IMU unit is a device for measuring the three-axis pose of an object, inertial Measurement Unit); the imaging unit comprises one or more cameras; the camera gesture calibration module is used for carrying out initial calibration on the camera gesture; the camera gesture fixing module comprises a stabilizer or a cradle head and is used for fixedly connecting a camera in the imaging unit to a human body through the stabilizer or the cradle head, so that the camera is always kept perpendicular to the calibration plane in the subsequent use process; the initialization image acquisition processing module acquires images through a camera by adopting an initialization image reference pattern or a laser range finder (or an ultrasonic range finder) and acquires a space proportion numerical relation between the images acquired by the camera and a real object corresponding to the images; the continuous image processing module is used for carrying out image processing on images obtained by continuous acquisition in the human body moving process, and calculating to obtain the movement of the camera in the axial direction and the movement distance and direction of the camera in a plane perpendicular to the axial direction, so as to obtain the displacement of the human body, thereby realizing the space positioning in the virtual reality system.
The cameras of the imaging unit can be mounted at the same or different parts of the human body and can be monocular, binocular, or multi-lens cameras. The calibration plane can be the roof and/or a wall; the camera fixed on the body is perpendicular to the roof and/or the wall surface.
In one implementation, the camera pose calibration module comprises the imaging unit, the calibration plane, and a reference pattern: a reference mark pattern (such as a cross, circle, square, or regular polygon) is placed on the calibration plane, and the initial pose calibration uses this pattern to make the camera axis perpendicular to the calibration plane. In another implementation, the module comprises the imaging unit and an IMU unit: a device that outputs attitude information (the IMU unit) is mounted on the camera; when the device is installed parallel to the plane of the camera lens and perpendicular to the camera axis, the attitude Euler angles it outputs are the attitude Euler angles of the lens plane.
In one implementation, the initialization image acquisition and processing module places a reference pattern on the calibration plane, or connects a laser (or ultrasonic) range finder to the camera; the camera is then calibrated against a reference pattern of known size, or the initialization acquisition is performed using laser or ultrasonic ranging.
The spatial positioning device can be used together with an inertial motion capture device. In that case the camera pose calibration module comprises an imaging unit and an IMU unit, and one IMU unit of the motion capture device is combined with a camera into an integrated imaging device, for example with the IMU mounted at the bottom of the camera, parallel to the lens plane. Since the motion capture device contains several IMU units, the spatial positioning device is mounted on the body so that the camera is perpendicular to the roof, to a wall, or to both. Images acquired by the camera are processed to determine the spatial coordinate position of the real body; the spatial coordinates of any node of the motion capture device are then computed from the relative coordinate system the motion capture device establishes. Combining the camera with an IMU unit of the motion capture device solves the problem that inertial motion capture alone cannot provide accurate spatial positioning. When the surface the camera photographs reflects so strongly that continuous image change cannot be observed, displacement can instead be computed from the IMU inertial unit as a supplement. The IMU integrated with the camera can also assist anti-shake processing according to the shake it measures.
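For illustration only, the following minimal Python sketch (assuming NumPy, IMU samples already gravity-compensated and rotated into the world frame, and a hypothetical function name) shows how displacement could be bridged by double-integrating IMU accelerations while image tracking is unavailable:

```python
import numpy as np

def imu_displacement(accels_ms2, dt_s, v0=None):
    """Bridge a short image-tracking outage by double-integrating IMU
    accelerations (gravity removed, rotated into the world frame).
    Integration drift grows quickly, so this is only a stop-gap."""
    v = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float).copy()
    p = np.zeros(3)
    for a in accels_ms2:
        v += np.asarray(a, dtype=float) * dt_s   # acceleration -> velocity
        p += v * dt_s                            # velocity -> position
    return p

# 0.2 s of samples at 100 Hz with a constant 0.5 m/s^2 along x:
print(imu_displacement([[0.5, 0.0, 0.0]] * 20, dt_s=0.01))  # ~[0.0105, 0, 0] m
```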
The invention further provides a spatial positioning method in a virtual reality system. A camera perpendicular to the roof and/or a wall is mounted on the human body, so that the image acquired by the camera, and the objects, feature points, or object edges within it, move as the body moves. Images of surrounding objects are continuously acquired and analyzed to obtain the continuous change of positions in the picture, realizing spatial positioning in the virtual reality system. The method mainly comprises initial calibration of the camera pose, fixing of the camera pose, initialization image acquisition and processing, and continuous image processing:
1) Initial camera pose calibration: make the camera axis perpendicular to the calibration plane;
2) Camera pose fixing: keep the camera in the initially calibrated pose throughout use;
3) Initialization image acquisition: acquire an image with the camera and, from a known quantity, obtain the numerical relation (spatial proportion) between the acquired image and the real object it depicts;
4) Continuous image processing: acquire images continuously, extract image contours and feature points, analyze their changes between the current frame and the previous frame, calculate the camera's movement along its axis and within the plane perpendicular to the axis, and realize spatial positioning in the virtual reality system from the distance and direction of the movement.
In step 1), the initial calibration of the camera pose can be performed with a reference mark pattern, or by placing a device that outputs attitude information on the camera.
When the reference mark pattern is used, a pattern placed on the calibration plane serves as the reference mark; the image obtained when the camera photographs the mark is compared with the mark itself to judge whether it is distorted (distortion here means deformation; scaling in equal proportion is not distortion). Distortion indicates that the lens plane is not parallel to the photographed plane, so the calibration consists of adjusting the camera until the image of the mark in the camera is undistorted. The initial pose calibration proceeds as follows:
11) A cross or other pattern (e.g., circle, square, regular polygon) is placed on the calibration plane as the reference mark pattern;
12) The camera photographs the reference mark pattern, giving the pattern as imaged in the camera;
13) The imaged pattern is compared with the reference mark pattern for distortion, and the camera pose is adjusted until the imaged pattern is undistorted, which completes the initial calibration of the camera pose.
During this calibration, distortion is recognized by judging whether the shape in the photograph is still the original shape (only its size may change). For example, when the reference mark is an upright cross, distortion can be detected by checking whether the two imaged arms are equal and their included angle is a right angle.
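As a sketch of this distortion test (illustrative only; it assumes the four arm-endpoint pixel coordinates of an upright cross with equal arms have already been detected), the equal-arms and right-angle check can be written as:

```python
import numpy as np

def cross_is_undistorted(left, right, top, bottom, tol=0.02):
    """True if the imaged '+' mark is merely scaled (lens plane parallel
    to the calibration plane); False if foreshortened or skewed."""
    left, right, top, bottom = map(np.asarray, (left, right, top, bottom))
    h = right - left                    # horizontal arm vector (pixels)
    v = bottom - top                    # vertical arm vector (pixels)
    len_ratio = np.linalg.norm(h) / np.linalg.norm(v)
    cos_angle = abs(h @ v) / (np.linalg.norm(h) * np.linalg.norm(v))
    return abs(len_ratio - 1.0) < tol and cos_angle < tol  # equal arms, right angle

print(cross_is_undistorted((0, 50), (100, 50), (50, 0), (50, 100)))  # True
print(cross_is_undistorted((0, 52), (80, 48), (50, 0), (50, 100)))   # False
```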
When the initial pose calibration uses a device placed on the camera that outputs attitude information (such as an attitude sensor or IMU), the camera pose is adjusted according to the attitude information the device outputs. The calibration steps are:
21) Fix the attitude-output device on the camera;
22) Read the attitude information from the device;
23) Compute the camera pose from the attitude information and the fixed positional relation between the device and the camera;
specifically, when the device is installed parallel to the lens plane and perpendicular to the camera axis, the attitude Euler angles it outputs are the attitude Euler angles of the lens plane;
24) Adjust the camera pose until the camera is perpendicular to the calibration surface (wall or roof).
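A minimal sketch of step 24), assuming the IMU is mounted parallel to the lens plane, the calibration plane is horizontal (a roof), and hypothetical read_euler()/adjust() interfaces to the sensor and gimbal:

```python
def camera_perpendicular(roll_deg, pitch_deg, tol_deg=0.5):
    """With the IMU parallel to the lens plane, the camera axis is
    perpendicular to a horizontal roof when roll and pitch are ~0;
    yaw does not affect perpendicularity."""
    return abs(roll_deg) < tol_deg and abs(pitch_deg) < tol_deg

def calibrate_pose(read_euler, adjust):
    """read_euler() -> (roll, pitch, yaw) in degrees from the IMU;
    adjust(d_roll, d_pitch) commands the gimbal. Both are hypothetical
    hardware interfaces, not part of the original disclosure."""
    roll, pitch, _yaw = read_euler()
    while not camera_perpendicular(roll, pitch):
        adjust(-roll, -pitch)          # counter the measured tilt
        roll, pitch, _yaw = read_euler()
```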
In step 2), the camera pose is fixed by a stabilizer or gimbal so that the camera remains perpendicular to the calibration plane throughout subsequent use, after which the initialization image acquisition begins.
The purpose of the initialization image acquisition in step 3) is to obtain, from a known quantity, the numerical relation between the image acquired by the camera and the real object it depicts (the proportional relation between the two at imaging). It can be implemented with a reference pattern, by laser ranging, or by ultrasonic ranging.
With the reference pattern method, the camera is calibrated against a cross or other shape of known size (which may be the pattern used in the initial pose calibration or a new one). After the camera captures the shape, the distance from the camera to it and the number of pixels per unit length are computed from the known size of the pattern, the focal length, and the viewing angle at photographing, giving the numerical relation between the acquired image and the real object. The steps are:
31) A reference pattern of known size is placed on the calibration plane;
32) An image of this reference pattern of known size is acquired;
33) The size of the real object corresponding to each pixel is obtained from the number of pixels the pattern of 32) occupies in the photograph;
specifically, if the known length of the reference pattern is x and it images onto y pixels, the real object size per pixel is x/y.
34) The actual size W of the scene captured by the whole picture is obtained from the pixels of the whole picture;
with x/y the real object size per pixel and the photograph's resolution z1 × z2 pixels, the real length and width of the whole picture are z1·x/y and z2·x/y respectively;
35) From the viewing angle α of the camera at photographing, the distance L between the lens and the actual photographed object is calculated by the trigonometric relation
L = W / (2 × tan(α/2)) (formula 1)
where α is the camera's viewing angle at photographing and W is the actual size of the scene captured by the whole picture;
thereby obtaining the distance between the camera and the actual photographed object.
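Steps 31)-35) reduce to a few lines of arithmetic. A sketch (illustrative only, assuming a pinhole model and the horizontal viewing angle; the function name is hypothetical):

```python
import math

def init_from_reference(ref_len_m, ref_len_px, img_w_px, img_h_px, view_deg):
    """Reference-pattern initialization: per-pixel real size (x/y in the
    text), real size of the whole frame, and camera-to-plane distance
    from L = W / (2*tan(alpha/2)) (formula 1)."""
    m_per_px = ref_len_m / ref_len_px              # step 33)
    W = img_w_px * m_per_px                        # step 34)
    H = img_h_px * m_per_px
    L = W / (2.0 * math.tan(math.radians(view_deg) / 2.0))  # step 35)
    return m_per_px, (W, H), L

# A 0.30 m arm imaged over 200 px, 1920x1080 frame, 60 deg viewing angle:
scale, (W, H), L = init_from_reference(0.30, 200, 1920, 1080, 60.0)
print(scale, W, H, L)   # 0.0015 m/px, 2.88 m x 1.62 m, L ~ 2.49 m
```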
With the laser or ultrasonic ranging method, the distance L between the camera and the photographed object is measured by laser ranging, by ultrasonic ranging, or directly by a binocular camera that performs ranging and image acquisition together; the correspondence between pixels in the image and real size is then obtained from the viewing angle at photographing. The steps are:
41) Capture an image;
42) Measure the distance L between the camera and the photographed object with a laser range finder or an ultrasonic range finder;
43) From the distance L and the viewing angle α at photographing, obtain the real size W of the scene corresponding to the captured image by the trigonometric relation
W = 2 × L × tan(α/2) (formula 2)
44) From the pixels of the whole picture, the real size corresponding to each pixel follows;
Alternatively, step 43) can obtain the size of the real object corresponding to the photographed image from the size W' of the imaging sensor (photosensitive film) and the distance L' between the lens and the sensor. By similar triangles through the lens,
W / L = W' / L'
so, given the distance L between the camera and the photographed object and the focal length L' at photographing, the real size corresponding to the image is W = W' × L / L'. Conversely, from the size of a known object in the picture and the number of pixels it occupies, the real size W corresponding to the whole picture can be obtained, and the same relation then gives the distance from the photographed object to the lens, L = W × L' / W';
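The ranging variant is the same relation run in the other direction. A sketch (again assuming a pinhole model and horizontal viewing angle; the function name is hypothetical):

```python
import math

def init_from_range(L_m, view_deg, img_w_px):
    """Ranging initialization: the measured distance L to the plane gives
    the real frame width via W = 2*L*tan(alpha/2) (formula 2), hence
    the per-pixel real size."""
    W = 2.0 * L_m * math.tan(math.radians(view_deg) / 2.0)
    return W, W / img_w_px

W, m_per_px = init_from_range(2.5, 60.0, 1920)   # range finder reads 2.5 m
print(W, m_per_px)                               # ~2.89 m, ~1.5 mm per pixel
```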
The initialization acquisition can work from the existing wall or roof. To simplify processing, a conspicuous mark can be added to the wall or roof to ease image analysis, for example by projecting visible or invisible light onto it; the camera is then chosen to match the light used. For instance, when projecting infrared light of a given wavelength, a camera that images at that same wavelength is selected.
In step 4), the continuous processing extracts image contours and feature points (edge detection operators and feature point detection operators may be used). After the initialization acquisition, with the camera pose fixed, the camera captures images continuously; the contours and feature points extracted from the current frame are compared with those of the previous frame, and from their changes the camera's movement along its axis or within the plane perpendicular to the axis is calculated. The steps are:
51) From the pictures continuously captured by the camera, extract image contours and/or feature points with an edge detection algorithm, a feature point detection algorithm, or other algorithms;
52) Compare the contours and/or feature points extracted from the current frame with those from the previous frame;
53) If the size of the contours or feature points is unchanged, the camera has not moved along its axis; if their size has changed, the camera has been displaced along its axis;
54) When the camera has moved axially, obtain the new correspondence between pixels and real size from the changed pixels, and from the current viewing angle obtain the distance between the lens plane and the photographed plane. When the camera has not moved axially, judge whether the contours or feature points have shifted within the picture; if so, obtain the camera's real movement distance from the imaging proportion established during initialization, and its real movement direction from the angle between the image axes and the real directions established during initialization.
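A sketch of the in-plane case of steps 51)-54) using OpenCV ORB features (one possible detector; the patent does not prescribe a specific operator). The per-pixel scale comes from the initialization above; the scene appears to shift opposite to the camera, hence the sign flip:

```python
import cv2
import numpy as np

def frame_displacement(prev_gray, cur_gray, m_per_px):
    """Median feature shift between consecutive frames, converted to the
    camera's in-plane motion in metres. Returns None if tracking fails
    (e.g., a blank or strongly reflective surface)."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 8:
        return None
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    dx_px, dy_px = np.median(shifts, axis=0)     # median is robust to bad matches
    return -dx_px * m_per_px, -dy_px * m_per_px  # camera moves opposite the scene
```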
To simplify the computation of axial displacement, a second camera can be added perpendicular to the first and to a wall or the roof, so that the spatial trajectory is obtained by computing only motion within planes parallel to the imaging planes. Alternatively, three cameras can be used, perpendicular to the roof and to two mutually perpendicular walls, so that only axial displacements need be processed to determine the spatial displacement. With the added cameras, the horizontal and vertical displacements are computed redundantly, which increases system redundancy and improves reliability and precision.
In the continuous computation, step 54)'s determination of displacement parallel to the imaging plane from frame-to-frame movement can be replaced as follows: numbers or symbols (for example a matrix) produced by encoding physical objects, visible light, or invisible light in a chosen coding scheme are displayed over the wall or roof so as to cover it; the camera images and recognizes the codes, and the camera's position in the plane parallel to the imaging plane is determined from the recognized code.
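A toy sketch of this coded-ceiling alternative (the coding scheme, grid layout, and cell size are all assumptions; recognition of the code itself, e.g. by OCR, is omitted):

```python
def absolute_position(code, code_grid, cell_m):
    """Each ceiling cell carries a unique code; recognizing the code at
    the image centre gives the camera's absolute in-plane position
    without accumulating frame-to-frame drift."""
    for r, row in enumerate(code_grid):
        for c, cell in enumerate(row):
            if cell == code:
                return ((c + 0.5) * cell_m, (r + 0.5) * cell_m)  # cell centre
    raise ValueError("code not present in the grid")

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]                       # 3x3 coded ceiling, 1 m cells
print(absolute_position(7, grid, 1.0))   # camera reads 7 overhead -> (0.5, 2.5)
```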
Compared with the prior art, the beneficial effects of the invention are:
the invention provides a spatial positioning device and method for a virtual reality system in which one or more cameras worn on the human body continuously acquire image information that is analyzed by an image processing algorithm to determine the displacement of the body; peripheral equipment such as cameras installed around the VR positioning scene is eliminated, and use is more convenient.
Drawings
FIG. 1 is a schematic diagram of a camera imaging principle;
where α is the camera's viewing angle at photographing; W is the actual size of the scene captured by the whole picture; L is the distance between the camera and the photographed object; L' is the focal length at photographing; W' is the size of the imaging sensor (photosensitive film); V is the camera lens.
Fig. 2 is a block diagram of a spatial positioning device in a virtual reality system according to the present invention.
Fig. 3 is a block diagram of an integrated device of a camera and an inertial motion capturing device according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the placement of the imaging unit of the spatial positioning device in an embodiment of the present invention, in which two mutually perpendicular cameras are mounted on a human body.
Fig. 5 is a block flow diagram of a spatial positioning method provided by the present invention.
Detailed Description
The invention is further described below by way of examples with reference to the accompanying drawings, which in no way limit the scope of the invention.
The invention provides a spatial positioning device and positioning method in a virtual reality system: one or more cameras are worn on the human body, fixed to body parts by a stabilizer or gimbal; image information is continuously acquired by the cameras and analyzed by an image processor to determine the displacement of the body, eliminating peripheral equipment such as cameras installed around the VR positioning scene and making use more convenient.
Fig. 1 is a schematic diagram of the camera imaging principle: an actual object is imaged onto the photosensitive film through the lens, where α is the camera's viewing angle at photographing; W is the actual size of the scene captured by the whole picture; L is the distance between the camera and the photographed object; L' is the focal length at photographing; W' is the size of the imaging sensor (photosensitive film); V is the camera lens.
The structure of the spatial positioning device in the virtual reality system is shown in Fig. 2. It comprises a camera pose calibration module, a camera pose fixing module, an initialization image acquisition and processing module, and a continuous image processing module. The camera pose calibration module comprises an imaging unit and a calibration plane; the imaging unit comprises one or more cameras; the module makes the camera axis perpendicular to the calibration plane. The camera pose fixing module comprises a stabilizer or gimbal by which a camera of the imaging unit is fastened to the human body. The initialization image acquisition and processing module acquires an image through the camera and obtains the spatial proportional relation between the acquired image and the real object it depicts. The continuous image processing module processes the images acquired continuously while the body moves, calculating the camera's movement along its axis and the distance and direction of its movement in the plane perpendicular to the axis, yielding the displacement of the body and thereby spatial positioning in the virtual reality system.
The spatial positioning device provided by the invention can be used together with an inertial motion capture device. Fig. 3 is a block diagram of the integrated camera and motion capture device in the imaging unit: the camera is combined with one IMU unit of the motion capture device, and the spatial coordinates of any node of the motion capture device are computed by combining the spatial coordinate position determined by the camera with the relative coordinate system the motion capture device establishes. This solves the problem that inertial motion capture alone cannot provide accurate spatial positioning. When the photographed surface reflects so strongly that continuous image change cannot be observed, displacement can instead be computed from the IMU inertial unit as a supplement. The IMU integrated with the camera can also assist anti-shake processing according to the shake it measures.
In the following embodiment, a narrow-view camera is mounted on the top of the head, and the camera of the imaging unit is connected by USB or otherwise to a VR backpack computer, an external host, or another processor (including an image processing unit). A laser range finder measures the distance from the camera to the roof; the camera acquires image information; and the actual size corresponding to the whole picture is computed from the focal length and viewing angle at photographing. Since the camera's pixel count is known, the correspondence between actual size and pixels follows. The processor then analyzes the continuous images from the camera and extracts feature information, for example contour information or feature points of successive images after grayscale processing, and from the continuous change of this feature information computes the position change of the camera, that is, of the wearer.
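Tying the embodiment's steps together, a skeleton of the per-frame loop (illustrative only; grab_frame, read_range_m, and frame_shift_px are hypothetical stand-ins for the camera, the laser range finder, and a feature tracker such as the ORB sketch above):

```python
import math

def tracking_loop(grab_frame, read_range_m, frame_shift_px,
                  view_deg=60.0, img_w_px=1920):
    """Yields the wearer's accumulated (x, y) position in metres."""
    L = read_range_m()                               # distance to the roof
    m_per_px = 2 * L * math.tan(math.radians(view_deg) / 2) / img_w_px
    x = y = 0.0
    prev = grab_frame()
    while True:
        cur = grab_frame()
        shift = frame_shift_px(prev, cur)            # pixel shift of the scene
        if shift is not None:                        # else fall back to the IMU
            dx_px, dy_px = shift
            x -= dx_px * m_per_px                    # camera moves opposite
            y -= dy_px * m_per_px                    # to the scene
            yield (x, y)
        prev = cur
```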
Two or more cameras (possibly at different positions on the body) can be worn, each monocular, binocular, or multi-lens, so that when one camera cannot acquire suitable image features or is photographing a moving object, the cameras complement each other and their processing results can be fused. For example, when two cameras are mutually perpendicular, the Z-axis (camera axis) direction of one is the X- or Y-axis direction of the other, so each camera need only analyze two axes in its image processing, which reduces the complexity of the processing program.
When the spatial positioning device is used for spatial positioning, a camera perpendicular to the roof and/or a wall is mounted on the human body, so that the image acquired by the camera, and the objects, feature points, or object edges within it, move as the body moves; images of surrounding objects are continuously acquired and analyzed to obtain the continuous change of positions in the picture, realizing spatial positioning in the virtual reality system. The procedure comprises initial calibration of the camera pose, fixing of the camera pose, initialization image acquisition and processing, and continuous image processing:
1) Initial camera pose calibration: make the camera axis perpendicular to the calibration plane;
2) Camera pose fixing: keep the camera in the initially calibrated pose throughout use;
3) Initialization image acquisition: acquire an image with the camera and, from a known quantity, obtain the numerical relation (spatial proportion) between the acquired image and the real object it depicts;
4) Continuous image processing: acquire images continuously, extract image contours and feature points, analyze their changes between the current frame and the previous frame, calculate the camera's movement along its axis and within the plane perpendicular to the axis, and realize spatial positioning in the virtual reality system from the distance and direction of the movement.
The initial calibration of the camera pose makes the camera axis perpendicular to the calibration plane, that is, the camera lens plane parallel to the photographed plane; the calibration plane can be a roof or a wall.
In specific implementation, the initial calibration can be performed with a reference mark pattern, or by placing a device that outputs attitude information on the camera.
The reference mark pattern method is as follows: a cross or other pattern (e.g., circle, square, regular polygon) placed on the calibration plane serves as the reference mark; the image obtained when the camera photographs the mark is compared with the mark itself to judge whether it is distorted (distortion here means deformation; scaling in equal proportion is not distortion); distortion indicates that the lens plane is not parallel to the photographed plane, so the calibration consists of adjusting the camera until the image of the mark in the camera is undistorted.
When the reference mark pattern calibration is used, the initial pose calibration proceeds as follows:
11) A cross or other pattern (e.g., circle, square, regular polygon) is placed on the calibration plane as the reference mark pattern;
12) The camera photographs the reference mark pattern, giving the pattern as imaged in the camera;
13) The imaged pattern is compared with the reference mark pattern for distortion, and the camera pose is adjusted until the imaged pattern is undistorted, which completes the initial calibration of the camera pose.
In calibration, distortion is recognized by judging whether the shape in the photograph is still the original shape (only its size may change). For example, when the reference mark is an upright cross, distortion can be detected by checking whether the two imaged arms are equal and their included angle is a right angle.
In specific implementation, the initial pose calibration can also be realized by placing on the camera a device that outputs attitude information (such as an attitude sensor); the camera pose is adjusted according to the attitude information the device outputs until the camera is perpendicular to the calibration plane. The calibration steps are:
21) Fix the attitude-output device on the camera;
22) Read the attitude information from the device;
23) Compute the camera pose from the attitude information and the fixed positional relation between the device and the camera;
specifically, when the device is installed parallel to the lens plane and perpendicular to the camera axis, the attitude Euler angles it outputs are the attitude Euler angles of the lens plane;
24) Adjust the camera pose until the camera is perpendicular to the calibration surface (wall or roof).
After the calibration is completed, the camera is kept perpendicular to the calibration plane throughout subsequent use by the stabilizer or gimbal, and the initialization image acquisition then proceeds.
The purpose of the initialization image acquisition is to obtain, from a known quantity, the numerical relation between the image acquired by the camera and the real object it depicts (their proportional relation at imaging); it can be implemented with a reference pattern, by laser ranging, or by ultrasonic ranging.
With the reference pattern method, the camera is calibrated against a cross or other shape of known size (which may be the pattern used in the initial pose calibration or a new one). After the camera captures the shape, the distance from the camera to it and the number of pixels per unit length are computed from the known size of the pattern, the focal length, and the viewing angle at photographing, giving the numerical relation between the acquired image and the real object. The steps are:
31) A reference pattern of known size is placed on the calibration plane;
32) An image of this reference pattern of known size is acquired;
33) The size of the real object corresponding to each pixel is obtained from the number of pixels the pattern of 32) occupies in the photograph;
specifically, if the known length of the reference pattern is x and it images onto y pixels, the real object size per pixel is x/y.
34) The actual size W of the scene captured by the whole picture is obtained from the pixels of the whole picture;
with x/y the real object size per pixel and the photograph's resolution z1 × z2 pixels, the real length and width of the whole picture are z1·x/y and z2·x/y respectively;
35) From the viewing angle α of the camera at photographing, the distance L between the lens and the actual photographed object is calculated by the trigonometric relation
L = W / (2 × tan(α/2)) (formula 1)
where α is the camera's viewing angle at photographing and W is the actual size of the scene captured by the whole picture;
thereby obtaining the distance between the camera and the actual photographed object.
With the laser or ultrasonic ranging method, the distance L between the camera and the photographed object is measured by laser ranging, by ultrasonic ranging, or directly by a binocular camera that performs ranging and image acquisition together; the correspondence between pixels in the image and real size is then obtained from the viewing angle at photographing. The steps are:
41) Capture an image;
42) Measure the distance L between the camera and the photographed object with a laser range finder or an ultrasonic range finder;
43) From the distance L and the viewing angle α at photographing, obtain the real size W of the scene corresponding to the captured image by the trigonometric relation
W = 2 × L × tan(α/2) (formula 2)
44) From the pixels of the whole picture, the real size corresponding to each pixel follows;
Alternatively, step 43) can obtain the size of the real object corresponding to the photographed image from the size W' of the imaging sensor (photosensitive film) and the distance L' between the lens and the sensor. By similar triangles through the lens,
W / L = W' / L'
so, given the distance L between the camera and the photographed object and the focal length L' at photographing, the real size corresponding to the image is W = W' × L / L'. Conversely, from the size of a known object in the picture and the number of pixels it occupies, the real size W corresponding to the whole picture can be obtained, and the same relation then gives the distance from the photographed object to the lens, L = W × L' / W';
The initialization acquisition can work from the existing wall or roof. To simplify processing, a conspicuous mark can be added to the wall or roof to ease image analysis, for example by projecting visible or invisible light onto it; the camera is then chosen to match the light used. For instance, when projecting infrared light of a given wavelength, a camera that images at that same wavelength is selected.
The subsequent continuous computation first extracts image contours and feature points (edge detection operators and feature point detection operators may be used). After the initialization acquisition, with the camera pose fixed, the camera captures images continuously; the contours and feature points extracted from the current frame are compared with those of the previous frame, and from their changes the camera's movement along its axis or within the plane perpendicular to the axis is calculated. The flow is:
51) From the pictures continuously captured by the camera, extract image contours and feature points with an edge detection algorithm or other algorithms;
52) Compare the contours or feature points extracted from the current frame with those from the previous frame;
53) If the size of the contours or feature points is unchanged, the camera has not moved along its axis; if their size has changed, the camera has been displaced along its axis;
54) When the camera has moved axially, obtain the new correspondence between pixels and real size from the changed pixels, and from the current viewing angle obtain the distance between the lens plane and the photographed plane. When the camera has not moved axially, judge whether the contours or feature points have shifted within the picture; if so, obtain the camera's real movement distance from the imaging proportion established during initialization, and its real movement direction from the angle between the image axes and the real directions established during initialization.
To simplify the computation of axial displacement, a second camera can be added perpendicular to the first and to a wall or the roof, so that the spatial trajectory is obtained by computing only motion within planes parallel to the imaging planes. Alternatively, three cameras can be used, perpendicular to the roof and to two mutually perpendicular walls, so that only axial displacements need be processed to determine the spatial displacement. With the added cameras, the horizontal and vertical displacements are computed redundantly, which increases system redundancy and improves reliability and precision.
In the continuous computation, step 54)'s determination of horizontal displacement from frame-to-frame movement can be replaced as follows: numbers or symbols (for example a matrix) produced by encoding physical objects, visible light, or invisible light in a chosen coding scheme are displayed over the wall so as to cover it; the camera images and recognizes the codes, and the camera's horizontal-plane position is determined from the recognized code.
In specific implementation, the spatial positioning device and method are applied to motion capture equipment. In operation, the camera is combined with one node of the motion capture equipment; the equipment establishes a coordinate system in which every capture node has its own coordinates, and once the spatial coordinates of the node combined with the camera are determined, the coordinates of every node can be converted into the spatial coordinate system, achieving spatial positioning of each node. The roof or wall can be marked, and the position recalibrated whenever the camera is aligned with a mark point, reducing error; the mark can be a light spot or a specific pattern, in visible or invisible light of a specific band, matched with a camera for the same band. Ranging can also be achieved with a binocular camera.
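A minimal sketch of that coordinate conversion (assuming, for simplicity, that the motion-capture frame and the room frame share orientation; names are hypothetical):

```python
import numpy as np

def nodes_to_world(anchor_world, anchor_rel, nodes_rel):
    """Once the spatial locator fixes the camera-equipped node's room
    coordinates, shift every motion-capture node from the device's
    relative frame into the room frame by the same offset."""
    offset = np.asarray(anchor_world, float) - np.asarray(anchor_rel, float)
    return {name: tuple(np.asarray(p, float) + offset)
            for name, p in nodes_rel.items()}

rel = {"head": (0.0, 1.8, 0.0), "hand": (0.3, 1.2, 0.1)}   # device frame
print(nodes_to_world((2.0, 1.8, 3.1), rel["head"], rel))
# {'head': (2.0, 1.8, 3.1), 'hand': (2.3, 1.2, 3.2)}
```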
For example, a regular matrix of numbers is projected onto the roof by a laser lamp; the camera then captures a picture, the numbers in it are recognized, the current horizontal position of the camera is determined from the correspondence between the matrix and the actual room size, and the vertical position is calculated from the spacing or size of the matrix digits, thereby achieving spatial positioning.
It should be noted that the disclosed embodiments are intended to aid understanding of the invention; those skilled in the art will appreciate that various alternatives and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention should therefore not be limited to the disclosed embodiments; rather, its scope is defined by the appended claims.

Claims (10)

1. A spatial positioning method in a virtual reality system, characterized in that one or more cameras are mounted on a human body so that the images acquired by the cameras move as the human body moves; images of objects are continuously acquired by the cameras and then analyzed and processed to obtain the information of continuous position change in the picture, thereby realizing spatial positioning in the virtual reality system;
the method comprises, in order, an initial camera pose calibration process, a camera pose fixing process, an initialization image acquisition and processing process, and a continuous image processing process; the initial camera pose calibration process makes the camera axis perpendicular to the calibration plane; the camera pose fixing process keeps the camera in the initially calibrated pose throughout use; the initialization image acquisition process acquires an image through the camera and further obtains the spatial proportional relation between the image acquired by the camera and the real object corresponding to the image; the continuous image processing process continuously acquires images through the camera, extracts image contours and feature points by an edge detection operator method or a feature point detection operator method, analyzes the changes of the image contours and feature points between the current frame and the previous frame, calculates the movement of the camera along its axis and within the plane perpendicular to the axis, and realizes spatial positioning in the virtual reality system according to the distance and direction of the movement;
the movement of the camera along its axis or within the plane perpendicular to the axis is calculated from the changes of the contours and feature points by the following steps:
51) from the pictures continuously acquired by the camera, extracting image contours and feature points with an edge detection algorithm;
52) comparing the contours or feature points extracted from the current frame with those from the previous frame;
53) if the size of the contours or feature points is unchanged, the camera has not moved along its axis; if their size has changed, the camera has been displaced along its axis;
54) when the camera has moved axially, obtaining the correspondence between the changed pixels and the real size, and from the current viewing angle obtaining the distance between the camera lens plane and the photographed plane;
when the camera has not moved axially, judging whether the contours or feature points have shifted within the picture, and if so, obtaining the camera's real movement distance from the imaging proportion established during initialization and its real movement direction from the angle between the image and the real directions established during initialization;
when the displacement in the horizontal direction is determined from the movement of successive frames, codes of numbers or matrix symbols produced from physical objects, visible light, or invisible light by a coding scheme are displayed over the wall so as to cover it; the codes are recognized after the camera images them, and the camera's horizontal-plane position is determined from the recognized code.
2. The spatial positioning method according to claim 1, wherein the initial calibration of the camera pose is achieved by a reference mark pattern calibration method or by placing a device that outputs attitude information on the camera;
the reference mark pattern calibration method comprises placing a pattern on the calibration plane as the reference mark, comparing the image obtained when the camera photographs the mark with the mark itself, judging whether it is deformed or distorted, and adjusting the camera pose until the image of the mark in the camera is undistorted, thereby completing the initial calibration of the camera pose;
the method of placing an attitude-output device on the camera comprises mounting the device on the camera such that it is parallel to the camera lens plane and perpendicular to the camera axis, whereby the attitude Euler angles it outputs are the attitude Euler angles of the lens plane, and adjusting the camera pose according to the attitude information the device outputs until the camera is perpendicular to the calibration plane, thereby completing the initial calibration of the camera pose.
3. The spatial positioning method according to claim 2, wherein the output posture information device is a posture sensor; the calibration plane is one or both of a wall surface and a roof; and the reference mark pattern is a cross pattern, a circular pattern, a square pattern or a regular polygon pattern.
4. The spatial positioning method according to claim 1, wherein, in the camera posture fixing process, the camera is mounted on a stabilizer or a gimbal so that it remains perpendicular to the calibration plane throughout use; and the continuous image processing process specifically consists of continuously acquiring images with the camera, extracting image contours and feature points by an edge detection operator or a feature point detection operator, and calculating the distance and direction of the camera's movement along its axis, or in the plane perpendicular to its axis, from the changes of the contours and feature points between the current frame and the previous frame.
5. The spatial positioning method according to claim 1, wherein the initialization image acquisition process adopts one of a reference pattern method, a laser ranging method and an ultrasonic ranging method;
In the reference pattern method, the camera is calibrated with a reference pattern of known size: the pattern is placed on the calibration plane and imaged by the camera; the actual size corresponding to each pixel is obtained from the number of pixels the pattern occupies in the photograph; the actual extent of the scene covered by the whole picture is obtained from the total pixels of the picture; and the distance between the lens and the photographed object is then obtained by calculation;
In the laser ranging or ultrasonic ranging method, the straight-line distance between the camera and the object whose image it acquires is measured by laser or ultrasonic ranging; alternatively, a binocular camera directly performs both the ranging and the image acquisition, after which the correspondence between the pixels in the image and the actual size of the imaged object is obtained from the field-of-view angle at the time of photographing.
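The arithmetic of the reference pattern branch is simple pinhole geometry; the sketch below follows the claim's steps directly, though every symbol name is invented and the horizontal field-of-view angle is assumed known from the camera's datasheet:

```python
# Sketch of the claim's initialization arithmetic: pixel scale from a mark of
# known size, then lens-to-plane distance from half-angle geometry.
import math

def init_scale_and_distance(mark_width_mm, mark_width_px,
                            image_width_px, fov_deg):
    mm_per_pixel = mark_width_mm / mark_width_px      # real size of one pixel
    scene_width_mm = mm_per_pixel * image_width_px    # real extent of whole picture
    # tan(fov/2) = (scene_width/2) / D  =>  D = scene_width / (2 tan(fov/2))
    distance_mm = scene_width_mm / (2 * math.tan(math.radians(fov_deg) / 2))
    return mm_per_pixel, distance_mm

# Example: a 100 mm mark spanning 200 px in a 1280 px-wide image, 60 deg FOV:
scale, dist = init_scale_and_distance(100, 200, 1280, 60)
# scale = 0.5 mm/px, dist ~= 554 mm
```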
6. A spatial positioning device in a virtual reality system for implementing the spatial positioning method of claim 1, characterized by comprising a camera posture calibration module, a camera posture fixing module, an initialization image acquisition processing module and a continuous image processing module; the camera posture calibration module comprises an imaging unit; the imaging unit comprises one or more cameras, each being a monocular, binocular or multi-lens camera; the camera posture calibration module performs the initial calibration of the camera posture; the camera posture fixing module comprises a stabilizer or a gimbal by which a camera of the imaging unit is fixed to the human body, so that the camera remains perpendicular to the calibration plane in subsequent use; the initialization image acquisition processing module acquires images through the camera using an initialization reference pattern, a laser range finder or an ultrasonic range finder, and obtains the spatial proportion between the images acquired by the camera and the real objects they depict; the continuous image processing module processes the images acquired continuously while the human body moves, extracts image contours and feature points, analyzes their changes between the current frame and the previous frame, calculates the camera's movement along its axis and the distance and direction of its movement in the plane perpendicular to the axis, thereby obtains the displacement of the human body, and realizes spatial positioning in the virtual reality system from the distance and direction of that displacement.
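Purely as a structural illustration of how the four modules of this claim could be wired together in software (every class and method name here is invented, not an API defined by the patent):

```python
# Skeleton only: the pipeline order follows the claim -- calibrate posture,
# fix posture, initialize the pixel scale, then track continuously.
class CameraPoseCalibration:
    def calibrate(self, camera): ...        # make camera axis perpendicular to plane

class CameraPoseFixing:
    def mount(self, camera, gimbal): ...    # keep the calibrated posture during use

class InitializationAcquisition:
    def measure_scale(self, camera): ...    # returns mm-per-pixel proportion

class ContinuousImageProcessing:
    def track(self, camera, mm_per_pixel): ...  # per-frame axial / in-plane motion

class SpatialPositioningDevice:
    def run(self, camera, gimbal):
        CameraPoseCalibration().calibrate(camera)
        CameraPoseFixing().mount(camera, gimbal)
        scale = InitializationAcquisition().measure_scale(camera)
        return ContinuousImageProcessing().track(camera, scale)
```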
7. The spatial positioning device of claim 6, wherein the one or more cameras of the imaging unit are mounted on the same or different parts of the human body.
8. The spatial positioning device of claim 6, wherein the camera posture calibration module comprises the imaging unit, a calibration plane and a reference mark pattern disposed on the calibration plane; or the camera posture calibration module comprises the imaging unit and an IMU unit, the IMU unit being mounted parallel to the lens plane and perpendicular to the camera axis, so that the Euler angles of the posture output by the posture information device in the IMU unit are the Euler angles of the lens plane, whereby the camera posture is initially calibrated.
9. The spatial positioning device of claim 6, wherein the calibration plane is one or both of a roof and a wall surface; the camera fixed to the human body is perpendicular to the calibration plane; and the reference mark pattern is a cross pattern, a circular pattern, a square pattern or a regular polygon pattern.
10. The spatial positioning device of claim 6, wherein the initialization image acquisition processing module calibrates the camera with a reference pattern of known size placed on the calibration plane; or a laser range finder or ultrasonic range finder connected to the camera performs the initialization image acquisition.
CN201611215923.4A 2016-12-26 2016-12-26 Space positioning device and positioning method in virtual reality system Active CN106643699B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611215923.4A CN106643699B (en) 2016-12-26 2016-12-26 Space positioning device and positioning method in virtual reality system

Publications (2)

Publication Number Publication Date
CN106643699A CN106643699A (en) 2017-05-10
CN106643699B 2023-08-04

Family

ID=58826910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611215923.4A Active CN106643699B (en) 2016-12-26 2016-12-26 Space positioning device and positioning method in virtual reality system

Country Status (1)

Country Link
CN (1) CN106643699B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315470B (en) * 2017-05-25 2018-08-17 腾讯科技(深圳)有限公司 Graphic processing method, processor and virtual reality system
CN107193380B (en) * 2017-05-26 2020-04-03 成都斯斐德科技有限公司 High-precision virtual reality positioning system
CN107101632A (en) * 2017-06-19 2017-08-29 北京视境技术有限公司 Space positioning apparatus and method based on multi-cam and many markers
CN107274400B (en) * 2017-06-21 2021-02-12 歌尔光学科技有限公司 Space positioning device, positioning processing method and device, and virtual reality system
CN107422857B (en) * 2017-07-21 2020-07-07 成都沃赢创投科技有限公司 Optical positioning system based on multi-directional motion point capture
CN107270900A (en) * 2017-07-25 2017-10-20 广州阿路比电子科技有限公司 A kind of 6DOF locus and the detecting system and method for posture
TWI635255B (en) * 2017-10-03 2018-09-11 宏碁股份有限公司 Method and system for tracking object
TWI642903B (en) * 2017-10-13 2018-12-01 緯創資通股份有限公司 Locating method, locator, and locating system for head-mounted display
CN108062778B (en) * 2017-12-18 2021-05-14 广州大学 Position adjusting method and control device of shooting device
CN108181610B (en) * 2017-12-22 2021-11-19 鲁东大学 Indoor robot positioning method and system
CN108445496B (en) * 2018-01-02 2020-12-08 北京汽车集团有限公司 Ranging calibration device and method, ranging equipment and ranging method
CN108253931B (en) * 2018-01-12 2020-05-01 内蒙古大学 Binocular stereo vision ranging method and ranging device thereof
CN108228892A (en) * 2018-02-02 2018-06-29 成都科木信息技术有限公司 A kind of AR searching algorithms based on tourism big data
CN108280920A (en) * 2018-02-02 2018-07-13 成都科木信息技术有限公司 Tourism outdoor scene display system based on AR technologies
CN108346166A (en) * 2018-02-02 2018-07-31 成都科木信息技术有限公司 A kind of tourism virtual reality system
CN108844529A (en) * 2018-06-07 2018-11-20 青岛海信电器股份有限公司 Determine the method, apparatus and smart machine of posture
CN108960109B (en) * 2018-06-26 2020-01-21 哈尔滨拓博科技有限公司 Space gesture positioning device and method based on two monocular cameras
CN109211267B (en) * 2018-08-14 2022-08-23 广州虚拟动力网络技术有限公司 Method and system for quickly calibrating inertial motion capture attitude
CN109059929B (en) * 2018-08-30 2021-02-26 Oppo广东移动通信有限公司 Navigation method, navigation device, wearable device and storage medium
CN109798888B (en) * 2019-03-15 2021-09-17 京东方科技集团股份有限公司 Posture determination device and method for mobile equipment and visual odometer
CN110322484B (en) * 2019-05-29 2023-09-08 武汉幻石佳德数码科技有限公司 Calibration method and system for multi-device shared augmented reality virtual space
CN110262667B (en) * 2019-07-29 2023-01-10 上海乐相科技有限公司 Virtual reality equipment and positioning method
CN114187509B (en) * 2021-11-30 2022-11-08 北京百度网讯科技有限公司 Object positioning method and device, electronic equipment and storage medium
CN117151140B (en) * 2023-10-27 2024-02-06 安徽容知日新科技股份有限公司 Target identification code identification method, device and computer readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005038960A1 (en) * 2005-08-16 2007-03-01 Ludwig-Maximilian-Universität Human/animal arm and leg movement e.g. running, recording and qualitative/quantitative interpretation method for medical application, involves using software to interpret signal by parameters in movement type, whose intensity can be varied
CN101419055A (en) * 2008-10-30 2009-04-29 北京航空航天大学 Space target position and pose measuring device and method based on vision
CN101655361A (en) * 2009-08-31 2010-02-24 中国人民解放军国防科学技术大学 Method for measuring attitude of unstable reference platform based on double camera
CN104486543A (en) * 2014-12-09 2015-04-01 北京时代沃林科技发展有限公司 Equipment and method for controlling cloud deck camera by intelligent terminal in touch manner
CN106017573A (en) * 2016-07-25 2016-10-12 大连理工大学 Field ice thickness and ice velocity automatic measuring method based on variable-focus image method
CN206300653U (en) * 2016-12-26 2017-07-04 影动(北京)科技有限公司 A kind of space positioning apparatus in virtual reality system

Also Published As

Publication number Publication date
CN106643699A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106643699B (en) Space positioning device and positioning method in virtual reality system
CN113379822B (en) Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN108307675B (en) Multi-baseline camera array system architecture for depth enhancement in VR/AR applications
KR101666959B1 (en) Image processing apparatus having a function for automatically correcting image acquired from the camera and method therefor
CN110230983B (en) Vibration-resisting optical three-dimensional positioning method and device
CN109831660B (en) Depth image acquisition method, depth image acquisition module and electronic equipment
TWI496108B (en) AR image processing apparatus and method
CN109416744A (en) Improved camera calibration system, target and process
WO2021185217A1 (en) Calibration method based on multi-laser distance measurement and angle measurement
US20180018791A1 (en) Computer program, head-mounted display device, and calibration method
JP2016218905A (en) Information processing device, information processing method and program
WO2004044522A1 (en) Three-dimensional shape measuring method and its device
CN111353355B (en) Motion tracking system and method
JP2009042162A (en) Calibration device and method therefor
WO2021185216A1 (en) Calibration method based on multiple laser range finders
CN110419208B (en) Imaging system, imaging control method, image processing apparatus, and computer readable medium
TWI501193B (en) Computer graphics using AR technology. Image processing systems and methods
CN111445528B (en) Multi-camera common calibration method in 3D modeling
KR20120108256A (en) Robot fish localization system using artificial markers and method of the same
CN111664839A (en) Vehicle-mounted head-up display virtual image distance measuring method
JP3842988B2 (en) Image processing apparatus for measuring three-dimensional information of an object by binocular stereoscopic vision, and a method for recording the same, or a recording medium recording the measurement program
CN206300653U (en) A kind of space positioning apparatus in virtual reality system
US10992928B1 (en) Calibration system for concurrent calibration of device sensors
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
JP7489670B2 (en) Correction parameter calculation method, displacement amount calculation method, correction parameter calculation device, and displacement amount calculation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220622

Address after: Room 914, floor 9, South Building, No. 8, wenhuiyuan North Road, Haidian District, Beijing 100082

Applicant after: Beijing reciprocity Technology Co.,Ltd.

Address before: Room 483, floor 1, block B, building 1, yard 2, Yongcheng North Road, Haidian District, Beijing 100094

Applicant before: YINGDONG (BEIJING) TECHNOLOGY CO.,LTD.

GR01 Patent grant