CN111156996A - Object hybrid positioning system and positioning method suitable for immersive VR operation space - Google Patents


Info

Publication number
CN111156996A
CN111156996A (application number CN202010073659.5A)
Authority
CN
China
Prior art keywords
laser
space
data
inertial navigation
attitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010073659.5A
Other languages
Chinese (zh)
Inventor
于晓宇 (Yu Xiaoyu)
方志坚 (Fang Zhijian)
傅博 (Fu Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Eisenrier Technology Co Ltd
Original Assignee
Shenyang Eisenrier Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Eisenrier Technology Co Ltd filed Critical Shenyang Eisenrier Technology Co Ltd
Priority to CN202010073659.5A priority Critical patent/CN111156996A/en
Publication of CN111156996A publication Critical patent/CN111156996A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10 — Navigation by using measurements of speed or acceleration
    • G01C21/12 — Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 — Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 — Inertial navigation combined with non-inertial navigation instruments
    • G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an object hybrid positioning system and positioning method suitable for an immersive VR operation space, belonging to the VR field, comprising a mobile tag device and a camera analysis system. The mobile tag device comprises a laser emission system and an inertial navigation positioning system: the laser emission system emits multi-color cross-line laser, and the inertial navigation positioning system comprises a microprocessor and an integrated accelerometer and gyroscope, the microprocessor performing motion-algorithm fusion on the acquired data to obtain the spatial attitude and inertial-reference displacement data of the tagged object and transmitting them to a computer for analysis. The camera analysis system comprises a camera set and projection planes, the camera set collecting the laser projection signals cast onto the projection planes and transmitting them to the computer for image analysis. This hybrid positioning scheme is suitable for small spaces and effectively resists occlusion; it achieves high precision while ensuring good positioning stability and high cost-effectiveness in practical applications.

Description

Object hybrid positioning system and positioning method suitable for immersive VR operation space
Technical Field
The invention relates to the technical field of virtual reality (VR), and in particular to an object hybrid positioning system suitable for an immersive VR operating space and a positioning method thereof.
Background
Spatial positioning computes the coordinates of a positioned object from its distance relationships to reference objects; the result comprises three-dimensional position coordinates and the corresponding attitude information. By importing these parameters of the positioned object, a VR system can realize three-dimensional interaction in the VR scene. By sensor principle, positioning methods can be divided into optical positioning, radio-frequency positioning, ultrasonic positioning and so on; by solving principle, mainly into active and passive types; and by usage scenario, into small-space high-precision positioning and large-space coarse-precision positioning.
Common positioning methods are as follows:
The most representative product of infrared optical positioning technology is the OptiTrack optical positioning camera (the Noitom positioning scheme). The basic principle of such schemes is to cover an indoor positioning space with multiple infrared-emitting cameras and place infrared-reflective points on the tracked object; the position of the reflective points in space is determined by capturing the images they reflect back to the cameras. Such positioning systems offer very high accuracy, and if cameras with very high frame rates are used the latency is very low, so a very good effect can be achieved. Their disadvantages are that they are very expensive to manufacture and the supply is limited.
Visible-light optical positioning is similar to infrared positioning: cameras image the indoor scene, but the tracked points are actively light-emitting marker points (similar to small bulbs) rather than infrared-reflective materials, with different anchor points distinguished by color. In traditional optical schemes the light-emitting points are too large to achieve high-precision positioning in a small space; positioning error arises when the light-emitting part is partially occluded, and if the system relies on optical motion capture alone, positioning is lost entirely when the light-emitting point is fully occluded.
In UWB (ultra-wideband) radio-frequency positioning, fixed base stations receive radio signals emitted by the positioned object, and an algorithm resolves the distances to each base station to obtain the object's three-dimensional position in space.
Positioning light-tower solutions include the Lighthouse indoor positioning technology of the HTC Vive and the motion-capture and indoor positioning system of G-Wearables' Step VR product. Two fixed lasers sweep the scene with alternating periodic scans, one transverse and one longitudinal; optical sensors mounted on the positioned object judge the received periodic laser signals to resolve its position. Such solutions require a relatively large optical receiver and are not suitable for mounting on and capturing smaller objects.
There are also Wi-Fi positioning, radio-frequency identification (RFID), ZigBee and similar technologies, but their limited positioning accuracy has so far kept them out of wide application in the VR field.
In the prior art, high-precision schemes such as infrared and optical positioning are easily affected by occlusion and are expensive; in the light-tower scheme the positioned object must carry a large receiver, making it hard to apply in a small scene; and radio-frequency schemes such as UWB cannot meet small-scene applications because of low precision and complex deployment.
In general, schemes suited to large-space positioning have complex structures, high cost, susceptibility to occlusion, and low positioning precision; schemes suited to small spaces are generally more expensive, and in a small scene of about 2 meters square, stable positioning is difficult because occlusions are frequent.
Disclosure of Invention
The invention aims to provide an object hybrid positioning system and positioning method suitable for an immersive VR operation space, with a high-precision positioning effect particularly suited to small spaces.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
In one aspect, the present invention provides an object hybrid positioning system suitable for use in an immersive VR operating space, particularly for small-space positioning, comprising a mobile tag device and a camera analysis system, wherein:
the mobile tag device is fixed on a moving tagged object in the operating space and comprises a laser emission system and an inertial navigation positioning system; the laser emission system emits multi-color cross-line laser, and the inertial navigation positioning system comprises an integrated accelerometer and gyroscope and a microprocessor (MCU), the microprocessor performing motion-algorithm fusion on the data acquired by the integrated accelerometer and gyroscope to obtain the spatial attitude data and inertial-reference displacement of the tagged object and transmitting them to a computer for analysis;
the camera analysis system comprises a camera set and projection planes mounted on the operating-space frame, the camera set collecting the laser projection signals cast onto the projection planes and transmitting the collected signals to the computer for image analysis.
Further, the laser emission system comprises a laser emitter and a laser optical diffraction sheet; the diffraction sheet shapes the laser emitted by the emitter into fan-shaped cross-line laser.
Furthermore, the laser emitter is a semiconductor laser emitter capable of emitting red, green, blue and violet laser light.
Furthermore, the laser optical diffraction sheet is a line-generating laser optical diffraction sheet.
Further, the integrated accelerometer and gyroscope is an integrated chip combining an accelerometer and an angular velocity meter.
Furthermore, the integrated chip combines a three-axis MEMS accelerometer and a three-axis MEMS angular velocity meter, and distinguishes its external communication for data transmission through different data addresses.
Furthermore, the inertial navigation positioning system is provided with a wireless Wi-Fi data transmitting end for transmitting inertial navigation data to the computer through a local area network.
Furthermore, the mobile tag device is also provided with a power supply for powering the laser emission system and the inertial navigation positioning system.
Further, the camera set includes at least four cameras, installed respectively on the top and on at least three sides of the operating space.
Further, the projection plane is a frosted glass plate or tensioned gauze that receives the laser lines to form a projection.
Further, the system comprises a computer analysis module, a data analysis program running on the computer, which obtains the position and attitude of the captured object by fusing the acquired projection images with the inertial navigation spatial attitude data.
Further, the computer analysis proceeds as follows: the images acquired by each camera are analyzed to identify the line of each laser color; the start and end points of each color's projected line on each surface are then determined, and the spatial focus position and attitude direction of the tag are obtained by three-dimensional spatial analysis; simultaneously, the inertial navigation spatial attitude data and the position movement data obtained by integrating the acceleration data are fused with the tag spatial attitude and relative position data obtained from image analysis to yield the position and attitude of the captured object.
In another aspect, an object hybrid positioning method suitable for an immersive VR operation space is provided, including the following steps:
1) acquiring and analyzing the video signal of the multi-color laser captured by the camera set;
wherein the video signal comprises the laser line images projected from the laser emitter of the moving tagged object onto the projection planes of the operating space;
and wherein parsing the video signal includes: first identifying the line of each laser color, then determining the start and end points of the positions projected on each surface, and obtaining the spatial focus position and attitude direction of the moving tagged object by three-dimensional spatial analysis;
2) simultaneously receiving the spatial attitude data of the inertial navigation system;
wherein the spatial attitude data is obtained through the inertial navigation system installed on the moving tagged object, and includes the spatial attitude data and the position movement data (the integral of the acceleration data) obtained by motion-algorithm fusion of the object's acceleration and angular velocity;
3) fusing the inertial navigation spatial attitude and acceleration-integral position movement data with the spatial attitude and relative position data acquired by image analysis to obtain the position and attitude of the captured object;
4) in this fusion, when the image is judged not to be occluded, the resolved spatial position and attitude are those from image analysis; when the image is judged to be occluded, the optical spatial attitude and position at the previous moment are taken as the reference data, and the position and attitude increments calculated by the inertial navigation system since the moment of optical occlusion are added to obtain the real-time spatial attitude and position.
Further, in step 1), the multi-color laser includes the four colors red, green, blue and violet, and is a fan-shaped cross-line laser diffracted by the optical diffraction sheet.
Further, in step 1), acquiring the images includes: the cameras, located on different surfaces of the operating space, each read the laser projection on one surface; the color, direction and position of each straight light-shadow line formed on a projection plane by a laser are analyzed; and the spatial information is resolved from the laser lines obtained by at least four different cameras, the crossing position determined thereby being the emission position of the laser light source.
After adopting such a design, the invention has at least the following advantages:
The invention adopts cross-line laser emission points instead of light-ball emission, achieving higher cross-positioning precision; the multi-color-coded cross-line laser mode allows the spatial orientation of the object to be obtained by calculation, yielding its spatial attitude. Combined with the miniature MEMS sensor, which acquires the object's motion information wirelessly and synchronously, the position and attitude of the identified object can still be tracked accurately for a short time when it is occluded. The positioning system of the invention can track tiny objects and has a wide application range: the laser emission assembly is attached to the tracked object on a miniature circuit board, so it is small, light, and easy to fix on various objects.
Drawings
The foregoing is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description.
FIG. 1 is a schematic block diagram of one embodiment of a mobile tag device of an object hybrid positioning system suitable for use in an immersive VR operating space of the present invention;
FIG. 2 is a schematic view of the mounting distribution of one embodiment of the camera set of the object hybrid positioning system suitable for an immersive VR operating space of the present invention;
FIG. 3 is a schematic diagram of one embodiment of the multi-color laser projection onto frosted glass of the object hybrid positioning system suitable for an immersive VR operating space of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The present invention provides an embodiment of an object hybrid positioning system suitable for an immersive VR operating space, as shown in FIGS. 1-3, particularly suited to small-space positioning, comprising a mobile tag device and a camera analysis system. The mobile tag device is fixed on a moving tagged object in the operating space and comprises a laser emission system and an inertial navigation positioning system; the laser emission system emits multi-color cross-line laser, and the inertial navigation positioning system comprises an integrated accelerometer and gyroscope 15 and a microprocessor (MCU) 16, the microprocessor 16 performing motion-algorithm fusion on the data collected by the integrated accelerometer and gyroscope to obtain the spatial attitude data and inertial-reference displacement of the tagged object and transmitting them to a computer for analysis. The camera analysis system comprises a camera set 22 and projection planes 21 mounted on the operating-space frame; the camera set 22 collects the laser projections cast onto the projection planes 21 and transmits the collected signals to the computer for image analysis to obtain the spatial focus position and attitude direction.
In use, the mobile tag device mounted on the moving tagged object emits multi-color cross-line laser instead of light-ball emission, achieving higher cross-positioning precision; the multi-color-coded cross-line laser mode allows the spatial orientation of the object to be obtained by calculation, yielding its spatial attitude. Combined with the miniature MEMS inertial sensor, which acquires the object's motion information wirelessly and synchronously, the position and attitude of the identified object can still be tracked accurately for a short time when it is occluded. The positioning system of the invention can track tiny objects and has a wide application range: the laser emission assembly is attached to the tracked object on a miniature circuit board, so it is small, light, and easy to fix on various objects.
The invention provides a hybrid positioning scheme combining an optical laser-pose-calculation scheme with an inertial-integral compensation scheme. In the optical part, a moving end carrying active, cross-arranged multi-color line-laser illuminators emits a pulsed light source, and fixed cross-angle cameras acquire images and compute the position and attitude of the object. The inertial sensing system, mounted on the positioned moving end, performs integral position calculation when the moving emission light source is occluded, to compensate the position information during the occlusion. The result is a hybrid positioning scheme suited to small spaces (under about 2 meters square) that effectively resists occlusion, achieving high precision while ensuring good positioning stability and high cost-effectiveness in practical applications.
Further, the laser emission system includes a laser emitter 13 and a laser optical diffraction sheet 14; the diffraction sheet 14 shapes the laser emitted by the laser emitter 13 into fan-shaped cross-line laser.
Further, the laser emitter 13 is a semiconductor laser emitter capable of emitting red, green, blue and violet laser light.
Further, the laser optical diffraction sheet 14 is a line-generating laser optical diffraction sheet.
Further, the integrated accelerometer and gyroscope 15 is an integrated chip combining an accelerometer and an angular velocity meter.
Furthermore, the integrated chip combines a three-axis MEMS accelerometer and a three-axis MEMS angular velocity meter, and distinguishes its external communication for data transmission through different data addresses.
Furthermore, the inertial navigation positioning system is provided with a wireless Wi-Fi data transmitting end for transmitting inertial navigation data to the computer through a local area network.
Furthermore, the mobile tag device is also provided with a power supply, which may be a rechargeable button battery, for powering the laser emission system and the inertial navigation positioning system.
Further, the camera set 22 includes at least four cameras, installed respectively on the top and on at least three sides of the operating space.
Further, the projection plane 21 is a frosted glass plate or tensioned gauze that receives the laser lines to form a projection.
Further, the system comprises a computer analysis module, a data analysis program running on the computer, which obtains the position and attitude of the captured object by fusing the acquired projection images with the inertial navigation spatial attitude data.
Further, the computer analysis proceeds as follows: the images acquired by each camera are analyzed to identify the line of each laser color; the start and end points of each color's projected line on each surface are then determined, and the spatial focus position and attitude direction of the tag are obtained by three-dimensional spatial analysis; simultaneously, the inertial navigation spatial attitude data and position movement data are fused with the tag spatial attitude and relative position data obtained from image analysis to yield the position and attitude of the captured object.
In another aspect, an object hybrid positioning method suitable for an immersive VR operation space is provided, which may employ the positioning system described above and includes the following steps:
1) acquiring and analyzing the video signal of the multi-color laser captured by the camera set;
wherein the video signal comprises the laser line images projected from the laser emitter of the moving tagged object onto the projection planes of the operating space;
and wherein parsing the video signal includes: first identifying the line of each laser color, then determining the start and end points of the positions projected on each surface by each color's projection line, and obtaining the spatial focus position and attitude direction of the moving tagged object by three-dimensional spatial analysis;
2) simultaneously receiving the spatial attitude data of the inertial navigation system;
wherein the spatial attitude data is obtained through the inertial navigation system installed on the moving tagged object, and includes the spatial attitude data and the position movement data (the integral of the acceleration data) obtained by motion-algorithm fusion of the object's acceleration and angular velocity;
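The inertial dead-reckoning of step 2) — attitude from integrated angular rate, position from doubly integrated acceleration — can be illustrated by the following minimal sketch. It is not the patent's actual algorithm: it assumes gravity-compensated body accelerations, a fixed timestep, and simple Euler integration with small-angle attitude accumulation, whereas a real inertial navigation system would use quaternions and frame rotation.

```python
import numpy as np

def dead_reckon(accels, gyros, dt, v0=None, p0=None):
    """Integrate gravity-compensated accelerations and angular rates
    over a fixed timestep dt. Returns (positions, attitudes), the
    attitudes being accumulated small-angle rotations — a deliberately
    simplified stand-in for the patent's 'motion algorithm fusion'."""
    v = np.zeros(3) if v0 is None else np.asarray(v0, float)
    p = np.zeros(3) if p0 is None else np.asarray(p0, float)
    att = np.zeros(3)
    positions, attitudes = [], []
    for a, w in zip(accels, gyros):
        v = v + np.asarray(a, float) * dt    # first integral: velocity
        p = p + v * dt                       # second integral: position
        att = att + np.asarray(w, float) * dt  # attitude from angular rate
        positions.append(p.copy())
        attitudes.append(att.copy())
    return np.array(positions), np.array(attitudes)
```

With a constant 1 m/s² acceleration along x and a 0.5 rad/s yaw rate sampled at 10 Hz for one second, this Euler scheme reports 0.55 m of travel and 0.5 rad of yaw.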
3) fusing the inertial navigation spatial attitude and acceleration-integral position movement data with the spatial attitude and relative position data acquired by image analysis to obtain the position and attitude of the captured object;
4) in this fusion, when the image is judged not to be occluded, the resolved spatial position and attitude are those from image analysis; when the image is judged to be occluded, the optical spatial attitude and position at the previous moment are taken as the reference data, and the position and attitude increments calculated by the inertial navigation system since the moment of optical occlusion are added to obtain the real-time spatial attitude and position.
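The occlusion-fallback rule of step 4) can be sketched as a small state machine: while the laser projection is visible the optical solution is authoritative, and while it is occluded the last optical pose plus the inertial increments accumulated since the occlusion moment is reported. The six-element pose vector and the class below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

class HybridPoseFuser:
    """Sketch of the step-4 fusion rule: optical pose wins when visible;
    during occlusion, dead-reckon from the last optical reference by
    accumulating inertial increments."""
    def __init__(self):
        self.last_optical = np.zeros(6)    # x, y, z, roll, pitch, yaw
        self.inertial_delta = np.zeros(6)  # increments since occlusion

    def update(self, optical_pose, inertial_increment, occluded):
        if not occluded:
            # optical measurement available: adopt it and reset the
            # inertial accumulator to this new reference
            self.last_optical = np.asarray(optical_pose, float)
            self.inertial_delta = np.zeros(6)
            return self.last_optical
        # occluded: last optical pose plus accumulated inertial change
        self.inertial_delta = self.inertial_delta + np.asarray(inertial_increment, float)
        return self.last_optical + self.inertial_delta
```

A short occlusion therefore degrades the solution gradually (inertial drift accumulates only from the occlusion moment), instead of losing tracking outright as a purely optical scheme would.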
Further, in step 1), the multi-color laser includes the four colors red, green, blue and violet, and is a fan-shaped cross-line laser diffracted by the optical diffraction sheet.
Further, in step 1), acquiring the images includes: the cameras, located on different surfaces of the operating space, each read the laser projection on one surface; the color, direction and position of each straight light-shadow line formed on a projection plane by a laser are analyzed; and the spatial information is resolved from the laser lines obtained by at least four different cameras, the crossing position determined thereby being the emission position of the laser light source.
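Each fan-shaped laser sheet lies in a plane that contains the emitter, and the straight light-shadow a camera observes on a known wall recovers that plane; the "crossing position" can therefore be computed as the intersection of three or more such planes. The least-squares formulation below is an illustrative sketch only (recovering each plane from the camera images is assumed to have been done already):

```python
import numpy as np

def intersect_fan_planes(normals, offsets):
    """Each fan laser sheet is a plane n·x = d containing the emitter.
    Given the normals n_i and offsets d_i of three or more such planes,
    the emitter position is their least-squares intersection."""
    A = np.asarray(normals, float)   # one plane normal per row
    b = np.asarray(offsets, float)   # corresponding plane offsets
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With an overdetermined but consistent set of planes (e.g. four planes from four cameras, as the text requires), the least-squares solution coincides with the exact crossing point; with noisy planes it returns the best-fit position.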
One specific embodiment of the present invention may be:
the invention relates to two main components, namely a mobile label device (a wireless laser emission system and an inertial navigation positioning system label) and a camera shooting analysis system.
The laser emission system is fixed at the moving end and is arranged on an object to be tracked through attachment, as shown in fig. 1, the camera analysis system is fixed on a framework of a VR desktop operation space, as shown in fig. 2, each surface of the framework is provided with an organic frosted glass plate or tensioned gauze and is responsible for receiving laser lines to form projection, laser emitted by the laser emission system is in a fan-shaped surface shape, and the projection can form a linear light shadow on a plane, as shown in fig. 3.
The laser emitted by the laser emitting system is projected on a projection plane of a VR operation space where the camera shooting analysis system is located, the camera set respectively reads the laser projection of each surface, analyzes the color and direction position information of a straight line of a light shadow formed by the laser projected on a ground glass plate, performs spatial information calculation through laser lines obtained by four different cameras to determine the crossed position as the position where the mobile label is located at the position where the laser light source emits, and then obtains the relative spatial attitude of the mobile label relative to the area where the camera shooting analysis system is located through spatial attitude information calculation.
The mobile label device is provided with two sets of circuit systems which run in parallel, wherein one is a laser emitting system, and the other is an inertial navigation positioning system, wherein the laser emitting system consists of a discrete semiconductor laser emitter 13 and a line laser optical diffraction sheet 14 which can help to form sector laser. The laser emitter 13 is composed of red, green, blue and purple lasers, and the laser projection effect is shown in fig. 3. The inertial navigation positioning system comprises an integrated chip 15 consisting of an integrated three-axis mems accelerometer and a three-axis mems angular velocity meter, data of the integrated three-axis mems accelerometer and three-axis mems angular velocity meter are transmitted to a microprocessor MCU16 through an IIC bus, the MCU obtains spatial attitude data of a tag through fusion of an action algorithm, the spatial attitude data is transmitted to a local area network through a WIFI radio frequency antenna and is transmitted to a computer for attitude analysis, the spatial attitude of a fixed object is obtained, and laser emission, inertial navigation calculation and WIFI radio frequency of the positioning tag are powered by a 17-welding rechargeable button battery.
The camera group 22 transmits the signals acquired by the video to the computer via the USB hub for image analysis, and the computer firstly identifies lines of four laser colors by analyzing the images acquired by the cameras, then determines the starting point and the ending point of the spatial position projected on each surface, and acquires the spatial focus position and the attitude orientation of the label by means of three-dimensional space analysis. And the data of the inertial navigation positioning system is transmitted to a computer for analysis through a local area network through WIFI, and then the computer fuses the spatial attitude data of inertial navigation and the position movement data of the acceleration data integral with the tag spatial attitude and relative position data obtained by image analysis to obtain the position and attitude of the captured object.
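The first stage of the image analysis described above — identifying the line of each laser color and its start and end points — can be sketched as follows. The exact-RGB match and extreme-pixel endpoint heuristic are simplifying assumptions; a real system would threshold in a robust color space and fit the line to sub-pixel accuracy.

```python
import numpy as np

def extract_line_endpoints(image, color):
    """Find pixels matching one laser color in an RGB image array and
    return the start and end points of the projected line segment,
    taken here as the extreme matching pixels along x (illustrative
    heuristic only)."""
    mask = np.all(image == np.asarray(color), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # this color's line is occluded or absent
    order = np.lexsort((ys, xs))  # sort matches by x, then y
    i0, i1 = order[0], order[-1]
    return (int(xs[i0]), int(ys[i0])), (int(xs[i1]), int(ys[i1]))
```

Running this once per laser color on each camera's frame yields the per-surface line segments that feed the three-dimensional spatial analysis; a `None` result for every color on a surface is one possible cue that the tag is occluded from that camera.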
Compared with a pure radio-frequency scheme, which cannot acquire spatial attitude at all, the positioning precision of the invention is higher, and single-point spatial attitude acquisition is achieved.
Compared with traditional optical positioning schemes, the invention solves the problem that large light-emitting positioning points prevent high-precision positioning in small spaces: it provides an optical high-precision attitude acquisition mode and uses multi-color cross-line lasers to guarantee higher positioning precision.
Although the positioning method of this scheme also uses optical image positioning, it differs from other optical-image methods in the following aspects:
1) High precision: cross-line laser emission points replace optical-ball emitters, achieving higher crossing-based positioning precision;
2) Tracking of tiny objects and a wide application range: the laser emission assembly is attached to the tracked object with a flexible circuit board, so it is small and easily fixed to surfaces of objects of various shapes;
3) Acquisition of the spatial attitude of the object: with multi-color-coded cross-line lasers, the spatial orientation of the object can be obtained by calculation;
4) A miniature MEMS sensor acquires the motion information of the object wirelessly and synchronously, so that when the identified object is briefly occluded, its position and attitude can still be tracked accurately.
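Point 4) amounts to a simple source-selection rule at fusion time: trust the optical pose whenever the laser lines are visible, and otherwise dead-reckon from the last optical fix plus the inertial increment accumulated since occlusion. A hypothetical sketch (the `(x, y, yaw)` pose triple and function name are illustrative, not from the patent):

```python
def fuse_pose(optical_pose, last_optical_pose, inertial_delta):
    """Select the pose source per the occlusion rule: use the optical
    pose when the laser lines are detected; otherwise dead-reckon from
    the last optical fix plus the integrated inertial increment."""
    if optical_pose is not None:          # lines visible in the image
        return optical_pose
    # Occluded: last optical fix + inertial change since occlusion
    return tuple(p + d for p, d in zip(last_optical_pose, inertial_delta))

# Occluded frame: the inertial increment carries the pose forward
pose = fuse_pose(None, (1.0, 2.0, 0.5), (0.1, 0.0, 0.02))
```

Because the inertial increment is only trusted for the short occlusion interval, the accelerometer's double-integration drift stays bounded.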
Compared with the prior art, the invention addresses: the poor precision, high cost and difficult deployment of radio-frequency positioning in desktop-scale small spaces; the complex arrangement of the conventional Lighthouse indoor positioning technology, which requires multiple positioning points fixed to the object and large-area deployment and is therefore unsuitable for tiny objects; and the problem that traditional optical-camera positioning requires large luminous balls on the marked object, cannot achieve high-precision positioning in small spaces, and cannot determine the object's attitude because only the ball is tracked.
The beneficial effects of the invention are: 1) the multi-color cross-line laser mode provides spatial attitude in addition to positioning; 2) optical-pose and inertial-navigation compensation avoids the loss of positioning and spatial attitude during brief occlusion in optical positioning; 3) frosted glass or gauze receives the projected image of the cross laser, so the projection lines are clearly visible.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention in any way; it will be apparent to those skilled in the art that various modifications, equivalent variations or improvements may be made without departing from the spirit and scope of the present invention.

Claims (10)

1. An object hybrid positioning system suitable for use in an immersive VR operating space, comprising a mobile tag device and a camera analysis system, wherein:
the mobile tag device is used for being fixed on a moving object in an operation space and comprises a laser emitting system and an inertial navigation positioning system, wherein the laser emitting system is used for emitting cross-line lasers of a plurality of colors, the inertial navigation positioning system comprises a microprocessor and an integrated accelerometer and gyroscope, and the microprocessor is used for fusing the data collected by the integrated accelerometer and gyroscope with a motion algorithm to obtain spatial attitude data and inertially referenced displacement of the tagged object and transmitting them to a computer for analysis;
the camera analysis system comprises a camera group and projection planes mounted on the operation-space frame, and the camera group is used for collecting the laser projection signals cast onto the projection planes and transmitting the collected signals to the computer for image analysis.
2. The object-mixing positioning system suitable for use in an immersive VR operating space of claim 1, wherein the laser emitting system includes a laser emitter and a laser optical diffraction sheet for shaping the laser light emitted by the laser emitter into fan-shaped cross-line lasers.
3. The object-mixing positioning system suitable for use in an immersive VR operating space of claim 2, wherein the laser emitter is a semiconductor laser emitter capable of emitting red, green, blue, and violet lasers;
and/or the laser optical diffraction sheet is a straight-line laser optical diffraction sheet.
4. The object hybrid positioning system for an immersive VR operating space of any of claims 1 to 3, wherein the integrated accelerometer and gyroscope are an integrated chip combining a three-axis MEMS accelerometer and a three-axis MEMS gyroscope, whose data transfers are distinguished in external communication by different data addresses.
5. The object hybrid positioning system suitable for the immersive VR operating space of any of claims 1 to 4, wherein a WIFI data transmitting terminal is provided on the inertial navigation positioning system for transmitting inertial navigation data to a computer via a local area network;
and/or the mobile tag device is further provided with a power supply for powering the laser emitting system and the inertial navigation positioning system.
6. The object-mixing positioning system suitable for use in an immersive VR operation space of any of claims 1 to 5, wherein the camera group includes at least four cameras mounted respectively on the upper portion and on at least three sides of the operation space;
and/or the projection plane is a frosted glass plate or a tensioned gauze for receiving the laser lines to form a projection.
7. The object hybrid positioning system for an immersive VR operation space of any of claims 1 to 6, further comprising a computer analysis module, the module being a data analysis program running on a computer for obtaining the position and attitude of a captured object by fusing the acquired projection images with the inertial spatial attitude data, the computer analysis module being configured to: identify the lines of each laser color by analyzing the images acquired by the cameras, then determine the start and end points of the spatial positions projected by the lines of each laser color on each surface, and obtain the spatial focus position and attitude orientation of the tag by three-dimensional spatial analysis; and simultaneously fuse the spatial attitude data of the inertial navigation positioning system and the position-movement data integrated from the acceleration data with the tag spatial attitude and relative position data obtained from image analysis, to obtain the position and attitude of the captured object.
8. An object mixing and positioning method suitable for an immersive VR operation space, comprising the following steps:
1) acquiring and analyzing a video signal of the multicolor laser shot by the camera set;
wherein the video signal comprises laser line images projected from the laser emitter of the moving tag object onto the projection planes of the operation space;
the parsing of the video signal comprises: first identifying the lines of each laser color, then determining the start and end points of the spatial positions projected by the lines of each color on each surface, and obtaining the spatial focus position and attitude orientation of the moving tag object by three-dimensional spatial analysis;
2) simultaneously receiving spatial attitude data of the inertial navigation system;
the spatial attitude data of the inertial navigation system are obtained from the inertial navigation system mounted on the moving tag object, and comprise spatial attitude data and position-movement data integrated from the acceleration data, obtained by fusing the acceleration and angular velocity of the moving tag object with a motion algorithm;
3) obtaining the position and attitude of the captured object by fusing the inertial spatial attitude and the position-movement data integrated from the acceleration data with the spatial attitude and relative position data obtained from image analysis;
4) when fusing the inertial spatial attitude and integrated position with the spatial attitude and position from image analysis: if the image is judged not to be occluded, the resolved spatial position and attitude are those of the image analysis; if the image is judged to be occluded, the optical spatial attitude and position at the previous moment serve as reference data, and the position and spatial-attitude increments calculated by the inertial navigation system after the moment of optical occlusion are added to obtain the real-time spatial attitude and position.
9. The method of claim 8, wherein in step 1), the multi-color laser comprises the four colors red, green, blue and violet, and is fan-shaped cross-line laser diffracted by an optical diffraction sheet.
10. The object-mixing localization method suitable for use in an immersive VR operation space of claim 8 or 9, wherein in step 1), the obtaining of the image comprises: the cameras are positioned on different surfaces of the operation space and each reads the laser projection on its own surface; the color, direction and position information of the light-shadow lines formed on the projection planes by the lasers is analyzed; and spatial information is resolved from the laser lines obtained by at least four different cameras to determine the crossing position as the emission position of the laser light source.
CN202010073659.5A 2020-01-22 2020-01-22 Object mixing positioning system and positioning method suitable for immersive VR operation space Pending CN111156996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010073659.5A CN111156996A (en) 2020-01-22 2020-01-22 Object mixing positioning system and positioning method suitable for immersive VR operation space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010073659.5A CN111156996A (en) 2020-01-22 2020-01-22 Object mixing positioning system and positioning method suitable for immersive VR operation space

Publications (1)

Publication Number Publication Date
CN111156996A true CN111156996A (en) 2020-05-15

Family

ID=70565002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010073659.5A Pending CN111156996A (en) 2020-01-22 2020-01-22 Object mixing positioning system and positioning method suitable for immersive VR operation space

Country Status (1)

Country Link
CN (1) CN111156996A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546247A (en) * 2022-10-21 2022-12-30 北京中科深智科技有限公司 Motion capture method based on LightHouse positioning system

Similar Documents

Publication Publication Date Title
CN104217439B (en) Indoor visual positioning system and method
Foxlin et al. VIS-Tracker: A Wearable Vision-Inertial Self-Tracker.
CN209230639U (en) A kind of coordinate location device based on laser navigation
JP2014066728A (en) Device and method for measuring six degrees of freedom
CN110230983A (en) Antivibration formula optical 3-dimensional localization method and device
WO2010054519A1 (en) A device and method for measuring 6 dimension posture of moving object
US9588214B2 (en) Sensing direction and distance
CN110274594B (en) Indoor positioning equipment and method
CN207164367U (en) AR glasses and its tracing system
CN104819718B (en) 3D photoelectric sensing alignment systems
CN106405495A (en) VR equipment positioning system and positioning method thereof
CN113191388A (en) Image acquisition system for target detection model training and sample generation method
CN113985390B (en) Optical positioning system and light following method
CN109343000A (en) A kind of indoor visible light imaging positioning system and localization method
Lam et al. Visible light positioning for location-based services in industry 4.0
CN109029423A (en) Substation's indoor mobile robot navigation positioning system and its navigation locating method
CN107462248A (en) A kind of indoor optical positioning system and its application method
CN111156996A (en) Object mixing positioning system and positioning method suitable for immersive VR operation space
CN109032329A (en) Space Consistency keeping method towards the interaction of more people's augmented realities
CN111596259A (en) Infrared positioning system, positioning method and application thereof
CN209530065U (en) A kind of coordinate location device based on image
US20230069480A1 (en) Uav positioning system and method for controlling the position of an uav
CN212256370U (en) Optical motion capture system
CN212058799U (en) Object mixing and positioning system suitable for immersive VR operation space
CN109341687A (en) It is a kind of based on mobile phone any level towards single LED visible light communication indoor orientation method of angle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination