CN106843460B - Multi-target position capture and positioning system and method based on multiple cameras - Google Patents
- Publication number
- CN106843460B (application CN201611144669.3A)
- Authority
- CN
- China
- Prior art keywords
- coordinate
- camera
- led light
- server
- glasses
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/14—Digital output to display device; cooperation and interconnection of the display device with other functional units
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
The invention discloses a multi-target position capture and positioning system based on multiple cameras, comprising a server, at least two cameras, multiple VR glasses, clients, and LED light balls; the numbers of clients, light balls, and VR glasses are the same. The LED light balls indicate each user's position to the cameras. The server receives the image data sent by the cameras and processes it into user position data. Each client reads the data sent by its VR glasses and forwards it to the server; it also receives and processes the user position data sent by the server, converts the position data of all users into virtual-environment coordinates, generates the virtual environment or models in real time, and renders the virtual picture to the VR glasses in real time. The invention overcomes the limitation of existing VR glasses, which support only small-range head movement and handle-controlled locomotion, so that users are completely freed from the previous constraints on movement and are provided with a virtual-reality experience close to a real environment; it also supports multi-user interaction within the same virtual environment.
Description
Technical field
The invention belongs to the field of multi-position tracking for virtual reality, and relates in particular to a positioning system and method, especially a multi-target position capture and positioning system and method based on multiple cameras.
Background art
Virtual reality (VR) is a high technology that has emerged in recent years: a computer simulates a three-dimensional virtual environment and supplies artificial stimuli to the user's senses of vision, hearing, and touch, letting the user experience the virtual environment as if present in person and even interact with it. In recent years virtual reality has found substantial application in certain specialized fields such as the military, medicine, and even real-estate development. The Oculus Rift, an immersive VR headset developed by the U.S. company Oculus, has narrowed the distance between virtual reality and ordinary households. The Oculus Rift's built-in high-resolution display provides the user with a 1280x800 picture, a horizontal viewing angle of 90 degrees and a vertical viewing angle of 110 degrees, and its 9-degree-of-freedom posture and motion sensors track the head's movement trajectory at 1000 Hz, giving the user an all-round visual experience and a completely new virtual sensation. Following the release of the Oculus Rift, large companies and start-ups at home and abroad have all launched their own VR glasses. They share a common shortcoming: because head-position capture is limited, the user can only sit in a chair and interact or control movement through a keyboard, mouse, or handle; the user cannot feel truly present or merge into the virtual environment, so the experience is poor. VR glasses, as the supplier of virtual vision, can only be one part of the overall virtual-reality framework; how to project the user into the virtual environment and control the virtual character's movement through the user's own movement is the next urgent problem for virtual reality to solve. The most straightforward approach to this problem is indoor positioning and position-capture technology. Existing indoor wireless positioning systems mainly use short-range wireless technologies such as mobile base stations, infrared, ultrasound, Bluetooth, Wi-Fi, and RFID; this equipment is expensive, its installation and configuration are complicated, and its precision cannot meet the requirements of a virtual-reality experience. At present, capturing the movement of the glasses' position through a camera using computer-vision technology is widely adopted by large companies, but owing to the limitations of a single camera it can only capture small-range, simple head motion of a single target and cannot be extended to an entire room or a larger region.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the present invention is to provide a multi-target position capture and positioning system and method based on multiple cameras. The system and method of the invention can capture the omnidirectional, three-dimensional movement of several users and feed it back accurately to the virtual environment in real time.
To achieve the above goals, the present invention adopts the following technical scheme:
A multi-target position capture and positioning system based on multiple cameras comprises a server, clients, and VR glasses, and is characterized in that there are multiple clients and VR glasses, and it further comprises at least two cameras and multiple LED light balls; the numbers of clients, LED light balls, and VR glasses are the same; each camera is connected to the server; each pair of VR glasses is connected to a client; the server and the clients are connected through a wireless network; the cameras are evenly distributed on the ceiling of the experience room; the server is installed inside the experience room.
Each of the above components performs the following functions:
The cameras collect image data of the users wearing light balls and transmit it to the server.
The LED light balls emit light so that the cameras can locate each user's position.
The server initializes the VR glasses, LED light balls, and cameras; it receives the image data sent by all cameras, processes it, and distributes the resulting user position data to each client.
Each client is a portable computer; it reads the data sent by its VR glasses and forwards it to the server, and it receives and processes the user position data sent by the server, converts the position data of all users into virtual-environment coordinates, generates the virtual environment or models in real time, and finally renders the virtual picture to the VR glasses in real time.
The VR glasses automatically obtain their built-in gyroscope, magnetometer, and gravimeter readings in real time, and display the 3D virtual scene, showing the wearer and the positions of the other users.
Further, the server comprises the following connected functional modules:
A denoising and binarization module, which applies Gaussian smoothing to the image data transmitted by all cameras to remove noise, and processes the denoised images to obtain binary images;
A contour-image extraction module, which processes each binary image to obtain contour images and the number of contours in the binary image;
A contour-coordinate extraction module, which extracts a coordinate sequence for each contour image;
A light-ball extraction module, which fits a circular curve to each coordinate sequence using the least-squares method and judges whether the sequence conforms to a circle; if it does, the fitted radius is taken as the LED light ball's radius and the ball's centre coordinate is recorded; otherwise the coordinate sequence is discarded;
A light-ball coordinate calculation module, which determines the LED light ball's radius and obtains each ball's coordinate relative to the coordinate system of the capturing camera;
A true-three-dimensional-coordinate calculation module, which, from the obtained coordinate and the relative coordinate of the capturing camera, calculates in real time the true three-dimensional coordinate in the room of the user corresponding to each LED light ball.
Further, the light-ball coordinate calculation module determines the LED light ball's radius and obtains each ball's coordinate relative to the capturing camera's coordinate system, specifically as follows:
According to camera projection theory and three preset parameters (the camera's wide angle, the wide-angle projection angle, and the light-ball radius), the distance from each LED light ball to the camera is calculated with a Pearson-type distribution algorithm, finally yielding each ball's coordinate value (x, y, distance) relative to the capturing camera's coordinate system, where radius = R and the remaining parameters are internal parameters of the function.
A multi-target position capture and positioning method based on multiple cameras comprises the following steps:
Step 1: The server initializes the VR glasses, LED light balls, and cameras, and establishes a TCP connection with each client; at the same time it assigns each user an LED light ball of a different colour.
Step 2: Each camera shoots in real time; when an LED light ball appears within a camera's field of view, the camera transmits the image data to the server. Each client reads the attitude gyroscope data, magnetometer data, and gravimeter readings of the VR glasses connected to it and transmits them to the server in real time.
Step 3: From the received image data the server calculates, in real time, each LED light ball's plane coordinate and apparent size in the capturing camera's field of view, and then, from that plane coordinate and size, the true three-dimensional coordinate of each ball in the room.
Step 4: The server packages each LED light ball's true three-dimensional room coordinate together with the corresponding VR glasses' gyroscope attitude data, magnetometer data, and gravimeter data, and distributes the package in real time to the client of each pair of VR glasses.
Step 5: Each client converts the true three-dimensional coordinates of all users into virtual-environment coordinates, generates the virtual environment or models in real time, and finally renders the virtual picture to the user's VR glasses in real time.
Further, in Step 1, the server's camera initialization includes initializing the camera coordinates, as follows:
One camera's two-dimensional base coordinate is set to (0, 0). A user wearing an LED light ball walks one lap around the experience room; each other camera, from its correlation with the benchmark camera and the light-ball coordinates acquired in their overlap region, generates and saves its own relative-coordinate parameters. The relative coordinate (x_i, y_i) of the i-th camera is calculated by the formula
(x_i, y_i) = (x_b - x, y_b - y)
where (x_b, y_b) is the standard indoor plane coordinate of the LED light ball as captured by the benchmark camera, (x, y) is the light-ball coordinate in the image captured by the i-th camera, i = 2, ..., p, and p is the number of cameras in the experience room.
Further, the specific steps of Step 3 are as follows:
Step 31: The server applies Gaussian smoothing to the image data transmitted by all cameras to remove noise, then processes the denoised images to obtain binary images.
Step 32: The server processes each binary image to obtain contour images and the number of contours in the binary image.
Step 33: A coordinate sequence is extracted for each contour image.
Step 34: A circular curve is fitted to each coordinate sequence using the least-squares method, and each sequence is judged for conformance to a circle; if it conforms, the fitted radius is taken as the LED light ball's radius and the ball's centre coordinate is recorded; otherwise the coordinate sequence is discarded.
Step 35: The LED light ball's radius is determined and each ball's coordinate relative to the capturing camera's coordinate system is obtained.
Step 36: From the coordinate value obtained in Step 35 and the relative coordinate of the capturing camera, the true three-dimensional room coordinate of the user corresponding to each LED light ball is calculated.
Further, Step 35 determines the LED light ball's radius and obtains each ball's coordinate relative to the capturing camera's coordinate system, specifically: according to camera projection theory and three preset parameters (the camera's wide angle, the wide-angle projection angle, and the LED light ball's radius), the distance from each ball to the camera is calculated with a Pearson-type distribution algorithm, finally yielding each ball's coordinate value (x, y, distance) relative to the capturing camera's coordinate system, where radius = R and the remaining parameters are internal parameters of the function.
The beneficial effects of the present invention are as follows:
1. It overcomes the limitation of existing VR glasses, which support only small-range head movement and handle-controlled locomotion; by combining multiple devices with a spacious venue, the user is completely freed from the previous constraints on movement and is provided with a virtual-reality experience close to a real environment.
2. Compared with current single-user, single-machine virtual-reality experiences, the cooperation of multiple clients, multiple VR glasses, and multiple light balls lets several users interact within the same virtual environment; by capturing all users' location information they can see one another and engage in joint activities, making the virtual-reality experience far richer and more varied.
3. It links real-environment coordinates with virtual-environment coordinates, genuinely coupling the virtual and the real and opening more possibilities for virtual-reality experiences.
4. The environment is simple to set up; for a different environment only the number of cameras and their parameters need to be changed, so the whole system has high scalability and portability.
Detailed description of the invention
Fig. 1 is a flow chart of the multi-camera-based multi-target position capture and positioning method of the invention.
Fig. 2 is a schematic diagram of the multi-camera-based multi-target position capture and positioning system of the invention.
Fig. 3 is a schematic diagram of an indoor-equipment embodiment of the virtual-reality multi-target tracking and positioning system of the invention.
Fig. 4 illustrates a camera capturing a light-ball image and calculating its plane coordinate and radius. Figures (a) and (b) are the pictures captured by two adjacent cameras when a light ball appears simultaneously in their overlapping coverage; figure (c) is the picture captured when two light balls appear simultaneously in one camera's field of view; figures (d) and (e) are the pictures captured by the two cameras after the light ball has left their overlapping shooting area.
Fig. 5 is a flow chart of light-ball tracking at the server in the invention.
The invention is further elaborated below with reference to the accompanying drawings.
Specific embodiment
As shown in Fig. 1 and Fig. 2, the multi-camera-based multi-target position capture and positioning system of the invention comprises at least two cameras 31, a server 33, multiple clients 36, multiple LED light balls 34, and multiple VR glasses 35; the numbers of clients 36, LED light balls 34, and VR glasses 35 are the same. Each camera 31 is connected to the server 33 by a USB cable; the VR glasses 35 are connected to a client 36 by HDMI and USB cables; the server 33 and the clients 36 are connected through a wireless network. The cameras 31 are evenly distributed on the ceiling of the experience room; the server 33 is installed inside the room; an LED light ball 34 and a pair of VR glasses 35 are worn on each user's head; a client 36 is carried on each user's body.
Each of the above components performs the following functions:
The cameras 31 collect image data of the users wearing light balls and transmit it to the server 33.
The LED light balls 34 emit light to provide each user's position to the cameras; the ball colours correspond one-to-one with the users.
The server 33 initializes the VR glasses, LED light balls, and cameras; it receives and processes the image data sent by all cameras 31 and distributes the resulting user position data to each client 36 over Wi-Fi.
Each client 36 is a portable computer; it reads the attitude gyroscope data, magnetometer data, and gravimeter readings sent by its VR glasses 35 and forwards them to the server; it also receives and processes the user position data sent by the server, converts the position data of all users into virtual-environment coordinates, generates the virtual environment or models in real time, and finally renders the virtual picture to the user's VR glasses in real time.
The VR glasses 35 automatically obtain their built-in gyroscope, magnetometer, and gravimeter readings in real time, and display the 3D virtual scene, showing the wearer and the positions of the other users.
Preferably, the server comprises the following connected functional modules:
A denoising and binarization module: Gaussian smoothing is applied to the image data transmitted by all cameras to remove noise; then, according to the initialized RGB colour of the LED light ball worn by each user, a fixed-threshold operation is applied to the corresponding single colour channel of each image to obtain a binary image.
A contour-image extraction module: the server processes each binary image with a binary-image contour-extraction algorithm to obtain contour images and the number of contours in the binary image.
A contour-coordinate extraction module: a coordinate sequence {x_i, y_i}_j is extracted for each contour image, where j = 1, 2, ..., n, n being the number of contour coordinate sequences, and i = 1, 2, ..., m, m being the number of point coordinates contained in the j-th contour's coordinate sequence.
A light-ball extraction module: a circular curve R^2 = (x - A)^2 + (y - B)^2 is fitted to each coordinate sequence using the least-squares method, and each sequence is judged for conformance to a circle; if it conforms, the fitted radius R is taken as the LED light ball's radius and the ball's centre coordinate (A, B) is recorded; otherwise the coordinate sequence is discarded, filtering out non-circular contours.
The least-squares method used in this module allows complete statistics even when an LED light ball is partially occluded: the overall contour area can be estimated from the partial contour image, using the area of the region enclosed by the contour arc and the chord connecting its two end points, and the estimated overall contour then yields the light ball's coordinate and radius. In short, judging contours with the least-squares method avoids missing a partially occluded LED light ball, so every user can be displayed completely and accurately and the experience remains accurate and complete.
A light-ball coordinate calculation module: according to camera projection theory, the preset camera wide angle and wide-angle projection angle, and the light-ball radius obtained above, the distance from each LED light ball to the camera is calculated with a Pearson-type distribution algorithm, finally yielding each ball's coordinate value (x, y, distance) relative to the capturing camera's coordinate system, where radius = R and the rest are internal parameters of the function.
A true-three-dimensional-coordinate calculation module: from the coordinate value (x, y, distance) and the relative coordinate (x_i, y_i) of the capturing camera, the true three-dimensional room coordinate (x + x_i, y + y_i, distance) of the user corresponding to each LED light ball is calculated.
As shown in Fig. 3, the multi-camera-based multi-target position capture and positioning method provided by the invention comprises the following steps:
Step 1: The server initializes the VR glasses, LED light balls, and cameras, and establishes a TCP connection with each client; at the same time it assigns each user a light ball of a different colour, the ball colours matching user identities one-to-one.
The server's camera initialization includes initializing the camera coordinates, frequency, and brightness. Optionally, the concrete coordinate-initialization operation for each camera is: one camera's two-dimensional base coordinate is set to (0, 0); a user wearing an LED light ball walks one lap around the experience room; each other camera, from its correlation with the benchmark camera and the light-ball coordinates acquired in their overlap region, generates and saves its own relative-coordinate parameters. The relative coordinate (x_i, y_i) of the i-th camera is calculated by the formula
(x_i, y_i) = (x_b - x, y_b - y)
where (x_b, y_b) is the standard indoor plane coordinate of the LED light ball as captured by the benchmark camera, (x, y) is the light-ball coordinate in the image captured by the i-th camera, i = 2, ..., p, and p is the number of cameras in the experience room.
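As a minimal sketch, the relative-coordinate formula above reduces to one subtraction per camera; the numeric coordinates below are illustrative, not taken from the patent:

```python
def camera_offset(benchmark_xy, observed_xy):
    """Relative coordinate (x_i, y_i) = (x_b - x, y_b - y): the benchmark
    camera's plane coordinate of the LED light ball minus camera i's own
    image coordinate of the same ball, recorded while both cameras see
    the ball in their overlap region."""
    xb, yb = benchmark_xy
    x, y = observed_xy
    return (xb - x, yb - y)

# During the one-lap calibration walk each non-benchmark camera saves its
# offset once; afterwards its detections map into room-plane coordinates.
print(camera_offset((120.0, 80.0), (35.0, 22.0)))  # (85.0, 58.0)
```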
In the present invention, because a light ball is worn on each user, the cameras can capture motion images and each user's position rapidly and accurately while the user moves.
Step 2: Each camera shoots in real time; when an LED light ball appears within a camera's field of view, the camera transmits the image data to the server. Each client reads the attitude gyroscope data, magnetometer data, and gravimeter readings of the VR glasses connected to it and transmits them to the server in real time.
Step 3: From the received image data the server calculates, in real time, each LED light ball's plane coordinate and apparent size in the capturing camera's field of view, and then, from that plane coordinate and size, the true three-dimensional coordinate of each ball in the room.
Optionally, the server calculates each LED light ball's plane coordinate and size in the capturing camera's field of view using computer-vision algorithms; testing showed the open-source algorithms provided by OpenCV to be well suited.
The specific steps of Step 3 are as follows:
Step 31: The server applies Gaussian smoothing to the image data transmitted by all cameras to remove noise; then, according to the initialized RGB colour of the LED light ball worn by each user, a fixed-threshold operation is applied to the corresponding single colour channel of the image to obtain a binary image.
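Step 31's fixed-threshold operation on one colour channel can be sketched as follows (NumPy; the threshold value 128 is the one used later in the embodiment, and the small array stands in for a denoised camera frame):

```python
import numpy as np

def binarize_channel(channel, threshold=128):
    """Fixed-threshold binarisation of the single colour channel that
    matches the user's assigned ball colour; Gaussian smoothing of the
    frame is assumed to have been applied beforehand."""
    return (np.asarray(channel) >= threshold).astype(np.uint8)

# Tiny stand-in for the red channel of a denoised frame.
red = np.array([[10, 200, 130],
                [90, 255,  40]])
print(binarize_channel(red).tolist())  # [[0, 1, 1], [0, 1, 0]]
```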
Step 32: The server processes each binary image with a binary-image contour-extraction algorithm to obtain contour images and the number of contours in the binary image.
The principle of the binary-image contour-extraction algorithm is to produce the contour by hollowing out interior pixels: if all eight neighbouring pixels of a bright pixel are bright, that pixel is an interior pixel; otherwise it is a contour point. Setting all interior pixels to the background colour completes the contour extraction. Because the LED light balls used in the invention are circular, contour shapes are judged so that non-circular contours can be filtered out.
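The interior-pixel hollowing just described can be sketched directly in NumPy; this follows the stated rule (a bright pixel whose eight neighbours are all bright is interior) rather than any particular library routine:

```python
import numpy as np

def hollow_contour(binary):
    """Keep only contour points: a bright pixel whose 8 neighbours are all
    bright is an interior pixel and is set to the background colour."""
    b = (np.asarray(binary) > 0).astype(np.uint8)
    p = np.pad(b, 1)
    # number of bright 8-neighbours of every pixel
    neigh = sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    out = b.copy()
    out[(b == 1) & (neigh == 8)] = 0   # hollow out interior pixels
    return out

# A filled 5x5 square keeps only its 16 border pixels.
img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1
print(int(hollow_contour(img).sum()))  # 16
```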
Step 33: A coordinate sequence {x_i, y_i}_j is extracted for each contour image obtained in Step 32, where j = 1, 2, ..., n, n being the number of contour coordinate sequences, and i = 1, 2, ..., m, m being the number of point coordinates contained in the j-th contour's coordinate sequence.
Step 34: A circular curve R^2 = (x - A)^2 + (y - B)^2 is fitted to each coordinate sequence obtained in Step 33 using the least-squares method, and each sequence is judged for conformance to a circle; if it conforms, the fitted radius R is taken as the LED light ball's radius and the ball's centre coordinate (A, B) is recorded; otherwise the coordinate sequence is discarded, filtering out non-circular contours.
The least-squares method used in this step allows complete statistics even when an LED light ball is partially occluded: the overall contour area can be estimated from the partial contour image, using the area of the region enclosed by the contour arc and the chord connecting its two end points, and the estimated overall contour then yields the light ball's coordinate and radius. In short, judging contours with the least-squares method avoids missing a partially occluded LED light ball, so every user can be displayed completely and accurately and the experience remains accurate and complete.
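A common way to realise Step 34's least-squares circle fit is the algebraic (Kåsa) formulation; the sketch below is one possible implementation, and the residual tolerance is an assumed parameter, not one specified in the patent:

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit: rewrite R^2 = (x-A)^2 + (y-B)^2 as the
    linear system x^2 + y^2 = 2Ax + 2By + c, solve for A, B, c, then
    R = sqrt(c + A^2 + B^2)."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    M = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    (A, B, c), *_ = np.linalg.lstsq(M, xs**2 + ys**2, rcond=None)
    return A, B, float(np.sqrt(c + A * A + B * B))

def is_circular(xs, ys, tol=0.05):
    """Accept the contour as a light ball only if every point lies within
    a relative residual tolerance of the fitted circle."""
    A, B, R = fit_circle(xs, ys)
    r = np.hypot(np.asarray(xs) - A, np.asarray(ys) - B)
    return bool(np.max(np.abs(r - R)) / R < tol), (A, B, R)

# A 120-degree arc (a partially occluded ball) still recovers centre and
# radius, which is why the fit survives partial occlusion.
t = np.linspace(0.0, 2.0 * np.pi / 3.0, 60)
ok, (A, B, R) = is_circular(10 + 4 * np.cos(t), 7 + 4 * np.sin(t))
print(ok, round(A, 3), round(B, 3), round(R, 3))  # True 10.0 7.0 4.0
```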
Step 35: According to camera projection theory, the preset camera wide angle and wide-angle projection angle, and the light-ball radius obtained in Step 34, the distance from each LED light ball to the camera is calculated with a Pearson-type distribution algorithm, finally yielding each ball's coordinate value (x, y, distance) relative to the capturing camera's coordinate system, where radius = R and the remaining parameters are internal parameters of the function.
Step 36: From the coordinate value (x, y, distance) obtained in Step 35 and the relative coordinate (x_i, y_i) of the capturing camera, the true three-dimensional room coordinate (x + x_i, y + y_i, distance) of the user corresponding to each LED light ball is calculated.
The algorithm adjusts the camera coordinates dynamically while the system runs, guaranteeing that the movement of a camera or other causes will not corrupt a user's position, which greatly improves the stability and robustness of the system.
Step 4: The server packages each LED light ball's true three-dimensional room coordinate together with the corresponding VR glasses' gyroscope attitude data, magnetometer data, and gravimeter data, and distributes the package in real time to the client of each pair of VR glasses.
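The patent does not specify a wire format for Step 4's package; a minimal sketch using JSON over the existing TCP connection, with all field names being assumptions, might look like:

```python
import json

def pack_update(positions, imu):
    """Bundle all users' room coordinates with one pair of glasses'
    gyroscope/magnetometer/gravimeter readings into one TCP payload.
    Field names and units are illustrative assumptions."""
    return json.dumps({"positions": positions, "imu": imu}).encode("utf-8")

def unpack_update(payload):
    return json.loads(payload.decode("utf-8"))

msg = pack_update(
    {"red": [1.2, 0.8, 2.9], "blue": [3.1, 2.0, 2.7]},   # per ball colour
    {"gyro": [0.0, 0.1, 0.0], "mag": [22.0, -4.0, 41.0],
     "gravity": [0.0, 0.0, 9.8]},
)
print(unpack_update(msg)["positions"]["blue"])  # [3.1, 2.0, 2.7]
```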
Step 5: For its particular application (e.g. a VR game, a virtual shop, a simulation experience), each client converts the true three-dimensional coordinates of all users into virtual-environment coordinates, generates the virtual environment or models in real time, and finally renders the virtual picture to the user's VR glasses in real time (rendering means that the computer graphics engine presents the three-dimensional picture on the screen of the VR glasses in real time), so that through the VR glasses each user sees himself and the other users moving in the virtual world.
The server judges in real time whether the clients have finished running; if not, the method returns to Step 2.
Embodiment 1:
The inventors carried out experimental verification of the invention with a real scene in a real room; PSEYE test cameras were used in the experiment, with the following parameters:
pseye_distance_parameters = {
    height = 517.281,
    center = 1.297338,
    hwhm = 3.752844,
    shape = 0.4762335,
}
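The Pearson-type distance formula itself is not written out in the text; one plausible reconstruction, assuming the Pearson VII peak function commonly used for mapping a PS Eye ball radius to distance, with the parameter values listed above, is:

```python
def distance_from_radius(radius, height=517.281, center=1.297338,
                         hwhm=3.752844, shape=0.4762335):
    """Pearson VII peak function mapping the fitted ball radius (pixels)
    to camera distance. The functional form is an assumption; only the
    parameter values come from the embodiment above."""
    u = (radius - center) / hwhm
    return height / (1.0 + u * u * (2.0 ** (1.0 / shape) - 1.0)) ** shape

# A nearer ball projects to a larger image radius, so the mapped distance
# falls monotonically as the fitted radius grows past `center`.
print(round(distance_from_radius(1.297338), 3))                 # 517.281
print(distance_from_radius(10.0) > distance_from_radius(20.0))  # True
```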
The concrete scene is as follows:
Step 1, deploying the test site:
In this example, four PSEYE 120° wide-angle cameras were arranged in a 20 m² interior space and connected to the server computer; each camera was mounted 3 m above the floor and covered an area of 10.392 m². Two users wore Oculus DK2 glasses, PS Move light-ball equipment, and portable client computers. The server computer and the clients' portable computers were connected to the same WLAN.
Step 2, device and environment initialization:
The invention only needs one camera to be assigned the base coordinate (0, 0); after the program starts, one user walking a lap around the room completes the coordinate initialization of all cameras, and the initialized coordinates are saved automatically at the server, so in a given room the system needs to be initialized only once, with no repetition required. During client initialization the server establishes a stable TCP connection with each client and assigns each client a light-ball colour to distinguish the users' identities.
Step 3, server-side real-time detection and positioning:
Following the algorithm described in steps 31-36 above, the server computer processes the pictures collected by the 4 PSEye cameras to detect and position the 2 experiencers; at the same time, the server collects the head gyroscope data sent by the clients, packs it together with the position information, and distributes it to the 2 experiencers. The fixed threshold applied to the single-channel image array is 128.
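The fixed-threshold binarization of step 3 can be sketched as follows. This is a minimal single-channel example on a plain list-of-rows image; the real system would operate on full camera frames after Gaussian smoothing, and the function name is illustrative:

```python
# Fixed-threshold binarization as stated in the embodiment: pixels of a
# single-channel (grayscale) image brighter than 128 become foreground.
THRESHOLD = 128  # fixed threshold from the embodiment

def binarize(gray_image):
    """Return a binary image: 255 where pixel > THRESHOLD, else 0."""
    return [[255 if px > THRESHOLD else 0 for px in row]
            for row in gray_image]

frame = [
    [10, 200, 30],
    [250, 129, 90],
]
print(binarize(frame))  # [[0, 255, 0], [255, 255, 0]]
```

A bright LED light ball easily clears a fixed threshold of 128, so its contour survives binarization while the darker room background is suppressed, which is what makes the subsequent contour extraction of steps 32-33 reliable.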
During the experiment we deliberately moved one camera slightly. Because our system dynamically adjusts and updates the camera coordinates, the experiencers did not notice this at all, demonstrating that the system has a degree of robustness and resistance to interference.
Step 4, client-side VR picture rendering:
The 2 experiencers' clients continuously receive their own position coordinates together with the other party's position and head-rotation data, which are then mapped to positions in the virtual environment; each experiencer moves freely in the virtual environment and the two experiencers can see each other. Since our real-room coordinates and virtual-environment coordinates map to each other, we also placed walls, desks, and the like in the virtual environment, corresponding to the real room. The experiencers thus get a more realistic impression, and neither hitting a wall nor a collision between the 2 experiencers occurs.
Finally, the experiencers' trials show that the system accurately captures experiencer positions in real time and feeds them back to the virtual environment; experiencers transition smoothly between camera groups without any jarring. Moreover, the system of the invention is easy to set up: there is no need to set each camera's coordinate value by hand, coordinate parameters are adaptively adjusted at run time, and the experience is not affected by slight changes in camera position or angle, giving high stability and robustness.
Embodiment 2:
The steps of this embodiment are identical to those of embodiment 1; the differences are that the interior space is enlarged to 40 m² and the number of cameras is increased to 8. The experimental results are the same as in embodiment 1.
The actual tests of embodiments 1 and 2 show that the system has a degree of scalability and can handle virtual-reality tasks over larger areas.
Claims (3)
1. A multi-camera-based multi-target position capture and positioning system, characterized in that it comprises a server side, clients, and VR glasses, the clients and VR glasses being plural in number, and further comprises at least two cameras and a plurality of LED light balls; wherein the numbers of clients, LED light balls, and VR glasses are all the same; each camera is connected to the server side; each pair of VR glasses is connected to a client; the server side and the clients are connected via a wireless network; the cameras are evenly distributed on the ceiling of the experience room; and the server side is installed inside the experience room;
The above components perform the following functions:
the cameras collect image data of the light balls worn by the experiencers and transmit it to the server side;
the LED light balls emit light so that the cameras can locate the experiencers;
the server side initializes the VR glasses, LED light balls, and cameras; it receives and processes the image data sent by all cameras, and distributes the resulting experiencer position data to each client;
each client, a portable computer, reads the data sent by its VR glasses and forwards it to the server side; it also receives and processes the experiencer position data sent by the server side, converts the position data of all experiencers into virtual-environment coordinates, generates the virtual environment or models in real time, and finally renders the virtual picture to the VR glasses in real time;
the VR glasses automatically obtain built-in gyroscope data, magnetometer data, and gravimeter values in real time, and display the 3D virtual scene, showing the positions of the experiencer wearing the glasses and of the other experiencers;
the server side comprises the following connected functional modules:
a denoising and binarization module, which applies Gaussian smoothing to the image data transmitted by all cameras to remove noise, and processes the denoised images to obtain binary images;
a contour-image extraction module, which processes the binary images to obtain contour images and the number of contours in each binary image;
a contour coordinate extraction module, which extracts a coordinate sequence from each contour image;
a light-ball extraction module, which fits a circular curve to each coordinate sequence by the least-squares method and judges whether the coordinate sequence conforms to a circular curve; if so, the radius of the fitted curve is taken as the LED light-ball radius and the ball's center coordinate is recorded; otherwise the coordinate sequence is discarded;
a light-ball coordinate calculation module, which determines the LED light-ball radius and obtains each LED light ball's coordinate relative to the coordinate system of the capturing camera;
a light-ball true-three-dimensional-coordinate calculation module, which, from the obtained coordinates and the relative coordinates of the capturing cameras, computes in real time the true three-dimensional coordinate in the room of the experiencer corresponding to each LED light ball;
wherein the light-ball coordinate calculation module determines the LED light-ball radius and obtains each LED light ball's coordinate relative to the capturing camera's coordinate system specifically as follows:
according to the camera projection principle, using the three preset parameters of camera wide angle, wide-angle projection angle, and light-ball radius, each LED light ball's distance to the camera is calculated with a Pearson-type distribution algorithm, finally yielding each LED light ball's coordinate value (x, y, distance) relative to the capturing camera's coordinate system;
where radius = R; distance is each LED light ball's distance to the camera; and the remaining parameters in the formula are function arguments: height = 517.281, center = 1.297338, hwhm = 3.752844, shape = 0.4762335.
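The least-squares circular fit performed by the light-ball extraction module can be sketched with the Kåsa method (the specific least-squares variant is an assumption; the claim only states that a circular curve is fitted by least squares). It fits x² + y² + D·x + E·y + F = 0 to the contour's coordinate sequence, then recovers the center (-D/2, -E/2) and radius √(D²/4 + E²/4 - F):

```python
import math

def fit_circle(points):
    """Kåsa least-squares circle fit: returns ((cx, cy), radius)."""
    # Normal equations A^T A u = A^T b with rows [x, y, 1], b = -(x^2 + y^2)
    Sxx = Sxy = Syy = Sx = Sy = S1 = 0.0
    bx = by = b1 = 0.0
    for x, y in points:
        z = -(x * x + y * y)
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y; S1 += 1.0
        bx += x * z; by += y * z; b1 += z
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, S1]]
    b = [bx, by, b1]

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for col in range(3):  # Cramer's rule for the 3x3 system
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        sol.append(det3(M) / d)
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    return (cx, cy), math.sqrt(cx * cx + cy * cy - F)

# Four points on a circle of radius 5 centered at (2, 3)
print(fit_circle([(7.0, 3.0), (2.0, 8.0), (-3.0, 3.0), (2.0, -2.0)]))
# → center ≈ (2.0, 3.0), radius ≈ 5.0
```

A contour whose points fit such a curve with small residual is accepted as an LED light ball, giving both the radius used by step 35 and the recorded center coordinate.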
2. A multi-camera-based multi-target position capture and positioning method, characterized in that it comprises the following steps:
Step 1: the server side initializes the VR glasses, LED light balls, and cameras, and establishes a TCP connection with each client; at the same time, an LED light ball of a different color is assigned to each experiencer;
Step 2: each camera captures images in real time; when an LED light ball appears in a camera's field of view, the camera transmits the image data to the server side; each client reads the attitude gyroscope data, magnetometer data, and gravimeter values of the VR glasses connected to it and transmits them to the server side in real time;
Step 3: from the received image data, the server side computes in real time each LED light ball's plane coordinate and ball size in the camera's captured view, and then computes from the plane coordinate and ball size the true three-dimensional coordinate of each LED light ball in the room;
The specific procedure of step 3 is as follows:
Step 31: the server side applies Gaussian smoothing to the image data transmitted by all cameras to remove noise, then processes the denoised images to obtain binary images;
Step 32: the server side processes the binary images to obtain contour images and the number of contours in each binary image;
Step 33: a coordinate sequence is extracted from each contour image;
Step 34: a circular curve is fitted to each coordinate sequence by the least-squares method, and it is judged whether the coordinate sequence conforms to a circular curve; if so, the radius of the fitted curve is taken as the LED light-ball radius and the ball's center coordinate is recorded; otherwise the coordinate sequence is discarded;
Step 35: the LED light-ball radius is determined, and each LED light ball's coordinate relative to the capturing camera's coordinate system is obtained;
Step 36: from the coordinate values obtained in step 35 and the relative coordinates of the capturing cameras, the true three-dimensional coordinate in the room of the experiencer corresponding to each LED light ball is calculated;
Step 4: the server side packages each LED light ball's true three-dimensional coordinate in the room together with the gyroscope attitude data, magnetometer data, and gravimeter data of the corresponding VR glasses, and distributes the package in real time to the client corresponding to each pair of VR glasses;
Step 5: the client transforms the true three-dimensional coordinates of all experiencers into virtual-environment coordinates, generates the virtual environment or models in real time, and finally renders the virtual picture in real time to the experiencer's VR glasses;
In step 35, determining the LED light-ball radius and obtaining each LED light ball's coordinate relative to the capturing camera's coordinate system specifically comprises:
according to the camera projection principle, using the three preset parameters of camera wide angle, wide-angle projection angle, and LED light-ball radius, each LED light ball's distance to the camera is calculated with a Pearson-type distribution algorithm, finally yielding each LED light ball's coordinate value (x, y, distance) relative to the capturing camera's coordinate system;
where radius = R; distance is each LED light ball's distance to the camera; and the remaining parameters in the formula are function arguments: height = 517.281, center = 1.297338, hwhm = 3.752844, shape = 0.4762335.
3. The multi-camera-based multi-target position capture and positioning method as claimed in claim 2, characterized in that in step 1 the server-side initialization of the cameras includes camera coordinate initialization: the two-dimensional base coordinate of one camera is set to (0, 0); an experiencer wearing an LED light ball walks one full circle around the experience room; each of the other cameras generates and saves its own relative coordinate parameters from its correlation with the benchmark camera and the LED light-ball coordinate positions acquired in the overlapping region; wherein the relative coordinate (x_i, y_i) of the i-th camera is calculated by the following formula:
(x_i, y_i) = (x_base - x, y_base - y)
where (x_base, y_base) is the standard indoor plane coordinate of the LED light ball as captured by the benchmark camera; (x, y) is the LED light-ball coordinate in the image captured by the i-th camera; i = 2, ..., p; and p is the number of cameras in the experience room.
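The calibration formula of claim 3 reduces to a one-line offset computation per camera, applied once when the experiencer's light ball passes through a region overlapping the benchmark camera's view. A minimal sketch (function and variable names are illustrative):

```python
# Claim 3's calibration: camera i derives its planar offset relative to
# the benchmark camera from one shared observation of the same LED ball
# in their overlapping region: (x_i, y_i) = (x_base - x, y_base - y).
def camera_offset(base_xy, cam_xy):
    """Offset of camera i relative to the benchmark camera."""
    xb, yb = base_xy
    x, y = cam_xy
    return (xb - x, yb - y)

# Benchmark camera sees the ball at (3.0, 2.0) in room coordinates;
# camera i sees the same ball at (1.0, -1.0) in its own frame.
print(camera_offset((3.0, 2.0), (1.0, -1.0)))  # (2.0, 3.0)
```

Because the offsets are recomputed whenever a ball crosses an overlap region, a slightly moved camera is re-calibrated automatically, which is the adaptive behavior demonstrated in embodiment 1.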
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611144669.3A CN106843460B (en) | 2016-12-13 | 2016-12-13 | Multiple target position capture positioning system and method based on multi-cam |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106843460A CN106843460A (en) | 2017-06-13 |
CN106843460B true CN106843460B (en) | 2019-08-02 |
Family
ID=59139917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611144669.3A Active CN106843460B (en) | 2016-12-13 | 2016-12-13 | Multiple target position capture positioning system and method based on multi-cam |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106843460B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107329593B (en) * | 2017-06-28 | 2020-10-09 | 歌尔科技有限公司 | VR handle positioning method and device |
CN107820593B (en) * | 2017-07-28 | 2020-04-17 | 深圳市瑞立视多媒体科技有限公司 | Virtual reality interaction method, device and system |
CN112198959A (en) * | 2017-07-28 | 2021-01-08 | 深圳市瑞立视多媒体科技有限公司 | Virtual reality interaction method, device and system |
WO2019019248A1 (en) * | 2017-07-28 | 2019-01-31 | 深圳市瑞立视多媒体科技有限公司 | Virtual reality interaction method, device and system |
WO2019037074A1 (en) * | 2017-08-25 | 2019-02-28 | 深圳市瑞立视多媒体科技有限公司 | Virtual reality interaction system and method, and computer storage medium |
CN108364034B (en) * | 2018-04-02 | 2023-09-22 | 北京大学 | Multimode coupling motion capturing method and device |
CN108445891A (en) * | 2018-05-28 | 2018-08-24 | 山东华力机电有限公司 | A kind of AGV trolleies optical navigation system and air navigation aid |
CN109146961B (en) * | 2018-09-05 | 2019-12-31 | 天目爱视(北京)科技有限公司 | 3D measures and acquisition device based on virtual matrix |
CN109300163B (en) * | 2018-09-14 | 2021-09-24 | 高新兴科技集团股份有限公司 | Space calibration method of indoor panoramic camera, storage medium and electronic equipment |
CN109410283B (en) * | 2018-09-14 | 2021-09-24 | 高新兴科技集团股份有限公司 | Space calibration device of indoor panoramic camera and positioning device with space calibration device |
CN109817031B (en) * | 2019-01-15 | 2021-02-05 | 张赛 | Limbs movement teaching method based on VR technology |
CN111984114B (en) * | 2020-07-20 | 2024-06-18 | 深圳盈天下视觉科技有限公司 | Multi-person interaction system based on virtual space and multi-person interaction method thereof |
CN111988375B (en) * | 2020-08-04 | 2023-10-27 | 瑞立视多媒体科技(北京)有限公司 | Terminal positioning method, device, equipment and storage medium |
CN112040092B (en) * | 2020-09-08 | 2021-05-07 | 杭州时光坐标影视传媒股份有限公司 | Real-time virtual scene LED shooting system and method |
CN115346419B (en) * | 2022-07-11 | 2023-08-29 | 南昌大学 | Training auxiliary system based on visible light communication |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1838032A (en) * | 2005-06-28 | 2006-09-27 | 钟煜曦 | Interactive input control method based on computer image and pure color object |
CN105051648A (en) * | 2013-01-22 | 2015-11-11 | 微软技术许可有限责任公司 | Mixed reality filtering |
CN105445937A (en) * | 2015-12-27 | 2016-03-30 | 深圳游视虚拟现实技术有限公司 | Mark point-based multi-target real-time positioning and tracking device, method and system |
CN205581785U (en) * | 2016-04-15 | 2016-09-14 | 向京晶 | Indoor virtual reality interactive system of many people |
CN106228127A (en) * | 2016-07-18 | 2016-12-14 | 乐视控股(北京)有限公司 | Indoor orientation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN106843460A (en) | 2017-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106843460B (en) | Multiple target position capture positioning system and method based on multi-cam | |
US11262841B2 (en) | Wireless wrist computing and control device and method for 3D imaging, mapping, networking and interfacing | |
US9628755B2 (en) | Automatically tracking user movement in a video chat application | |
CN103400119B (en) | Face recognition technology-based mixed reality spectacle interactive display method | |
CN102549619B (en) | Human tracking system | |
CN105279795B (en) | Augmented reality system based on 3D marker | |
WO2018171041A1 (en) | Moving intelligent projection system and method therefor | |
US20130293679A1 (en) | Upper-Body Skeleton Extraction from Depth Maps | |
CN106133796A (en) | For representing the method and system of virtual objects in the view of true environment | |
Rudoy et al. | Viewpoint selection for human actions | |
McColl et al. | Human body pose interpretation and classification for social human-robot interaction | |
CN107211165A (en) | Devices, systems, and methods for automatically delaying video display | |
EP2391983A1 (en) | Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images | |
CN105760809A (en) | Method and apparatus for head pose estimation | |
CN103593641B (en) | Object detecting method and device based on stereo camera | |
JP2024054137A (en) | Image Display System | |
Wang et al. | Image-based occupancy positioning system using pose-estimation model for demand-oriented ventilation | |
CN206575538U (en) | A kind of intelligent projection display system of trend | |
Chen et al. | Camera networks for healthcare, teleimmersion, and surveillance | |
CN109445598A (en) | A kind of augmented reality system and device of view-based access control model | |
Chen et al. | Real-time 3d face reconstruction and gaze tracking for virtual reality | |
US11273374B2 (en) | Information processing system, player-side apparatus control method, and program | |
Mikawa et al. | Dynamic projection mapping for robust sphere posture tracking using uniform/biased circumferential markers | |
Tepencelik et al. | Body and head orientation estimation with privacy preserving LiDAR sensors | |
CN110841266A (en) | Auxiliary training system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||