CN105931272B - A kind of Moving Objects method for tracing and system - Google Patents
- Publication number: CN105931272B (application CN201610298381.5A)
- Authority: CN (China)
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G06F3/012 — Head tracking input arrangements (G — Physics; G06 — Computing; calculating or counting; G06F — Electric digital data processing; G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit; G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer; G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality)
- G06T2207/10016 — Video; image sequence (G06T — Image data processing or generation, in general; G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/10 — Image acquisition modality)
Abstract
The invention discloses a moving-object tracking method and system. When a first processor determines that the motion posture of a virtual reality device needs to be obtained, it sends at least the 1st to the N-th synchronization signals to the virtual reality device and to the camera of a camera device, and determines the motion posture of the virtual reality device from N frames of images. The virtual reality device receives the i-th synchronization signal sent by the first processor and presents the state of each physical marker point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N. The camera receives the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, captures the i-th frame image of the virtual reality device; the i-th frame image contains a marker-point image of each physical marker point. The method synchronizes the marker display of the virtual reality device with the camera of the camera device, so that the motion posture of the virtual reality device can be determined quickly and accurately from the N frames of images obtained by the camera device.
Description
Technical field
The embodiments of the present invention relate to the field of communication technology, and in particular to a moving-object tracking method and system.
Background art
A virtual reality helmet is a helmet that closes off the wearer's vision and hearing from the outside world and, by means of a head-mounted display, guides the user into the feeling of being in a virtual environment. With the continuous development of electronic technology, virtual reality helmets now allow the user to control the virtual picture through various advanced sensing techniques according to the user's own viewpoint and position in the virtual environment; specifically, while the user wears the helmet, the motion state of the user's head is perceived and different scenes are presented to the user accordingly. An important part of the virtual reality helmet experience is immersion; therefore, perceiving the motion state of the user's head accurately and quickly is a key indicator of helmet performance.
For example, to perceive the motion state of a virtual reality helmet, infrared lamps may be arranged on the helmet; a camera then captures images of the infrared lamps on the helmet, and the motion state of the helmet is determined from those images. In this scenario, however, the state of the infrared lamps on the helmet must change continuously, and when the camera photographs the helmet it is difficult to synchronize each state change of the infrared lamps with each frame the camera shoots. As a result, the motion posture determined from the images shot by the camera is inaccurate. If this inaccurate motion posture is used to control the display on the virtual reality helmet, the virtual picture the user sees will appear discontinuous or delayed, impairing the immersion of the user wearing the helmet.
In summary, a method that can perceive the motion posture of a virtual reality helmet quickly and accurately is currently needed.
Summary of the invention
The embodiments of the present invention provide a moving-object tracking method and system, so as to perceive the motion state of a virtual reality helmet quickly and accurately.
An embodiment of the present invention provides a moving-object tracking method, comprising:
when determining that the motion posture of a virtual reality device needs to be obtained, a first processor sends at least the 1st to the N-th synchronization signals to the virtual reality device and to the camera of a camera device, so as to obtain N frames of images of the virtual reality device captured by the camera; the motion posture of the virtual reality device is determined from the N frames of images;
the virtual reality device receives the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, presents the state of each physical marker point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N;
the camera receives the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, captures the i-th frame image of the virtual reality device; the i-th frame image contains a marker-point image of each physical marker point.
Further, determining the motion posture of the virtual reality device from the N frames of images comprises:
a second processor obtains the rotation amount generated by the virtual reality device, the rotation amount being determined from data collected by a sensor of the virtual reality device;
the second processor obtains the translation amount generated by the virtual reality device, the translation amount being determined from the N frames of images captured by the camera;
the second processor determines the motion posture of the virtual reality device from the rotation amount and the translation amount generated by the virtual reality device. The motion posture of the virtual reality device characterizes the overall spatial displacement generated by the moving object over the N moments, including the horizontal translation component and the rotational component.
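As a minimal illustrative sketch (not claim language), the second processor's step above amounts to composing a sensor-derived rotation with a vision-derived translation into one pose. The function and variable names below are assumptions for illustration only:

```python
# Sketch of the fusion step: the second processor combines a rotation
# (from the device's inertial sensor) and a translation (from the N camera
# frames) into a single motion posture. All names are illustrative.

def compose_pose(rotation, translation):
    """Build a 4x4 homogeneous pose matrix from a 3x3 rotation matrix
    (given as a list of rows) and a 3-element translation vector."""
    pose = [row[:] + [t] for row, t in zip(rotation, translation)]
    pose.append([0.0, 0.0, 0.0, 1.0])  # homogeneous bottom row
    return pose

# Identity rotation (sensor reports no turning) plus a pure translation.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.1, 0.0, -0.2]

pose = compose_pose(R, t)
```

In a full implementation the rotation would come from integrating the MEMS gyroscope readings and the translation from the marker-based image pipeline described below; the composition step itself stays this simple.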
Further, the preset state control policy comprises:
each physical marker point of all virtual reality devices has a unique number, and the number of each physical marker point corresponds to a unique binary code; the number of binary digits N is determined by the quantity of physical marker points. The state of each physical marker point at the moment corresponding to the i-th synchronization signal is indicated by the value of the i-th digit of the binary code corresponding to that physical marker point: if the value of the i-th digit is 0, the physical marker point presents a first state; if the value of the i-th digit is 1, the physical marker point presents a second state.
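The policy above can be sketched as: marker k's state at signal i is simply the i-th binary digit of k's code. The helper names below are illustrative assumptions, not part of the patent text:

```python
def code_width(num_markers):
    """Number of binary digits N needed so every marker gets a unique code."""
    return max(1, (num_markers - 1).bit_length())

def marker_state(marker_number, i, n_bits):
    """State shown by a marker at the i-th synchronization signal (1-based).
    Digit value 0 -> 'first' state, digit value 1 -> 'second' state."""
    bit = (marker_number >> (n_bits - i)) & 1  # i-th digit, most significant first
    return "second" if bit else "first"

# Marker number 5 with 3-digit codes is 0b101: second, first, second.
states = [marker_state(5, i, 3) for i in (1, 2, 3)]
```

Note that N (the number of digits, and hence of synchronization signals per cycle) grows only logarithmically with the number of markers, which is what makes the scheme fast.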
Further, determining the translation amount of the virtual reality device from the N frames of images comprises:
a third processor obtains the N frames of images captured by the camera and determines, from the N frames of images, the position information of each marker-point image; the position information of a marker-point image refers to its position in a preset image coordinate system;
the third processor determines, from the N frames of images, the correspondence between each physical marker point and each marker-point image;
the third processor determines the translation amount generated by the virtual reality device from the real position information of each physical marker point, the position information of each marker-point image, and the correspondence between the physical marker points and the marker-point images; the real position information of a physical marker point is its position in a preset world coordinate system.
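The step above is, in effect, a pose-from-correspondences problem (in practice commonly solved with a perspective-n-point routine such as OpenCV's `solvePnP`). The stdlib-only sketch below illustrates the idea under a deliberately simplified assumption, an orthographic camera with unit scale, so the in-plane translation reduces to the shift between the world-point centroid and the image-point centroid; all names are illustrative:

```python
def centroid(points):
    """Component-wise mean of a list of equal-length point tuples."""
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(len(points[0])))

def estimate_translation(world_xy, image_xy):
    """Estimate in-plane translation from matched marker positions, assuming
    an orthographic camera with unit scale (a strong simplification of the
    correspondence-based step described in the text)."""
    wc = centroid(world_xy)
    ic = centroid(image_xy)
    return (ic[0] - wc[0], ic[1] - wc[1])

# Markers known at these world positions, observed shifted by (2, -1).
world = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
image = [(2.0, -1.0), (3.0, -1.0), (2.0, 0.0)]
t = estimate_translation(world, image)
```

The correspondence between `world` and `image` entries is exactly what the binary-coded marker states provide; without it, the centroid trick above would still work, but a full six-degree-of-freedom solver would not.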
In the embodiments of the present invention, each physical marker point of the virtual reality device presents its state at each moment according to the configured state control policy, so that the marker-point images in the 1st to N-th frame images also carry different state information. Then, from the state information shown by each marker-point image at each moment and the state each physical marker point presents at each moment under the configured policy, the correspondence between marker-point images and physical marker points can be obtained more accurately and quickly, and from the position information of the marker-point images and the position information of the physical marker points, the motion posture information of the virtual reality device can be determined. Compared with prior-art methods that obtain only the rotation posture using sensors such as gyroscopes, the embodiments of the present invention can effectively determine the translation amount of the virtual reality device, so that the motion posture of the virtual reality device is perceived more accurately and rapidly, real-time performance is higher, and the user's sense of reality can be significantly improved.
Further, the third processor determining, from the N frames of images, the correspondence between each physical marker point and the marker-point image of each physical marker point comprises:
determining the binary code corresponding to each marker-point image from the display information of each marker-point image in the N frames of images and the preset state control policy;
determining the number of each marker-point image from the binary code corresponding to it;
determining the positional relationship between the numbers of the marker-point images from the positional relationship between the marker-point images;
if the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points, determining, from the number information, the physical marker point corresponding to each marker-point image.
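The decoding step described above can be sketched as follows: for one tracked blob, its observed first/second states across the N frames are read back as binary digits and converted into a marker number. The two-state reading ('first' = 0, 'second' = 1) follows the policy described earlier; the function name is an illustrative assumption:

```python
def decode_marker_number(states):
    """Recover a marker's number from its observed states across N frames,
    reading 'first' as digit 0 and 'second' as digit 1, most significant
    digit first (frame 1 carries the highest-order digit)."""
    number = 0
    for s in states:
        number = (number << 1) | (1 if s == "second" else 0)
    return number

# A blob seen as second, first, second over 3 frames decodes to 0b101 = 5,
# identifying it as physical marker point number 5.
n = decode_marker_number(["second", "first", "second"])
```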
Further, if the positional relationship between the numbers of the marker-point images is inconsistent with the positional relationship between the numbers of the physical marker points, the method further comprises:
the third processor notifies the first processor to continue sending the (N+1)-th to 2N-th synchronization signals to the virtual reality device and the camera, so as to obtain the (N+1)-th to 2N-th frame images of the virtual reality device captured by the camera;
the third processor then forms multiple groups of N frames in order (the 2nd to (N+1)-th frames, the 3rd to (N+2)-th frames, ..., the N-th to (2N-1)-th frames) and, following the order of the groups, returns group by group to the step of determining, from the N frames of images, the correspondence between each physical marker point and the marker-point image of each physical marker point, until in some group of N frames the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points.
Further, the virtual reality device comprises at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marker points are arranged on the helmet device and p physical marker points on the handheld device;
the helmet device includes a first control circuit and the handheld device includes a second control circuit;
the first control circuit is configured to receive at least the 1st to the N-th synchronization signals and, according to the received synchronization signals, control the states of the m physical marker points at each moment;
the second control circuit is configured to receive at least the 1st to the N-th synchronization signals and, according to the received synchronization signals, control the states of the p physical marker points at each moment;
the helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor; the method further comprises:
after the first control circuit receives at least the 1st to the N-th synchronization signals, the helmet device determines the rotation amount generated by the helmet device from the rotation posture information of the helmet device collected by the first MEMS sensor;
after the second control circuit receives at least the 1st to the N-th synchronization signals, the handheld device determines the rotation amount generated by the handheld device from the rotation posture information of the handheld device collected by the second MEMS sensor.
Further, the first processor and the third processor are integrated in the camera device, and the second processor is integrated in the helmet device; determining the motion posture of the virtual reality device from the N frames of images then comprises:
the helmet device obtains, through the communication circuit between the camera device and the helmet device, the translation amounts generated by the helmet device and the handheld device;
the helmet device obtains, through the communication circuit between the handheld device and the helmet device, the rotation amount generated by the handheld device;
the helmet device determines the posture information generated by the moving object from the translation amounts generated by the helmet device and the handheld device and the rotation amounts generated by the helmet device and the handheld device;
the helmet device controls the display of the helmet device's screen according to the posture information.
Further, the third processor is integrated in the master controller of the camera device, and the first processor and the second processor are integrated in a first server; determining the motion posture of the virtual reality device from the N frames of images comprises:
the first server obtains, through the communication circuit between the camera device and the first server, the translation amounts generated by the helmet device and the handheld device;
the first server obtains, through the communication circuit between the handheld device and the first server, the rotation amount generated by the handheld device;
the first server obtains, through the communication circuit between the helmet device and the first server, the rotation amount generated by the helmet device;
the first server determines the posture information generated by the moving object from the translation amounts and rotation amounts generated by the helmet device and the handheld device, and sends the posture information to the helmet device, so that the helmet device controls the display of its screen according to the posture information.
An embodiment of the present invention provides a moving-object tracking system, comprising:
a camera device, configured to, when determining that the motion posture of a virtual reality device needs to be obtained, send at least the 1st to the N-th synchronization signals to the virtual reality device and to the camera of the camera device, so as to obtain N frames of images of the virtual reality device captured by the camera, and to determine the motion posture of the virtual reality device from the N frames of images;
a virtual reality device, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, present the state of each physical marker point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N;
the camera of the camera device, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, capture the i-th frame image of the virtual reality device, the i-th frame image containing a marker-point image of each physical marker point.
Further, the preset state control policy comprises:
each physical marker point of all virtual reality devices has a unique number, and the number of each physical marker point corresponds to a unique binary code; the number of binary digits N is determined by the quantity of physical marker points. The state of each physical marker point at the moment corresponding to the i-th synchronization signal is indicated by the value of the i-th digit of the binary code corresponding to that physical marker point: if the value of the i-th digit is 0, the physical marker point presents a first state; if the value of the i-th digit is 1, the physical marker point presents a second state.
Further, the camera device is further configured to:
obtain the N frames of images captured by the camera and determine, from the N frames of images, the position information of each marker-point image, the position information of a marker-point image referring to its position in a preset image coordinate system;
determine, from the N frames of images, the correspondence between each physical marker point and each marker-point image;
determine the translation amount generated by the virtual reality device from the real position information of each physical marker point, the position information of each marker-point image, and the correspondence between the physical marker points and the marker-point images, the real position information of a physical marker point being its position in a preset world coordinate system.
Further, the camera device is specifically configured to:
determine the binary code corresponding to each marker-point image from the display information of each marker-point image in the N frames of images and the preset state control policy;
determine the number of each marker-point image from the binary code corresponding to it;
determine the positional relationship between the numbers of the marker-point images from the positional relationship between the marker-point images;
if the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points, determine, from the number information, the physical marker point corresponding to each marker-point image.
Further, the camera device is further configured to:
if the positional relationship between the numbers of the marker-point images is inconsistent with the positional relationship between the numbers of the physical marker points, continue sending the (N+1)-th to 2N-th synchronization signals to the virtual reality device and the camera, so as to obtain the (N+1)-th to 2N-th frame images of the virtual reality device captured by the camera;
form multiple groups of N frames in order (the 2nd to (N+1)-th frames, the 3rd to (N+2)-th frames, ..., the N-th to (2N-1)-th frames) and, following the order of the groups, return group by group to the step of determining, from the N frames of images, the correspondence between each physical marker point and the marker-point image of each physical marker point, until in some group of N frames the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points.
Further, the virtual reality device comprises at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marker points are arranged on the helmet device and p physical marker points on the handheld device;
the helmet device includes a first control circuit and the handheld device includes a second control circuit;
the first control circuit is configured to receive the 1st to the N-th synchronization signals and, according to the received synchronization signals, control the states of the m physical marker points at each moment;
the second control circuit is configured to receive the 1st to the N-th synchronization signals and, according to the received synchronization signals, control the states of the p physical marker points at each moment;
the helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor;
the helmet device is further configured to, after the first control circuit receives at least the 1st to the N-th synchronization signals, determine the rotation amount generated by the helmet device from the rotation posture information of the helmet device collected by the first MEMS sensor;
the handheld device is further configured to, after the second control circuit receives at least the 1st to the N-th synchronization signals, determine the rotation amount generated by the handheld device from the rotation posture information of the handheld device collected by the second MEMS sensor.
Further, the helmet device is specifically configured to:
obtain, through the communication circuit between the camera device and the helmet device, the translation amounts generated by the helmet device and the handheld device;
obtain, through the communication circuit between the handheld device and the helmet device, the rotation amount generated by the handheld device;
determine the posture information generated by the moving object from the translation amounts generated by the helmet device and the handheld device and the rotation amounts generated by the helmet device and the handheld device;
control the display of the helmet device's screen according to the posture information.
An embodiment of the present invention provides a moving-object tracking system, comprising:
a first server, configured to, when determining that the motion posture of a virtual reality device needs to be obtained, send at least the 1st to the N-th synchronization signals to the virtual reality device and to the camera of the camera device, so as to obtain N frames of images of the virtual reality device captured by the camera, and to determine the motion posture of the virtual reality device from the N frames of images;
a virtual reality device, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, present the state of each physical marker point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N;
the camera of the camera device, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, capture the i-th frame image of the virtual reality device, the i-th frame image containing a marker-point image of each physical marker point.
Further, the camera device is further configured to:
obtain the N frames of images captured by the camera and determine, from the N frames of images, the position information of each marker-point image, the position information of a marker-point image referring to its position in a preset image coordinate system;
determine, from the N frames of images, the correspondence between each physical marker point and each marker-point image;
determine the translation amount of the virtual reality device from the real position information of each physical marker point, the position information of each marker-point image, and the correspondence between the physical marker points and the marker-point images, the real position information of a physical marker point being its position in a preset world coordinate system.
Further, the virtual reality device comprises at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marker points are arranged on the helmet device and p physical marker points on the handheld device;
the helmet device includes a first control circuit and the handheld device includes a second control circuit;
the first control circuit is configured to receive the 1st to the N-th synchronization signals and, according to the received synchronization signals, control the states of the m physical marker points at each moment;
the second control circuit is configured to receive the 1st to the N-th synchronization signals and, according to the received synchronization signals, control the states of the p physical marker points at each moment;
the helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor;
the helmet device is further configured to, after the first control circuit receives at least the 1st to the N-th synchronization signals, determine the rotation amount generated by the helmet device from the rotation posture information of the helmet device collected by the first MEMS sensor;
the handheld device is further configured to, after the second control circuit receives at least the 1st to the N-th synchronization signals, determine the rotation amount generated by the handheld device from the rotation posture information of the handheld device collected by the second MEMS sensor.
Further, the first server is specifically configured to:
obtain, through the communication circuit between the camera device and the first server, the translation amounts generated by the helmet device and the handheld device;
obtain, through the communication circuit between the handheld device and the first server, the rotation amount generated by the handheld device;
obtain, through the communication circuit between the helmet device and the first server, the rotation amount generated by the helmet device;
determine the posture information generated by the moving object from the translation amounts and rotation amounts generated by the helmet device and the handheld device, and send the posture information to the helmet device, so that the helmet device controls the display of its screen according to the posture information.
In the above embodiments of the present invention, in order to synchronize the virtual reality device with the camera of the camera device, when the motion posture of the virtual reality device needs to be obtained, at least the 1st to the N-th synchronization signals are sent to the virtual reality device and to the camera of the camera device respectively, and the camera device and the virtual reality device receive the synchronization signals at synchronized moments. Thus, each time the virtual reality device receives a synchronization signal, it presents the state of each physical marker point on the device according to the preset state control policy, and each time the camera of the camera device receives a synchronization signal, it captures one frame image of the virtual reality device; the captured image contains a marker-point image of each physical marker point at the capture moment. In this way, a synchronized relationship is established between the states presented by the physical marker points on the virtual reality device and the marker-point images of those points in the images captured by the camera. Since the motion posture of the virtual reality device is determined from the N frames of images of the device captured by the camera, and the preset state control policy specifies the preset state of each physical marker point at each moment, the motion posture of the virtual reality device can be determined accurately and quickly from the marker-point image information of each physical marker point in the N frames of images, thereby solving the prior-art technical problem that the motion posture of a virtual reality device cannot be determined accurately and quickly.
Brief description of the drawings
The accompanying drawings are provided for further understanding of the present invention and constitute part of the specification; together with the embodiments of the present invention they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a schematic diagram of a moving-object tracking method flow provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a moving-object tracking method flow provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a method flow, provided by an embodiment of the present invention, in which a third processor determines the translation amount of a virtual reality device from N frames of images;
Fig. 4 is a schematic diagram of a method flow, provided by an embodiment of the present invention, in which a third processor determines, from N frames of images, the correspondence between each physical marker point and the marker-point image of each physical marker point;
Fig. 5 is a structural schematic diagram of a first preferred system architecture provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a moving-object tracking method flow in the first system architecture provided by an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of a second preferred system architecture provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of a moving-object tracking method flow in the second system architecture provided by an embodiment of the present invention;
Fig. 9 to Fig. 10 are structural schematic diagrams of a moving-object tracking system provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To make the technical problem solved by the present invention, its technical solution, and its effects more clearly understood, preferred embodiments of the present invention are described below in conjunction with the accompanying drawings. It should be understood that the preferred embodiments described here are only used to describe and explain the present invention and are not intended to limit it. Moreover, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with each other.
The specific method flow of the moving-object tracking method provided by the embodiments of the present invention includes a control-instruction interaction flow and a data interaction flow, for example, a control instruction for sending the synchronization signals, or instructions for sending the translation-amount data or rotation-amount data generated by the virtual reality device. Because the embodiments of the present invention are applicable to multiple system architectures, and the devices that execute each control instruction and data instruction differ slightly between architectures, the first processor, second processor, and third processor of the embodiments of the present invention represent modules that perform different control functions, so as to present the overall method flow of the embodiments.
In the embodiments of the present invention, the first processor is mainly configured to send the synchronization signals to the virtual reality device and to the camera of the photographic device; the third processor is mainly configured to determine the translation amount generated by the virtual reality device according to the N frames of images obtained by the camera of the photographic device; and the second processor is mainly configured to obtain the rotation amount generated by the virtual reality device and the translation amount, determined by the third processor, generated by the virtual reality device, and to determine the motion posture of the virtual reality device according to that rotation amount and translation amount.
The detailed flow of the moving-object tracking method provided by the embodiments of the present invention is described below with reference to the first processor, the second processor, the third processor, and the virtual reality device and photographic device included in the system architecture:
As shown in Fig. 1, the detailed flow of the moving-object tracking method provided by an embodiment of the present invention includes:
Step 101: when the first processor determines that the motion posture of the virtual reality device needs to be obtained, it sends at least the 1st to N-th synchronization signals to the virtual reality device and to the camera of the photographic device, so as to obtain the N frames of images of the virtual reality device captured by the camera; the motion posture of the virtual reality device is determined according to the N frames of images.
Step 102: the virtual reality device receives the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, presents the state of each physical marker point on the virtual reality device according to the preset state control policy, wherein 1 ≤ i ≤ N.
Step 103: the camera of the photographic device receives the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, captures the i-th frame image of the virtual reality device; the i-th frame image includes the marker-point image of each physical marker point.
Further, as shown in Fig. 2, based on the above steps 101 to 103, the above method further includes:
Step 104: the third processor obtains the N frames of images captured by the camera.
Step 105: the third processor determines the translation amount of the virtual reality device according to the N frames of images obtained by the camera of the photographic device.
Step 106: the second processor obtains the rotation amount generated by the virtual reality device; the rotation amount generated by the virtual reality device is determined according to the data acquired by the sensor of the virtual reality device.
Step 107: the second processor obtains the translation amount generated by the virtual reality device as determined by the third processor; the translation amount generated by the virtual reality device is determined by the third processor according to the N frames of images captured by the camera.
Step 108: the second processor determines the motion posture of the virtual reality device according to the rotation amount and the translation amount generated by the virtual reality device.
Here, the motion posture of the virtual reality device characterizes the overall spatial displacement generated by the moving object at the N moments, including the horizontal translation amount and the rotational component. The second processor fuses, according to a set algorithm, the rotation amount generated by the virtual reality device and the translation amount generated by the virtual reality device to obtain the motion posture of the virtual reality device at the N moments, where the N moments refer to the moments at which the virtual reality device receives the 1st to N-th synchronization signals. The specific fusion algorithm is not the emphasis of the embodiments of the present invention, and its detailed explanation is omitted here. The present invention does not specifically limit the fusion algorithm; any fusion algorithm may be used that can determine the motion posture of the virtual reality device from the rotation amount generated by the virtual reality device and the translation amount generated by the virtual reality device.
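The patent deliberately leaves the fusion algorithm of step 108 open. As one minimal, illustrative possibility only (an assumption of this sketch, not the claimed method), the sensor-derived rotation and the image-derived translation can be composed into a single 4x4 homogeneous pose matrix; the function name and the numbers below are hypothetical:

```python
import numpy as np

def fuse_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Compose a 3x3 rotation (e.g. from the MEMS sensor) and a 3-vector
    translation (from the N-frame image analysis) into one 4x4 pose matrix."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Example: a 90-degree yaw about the vertical axis plus a small translation.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.10, -0.05, 2.00])
pose = fuse_pose(R, t)
```

Applying `pose` to a point in the device frame then yields its position in the reference (camera) frame, which is one common convention for reporting a tracked pose.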
The preset state control policy in the above embodiments is illustrated below.
Each physical marker point across all virtual reality devices has a unique number, and the number of each physical marker point corresponds to a unique binary code; the number of digits N of the binary code is determined according to the quantity of physical marker points. The state of each physical marker point at the moment corresponding to the i-th synchronization signal is marked by the value of the i-th digit of the binary code corresponding to that physical marker point: when the value of the i-th digit is 0, the physical marker point presents the first state; when the value of the i-th digit is 1, the physical marker point presents the second state.
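The digit-to-state rule above can be sketched in a few lines; `marker_state` is a hypothetical name, and the digits are counted from the lowest digit upward as in the 7-digit example given later:

```python
def marker_state(number: int, i: int) -> str:
    """State presented by the physical marker point with the given number at
    the moment of the i-th synchronization signal (i counts from 1).

    The i-th binary digit of the marker's code, counted from the lowest
    digit, selects the state: 0 -> first state (e.g. half-lit),
    1 -> second state (e.g. fully lit)."""
    bit = (number >> (i - 1)) & 1
    return "second" if bit else "first"
```

With the codes of the later example, the lamp numbered 1 (0000001) presents the second state only at the 1st signal, and the lamp numbered 3 (0000011) presents it at the 1st and 2nd signals.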
It is worth noting that, to ease the calculation of the correspondence between each physical marker point on the virtual reality device and each marker-point image of the virtual reality device in the N frames of images, each physical marker point on the virtual reality device is an infrared lamp laid out on the virtual reality device according to a preset layout rule. For example, where the virtual reality device is a virtual helmet device, the infrared lamps on each side of the virtual helmet device are laid out in a convex polygon array. In the preset layout rule, the position of each physical marker point on the virtual reality device is preset with the photographic device as the reference.
It is worth noting that the virtual reality device in the above steps includes at least a helmet device, and may further include a handheld device matched with the helmet device. In the same system architecture there may be multiple helmet devices and handheld devices, and there may be one photographic device or multiple photographic devices; the embodiments of the present invention do not limit the numbers of helmet devices, handheld devices, and photographic devices. In a preferred embodiment, one system architecture may include subsystems, and one subsystem includes one photographic device and at least one helmet device.
It is worth noting that, in step 101, the value of N represents the number of digits of the binary code, which is determined according to the total number of physical marker points on all virtual reality devices. For example, suppose a system architecture includes one helmet device and two handheld devices, with 40 infrared lamps on the helmet device, 28 infrared lamps on handheld device 1, and 28 infrared lamps on handheld device 2, for a total of 96 infrared lamps. To guarantee that each infrared lamp has a unique number, the number of each infrared lamp corresponds to a unique binary code, and the number of digits N of the binary code is determined to be at least 7 (2^7 = 128; with N = 7 digits, unique binary codes can be provided for up to 128 physical marker points). That is, the binary code of each number consists of 7 bits, each bit being either 0 or 1, and the 96 infrared lamps are numbered 1 to 96. For the binary code corresponding to each number, the 7 digits are ordered from the low digit to the high digit. For example, the binary code of the infrared lamp numbered 1 is 0000001, so the value of the 1st digit is 1 and the values of the other digits are 0; the binary code of the infrared lamp numbered 2 is 0000010, so the value of the 2nd digit is 1 and the values of the other digits are 0; the binary code of the infrared lamp numbered 3 is 0000011, so the values of the 1st and 2nd digits are 1 and the values of the other digits are 0; and so on.
In this example, under normal circumstances, the helmet device and the handheld devices receive the 1st to 7th synchronization signals, and the state of each physical marker point at the moment corresponding to the i-th synchronization signal is marked by the i-th digit of the binary code corresponding to that physical marker point. This can be understood as follows: upon receiving the 1st synchronization signal, the state of the infrared lamp numbered 1 is marked by the 1st digit of the binary code 0000001; because the value of the 1st digit is 1, the infrared lamp numbered 1 presents the second state; then, upon receiving the 2nd to 7th synchronization signals, because the values of the 2nd to 7th digits are 0, the infrared lamp numbered 1 presents the first state. As another example, upon receiving the 1st synchronization signal, the state of the infrared lamp numbered 2 is marked by the 1st digit of the binary code 0000010; because the value of the 1st digit is 0, the infrared lamp numbered 2 presents the first state; upon receiving the 2nd synchronization signal, its state is marked by the 2nd digit of the binary code 0000010, and because the value of the 2nd digit is 1, the infrared lamp numbered 2 presents the second state; then, upon receiving the 3rd to 7th synchronization signals, because the values of the 3rd to 7th digits are 0, the infrared lamp numbered 2 presents the first state. As yet another example, upon receiving the 1st synchronization signal, the state of the infrared lamp numbered 3 is marked by the 1st digit of the binary code 0000011; because the value of the 1st digit is 1, the infrared lamp numbered 3 presents the second state; upon receiving the 2nd synchronization signal, because the value of the 2nd digit is 1, the infrared lamp numbered 3 presents the second state; upon receiving the 3rd to 7th synchronization signals, because the values of the 3rd to 7th digits are 0, the infrared lamp numbered 3 presents the first state. And so on.
It is worth noting that, in the preset state control policy, the physical marker point presents the first state when the value of the i-th digit is 0 and the second state when the value of the i-th digit is 1. The first state may be the indicator lamp half-lit and the second state the indicator lamp fully lit; the first and second states may also be other states indicating different brightnesses of the infrared lamp, which the present invention does not specifically limit.
Further, in step 105, the third processor determines the translation amount of the virtual reality device according to the N frames of images; this specifically includes the following steps, as shown in Fig. 3:
Step 301: the third processor obtains the N frames of images captured by the camera and determines the position information of each marker-point image according to the N frames of images; the position information of each marker-point image refers to its position in a preset image coordinate system.
Step 302: the third processor determines the correspondence between each physical marker point and each marker-point image according to the N frames of images.
Step 303: the third processor determines the translation amount generated by the virtual reality device according to the actual position information of each physical marker point, the position information of each marker-point image, and the correspondence between each physical marker point and each marker-point image; here, the actual position information of each physical marker point is its position in a preset world coordinate system. In step 303, the translation amount generated by the virtual reality device is specifically determined according to a PnP algorithm; the translation amount generated by the virtual reality device is the translation amount of the motion trajectory determined, with the position of the photographic device as the reference, at the N moments (the N moments being the moments at which the 1st to N-th synchronization signals are received).
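The patent invokes a PnP algorithm in step 303 without fixing a particular variant. As an illustrative sketch only (not the patented implementation), the pose can be recovered by a direct linear transform from six or more noise-free 2D-3D marker correspondences with a known intrinsic matrix K; in practice a robust PnP solver such as OpenCV's `solvePnP` would be preferred. All names and the synthetic data below are assumptions:

```python
import numpy as np

def dlt_pose(K, pts3d, pts2d):
    """Recover rotation R and translation t from 2D-3D correspondences by a
    direct linear transform -- a simple stand-in for the PnP step of 303.
    pts3d: (n,3) marker positions in the world frame; pts2d: (n,2) pixels."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        A.append([0, 0, 0, 0] + [-x for x in Xh] + [v * x for x in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-u * x for x in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)            # projection matrix, up to scale
    M = np.linalg.inv(K) @ P            # equals s * [R | t] for some s
    s = 1.0 / np.linalg.norm(M[2, :3])  # third rotation row has unit norm
    if np.linalg.det(M[:, :3]) < 0:     # fix the overall sign ambiguity
        s = -s
    return s * M[:, :3], s * M[:, 3]

# Synthetic check: project known markers with a known pose, then recover it.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.05, 2.0])
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1], [0.5, 0.3, 0.7]], float)
proj = (K @ (pts3d.T + t_true[:, None])).T        # rotation is identity here
pts2d = proj[:, :2] / proj[:, 2:]
R_rec, t_rec = dlt_pose(K, pts3d, pts2d)
```

The recovered translation `t_rec`, expressed with the camera as the reference, corresponds to the per-moment translation amount described above; repeating this over the N moments yields the trajectory.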
In the embodiments of the present invention, each physical marker point of the virtual reality device presents its state at different moments according to the set state control policy, so that the marker-point images of each physical marker point in the 1st to N-th frame images also carry different state information. Then, according to the state information presented by each marker-point image at each moment and the state presented by each physical marker point at different moments under the set state control policy, the correspondence between marker-point images and physical marker points can be obtained more accurately and quickly, and the motion posture information of the virtual reality device can be determined based on the position information of the marker-point images and the position information of the physical marker points. Compared with prior-art methods that obtain the rotation posture using sensors such as gyroscopes, the embodiments of the present invention can effectively determine the translation amount of the virtual reality device, so that the motion posture of the virtual reality device is perceived more accurately and rapidly, the real-time performance is higher, and the real experience of the user can be significantly improved.
Further, in step 302, the third processor determines the correspondence between each physical marker point and the marker-point image of each physical marker point according to the N frames of images, as shown in Fig. 4, comprising:
Step 401: the third processor determines the binary code corresponding to each marker-point image according to the display information of each marker-point image in the N frames of images and the preset state control policy.
Step 402: the third processor determines the number of each marker-point image according to the binary code corresponding to each marker-point image.
Step 403: the third processor determines the positional relationship between the numbers of the marker-point images according to the positional relationship between the marker-point images.
Step 404: it is judged whether the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points.
Step 405: if the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points, the third processor determines the physical marker point corresponding to any marker-point image according to the number information.
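Steps 401 and 402 amount to reading off one binary digit per frame and reassembling the number. A minimal sketch, assuming the "first"/"second" state labels of the policy above and a hypothetical function name:

```python
def decode_marker_number(states):
    """Recover a marker's number from the states its image shows across the
    N frames (steps 401-402).  states[i-1] is the state seen in frame i;
    the second state contributes a 1 to the i-th (low-order) binary digit."""
    number = 0
    for i, state in enumerate(states, start=1):
        if state == "second":
            number |= 1 << (i - 1)
    return number
```

For instance, the state sequence second, first, second, first, first, first, first over 7 frames reads as the binary code 0000101, i.e. the lamp numbered 5.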
Under normal circumstances, the third processor can accurately determine the physical marker point corresponding to any marker-point image according to the 1st to N-th frame images, but in one special case the positional relationship between the numbers of the marker-point images may be inconsistent with the positional relationship between the numbers of the physical marker points. For example, suppose that under normal conditions the virtual reality device in a system includes 7 infrared lamps and receives the 1st to 3rd synchronization signals, the number of each infrared lamp being represented by a 3-digit binary code. If the binary code of the infrared lamp numbered 3 is 011, then upon receiving the 1st synchronization signal the infrared lamp numbered 3 presents the fully-lit state; upon receiving the 2nd synchronization signal it also presents the fully-lit state; and upon receiving the 3rd synchronization signal it presents the half-lit state. Suppose that, because of an internal glitch after the camera starts, when the camera receives the 1st synchronization signal, the 1st frame image it shoots shows the state presented by the virtual reality device upon receiving the 2nd synchronization signal (all infrared lamps presenting their states according to the value of the 2nd digit); when the camera receives the 2nd synchronization signal, the 2nd frame image it shoots shows the state presented by the virtual reality device upon receiving the 3rd synchronization signal (all infrared lamps presenting their states according to the value of the 3rd digit); and when the camera receives the 3rd synchronization signal it shoots the 3rd frame image, but at this moment the virtual reality device may no longer receive a synchronization signal, with all infrared lamps off, or it may receive the 1st synchronization signal of the next group (the 4th to 6th synchronization signals). Assuming the virtual reality device receives the next group of 3 synchronization signals, the 3rd frame image shot by the camera shows the state presented by the virtual reality device upon receiving the 1st synchronization signal of the next group (all infrared lamps presenting their states according to the value of the 1st digit). At this point, according to the display information of the 7 marker-point images in these 3 frame images, the third processor determines that the numbers of the 7 marker-point images at the preset positions, arranged in the default arrangement, are in order 4 (100), 1 (001), 5 (101), 2 (010), 6 (110), 3 (011), 7 (111), whereas the numbers of the 7 physical marker points satisfying the preset positional relationship are in order 1, 2, 3, 4, 5, 6, 7; that is, the positional relationship between the numbers of the marker-point images is inconsistent with the positional relationship between the numbers of the physical marker points.
If such an inconsistency between the positional relationship of the numbers of the marker-point images and the positional relationship of the numbers of the physical marker points occurs, the translation amount of the virtual reality device determined by continuing with steps 301 to 303 would contain errors.
Further, to solve the problem that the positional relationship between the numbers of the marker-point images is inconsistent with the positional relationship between the numbers of the physical marker points, as shown in Fig. 4, the above method further includes:
Step 406: the third processor notifies the first processor to continue sending the (N+1)-th to 2N-th synchronization signals to the virtual reality device and the camera, so as to obtain the (N+1)-th to 2N-th frame images of the virtual reality device captured by the camera.
Step 407: the third processor successively forms multiple groups of N frame images in the order of the 2nd to (N+1)-th frames, the 3rd to (N+2)-th frames, ..., the N-th to (2N-1)-th frames, and, according to the precedence of the groups, successively returns to steps 401 to 407 until, for one group of N frame images, the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points.
The method flow of the above steps 406 and 407 is illustrated below with reference to the above example. Let the set of the 1st digits of the binary codes corresponding to numbers 1 to 7 be denoted N1, the set of the 2nd digits be denoted N2, and the set of the 3rd digits be denoted N3. Under normal circumstances, the states presented by the virtual reality device upon receiving the 1st to 3rd synchronization signals are, in order, (N1, N2, N3). If the states of the virtual reality device shown in the images shot by the camera upon receiving the 1st to 3rd synchronization signals are (N2, N3, N1), then according to steps 406 and 407 the camera continues to receive the 4th synchronization signal and shoots the 4th frame image upon receiving it; meanwhile, the virtual reality device also receives the 4th synchronization signal and presents the N1 state. The third processor recombines the 2nd to 4th frame images shot by the camera into one group of 3 frame images; the states of the virtual reality device shown in the recombined 3 frame images are (N3, N1, N2), so at this point the positional relationship between the numbers of the marker-point images in the recombined group is still inconsistent with the positional relationship between the numbers of the physical marker points. The camera therefore continues to receive the 5th synchronization signal and shoots the 5th frame image upon receiving it; meanwhile, the virtual reality device also receives the 5th synchronization signal and presents the N2 state. The third processor recombines the 3rd to 5th frame images shot by the camera into one group of 3 frame images; the states of the virtual reality device shown in this recombined group are (N1, N2, N3), that is, the positional relationship between the numbers of the marker-point images in the recombined group is consistent with the positional relationship between the numbers of the physical marker points.
According to the above method, it is possible to correct the inconsistency, arising in the above method flow, between the states presented by the marker-point images in the image shot at the moment corresponding to the camera's reception of the i-th synchronization signal and the states actually presented by the physical marker points at the moment corresponding to the virtual reality device's reception of the i-th synchronization signal. This ensures that, in the correspondence between each physical marker point and each marker-point image, the positional relationship between the numbers of the marker-point images is consistent with the positional relationship between the numbers of the physical marker points, so that the translation amount generated by the virtual reality device is determined accurately.
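Assuming the regrouping of step 407 amounts to sliding a window of N frames over the growing capture stream until the decoded group matches the expected per-signal states, the (N2, N3, N1) example above can be simulated as follows; the function name and list encoding are illustrative:

```python
def find_consistent_window(frames, expected):
    """Slide a window of len(expected) frames over the captured frame states
    until one group matches the expected per-signal states (steps 406-407).
    Returns the 0-based index of the first frame of the consistent group,
    or -1 if no consistent group exists yet."""
    n = len(expected)
    for start in range(len(frames) - n + 1):
        if frames[start:start + n] == expected:
            return start
    return -1

# In the example the camera lags by one signal: each frame shows the state
# belonging to the next signal, while the device cycles N1, N2, N3.
captured = ["N2", "N3", "N1", "N2", "N3"]
start = find_consistent_window(captured, ["N1", "N2", "N3"])
```

Here `start` is 2, i.e. the group formed by the 3rd to 5th frames is the first consistent one, matching the outcome of the worked example.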
The above method flow has the following beneficial effects: synchronization between the virtual reality device and the camera of the photographic device is achieved. When the motion posture of the virtual reality device needs to be obtained, at least the 1st to N-th synchronization signals are sent to the virtual reality device and to the camera of the photographic device, and the photographic device and the virtual reality device receive the synchronization signals at synchronized moments, so that each time the virtual reality device receives one synchronization signal, it presents the state of each physical marker point on the virtual reality device according to the preset state control policy, and each time the camera of the photographic device receives one synchronization signal, it captures one frame image of the virtual reality device that includes the marker-point image of each physical marker point at the capture moment. In this way, a synchronized relationship is achieved between the states presented by the physical marker points on the virtual reality device and the marker-point images of the physical marker points in the images captured by the camera. Since the motion posture of the virtual reality device is determined according to the N frames of images of the virtual reality device captured by the camera, and the preset state control policy includes the preset state of each physical marker point at each moment, the motion posture of the virtual reality device can be determined accurately and quickly according to the marker-point image information of each physical marker point on the virtual reality device in the N frames of images.
Based on the above method flow, the moving-object tracking method provided by the embodiments of the present invention is applicable to multiple system architectures, each of which includes at least a helmet device and a photographic device. To clearly illustrate the preferred schemes of the embodiments of the present invention, the preferred embodiments are described below taking two preferred system architectures as examples.
In the first preferred system architecture, as shown in Fig. 5, the virtual reality device includes one helmet device 501 and one handheld device 502, together with one photographic device 503; the first processor and the third processor are integrated in the photographic device 503, and the second processor is integrated in the helmet device 501.
The photographic device 503 includes a master controller 51, a camera 52, and a communication circuit 53; the camera 52 includes an infrared image sensor 54 and an image signal processor 55. The master controller 51 may be a programmable device such as an FPGA or CPLD, or a CPU microprocessor, etc.; the infrared image sensor 54 may be a CMOS image sensor, without restriction here; the image signal processor 55 collects and processes the image signals of the infrared image sensor 54; the communication circuit 53 mainly implements the system communication function, and its implementation may be wired (such as USB) or wireless (such as Bluetooth), preferably wired. The shooting frame rate of the camera 52 is 60 FPS. In particular, to expand the number of helmet devices 501 or handheld devices 502, a camera with a higher shooting frame rate, such as 120 FPS, may be used on the premise of guaranteeing effective tracking precision and the like.
The helmet device 501 includes a master controller 41, a first control circuit 42, a communication circuit 43, a first MEMS sensor 44, and a display 45. The helmet device 501 is provided with m infrared lamps; the number m of infrared lamps is not specifically limited in the embodiments of the present invention and may be, for example, 40, and the number of infrared lamps of each tracked helmet device 501 may be adjusted appropriately on the premise of guaranteeing effective tracking precision and the like. The master controller 41 may be a programmable device such as an FPGA or CPLD, or a CPU microprocessor, etc.; the first control circuit 42 is a control circuit for the infrared lamps, mainly implementing control of the infrared lamps on the helmet device 501; the communication circuit 43 mainly implements the system communication function, and its implementation may be wired (such as USB) or wireless (such as Bluetooth); the first MEMS sensor 44 is used to acquire the rotation posture information of the helmet device 501; and the display 45 is used to display the virtual screen.
The handheld device 502 includes a master controller 31, a second control circuit 32, a communication circuit 33, and a second MEMS sensor 34. The handheld device 502 is provided with p infrared lamps; the number p of infrared lamps is not specifically limited in the embodiments of the present invention and may be, for example, 28, and the number of infrared lamps of each tracked handheld device 502 may be adjusted appropriately on the premise of guaranteeing effective tracking precision and the like. The master controller 31 may be a programmable device such as an FPGA or CPLD, or a CPU microprocessor, etc.; the second control circuit 32 is a control circuit for the infrared lamps, mainly implementing control of the infrared lamps on the handheld device 502; the communication circuit 33 mainly implements the system communication function, and its implementation may be wired (such as USB) or wireless (such as Bluetooth); the second MEMS sensor 34 is used to acquire the rotation posture information of the handheld device 502.
Connections are established between the helmet device 501, the handheld device 502, and the photographic device 503 through the communication circuits.
The photographic device 503 is configured to send the synchronization signals to the virtual reality device and to the camera of the photographic device 503, and to determine the translation amount generated by the virtual reality device according to the N frames of images obtained by the camera of the photographic device 503.
The helmet device 501 is configured to obtain the rotation amount generated by the virtual reality device and the translation amount generated by the virtual reality device, and to determine the motion posture of the virtual reality device according to that rotation amount and translation amount.
Under this system architecture, the translation amounts of the helmet device 501 and the handheld device 502 are determined by the photographic device 503, which avoids transmitting the captured N frames of image data through the system; the helmet device 501 determines the motion posture of the virtual reality device according to the rotation amount and translation amount generated by the virtual reality device, thereby reducing the processing load of the system server and in turn guaranteeing the rate at which the posture information of the moving object is tracked.
The process of Moving Objects method for tracing in the first system architecture, as shown in Figure 6, comprising:
Step 601, photographic device 503 is when determination need to obtain the athletic posture of virtual reality device, to helmet equipment
501, the camera 52 of handheld device 502 and photographic device 503 sends the 1st to n-th synchronization signal;
Wherein, the 1st to n-th synchronization signal be clock sync pulse signal.
Step 602, helmet equipment 501 receives the photographic device 503 is sent the 1st to n-th synchronization signal, helmet equipment
501 first control circuit 42 controls m infrared lamp according to the synchronization signal received, by preset state control policy
Luminance at various moments, all light or half bright;
Wherein, the particular content of preset state control policy is not repeated herein referring to above-described embodiment.It is each
Moment refers at the time of receiving the 1st to n-th synchronization signal.
Step 603: the handheld device 502 receives the 1st to the Nth synchronization signals sent by the photographic device 503, and the second control circuit 32 of the handheld device 502, according to each synchronization signal received, controls the luminance state of the p infrared lamps at the corresponding moment by the preset state control policy.
Wherein the particular content of the preset state control policy is described in the above embodiments and is not repeated here. "Each moment" refers to the moments at which the 1st to the Nth synchronization signals are received.
Step 604: the camera 52 receives the 1st to the Nth synchronization signals sent by the photographic device 503, and the infrared image sensor 54, according to each synchronization signal received, captures the 1st to the Nth frame images containing the helmet equipment 501 and the handheld device 502; the 1st to the Nth frame images contain the mark point images of each infrared lamp of the helmet equipment 501 and the handheld device 502 at each moment.
The above steps 602 to 604 have no particular order and should be understood as occurring simultaneously.
Step 605: the image-signal processor 55 of the photographic device 503 determines the translational movements of the helmet equipment 501 and the handheld device 502 according to the N frame images obtained by the infrared image sensor 54.
The specific steps by which the image-signal processor 55 determines these translational movements from the N frame images are described in steps 301 to 303 and steps 401 to 405 of the above embodiments and are not repeated here.
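The translation computation of steps 301 to 303 and 401 to 405 is not reproduced in this passage. As a deliberately simplified stand-in, the sketch below assumes the matched markers' current world positions have already been recovered (an assumption made for illustration; the names `preset_positions` and `current_positions` are not from the patent) and takes the mean displacement from their preset world coordinates as the translational movement.

```python
# Simplified illustration only -- NOT the patented steps 301-303/401-405.
# Both dicts are keyed by the physical markings point's unique number and
# hold (x, y, z) coordinates in the preset world coordinate system.

def translational_movement(preset_positions, current_positions):
    """Mean per-axis displacement of the markers matched in both dicts."""
    keys = preset_positions.keys() & current_positions.keys()
    n = len(keys)
    return tuple(
        sum(current_positions[k][a] - preset_positions[k][a] for k in keys) / n
        for a in range(3)
    )

preset = {1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0)}
current = {1: (0.2, 0.1, 0.0), 2: (1.2, 0.1, 0.0)}
print(translational_movement(preset, current))  # approximately (0.2, 0.1, 0.0)
```

Averaging over all matched markers, rather than using a single marker, damps per-marker measurement noise.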
Step 606: after the first control circuit 42 of the helmet equipment 501 receives the 1st to the Nth synchronization signals, the master controller 41 of the helmet equipment 501 determines the rotation amount generated by the helmet equipment 501 according to the rotation attitude information of the helmet equipment 501 collected by the first MEMS sensor 44.
Step 607: after the second control circuit 32 of the handheld device 502 receives the 1st to the Nth synchronization signals, the master controller 31 of the handheld device 502 determines the rotation amount generated by the handheld device 502 according to the rotation attitude information of the handheld device 502 collected by the second MEMS sensor 34.
Step 608: the helmet equipment 501 obtains the translational movements generated by the helmet equipment 501 and the handheld device 502 through the communicating circuit between the photographic device 503 and the helmet equipment 501. Specifically, the photographic device 503 sends the translational movements of the helmet equipment 501 and the handheld device 502 to the helmet equipment 501 through the communicating circuit between it and the helmet equipment 501.
Step 609: the helmet equipment 501 obtains the rotation amount generated by the handheld device 502 through the communicating circuit between the handheld device 502 and the helmet equipment 501. Specifically, the handheld device 502 sends the rotation amount it generates to the helmet equipment 501 through the communicating circuit between it and the helmet equipment 501.
Step 610: the helmet equipment 501 determines the posture information generated by the Moving Object according to the translational movements generated by the helmet equipment 501 and the handheld device 502 and the rotation amounts generated by the helmet equipment 501 and the handheld device 502.
Step 611: the helmet equipment 501 controls the display of the display 45 of the helmet equipment 501 according to the posture information.
Wherein the helmet equipment determines the rotation amount it generates according to the rotation attitude information of the helmet equipment collected by the first MEMS sensor, and the handheld device determines the rotation amount it generates according to the rotation attitude information of the handheld device collected by the second MEMS sensor; both are obtained by mature prior-art algorithms, such as the PnP algorithm, and are not detailed here.
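The patent leaves the MEMS-based rotation computation to mature prior-art algorithms. One common approach, sketched below purely for illustration (the function name, the sample format, and the simple Euler integration are all assumptions, not the patented method), is to integrate angular-rate samples collected between two synchronization signals into a per-axis rotation amount:

```python
# Hedged sketch: accumulating a rotation amount from MEMS gyroscope samples
# between two synchronization signals. Assumed, illustrative interface.

def accumulate_rotation(gyro_samples, dt):
    """Integrate angular-rate samples (deg/s), taken every `dt` seconds,
    into a total (yaw, pitch, roll) rotation amount in degrees."""
    rotation = [0.0, 0.0, 0.0]
    for wx, wy, wz in gyro_samples:
        rotation[0] += wx * dt
        rotation[1] += wy * dt
        rotation[2] += wz * dt
    return rotation

# Example: 100 samples at 1 kHz of a steady 90 deg/s yaw rate -> about 9 deg.
samples = [(90.0, 0.0, 0.0)] * 100
print(accumulate_rotation(samples, dt=0.001))
```

Real implementations typically fuse gyroscope, accelerometer, and magnetometer data to limit integration drift; the sketch omits that for brevity.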
In the above embodiment, under the first preferred system architecture, when the athletic posture of the virtual reality device needs to be obtained, the master controller of the photographic device sends at least the 1st to the Nth synchronization signals to the virtual reality device (including the helmet equipment and the handheld device) and to the camera of the photographic device. The photographic device and the virtual reality device are thereby synchronized at the moments the synchronization signals are received: each time the virtual reality device receives a synchronization signal, it shows the state of each infrared lamp on the virtual reality device by the preset state control policy, and each time the camera of the photographic device receives a synchronization signal, it captures one frame image of the virtual reality device containing the mark point image of each infrared lamp at the capture moment. In this way, a synchronized relation is realized between the states shown by the infrared lamps on the virtual reality device and the mark point images of those infrared lamps in the images captured by the camera. Since the preset state control policy contains the preset state of each infrared lamp at each moment, the master controller of the photographic device can accurately and rapidly determine the translational movement of the virtual reality device according to the mark point image information of each infrared lamp on the virtual reality device in the N frame images; and, based on the translational movements and rotation amounts generated by the helmet equipment and the handheld device, the helmet equipment can accurately and rapidly determine the posture information generated by the Moving Object.
Based on the above method flow, the embodiment of the present invention also provides a second preferred system architecture, as shown in Figure 7. In the second system architecture, the virtual reality device includes a helmet equipment 71 and a handheld device 72; the architecture further includes a photographic device 73 and a first server 74 on a PC. The third processor is integrated in the master controller of the photographic device, and the first processor and the second processor are integrated in the master controller of the first server 74. The first server 74 includes a master controller 741 and a communicating circuit 742, and is connected with the helmet equipment 71, the handheld device 72, and the photographic device 73 through communicating circuits.
The photographic device 73 includes a master controller 731, a camera 732, and a communicating circuit 733; the camera 732 includes an infrared image sensor 734 and an image-signal processor 735. The master controller 731 may be a single-chip microcontroller connected to the first server 74 by wire (such as USB) or wirelessly (such as Bluetooth). The image-signal processor 735 processes the image signals collected by the infrared image sensor 734. The communicating circuit 733 mainly realizes the system communication function, and its preferred implementation is a wired connection. The shooting frame rate of the camera 732 is 60 FPS. In particular, to enlarge the number of helmet equipments or handheld devices, a camera with a higher shooting frame rate, such as 120 FPS, can be used on the premise of guaranteeing effective tracking precision and the like.
The helmet equipment 71 includes a master controller 711, a first control circuit 712, a communicating circuit 713, a first MEMS sensor 714, and a display 715, with m infrared lamps arranged on the helmet equipment 71. The number m of infrared lamps is not specifically limited in the embodiments of the present invention (for example, 40); on the premise of guaranteeing effective tracking precision and the like, the number of infrared lamps on each tracked helmet equipment 71 can be adjusted appropriately. The master controller 711 may be a single-chip microcontroller connected to the first server 74 by wire (such as USB) or wirelessly (such as Bluetooth). The first control circuit 712 is a control circuit for the infrared lamps and mainly realizes control of the infrared lamps on the helmet equipment 71. The communicating circuit 713 mainly realizes the system communication function, and its preferred implementation is a wired connection. The first MEMS sensor 714 collects the rotation attitude information of the helmet equipment 71. The display 715 displays the virtual screen.
The handheld device 72 includes a master controller 721, a second control circuit 722, a communicating circuit 723, and a second MEMS sensor 724, with p infrared lamps arranged on the handheld device 72. The number p of infrared lamps is not specifically limited in the embodiments of the present invention (for example, 28); on the premise of guaranteeing effective tracking precision and the like, the number of infrared lamps on each tracked handheld device 72 can be adjusted appropriately. The master controller 721 may be a programmable device such as an FPGA or CPLD, a CPU microprocessor, or the like. The second control circuit 722 is a control circuit for the infrared lamps and mainly realizes control of the infrared lamps on the handheld device 72. The communicating circuit 723 mainly realizes the system communication function, and its preferred implementation is a wired connection. The second MEMS sensor 724 collects the rotation attitude information of the handheld device 72.
The first server 74, including the master controller 741 and the communicating circuit 742, sends the synchronization signals to the virtual reality device and to the camera of the photographic device 73.
The photographic device 73 determines the translational movement generated by the virtual reality device according to the N frame images obtained by the camera of the photographic device 73.
The first server 74 obtains the rotation amount generated by the virtual reality device and the translational movement generated by the virtual reality device, and determines the athletic posture of the virtual reality device according to that rotation amount and translational movement.
Under this system architecture, the translational movements of the helmet equipment and the handheld device are determined by the photographic device, which avoids transmitting the N frames of captured image data through the system, reduces the processing load of the system server, and thus guarantees the rate at which the posture information of the Moving Object is traced.
In the embodiment of the present invention, the flow of the Moving Objects method for tracing under the second system architecture, as shown in Figure 8, comprises:
Step 801: when the first server 74 determines that the athletic postures of the helmet equipment 71 and the handheld device 72 need to be obtained, it sends the 1st to the Nth synchronization signals to the helmet equipment 71, the handheld device 72, and the photographic device 73.
Wherein the 1st to the Nth synchronization signals are clock synchronization pulse signals.
Step 802: the helmet equipment 71 receives the 1st to the Nth synchronization signals sent by the first server 74, and the first control circuit 712, according to each synchronization signal received, controls the state of the m infrared lamps at the corresponding moment by the preset state control policy.
Wherein the particular content of the preset state control policy is described in the above embodiments and is not repeated here.
Step 803: the handheld device 72 receives the 1st to the Nth synchronization signals sent by the first server 74, and the second control circuit 722, according to each synchronization signal received, controls the state of the p infrared lamps at the corresponding moment by the preset state control policy.
Wherein the particular content of the preset state control policy is described in the above embodiments and is not repeated here. "Each moment" refers to the moments at which the 1st to the Nth synchronization signals are received.
Step 804: the photographic device 73 receives the 1st to the Nth synchronization signals sent by the first server 74, and the infrared image sensor 734, according to each synchronization signal received, captures the 1st to the Nth frame images containing the helmet equipment 71 and the handheld device 72; the 1st to the Nth frame images contain the mark point images of each infrared lamp of the helmet equipment 71 and the handheld device 72 at each moment.
The above steps 802 to 804 have no particular order and should be understood as occurring simultaneously.
Step 805: the image-signal processor 735 of the photographic device 73 determines the translational movement of the virtual reality device according to the N frame images obtained by the infrared image sensor 734.
The specific steps by which the image-signal processor 735 determines the translational movements of the helmet equipment 71 and the handheld device 72 from the N frame images are described in steps 301 to 303 and steps 401 to 404 of the above embodiments and are not repeated here.
Step 806: after the helmet equipment 71 receives the 1st to the Nth synchronization signals, the master controller 711 of the helmet equipment 71 determines the rotation amount generated by the helmet equipment 71 according to the rotation attitude information of the helmet equipment 71 collected by the first MEMS sensor 714.
Step 807: after the handheld device 72 receives the 1st to the Nth synchronization signals, the handheld device 72 determines the rotation amount generated by the handheld device 72 according to the rotation attitude information of the handheld device 72 collected by the second MEMS sensor 724.
Step 808: the first server 74 obtains the translational movements generated by the helmet equipment 71 and the handheld device 72 through the communicating circuit between the photographic device 73 and the first server 74. Specifically, the photographic device 73 sends the translational movements of the helmet equipment 71 and the handheld device 72 to the first server 74 through the communicating circuit between them.
Step 809: the first server 74 obtains the rotation amount generated by the helmet equipment 71 through the communicating circuit between the helmet equipment 71 and the first server 74. Specifically, the helmet equipment 71 sends the rotation amount it generates to the first server 74 through the communicating circuit between them.
Step 810: the first server 74 obtains the rotation amount generated by the handheld device 72 through the communicating circuit between the handheld device 72 and the first server 74. Specifically, the handheld device 72 sends the rotation amount it generates to the first server 74 through the communicating circuit between them.
Step 811: the first server 74 determines the posture information generated by the Moving Object according to the translational movement and rotation amount generated by the helmet equipment 71 and the translational movement and rotation amount generated by the handheld device 72.
Step 812: the first server 74 sends the posture information to the helmet equipment 71.
Step 813: the helmet equipment 71 controls the display of the display 715 of the helmet equipment 71 according to the posture information.
Wherein the helmet equipment determines the rotation amount it generates according to the rotation attitude information of the helmet equipment collected by the first MEMS sensor, and the handheld device determines the rotation amount it generates according to the rotation attitude information of the handheld device collected by the second MEMS sensor; both are obtained by mature prior-art algorithms and are not detailed here.
In the above embodiment, under the second preferred system architecture, when the athletic posture of the virtual reality device needs to be obtained, the first server sends at least the 1st to the Nth synchronization signals to the virtual reality device (including the helmet equipment and the handheld device) and to the camera of the photographic device. The photographic device and the virtual reality device are thereby synchronized at the moments the synchronization signals are received: each time the virtual reality device receives a synchronization signal, it shows the state of each infrared lamp on the virtual reality device by the preset state control policy, and each time the camera of the photographic device receives a synchronization signal, it captures one frame image of the virtual reality device containing the mark point image of each infrared lamp at the capture moment. In this way, a synchronized relation is realized between the states shown by the infrared lamps on the virtual reality device and the mark point images of those infrared lamps in the images captured by the camera. Since the preset state control policy contains the preset state of each infrared lamp at each moment, the master controller of the photographic device can accurately and rapidly determine the translational movement of the virtual reality device according to the mark point image information of each infrared lamp on the virtual reality device in the N frame images; and, based on the translational movements and rotation amounts generated by the helmet equipment and the handheld device, the first server can accurately and rapidly determine the posture information generated by the Moving Object.
Based on the above method flow, the embodiment of the present invention also provides a Moving Objects tracing system; the particular content of this Moving Objects tracing system is described in the above method embodiments and is not repeated here.
As shown in Figure 9, the embodiment of the present invention provides a Moving Objects tracing system, comprising:
a photographic device 901, configured to, when it is determined that the athletic posture of a virtual reality device 902 needs to be obtained, send at least the 1st to the Nth synchronization signals to the virtual reality device 902 and to the camera of the photographic device 901, so as to obtain N frame images of the virtual reality device 902 captured by the camera, and to determine the athletic posture of the virtual reality device 902 according to the N frame images;
the virtual reality device 902, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, show the state of each physical markings point on the virtual reality device 902 by the preset state control policy, where 1 ≤ i ≤ N;
the camera of the photographic device 901, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, capture the i-th frame image of the virtual reality device 902, the i-th frame image containing the mark point image of each physical markings point.
Further, the preset state control policy includes:
each physical markings point of all virtual reality devices 902 has a unique number, and the number of each physical markings point corresponds to a unique binary code, the number of binary digits N being determined according to the quantity of physical markings points; the state of each physical markings point at the moment corresponding to the i-th synchronization signal is marked by the value of the i-th digit of the binary code corresponding to that physical markings point: if the value of the i-th digit is 0, the physical markings point shows the first state, and if the value of the i-th digit is 1, the physical markings point shows the second state.
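The policy above can be sketched in a few lines: each marker's number maps to a unique N-digit binary code, and the i-th digit selects the marker's state when the i-th synchronization signal arrives. The state names "first" and "second" and the 0-based marker numbering below are illustrative assumptions; the patent does not fix them.

```python
# Sketch of the preset state control policy: digit value 0 -> first state,
# digit value 1 -> second state, at the i-th (1-based) synchronization signal.
from math import ceil, log2

def code_width(num_points):
    """N binary digits suffice to number markers 0..num_points-1 uniquely."""
    return max(1, ceil(log2(num_points)))

def state_at(point_number, i, num_points):
    """State shown by marker `point_number` at the i-th synchronization signal."""
    n = code_width(num_points)
    bits = format(point_number, f"0{n}b")  # e.g. 5 of 8 markers -> "101"
    return "first" if bits[i - 1] == "0" else "second"

# Marker 5 among 8 markers: binary 101 -> second, first, second.
print([state_at(5, i, 8) for i in (1, 2, 3)])  # ['second', 'first', 'second']
```

Because every marker's code is unique, observing a marker's state sequence across all N frames identifies it unambiguously.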
Further, the photographic device 901 is also configured to:
obtain the N frame images captured by the camera and determine the location information of each mark point image according to the N frame images, the location information of each mark point image being its location in a preset image coordinate system;
determine, according to the N frame images, the corresponding relationship between each physical markings point and each mark point image; and
determine the translational movement generated by the virtual reality device 902 according to the actual position information of each physical markings point, the location information of each mark point image, and the corresponding relationship between each physical markings point and each mark point image, the actual position information of each physical markings point being its location in a preset world coordinate system.
Further, the photographic device 901 is specifically configured to:
determine the binary code corresponding to each mark point image according to the display information of each mark point image in the N frame images and the preset state control policy;
determine the number of each mark point image according to the binary code corresponding to each mark point image;
determine the positional relationship between the numbers of the mark point images according to the positional relationship between the mark point images; and
if the positional relationship between the numbers of the mark point images is consistent with the positional relationship between the numbers of the physical markings points, determine, according to the number information, the physical markings point corresponding to any mark point image.
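The identification step just described is the inverse of the state control policy: the state sequence a mark point image shows over the N frames spells out its binary code, which recovers its number, after which the spatial ordering of the decoded numbers is checked against the known ordering of the physical markings points. A minimal sketch, reusing the illustrative "first"/"second" state names assumed earlier:

```python
# Decode a mark point image's number from its per-frame states, then verify
# that the decoded ordering matches the physical ordering. Illustrative only.

def decode_number(observed_states):
    """E.g. ["second", "first", "second"] -> binary "101" -> number 5."""
    bits = "".join("0" if s == "first" else "1" for s in observed_states)
    return int(bits, 2)

def ordering_consistent(image_numbers, physical_numbers):
    """True when the numbers read off the images, in spatial order, match
    the known spatial order of the physical markings points."""
    return image_numbers == physical_numbers

print(decode_number(["second", "first", "second"]))  # 5
```

When the orderings disagree (for example because a marker was occluded in some frame and misdecoded), the regrouping procedure described next retries the matching on a shifted window of frames.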
Further, the photographic device 901 is also configured to:
if the positional relationship between the numbers of the mark point images is inconsistent with the positional relationship between the numbers of the physical markings points, continue to send the (N+1)-th to the 2N-th synchronization signals to the virtual reality device 902 and to the camera of the photographic device 901, so as to obtain the (N+1)-th to the 2N-th frame images of the virtual reality device 902 captured by the camera; and
form multiple groups of N frame images in the order of the 2nd to the (N+1)-th frames, the 3rd to the (N+2)-th frames, ..., the N-th to the (2N-1)-th frames, and, according to the precedence of these groups, successively return to the step of determining, according to the N frame images, the corresponding relationship between each physical markings point and its mark point image, until the positional relationship between the numbers of the mark point images in one group of N frame images is consistent with the positional relationship between the numbers of the physical markings points.
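The regrouping above amounts to sliding an N-frame window one frame at a time over the extended sequence of 2N frames. A small sketch of the window generation (the frame representation is left abstract; indices are 1-based as in the text):

```python
# Overlapping N-frame groups retried after a failed matching: frames 2..N+1,
# then 3..N+2, ..., up to N..2N-1, in that precedence.

def sliding_windows(frames, n):
    """Yield the overlapping N-frame groups, starting at frame 2 (1-based)."""
    for start in range(1, n):  # 0-based start indices 1..n-1
        yield frames[start:start + n]

frames = list(range(1, 9))  # frame indices 1..8, with N = 4
print(list(sliding_windows(frames, 4)))
# [[2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7]]
```

Each shifted window realigns the observed state sequences with a different starting bit of the markers' binary codes, so a transient decoding failure in one window need not block identification in the next.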
Further, the virtual reality device 902 includes at least one helmet equipment worn by the Moving Object and at least one handheld device matched with the helmet equipment; m physical markings points are arranged on the helmet equipment, and p physical markings points are arranged on the handheld device.
The helmet equipment includes a first control circuit, and the handheld device includes a second control circuit.
The first control circuit is configured to receive the 1st to the Nth synchronization signals and, according to each synchronization signal received, control the state of the m physical markings points at each moment.
The second control circuit is configured to receive the 1st to the Nth synchronization signals and, according to each synchronization signal received, control the state of the p physical markings points at each moment.
The helmet equipment further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor.
The helmet equipment is also configured to, after the first control circuit receives at least the 1st to the Nth synchronization signals, determine the rotation amount generated by the helmet equipment according to the rotation attitude information of the helmet equipment collected by the first MEMS sensor.
The handheld device is also configured to, after the second control circuit receives at least the 1st to the Nth synchronization signals, determine the rotation amount generated by the handheld device according to the rotation attitude information of the handheld device collected by the second MEMS sensor.
Further, the helmet equipment is specifically configured to:
obtain the translational movements generated by the helmet equipment and the handheld device through the communicating circuit between the photographic device 901 and the helmet equipment;
obtain the rotation amount generated by the handheld device through the communicating circuit between the handheld device and the helmet equipment;
determine the posture information generated by the Moving Object according to the translational movements generated by the helmet equipment and the handheld device and the rotation amounts generated by the helmet equipment and the handheld device; and
control the display of the display of the helmet equipment according to the posture information.
In the above embodiment, when the athletic posture of the virtual reality device needs to be obtained, the photographic device sends at least the 1st to the Nth synchronization signals to the virtual reality device (including the helmet equipment and the handheld device) and to the camera of the photographic device. The photographic device and the virtual reality device are thereby synchronized at the moments the synchronization signals are received: each time the virtual reality device receives a synchronization signal, it shows the state of each infrared lamp on the virtual reality device by the preset state control policy, and each time the camera of the photographic device receives a synchronization signal, it captures one frame image of the virtual reality device containing the mark point image of each infrared lamp at the capture moment. In this way, a synchronized relation is realized between the states shown by the infrared lamps on the virtual reality device and the mark point images of those infrared lamps in the images captured by the camera. Since the preset state control policy contains the preset state of each infrared lamp at each moment, the photographic device can subsequently determine the translational movement of the virtual reality device accurately and rapidly according to the mark point image information of each infrared lamp on the virtual reality device in the N frame images; and, based on the translational movements and rotation amounts generated by the helmet equipment and the handheld device, the helmet equipment in the virtual reality device can accurately and rapidly determine the posture information generated by the Moving Object.
As shown in Figure 10, the embodiment of the present invention provides a Moving Objects tracing system, comprising:
a first server 100, configured to, when it is determined that the athletic posture of a virtual reality device 200 needs to be obtained, send at least the 1st to the Nth synchronization signals to the virtual reality device 200 and to the camera of a photographic device 300, so as to obtain N frame images of the virtual reality device 200 captured by the camera, and to determine the athletic posture of the virtual reality device 200 according to the N frame images;
the virtual reality device 200, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, show the state of each physical markings point on the virtual reality device 200 by the preset state control policy, where 1 ≤ i ≤ N;
the camera of the photographic device 300, configured to receive the i-th synchronization signal sent by the first processor and, according to the i-th synchronization signal, capture the i-th frame image of the virtual reality device 200, the i-th frame image containing the mark point image of each physical markings point.
Further, the preset state control policy includes:
each physical markings point of all virtual reality devices 200 has a unique number, and the number of each physical markings point corresponds to a unique binary code, the number of binary digits N being determined according to the quantity of physical markings points; the state of each physical markings point at the moment corresponding to the i-th synchronization signal is marked by the value of the i-th digit of the binary code corresponding to that physical markings point: if the value of the i-th digit is 0, the physical markings point shows the first state, and if the value of the i-th digit is 1, the physical markings point shows the second state.
Further, the photographic device 300 is also configured to:
obtain the N frame images captured by the camera and determine the location information of each mark point image according to the N frame images, the location information of each mark point image being its location in a preset image coordinate system;
determine, according to the N frame images, the corresponding relationship between each physical markings point and each mark point image; and
determine the translational movement generated by the virtual reality device 200 according to the actual position information of each physical markings point, the location information of each mark point image, and the corresponding relationship between each physical markings point and each mark point image, the actual position information of each physical markings point being its location in a preset world coordinate system.
Further, the photographic device 300 is specifically configured to:
determine the binary code corresponding to each mark point image according to the display information of each mark point image in the N frame images and the preset state control policy;
determine the number of each mark point image according to the binary code corresponding to each mark point image;
determine the positional relationship between the numbers of the mark point images according to the positional relationship between the mark point images; and
if the positional relationship between the numbers of the mark point images is consistent with the positional relationship between the numbers of the physical markings points, determine, according to the number information, the physical markings point corresponding to any mark point image.
Further, the photographic device 300 is also configured to:
if the positional relationship between the numbers of the mark point images is inconsistent with the positional relationship between the numbers of the physical markings points, continue to send the (N+1)-th to the 2N-th synchronization signals to the virtual reality device 200 and the camera, so as to obtain the (N+1)-th to the 2N-th frame images of the virtual reality device 200 captured by the camera; and
form multiple groups of N frame images in the order of the 2nd to the (N+1)-th frames, the 3rd to the (N+2)-th frames, ..., the N-th to the (2N-1)-th frames, and, according to the precedence of these groups, successively return to the step of determining, according to the N frame images, the corresponding relationship between each physical markings point and its mark point image, until the positional relationship between the numbers of the mark point images in one group of N frame images is consistent with the positional relationship between the numbers of the physical markings points.
Further, the virtual reality device 200 includes at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marking points are arranged on the helmet device, and p physical marking points are arranged on the handheld device.
The helmet device includes a first control circuit, and the handheld device includes a second control circuit.
The first control circuit is configured to receive the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the m physical marking points at the corresponding moments.
The second control circuit is configured to receive the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the p physical marking points at the corresponding moments.
The helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor.
The helmet device is also configured to, after the first control circuit receives at least the 1st to N-th synchronization signals, determine the rotation amount generated by the helmet device according to the rotation attitude information of the helmet device collected by the first MEMS sensor.
The handheld device is also configured to, after the second control circuit receives at least the 1st to N-th synchronization signals, determine the rotation amount generated by the handheld device according to the rotation attitude information of the handheld device collected by the second MEMS sensor.
Further, the first server 100 is specifically configured to:
obtain, through the communication circuit between the photographic device 300 and the first server 100, the translation amounts generated by the helmet device and the handheld device;
obtain, through the communication circuit between the handheld device and the first server 100, the rotation amount generated by the handheld device;
obtain, through the communication circuit between the helmet device and the first server 100, the rotation amount generated by the helmet device;
determine, according to the translation amount and rotation amount generated by the helmet device and the translation amount and rotation amount generated by the handheld device, the posture information generated by the moving object; and send the posture information to the helmet device, so that the helmet device controls the display of its screen according to the posture information.
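The first server's role above is to merge two independently obtained quantities, the camera-derived translation and the MEMS-derived rotation, into one posture record per device. A structural sketch; the class and field names are hypothetical, and a production system would use quaternions or rotation matrices rather than raw Euler tuples:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    translation: tuple  # (x, y, z), from the photographic device's N frame images
    rotation: tuple     # Euler angles in degrees, from the device's MEMS sensor

def fuse_pose(translation, rotation):
    """Merge the two independently measured quantities into the posture
    information sent back to the helmet device for display control."""
    return Pose(translation=translation, rotation=rotation)

helmet_pose = fuse_pose((0.1, 0.0, 1.5), (0.0, 30.0, 0.0))
```

Keeping the optical and inertial measurement paths separate until this fusion step is what lets the camera correct translation drift while the MEMS sensors provide low-latency rotation.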
In the above embodiment, under the second preferred system architecture, when the motion posture of the virtual reality device needs to be obtained, the first server sends at least the 1st to N-th synchronization signals to the virtual reality device (including the helmet device and the handheld device) and to the camera of the photographic device, and the photographic device and the virtual reality device receive the synchronization signals at synchronized moments. Each time the virtual reality device receives a synchronization signal, it presents the state of each infrared lamp on the virtual reality device according to the preset state control policy; each time the camera of the photographic device receives a synchronization signal, it captures one frame image of the virtual reality device, the captured image including the mark point image of each infrared lamp at the capture moment. In this way, the states presented by the infrared lamps on the virtual reality device are synchronized with the mark point images of those lamps in the images captured by the camera. Since the preset state control policy specifies the preset state of each infrared lamp at each moment, the photographic device can accurately and rapidly determine the translation amount of the virtual reality device from the mark point image information of the infrared lamps across the N frame images; and based on the translation amounts and rotation amounts generated by the helmet device and the handheld device, the first server can accurately and rapidly determine the posture information generated by the moving object.
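The preset state control policy effectively lets the camera read each marker's identity bit by bit: over N synchronized frames, each infrared lamp plays back the binary code of its unique number. An illustrative sketch of the encode/decode pair, where the function names and the 4-bit code length are assumptions (a real system would choose N large enough to cover all marker numbers):

```python
N_FRAMES = 4  # assumed code length; needs N >= ceil(log2(max marker number + 1))

def encode_marker(number):
    """Return the per-frame on/off schedule for a marker number (LSB first):
    bit i of the binary code gives the lamp's state in frame i."""
    return [(number >> i) & 1 for i in range(N_FRAMES)]

def decode_marker(states):
    """Recover the marker number from the N observed on/off states."""
    return sum(bit << i for i, bit in enumerate(states))

# A marker numbered 5 blinks on, off, on, off across the four synchronized frames.
schedule = encode_marker(5)
recovered = decode_marker(schedule)
```

Because every marker's schedule is unique, the camera can distinguish markers that would otherwise look identical in a single frame, which is what makes the image-to-marker correspondence step possible.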
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (18)
1. A moving object tracking method, characterized by comprising:
sending, by a first processor when it determines that the motion posture of a virtual reality device needs to be obtained, at least the 1st to N-th synchronization signals to the virtual reality device and to a camera of a photographic device, so as to obtain N frame images of the virtual reality device captured by the camera; and determining the motion posture of the virtual reality device according to the N frame images;
receiving, by the virtual reality device, an i-th synchronization signal sent by the first processor, and presenting, according to the i-th synchronization signal, the state of each physical marking point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N;
receiving, by the camera, the i-th synchronization signal sent by the first processor, and capturing, according to the i-th synchronization signal, the i-th frame image of the virtual reality device, the i-th frame image including a mark point image of each physical marking point;
wherein the preset state control policy comprises:
each physical marking point of all the virtual reality devices has a unique number, and the number of each physical marking point corresponds to a unique binary code; the state of each physical marking point at the moment corresponding to the i-th synchronization signal is indicated by the value of the i-th bit of the binary code corresponding to that physical marking point: when the value of the i-th bit is 0, the physical marking point presents a first state, and when the value of the i-th bit is 1, the physical marking point presents a second state.
2. The method according to claim 1, characterized in that determining the motion posture of the virtual reality device according to the N frame images comprises:
obtaining, by a second processor, the rotation amount generated by the virtual reality device, the rotation amount being determined according to data collected by a sensor of the virtual reality device;
obtaining, by the second processor, the translation amount generated by the virtual reality device, the translation amount being determined according to the N frame images captured by the camera;
determining, by the second processor, the motion posture of the virtual reality device according to the rotation amount and the translation amount generated by the virtual reality device.
3. The method according to claim 2, characterized in that determining the translation amount of the virtual reality device according to the N frame images comprises:
obtaining, by a third processor, the N frame images captured by the camera, and determining the position information of each mark point image according to the N frame images, the position information of each mark point image referring to the position information of that mark point image in a preset image coordinate system;
determining, by the third processor, the correspondence between each physical marking point and each mark point image according to the N frame images;
determining, by the third processor, the translation amount generated by the virtual reality device according to the actual position information of each physical marking point, the position information of each mark point image, and the correspondence between each physical marking point and each mark point image, the actual position information of each physical marking point being the position information of that physical marking point in a preset world coordinate system.
4. The method according to claim 3, characterized in that determining, by the third processor according to the N frame images, the correspondence between each physical marking point and the mark point image of each physical marking point comprises:
determining the binary code corresponding to each mark point image according to the display information of each mark point image in the N frame images and the preset state control policy;
determining the number of each mark point image according to the binary code corresponding to each mark point image;
determining the positional relationship among the numbers of the mark point images according to the positional relationship among the mark point images;
if the positional relationship among the numbers of the mark point images is consistent with the positional relationship among the numbers of the physical marking points, determining, according to the number information, the physical marking point corresponding to each mark point image.
5. The method according to claim 4, characterized in that, if the positional relationship among the numbers of the mark point images is inconsistent with the positional relationship among the numbers of the physical marking points, the method further comprises:
notifying, by the third processor, the first processor to continue to send the (N+1)-th to 2N-th synchronization signals to the virtual reality device and the camera, so as to obtain the (N+1)-th to 2N-th frame images of the virtual reality device captured by the camera;
successively forming, by the third processor, multiple groups of N frame images in the order of frames 2 to N+1, frames 3 to N+2, ..., frames N to 2N-1, and, following the precedence of these groups, repeatedly returning to the step of determining, according to the N frame images, the correspondence between each physical marking point and the mark point image of each physical marking point, until the positional relationship among the numbers of the mark point images in one group of N frame images is consistent with the positional relationship among the numbers of the physical marking points.
6. The method according to claim 5, characterized in that the virtual reality device includes at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marking points are arranged on the helmet device, and p physical marking points are arranged on the handheld device;
the helmet device includes a first control circuit, and the handheld device includes a second control circuit;
the first control circuit is configured to receive at least the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the m physical marking points at the corresponding moments;
the second control circuit is configured to receive at least the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the p physical marking points at the corresponding moments;
the helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor; and the method further comprises:
determining, by the helmet device after the first control circuit receives at least the 1st to N-th synchronization signals, the rotation amount generated by the helmet device according to the rotation attitude information of the helmet device collected by the first MEMS sensor;
determining, by the handheld device after the second control circuit receives at least the 1st to N-th synchronization signals, the rotation amount generated by the handheld device according to the rotation attitude information of the handheld device collected by the second MEMS sensor.
7. The method according to claim 6, characterized in that the first processor and the third processor are integrated in the photographic device, and the second processor is integrated in the helmet device; determining the motion posture of the virtual reality device according to the N frame images then comprises:
obtaining, by the helmet device through the communication circuit between the photographic device and the helmet device, the translation amounts generated by the helmet device and the handheld device;
obtaining, by the helmet device through the communication circuit between the handheld device and the helmet device, the rotation amount generated by the handheld device;
determining, by the helmet device, the posture information generated by the moving object according to the translation amounts generated by the helmet device and the handheld device and the rotation amounts generated by the helmet device and the handheld device;
controlling, by the helmet device, the display of the screen of the helmet device according to the posture information.
8. The method according to claim 6, characterized in that the third processor is integrated in a master controller of the photographic device, and the first processor and the second processor are integrated in a first server; determining the motion posture of the virtual reality device according to the N frame images comprises:
obtaining, by the first server through the communication circuit between the photographic device and the first server, the translation amounts generated by the helmet device and the handheld device;
obtaining, by the first server through the communication circuit between the handheld device and the first server, the rotation amount generated by the handheld device;
obtaining, by the first server through the communication circuit between the helmet device and the first server, the rotation amount generated by the helmet device;
determining, by the first server, the posture information generated by the moving object according to the translation amount and rotation amount generated by the helmet device and the translation amount and rotation amount generated by the handheld device; and sending the posture information to the helmet device, so that the helmet device controls the display of its screen according to the posture information.
9. A moving object tracking system, characterized by comprising:
a photographic device, configured to, when it determines that the motion posture of a virtual reality device needs to be obtained, send at least the 1st to N-th synchronization signals to the virtual reality device and to a camera of the photographic device, so as to obtain N frame images of the virtual reality device captured by the camera, and to determine the motion posture of the virtual reality device according to the N frame images;
the virtual reality device, configured to receive an i-th synchronization signal sent by the photographic device and, according to the i-th synchronization signal, present the state of each physical marking point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N;
the camera of the photographic device, configured to receive the i-th synchronization signal sent by the photographic device and, according to the i-th synchronization signal, capture the i-th frame image of the virtual reality device, the i-th frame image including a mark point image of each physical marking point;
wherein the preset state control policy comprises:
each physical marking point of all the virtual reality devices has a unique number, and the number of each physical marking point corresponds to a unique binary code; the state of each physical marking point at the moment corresponding to the i-th synchronization signal is indicated by the value of the i-th bit of the binary code corresponding to that physical marking point: when the value of the i-th bit is 0, the physical marking point presents a first state, and when the value of the i-th bit is 1, the physical marking point presents a second state.
10. The system according to claim 9, characterized in that the photographic device is further configured to:
obtain the N frame images captured by the camera, and determine the position information of each mark point image according to the N frame images, the position information of each mark point image referring to the position information of that mark point image in a preset image coordinate system;
determine, according to the N frame images, the correspondence between each physical marking point and each mark point image;
determine the translation amount generated by the virtual reality device according to the actual position information of each physical marking point, the position information of each mark point image, and the correspondence between each physical marking point and each mark point image, the actual position information of each physical marking point being the position information of that physical marking point in a preset world coordinate system.
11. The system according to claim 10, characterized in that the photographic device is specifically configured to:
determine the binary code corresponding to each mark point image according to the display information of each mark point image in the N frame images and the preset state control policy;
determine the number of each mark point image according to the binary code corresponding to each mark point image;
determine the positional relationship among the numbers of the mark point images according to the positional relationship among the mark point images;
if the positional relationship among the numbers of the mark point images is consistent with the positional relationship among the numbers of the physical marking points, determine, according to the number information, the physical marking point corresponding to each mark point image.
12. The system according to claim 11, characterized in that the photographic device is further configured to:
if the positional relationship among the numbers of the mark point images is inconsistent with the positional relationship among the numbers of the physical marking points, continue to send the (N+1)-th to 2N-th synchronization signals to the virtual reality device and the camera, so as to obtain the (N+1)-th to 2N-th frame images of the virtual reality device captured by the camera;
successively form multiple groups of N frame images in the order of frames 2 to N+1, frames 3 to N+2, ..., frames N to 2N-1, and, following the precedence of these groups, repeatedly return to the step of determining, according to the N frame images, the correspondence between each physical marking point and the mark point image of each physical marking point, until the positional relationship among the numbers of the mark point images in one group of N frame images is consistent with the positional relationship among the numbers of the physical marking points.
13. The system according to any one of claims 9 to 12, characterized in that the virtual reality device includes at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marking points are arranged on the helmet device, and p physical marking points are arranged on the handheld device;
the helmet device includes a first control circuit, and the handheld device includes a second control circuit;
the first control circuit is configured to receive the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the m physical marking points at the corresponding moments;
the second control circuit is configured to receive the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the p physical marking points at the corresponding moments;
the helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor;
the helmet device is further configured to, after the first control circuit receives at least the 1st to N-th synchronization signals, determine the rotation amount generated by the helmet device according to the rotation attitude information of the helmet device collected by the first MEMS sensor;
the handheld device is further configured to, after the second control circuit receives at least the 1st to N-th synchronization signals, determine the rotation amount generated by the handheld device according to the rotation attitude information of the handheld device collected by the second MEMS sensor.
14. The system according to claim 13, characterized in that the helmet device is specifically configured to:
obtain, through the communication circuit between the photographic device and the helmet device, the translation amounts generated by the helmet device and the handheld device;
obtain, through the communication circuit between the handheld device and the helmet device, the rotation amount generated by the handheld device;
determine the posture information generated by the moving object according to the translation amounts generated by the helmet device and the handheld device and the rotation amounts generated by the helmet device and the handheld device;
control the display of the screen of the helmet device according to the posture information.
15. A moving object tracking system, characterized by comprising:
a first server, configured to, when it determines that the motion posture of a virtual reality device needs to be obtained, send at least the 1st to N-th synchronization signals to the virtual reality device and to a camera of a photographic device, so as to obtain N frame images of the virtual reality device captured by the camera, and to determine the motion posture of the virtual reality device according to the N frame images;
the virtual reality device, configured to receive an i-th synchronization signal sent by the first server and, according to the i-th synchronization signal, present the state of each physical marking point on the virtual reality device according to a preset state control policy, where 1 ≤ i ≤ N;
the camera of the photographic device, configured to receive the i-th synchronization signal sent by the first server and, according to the i-th synchronization signal, capture the i-th frame image of the virtual reality device, the i-th frame image including a mark point image of each physical marking point;
wherein the preset state control policy comprises:
each physical marking point of all the virtual reality devices has a unique number, and the number of each physical marking point corresponds to a unique binary code; the state of each physical marking point at the moment corresponding to the i-th synchronization signal is indicated by the value of the i-th bit of the binary code corresponding to that physical marking point: when the value of the i-th bit is 0, the physical marking point presents a first state, and when the value of the i-th bit is 1, the physical marking point presents a second state.
16. The system according to claim 15, characterized in that the photographic device is further configured to:
obtain the N frame images captured by the camera, and determine the position information of each mark point image according to the N frame images, the position information of each mark point image referring to the position information of that mark point image in a preset image coordinate system;
determine, according to the N frame images, the correspondence between each physical marking point and each mark point image;
determine the translation amount of the virtual reality device according to the actual position information of each physical marking point, the position information of each mark point image, and the correspondence between each physical marking point and each mark point image, the actual position information of each physical marking point being the position information of that physical marking point in a preset world coordinate system.
17. The system according to claim 15 or 16, characterized in that the virtual reality device includes at least one helmet device worn by the moving object and at least one handheld device matched with the helmet device; m physical marking points are arranged on the helmet device, and p physical marking points are arranged on the handheld device;
the helmet device includes a first control circuit, and the handheld device includes a second control circuit;
the first control circuit is configured to receive the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the m physical marking points at the corresponding moments;
the second control circuit is configured to receive the 1st to N-th synchronization signals and, according to each received synchronization signal, control the states of the p physical marking points at the corresponding moments;
the helmet device further includes a first MEMS sensor, and the handheld device further includes a second MEMS sensor;
the helmet device is further configured to, after the first control circuit receives at least the 1st to N-th synchronization signals, determine the rotation amount generated by the helmet device according to the rotation attitude information of the helmet device collected by the first MEMS sensor;
the handheld device is further configured to, after the second control circuit receives at least the 1st to N-th synchronization signals, determine the rotation amount generated by the handheld device according to the rotation attitude information of the handheld device collected by the second MEMS sensor.
18. The system according to claim 17, characterized in that the first server is specifically configured to:
obtain, through the communication circuit between the photographic device and the first server, the translation amounts generated by the helmet device and the handheld device;
obtain, through the communication circuit between the handheld device and the first server, the rotation amount generated by the handheld device;
obtain, through the communication circuit between the helmet device and the first server, the rotation amount generated by the helmet device;
determine the posture information generated by the moving object according to the translation amount and rotation amount generated by the helmet device and the translation amount and rotation amount generated by the handheld device; and send the posture information to the helmet device, so that the helmet device controls the display of its screen according to the posture information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610298381.5A CN105931272B (en) | 2016-05-06 | 2016-05-06 | A kind of Moving Objects method for tracing and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105931272A CN105931272A (en) | 2016-09-07 |
CN105931272B true CN105931272B (en) | 2019-04-05 |
Family
ID=56835238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610298381.5A Active CN105931272B (en) | 2016-05-06 | 2016-05-06 | A kind of Moving Objects method for tracing and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105931272B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106028001B (en) * | 2016-07-20 | 2019-01-04 | 上海乐相科技有限公司 | A kind of optical positioning method and device |
CN106445159A (en) * | 2016-09-30 | 2017-02-22 | 乐视控股(北京)有限公司 | Virtual reality system and positioning method |
CN106445084A (en) * | 2016-09-30 | 2017-02-22 | 乐视控股(北京)有限公司 | Positioning method and acquisition equipment |
CN106651948A (en) * | 2016-09-30 | 2017-05-10 | 乐视控股(北京)有限公司 | Positioning method and handle |
CN106774992A (en) * | 2016-12-16 | 2017-05-31 | 深圳市虚拟现实技术有限公司 | The point recognition methods of virtual reality space location feature |
CN106681510B (en) * | 2016-12-30 | 2020-06-05 | 光速视觉(北京)科技有限公司 | Pose recognition device, virtual reality display device and virtual reality system |
CN107316319B (en) * | 2017-05-27 | 2020-07-10 | 北京小鸟看看科技有限公司 | Rigid body tracking method, device and system |
CN109218252A (en) * | 2017-06-29 | 2019-01-15 | 阿里巴巴集团控股有限公司 | A kind of display methods of virtual reality, device and its equipment |
CN107564064B (en) * | 2017-09-12 | 2020-11-03 | 深圳市欢创科技有限公司 | Positioning point, coding method thereof, positioning method and system thereof |
CN110825333B (en) * | 2018-08-14 | 2021-12-21 | 广东虚拟现实科技有限公司 | Display method, display device, terminal equipment and storage medium |
WO2020020102A1 (en) * | 2018-07-23 | 2020-01-30 | 广东虚拟现实科技有限公司 | Method for generating virtual content, terminal device, and storage medium |
CN110347262A (en) * | 2019-07-16 | 2019-10-18 | 异起(上海)智能科技有限公司 | Mobile device and method in a kind of virtual reality |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339654A (en) * | 2007-07-04 | 2009-01-07 | 北京威亚视讯科技有限公司 | Augmented reality three-dimensional registration method and system based on marker points |
CN104225915A (en) * | 2013-06-07 | 2014-12-24 | 索尼电脑娱乐美国公司 | Systems and Methods for Reducing Hops Associated with A Head Mounted System |
CN104321720A (en) * | 2012-01-09 | 2015-01-28 | 爱普生挪威研究发展公司 | Low interference system and method for synchronization, identification and tracking of visual and interactive systems |
CN104699247A (en) * | 2015-03-18 | 2015-06-10 | 北京七鑫易维信息技术有限公司 | Virtual reality interactive system and method based on machine vision |
CN104834381A (en) * | 2015-05-15 | 2015-08-12 | 中国科学院深圳先进技术研究院 | Wearable device for sight focus positioning and sight focus positioning method |
CN105242400A (en) * | 2015-07-10 | 2016-01-13 | 上海鹰为智能科技有限公司 | Virtual reality glasses |
CN105359063A (en) * | 2013-06-09 | 2016-02-24 | 索尼电脑娱乐公司 | Head mounted display with tracking |
CN105392538A (en) * | 2013-06-07 | 2016-03-09 | 索尼电脑娱乐公司 | Image rendering responsive to user actions in head mounted display |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103460256B (en) * | 2011-03-29 | 2016-09-14 | 高通股份有限公司 | In Augmented Reality system, virtual image is anchored to real world surface |
US9514571B2 (en) * | 2013-07-25 | 2016-12-06 | Microsoft Technology Licensing, Llc | Late stage reprojection |
History

- 2016-05-06: Application CN201610298381.5A filed in China (CN); granted as CN105931272B; legal status: Active
Also Published As
Publication number | Publication date |
---|---|
CN105931272A (en) | 2016-09-07 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN105931272B (en) | A kind of Moving Objects method for tracing and system | |
CN103458184B (en) | Method for remote pan-tilt (gimbal) control using a mobile phone application | |
CN109218619A (en) | Image acquiring method, device and system | |
CN103279186B (en) | Merge the multiple goal motion capture system of optical alignment and inertia sensing | |
CN104932698B (en) | A kind of hand-held interactive device device and its projection interactive method | |
CN108292489A (en) | Information processing unit and image generating method | |
CN106200944A (en) | The control method of a kind of object, control device and control system | |
CN105898346A (en) | Control method, electronic equipment and control system | |
CN104243962A (en) | Augmented reality head-mounted electronic device and method for generating augmented reality | |
CN107454947A (en) | Unmanned aerial vehicle (UAV) control method, wear-type show glasses and system | |
JP2024020489A (en) | Depth Sensing Techniques for Virtual, Augmented, and Mixed Reality Systems | |
CN106774868B (en) | Wireless presentation device | |
CN107483836B (en) | A kind of image pickup method and mobile terminal | |
CN108151738B (en) | Codified active light marked ball with attitude algorithm | |
CN104094595A (en) | Method for processing images in a stereo vision system and apparatus for same | |
CN107241546A (en) | Lamp array scintillation system, video camera time detecting initialization system and method | |
CN109453517A (en) | Virtual role control method and device, storage medium, mobile terminal | |
CN106327583A (en) | Virtual reality equipment for realizing panoramic image photographing and realization method thereof | |
CN108280851A (en) | Depth map generation device | |
CN108564613A (en) | A kind of depth data acquisition methods and mobile terminal | |
CN107767394A (en) | A kind of positioning of moving target and Attitude estimation method, apparatus and system | |
CN107471216A (en) | VR body man-controlled mobile robots under hazardous environment | |
CN107818595A (en) | Wearable Instant Interaction System | |
CN109448117A (en) | Image rendering method, device and electronic equipment | |
WO2022174574A1 (en) | Sensor-based bare-hand data annotation method and system |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |