CN105183005A - Scene virtualization device and control method - Google Patents

Scene virtualization device and control method

Info

Publication number: CN105183005A
Application number: CN201510526465.5A
Authority: CN (China)
Prior art keywords: primary processor, initial point, image, electrically connected, display
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Original language: Chinese (zh)
Inventors: 李尔, 王政
Assignee: Individual (original and current assignee; the listed assignees may be inaccurate)
Application filed by Individual
Priority to CN201510526465.5A
Publication of CN105183005A

Abstract

The invention discloses a scene virtualization device and a control method. The device comprises a main processor, a storage unit, four auxiliary processors, a controller, four first displays arranged in sequence on an enclosing wall, a second display disposed on the ceiling, a six-degree-of-freedom platform disposed on the floor, and a plurality of chairs arranged on the six-degree-of-freedom platform. The main processor is electrically connected to the second display, the storage unit, each auxiliary processor, the controller, and the driver of the six-degree-of-freedom platform; each auxiliary processor is electrically connected to its corresponding first display. The device offers high resolution, a strong sense of immersion, and good interactivity.

Description

Scene virtualization device and control method
Technical field
The present invention relates to the field of intelligent control technology, and in particular to a scene virtualization device with high resolution, strong immersion and good interactivity, and to a control method for the device.
Background technology
Playback systems usually play films with fixed content. Viewers cannot influence what is played, so such systems cannot give the audience a comprehensive viewing experience.
In particular, when watching footage made for exhibitions, viewers can only follow the flow of the recorded material: the content cannot be adjusted to a viewer's wishes, questions cannot be put to the producers, and personalized requirements cannot be met. For producers, every problem that might arise must be anticipated when the footage is made, and highly qualified on-site support staff must be provided to answer questions, raising labor and material costs; when an unanticipated problem does arise, the presented content is hard to adjust in time and the exhibition effect is poor.
Chinese patent publication No. CN103902989A, published July 2, 2014, discloses a human-action video recognition method based on non-negative matrix factorization, comprising the steps: (1) preprocess the video images: (1a) input 90 human-action video images; (1b) from the input, select 80 human-action video images as a training sample set, and use the remaining 10 as a test sample set; (1c) using MATLAB's string-concatenation function strcat, convert the images in the training and test sample sets into a time-continuous sequence of single pictures; (1d) down-sample the picture sequence to obtain a preprocessed down-sampled picture sequence. The shortcoming of that invention is that its measurement error is large, which easily causes human actions to be misrecognized.
Summary of the invention
The object of the present invention is to overcome the deficiency of prior-art playback systems, which cannot give the audience a comprehensive viewing experience, by providing a scene virtualization device with high resolution, strong immersion and good interactivity, together with a control method for it.
To achieve this object, the present invention adopts the following technical solutions:
A scene virtualization device, located in a room of a building, the room comprising a rectangular enclosing wall, a floor and a ceiling. The device comprises a main processor, a memory, 4 auxiliary processors, a controller, 4 first displays arranged in sequence on the enclosing wall, a second display on the ceiling, a six-degree-of-freedom (6-DOF) platform on the floor, and a number of chairs arranged on the 6-DOF platform. The main processor is electrically connected to the second display, the memory, each auxiliary processor, the controller and the driver of the 6-DOF platform, and each auxiliary processor is electrically connected to its corresponding first display. The controller comprises a housing and, on the housing, several control buttons associated respectively with up, down, forward, backward, left and right movement of the origin; each control button is electrically connected to the main processor.
The main processor and the auxiliary processors process the images of the experienced scene; the second display and the first displays show those images; and the 6-DOF platform can tilt and shift the users on the chairs in every direction. The combination of the platform's motion with the displayed images gives users an immersive, on-the-spot sensation.
Each user sits on a chair, and one of them moves the origin through the controller. The main processor and the auxiliary processors render the real picture of the experienced scene according to the origin's movement and play the multiple pictures synchronously, building an immersive visual experience for the users. The invention therefore offers high resolution, strong immersion and good interactivity.
The invention thus has non-fixed playback content; viewers can interact with the content film, which effectively improves the experience and enhances realism; and the 6-DOF platform operates according to the content film, which effectively improves immersion.
Preferably, the device further comprises a human-action collector comprising a 3-axis acceleration sensor circuit and a single-chip microcontroller; the sensor circuit is electrically connected to the microcontroller, the microcontroller is electrically connected to the main processor, and the collector is worn on a person's arm.
Preferably, the 3-axis acceleration sensor circuit comprises a 3-axis acceleration sensor of model MPU6050, several resistors and several capacitors; pin 11 of the sensor is electrically connected to pin 63 of the microcontroller, and pin 12 of the sensor to pin 58 of the microcontroller.
Preferably, there are two resistors, R14 and R15, and four capacitors, C21, C22, C23 and C24. One end of R14 and of R15 is connected to pins 23 and 24 of the sensor respectively, and the other ends both connect to VCC. One end of C21 connects to pin 20 of the sensor, the other end to ground. One end of C22 connects to pin 13 of the sensor and to VCC, the other end to ground. One end of C24 connects to pin 8 of the sensor and to VCC; one end of C23 connects to pin 9 of the sensor; the other ends of C23 and C24 are both grounded.
Preferably, the second display touches the upper edge of each first display, and the side edges of adjacent first displays touch.
A control method for the scene virtualization device comprises the following steps:
(6-1) Several image-processing parameters are stored in the memory, and the main processor transmits each parameter to each auxiliary processor. A three-dimensional cube model of the experienced scene is stored in the memory, and the cube's center point is set as the origin;
(6-2) A virtual camera is set up in the main processor and in each auxiliary processor, all located at the origin. Lines are drawn from the origin to the cube's corner vertices; these lines divide the cube into 6 rectangular pyramids: top, bottom, front, right, back and left. The main processor's virtual camera captures the image of the top pyramid, and the 4 auxiliary processors' cameras capture the front, right, back and left pyramids respectively. The scene images are stored in the main processor and the 4 auxiliary processors respectively;
(6-3) A user moves the origin through the controller's buttons:
(6-3-1) After the main processor receives an origin-move command, it computes the image its virtual camera captures at the new origin position, processes the image according to the image-processing parameters, and sends it to the second display;
(6-3-2) The main processor forwards the origin-move information to each auxiliary processor; each auxiliary processor computes the image its virtual camera captures, processes it according to the image-processing parameters, and sends it to its corresponding first display;
(6-3-3) While the origin moves, the main processor transmits the origin-move information to the driver of the motion platform, and the driver makes the platform swing synchronously with the origin's movement.
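Steps (6-3-1) through (6-3-3) describe a fan-out of one origin-move command to five render targets. The following minimal Python sketch illustrates that dispatch; all names (dispatch_origin_move, FACES, the job tuples) are invented for illustration and are not part of the patent.

```python
# Illustrative sketch of step (6-3): one origin-move command fans out to the
# main processor (ceiling display, top pyramid) and the 4 auxiliary
# processors (wall displays, side pyramids). All names are hypothetical.

FACES = ["front", "right", "back", "left"]  # one first display per auxiliary processor

def dispatch_origin_move(origin, delta):
    """Apply an origin move and list the render jobs it triggers."""
    new_origin = tuple(o + d for o, d in zip(origin, delta))
    jobs = [("main", "top", new_origin)]          # second display (ceiling)
    jobs += [("aux%d" % i, face, new_origin)      # corresponding first display
             for i, face in enumerate(FACES)]
    return new_origin, jobs
```

In the device itself, the same move information would also be forwarded to the platform driver, as step (6-3-3) specifies.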
As an alternative to the above scheme, the device further comprises a human-action collector comprising a 3-axis acceleration sensor circuit and a single-chip microcontroller; the sensor circuit is electrically connected to the microcontroller, the microcontroller is electrically connected to the main processor, and the collector is worn on a person's arm. The step in which the user moves the origin through the controller's buttons is then replaced by the following steps:
(7-1) When the wearer makes an action, the microcontroller reads the net acceleration vector signal A(t) detected by the 3-axis acceleration sensor, A(t) = (x(t), y(t), z(t)), where x(t), y(t) and z(t) are the X-axis, Y-axis and Z-axis vector signals detected by the sensor respectively;
(7-2) The main processor receives A(t) and stores it in the memory. For each moment i in A(t), the main processor takes moment i and the 4 subsequent sample values, and processes the N sample values containing i using the formula

A_i(t) = A(t) / ( Σ_{t=0}^{N−1} A²(t) / MAX )

to obtain the ideal acceleration value A_i(t) of each moment i, where MAX is the maximum of the N sample values. The ideal acceleration signal A′(t) corresponding to the net acceleration vector signal is assembled from the A_i(t) of each moment and stored in the memory;
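The normalization of step (7-2) can be sketched as follows. The grouping of the formula is ambiguous in the source text, so this Python sketch assumes A_i(t) = A(t) / (Σ A²(t) / MAX) over the N-sample window, treating samples as scalars for clarity; the function name is invented.

```python
def normalize_window(samples):
    """Ideal-value normalization of one window of N samples (N = 5 in step
    (7-2): moment i plus the 4 following samples). Each sample is divided by
    the window's sum of squares scaled by 1/MAX. The exact grouping of the
    patent's formula is an assumption."""
    mx = max(samples)                      # MAX: largest sample in the window
    denom = sum(s * s for s in samples) / mx
    return [s / denom for s in samples]
```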
(7-3) Signal interception:
The memory stores a start threshold, an end threshold δ, an amplification factor K, the names of m human actions, and m hidden Markov models (HMMs) corresponding respectively to the m actions. Each HMM is characterized by a five-tuple λ = (X, O, A, B, π), where X is the finite set of states, O is the finite set of observation symbols of λ, A is the transition probability, B is the output probability, and π is the initial state distribution.
The main processor computes the acceleration with idle motion filtered out:

ΔA_t = |x′(t+1) − x′(t)| + |y′(t+1) − y′(t)| + |z′(t+1) − z′(t)|

When the windowed mean A′_i(t) = (1/N) Σ_{t=i}^{i+N−1} ΔA_t exceeds the start threshold, the main processor judges that an action starts at the current moment; when A′_i(t) = (1/N) Σ_{t=i}^{i+N−1} ΔA_t < δ, it judges that the action ends at the current moment;
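Step (7-3)'s start/end detection can be sketched as below: a sliding N-sample mean of ΔA_t is compared against the start and end thresholds. This is an illustrative Python reading of the step; the function names and the scalar thresholds are assumptions.

```python
def delta_accel(prev, cur):
    """ΔA_t between consecutive ideal-value samples (x', y', z')."""
    return sum(abs(c - p) for p, c in zip(prev, cur))

def segment_motion(deltas, start_thr, end_thr, N):
    """Return (start, end) indices: start where the N-sample mean of ΔA first
    exceeds the start threshold, end where it later drops below δ (end_thr)."""
    start = end = None
    for i in range(len(deltas) - N + 1):
        mean = sum(deltas[i:i + N]) / N
        if start is None and mean > start_thr:
            start = i
        elif start is not None and mean < end_thr:
            end = i
            break
    return start, end
```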
(7-4) Signal amplification:
The ideal acceleration signal A′(t) between the action's start moment and end moment is amplified K times;
(7-5) Feature extraction:
The sample values of A′(t) are divided into P sections in time order, with L sample values per section, and the mean feature of each section's data is extracted by

feature = Σ_{j=0}^{L} s(j) / L

where s(j) is the sample value at moment j within each section;
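Step (7-5)'s per-section averaging can be sketched as follows, assuming the intercepted signal divides into P equal sections of L samples (the function name is invented):

```python
def segment_means(signal, P):
    """Split the intercepted signal into P time-ordered sections of L samples
    each and return the mean of every section (step (7-5)'s feature)."""
    L = len(signal) // P                   # samples per section
    return [sum(signal[p * L:(p + 1) * L]) / L for p in range(P)]
```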
(7-6) Action recognition:
The main processor sets O = feature and uses the forward-backward algorithm formula

Gesture(O) = argmax_{q=1,…,m} [ P(O | λ_q) ]

to find, among the m hidden Markov models, the one with the greatest probability of producing the feature. The origin-move command corresponding to each human action is stored in the memory; after obtaining Gesture(O), the main processor looks up the model HMM_max corresponding to Gesture(O), identifies the human action corresponding to HMM_max, and thereby obtains the origin-move command for that action.
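Step (7-6) scores the feature sequence against the m stored HMMs and picks the argmax. Scoring P(O | λ) needs only the forward pass of the forward-backward algorithm, which the following sketch implements for a discrete HMM; all names and the toy model layout are illustrative, not the patent's.

```python
def forward_prob(obs, pi, A, B):
    """P(obs | λ) for a discrete HMM λ = (A, B, π) via the forward algorithm."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]      # initialization
    for o in obs[1:]:                                     # induction
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return sum(alpha)                                     # termination

def classify(obs, models):
    """Gesture(O) = argmax over q of P(O | λ_q), q = 0..m-1."""
    return max(range(len(models)),
               key=lambda q: forward_prob(obs, *models[q]))
```

With models a list of (pi, A, B) triples, classify returns the index of the action whose HMM best explains the observed feature sequence.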
Preferably, the value of P is proportional to the duration and the complexity of an action; K is 110 to 270.
Preferably, the start threshold is (1.68g, 0, 0) to (1.79g, 0, 0) and δ is (1.27g, 0, 0) to (1.39g, 0, 0), where g is gravitational acceleration.
Preferably, m is 8 to 20.
The present invention therefore has the following beneficial effects: the played content and the corresponding scene can be customized rather than fixed; users can interact with the content film, and the interaction is reflected in the played content and on the 6-DOF platform in real time, which effectively improves the experience and enhances realism; and the 6-DOF platform operates according to the content film, which effectively improves immersion.
Brief description of the drawings
Fig. 1 is a block diagram of the present invention;
Fig. 2 is a circuit diagram of the 3-axis acceleration sensor of the present invention;
Fig. 3 is a flow chart of embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the camera regions of the present invention.
In the figures: main processor 1, memory 2, auxiliary processor 3, controller 4, first display 5, second display 6, driver 7, 3-axis acceleration sensor circuit 9, single-chip microcontroller 10, origin 8, camera region 11.
Detailed description of the embodiments
The present invention is further described below with reference to the drawings and specific embodiments.
Embodiment 1
The embodiment shown in Fig. 1 is a scene virtualization device located in a room of a building, the room comprising a rectangular enclosing wall, a floor and a ceiling. The device comprises a main processor 1, a memory 2, 4 auxiliary processors 3, a controller 4, 4 first displays 5 arranged in sequence on the enclosing wall, a second display 6 on the ceiling, a 6-DOF platform on the floor, and 50 chairs arranged on the platform. The main processor is electrically connected to the second display, the memory, each auxiliary processor, the controller, and the driver 7 of the 6-DOF platform; each auxiliary processor is electrically connected to its corresponding first display. The controller comprises a housing and 6 control buttons on the housing, associated respectively with up, down, forward, backward, left and right movement of the origin; each button is electrically connected to the main processor. The second display touches the upper edge of each first display, and the side edges of adjacent first displays touch.
As shown in Fig. 3, the control method of the scene virtualization device comprises the following steps:
Step 100: build the three-dimensional cube model of the experienced scene and set the origin.
Image modality, resolution, frame rate and aspect ratio are stored in the memory, and the main processor transmits them to each auxiliary processor. A three-dimensional cube model of the experienced scene is stored in the memory; as shown in Fig. 4, which is a top-view schematic of the model, the cube's center point is set as the origin 8.
Step 200: set each virtual camera and its corresponding capture region of the experienced scene.
A virtual camera is set up in the main processor and in each auxiliary processor, all located at the origin. Lines drawn from the origin to the cube's corner vertices divide the cube into 6 rectangular pyramids, each forming a camera region 11: top, bottom, front, right, back and left. The main processor's virtual camera captures the image of the top pyramid, and the 4 auxiliary processors' cameras capture the front, right, back and left pyramids respectively. The scene images are stored in the main processor and the 4 auxiliary processors respectively. The arrows in Fig. 4 indicate each virtual camera's shooting direction.
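Because the pyramids are bounded by the cube's diagonal planes through its corners, each viewing direction from the origin belongs to the pyramid of its dominant axis. A small Python sketch of that test follows; the axis convention (x right, y forward, z up) and the function name are assumptions made for illustration.

```python
def pyramid_for_direction(d):
    """Return which of the 6 rectangular pyramids (top, bottom, front, right,
    back, left) contains direction d = (x, y, z) from the cube's center: the
    component with the largest magnitude decides, since the pyramids are
    bounded by the cube's diagonal planes."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        return "top" if z > 0 else "bottom"
    if ax >= ay:
        return "right" if x > 0 else "left"
    return "front" if y > 0 else "back"
```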
Step 300: the user moves the origin through the controller's buttons.
Step 310: the second display shows the synchronized image.
After the main processor receives an origin-move command, it computes the image its virtual camera captures at the new origin position, processes the image using the resolution, frame-rate and aspect-ratio parameters, and sends it to the second display.
Step 320: each first display shows its synchronized image.
The main processor forwards the origin-move information to each auxiliary processor; each auxiliary processor computes the image its virtual camera captures, processes it using the resolution, frame-rate and aspect-ratio parameters, and sends it to its corresponding first display.
Step 330: the 6-DOF platform swings synchronously.
While the origin moves, the main processor transmits the origin-move information to the driver of the 6-DOF platform, and the driver makes the platform swing synchronously with the origin's movement.
When the origin moves up and forward, the 6-DOF platform first moves up and then tilts forward; when the origin moves down and forward, the platform first moves down and then tilts forward; when the origin moves up and to the right, the platform first moves up and then tilts to the right; for the origin's other movements the platform makes similar synchronized swings.
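The move-to-motion mapping described above (vertical shift first, then the matching tilt) can be sketched as an ordered command list. A minimal Python illustration follows; the command strings and the function name are invented, and the sign convention (positive z up, y forward, x right) is an assumption.

```python
def platform_commands(move):
    """Translate an origin move (dx, dy, dz) into the ordered 6-DOF platform
    actions described in embodiment 1: the vertical motion is issued first,
    followed by the corresponding tilt(s)."""
    dx, dy, dz = move
    cmds = []
    if dz > 0:
        cmds.append("raise")
    elif dz < 0:
        cmds.append("lower")
    if dy > 0:
        cmds.append("tilt forward")
    elif dy < 0:
        cmds.append("tilt backward")
    if dx > 0:
        cmds.append("tilt right")
    elif dx < 0:
        cmds.append("tilt left")
    return cmds
```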
Embodiment 2
Embodiment 2 comprises all the structures and steps of embodiment 1. As shown in Fig. 1 and Fig. 2, embodiment 2 further comprises a human-action collector comprising a 3-axis acceleration sensor circuit 9 and a single-chip microcontroller 10; the sensor circuit is electrically connected to the microcontroller, the microcontroller is electrically connected to the main processor, and the collector is worn on a person's arm.
The 3-axis acceleration sensor circuit comprises a 3-axis acceleration sensor of model MPU6050, 2 resistors and 4 capacitors; pin 11 of the sensor is electrically connected to pin 63 of the microcontroller, and pin 12 of the sensor to pin 58 of the microcontroller.
The two resistors are R14 and R15; the four capacitors are C21, C22, C23 and C24. One end of R14 and of R15 is connected to pins 23 and 24 of the sensor respectively, and the other ends both connect to VCC. One end of C21 connects to pin 20 of the sensor, the other end to ground. One end of C22 connects to pin 13 of the sensor and to VCC, the other end to ground. One end of C24 connects to pin 8 of the sensor and to VCC; one end of C23 connects to pin 9 of the sensor; the other ends of C23 and C24 are both grounded.
In embodiment 2, the step of embodiment 1 in which the user moves the origin through the controller's buttons is replaced by the following steps:
(7-1) When the wearer makes an action, the microcontroller reads the net acceleration vector signal A(t) detected by the 3-axis acceleration sensor, A(t) = (x(t), y(t), z(t)), where x(t), y(t) and z(t) are the X-axis, Y-axis and Z-axis vector signals detected by the sensor respectively;
(7-2) The main processor receives A(t) and stores it in the memory. For each moment i in A(t), the main processor takes moment i and the 4 subsequent sample values, and processes the N sample values containing i using the formula

A_i(t) = A(t) / ( Σ_{t=0}^{N−1} A²(t) / MAX )

to obtain the ideal acceleration value A_i(t) of each moment i, where MAX is the maximum of the N sample values. The ideal acceleration signal A′(t) corresponding to the net acceleration vector signal is assembled from the A_i(t) of each moment and stored in the memory;
(7-3) Signal interception:
The memory stores a start threshold, an end threshold δ, an amplification factor K, the names of m human actions, and m hidden Markov models (HMMs) corresponding respectively to the m actions. Each HMM is characterized by a five-tuple λ = (X, O, A, B, π), where X is the finite set of states, O is the finite set of observation symbols of λ, A is the transition probability, B is the output probability, and π is the initial state distribution.
The main processor computes the acceleration with idle motion filtered out:

ΔA_t = |x′(t+1) − x′(t)| + |y′(t+1) − y′(t)| + |z′(t+1) − z′(t)|

When the windowed mean A′_i(t) = (1/N) Σ_{t=i}^{i+N−1} ΔA_t exceeds the start threshold, the main processor judges that an action starts at the current moment; when A′_i(t) = (1/N) Σ_{t=i}^{i+N−1} ΔA_t < δ, it judges that the action ends at the current moment;
(7-4) Signal amplification:
The ideal acceleration signal A′(t) between the action's start moment and end moment is amplified K times;
(7-5) Feature extraction:
The sample values of A′(t) are divided into P sections in time order, with L sample values per section, and the mean feature of each section's data is extracted by

feature = Σ_{j=0}^{L} s(j) / L

where s(j) is the sample value at moment j within each section;
(7-6) Action recognition:
The main processor sets O = feature and uses the forward-backward algorithm formula

Gesture(O) = argmax_{q=1,…,m} [ P(O | λ_q) ]

to find, among the m hidden Markov models, the one with the greatest probability of producing the feature. The origin-move command corresponding to each human action is stored in the memory; after obtaining Gesture(O), the main processor looks up the model HMM_max corresponding to Gesture(O), identifies the human action corresponding to HMM_max, and thereby obtains the origin-move command for that action. The value of P is proportional to the duration and the complexity of an action; K is 270; the start threshold is (1.68g, 0, 0) and δ is (1.27g, 0, 0), where g is gravitational acceleration; m is 20.
It should be understood that this embodiment is only for illustrating the present invention and is not intended to limit its scope. It should further be understood that, after reading the teachings of the present invention, those skilled in the art may make various changes or modifications, and such equivalents likewise fall within the scope defined by the claims appended to this application.

Claims (10)

1. A scene virtualization device, located in a room of a building, the room comprising a rectangular enclosing wall, a floor and a ceiling; characterized in that it comprises a main processor (1), a memory (2), 4 auxiliary processors (3), a controller (4), 4 first displays (5) arranged in sequence on the enclosing wall, a second display (6) on the ceiling, a six-degree-of-freedom (6-DOF) platform on the floor, and a number of chairs arranged on the 6-DOF platform; the main processor is electrically connected to the second display, the memory, each auxiliary processor, the controller and the driver (7) of the 6-DOF platform, and each auxiliary processor is electrically connected to its corresponding first display; the controller comprises a housing and, on the housing, several control buttons associated respectively with up, down, forward, backward, left and right movement of the origin; each control button is electrically connected to the main processor.
2. The scene virtualization device according to claim 1, characterized in that it further comprises a human-action collector comprising a 3-axis acceleration sensor circuit (9) and a single-chip microcontroller (10); the sensor circuit is electrically connected to the microcontroller, the microcontroller is electrically connected to the main processor, and the collector is worn on a person's arm.
3. The scene virtualization device according to claim 2, characterized in that the 3-axis acceleration sensor circuit comprises a 3-axis acceleration sensor of model MPU6050, several resistors and several capacitors; pin 11 of the sensor is electrically connected to pin 63 of the microcontroller, and pin 12 of the sensor to pin 58 of the microcontroller.
4. The scene virtualization device according to claim 3, characterized in that there are two resistors, R14 and R15, and four capacitors, C21, C22, C23 and C24; one end of R14 and of R15 is connected to pins 23 and 24 of the sensor respectively, and the other ends both connect to VCC; one end of C21 connects to pin 20 of the sensor, the other end to ground; one end of C22 connects to pin 13 of the sensor and to VCC, the other end to ground; one end of C24 connects to pin 8 of the sensor and to VCC; one end of C23 connects to pin 9 of the sensor; the other ends of C23 and C24 are both grounded.
5. The scene virtualization device according to any one of claims 1 to 4, characterized in that the second display touches the upper edge of each first display, and the side edges of adjacent first displays touch.
6. A control method suitable for the scene virtualization device according to claim 1, characterized in that it comprises the following steps:
(6-1) several image-processing parameters are stored in the memory, the main processor transmits each parameter to each auxiliary processor, a three-dimensional cube model of the experienced scene is stored in the memory, and the cube's center point is set as the origin;
(6-2) a virtual camera is set up in the main processor and in each auxiliary processor, all located at the origin; lines drawn from the origin to the cube's corner vertices divide the cube into 6 rectangular pyramids: top, bottom, front, right, back and left; the main processor's virtual camera captures the image of the top pyramid, the 4 auxiliary processors' cameras capture the front, right, back and left pyramids respectively, and the scene images are stored in the main processor and the 4 auxiliary processors respectively;
(6-3) a user moves the origin through the controller's buttons:
(6-3-1) after the main processor receives an origin-move command, it computes the image its virtual camera captures at the new origin position, processes the image according to the image-processing parameters, and sends it to the second display;
(6-3-2) the main processor forwards the origin-move information to each auxiliary processor; each auxiliary processor computes the image its virtual camera captures, processes it according to the image-processing parameters, and sends it to its corresponding first display;
(6-3-3) while the origin moves, the main processor transmits the origin-move information to the driver of the motion platform, and the driver makes the platform swing synchronously with the origin's movement.
7. The control method of the scene virtualization device according to claim 6, wherein the device further comprises a human-action collector comprising a 3-axis acceleration sensor circuit and a single-chip microcontroller, the sensor circuit is electrically connected to the microcontroller, the microcontroller is electrically connected to the main processor, and the collector is worn on a person's arm; characterized in that the step in which the user moves the origin through the controller's buttons is replaced by the following steps:
(7-1) when the wearer makes an action, the microcontroller reads the net acceleration vector signal A(t) detected by the 3-axis acceleration sensor, A(t) = (x(t), y(t), z(t)), where x(t), y(t) and z(t) are the X-axis, Y-axis and Z-axis vector signals detected by the sensor respectively;
(7-2) the primary processor receives A(t) and stores it in the memory; for each moment i in A(t), the primary processor takes moment i and the 4 subsequent sample values, and processes these N sample values containing i with the formula
A_i(t) = A(t) · Σ_{t=0}^{N-1} A²(t) / MAX
to obtain the acceleration ideal value A_i(t) of each moment i, where MAX is the maximal value among the N sample values;
It then obtains the acceleration ideal value signal A′(t), composed of the A_i(t) of every moment and corresponding to the net acceleration vector signal, and stores A′(t) in the memory;
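Step (7-2) reads as a sliding-window normalization. The sketch below is one literal Python interpretation of the reconstructed formula, applied to a scalar magnitude sequence; the windowing and this per-sample reading of the claim's ambiguous notation are assumptions:

```python
def ideal_values(a, n=5):
    """One reading of step (7-2): for each moment i, take the N samples
    starting at i (the claim fixes N = 5: moment i plus the 4 following
    samples), let MAX be the largest sample in the window, and form
    A_i = A(i) * (sum of squared window samples) / MAX.
    Assumes the window maximum is nonzero."""
    out = []
    for i in range(len(a) - n + 1):
        window = a[i:i + n]
        out.append(a[i] * sum(s * s for s in window) / max(window))
    return out
```

A real device would apply this per axis (or to |A(t)|) as samples stream in from the microcontroller, rather than over a complete list.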
(7-3) signal interception:
A start threshold, an end threshold δ, an amplification factor K, the names of m human actions, and m hidden Markov models (HMMs) corresponding respectively to the m human actions are stored in the memory; each HMM is characterized by the five-tuple λ = (X, O, A, B, π), where X is the finite set of states, O is the finite set of observations, A is the transition probabilities, B is the output probabilities, and π is the initial state distribution;
The primary processor uses the formula
ΔA_t = |x′(t+1) − x′(t)| + |y′(t+1) − y′(t)| + |z′(t+1) − z′(t)|
to compute the acceleration ΔA_t with spurious motion filtered out;
When A′_i(t) = (1/N) Σ_{t=i}^{i+N-1} ΔA_t exceeds the start threshold, the primary processor judges that an action starts at the current moment;
When A′_i(t) = (1/N) Σ_{t=i}^{i+N-1} ΔA_t < δ, the primary processor judges that the action ends at the current moment;
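Step (7-3) can be sketched as a first-difference energy measure plus a moving-average threshold test. A hedged Python sketch; the function names, the window length N, and any concrete threshold values are assumptions (claim 9 only bounds the thresholds):

```python
def delta_a(x, y, z):
    """Delta-A_t = |x'(t+1)-x'(t)| + |y'(t+1)-y'(t)| + |z'(t+1)-z'(t)|:
    sum of absolute first differences over the three axes, which
    suppresses slow, spurious motion."""
    return [abs(x[t + 1] - x[t]) + abs(y[t + 1] - y[t]) + abs(z[t + 1] - z[t])
            for t in range(len(x) - 1)]

def segment(delta, start_thr, end_thr, n=5):
    """Return (start, end) indices of one action: the N-sample mean of
    delta first rises above start_thr (action start) and later falls
    below end_thr (action end, the claim's delta threshold)."""
    start = None
    for i in range(len(delta) - n + 1):
        mean = sum(delta[i:i + n]) / n
        if start is None and mean > start_thr:
            start = i
        elif start is not None and mean < end_thr:
            return start, i
    return start, None
```

The samples between the two returned indices are what step (7-4) then amplifies by K.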
(7-4) signal amplification:
the acceleration ideal value signal A′(t) between the action start moment and the action end moment is amplified by a factor of K;
(7-5) feature extraction:
A′(t) is divided into P segments in time order, with L sample values per segment, and the mean feature of each segment's sample values is extracted by the formula
feature = Σ_{j=0}^{L-1} s(j) / L
where s(j) denotes the sample value of the j-th moment within each segment;
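Step (7-5) is segment-wise averaging. A minimal sketch, with P and L as caller-chosen assumptions (claim 8 only says P scales with the action's duration and complexity):

```python
def features(signal, p, l):
    """Step (7-5): split the intercepted (and amplified) signal into P
    time-ordered segments of L samples each and take each segment's
    mean as its feature value."""
    return [sum(signal[k * l:(k + 1) * l]) / l for k in range(p)]
```

The resulting P-element feature vector is the observation O handed to the recognition step (7-6).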
(7-6) action recognition:
The primary processor sets O = feature and uses the Forward-Backward algorithm formula
Gesture(O) = argmax_{q=1,…,m} P(O | λ_q)
to find, among the m hidden Markov models, the one with the maximum probability of producing the feature, Gesture(O); the initial-point movement command corresponding to each human action is stored in the memory; after obtaining Gesture(O), the primary processor looks up the model HMM_max corresponding to Gesture(O), identifies the human action corresponding to HMM_max, and thereby obtains the initial-point movement command corresponding to that human action.
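Step (7-6) scores the observation against each of the m models and keeps the argmax. The claim invokes the Forward-Backward algorithm; for scoring P(O | λ_q) the forward pass alone suffices, so this sketch implements a discrete-HMM forward algorithm plus the argmax. The model structures are illustrative assumptions:

```python
def forward_prob(obs, pi, a, b):
    """P(O | lambda) for a discrete HMM via the forward algorithm.
    pi[i]: initial probability of state i; a[i][j]: transition
    probability i -> j; b[i][o]: probability that state i emits o."""
    n = len(pi)
    alpha = [pi[i] * b[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * a[i][j] for i in range(n)) * b[j][o]
                 for j in range(n)]
    return sum(alpha)

def recognize(obs, models):
    """Gesture(O) = argmax_q P(O | lambda_q): return the name of the
    stored model most likely to have produced the observation."""
    return max(models, key=lambda name: forward_prob(obs, *models[name]))
```

The recognized name would then index the stored initial-point movement command. Note the sketch takes a discrete observation sequence, whereas the claim's features are real-valued means; a real system would quantize them or use continuous-emission HMMs.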
8. The control method of the scene virtualization device according to claim 6, characterized in that the value of P is directly proportional to the duration of an action and to the complexity of the action; K is 110 to 270.
9. The control method of the scene virtualization device according to claim 6, 7 or 8, characterized in that the start threshold is (1.68g, 0, 0) to (1.79g, 0, 0); δ is (1.27g, 0, 0) to (1.39g, 0, 0); where g is the acceleration of gravity.
10. The control method of the scene virtualization device according to claim 6, 7 or 8, characterized in that m is 8 to 20.
CN201510526465.5A 2015-08-25 2015-08-25 Scene virtualization device and control method Pending CN105183005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510526465.5A CN105183005A (en) 2015-08-25 2015-08-25 Scene virtualization device and control method

Publications (1)

Publication Number Publication Date
CN105183005A true CN105183005A (en) 2015-12-23

Family

ID=54905143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510526465.5A Pending CN105183005A (en) 2015-08-25 2015-08-25 Scene virtualization device and control method

Country Status (1)

Country Link
CN (1) CN105183005A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510074A (en) * 2009-02-27 2009-08-19 河北大学 High present sensation intelligent perception interactive motor system and implementing method
US20100199230A1 * 2009-01-30 2010-08-05 Microsoft Corporation Gesture recognizer system architecture
CN101908103A (en) * 2010-08-19 2010-12-08 北京启动在线文化娱乐有限公司 Network dance system capable of interacting in body sensing mode
CN102622774A (en) * 2011-01-31 2012-08-01 微软公司 Living room movie creation
CN104407706A (en) * 2014-12-01 2015-03-11 江苏怡通智运科技发展有限公司 Multimedia interactive query all-in-one machine based on gesture recognition and use method thereof

Non-Patent Citations (1)

Title
HUANG Wenkai, et al.: "Arduino Development Practical Guide: Robotics Volume" (《Arduino开发实战指南:机器人卷》), 30 June 2014 *


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20180504
