CN110231870A - Interactive performance method, electronic device and computer-readable storage medium based on Kinect somatosensory technology - Google Patents
- Publication number
- CN110231870A (application number CN201910511002.XA)
- Authority
- CN
- China
- Prior art keywords
- limb action
- image
- audio
- real
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
The present invention relates to an interactive performance method, an electronic device, and a computer-readable storage medium based on Kinect somatosensory technology. The method includes: measuring the limb actions of one or more players imitating instrument playing, and configuring for each limb action a VR image and audio for virtually displaying the instrument-playing process; building an instrument-playing property database from the mapping relations among the VR images, audio, and limb actions; acquiring depth images of a user within the Kinect sensor's working space in real time; capturing the user's skeleton coordinates from the depth images and recognizing the user's real-time limb actions from those coordinates; and, according to the real-time limb actions, retrieving the matching VR image and audio from the instrument-playing property database for real-time playback, while guiding end effectors to produce a visual presentation.
Description
Technical field
The present invention relates to an interactive performance method, an electronic device, and a computer-readable storage medium based on Kinect somatosensory technology.
Background art
At present, participatory interactive entertainment venues, experiential interactive games, and theme parks are commonly equipped with interactive performance systems. Using human-limb tracking and positioning technology, end effectors such as fountain control systems, flame control systems, stage-lighting control systems, and fireworks control systems can follow a user's limb motion to produce a visual presentation, greatly improving the user's sense of immersion and enjoyment. However, there are two main existing limb tracking and positioning technologies: wearable smart devices based on sensors, and action recognition based on optical positioning and posture capture. Sensor-based wearable smart devices are cumbersome because of the devices themselves and their wiring, making wearing and walking inconvenient, and factors such as their high price greatly limit their application. Optical positioning and posture capture, for its part, is prone to motion blur because color images are easily affected by illumination conditions, and traditional vision-based moving-target tracking requires calibration of the camera's intrinsic and extrinsic parameters, so errors often occur in practice.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide an interactive performance method, electronic device, and computer-readable storage medium based on Kinect somatosensory technology that overcome the above problems or at least partially solve them.
According to one aspect of the present invention, an interactive performance method based on Kinect somatosensory technology is provided, comprising:
measuring the limb actions of one or more players imitating instrument playing, configuring for each limb action a VR image and audio for virtually displaying the instrument-playing process, and building an instrument-playing property database from the mapping relations among the VR images, audio, and limb actions;
acquiring, in real time using a Kinect sensor, depth images of the user within its working space;
capturing the user's skeleton coordinates from the depth images, and recognizing the user's real-time limb actions according to those coordinates;
according to the real-time limb actions, retrieving the matching VR image and audio from the instrument-playing property database for real-time playback, and guiding end effectors to produce a visual presentation.
Optionally, measuring the limb actions of one or more players imitating instrument playing further comprises acquiring the speed, acceleration, and amplitude of each limb action; and retrieving the matching VR image and audio from the instrument-playing property database for real-time playback further comprises controlling the audio's playback properties based on the speed, acceleration, and amplitude.
Optionally, guiding end effectors to produce a visual presentation further comprises:
setting visual presentation schemes for a variety of end effectors, matching the various limb actions with the various visual presentation schemes, and establishing a data mapping library;
according to the real-time limb action, retrieving the matching visual presentation scheme from the data mapping library to carry out the visual presentation.
Optionally, the end effectors are specifically one or more of a fountain control system, flame control system, stage-lighting control system, and fireworks control system.
Optionally, guiding end effectors to produce a visual presentation further comprises linking the end effectors through intelligent routing.
Optionally, acquiring depth images of the user in real time using the Kinect sensor further comprises using threshold filtering to restrict the depth data in the depth images to a specific depth range.
Optionally, the depth range is specifically 1220 mm to 3810 mm.
Optionally, capturing the user's skeleton coordinates from the depth images further comprises smoothing the skeleton coordinates with a Kalman filter.
According to another aspect of the present invention, an interactive performance apparatus based on Kinect somatosensory technology is provided, comprising:
a database construction module, adapted to measure the limb actions of one or more players imitating instrument playing, configure for each limb action a VR image and audio for virtually displaying the instrument-playing process, and build an instrument-playing property database from the mapping relations among the VR images, audio, and limb actions;
a depth-image acquisition module, adapted to acquire, in real time using a Kinect sensor, depth images of the user within its working space;
a limb-action recognition module, adapted to capture the user's skeleton coordinates from the depth images and recognize the user's real-time limb actions according to those coordinates;
a playing module, adapted to retrieve the matching VR image and audio from the instrument-playing property database for real-time playback according to the real-time limb actions, and guide end effectors to produce a visual presentation.
Optionally, the database construction module is further adapted to acquire the speed, acceleration, and amplitude of each limb action; and the playing module is further adapted to control the audio's playback properties based on the speed, acceleration, and amplitude.
Optionally, the playing module is further adapted to:
set visual presentation schemes for a variety of end effectors, match the various limb actions with the various visual presentation schemes, and establish a data mapping library;
according to the real-time limb action, retrieve the matching visual presentation scheme from the data mapping library to carry out the visual presentation.
Optionally, the end effectors are specifically one or more of a fountain control system, flame control system, stage-lighting control system, and fireworks control system.
Optionally, the playing module is further adapted to link the end effectors through intelligent routing.
Optionally, the depth-image acquisition module is further adapted to use threshold filtering to restrict the depth data in the depth images to a specific depth range.
Optionally, the depth range is specifically 1220 mm to 3810 mm.
Optionally, the limb-action recognition module is further adapted to smooth the skeleton coordinates with a Kalman filter.
According to another aspect of the present invention, an electronic device is provided, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the above method.
According to another aspect of the present invention, a computer-readable storage medium is provided, storing one or more programs which, when executed by a processor, implement the above method.
Beneficial effects:
Through Kinect somatosensory technology, the interactive performance system of the embodiments of the present invention constructs a motion mapping between the trajectory of the limbs relative to the hip center and the trajectory of an end effector relative to its base point, realizing real-time interaction between limb actions imitating instrument playing and end effectors such as fountains, flames, lights, and fireworks, together with a VR display of the virtual instrument, with high real-time performance and interactivity. The system requires no wearing, unlike traditional wearable smart devices, so walking is convenient; it does not produce errors due to illumination conditions, and is relatively stable in use.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and that the above and other objects, features, and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a schematic flowchart of an interactive performance method based on Kinect somatosensory technology according to an embodiment of the present invention;
Fig. 2 shows a schematic structural diagram of an interactive performance apparatus based on Kinect somatosensory technology according to an embodiment of the present invention;
Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
Fig. 1 shows a schematic flowchart of an interactive performance method based on Kinect somatosensory technology according to an embodiment of the present invention. As shown in Fig. 1, the method of the embodiment comprises the following steps, executed in sequence:
S11: measure the limb actions of one or more players imitating instrument playing, configure for each limb action a VR image and audio for virtually displaying the instrument-playing process, and build an instrument-playing property database from the mapping relations among the VR images, audio, and limb actions;
In this step, the limb actions of one or more players imitating instrument playing are measured through the interactive performance system. The system includes a Kinect sensor, a computer system, a fountain control system, a flame control system, a stage-lighting control system, and a fireworks control system. A Kinect support head with adjustable height and angle is installed beside the computer system; the Kinect sensor is mounted on this support head, and its output is connected to the computer system by a USB data cable. While limb actions are being measured, a player stands in the working space of the Kinect sensor and continuously performs movements imitating instrument playing in front of the sensor; the system records the player's limb actions, particularly the discrete movements of the four limbs and fingers. A VR image and audio for virtually displaying the instrument-playing process are then configured for each limb action, the VR images and audio are matched with the limb actions, and the instrument-playing property database is built from the mapping relations among the three.
Further, in this step, characteristic data such as the speed, acceleration, and amplitude of the player's limb actions are acquired, and a data mapping library is established between the kinetic characteristics of the player's limb actions and the playback properties of the corresponding instrument audio (such as volume, frequency, and timbre).
It should be noted that the VR images cover a variety of instruments. For each instrument, the computer system stores a large number of limb actions together with their VR images and audio in the instrument-playing property database, so that the computer system can later identify from a limb action which instrument is to be played.
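The database described here is essentially a mapping from recognized limb actions to VR and audio assets. The patent gives no implementation; the minimal Python sketch below is one reading of it, with hypothetical action labels, file paths, and the `PerformanceEntry` type all being illustrative assumptions:

```python
from dataclasses import dataclass


@dataclass
class PerformanceEntry:
    """One record of the instrument-playing property database."""
    vr_clip: str      # VR image/animation shown on the VR display
    audio_clip: str   # audio file played back in real time
    instrument: str   # instrument the action implies


# Hypothetical database: recognized limb-action label -> assets.
performance_db = {
    "strum_down": PerformanceEntry("vr/guitar_strum.mp4", "audio/strum.wav", "guitar"),
    "key_press": PerformanceEntry("vr/piano_press.mp4", "audio/c4.wav", "piano"),
}


def lookup(action_label):
    """Return the entry mapped to a recognized limb action, or None."""
    return performance_db.get(action_label)
```

A real system would populate many such entries per instrument so that, as the text notes, the instrument type can be inferred from the action alone.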
S12: acquire, in real time using the Kinect sensor, depth images of the user within its working space;
After the instrument-playing property database has been built, while a user experiences the interactive performance system, the Kinect sensor acquires depth images at 30 frames per second and sends them to the computer system. The Kinect sensor uses Light Coding technology: the corresponding measurement space is encoded with continuous light, the encoded light is received by an inductor, decoded and computed by a processor, and finally formed into a 3D image with depth data. Specifically, when the laser of the Kinect sensor strikes the user's limbs in the working space, random interference fringes called speckle are formed. Speckle is highly random, and the pattern obtained differs with distance, so the speckle at any two positions in the working space is different. By this principle, as the user's limbs move in the working space, the differing speckle patterns act as distinct labels over the entire working space; using these labels, the specific position of the user's limbs can be recorded exactly, forming a 3D image with depth data.
S13: capture the user's skeleton coordinates from the depth images, and recognize the user's real-time limb actions according to those coordinates;
In this step, after the computer system receives a depth image, it first obtains the joint information of the human body from the depth image by depth-image processing, and then, according to the joint information, tracks the joints with bone-tracking technology to obtain the 3D coordinates of 25 skeleton points of the body. By continuously reading the three-dimensional changes of the skeleton points at 30 Hz, the user's posture is identified from those changes, thereby capturing the user's real-time limb actions. The host-computer software of the computer system supports bone capture for 2 to 6 people.
It should be noted that in this step the computer system can also judge from the skeleton data whether the user's posture is stable; if not, it does not proceed to the next step but returns to S12 to reacquire depth images.
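The stability check can be approximated by comparing successive skeleton frames. The patent does not specify the criterion; the 30 mm tolerance and the frame layout (25 joints × 3 coordinates, matching the 25 skeleton points above) in this sketch are assumptions:

```python
import numpy as np


def pose_is_stable(frames, tol_mm=30.0):
    """Judge posture stability: the maximum joint displacement between
    any two successive frames (each a 25x3 array of mm coordinates)
    must stay under tol_mm."""
    for prev, cur in zip(frames, frames[1:]):
        if np.max(np.linalg.norm(cur - prev, axis=1)) > tol_mm:
            return False
    return True
```

When the check fails, the pipeline would loop back to depth-image acquisition (S12) rather than proceed to matching.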
S14: according to the real-time limb actions, retrieve the matching VR image and audio from the instrument-playing property database for real-time playback, and guide end effectors to produce a visual presentation.
The user's real-time limb actions are matched against the limb actions in the instrument-playing property database. According to the matching result, the audio file of the corresponding instrument is retrieved from the database and played in real time, and the corresponding VR image is displayed on a VR display, while the audio's playback properties are controlled according to the speed, acceleration, and amplitude of the user's real-time limb actions.
The end effectors refer specifically to the fountain control system, flame control system, stage-lighting control system, and fireworks control system. The computer system connects to each end effector through intelligent routing technology, so that it can switch freely among the fountain, flame, lighting, and fireworks control interfaces.
In advance, data mapping libraries are established by matching the various limb actions with visual presentation schemes such as the various water shapes and movement effects of the fountain, the various shapes and movement effects of the flames, the various types and transformation effects of the lights, and the various types and movement effects of the fireworks. When a user experiences the interactive performance system, the computer system retrieves the matching visual presentation scheme from the data mapping library according to the user's real-time limb actions, and controls the corresponding end effector to carry out the visual presentation according to that scheme. Specifically, the computer system analyzes the real-time limb actions with inverse kinematics, calculates the trajectory of the user's hands relative to the user's own hip center, maps from that trajectory the trajectory the end effector must follow relative to its base point, and selects the visual presentation scheme according to the mapped trajectory.
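The hand-to-effector mapping can be sketched as a linear transform of the hand's offset from the hip center onto the effector's workspace. The scale factor and effector base point below are illustrative assumptions; the patent does not give the mapping formula:

```python
import numpy as np


def map_to_effector(hand, hip, effector_base=(0.0, 0.0, 0.0), scale=2.0):
    """Map a hand position relative to the hip center onto an end
    effector's target point relative to its own base point, by a
    simple (assumed) linear scaling of the offset."""
    offset = np.asarray(hand, dtype=float) - np.asarray(hip, dtype=float)
    return np.asarray(effector_base, dtype=float) + scale * offset
```

Applying this frame by frame to the hand trajectory yields the effector trajectory from which a presentation scheme would be selected.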
Once the visual presentation scheme is determined, the computer system transmits the control commands produced according to the scheme to the end effector through a DMX512 interface, and controls the end effector to vary its visual presentation following the user's limb motion, realizing real-time guidance of the end effector by the user's limb motion.
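A DMX512 packet is simple: a zero start code followed by up to 512 channel levels in the range 0–255. The sketch below packs such a frame; which channels drive which fountain, flame, light, or firework parameters is not given in the patent and would depend on the fixtures:

```python
def dmx_frame(channel_values):
    """Build one DMX512 packet: byte 0 is the zero start code,
    bytes 1..512 are channel levels (0-255); unset channels stay 0."""
    frame = bytearray(513)  # start code + 512 channels
    for channel, level in channel_values.items():
        if not 1 <= channel <= 512:
            raise ValueError("DMX512 channel must be in 1..512")
        frame[channel] = max(0, min(255, level))
    return bytes(frame)
```

The resulting bytes would then be written to the DMX512 interface at the protocol's refresh rate.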
Through Kinect somatosensory technology, the interactive performance system of the embodiments of the present invention constructs a motion mapping between the trajectory of the limbs relative to the hip center and the trajectory of an end effector relative to its base point, realizing real-time interaction between limb actions imitating instrument playing and end effectors such as fountains, flames, lights, and fireworks, together with a VR display of the virtual instrument, with high real-time performance and interactivity. The system requires no wearing, unlike traditional wearable smart devices, so walking is convenient; it does not produce errors due to illumination conditions, and is relatively stable in use.
In an optional embodiment of the present invention, the real-time acquisition of the user's depth images by the Kinect sensor described in S12 of the method shown in Fig. 1 specifically includes using threshold filtering to restrict the depth data in the depth images to a specific depth range. Specifically, to guarantee the real-time performance of the system, the depth images use a resolution of 640*480 at a frame rate of 30 f/s. To guarantee the accuracy of the depth distance, threshold filtering restricts the depth data to the range 1220 mm to 3810 mm, because in depth-image processing the image is clearest, and the most accurate information can be obtained, when the data lie between 1220 mm and 3810 mm; this reduces trouble and error in later processing and allows the system to perform as fully as possible.
In an optional embodiment of the present invention, capturing the user's skeleton coordinates from the depth images as described in S13 of the method shown in Fig. 1 specifically includes smoothing the skeleton coordinates with a Kalman filter, using a combination of different filters to eliminate jitter in the data.
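One minimal reading of this smoothing step is a scalar constant-position Kalman filter applied independently to each joint coordinate stream. The process-noise and measurement-noise values `q` and `r` below are illustrative, not taken from the patent:

```python
def kalman_smooth(measurements, q=1e-3, r=1e-1):
    """Smooth one joint-coordinate stream with a scalar
    constant-position Kalman filter to suppress skeleton jitter.
    q: assumed process noise, r: assumed measurement noise."""
    x, p = float(measurements[0]), 1.0
    smoothed = [x]
    for z in measurements[1:]:
        p += q                # predict: uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update estimate toward the measurement
        p *= 1.0 - k          # uncertainty shrinks after the update
        smoothed.append(x)
    return smoothed
```

Running this over each of the 75 coordinate streams (25 joints × x, y, z) per frame keeps latency negligible at 30 Hz.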
Fig. 2 shows a schematic structural diagram of an interactive performance apparatus based on Kinect somatosensory technology according to an embodiment of the present invention. As shown in Fig. 2, the apparatus of the embodiment comprises:
a database construction module 21, adapted to measure the limb actions of one or more players imitating instrument playing, configure for each limb action a VR image and audio for virtually displaying the instrument-playing process, and build an instrument-playing property database from the mapping relations among the VR images, audio, and limb actions;
a depth-image acquisition module 22, adapted to acquire, in real time using a Kinect sensor, depth images of the user within its working space;
a limb-action recognition module 23, adapted to capture the user's skeleton coordinates from the depth images and recognize the user's real-time limb actions according to those coordinates;
a playing module 24, adapted to retrieve the matching VR image and audio from the instrument-playing property database for real-time playback according to the real-time limb actions, and guide end effectors to produce a visual presentation.
In another embodiment of the present invention, the database construction module 21 shown in Fig. 2 is further adapted to acquire the speed, acceleration, and amplitude of each limb action; and the playing module 24 is further adapted to control the audio's playback properties based on the speed, acceleration, and amplitude.
In another embodiment of the present invention, the playing module 24 shown in Fig. 2 is further adapted to:
set visual presentation schemes for a variety of end effectors, match the various limb actions with the various visual presentation schemes, and establish a data mapping library;
according to the real-time limb action, retrieve the matching visual presentation scheme from the data mapping library to carry out the visual presentation.
Specifically, the end effectors are one or more of a fountain control system, flame control system, stage-lighting control system, and fireworks control system.
In another embodiment of the present invention, the playing module 24 shown in Fig. 2 is further adapted to link the end effectors through intelligent routing.
In another embodiment of the present invention, the depth-image acquisition module 22 shown in Fig. 2 is further adapted to use threshold filtering to restrict the depth data in the depth images to a specific depth range. Specifically, the depth range is 1220 mm to 3810 mm.
In another embodiment of the present invention, the limb-action recognition module 23 shown in Fig. 2 is further adapted to smooth the skeleton coordinates with a Kalman filter.
The apparatus of the embodiments of the present invention may be used to perform the above method embodiments; its principles and technical effects are similar and are not repeated here.
It should be understood that the algorithms and displays provided here are not inherently related to any particular computer, virtual apparatus, or other equipment. Various general-purpose apparatuses may also be used together with the teachings herein, and the structure required to construct such apparatuses is apparent from the description above. Moreover, the present invention is not directed to any particular programming language; it should be understood that the content of the invention described herein can be realized in various programming languages, and the description above of a specific language is given to disclose the best mode of the invention.
Numerous specific details are set forth in the specification provided here. It should be appreciated, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to simplify the disclosure and aid the understanding of one or more of the various inventive aspects, features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of the exemplary embodiments above. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or apparatus so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device for detecting the wearing state of an electronic device according to an embodiment of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing some or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device conventionally comprises a processor 31 and a memory 32 arranged to store computer-executable instructions (program code). The memory 32 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 32 has a memory space 33 storing program code 34 for performing any of the method steps in the embodiments. For example, the memory space 33 for program code may include individual program codes 34 each used to implement one of the various steps of the above method. These program codes may be read from, or written into, one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a compact disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer-readable storage medium such as that described with reference to Fig. 4. The computer-readable storage medium may have memory segments, memory spaces, etc. arranged similarly to the memory 32 in the electronic device of Fig. 3. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit stores program code 41 for performing the method steps according to the invention, i.e. program code readable by a processor such as the processor 31; when these program codes are run by the electronic device, they cause the electronic device to perform the individual steps of the method described above.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
Claims (10)
1. An interactive performance method based on Kinect somatosensory technology, characterized by comprising:
measuring the limb actions of one or more different players imitating instrument playing, configuring for each limb action a VR image and audio for virtually displaying the instrument-playing process, and constructing an instrument-playing characteristic database based on the mapping relations among the VR images, the audio and the limb actions;
acquiring, in real time using a Kinect sensor, a human-body depth image of a user within its working space;
capturing the user's human skeleton coordinates based on the human-body depth image, and identifying the user's real-time limb action according to the skeleton coordinates;
calling, according to the real-time limb action, the matching VR image and audio in the instrument-playing characteristic database for real-time playback, and guiding and controlling end effectors to perform a visual presentation.
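The characteristic database of claim 1 amounts to a lookup from a recognized limb action to its pre-configured VR image and audio. A minimal Python sketch of such an in-memory table follows; the names `PerformanceDatabase` and `MediaPair`, and the asset paths, are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MediaPair:
    vr_image: str  # path or id of the VR image asset (hypothetical field)
    audio: str     # path or id of the audio clip (hypothetical field)

class PerformanceDatabase:
    """Maps a limb-action label to its configured VR image and audio."""
    def __init__(self):
        self._table = {}

    def register(self, action_label, vr_image, audio):
        # Store the VR image/audio pair configured for this limb action.
        self._table[action_label] = MediaPair(vr_image, audio)

    def lookup(self, action_label):
        # Return the matching media pair, or None if the action is unknown.
        return self._table.get(action_label)

db = PerformanceDatabase()
db.register("strum_guitar", "vr/guitar.png", "audio/guitar.wav")
pair = db.lookup("strum_guitar")
```

At runtime, the label produced by the skeleton-based action recognizer would be passed to `lookup`, and the returned pair handed to the playback and end-effector control stages.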
2. The method according to claim 1, characterized in that measuring the limb actions of one or more different players imitating instrument playing further comprises: acquiring the speed, acceleration and amplitude of the limb actions; and in that calling the matching VR image and audio in the instrument-playing characteristic database for real-time playback further comprises: controlling the audio effect of the audio based on the speed, acceleration and amplitude.
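Claim 2 leaves open exactly how speed, acceleration and amplitude drive the audio. One plausible sketch, under the assumption that speed scales tempo, amplitude scales volume, and acceleration shapes attack (none of these mappings are stated in the patent):

```python
def audio_params(speed, acceleration, amplitude,
                 speed_max=2.0, accel_max=10.0, amp_max=1.0):
    """Map motion features to playback parameters.

    Illustrative assumption: speed -> tempo scale, amplitude -> volume,
    acceleration -> attack sharpness. The *_max normalization constants
    are hypothetical calibration values.
    """
    clamp = lambda x: max(0.0, min(1.0, x))
    return {
        "tempo_scale": 0.5 + clamp(speed / speed_max),  # 0.5x .. 1.5x
        "volume": clamp(amplitude / amp_max),           # 0 .. 1
        "attack": clamp(acceleration / accel_max),      # 0 .. 1
    }
```

A faster, wider gesture thus plays the configured clip louder and quicker, which matches the claim's intent of motion-dependent audio control.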
3. The method according to claim 1, characterized in that guiding and controlling end effectors to perform a visual presentation further comprises:
setting visual presentation schemes for a variety of end effectors, matching the various limb actions with the various visual presentation schemes, and establishing a data mapping library;
calling, according to the real-time limb action, the matching visual presentation scheme in the data mapping library to implement the visual presentation.
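The data mapping library of claim 3 can be sketched as a table pairing each limb-action label with one or more end-effector schemes. The action and scheme names below are purely illustrative assumptions; only the end-effector categories (fountain, flame, light, fireworks) come from claim 4.

```python
# Hypothetical data mapping library: limb action -> visual presentation schemes.
VISUAL_SCHEMES = {
    "raise_both_arms": ["fountain:burst", "light:strobe"],
    "drum_hit":        ["flame:pulse"],
    "bow_sweep":       ["light:wave", "fireworks:single"],
}

def schemes_for(action, mapping=VISUAL_SCHEMES):
    # Look up the visual presentation schemes matched to a recognized action;
    # an unrecognized action triggers no end effector.
    return mapping.get(action, [])
```

Each returned scheme string would then be dispatched to the corresponding control system (e.g. over the intelligent routing of claim 5).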
4. The method according to claim 3, characterized in that the end effector is specifically one or more of a fountain control system, a flame control system, a lighting control system and a fireworks control system.
5. The method according to claim 4, characterized in that guiding and controlling end effectors to perform a visual presentation further comprises: linking the end effectors through intelligent routing.
6. The method according to claim 1, characterized in that acquiring, in real time using a Kinect sensor, a human-body depth image of a user within its working space further comprises: using a threshold to filter the depth data in the human-body depth image into a specific depth range.
7. The method according to claim 6, characterized in that the depth range is specifically 1220 mm to 3810 mm.
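The threshold filtering of claims 6 and 7 can be applied per pixel over a depth frame. This NumPy sketch assumes one common convention, zeroing out-of-range pixels to discard background; the patent states only the range, not the implementation.

```python
import numpy as np

def threshold_depth(depth_mm, near=1220, far=3810):
    """Keep only depth samples inside [near, far] millimetres (claim 7's
    range); pixels outside the range are set to 0 to mark them invalid."""
    out = depth_mm.copy()
    out[(out < near) | (out > far)] = 0
    return out

# Tiny 2x2 example frame in millimetres.
frame = np.array([[500, 1500], [2000, 4000]], dtype=np.uint16)
filtered = threshold_depth(frame)
# filtered == [[0, 1500], [2000, 0]]
```

Restricting depth to this band keeps skeleton tracking focused on a player standing roughly 1.2–3.8 m from the sensor.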
8. The method according to claim 1, characterized in that capturing the user's human skeleton coordinates based on the human-body depth image further comprises: smoothing the skeleton coordinates using a Kalman filter.
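Claim 8 specifies Kalman smoothing but not the filter model. A minimal sketch for one scalar joint coordinate, assuming a constant-position model with hypothetical noise variances `q` and `r`:

```python
class ScalarKalman:
    """Minimal 1-D Kalman smoother for a single joint coordinate.

    A sketch only: the patent states that Kalman filtering is applied to
    the skeleton coordinates, not which state model or noise parameters
    are used. In practice one instance would run per joint per axis.
    """
    def __init__(self, q=1e-3, r=1e-1):
        self.q, self.r = q, r        # process / measurement noise variances
        self.x, self.p = None, 1.0   # state estimate and its variance

    def update(self, z):
        if self.x is None:           # initialize on the first measurement
            self.x = z
            return self.x
        self.p += self.q                   # predict: variance grows
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct toward the measurement
        self.p *= (1 - k)                  # shrink variance
        return self.x
```

The smoothed estimate lags noisy jumps in the raw skeleton data, which reduces jitter before action recognition.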
9. An electronic device, wherein the electronic device comprises:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the method according to any one of claims 1 to 8.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs which, when executed by a processor, implement the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910511002.XA CN110231870A (en) | 2019-06-13 | 2019-06-13 | A kind of interactive performance method, electronic equipment and computer readable storage medium based on Kinect somatosensory technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110231870A true CN110231870A (en) | 2019-09-13 |
Family
ID=67859785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910511002.XA Pending CN110231870A (en) | 2019-06-13 | 2019-06-13 | A kind of interactive performance method, electronic equipment and computer readable storage medium based on Kinect somatosensory technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110231870A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106648083A (en) * | 2016-12-09 | 2017-05-10 | 广州华多网络科技有限公司 | Playing scene synthesis enhancement control method and device |
CN107851113A (en) * | 2015-05-08 | 2018-03-27 | Gn 股份有限公司 | Be configured as based on derived from performance sensor unit user perform attribute and realize the framework of automatic classification and/or search to media data, apparatus and method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11437006B2 (en) * | 2018-06-14 | 2022-09-06 | Sunland Information Technology Co., Ltd. | Systems and methods for music simulation via motion sensing |
US20220366884A1 (en) * | 2018-06-14 | 2022-11-17 | Sunland Information Technology Co., Ltd. | Systems and methods for music simulation via motion sensing |
US11749246B2 (en) * | 2018-06-14 | 2023-09-05 | Sunland Information Technology Co., Ltd. | Systems and methods for music simulation via motion sensing |
CN110706553A (en) * | 2019-11-13 | 2020-01-17 | 北京音悦荚科技有限责任公司 | Musical instrument auxiliary learning system, method and device based on AR augmented reality |
CN111522930A (en) * | 2020-04-22 | 2020-08-11 | 深圳创维-Rgb电子有限公司 | Scene decompression data processing method, display device and storage medium |
CN113158906A (en) * | 2021-04-23 | 2021-07-23 | 天津大学 | Motion capture-based guqin experience learning system and implementation method |
WO2023207759A1 (en) * | 2022-04-25 | 2023-11-02 | 漳州松霖智能家居有限公司 | User posture detection method, intelligent cushion system, and related device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110231870A (en) | A kind of interactive performance method, electronic equipment and computer readable storage medium based on Kinect somatosensory technology | |
JP7418340B2 (en) | Image augmented depth sensing using machine learning | |
KR101839851B1 (en) | Signal generation and detector systems and methods for determining positions of fingers of a user | |
CN102822869B (en) | Capture view and the motion of the performer performed in the scene for generating | |
Menache | Understanding motion capture for computer animation and video games | |
US9861886B2 (en) | Systems and methods for applying animations or motions to a character | |
CN105210117B (en) | Augmented reality (AR) capture and broadcasting | |
Menache | Understanding motion capture for computer animation | |
KR101455403B1 (en) | Capturing and processing facial motion data | |
US9981193B2 (en) | Movement based recognition and evaluation | |
CN105102081B (en) | Control device with passive reflector | |
US8724887B2 (en) | Environmental modifications to mitigate environmental factors | |
US20080100622A1 (en) | Capturing surface in motion picture | |
CN102681657A (en) | Interactive content creation | |
CN110673716A (en) | Method, device and equipment for interaction between intelligent terminal and user and storage medium | |
NO20033301L (en) | Method and system for simulating surgical procedures | |
SG173496A1 (en) | Method and system for rendering an entertainment animation | |
CN108288300A (en) | Human action captures and skeleton data mapped system and its method | |
WO2014051584A1 (en) | Character model animation using stored recordings of player movement interface data | |
CN108496166A (en) | Method and system for the integrated database for generating posture and gesture | |
CN110989839A (en) | System and method for man-machine fight | |
Baker | The History of Motion Capture within the Entertainment Industry | |
JP7511000B2 (en) | Interactive attraction system and method for associating objects with users - Patents.com | |
JP2012070781A (en) | Game device, game control method, and program | |
KR20080085181A (en) | Eye movement data replacement in motion capture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190913 |