CN109116987B - Holographic display system based on Kinect gesture control - Google Patents
- Publication number: CN109116987B (granted); application CN201810913695.0A
- Authority
- CN
- China
- Prior art keywords
- skeleton
- action
- holographic display
- kinect
- gesture control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a holographic display system based on Kinect gesture control, comprising a Kinect gesture-control end, a server, and a holographic display client, the Kinect gesture-control end being connected to the holographic display client through the server. The Kinect gesture-control end collects an image of a person, extracts the person's skeleton-point structure from the image, and binds it to a virtual model to form an action instruction. The server receives the action instruction from the Kinect gesture-control end and sends it to the holographic display client, which receives the action instruction, splits the display into four views, and projects them onto a holographic display stand. By mapping the actions of a real person onto a virtual character and displaying the result on the holographic display stand, a user can watch a 3D phantom stereoscopic display effect without polarized glasses and without any physical constraint, which produces strong visual impact and a pronounced sense of depth.
Description
Technical Field
The invention relates to the field of virtual reality, and in particular to a holographic display system based on Kinect gesture control.
Background
Since its release in 2010, the Kinect has been popular among developers because it supports real-time import of dynamically captured skeletal data along with functions such as image recognition, voice input, and speech recognition. Microsoft also provides embedded drivers, a program-development interface (a raw sensor API), convenient installation files, and complete development manuals, so that developers can easily build motion-sensing systems and applications supporting natural human-computer interaction on the Visual Studio development platform using mainstream high-level programming languages. The Kinect for Windows SDK adds many further possibilities.
Holographic display is a new display technology that has risen internationally in recent years. It allows a three-dimensional image to be suspended directly in free space outside the device, without any screen or medium, and viewed from any angle through 360 degrees.
Disclosure of Invention
The invention aims to provide a holographic display system based on Kinect gesture control that achieves a three-dimensional display effect by mapping the actions of a real person onto a virtual character and displaying the result on a holographic display stand.
To achieve this purpose, the technical scheme of the invention is as follows:
A holographic display system based on Kinect gesture control comprises a Kinect gesture-control end, a server, and a holographic display client, the Kinect gesture-control end being connected to the holographic display client through the server. The Kinect gesture-control end collects the user's gesture actions and converts them into action instructions; the server receives the action instructions from the Kinect gesture-control end and sends them to the holographic display client; the holographic display client receives the action instructions, splits the display into four views, and projects them onto the holographic display stand.
In this scheme, the Kinect gesture-control end comprises a Kinect sensor connected to a human-computer interaction module. The Kinect sensor captures and recognizes images of a person; the human-computer interaction module extracts the person's skeleton-point structure, binds it to a virtual model to form action instructions, and transmits them to the server.
In this scheme, the server comprises a character driver connected to an action matcher. The character driver acquires and parses character action instructions; the action matcher checks whether a character action instruction matches a server instruction, converts a successful match into a control instruction that drives the character's action, and sends it to the holographic display client.
In this scheme, the holographic display client comprises a skeleton-model controller and a display player. The skeleton-model controller receives instructions transmitted by the server and drives the corresponding action on the virtual model; the display player displays the action performed by the virtual model.
In this scheme, the Kinect gesture-control end further comprises an input module connected to the human-computer interaction module, used for entering control instructions that are matched against the action instructions.
In this scheme, the display player is a four-view holographic display stand in the shape of a pyramid.
A holographic display method for the holographic display system based on Kinect gesture control comprises the following steps:
S1, model binding: capture an image of the human body with the Kinect sensor, extract the person's skeleton, and obtain the coordinates of each skeleton node; convert the coordinates relative to the Kinect sensor into coordinates of the virtual character relative to the screen, so that the joint points of the virtual character and the real person are synchronized; then build a movable skeleton model from the joint points using the Avatar skeleton system provided by Unity;
S2, action acquisition: when a user enters the recognition area, the Kinect sensor automatically starts capture mode and tracks and recognizes the user in the area; once a qualifying body motion is captured, a virtual model performing the corresponding motion is built with the movable-skeleton-model method of step S1;
S3, server processing: acquire and parse the person's skeleton-point data and action instructions, check whether an action instruction matches the skeleton-point data, preview the action instruction on the server side after a successful match, and transmit the action instruction according to the user's requirements;
S4, platform playback: split the picture of the action reflected on the virtual model into four views and project them onto the four faces of the holographic display stand for playback.
With the holographic display system and method based on Kinect gesture control, the actions of a real person are mapped onto a virtual character and displayed on the holographic display stand. A user can watch the 3D phantom stereoscopic display effect without polarized glasses and without any physical constraint, which produces strong visual impact and a pronounced sense of depth.
Drawings
FIG. 1 is a block diagram of a holographic display system based on Kinect gesture control according to an embodiment of the present invention;
FIG. 2 is a timing diagram of a holographic display method based on Kinect gesture control according to an embodiment of the present invention;
FIG. 3 is a flowchart of a Kinect gesture control-based holographic display method according to an embodiment of the present invention.
Detailed Description
The technical solution of the invention is described in further detail below with reference to the accompanying drawings and embodiments.
As shown in FIG. 1, the holographic display system based on Kinect gesture control comprises a Kinect gesture-control end, a server, and a holographic display client, the Kinect gesture-control end being connected to the holographic display client through the server. The Kinect gesture-control end collects images of a person, extracts the person's skeleton-point structure from the images, and binds it to a virtual model to form action instructions. The server receives the action instructions from the Kinect gesture-control end and sends them to the holographic display client, which receives the action instructions, splits the display into four views, and projects them onto the holographic display stand.
The Kinect gesture-control end comprises a Kinect sensor connected to a human-computer interaction module. The Kinect sensor captures and recognizes images of a person; the human-computer interaction module extracts the person's skeleton-point structure, binds it to a virtual model to form action instructions, and transmits them to the server. The Kinect gesture-control end also comprises an input module, such as a keyboard, whose keys directly input action instructions.
The specific workflow of the Kinect gesture-control end is as follows:
The Kinect sensor is switched on and a user enters the motion-capture area. The software recognizes that the user has entered the area and starts capturing and tracking. If the user performs no corresponding action, no action instruction is generated or transmitted. If the user successfully completes a corresponding action, the human-computer interaction module acquires the action instruction and transmits it to the server, which forwards it to the holographic display client. If the client does not receive the instruction, the server is notified of the transmission failure and action-correction information is provided. If the client successfully receives the action instruction and reproduces the corresponding action data, the action is shown in four split views and projected onto the holographic display stand.
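The branching workflow above can be sketched as a small dispatch function (an illustrative Python sketch; the state names and function signature are assumptions for demonstration, not part of the patent):

```python
def process_capture(user_in_area: bool, action_recognized: bool, client_ack: bool) -> str:
    """Mirror the control-end workflow: track a user, forward a matched
    action instruction via the server, and report the client's response."""
    if not user_in_area:
        return "idle"        # nobody in the capture area yet
    if not action_recognized:
        return "tracking"    # user is tracked but no qualifying action captured
    if not client_ack:
        return "retry"       # client missed the instruction: prompt correction
    return "projected"       # client reproduces the action in four split views
```

Each call evaluates one pass of the loop; a real system would run this continuously against the Kinect's frame stream.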
The server is the bridge between the Kinect gesture-control end and the holographic display client. It comprises a character driver connected to an action matcher. The character driver acquires and parses character action instructions; the action matcher checks whether a character action instruction matches a server instruction, converts a successful match into a control instruction that drives the character's action, and sends it to the holographic display client.
The holographic display client comprises a skeleton-model controller connected to a display player. The skeleton-model controller receives instructions transmitted by the server and drives the corresponding action on the virtual model; the display player displays the action performed by the virtual model. The display player is a four-view holographic display stand in the shape of a pyramid. Its specific working process is as follows:
The user opens the app and enters the service IP address. If the connection fails, an IP-address error is reported and the user is guided to look up the IP-connection method. If the connection succeeds, the four-view projection interface is entered. The user then enters the motion-capture area and begins to move. As the user changes body posture, the holographic display system carries out the corresponding instruction. If capture of a user instruction fails, the holographic display client does not respond. When the user leaves the motion-capture area, the holographic display system exits.
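The client start-up flow can be sketched as follows (an illustrative Python sketch; the return values and the idea of checking the address against a set of reachable servers are assumptions for demonstration):

```python
def connect_client(ip: str, reachable_servers: set) -> tuple:
    """Sketch of the client start-up flow: validate the entered service IP,
    then enter the four-view projection interface on success."""
    if ip not in reachable_servers:
        # connection failed: report the error and guide the user
        return ("error", "check the IP connection method")
    return ("connected", "four-screen projection interface")
```

A real client would attempt a network connection (e.g. a socket) rather than a set lookup; the set stands in for "the server answered".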
The holographic display method of the system, shown in FIGS. 2 and 3, comprises the following steps:
S1, model binding: capture an image of the human body with the Kinect sensor, extract the person's skeleton, and obtain the coordinates of each skeleton node; convert the coordinates relative to the Kinect sensor into coordinates of the virtual character relative to the screen, so that the joint points of the virtual character and the real person are synchronized; then build a movable skeleton model from the joint points using the Avatar skeleton system provided by Unity;
S2, action acquisition: when a user enters the recognition area, the Kinect sensor automatically starts capture mode and tracks and recognizes the user in the area; once a qualifying body motion is captured, a virtual model performing the corresponding motion is built with the movable-skeleton-model method of step S1;
S3, server processing: acquire and parse the person's skeleton-point data and action instructions, check whether an action instruction matches the skeleton-point data, preview the action instruction on the server side after a successful match, and transmit the action instruction according to the user's requirements;
S4, platform playback: split the picture of the action reflected on the virtual model into four views and project them onto the four faces of the holographic display stand for playback.
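The four-view split of step S4 can be sketched as a layout computation: four copies of the rendered frame are placed at the top, right, bottom, and left of a square canvas, each rotated to face one facet of the pyramid (a minimal sketch; the canvas size, facet names, and rotation convention are assumptions, as the patent does not give exact geometry):

```python
def four_view_layout(canvas: int) -> dict:
    """Place four copies of a rendered frame around the centre of a square
    canvas of side `canvas`, each rotated to face its pyramid facet."""
    cx, cy = canvas // 2, canvas // 2
    quarter = canvas // 4
    return {
        "front": {"center": (cx, quarter),          "rotation_deg": 0},
        "right": {"center": (canvas - quarter, cy), "rotation_deg": 90},
        "back":  {"center": (cx, canvas - quarter), "rotation_deg": 180},
        "left":  {"center": (quarter, cy),          "rotation_deg": 270},
    }
```

When such a canvas is shown under a pyramid-shaped half-mirrored stand, each facet reflects one view, producing the floating-image effect described above.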
Step S1 is the key point of the invention. Mapping the actions of the real person onto the virtual character proceeds as follows:
First, the person's skeleton and the coordinates of each skeleton node are acquired through the Kinect. Because the coordinate system used by the Kinect differs from the one used inside Unity, a conversion is needed from the person's coordinates relative to the Kinect sensor to the virtual character's coordinates relative to the screen; this synchronizes the joint points of the real person and the virtual character.
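The coordinate conversion can be sketched as a simple affine mapping (a minimal illustration; the scale factor and screen centre are assumed values, and a real implementation would use the Kinect SDK's and Unity's own coordinate-mapping facilities):

```python
def kinect_to_screen(joint, scale=100.0, screen_center=(400, 300)):
    """Map a Kinect camera-space joint (metres; x to the right, y up) to
    2D screen coordinates for the virtual character."""
    x_m, y_m, _z_m = joint
    sx = screen_center[0] + x_m * scale   # metres -> pixels along x
    sy = screen_center[1] - y_m * scale   # screen y grows downward
    return (sx, sy)
```

The depth component is ignored here; a fuller sketch would also scale the character by distance to the sensor.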
Second, the virtual character moves its skeleton model according to the synchronized joint points to reproduce the person's motion. This function is implemented through the Avatar skeleton system provided by Unity, which moves bones smoothly according to the rotation of the person's skeleton nodes. The human skeleton-node indices are associated with the bones of the Avatar system through the key-value map Dictionary&lt;int, HumanBodyBones&gt; boneIndex2MecanimMap: when a node of the person moves, the corresponding Avatar bone is found through boneIndex2MecanimMap by the node's index, and the Avatar bone is moved to the same position as the human node; the movement of that bone pulls the skeleton along, so the whole skeleton moves.
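The key-value association can be sketched in Python (the actual system uses Unity's C# Dictionary&lt;int, HumanBodyBones&gt;; the joint indices and Mecanim bone names below are illustrative assumptions, not the patent's exact table):

```python
# Map Kinect skeleton-node indices to Unity Mecanim (Avatar) bone names.
# Indices and names are illustrative, not the patent's exact mapping.
BONE_INDEX_TO_MECANIM = {
    0: "Hips",           # Kinect SpineBase
    3: "Head",
    5: "LeftUpperArm",   # Kinect ShoulderLeft
    6: "LeftLowerArm",   # Kinect ElbowLeft
    9: "RightUpperArm",
    10: "RightLowerArm",
}

def drive_avatar(kinect_rotations, apply_rotation):
    """For each tracked Kinect node, look up the paired Avatar bone and
    apply the node's rotation so the whole skeleton follows the person."""
    driven = []
    for node_index, rotation in kinect_rotations.items():
        bone = BONE_INDEX_TO_MECANIM.get(node_index)
        if bone is not None:          # skip nodes with no Avatar counterpart
            apply_rotation(bone, rotation)
            driven.append(bone)
    return driven
```

In Unity, `apply_rotation` would correspond to setting the rotation of the Transform returned by `Animator.GetBoneTransform` for the mapped `HumanBodyBones` value.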
The embodiments above describe the objects, technical solutions, and advantages of the invention in further detail. It should be understood that they are only specific embodiments of the invention and are not intended to limit its scope; any modification, equivalent substitution, or improvement made within the spirit and principle of the invention shall fall within its scope of protection.
Claims (1)
1. A holographic display system based on Kinect gesture control, characterized in that: it comprises a Kinect gesture-control end, a server, and a holographic display client, the Kinect gesture-control end being connected to the holographic display client through the server; the Kinect gesture-control end collects an image of a person, extracts the person's skeleton-point structure from the image, and binds it to a virtual model to form an action instruction; the server receives the action instruction from the Kinect gesture-control end and sends it to the holographic display client; the holographic display client receives the action instruction, splits the display into four views, and projects them onto the holographic display stand;
the Kinect gesture-control end comprises a Kinect sensor connected to a human-computer interaction module; the Kinect sensor captures and recognizes an image of a person, and the human-computer interaction module extracts the person's skeleton-point structure, binds it to a virtual model to form an action instruction, and transmits the action instruction to the server;
the server comprises a character driver connected to an action matcher; the character driver acquires and parses character action instructions, and the action matcher checks whether a character action instruction matches a server instruction, converts a successful match into a control instruction that drives the character's action, and sends it to the holographic display client;
the holographic display client comprises a skeleton-model controller and a display player; the skeleton-model controller receives instructions transmitted by the server and drives the corresponding action on the virtual model, and the display player displays the action of the virtual model; the display player is a four-view holographic display stand in the shape of a pyramid;
the Kinect gesture-control end further comprises an input module connected to the human-computer interaction module for directly entering action instructions; the method comprises the following steps:
S1, model binding: capture an image of the human body with the Kinect sensor, extract the person's skeleton, and obtain the coordinates of each skeleton node; convert the coordinates relative to the Kinect sensor into coordinates of the virtual character relative to the screen, so that the joint points of the virtual character and the real person are synchronized; then build a movable skeleton model from the joint points using the Avatar skeleton system provided by Unity, wherein the synchronization of the joint points of the virtual character and the real person is achieved as follows:
first, the real person's skeleton and the coordinates of each skeleton node are acquired through the Kinect sensor, and the person's coordinates relative to the Kinect sensor are converted into the virtual character's coordinates relative to the screen, synchronizing the joint points of the real person and the virtual character and thereby resolving the difference between the coordinate system used by the Kinect and the one used inside Unity;
second, through the Avatar skeleton system provided by Unity, the virtual character moves its skeleton model according to the synchronized joint points to reproduce the motion; the Avatar skeleton system moves bones smoothly according to the rotation of the person's skeleton nodes, and the human skeleton-node indices are associated with the bones of the Avatar skeleton system through the key-value map Dictionary&lt;int, HumanBodyBones&gt; boneIndex2MecanimMap:
when a node of the real person moves, the corresponding Avatar bone node is found through boneIndex2MecanimMap by the node's index, and the Avatar node is moved to the same position as the human skeleton node; the movement of the Avatar node pulls the skeleton along, so the whole skeleton moves;
S2, action acquisition: when a user enters the recognition area, the Kinect sensor automatically starts capture mode and tracks and recognizes the user in the area; once a qualifying body motion is captured, a virtual model performing the corresponding motion is built with the movable-skeleton-model method of step S1;
S3, server processing: acquire and parse the person's skeleton-point data and action instructions, check whether an action instruction matches the skeleton-point data, preview the action instruction on the server side after a successful match, and transmit the action instruction according to the user's requirements;
S4, platform playback: split the picture of the action reflected on the virtual model into four views and project them onto the four faces of the holographic display stand for playback.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810913695.0A CN109116987B (en) | 2018-08-13 | 2018-08-13 | Holographic display system based on Kinect gesture control |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109116987A CN109116987A (en) | 2019-01-01 |
CN109116987B true CN109116987B (en) | 2022-04-08 |
Family
ID=64852161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810913695.0A Active CN109116987B (en) | 2018-08-13 | 2018-08-13 | Holographic display system based on Kinect gesture control |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109116987B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112379777A (en) * | 2020-11-23 | 2021-02-19 | 南京科盈信息科技有限公司 | Digital exhibition room gesture recognition system based on target tracking |
CN115576417A (en) * | 2022-09-27 | 2023-01-06 | 广州视琨电子科技有限公司 | Interaction control method, device and equipment based on image recognition |
CN117319628A (en) * | 2023-09-18 | 2023-12-29 | 四开花园网络科技(广州)有限公司 | Real-time interactive naked eye 3D virtual scene system supporting outdoor LED screen |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106952348A (en) * | 2017-03-28 | 2017-07-14 | 云南大学 | A kind of digital building model methods of exhibiting and system based on infrared gesture identification |
CN107272882A (en) * | 2017-05-03 | 2017-10-20 | 江苏大学 | The holographic long-range presentation implementation method of one species |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140325455A1 (en) * | 2013-04-26 | 2014-10-30 | Ebay Inc. | Visual 3d interactive interface |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||