CN117555415A - Naked eye 3D interaction system based on action recognition and display device thereof - Google Patents

Naked eye 3D interaction system based on action recognition and display device thereof

Info

Publication number
CN117555415A
Authority
CN
China
Prior art keywords
data
naked eye
action
rendering engine
display device
Prior art date
Legal status
Pending
Application number
CN202210937108.8A
Other languages
Chinese (zh)
Inventor
叶晓青
冯兵
朱正刚
Current Assignee
TAICANG INSTITUTE OF COMPUTING TECHNOLOGY CHINESE ACADEMY OF SCIENCES
Original Assignee
TAICANG INSTITUTE OF COMPUTING TECHNOLOGY CHINESE ACADEMY OF SCIENCES
Priority date
Filing date
Publication date
Application filed by TAICANG INSTITUTE OF COMPUTING TECHNOLOGY CHINESE ACADEMY OF SCIENCES
Priority to CN202210937108.8A
Publication of CN117555415A
Legal status: Pending


Abstract

The invention discloses a naked eye 3D interaction system based on action recognition and a display device thereof. In the system, a real-time sensing device acquires an action model of a person, and the action classification attributes of the person are recognized from that model and sent to a rendering engine through a network communication protocol. The rendering engine renders a three-dimensional scene in real time and presents it on a display device in real time. A number of interactable elements are placed in the scene and are configured to change in different ways according to the action classification attributes received over the network, so that the system finally realizes a naked eye 3D immersive experience that can interact in real time with the actions of the person.

Description

Naked eye 3D interaction system based on action recognition and display device thereof
Technical Field
The invention relates to the field of naked eye 3D interactive experience, and in particular to naked eye 3D interaction based on action recognition.
Background
In recent years, more and more cities have turned prime locations in their centers into popular check-in spots, where a variety of eye-catching naked eye 3D LED screens continually attract countless visitors to stop and take photos. Behind these striking display effects is the combination of LED screens and naked eye 3D technology. Everything our eyes see in daily life appears three-dimensional because a person's two eyes are separated by a small distance, so the left eye and the right eye see two slightly different pictures; it is precisely this slight difference that allows the brain to reconstruct the spatial relationships between objects, from which we judge their distance and size, that is, the sense of depth. For an LED display screen to achieve a naked eye 3D effect, the core technical principle is to build a three-dimensional effect within a two-dimensional video picture by means of the combined relationships of the distance, size, shadow and perspective of objects. The shadows in the video background can serve as static three-dimensional reference lines, constructing a sense of spatial layering that follows the viewing angle, achieving the naked eye 3D visual effect and creating an immersive experience.
By layering technology with creativity, trends with commerce, and interaction with experience, real-time interaction between people on site and the naked eye 3D virtual picture shortens the distance between the media landmark and the public, further improving citizens' participation and enjoyment and successfully drawing traffic to surrounding business districts and businesses. Whether as an urban landscape landmark, a city business card or an internet-famous attraction, naked eye 3D combined with interactive experience offers a new approach to digital business marketing through its stereoscopic realism and sense of being in the scene.
Therefore, the invention provides a naked eye 3D interaction system based on action recognition and a display device thereof.
Disclosure of Invention
The invention aims to provide a naked eye 3D interaction system based on action recognition and a display device thereof.
In order to solve the above technical problem, in a first aspect, the technical solution adopted in an embodiment of the present invention is a naked eye 3D interaction system based on action recognition, comprising: a character action sensing device, an action recognition module, a data sending module, a data receiving module, a rendering engine and a display device. The workflow of the system is as follows: the character action sensing device acquires coordinate data of key skeletal points of a human body; the action recognition module analyzes and recognizes the coordinate data of the key skeletal points to obtain the attribute category of the action; the data sending module transmits the attribute category to the data receiving module, which is integrated with the rendering engine; the rendering engine renders the scene and the interactable scene elements placed in it, and after receiving the attribute category data it drives the interactable scene elements to change according to that data; finally, the scene and the changed scene elements are presented through the display device, realizing a real interactive experience.
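The workflow above amounts to a sense-recognize-transmit loop on the sending side, with the rendering engine reacting on the receiving side. The sketch below is illustrative only; the read_key_skeletal_points, classify and send methods are hypothetical stand-ins for the sensing device interface, the action recognition module and the data sending module, none of which are tied to a specific implementation in this embodiment.

    def run_interaction_loop(sensing_device, action_recognizer, data_sender):
        """Sense -> recognize -> transmit; the rendering engine reacts on the receiving side."""
        while True:
            joints = sensing_device.read_key_skeletal_points()   # e.g. {"left_wrist": (x, y, z), ...}
            category = action_recognizer.classify(joints)         # e.g. "waving"
            data_sender.send(category)                            # forwarded to the data receiving module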
Furthermore, the character action sensing device is a device that integrates the function of acquiring coordinate data of key skeletal points of a human body, and its data is acquired through a USB communication interface.
In some embodiments, the coordinate data of key skeletal points acquired by the character action sensing device also includes hand skeletal data, and the action recognition module analyzes and recognizes the coordinate data of the hand skeletal points to obtain the attribute category of the hand action.
In some embodiments, the character action sensing device also includes an image sensor, while the action recognition module includes the function of obtaining coordinate data of key skeletal points of a human body from the image.
Further, the image sensor collects image data containing a person; the action recognition module analyzes the image data to obtain the coordinate data of key skeletal points of the human body, and then analyzes and recognizes that coordinate data to obtain the attribute category of the action.
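As one concrete illustration of the recognition step, a minimal sketch is given below that classifies a single frame of joint coordinates with hand-written rules. It is a toy example only: the joint names, the coordinate convention (y increasing upward, units in metres) and the thresholds are assumptions, and a practical action recognition module would normally use a model trained on labelled action data rather than fixed rules.

    def classify_action(joints):
        """Toy rule-based classifier: joints maps joint name -> (x, y, z), y up."""
        head_y = joints["head"][1]
        hip_y = joints["hip"][1]
        left_wrist_y = joints["left_wrist"][1]
        right_wrist_y = joints["right_wrist"][1]

        if left_wrist_y > head_y or right_wrist_y > head_y:
            return "waving"                 # a hand raised above the head
        if abs(head_y - hip_y) < 0.3:
            return "lying"                  # torso roughly horizontal
        return "standing"

    example_frame = {"head": (0.0, 1.6, 2.0), "hip": (0.0, 1.0, 2.0),
                     "left_wrist": (0.3, 1.8, 2.0), "right_wrist": (-0.3, 1.1, 2.0)}
    print(classify_action(example_frame))   # prints "waving"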
In some embodiments, the data sending module and the data receiving module send and receive data using a transmission protocol specified by the TCP/IP network architecture, or an upper-layer protocol built on top of the TCP/IP network architecture.
In order to solve the above technical problem, in a second aspect, an embodiment of the present invention further provides a display device, comprising: an LED screen for outputting and displaying the scene rendered by the rendering engine; an LED screen frame for supporting and fixing the LED screen; a loudspeaker box fixed on the screen frame for outputting the sound effects of the rendered scene; and a computer host for running the computer programs that implement the functions of the action recognition module, the data sending module, the data receiving module and the rendering engine.
To solve the above technical problem, in a third aspect, an embodiment of the present invention further provides a computer program product, which includes a computer program stored on a computer-readable storage medium, the computer program including program instructions which, when executed by a computer, implement the functions of the respective modules and the rendering engine described in the first aspect.
The beneficial effects of the embodiments of the invention are as follows. Unlike the prior art, the invention provides a naked eye 3D interaction system based on action recognition, comprising: a character action sensing device, an action recognition module, a data sending module, a data receiving module and a rendering engine, together with a display device for presenting the interactable naked eye 3D content. Compared with a traditional naked eye 3D screen that merely plays back video, the naked eye 3D interaction system based on action recognition adds a natural interaction form based on human actions, and thereby brings a more immersive, virtual-reality-like experience to the naked eye 3D presentation.
Drawings
Fig. 1 is a block diagram of a system structure according to an embodiment of the present invention.
Fig. 2 is a structural diagram of a display device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit it in any way. It should be noted that variations and modifications can be made by those skilled in the art without departing from the inventive concept; these all fall within the protection scope of the present invention.
In order to facilitate an understanding of the present application, the present application will be described in more detail below with reference to the accompanying drawings and specific examples. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
As shown in the system structural block diagram of fig. 1, a naked eye 3D interaction system based on action recognition includes an action sensing device a, an action recognition module b, a data sending module c, a data receiving module d, a rendering engine e and a display device f.
The action sensing device a is a professional device for acquiring coordinate data of key skeletal points of a human body. In some embodiments, the action sensing device a may also be an image sensor, a camera or another device that can obtain RGB image information.
In some embodiments, depending on its configuration, the action sensing device a can output coordinate data of the key skeletal points of the whole body, coordinate data of the key skeletal points of the hand, or a combination of the two.
The character action recognition module b is configured to analyze the acquired data and determine the attribute category of the action. In some embodiments, the action recognition module b may also derive the coordinate data of key skeletal points of the human body from image data, and then recognize the attribute category of the action from those coordinates.
The data identified by the character action recognition module b is, in the general sense, a character action attribute category, including but not limited to standing, sitting, lying, walking, running, jumping, waving and the like. In some embodiments, the identified data also includes gesture attribute categories, including but not limited to making a fist, spreading the hand, scissors, extending a single finger, the OK gesture, drawing a circle and other hand actions.
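For clarity, the category labels named above can be written out as two enumerations, one for whole-body actions and one for hand gestures. The exact label strings below are assumptions; the embodiment leaves the label set open-ended.

    from enum import Enum

    class BodyAction(Enum):
        STANDING = "standing"
        SITTING = "sitting"
        LYING = "lying"
        WALKING = "walking"
        RUNNING = "running"
        JUMPING = "jumping"
        WAVING = "waving"

    class HandGesture(Enum):
        FIST = "fist"
        OPEN_PALM = "open_palm"
        SCISSORS = "scissors"
        SINGLE_FINGER = "single_finger"
        OK_SIGN = "ok_sign"
        DRAW_CIRCLE = "draw_circle"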
The data sending module c is configured to send data to the data receiving module d through a protocol defined by the TCP/IP network framework, or through an upper-layer protocol implemented on top of it; the data sending module c and the data receiving module d may run on the same computer host.
The data receiving module d is integrated into the rendering engine e, usually in the form of a plug-in, and the data it receives can interact directly with the data in the rendering engine.
In some embodiments, the data sending module c and the data receiving module d run on different hosts; in that case the two hosts are connected to the same local area network through a router, and the IP addresses and ports of the data sending module c and the data receiving module d are configured under the same network address.
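A minimal sketch of this transmission path using plain TCP sockets is given below. It is one possible realization under stated assumptions: the embodiment permits any protocol of the TCP/IP family or a protocol layered on top of it, and the host address, port number and JSON-lines message format chosen here are illustrative (127.0.0.1 for the same-host case; the receiver's LAN address otherwise).

    import json
    import socket

    HOST, PORT = "127.0.0.1", 9000       # same-host case; use the receiver's LAN IP otherwise

    def send_action(action_category):
        """Data sending module c: push one action category as a JSON line over TCP."""
        with socket.create_connection((HOST, PORT)) as conn:
            conn.sendall((json.dumps({"action": action_category}) + "\n").encode("utf-8"))

    def receive_actions():
        """Data receiving module d: accept connections and yield action categories."""
        with socket.create_server((HOST, PORT)) as server:
            while True:
                conn, _ = server.accept()
                with conn, conn.makefile("r", encoding="utf-8") as stream:
                    for line in stream:
                        yield json.loads(line)["action"]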
The rendering engine e is used for editing the scene and the interactable elements in the scene, and for presenting the rendered image of the scene on the display device.
In some embodiments, the rendering engine e may be the Unreal or Unity game rendering engine; it acquires the data of the data receiving module d through custom code and interacts with the engine's scene elements through the engine's own API calls, so as to change the scene content to be displayed.
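The engine-side logic amounts to mapping each received action category to a state change on one or more interactable scene elements. The sketch below shows that mapping in plain Python for illustration only; in practice this code would live in an engine plug-in written in the engine's own language (C++ for Unreal, C# for Unity), and the element names and state fields used here are hypothetical.

    # Hypothetical scene state: two interactable elements with simple visibility/scale fields.
    SCENE_ELEMENTS = {"lantern": {"visible": True, "scale": 1.0},
                      "fireworks": {"visible": False, "scale": 1.0}}

    # Map received action categories to state changes on the interactable elements.
    ACTION_HANDLERS = {
        "waving":  lambda scene: scene["fireworks"].update(visible=True),
        "jumping": lambda scene: scene["lantern"].update(scale=1.5),
        "fist":    lambda scene: scene["fireworks"].update(visible=False),
    }

    def apply_action(action_category, scene=SCENE_ELEMENTS):
        """Drive the interactable elements according to the received category."""
        handler = ACTION_HANDLERS.get(action_category)
        if handler is not None:
            handler(scene)
        return scene

    apply_action("waving")   # the fireworks element becomes visible in the next rendered frame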
The display device f is the terminal display device on which the picture rendered by the rendering engine e is presented; it is the final device through which the viewer experiences the interaction.
The following describes a display device provided by an embodiment of the present invention with reference to fig. 2. The display device includes an L-shaped LED display screen 1, a computer host 2, a sound box 3 and an LED screen frame 4.
The LED display screen 1 is fixedly mounted on the LED screen frame 4; the computer host 2 is arranged inside the LED screen frame 4, behind the LED display screen 1; and the sound box 3 is mounted on the LED screen frame 4. The computer host 2 is connected to the display interface of the LED display screen 1 through a video output interface, and to the audio interface of the sound box 3 through an audio output interface.
The action sensing device a is installed facing the people viewing the display device and is connected to the computer host 2 through a USB interface.
The computer host 2 runs a computer program that implements the system functions of the action recognition module b, the data sending module c, the data receiving module d and the rendering engine e.
It should be noted that the display device described above is merely illustrative: the components may or may not have the appearance shown in the figures, they may or may not be integrated into a single unit, and some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general-purpose hardware platform, or by hardware alone. Based on this understanding, the foregoing technical solutions may be embodied, in essence or in part, in the form of a software product; the software product may be stored in a computer-readable storage medium and includes instructions for causing at least one computer device to perform the methods described in the various embodiments or portions thereof.

Claims (12)

1. A naked eye 3D interactive system based on motion recognition, comprising:
a character action sensing device, configured to acquire action and gesture data of a person;
an action recognition module, configured to recognize the acquired action and gesture data as specific action category attributes;
a data sending module, configured to transmit the recognized action category attributes;
a data receiving module, integrated with the rendering engine and configured to receive the data into the rendering engine;
a rendering engine, configured to process three-dimensional scene data and to set interactable elements; and
a display device, configured to display the three-dimensional scene data in a naked eye 3D mode.
2. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the character action sensing device can acquire coordinate data of key positions of the skeleton of the person.
3. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the action recognition module can distinguish the action category attributes represented by the data acquired by the character action sensing device, by means of classification training on data sets of different character actions.
4. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the data sending module can transmit data from one computer host to another computer host over a local area network.
5. The data sending module according to claim 4, wherein: data may also be sent to another application on the same host using the local IP address.
6. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the data receiving module is integrated in a rendering engine in a plug-in mode, and the rendering engine can acquire the received data to influence the data of the engine.
7. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the rendering engine can import the three-dimensional model data into the engine and render and display the three-dimensional model data, and can edit and manage the three-dimensional model data.
8. The rendering engine of claim 7, wherein: programs may be developed in a programming language to change the state of the imported three-dimensional model data.
9. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the display device has an L-shaped LED screen and a frame supporting the LED screen.
10. The display device of claim 9, comprising a host computer operable to run said rendering engine.
11. The rendering engine of claim 7, wherein: the rendered result can be displayed on the display device, and the naked eye 3D effect can be seen from a fixed viewing angle.
12. The motion recognition based naked eye 3D interactive system according to claim 1, wherein: the content displayed on the display device may change in different ways as a person makes different actions in front of the character action sensing device.
CN202210937108.8A (priority date 2022-08-05, filing date 2022-08-05): Naked eye 3D interaction system based on action recognition and display device thereof. Publication: CN117555415A, status: Pending.

Priority Applications (1)

Application Number: CN202210937108.8A (publication CN117555415A)
Priority Date: 2022-08-05
Filing Date: 2022-08-05
Title: Naked eye 3D interaction system based on action recognition and display device thereof

Applications Claiming Priority (1)

Application Number: CN202210937108.8A (publication CN117555415A)
Priority Date: 2022-08-05
Filing Date: 2022-08-05
Title: Naked eye 3D interaction system based on action recognition and display device thereof

Publications (1)

Publication Number: CN117555415A
Publication Date: 2024-02-13

Family

ID=89819181

Family Applications (1)

Application Number: CN202210937108.8A (publication CN117555415A, status: Pending)
Title: Naked eye 3D interaction system based on action recognition and display device thereof

Country Status (1)

Country: CN (CN117555415A)


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination