CN115100742A - Metaverse exhibition and display experience system based on mid-air gesture operation - Google Patents


Info

Publication number
CN115100742A
Authority
CN
China
Prior art keywords
gesture
exhibition
universe
space
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210720889.5A
Other languages
Chinese (zh)
Inventor
王爱民 (Wang Aimin)
杨宁 (Yang Ning)
韩东 (Han Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Miaowen Exhibition Service Co., Ltd.
Original Assignee
Shanghai Miaowen Exhibition Service Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Miaowen Exhibition Service Co., Ltd.
Priority to CN202210720889.5A
Publication of CN115100742A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a metaverse exhibition display experience system based on mid-air gesture operation, belonging to the technical field of online virtual exhibition and comprising the following steps: Kinect hardware intervention, in which the movements of exhibition visitors and their arms are sensed in advance through the Kinect hardware, the Kinect hardware including a camera capable of capturing a panoramic view angle; and secondary intervention by the somatosensory controller, in which the Leap Motion emits detection rays through an infrared emitter on the device and generates a three-dimensional surface after collecting the returned signals. The online exhibition hall in this embodiment improves the realism of the online exhibition hall, uses gesture and skeleton recognition sensing equipment to enhance its interactivity, adds a guided-tour function to online exhibition hall visits, and makes the advantages of current online and offline exhibition halls complementary.

Description

Metaverse exhibition display experience system based on mid-air gesture operation
Technical Field
The invention relates to the technical field of online virtual exhibition, in particular to a metaverse exhibition experience system based on mid-air gesture operation.
Background
Exhibition halls are popular and accepted by more and more people, and different exhibition halls around the world are expanding rapidly. With the development of artificial intelligence and the rise of the metaverse, the online virtual exhibition hall, which never closes its doors and is open 24 hours a day, shows greater advantages over the traditional offline exhibition hall with its defects of passive publicity and introduction, space-time limitation, and simple picture-type exhibition: it offers a visualization mode of interconnected data, intelligent operation, content-based immersive user experience, and freedom from space-time limitation.
More importantly, an online exhibition system can solve the problem of passive marketing. At present, online exhibitions are mainly entered through various app entrances into a virtual exhibition hall, which is then toured with a keyboard and mouse or a gamepad. However, the key-press operation mode of these devices deprives visitors of the natural interaction of their hands, and the sense of substitution and immersion is lost. The invention provides a mid-air, non-contact gesture operation mode for the virtual exhibition hall system to realize the exhibition hall guided-tour function.
Disclosure of Invention
The invention aims to provide a metaverse exhibition experience system based on mid-air gesture operation, in which the online virtual exhibition hall retains the natural interaction of human hands and provides a sense of substitution, immersion and similar effects, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the invention provides the following technical scheme:
a meta-universe exhibition and experience system based on air gesture operation comprises Kinect hardware, a somatosensory controller, an operation unit and an imaging unit, and comprises the following steps:
the Kinect hardware intervenes, the actions of exhibition personnel and arms are sensed through the Kinect hardware intervenes in advance, and the Kinect hardware comprises a camera capable of capturing a panoramic view angle;
the body sensing controller is involved for the second time, the Leap Motion sends out detection rays on the equipment through an infrared emitter, and a three-dimensional solid surface is generated after a returned signal is collected;
the arithmetic unit acquires the same joint coordinate point according to the gesture recognition result;
the imaging unit is used for providing attributes for the hand object to reflect the physical characteristics of the detected hand.
As a still further scheme of the invention: the Kinect hardware analyzes the head turning of the audience according to the head position and the completeness of the face, binds the head movement to the displayed focusing part, and binds the two hands to the size of the focused view, the view size changing as the audience opens the telescope gesture.
As a still further scheme of the invention: the somatosensory controller acquires the real space coordinates (x, y, z) of the whole hand in real time; if part of the hand moves out of the effective interaction space, the Kinect Azure device immediately detects the invalid action and prompts the interactor on the large display screen to return to the effective area.
As a still further scheme of the invention: the somatosensory controller captures the subtle end-segment differences between gesture progressions, using AI recognition analysis to split the hand into the joints and bones of the human skeletal structure.
As a still further scheme of the invention: the somatosensory controller performs secondary intervention, carries out matching analysis against the pre-established motion models, obtains the comparison similarity percentage for each prefabricated model, and judges whether the gesture trigger has succeeded according to the magnitude of the percentage value.
As a still further scheme of the invention: the arithmetic unit judges whether the gesture moves left or right according to the change of the same coordinate point within the gesture, the data of the first frame being the initial point coordinates and the data of the second frame being the end point coordinates.
As a still further scheme of the invention: the somatosensory controller assigns an "ID" indicator to the gesture data in the arithmetic unit; the indicator remains unchanged as long as the gesture stays within the visual range of the device, and when the previous frame's data is displaced, rotated or scaled, Leap Motion gives out frame motion factors.
As a still further scheme of the invention: in the imaging unit, the direction and the palm normal are vectors describing the direction of the hand in the Leap Motion coordinate system.
A method for making gesture-matched content for the metaverse exhibition display experience based on mid-air gesture operation comprises the following steps:
1) exhibition item gesture interaction: scene interaction is operated based on a gesture recognition mode; the visitor moves a hand to the scanning button and waits 3 seconds to trigger the mechanical arm to start equipment maintenance;
after the scanning is finished, the position of the equipment needing maintenance is indicated in the scene, and the maintenance operation is selected and executed through gestures;
2) before the visit, the exhibition hall manager enters the drawing mode of the exhibition area tour route through recognition of a special gesture, customizes a tour route, and provides visitors with a reference for the order of the exhibition hall tour;
the special gesture uses two gestures held alternately for 5 seconds to enter the drawing of the tour route, which reduces the false-trigger rate of entering the drawing mode;
after the virtual guided-tour route function is activated by the preset special mid-air gesture, the large screen prompts the user to move the palm within the effective detection area to control the route-indication cursor;
if a careless mis-operation occurs or a main guide point is changed while setting the tour route, another cancel gesture can be made immediately; the cancel gesture is a model preset in the program, and when the program detects the cancel gesture, the cursor recording the tour route becomes an erasing cursor;
3) when recording the tour route, the indication cursor is first moved close to a key tour point by hand movement, and then the real-time camera component of the currently approached key tour point is activated with a preset trigger gesture;
the real-time camera picture of the currently selected key tour point then appears directly on the large screen, so that the interactor can observe the actual situation in the museum in real time without being on site and can experience the related exhibition items in the scene.
As a still further scheme of the invention: with the erasing cursor, the user only needs to move the hand back; the cursor erases the unwanted route, and there is no need to record from scratch.
Compared with the prior art, the invention has the following beneficial effects:
1. The online exhibition hall in this embodiment improves the realism of the online exhibition hall, uses gesture and skeleton recognition sensing equipment to enhance its interactivity, adds a guided-tour function to online exhibition hall visits, and makes the advantages of current online and offline exhibition halls complementary.
2. In the online exhibition hall of this embodiment, the online virtual exhibition hall system uses special operation means, namely gesture recognition and skeleton behavior recognition algorithms, to replace the conventional mouse-and-keyboard operation mode for visiting the virtual exhibition hall.
3. The online exhibition hall of this embodiment defines special gestures for drawing the exhibition hall tour route in real time.
Drawings
FIG. 1 is a block diagram of the B/S architecture connection of the present invention;
FIG. 2 is a block diagram of a human-machine interconnection in the present invention;
FIG. 3 is a schematic view of a camera capturing view according to the present invention;
FIG. 4 is a connection block diagram of the motion sensing controller according to the present invention;
FIG. 5 is a schematic view of a gesture joint mark according to the present invention;
FIG. 6 is a diagram illustrating finger data information according to the present invention;
FIG. 7 is a schematic diagram of a gesture recognition structure according to the present invention;
fig. 8 is a connection block diagram of the navigation interactive system of the present invention.
Detailed Description
Referring to fig. 1, an online virtual exhibition hall generally adopts a B/S architecture; mainstream clients currently include the Chrome and Edge browsers and the WeChat WebKit browser. The server side is placed on a cloud virtual host, and resource request services are provided through a reverse proxy program such as nginx or Apache. After the client browser kernel renders the multimedia resources, the visitor interacts using a keyboard and mouse or a touch screen. The interactive interface must clearly identify several functional buttons, in particular frame-level operations such as returning to the upper menu and entering the currently selected scene, which affect the aesthetics and the consistency of the overall style of the interface and completely break the visual sense of immersive navigation.
For offline manual guided tours, human guides are usually assigned only when important groups visit. Well-trained, conscientious professional guides are few, and guides often rush through the explanation of the exhibits in a few careless words, leaving tourists with a bad impression of perfunctory service; in particular, because guides are limited in number, it is difficult to provide foreign-language explanation for every tourist.
As shown in fig. 2, to better serve tourists, some exhibition halls are equipped with electronic self-service guides. Such intelligent guide machines prevailed earlier in the museums and tourist attractions of developed countries, and in recent years scenic spots and museums in our country have begun to popularize them. With the year-by-year increase of self-guided and casual visitors to domestic scenic spots and museums, voice guide service, as a necessary new type of service facility, has become a highlight of scenic spots and museums, and the latest national standard for the quality grading and evaluation of tourist attractions lists the provision of portable electronic voice explanation as a bonus item for national 4A scenic spots and a required explanation service item for 5A scenic spots. However, this guide mode only passively plays the electronically synthesized or recorded sound preset by the guide system, so the experience for participants is limited: everyone hears exactly the same content, pertinence is lacking, much of the fun is lost, and the guiding efficiency actually achieved is not high.
In summary, the above exhibition methods have the following disadvantages. The traditional offline exhibition hall suffers from passive publicity and introduction, space-time limitation, and simple picture-type exhibition. The online exhibition interaction method is single and is based on two-dimensional color-image recognition technology: a two-dimensional color image is obtained after a scene is shot by an ordinary camera, and the content in the image is then recognized by computer graphics algorithms. Two-dimensional hand-shape recognition can only recognize a few static gesture actions, and these actions need to be preset in advance, so the workload is large.
Based on the above shortcomings in the prior art, a metaverse exhibition display experience system based on mid-air gesture operation is proposed, as shown in fig. 3, comprising the following steps: Kinect hardware intervention, in which the movements of exhibition visitors and their arms are sensed in advance through the Kinect hardware, the hardware including a camera capable of capturing a panoramic view angle; after the Kinect intervenes, three-dimensional modeling is performed on the sensed movements.
In this embodiment, the Kinect has a 12-megapixel live camera and a depth sensor and can reach a 4K display. Using the Kinect's three-dimensional reconstruction function, when a visitor walks toward the screen, the Kinect intervenes in advance, builds a 3D model from the input data, accurately captures the audience member's human skeleton, transmits the signal to the screen, and wakes up the entire system. The Kinect can also capture some arm movements of the audience. For example, when audience members want to look over the full picture, they only need to make a telescope posture with their hands; in this process, the head-turning direction of the audience is analyzed according to the head position and the completeness of the face, the head movement is bound to the displayed focusing part, the two hands are bound to the size of the focused picture, and the view size changes as the audience opens the telescope gesture.
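As a rough illustration of this two-hand binding, the sketch below maps the distance between the tracked left and right hands to a zoom factor for the focused view. It is a minimal sketch: the joint names, the numeric ranges and the linear mapping are assumptions for illustration, not values or APIs taken from the patent or the Kinect SDK.

```python
# Illustrative sketch only: maps the separation of the two tracked hands to a
# zoom factor, as the "telescope" gesture description suggests. The joint names
# and the skeleton-frame structure are assumptions, not the patent's API.
import math
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def zoom_from_hands(skeleton: Dict[str, Vec3],
                    min_sep: float = 0.10,   # metres, hands together (assumed)
                    max_sep: float = 0.60,   # metres, arms spread (assumed)
                    min_zoom: float = 1.0,
                    max_zoom: float = 4.0) -> float:
    """Linearly map hand separation to the focused-view zoom level."""
    sep = math.dist(skeleton["hand_left"], skeleton["hand_right"])
    t = (sep - min_sep) / (max_sep - min_sep)
    t = max(0.0, min(1.0, t))  # clamp to the valid range
    return min_zoom + t * (max_zoom - min_zoom)

# Example frame: the audience member opens the "telescope" gesture
frame = {"hand_left": (-0.2, 1.4, 0.5), "hand_right": (0.2, 1.4, 0.5)}
print(zoom_from_hands(frame))  # -> a zoom level between 1.0 and 4.0
```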
As shown in fig. 4, the somatosensory controller intervenes secondarily, with the Leap Motion sensing specific hand gestures. After the Kinect wakes up the whole system and the human body is captured walking to the platform, the Leap Motion serves as the secondary intervention: it emits detection rays through an infrared emitter on the device, collects the returned signals and generates a three-dimensional surface. Once the hand is placed in the effective detection space, the Leap Motion sensor obtains the real space coordinates (x, y, z) of the whole hand in real time; if the hand moves out of the effective interaction space, the Kinect Azure device immediately detects the invalid action and prompts the interactor on the large screen to return to the effective area, achieving seamless fusion of somatosensory sensing and accurate gesture recognition.
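A minimal sketch of the validity check described above, assuming an axis-aligned interaction box in millimetres around the sensor; the bounds and the prompt text are illustrative and are not taken from the Leap Motion or Azure Kinect SDKs.

```python
# Hedged sketch: detect when the palm leaves the effective interaction space
# and produce the prompt shown on the large screen. Bounds are assumed values.
from typing import Tuple

Vec3 = Tuple[float, float, float]

# Effective detection space as (x, y, z) ranges in mm, an assumed calibration
VALID_BOX = ((-200.0, 200.0), (80.0, 450.0), (-150.0, 150.0))

def hand_in_valid_space(palm: Vec3) -> bool:
    """True when every palm coordinate lies inside the interaction box."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(palm, VALID_BOX))

def check_frame(palm: Vec3) -> str:
    if hand_in_valid_space(palm):
        return "ok"
    # Invalid action detected: ask the interactor to return to the valid area
    return "Please move your hand back into the detection area"

print(check_frame((0.0, 200.0, 0.0)))   # -> ok
print(check_frame((0.0, 600.0, 0.0)))   # -> prompt shown on the large screen
```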
As shown in fig. 5, detection of the complete gesture is realized by fully tracking each joint of both hands and converting the result into a digital signal that is transmitted back to the background program for judgment. To distinguish as many gestures as possible and capture the subtle end-segment differences between gestures, AI recognition analysis is used to split the hand into the joints and bones of the human skeletal structure, starting with the palm portion, which is recognized directly as a sphere in the system.
In fig. 5, each numeral marks one of the tracked hand joints; the correspondence table appears as an image in the original publication.
This facilitates tracking the position of each hand in space, as shown in fig. 6. Each finger also establishes a separate data object according to its name; these objects contain the joint angles, phalanx lengths, rotation angles and spatial coordinates of the corresponding finger. This information is stored in respective buffers and compared with data from other points on the timeline to obtain motion information such as acceleration and relative displacement.
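A sketch of the per-finger data objects and timeline buffers the text describes. The field names, buffer length and velocity formula are assumptions for illustration, not the patent's data model.

```python
# Illustrative per-finger track: samples accumulate in a ring buffer and are
# compared across the timeline to derive motion such as velocity.
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class FingerSample:
    joint_angle: float        # degrees
    phalanx_length: float     # mm
    rotation_angle: float     # degrees
    position: Vec3            # spatial coordinates, mm
    timestamp: float          # seconds

@dataclass
class FingerTrack:
    name: str                 # e.g. "index" (assumed naming)
    buffer: Deque[FingerSample] = field(default_factory=lambda: deque(maxlen=60))

    def push(self, sample: FingerSample) -> None:
        self.buffer.append(sample)

    def velocity(self) -> Vec3:
        """Relative displacement over time between the two newest samples."""
        if len(self.buffer) < 2:
            return (0.0, 0.0, 0.0)
        a, b = self.buffer[-2], self.buffer[-1]
        dt = (b.timestamp - a.timestamp) or 1e-6  # guard against zero dt
        return tuple((p2 - p1) / dt for p1, p2 in zip(a.position, b.position))
```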
Based on this information, matching analysis is performed against the preset motion models, the comparison similarity percentage for each preset model is obtained, and whether the gesture trigger has succeeded can be determined simply by judging the percentage value.
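A hedged sketch of that matching step: compare a captured feature vector with each prefabricated motion model and trigger when the best similarity percentage clears a threshold. The cosine-similarity measure and the 85% threshold are assumed stand-ins; the patent does not specify the metric.

```python
# Illustrative template matching: similarity percentage against preset models.
import math
from typing import Dict, List, Optional

def similarity_percent(sample: List[float], model: List[float]) -> float:
    """Cosine similarity scaled to a 0-100 percentage."""
    dot = sum(a * b for a, b in zip(sample, model))
    norm = math.sqrt(sum(a * a for a in sample)) * math.sqrt(sum(b * b for b in model))
    return 100.0 * dot / norm if norm else 0.0

def match_gesture(sample: List[float],
                  models: Dict[str, List[float]],
                  threshold: float = 85.0) -> Optional[str]:
    scores = {name: similarity_percent(sample, m) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None  # None: trigger failed

models = {"grab": [1.0, 0.2, 0.9], "cancel": [0.1, 1.0, 0.3]}  # assumed models
print(match_gesture([0.95, 0.25, 0.88], models))  # -> "grab"
```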
Three-dimensional gesture recognition technology is used here. By adding Z-axis information, three-dimensional gesture recognition can recognize various hand shapes, gestures and actions, and it is the main direction of current gesture recognition development. However, gesture recognition that contains depth information requires special hardware: it is accomplished with customized industrial sensors and professional optical cameras, together with statistical sample features and deep-learning neural network technology.
As shown in fig. 7, the arithmetic unit takes the same joint coordinate point according to the gesture recognition result. In this embodiment, the data of the first frame to the left of the "0" point in the figure (i.e., the center point of the wrist) is taken as the initial point coordinate and the data of the second frame as the end point coordinate; a left shift or a right shift is then determined from the change of this same coordinate within the gesture.
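A minimal sketch of this left/right judgment: track the wrist-centre point across two frames and compare x coordinates. The 40 mm dead zone is an assumed noise margin, not a value from the patent.

```python
# Illustrative swipe detection from the same joint point across two frames.
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

def swipe_direction(first_frame: Vec3, second_frame: Vec3,
                    dead_zone_mm: float = 40.0) -> Optional[str]:
    """first_frame: initial point coordinates; second_frame: end point."""
    dx = second_frame[0] - first_frame[0]
    if abs(dx) < dead_zone_mm:
        return None            # movement too small to count as a swipe
    return "right" if dx > 0 else "left"

print(swipe_direction((0.0, 180.0, 20.0), (-95.0, 182.0, 22.0)))  # -> "left"
```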
The Leap Motion software assigns each tracked entity a unique ID indicator. As long as the entity remains within the visual range of the device, the ID indicator stays the same; the software analyzes the overall motion, and whenever the previous frame's data is displaced, rotated or scaled, the Leap Motion program gives the frame motion factors based on the motion of that hand (a sketch of how these factors can be consumed follows the list below).
Wherein:
1) Rotation Axis, a direction vector describing the axis of the rotation.
2) Rotation Angle, the angle of rotation in the clockwise direction about the rotation axis (Cartesian coordinate system).
3) Rotation Matrix, the matrix transformation corresponding to the rotation.
4) Scale Factor, a factor describing expansion and contraction.
5) Translation, a vector describing linear motion.
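As referenced above, here is a sketch of how the five frame motion factors could be consumed: a point from the previous frame is rotated (axis-angle via Rodrigues' formula), scaled, and translated into the current frame. This is a pure NumPy stand-in, not the Leap Motion SDK; all numeric inputs are illustrative.

```python
# Illustrative use of frame motion factors: rotate, scale, translate a point.
import numpy as np

def rotation_matrix(axis: np.ndarray, angle: float) -> np.ndarray:
    """Rodrigues' formula: rotation matrix from a unit axis and an angle."""
    axis = axis / np.linalg.norm(axis)
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * k + (1 - np.cos(angle)) * (k @ k)

def apply_motion(point: np.ndarray, axis: np.ndarray, angle: float,
                 scale: float, translation: np.ndarray) -> np.ndarray:
    """Map a previous-frame point into the current frame."""
    return scale * (rotation_matrix(axis, angle) @ point) + translation

p_prev = np.array([10.0, 0.0, 0.0])
p_now = apply_motion(p_prev, axis=np.array([0.0, 1.0, 0.0]),
                     angle=np.pi / 12, scale=1.05,
                     translation=np.array([5.0, 0.0, 0.0]))
print(p_now)
```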
The imaging unit: the hand object provides some attributes to reflect the physical characteristics of the detected hand, wherein:
1. Palm Position, the coordinates of the palm center in the Leap Motion coordinate system, measured in millimeters.
2. Palm Velocity, the speed of palm movement in millimeters per second.
3. Palm Normal, the vector perpendicular to the plane formed by the palm, pointing toward the inside of the palm.
4. Direction, the vector pointing from the palm center toward the fingers.
5. Sphere Center, the center of a sphere fitted to the inner curved surface of the palm (as if the hand were holding a ball).
6. Sphere Radius, the radius of the sphere above; when the shape of the hand changes, the radius changes accordingly.
The direction and the palm normal are vectors describing the orientation of the hand in the Leap Motion coordinate system.
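An illustrative data holder mirroring the six hand-object attributes listed above, with units following the text (mm, mm/s). The field names paraphrase the Leap Motion attribute names and are not a verbatim SDK binding; the grab heuristic is an assumption.

```python
# Hedged sketch of a hand-state record built from the attributes above.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class HandState:
    palm_position: Vec3   # palm centre, mm, Leap Motion coordinate system
    palm_velocity: Vec3   # mm per second
    palm_normal: Vec3     # unit vector perpendicular to the palm plane
    direction: Vec3       # unit vector from palm centre toward the fingers
    sphere_center: Vec3   # centre of the sphere fitted to the curved palm
    sphere_radius: float  # mm; shrinks or grows as the hand closes or opens

    def is_closing(self, grab_radius_mm: float = 45.0) -> bool:
        """Assumed heuristic: a small fitted sphere suggests a grabbing hand."""
        return self.sphere_radius < grab_radius_mm
```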
The content production process matched with the gestures is as follows.
the method comprises the following steps: exhibition item gesture interaction, which is described by taking a content scene maintained by a virtual machine as an example, the scene interaction is operated based on a gesture recognition mode, visitors move hands to a scanning button, wait for 3 seconds to trigger a mechanical arm to start equipment maintenance, prompt the part of equipment needing maintenance in the scene after scanning is completed, and select to execute maintenance operation through gestures.
Step two: gesture drawing of the guide line. Before the visit, the exhibition hall manager enters the drawing mode of the exhibition area tour route through recognition of a special gesture and customizes a tour route, providing visitors with a reference for the order of the exhibition hall tour.
The special gesture uses two gestures held alternately for 5 seconds to enter the drawing of the tour route, which reduces the false-trigger rate of entering the drawing mode.
After the virtual guided-tour route function is activated by the preset special mid-air gesture, the large screen prompts the user to move the palm within the effective detection area to control the route-indication cursor. While the hand moves, the sensor continuously converts the real spatial movement path of the hand into a guide route in the virtual guided-tour three-dimensional program; during this process the program automatically snaps the indicator to each key guide point according to its distance from the hand, then generates regular straight lines and presents them on the large screen.
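A sketch of that route recorder: raw hand positions become route vertices, and the cursor snaps to a key guide point whenever one lies within an assumed snap radius, which is what produces the regular straight segments. The key-point layout and distances are placeholders.

```python
# Illustrative route recording with snapping to key guide points.
import math
from typing import List, Tuple

Vec2 = Tuple[float, float]

KEY_POINTS: List[Vec2] = [(1.0, 2.0), (4.0, 2.5), (6.0, 5.0)]  # assumed layout
SNAP_DIST = 0.5  # metres, assumed snap radius

def snap(cursor: Vec2) -> Vec2:
    nearest = min(KEY_POINTS, key=lambda p: math.dist(p, cursor))
    return nearest if math.dist(nearest, cursor) <= SNAP_DIST else cursor

def record_route(hand_path: List[Vec2]) -> List[Vec2]:
    route: List[Vec2] = []
    for pos in hand_path:
        vertex = snap(pos)
        if not route or route[-1] != vertex:  # skip duplicate vertices
            route.append(vertex)
    return route

print(record_route([(0.9, 1.8), (2.5, 2.2), (3.8, 2.4), (5.9, 4.8)]))
# -> [(1.0, 2.0), (2.5, 2.2), (4.0, 2.5), (6.0, 5.0)]
```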
If a careless mis-operation occurs or a main guide point is changed while setting the tour route, an additional cancel gesture can be made immediately. This gesture is also a model preset in the program; when the program detects the cancel gesture, the cursor recording the tour route becomes an erasing cursor.
The user then only needs to move the hand back, and the cursor erases the unwanted route; there is no need to record from scratch.
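A sketch of that erase behaviour: once the cancel gesture switches the cursor into erase mode, moving the hand back pops route vertices near the cursor instead of appending new ones. Distances are illustrative assumptions.

```python
# Illustrative erase mode: moving the hand back removes nearby route vertices.
import math
from typing import List, Tuple

Vec2 = Tuple[float, float]
ERASE_DIST = 0.5  # assumed reach of the erasing cursor, metres

def step(route: List[Vec2], cursor: Vec2, erase_mode: bool) -> List[Vec2]:
    if erase_mode:
        # remove the last vertex while the cursor is moved back over it
        while route and math.dist(route[-1], cursor) <= ERASE_DIST:
            route.pop()
    else:
        route.append(cursor)
    return route

route = [(1.0, 2.0), (2.5, 2.2), (4.0, 2.5)]
step(route, (4.1, 2.4), erase_mode=True)   # erases the unwanted last vertex
print(route)                               # -> [(1.0, 2.0), (2.5, 2.2)]
```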
After this set of processes is completed, the program records the tour route just drawn and generates related tour suggestions and precautions.
Step three: the gesture virtual guide previews the real scene. When setting a tour route, some visitors may have a strong interest in certain exhibition items in the museum, or hesitate over whether a route is worth visiting; they want to see the real appearance inside the hall in advance and then make a choice that suits them.
For this purpose, the mid-air gesture interactive guide software is connected to the real-time monitoring of key points in the hall, meeting the need to preview the relevant points while setting a guide route.
As shown in fig. 8, while recording the tour route, the indication cursor is first moved close to a key tour point by hand movement, and then the real-time camera component of the currently approached key tour point is activated with the preset trigger gesture.
The real-time camera picture of the currently selected key tour point then appears directly on the large screen, so that the interactor can observe the actual situation in the museum in real time without being on site and can experience the related exhibition items in the scene.
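A final sketch of step three: when the trigger gesture fires near a key tour point, the large screen switches to that point's live camera feed. The camera names, stream URLs and the gesture flag are placeholders, not real endpoints or SDK calls.

```python
# Illustrative camera-preview activation near a key tour point.
import math
from typing import Dict, Optional, Tuple

Vec2 = Tuple[float, float]

CAMERAS: Dict[str, Tuple[Vec2, str]] = {   # key point -> (position, feed URL)
    "bronze_hall": ((1.0, 2.0), "rtsp://example.invalid/bronze"),
    "ceramics_hall": ((4.0, 2.5), "rtsp://example.invalid/ceramics"),
}
ACTIVATE_DIST = 0.5  # assumed activation radius, metres

def preview_feed(cursor: Vec2, trigger_gesture: bool) -> Optional[str]:
    """Return the feed URL to show on the large screen, or None."""
    if not trigger_gesture:
        return None
    for name, (pos, url) in CAMERAS.items():
        if math.dist(pos, cursor) <= ACTIVATE_DIST:
            return url  # live picture of the currently selected key point
    return None

print(preview_feed((0.9, 1.9), trigger_gesture=True))  # -> bronze hall feed
```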
The above description covers only the preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or change that a person skilled in the art can make according to the technical solution of the present invention and its inventive concept, within the technical scope disclosed by the present invention, shall fall within the scope of protection of the present invention.

Claims (10)

1. A metaverse exhibition and display experience system based on mid-air gesture operation, comprising Kinect hardware, a somatosensory controller, an arithmetic unit and an imaging unit, characterized by comprising the following steps:
Kinect hardware intervention: the movements of exhibition visitors and their arms are sensed in advance through the Kinect hardware, which includes a camera capable of capturing a panoramic view angle;
secondary intervention by the somatosensory controller: the Leap Motion emits detection rays through an infrared emitter on the device and generates a three-dimensional surface after collecting the returned signals;
the arithmetic unit acquires the same joint coordinate point according to the gesture recognition result;
the imaging unit provides attributes for the hand object to reflect the physical characteristics of the detected hand.
2. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 1, wherein the Kinect hardware analyzes the head turning of the audience according to the head position and the completeness of the face, binds the head movement to the displayed focusing part, and binds the two hands to the size of the focused view, the view size changing as the audience opens the telescope gesture.
3. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 1, wherein the somatosensory controller acquires the real space coordinates (x, y, z) of the whole hand in real time, and if the hand moves out of the effective interaction space, the Kinect Azure device immediately detects the invalid action and prompts the interactor on the large display screen to return to the effective area.
4. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 1 or 3, wherein the somatosensory controller captures the subtle end-segment differences between gesture progressions, using AI recognition analysis to split the hand into the joints and bones of the human skeletal structure.
5. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 4, wherein the somatosensory controller performs secondary intervention, carries out matching analysis against the pre-established motion models, obtains the comparison similarity percentage for each prefabricated model, and judges whether the gesture trigger has succeeded according to the percentage value.
6. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 1, wherein the arithmetic unit judges whether the gesture moves left or right according to the change of the same coordinate point within the gesture, the data of the first frame being the initial point coordinates and the data of the second frame being the end point coordinates.
7. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 1 or 6, wherein the somatosensory controller assigns an "ID" indicator to the gesture data in the arithmetic unit, the indicator remaining unchanged as long as the gesture stays within the visual range of the device, and Leap Motion giving out frame motion factors when the previous frame's data is displaced, rotated or scaled.
8. The metaverse exhibition and display experience system based on mid-air gesture operation according to claim 1, wherein, in the imaging unit, the direction and the palm normal are vectors describing the direction of the hand in the Leap Motion coordinate system.
9. A method for making gesture-matched content for the metaverse exhibition display experience based on mid-air gesture operation, applied to the metaverse exhibition and display experience system based on mid-air gesture operation according to any one of claims 1 to 8, the method comprising the following steps:
(1) exhibition item gesture interaction: scene interaction is operated based on a gesture recognition mode; the visitor moves a hand to the scanning button and waits 3 seconds to trigger the mechanical arm to start equipment maintenance;
after the scanning is finished, the position of the equipment needing maintenance is indicated in the scene, and the maintenance operation is selected and executed through gestures;
(2) gesture drawing of the guide line: before the visit, the exhibition hall manager enters the drawing mode of the exhibition area tour route through recognition of a special gesture, customizes the tour route, and provides visitors with a reference for the order of the exhibition hall tour;
the special gesture uses two gestures held alternately for 5 seconds to enter the drawing of the tour route, which reduces the false-trigger rate of entering the drawing mode;
after the virtual guided-tour route function is activated by the preset special mid-air gesture, the large screen prompts the user to move the palm within the effective detection area to control the route-indication cursor;
if a careless mis-operation occurs or a main guide point is changed while setting the tour route, another cancel gesture can be made immediately; the cancel gesture is a model preset in the program, and when the program detects the cancel gesture, the cursor recording the tour route becomes an erasing cursor;
(3) when recording the tour route, the indication cursor is first moved close to a key tour point by hand movement, and then the real-time camera component of the currently approached key tour point is activated with a preset trigger gesture;
the real-time camera picture of the currently selected key tour point then appears directly on the large screen, so that the interactor can observe the actual situation in the museum in real time without being on site and can experience the related exhibition items in the scene.
10. The method according to claim 9, wherein with the erasing cursor the user only needs to move the hand back; the cursor erases the unwanted route, and there is no need to record from scratch.
CN202210720889.5A 2022-06-23 2022-06-23 Metaverse exhibition and display experience system based on mid-air gesture operation Pending CN115100742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210720889.5A CN115100742A (en) 2022-06-23 2022-06-23 Metaverse exhibition and display experience system based on mid-air gesture operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210720889.5A CN115100742A (en) 2022-06-23 2022-06-23 Metaverse exhibition and display experience system based on mid-air gesture operation

Publications (1)

Publication Number Publication Date
CN115100742A (en) 2022-09-23

Family

ID=83293255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210720889.5A Pending CN115100742A (en) 2022-06-23 2022-06-23 Metaverse exhibition and display experience system based on mid-air gesture operation

Country Status (1)

Country Link
CN (1) CN115100742A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071531A (en) * 2023-04-03 2023-05-05 山东捷瑞数字科技股份有限公司 Meta universe display method, device, equipment and medium based on digital twin
CN116627260A (en) * 2023-07-24 2023-08-22 成都赛力斯科技有限公司 Method and device for idle operation, computer equipment and storage medium
CN117340931A (en) * 2023-09-21 2024-01-05 北京三月雨文化传播有限责任公司 All-angle autonomous adjustable multimedia real object exhibition device


Similar Documents

Publication Publication Date Title
JP4768196B2 (en) Apparatus and method for pointing a target by image processing without performing three-dimensional modeling
CN115100742A (en) Metaverse exhibition and display experience system based on mid-air gesture operation
US20050206610A1 (en) Computer-"reflected" (avatar) mirror
CN113096252B (en) Multi-movement mechanism fusion method in hybrid enhanced teaching scene
CN103793060B (en) A kind of user interactive system and method
CN105393284B (en) Space engraving based on human body data
CN102222347B (en) Creating range image through wave front coding
CN105279795B (en) Augmented reality system based on 3D marker
CN110045816A (en) Near-eye display and system
EP3729238A1 (en) Authoring and presenting 3d presentations in augmented reality
US8866898B2 (en) Living room movie creation
US20110292036A1 (en) Depth sensor with application interface
Leibe et al. Toward spontaneous interaction with the perceptive workbench
CN106125921A (en) Gaze detection in 3D map environment
CN102262438A (en) Gestures and gesture recognition for manipulating a user-interface
WO2013185714A1 (en) Method, system, and computer for identifying object in augmented reality
CN103038727A (en) Skeletal joint recognition and tracking system
JP2013514585A (en) Camera navigation for presentations
CN105229571A (en) Nature user interface rolls and aims at
CN102222431A (en) Hand language translator based on machine
CN111880659A (en) Virtual character control method and device, equipment and computer readable storage medium
KR20010081193A (en) 3D virtual reality motion capture dance game machine by applying to motion capture method
CN107861629A (en) A kind of practice teaching method based on VR
WO2020145224A1 (en) Video processing device, video processing method and video processing program
Sreejith et al. Real-time hands-free immersive image navigation system using Microsoft Kinect 2.0 and Leap Motion Controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination