CN114489347A - Dynamic sand table and demonstration method and system thereof


Info

Publication number
CN114489347A
CN114489347A (Application No. CN202210336744.5A)
Authority
CN
China
Prior art keywords
sensing
sand table
interactive
image data
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210336744.5A
Other languages
Chinese (zh)
Inventor
郑岱华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Win Vector Technology Co ltd
Original Assignee
Shenzhen Win Vector Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Win Vector Technology Co ltd filed Critical Shenzhen Win Vector Technology Co ltd
Priority to CN202210336744.5A
Publication of CN114489347A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F: DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F 19/00: Advertising or display means not otherwise provided for
    • G09F 19/12: Advertising or display means not otherwise provided for using special optical effects
    • G09F 19/18: Advertising or display means not otherwise provided for using special optical effects involving the use of optical projection means, e.g. projection of images on clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Holography (AREA)

Abstract

The invention discloses a dynamic sand table and a demonstration method and system thereof. The dynamic sand table comprises a holographic system, the holographic system comprising a sand table base and a holographic projector for projecting and imaging a model onto the sand table base, and further comprises a dynamic interaction system. The dynamic interaction system comprises: a position sensing module for delimiting an interactive action area, sensing a human body in the interactive action area, and outputting sensing signals; a camera module for acquiring images of the interactive action area and outputting image data; and a master console connected with the holographic projector, the position sensing module and the camera module, for analyzing the sensing signals and the image data to obtain user action information, and searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector. The application can improve the interaction effect of the holographic sand table.

Description

Dynamic sand table and demonstration method and system thereof
Technical Field
The application relates to a demonstration teaching aid, in particular to a dynamic sand table and a demonstration method and system thereof.
Background
The sand table model, as a proportionally reduced real scene, has long been used in the military field and is now also used to exhibit and display parks and residential communities. Because every step of making a traditional sand table model, from sculpting and laying out the pieces to coloring them, is carried out manually, the price is generally relatively high.
At present, with the development of projection technology, holographic projection sand tables have been developed to replace traditional sand tables. However, most current holographic techniques rely on a sealed or semi-closed projection cabinet structure, which weakens the interaction experience; in particular, when a sand table is used for training demonstrations in exploration and engineering classes, lecturers and students mostly have to operate a mouse on a computer, so the sense of interaction is relatively poor. The present application therefore provides a new technical scheme.
Disclosure of Invention
In order to improve the interaction effect of the holographic sand table, the application provides a dynamic sand table and a demonstration method and system thereof.
In a first aspect, the present application provides a dynamic sand table, which adopts the following technical scheme:
a dynamic sand table comprising a holographic system, the holographic system comprising a sand table base and a holographic projector for projecting images of a model onto the sand table base, further comprising a dynamic interaction system, the dynamic interaction system comprising:
the position sensing module is used for delimiting an interactive action area, sensing a human body in the interactive action area, and outputting sensing signals;
the camera module is used for acquiring images of the interactive action area and outputting image data;
and the master control console is connected with the holographic projector, the position sensing module and the camera module, and is used for analyzing the sensing signals and the image data to obtain user action information, searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector.
Optionally, the position sensing module includes a plurality of groups of laser correlation sensing units, divided at least into a boundary calibration group and a coordinate sensing group;
the laser correlation sensing units of the boundary calibration group are arranged in a staggered manner;
the plurality of laser correlation sensing units of the coordinate sensing group are configured to be arranged in a high-low/left-right order based on the three-dimensional coordinate parameters of the interactive action area.
Optionally, the dynamic interaction system further includes a highlighting unit; the highlighting unit includes a wearing structure and a highlight signal generator fixed on the wearing structure, and the highlight signal generator is provided with an adapted signal switch.
Optionally, the highlight signal generator is configured to emit at least two light signals, the two light signals being generated according to preset lighting logic.
In a second aspect, the present application provides a method for demonstrating a dynamic sand table, which adopts the following technical scheme:
a method for demonstrating a dynamic sand table comprises the following steps:
acquiring sensing signals of a position sensing module and image data of a camera module; and
analyzing the sensing signals and the image data to obtain user action information, and searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector.
Optionally, the analyzing the sensing signals and the image data includes:
recording the sensing signals fed back by the laser correlation sensing units of the boundary calibration group as boundary information A;
recording the sensing signals fed back by the plurality of laser correlation sensing units of the coordinate sensing group as coordinate information B, and generating a coordinate label for each unit according to its three-dimensional coordinate parameters at layout time to obtain labeled coordinate information C;
judging, based on the boundary information A and the coordinate information C, whether the user has entered the interactive action area; if so, judging the user action information to be an interactive triggering action and matching a shooting-acquisition start instruction as the current interactive instruction; if not, judging the user action information to be an interactive stopping action and matching a shooting-acquisition stop instruction as the current interactive instruction.
Optionally, the analyzing the sensing signals and the image data further includes:
generating human body contour data according to the coordinate information C and obtaining human body position data;
extracting a real-time image from the image data;
extracting to obtain an effective action region in the real-time image according to the human body position data and a preset contour extraction rule;
identifying the effective action region, and executing the next step when an optical signal exists; otherwise, returning to the real-time image extraction;
calculating the pixel position of the light signal pattern in the effective action region;
converting the pixel position based on a preset conversion scale to obtain an actual coordinate parameter of the optical signal relative to the projection model;
and identifying user behaviors based on the image data, searching a preset behavior database to obtain a matched interaction instruction, and binding an execution unit to a certain unit of the projection model corresponding to the actual coordinate parameter.
In a third aspect, the present application provides a dynamic sand table system, which adopts the following technical solution:
a dynamic sandbox system comprising a memory and a processor, said memory having stored thereon a computer program capable of being loaded by the processor and executing any one of the presentation methods as described above.
In summary, the present application includes at least the following beneficial technical effect: after a user enters the interactive action area, if the user wants to control a change to a certain building model or the like, one hand activates the highlighting unit to mark the model to be changed while the other hand makes the corresponding action to match an interactive instruction; the holographic system matched with the sand table can thus improve the interactive effect between lecturers, students and the sand table during training demonstrations.
Drawings
FIG. 1 is a schematic structural diagram of a dynamic sand table of the present application;
fig. 2 is a main flow diagram of the demonstration method of the present application.
Detailed Description
The present application is described in further detail below with reference to figures 1-2.
The embodiment of the application discloses a dynamic sand table.
Referring to fig. 1, the dynamic sand table includes a holographic system and a dynamic interaction system, wherein the holographic system includes a sand table base and a holographic projector.
It can be understood that after the sand table base (i.e. the holographic projection cabinet) is set up, a plurality of image areas are pre-divided and reference points are established on the dividing lines. Depending on the type of sand table base, the holographic projector either projects upward from a base built into the cabinet, or hangs from a rod at the top of the display space and projects downward toward the sand table base; that is, a pre-built three-dimensional model is placed onto the sand table base, and the position and focus of the model are adjusted against the reference points so that the three-dimensional model (i.e. the sand table model) displays normally.
The dynamic interaction system is the specific improvement this application makes over the defects of the prior art. Through it, the user achieves a follow-up effect with the holographic sand table: the user makes a certain preset action, and the sand table projection model changes correspondingly, which effectively improves the interaction effect during teaching demonstrations. The dynamic interaction system includes: a position sensing module, a camera module and a master console.
The position sensing module is used for delimiting an interactive action area, sensing a human body in the interactive action area, and outputting sensing signals;
the camera module is used for acquiring images of the interactive action area and outputting image data;
and the master control console is connected with the holographic projector, the position sensing module and the camera module, and is used for analyzing the sensing signals and the image data to obtain user action information, searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector.
In an embodiment of the dynamic sand table, the position sensing module includes a plurality of groups of laser correlation sensing units, divided at least into a boundary calibration group and a coordinate sensing group. Specifically:
referring to fig. 1, in the embodiment, taking an example that an interactive region defined by a position sensing module is quadrilateral in a top view, each corner of the quadrilateral is respectively provided with a vertical slide rail; the opposite surfaces of two adjacent vertical sliding rails are respectively provided with a sliding chute.
The laser correlation sensing units of the boundary calibration group are slidably connected to the vertical slide rails through sliders fitted to the sliding grooves, with the emitting head and the receiving head mounted on two adjacent vertical slide rails respectively, so that the correlation beam forms a boundary line. It can be understood that each slider is fixed to its vertical slide rail by bolts.
The boundary calibration group forms boundary lines because at least one step in the sand table's usage logic depends on whether the user crosses the boundary of the interactive action area; mounting the laser correlation sensing units on vertical slide rails allows the boundary lines to be adjusted to the height distribution of different user groups, meeting various sensing requirements.
In this embodiment, the emitting heads and receiving heads of the laser correlation sensing units of the coordinate sensing group are symmetrically installed on pre-fixed frame bodies on two sides of the quadrilateral. It should be noted that the laser correlation sensing units of the coordinate sensing group are arranged in a high-low/left-right order according to the three-dimensional coordinate parameters of the interactive action area.
Take the intersection of the four corner connecting lines of the quadrilateral (i.e. the interactive action area) as the origin of coordinates: the height direction of the interactive action area is then the Z axis, the length direction the X axis, and the width direction the Y axis. If the minimum unit dimension of the three-dimensional coordinates is 10 cm, a row of laser correlation sensing units can be distributed along the Y axis with an interval of 10 cm between adjacent units, forming one layer; multiple such layers are then arranged along the Z axis, again with a 10 cm interval between adjacent layers. A three-dimensional coordinate sensing mechanism is thus constructed within the interactive action area, and once a user enters it, the user's position can be quickly estimated from the sensing signals of the position sensing module.
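By way of illustration only, the following minimal sketch (in Python) shows how such a grid of interrupted beams could be reduced to a coarse position estimate; the 10 cm spacing and the centroid averaging are assumptions for the example, not limitations of this application.

    import numpy as np

    BEAM_SPACING_CM = 10  # assumed minimum unit dimension of the grid

    def estimate_position(blocked_beams, spacing=BEAM_SPACING_CM):
        """blocked_beams: list of (y_index, z_index) pairs for the beams
        interrupted by the user's body. Returns the centroid (y_cm, z_cm),
        or None when no beam is blocked."""
        if not blocked_beams:
            return None
        idx = np.asarray(blocked_beams, dtype=float)
        return tuple(idx.mean(axis=0) * spacing)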
It should be noted that the interactive action area is the holographic sand table model enlarged by a preset scale (not the building-to-real-entity scale, but the ratio between the area where the user stands and the model the user can interact with), and the two share the same three-dimensional coordinate origin; the user's position relative to the interactive action area can therefore be scaled down to obtain the corresponding position of a proportional human body relative to the holographic sand table.
The camera module includes a plurality of high-definition cameras distributed around the interactive action area; each camera collects image data of the interactive action area and transmits it back through a capture card to the master console (a computer), where it awaits analysis in combination with the sensing signals of the position sensing module.
In an embodiment of the sand table, the sand table further includes a highlighting unit. The highlighting unit includes a wearing structure and a highlight signal generator fixed on the wearing structure; the highlight signal generator is provided with an adapted signal switch and is configured to emit at least two light signals, the two light signals being generated according to preset lighting logic.
The wearing structure can be a ring, a bracelet, a glove or the like; the highlight signal generator is an LED lamp strip embedded in the wearing structure. The LED lamp strip carries a drive control module and a battery, and its power supply loop is provided with a signal switch (i.e. a power switch). Assuming the LED lamp strip blinks alternately in yellow and green at a flashing frequency of 1 second per flash, the light signal can be, for example, 4 yellow flashes plus 4 green flashes.
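As a rough illustration of how the master console might verify this lighting logic from timestamped color detections, consider the following sketch; the pattern (4 yellow flashes then 4 green flashes at roughly 1-second intervals) and the tolerance value are assumptions taken from the example above.

    def matches_light_logic(events, tol=0.25):
        """events: list of (timestamp_s, color) detections, with color in
        {'yellow', 'green'}. Checks for 4 yellow flashes followed by 4
        green flashes spaced roughly 1 second apart."""
        expected = ['yellow'] * 4 + ['green'] * 4
        if len(events) < len(expected):
            return False
        tail = events[-len(expected):]
        colors_ok = [c for _, c in tail] == expected
        gaps = [t2 - t1 for (t1, _), (t2, _) in zip(tail, tail[1:])]
        timing_ok = all(abs(g - 1.0) <= tol for g in gaps)
        return colors_ok and timing_ok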
The highlight signal generator is introduced into the dynamic interaction system because, in use, an interactive instruction for the sand table model must be matched from the user's behavior while an effective interaction point, i.e. the smallest movable unit of the model, is locked onto at the same time. If the palm, a finger or the like were chosen as the index for this effective point, recognition would be difficult across the user's varying movement postures, would cost considerable computation and time, and would carry a high misjudgment probability. The system therefore introduces a highlight signal generator and uses the light signal it emits to quickly calibrate the interaction position on the model; and because there are at least two light signals, misjudgments caused by colors on the user's clothing are reduced.
The demonstration manner of the present dynamic sand table is explained in detail in the next embodiment.
The embodiment of the application also discloses a demonstration method of the dynamic sand table, which is realized based on the dynamic sand table.
Referring to fig. 2, the method for demonstrating a dynamic sand table includes:
acquiring sensing signals of the position sensing module and image data of the camera module; and
analyzing the sensing signals and the image data to obtain user action information, and searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector.
The above analysis of the sensing signal and the image data specifically includes:
s101, recording sensing signals fed back by laser correlation sensing units of the boundary calibration group as boundary information
Figure 29473DEST_PATH_IMAGE001
S102, recording sensing signals fed back by a plurality of laser correlation sensing units of the coordinate sensing group as coordinate information
Figure 220414DEST_PATH_IMAGE002
Generating a coordinate label according to the three-dimensional coordinate parameters of the laser correlation sensing unit during the layout to obtain coordinate information
Figure 352318DEST_PATH_IMAGE003
It is understood that the sensing signal can be simply understood as presence/absence information, and if a human body is detected, the presence or absence information is not detected. Therefore, assuming that the user enters the interactive action area from the left side, the boundary information a is present for a while. Similarly, after the user enters, the sensing unit which detects the human body is reversedFed to its own coordinates
Figure 612398DEST_PATH_IMAGE004
Is as follows.
S103, judging, based on the boundary information A and the coordinate information C, whether the user has entered the interactive action area: when the boundary information A reads 'present' for n seconds and then 'absent', while multiple coordinates (x, y, z) read 'present', the user has entered the interactive action area; otherwise, the user has left.
If the user has entered, the user action information is judged to be an interactive triggering action, and a shooting-acquisition start instruction is matched as the current interactive instruction; if not, the user action information is judged to be an interactive stopping action, and a shooting-acquisition stop instruction is matched as the current interactive instruction.
It can be understood that the above is implemented by configuring the master console with a corresponding computer program. With this arrangement, the camera module can stand by silently while no user is in the interactive action area, reducing energy consumption, and begin collecting image data for analysis promptly once a user enters.
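The entry/exit judgment of S101-S103 could be programmed roughly as in the following sketch, assuming the console samples the boundary and coordinate signals as booleans; the class name and instruction strings are placeholders, not part of this application.

    class EntryDetector:
        """Sketch: a 'present then absent' transition on the boundary
        beams, combined with active interior coordinates, is read as the
        user crossing into the interactive action area."""

        def __init__(self):
            self.boundary_was_present = False

        def update(self, boundary_present, active_coords):
            """boundary_present: True while a boundary beam is blocked.
            active_coords: set of (x, y, z) labels currently 'present'."""
            entered = (self.boundary_was_present and not boundary_present
                       and bool(active_coords))
            self.boundary_was_present = boundary_present
            if entered:
                return "START_CAPTURE"  # interactive triggering action
            if not active_coords:
                return "STOP_CAPTURE"   # interactive stopping action
            return None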
The above analysis of the sensing signal and the image data specifically further includes:
s201, according to the coordinate information
Figure 747976DEST_PATH_IMAGE003
Generating human body contour data and obtaining human body position data.
It can be understood that if the coordinate points in the three-dimensional grid that read 'present' are marked with a distinguishing color, the approximate contour of the human body is obtained; the set of these coordinate points is the human body position data.
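A minimal sketch of this step follows, assuming the 'present' grid labels have already been collected into a set; the bounding box is an assumed convenience for the later cropping step.

    import numpy as np

    def body_position_data(active_coords):
        """active_coords: iterable of (x, y, z) grid labels that read
        'present'. The point set is the coarse body contour; its bounding
        box summarizes the human body position data."""
        pts = np.asarray(sorted(active_coords), dtype=float)
        if pts.size == 0:
            return None, None
        return pts, (pts.min(axis=0), pts.max(axis=0))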
S202, extracting the real-time image from the image data.
Multiple cameras capture multiple pictures at the same moment, and the highlight signal generator worn by the user may be shielded by a body part, so that one or more cameras cannot capture an effective picture; in that case, the images captured by the remaining cameras are extracted.
And S203, extracting the effective action region in the real-time image according to the human body position data and a preset contour extraction rule.
Take as an example a contour extraction rule of the human body position data plus a 2-pixel margin: the image within the boundary of the human body position data expanded by 2 pixels is cropped out as the effective action region. This step, on the one hand, avoids interference from the various colors of the invalid regions and, on the other hand, reduces the processing load of image recognition.
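Assuming the human body position data has already been projected into the image as a pixel box, the cropping could be sketched as follows; the function and parameter names are illustrative only.

    def crop_effective_region(frame, body_box, margin=2):
        """frame: HxWx3 image array; body_box: (x0, y0, x1, y1) pixel
        bounds of the body. Returns the box expanded by `margin` pixels,
        clipped to the frame, as the effective action region."""
        h, w = frame.shape[:2]
        x0, y0, x1, y1 = body_box
        x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
        x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
        return frame[y0:y1, x0:x1]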
S204, identifying the effective action region, and executing the next step when an optical signal exists; otherwise, returning to the real-time image extraction.
S205, calculating the pixel position of the light signal pattern within the effective action region, i.e. binarizing the region and computing the pattern's pixel points; the pixel position is then converted based on a preset conversion scale (the ratio described in the dynamic sand table embodiment) to obtain the actual coordinate parameters of the light signal relative to the projection model.
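A minimal sketch of S205 under stated assumptions: the signal color has already been binarized into a mask, the centroid stands in for the pixel position of the pattern, and cm_per_pixel is an assumed calibration constant standing in for the preset conversion scale.

    import numpy as np

    def signal_model_coords(mask, cm_per_pixel):
        """mask: boolean array, True where the light-signal color was
        detected within the effective action region. Returns the signal's
        actual coordinates relative to the projection model, in cm."""
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        px, py = xs.mean(), ys.mean()  # centroid pixel position
        return px * cm_per_pixel, py * cm_per_pixel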
s206, identifying user behaviors based on the image data, searching a preset behavior database to obtain a matched interaction instruction, and binding an execution unit to a certain unit of the projection model corresponding to the actual coordinate parameter.
That is, the execution unit of the user's current behavior action, such as a certain building model, is calibrated according to the actual coordinate parameters of the light signal relative to the projection model.
It can be understood that the user behavior can be, for example, raising an open palm upright or raising a fist with the arm not wearing the highlight unit; the specific behaviors are determined by a human behavior recognition model trained in advance by staff on collected standard images. The behavior database stores action-interaction instruction pairs in one-to-one correspondence.
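The final lookup-and-bind step might be sketched as below; the action labels and instruction names are hypothetical placeholders, since the real pairs live in the preset behavior database.

    BEHAVIOR_DB = {
        "raise_palm": "ROTATE_MODEL",  # hypothetical action-instruction pair
        "raise_fist": "HIDE_MODEL",    # hypothetical action-instruction pair
    }

    def match_instruction(recognized_action, target_unit):
        """Looks up the interactive instruction matched to the recognized
        behavior and binds it to the model unit located at the light
        signal's actual coordinate parameters."""
        instr = BEHAVIOR_DB.get(recognized_action)
        return (instr, target_unit) if instr else None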
From the above, after a user enters the interactive action area, if the user wants to control a change to a certain building model, one hand activates the highlighting unit to mark the model to be changed while the other hand makes the corresponding action to match an interactive instruction; the holographic system matched with the sand table thus effectively improves the interactive effect.
The embodiment of the application further discloses a dynamic sand table system, which includes a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor to execute the demonstration method described above.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by them; all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.

Claims (8)

1. A dynamic sand table comprising a holographic system, the holographic system comprising a sand table base and a holographic projector for projecting images of a model onto the sand table base, characterized in that it further comprises a dynamic interaction system, the dynamic interaction system comprising:
a position sensing module for delimiting an interactive action area, sensing a human body in the interactive action area, and outputting sensing signals;
a camera module for acquiring images of the interactive action area and outputting image data; and
a master console connected with the holographic projector, the position sensing module and the camera module, for analyzing the sensing signals and the image data to obtain user action information, and searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector.
2. The dynamic sand table of claim 1, wherein: the position sensing module comprises a plurality of groups of laser correlation sensing units, divided at least into a boundary calibration group and a coordinate sensing group;
the laser correlation sensing units of the boundary calibration group are arranged in a staggered manner; and
the plurality of laser correlation sensing units of the coordinate sensing group are configured to be arranged in a high-low/left-right order based on the three-dimensional coordinate parameters of the interactive action area.
3. The dynamic sand table of claim 1, wherein the dynamic interaction system further comprises a highlighting unit, the highlighting unit comprising a wearing structure and a highlight signal generator fixed on the wearing structure, the highlight signal generator being provided with an adapted signal switch.
4. The dynamic sand table of claim 3, wherein the highlight signal generator is configured to emit at least two light signals, the two light signals being generated according to preset lighting logic.
5. A demonstration method based on the dynamic sand table as claimed in claim 4, comprising:
acquiring sensing signals of the position sensing module and image data of the camera module; and
analyzing the sensing signals and the image data to obtain user action information, and searching a preset interaction instruction database according to the user action information to obtain a matched interaction instruction for controlling the holographic projector.
6. The demonstration method as claimed in claim 5, wherein the analyzing the sensing signals and the image data comprises:
recording the sensing signals fed back by the laser correlation sensing units of the boundary calibration group as boundary information A;
recording the sensing signals fed back by the plurality of laser correlation sensing units of the coordinate sensing group as coordinate information B, and generating a coordinate label for each unit according to its three-dimensional coordinate parameters at layout time to obtain labeled coordinate information C;
judging, based on the boundary information A and the coordinate information C, whether the user has entered the interactive action area; if so, judging the user action information to be an interactive triggering action and matching a shooting-acquisition start instruction as the current interactive instruction; if not, judging the user action information to be an interactive stopping action and matching a shooting-acquisition stop instruction as the current interactive instruction.
7. The demonstration method as claimed in claim 6, wherein the analyzing the sensing signals and the image data further comprises:
generating human body contour data according to the coordinate information C and obtaining human body position data;
extracting a real-time image from the image data;
extracting to obtain an effective action region in the real-time image according to the human body position data and a preset contour extraction rule;
identifying the effective action region, and executing the next step when an optical signal exists; otherwise, returning to the real-time image extraction;
calculating the pixel position of the light signal pattern in the effective action region;
converting the pixel position based on a preset conversion scale to obtain an actual coordinate parameter of the optical signal relative to the projection model;
and identifying user behaviors based on the image data, searching a preset behavior database to obtain a matched interaction instruction, and binding an execution unit to a certain unit of the projection model corresponding to the actual coordinate parameter.
8. A dynamic sand table system, characterized by comprising a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor to perform the method as claimed in any one of claims 5 to 7.
CN202210336744.5A 2022-04-01 2022-04-01 Dynamic sand table and demonstration method and system thereof Pending CN114489347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210336744.5A CN114489347A (en) 2022-04-01 2022-04-01 Dynamic sand table and demonstration method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210336744.5A CN114489347A (en) 2022-04-01 2022-04-01 Dynamic sand table and demonstration method and system thereof

Publications (1)

Publication Number Publication Date
CN114489347A 2022-05-13

Family

ID=81487842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210336744.5A Pending CN114489347A (en) 2022-04-01 2022-04-01 Dynamic sand table and demonstration method and system thereof

Country Status (1)

Country Link
CN (1) CN114489347A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201535853U (en) * 2009-04-01 2010-07-28 黄振强 Interactive type sand table system
CN107896508A (en) * 2015-04-25 2018-04-10 肖泉 Multiple target/end points can be used as(Equipment)" method and apparatus of the super UI " architectures of equipment, and correlation technique/system of the gesture input with dynamic context consciousness virtualized towards " modularization " general purpose controller platform and input equipment focusing on people of the integration points of sum
CN109032357A (en) * 2018-08-15 2018-12-18 北京知感科技有限公司 More people's holography desktop interactive systems and method
CN209417968U (en) * 2018-12-07 2019-09-20 天维尔信息科技股份有限公司 A kind of fire-fighting drill electronic sand table based on virtual reality

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278735A (en) * 2023-09-15 2023-12-22 山东锦霖智能科技集团有限公司 Immersive image projection equipment
CN117278735B (en) * 2023-09-15 2024-05-17 山东锦霖智能科技集团有限公司 Immersive image projection equipment

Similar Documents

Publication Publication Date Title
US4843568A (en) Real time perception of and response to the actions of an unencumbered participant/user
CN201535853U (en) Interactive type sand table system
CN105190703A (en) Using photometric stereo for 3D environment modeling
EP0019662A1 (en) Cosmetic apparatus and method
CN104656890A (en) Virtual realistic intelligent projection gesture interaction all-in-one machine
CN105027190A (en) Extramissive spatial imaging digital eye glass for virtual or augmediated vision
CN104035557B (en) Kinect action identification method based on joint activeness
CN111835984B (en) Intelligent light supplementing method and device, electronic equipment and storage medium
CN101681438A (en) System and method for tracking three dimensional objects
CN103930944A (en) Adaptive tracking system for spatial input devices
CN107102736A (en) The method for realizing augmented reality
CN101295442A (en) Non-contact stereo display virtual teaching system
US20110109628A1 (en) Method for producing an effect on virtual objects
CN109816784A (en) The method and system and medium of three-dimensionalreconstruction human body
CN110211222B (en) AR immersion type tour guide method and device, storage medium and terminal equipment
Suzuki et al. Enhancement of gross-motor action recognition for children by CNN with OpenPose
CN115933868B (en) Three-dimensional comprehensive teaching field system of turnover platform and working method thereof
CN108139876B (en) System and method for immersive and interactive multimedia generation
CN114489347A (en) Dynamic sand table and demonstration method and system thereof
CN205028239U (en) Interactive all -in -one of virtual reality intelligence projection gesture
CN104933278B (en) A kind of multi-modal interaction method and system for disfluency rehabilitation training
KR20020028578A (en) Method of displaying and evaluating motion data using in motion game apparatus
CN201226189Y (en) Multimedia virtual teaching equipment
WO2023103145A1 (en) Head pose truth value acquisition method, apparatus and device, and storage medium
CN113989462A (en) Railway signal indoor equipment maintenance system based on augmented reality

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20220513)