WO2019119314A1 - A Simulation Sand Table System - Google Patents

A Simulation Sand Table System

Info

Publication number
WO2019119314A1
WO2019119314A1 (PCT/CN2017/117551)
Authority
WO
WIPO (PCT)
Prior art keywords
sandbox
mark
virtual
real
physical
Prior art date
Application number
PCT/CN2017/117551
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
王子南
Original Assignee
王子南
Priority date
Filing date
Publication date
Application filed by 王子南 filed Critical 王子南
Priority to JP2019560437A priority Critical patent/JP7009508B2/ja
Priority to GB1907569.6A priority patent/GB2571853A/en
Priority to PCT/CN2017/117551 priority patent/WO2019119314A1/zh
Priority to KR1020197023916A priority patent/KR102463112B1/ko
Publication of WO2019119314A1 publication Critical patent/WO2019119314A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation

Definitions

  • The invention relates to the field of psychology, and in particular to a simulated sand table (sandbox) system created on the basis of the practice and development of sand table game (sandplay) theory in psychology.
  • Group sandboxes have achieved good results in improving interpersonal interaction within groups and in promoting both group and individual growth.
  • A restricted group sand table is generally used. The so-called restricted group sandbox refers to a group sandbox game played under certain rules and restrictions; the requirements are otherwise the same as for a single-user sandbox.
  • Group members may use any of a variety of random methods, agreed upon in advance, to generate an order of play. All members complete one round at a time, and no verbal or non-verbal communication or interaction between members is allowed during the whole process. The consultant can discover personality characteristics through each member's compliance with the rules.
  • Restricted group sandboxes are suitable for treating couple relationships, family relationships, and relationships between children and parents. The restricted sandbox game is a microcosm of social reality, and each individual's behavioral patterns are reflected in the group sandbox. In a group sandbox game, everyone has their own ideas, but completing a common work requires those ideas to be integrated. At the same time, by observing the sandboxing process, the consultant can note each person's personality characteristics and help them understand those characteristics. Group sandbox games require multiple rounds of production; after repeated adjustment, the final integration is achieved.
  • The focus is on solving the problems of the fixed location of the sand table, the bulk of a physical sandbox, the limited number of participants, the limited number of models, the inflexible scheduling for operators, and the inability to produce sound effects, smells, and other sensory effects during operation.
  • The invention provides a simulation sandbox system that transplants the group sandbox onto electronic devices. This resolves the shortcoming that a consultant cannot effectively attend to many users at the same time, solves the siting problem of the sandbox room, and removes the space constraints of the sandbox room.
  • The virtual sandbox is more flexible in operation than a real sandbox; in the spatial expansion and functional extension of the sandbox, the virtual sandbox is more flexible and richer than the real one.
  • The virtual sandbox no longer has space limitations, the sandbox design can be customized, and the models have a unified presentation style and scale standard, making them more representative. Operating the sandbox adds vibration feedback from the device, and the electronic sandbox can present light and shadow effects that are not available in a physical sandbox.
  • Each model has its own unique attribute actions, the sand table exhibits changes such as day-night rotation and weather, and the same model can be used multiple times or appear in several places in the sand table at once.
  • The operation of single and group sandboxes is no longer limited by the consultant's physical, mental, and organizational capacity.
  • The user can be more immersed in the sandbox operation, which expands and enhances the user's perception and makes it easier to observe and feel the sand table and the models from all angles.
  • Under certain circumstances the user can be freed from the physical constraints of the real world and can operate on the sand table and the models in all directions without restriction.
  • The sand table data statistics that an assistant would compile for a real sandbox are converted into the system, digitizing the data produced by sand table operation.
  • A simulated sandbox system includes a lens device, a controller, a digitizer, a display device, a physical sandbox, and a physical sandbox mark.
  • The lens device, i.e. the device for capturing a real scene, includes a mobile phone, tablet, notebook computer, smart glasses, Bluetooth camera, and the like. The controller includes a motion-capture handle or other handheld device that can map user behavior through sensors. The lens device and controller are connected to the digitizer and the display device, and the real scene is captured through the lens device.
  • The digital processor associated with the lens device and the controller acquires and parses the real-scene data captured by the lens device, digitizes it, and then presents it on the display device via the Internet, a local area network, or another networking method.
  • The digitized data is connected to, and interacts with, other devices on which the present invention is installed.
  • The physical sandbox mark refers to a mark used to start the virtual sandbox mapping when, for various reasons, a real sand table cannot be placed in the real environment. The mark may be a picture, a piece of plastic or cloth, or any other object recognized as a mark by an embodiment of the present invention. The physical sandbox mark is connected to the lens device through optical information transfer.
  • A device certified by an embodiment of the present invention can, if the relevant conditions are satisfied, emit a corresponding harmless gas from its external apparatus.
  • Further, the specific method may be: obtaining the basic parameters of a virtual scent about to appear in the virtual environment; generating a control signal according to those basic parameters together with the layout and performance parameters of the odor generator; and controlling the odor generator according to the control signal so that it produces an odor corresponding to the virtual scent.
  • Obtaining the basic parameters of the virtual scent about to appear in the virtual environment includes: parsing pre-stored virtual data to generate a basic data set; and, when the virtual scent is about to appear, obtaining the corresponding basic parameters from the basic data set according to the time parameter of the upcoming virtual scent.
  • The method further includes: acquiring the real odor parameters in the real environment. Generating the control signal then comprises generating it according to the real odor parameters, the basic parameters, and the layout and performance parameters of the odor generator.
  • Controlling the scent generator according to the control signal so that it generates the odor corresponding to the virtual scent includes: using the control signal to control the odor generator to release the real scent corresponding to the virtual scent, the real scent being volatilized and carried to the user by a fan.
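A minimal sketch of the odor-control computation described above, combining the basic scent parameters with one generator's layout and performance parameters and correcting for the real ambient odor. The class, function names, and the attenuation model are illustrative assumptions, not taken from the embodiment:

```python
from dataclasses import dataclass


@dataclass
class OdorGenerator:
    """Layout and performance parameters of one generator (names assumed)."""
    position: float      # distance from the user, in metres
    max_output: float    # maximum emission rate the hardware supports


def control_signal(target_intensity: float, ambient_intensity: float,
                   gen: OdorGenerator) -> float:
    """Compute one generator's output level so the perceived intensity of
    the virtual scent approaches target_intensity.

    The real-environment odor is subtracted first, mirroring the variant
    of the method that also acquires realistic odor parameters."""
    needed = max(target_intensity - ambient_intensity, 0.0)
    # Inverse-distance compensation (an assumption): farther generators
    # must emit more for the same perceived intensity at the user.
    raw = needed * (1.0 + gen.position)
    return min(raw, gen.max_output)   # clamp to the generator's capability
```

A fan would then volatilize the released scent toward the user, as the text describes.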
  • Further, the method can be: obtaining state information of each object in the virtual reality scene and/or in the tester's environment; generating tactile feedback information according to the state information; and performing tactile feedback on the tester according to the tactile feedback information.
  • Obtaining state information of each object in the virtual reality scene and/or the tester's environment includes: identifying each object in the virtual reality scene and determining the physical state information corresponding to each identified object; and/or monitoring the motion state information of each object in the tester's environment with a preset radar or infrared sensor. The physical state information includes the softness and/or roughness of the object; the motion state information includes position, moving direction, and/or moving speed.
  • Generating the haptic feedback information includes: determining, from the motion state information of each object, the distance between the tester and each object in the tester's environment; if that distance is greater than 0, generating feedback strength information for each object according to a preset correspondence between distance and feedback strength.
  • Performing haptic feedback on the tester according to the haptic feedback information includes: applying to the tester, with a pressure device, a pressure corresponding to the feedback strength information; and/or applying, with a vibration device, a vibration corresponding to the feedback strength information.
  • Generating the haptic feedback information may also include: generating touch sensing information corresponding to each object according to the physical state information of each object in the virtual reality scene displayed on a preset touch screen; the touch sensing information includes the magnitude of force and the distribution area needed to sense the object's softness and/or roughness when it is touched.
  • Performing haptic feedback on the tester then includes: according to the generated touch sensing information, applying corresponding input voltage signals to the transmitting and receiving electrodes in different regions of the touch screen according to a preset matching rule; the differences between the input voltage signals produce a corresponding electrostatic force in each region, so that the softness and/or roughness of the object is sensed through that electrostatic force when it is touched.
  • The method further includes: detecting the pressure with which the tester touches the touch screen; when the detected pressure is greater than a preset threshold, triggering the touch screen to perform tactile feedback on the tester according to the tactile feedback information.
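The proximity branch of the feedback generation above (a distance greater than 0 mapped to a feedback strength via a preset correspondence) can be sketched minimally; the threshold table is an illustrative assumption:

```python
# Preset correspondence between distance (upper bound, metres) and
# feedback strength; the values are illustrative, not from the embodiment.
DISTANCE_TABLE = ((0.5, 1.0), (1.0, 0.6), (2.0, 0.2))


def feedback_strength(distance: float, table=DISTANCE_TABLE) -> float:
    """Map a tester-to-object distance to a pressure/vibration strength.

    Contact (distance <= 0) is handled by the touch-screen path instead,
    so it yields no proximity feedback here."""
    if distance <= 0:
        return 0.0
    for max_dist, strength in table:
        if distance <= max_dist:
            return strength
    return 0.0   # object out of range: no feedback
```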
  • A virtual sandbox appears in the device scene, and the virtual sandbox style is rendered according to the characteristics of the physical sandbox or the physical sandbox mark.
  • When based on the physical sandbox, the virtual sandbox lies entirely within the plane of the physical sandbox but expands into space that cannot be used in a real sandbox. This spatial extent includes the regions above and below the real sandbox, based on its largest flat section.
  • When based on the sandbox mark, the virtual sandbox appears entirely as a virtual construction anchored to the mark.
  • Both the physical sandbox and the sandbox mark serve as reference objects for the viewfinder device. When a physical sandbox and a sandbox mark exist at the same time, the system can switch between them. Compared with the limitations of the physical sandbox, the virtual sandbox has special properties: the appearance of the sky, the ground, and the underground can be changed. The mark-based case differs from the ordinary virtual sandbox in that, after the sandbox is set, its shape can be changed actively or passively, but it lacks the interactive properties of sky, ground, and underground.
  • A virtual sandbox appears in the device scene. The style of the virtual sandbox is displayed according to the physical sandbox or the physical sandbox mark: if the reference is a physical sandbox, all the sand toys are augmented on top of it; if it is a physical sandbox mark, the virtual sandbox extends entirely from the mark to recreate a fully virtual sandbox.
  • When the lens device is directed at the sandbox, the virtual sand table can be observed while holding a sand toy. The user can choose to place or discard the toy: to discard it, release the hand outside the sand table and the toy disappears; to place it, release the hand within the virtual sandbox, and through the device the toy can be seen superimposed on the physical sand table.
  • AR, VR, and other devices can be connected in multi-user sessions via LAN or the Internet.
  • The virtual reality platform receives a communication request message sent by the first virtual reality device. The communication request message includes first view position information and an identifier of the second virtual reality device, where the first view position information is any view position within the panoramic view position information of the first virtual reality device. The platform determines, from the first view position information and the identifier of the second virtual reality device, the corresponding second video region within the panoramic video region of the second virtual reality device; it then acquires the multi-dimensional video corresponding to the second video region and sends it to the first virtual reality device for display.
  • The virtual reality platform includes a processing chip, built into or external to the first or second virtual reality device, for implementing multi-dimensional video communication between the two devices.
  • The method further includes: the virtual reality platform receives registration request messages sent by the first and second virtual reality devices respectively. The registration request message sent by the first virtual reality device includes the correspondence between its panoramic video region and its panoramic view position information, and likewise for the second virtual reality device. The platform establishes and saves, for each device, the correspondence between the device's identifier, its panoramic video region, and its panoramic view position information.
  • Determining the corresponding second video region from the first view position information and the identifier of the second virtual reality device includes: determining, within the panoramic view position information of the second virtual reality device, the second view position information corresponding to the first view position information; and then determining, according to the second view position information, the corresponding second video region within the panoramic video region of the second virtual reality device.
  • The virtual reality platform receives images from the panoramic video regions of the first and second virtual reality devices, and builds the panoramic multi-dimensional video of each device from the images in its panoramic video region.
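The registration and lookup flow above might be sketched as follows; treating the first device's view position as a direct key into the second device's saved correspondence is a simplifying assumption, and all names are illustrative:

```python
class VRPlatform:
    """Minimal sketch of the platform's registration/lookup (names assumed)."""

    def __init__(self):
        # device identifier -> {view position info: video region}
        self.registry = {}

    def register(self, device_id: str, view_to_region: dict) -> None:
        """Save the correspondence between a device's panoramic video
        regions and its panoramic view position information."""
        self.registry[device_id] = dict(view_to_region)

    def video_region_for(self, first_view_position: str,
                         second_device_id: str) -> str:
        """Resolve the second video region from the first device's view
        position and the second device's identifier."""
        return self.registry[second_device_id][first_view_position]
```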
  • The present invention also provides unique voice devices and methods.
  • The equipment includes a voice mobile terminal, a virtual environment terminal, and an external server. The external server communicates with the voice mobile terminal and the virtual environment terminal, and the voice mobile terminal is connected to the virtual environment terminal.
  • The voice mobile terminal includes: a voice collection module, configured to collect the user's voice signals and to preprocess them; a voice recognition module, configured to convert the preprocessed voice signal into text information and to generate corresponding control commands and parameters from that text; a voice emotion feature parameter extraction module, configured to extract emotional features from the speech signal processed by the voice collection module; a storage module for storing the voice recognition data, voice control command database, and voice emotion database updated from the external server; a wireless communication module, for sending the recognized control commands and parameters, or the voice text information and corresponding voice emotions, to the virtual environment terminal, and for communicating with the external server so that the corresponding data packets on the server can be loaded and updated into the storage module; and a processor, configured to process the collected user voice information or to send update commands to the external server so as to load and update the data stored in the storage module.
  • The processor is connected to the voice collection module, the voice recognition module, the voice emotion feature parameter extraction module, the storage module, and the wireless communication module, respectively. The voice collection module is connected to the voice recognition module and the voice emotion feature parameter extraction module, respectively, and the emotional features extracted by the voice emotion feature parameter extraction module are passed on accordingly.
  • the voice collection module is mainly a microphone.
  • the speech recognition module includes a speech feature extraction unit, a speech feature comparison unit, and a comparison result output unit.
  • the speech feature extraction unit is coupled to the speech feature comparison unit, and the speech feature comparison unit is coupled to the comparison result output unit.
  • the speech emotion feature parameter extraction module includes an emotion feature extraction unit, an emotion feature comparison unit, and an emotion feature output unit, and the emotion feature extraction unit is connected to the emotion feature comparison unit, and the emotion feature comparison unit is connected to the emotion feature output unit.
  • the voice playing module includes a tone matching unit and a voice playing unit, and the tone matching unit is connected to the voice playing unit.
  • The display module includes an action matching unit and a display unit, wherein the action matching unit is connected to the display unit.
  • The method includes the following steps: the voice mobile terminal connects to the virtual environment terminal. After the connection succeeds, the processors of the voice mobile terminal and the virtual environment terminal each send a database version query command to the external server, checking whether the versions of the voice recognition data, voice control command database, and voice emotion database stored in the voice mobile terminal's storage module, and of the avatar emotional-expression-and-action model library and the intonation and speech-rate database corresponding to voice emotions stored in the virtual environment terminal's storage unit, are consistent with the external server. If not, the latest version of the data is loaded and updated from the external server into the corresponding storage module and storage unit, so that both hold the latest data. The voice collection module then collects the user's voice signal.
  • The collected speech signal is filtered, quantized, and otherwise preprocessed, then sent to the speech recognition module and the speech emotion feature parameter extraction module. The speech recognition module, using the speech recognition data stored in the storage module, converts the preprocessed speech signal into text form and matches the text against the command data in the voice control command database. If it is a control command, the corresponding control command and parameters are generated and output to the virtual environment terminal for the corresponding control operation.
  • If it is not a control command, it is voice communication information: the speech emotion feature parameter extraction module analyzes the waveform of the preprocessed speech signal, extracts parameters with emotional characteristics, and matches the extracted emotion parameters against the data in the voice emotion database to obtain the corresponding emotional features. The emotional feature information is then mapped to the corresponding words or sentences, and both the emotional features and the mapped words or sentences are transmitted to the virtual environment terminal.
  • The action matching unit of the virtual environment terminal matches the received emotional features against the model library of the avatar's emotional expressions and actions in the storage unit, obtains the emotional expressions and actions corresponding to those features, and displays them through the display unit.
  • The intonation matching unit matches the words or sentences corresponding to the emotional features against the intonation and speech-rate database corresponding to voice emotions, obtaining the intonation and speech rate for those words or sentences, and the voice playing unit plays the corresponding voice communication information with that intonation and speech rate. The voice playing module and the display module play synchronously, enabling the virtual user's multi-person communication as in a real environment.
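The dispatch step of the pipeline above (recognized text matched against the command database, otherwise treated as emotional communication) can be sketched as follows. Speech recognition and emotion extraction are assumed to have already produced `text` and `emotion`, and both lookup tables are purely illustrative stand-ins for the databases the text describes:

```python
# Illustrative stand-ins for the voice control command database and the
# avatar expression / speech-rate model library (not from the embodiment).
COMMANDS = {"rotate sandbox": ("ROTATE", {}), "place model": ("PLACE", {})}
EMOTIONS = {"happy": ("smile", "fast"), "sad": ("frown", "slow")}


def process_utterance(text: str, emotion: str) -> dict:
    """Return either a control command for the virtual environment
    terminal, or communication info with a matched avatar expression and
    speech rate for synchronized display and playback."""
    if text in COMMANDS:
        action, params = COMMANDS[text]
        return {"type": "command", "action": action, "params": params}
    expression, rate = EMOTIONS.get(emotion, ("neutral", "normal"))
    return {"type": "speech", "text": text,
            "expression": expression, "speech_rate": rate}
```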
  • AR development tools include: ARPA, ARLab, DroidAR, Metaio, Wikitude, Vuforia; development platforms include: Android, iOS, Google Glass, Windows PC, Unity, Epson Moverio BT-2000, Vuzix M-100, Optinvent OPA1, PhoneGap, Titanium, Xamarin.
  • VR development tools include: HoloLens Emulator, Google VR SDK, Google VR View, Web VR, Cardboard SDK, Faceshift Studio, A-Frame, Oculus DK2, CryEngine, Destinations Workshop Tools, RealSense SDK, Leap Motion SDK, Kinect SDK, Source Engine, OpenVR SDK, Oculus SDK, Gear VR, Nibiru VR SDK; development platforms include: Android, iOS, Google Glass, Windows PC, Unity, Epson Moverio BT-2000, Vuzix M-100, Optinvent OPA1, PhoneGap, Titanium, Xamarin, Auto Stingray3D, Gamebryo, Virtools.
  • The invention opens and observes the virtual sand table through the lens device; a Bluetooth model device associated with the lens device can be separated from it and can also enhance interactive operation in augmented or virtual reality.
  • A proprietary authentication device of the present invention has a responsive switch that dissipates the corresponding odor.
  • When the invention is opened, the main interface is entered. A person logged into the main interface has one of two identities when operating a sandbox instance: one is the organizer or initiator (hereafter "organizer"); the other is the invitee, an ordinary user. After a user creates a sandbox instance, that user automatically becomes the organizer; other users who enter the sandbox instance are invitees.
  • The organizer can open a sandbox instance with no other users involved; the sandbox instance then becomes a single-user sandbox.
  • The organizer creates a new sandbox instance and sends invitations to people on their own online contact list, or uses LBS (location-based services) to invite nearby people to a multiplayer sandbox.
  • The invitee can request to join a sandbox instance that is in preparation.
  • The organizer can view their existing contact list, and can view a list of nearby online users based on LBS (location-based services). Text, voice, or video communication can be carried out individually with people who have not yet accepted the invitation. The final number of finishing rounds of the sand table can be set, and the sandbox instance can be modified freely before it has started. The preparation interface can be closed or hidden.
  • Both the organizer and the invitees can view a contact's tag, including avatar, name, notes, and other labels; clicking a contact's tag shows more detailed contact information.
  • If the user leaves, the system sets the current status to temporarily away, and restores the previous state when the user performs any interface operation and returns.
  • The organizer can set a sandbox instance for anywhere from one to one hundred participants, and this number can be changed while the instance has not yet started; the change fails if the new number is smaller than the number of people who have already joined.
  • After creating the sandbox instance, the organizer must set its number of operation rounds. The operation round count is usually 5 or 8. Finishing rounds are usually 1 to 5 and can be changed while the sandbox instance has not yet started.
  • In the preparation interface, the system assigns a separate label to each person; the label contains the avatar, name, and an assigned color block. Others can click the label to see more detailed information.
  • The organizer can only invite people who are online. The invitee must not be in a sandbox instance that has already started, but may be in another organizer's preparation interface. After issuing an invitation, the organizer must wait for the invitee's response; the invitee can choose to decline or accept within a certain period, or the system declines the invitation on the invitee's behalf after the waiting time elapses.
  • the organizer can also ask an invitee to leave a sandbox instance that has not yet started.
  • the invitee can wait for the organizer's invitation.
  • The invitee can apply to join an organizer's unstarted sandbox instance, but only while that organizer's instance is in preparation; the organizer must choose to accept or refuse within a certain period, or the system automatically refuses after the timeout.
  • The organizer can receive multiple applications at the same time and may accept them in any order. When the organizer's sandbox instance is full, the remaining applications become invalid.
  • the invitee can communicate with others in text, voice, and video.
  • everyone can communicate with people on the contact list in text, voice, and video.
  • the invitee can leave the sandbox instance in preparation in advance.
  • A sandbox instance started with no other participants becomes a single-user sandbox, and the organizer's identity becomes that of the user who operates the sandbox.
  • The organizer can invite multiple people into the sandbox instance at the same time; the number invited may exceed the instance's prescribed capacity.
  • Invitees enter the sandbox instance in the order in which they accept. Once the sandbox instance is full, invitees who accept later cannot enter, and the organizer can no longer initiate new invitations.
  • An invitee can apply to join the sandbox instances of multiple organizers at the same time; if an organizer's instance is already full, the application cannot be made. The invitee enters the first sandbox instance that accepts them; once inside an organizer's instance, all other pending applications automatically expire. Invitees can likewise be invited by multiple organizers: after entering one sandbox instance, applications already sent and other invitations automatically expire.
  • After entering a sandbox instance's preparation interface, the invitee can view the instance's rounds, view its list of participants, and communicate with others by text, voice, and video.
  • If an invitee goes offline or disconnects after joining a sandbox instance, they automatically leave it, freeing a place in the current instance.
  • The organizer can start the sandbox instance with a countdown wait time. When the countdown ends, everyone's preparation interface closes, and after loading completes they enter the virtual sandbox instance together.
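The capacity and joining rules of the preparation phase described above (1 to 100 participants, later acceptances refused once full, capacity never reducible below current membership) can be condensed into a small sketch; the class and method names are assumptions:

```python
class SandboxInstance:
    """Sketch of the organizer/invitee capacity rules (names assumed)."""

    def __init__(self, organizer: str, capacity: int):
        assert 1 <= capacity <= 100   # instances hold 1 to 100 people
        self.organizer = organizer
        self.capacity = capacity
        self.members = [organizer]    # the organizer occupies a place
        self.started = False

    def join(self, invitee: str) -> bool:
        """Admit an invitee; acceptances after the instance is full fail."""
        if self.started or len(self.members) >= self.capacity:
            return False
        self.members.append(invitee)
        return True

    def set_capacity(self, n: int) -> bool:
        """Change the size before the start; fails if smaller than the
        number of people who have already joined."""
        if self.started or n < len(self.members) or not (1 <= n <= 100):
            return False
        self.capacity = n
        return True
```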
  • VR: virtual reality; AR: augmented reality.
  • The sand cabinet is likewise a virtual cabinet, used for holding the models. All initial placement of models starts by extracting them from the sand cabinet, and some other operations also start from the sand cabinet.
  • All users can view the panorama of the sandbox with the rotate feature, including close-up and distant views, switch between the normal viewing angle and the vertical (top-down) viewing angle, and move around within the allowed range of the virtual sandbox room; active observation and operation of the sandbox are possible at any time. Users can also zoom in or out through the magnifying-glass switch to view part of the sand table.
  • the sandbox and model can be operated in the operating state. Including the placement of the model and the placement of the operation, you need to select the model, then the model will be rendered operational, and the operation, rotation, scaling, and deletion operations can be performed in the operational state of the model.
  • The sand tray can be dug and piled, with accompanying sound effects.
  • The electronic device responds with a vibration effect. When the user digs at a position in the sand tray, water appears after digging to a certain depth; once the water itself reaches a certain depth, further digging at that position has no effect.
  • The handheld device can be rendered as hands or other auxiliary tools.
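The dig-until-water behaviour described above can be modelled as a per-position depth counter. The two thresholds here are illustrative assumptions; the patent does not give concrete values:

```python
WATER_DEPTH = 5   # depth at which water appears (illustrative)
MAX_DEPTH = 8     # depth beyond which digging no longer responds (illustrative)

def dig(depths, pos):
    """Dig once at pos; return (responded, water_visible).

    depths maps a sand-tray position to how deep it has been dug.
    """
    d = depths.get(pos, 0)
    if d >= MAX_DEPTH:
        # The water has reached its limiting depth: ignore further
        # digging at this position, as the text describes.
        return False, True
    depths[pos] = d + 1
    return True, depths[pos] >= WATER_DEPTH
```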
  • Models can be taken from the cabinet in the scene and placed in the sandbox.
  • A model in the sandbox can be edited once placed.
  • After selecting a model to move, the model follows the movement of the finger, hand, or controller.
  • Rotation is performed by a swipe of the finger, a gesture, or the controller.
  • Scaling requires two fingers, hands, or controllers: their movement toward or away from each other is mapped to the model's scale.
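The two-point scaling gesture maps the change in distance between the two fingers or controllers onto the model's scale factor. A minimal 2-D sketch; the clamping limits are assumptions, not specified in the text:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_a, start_b, cur_a, cur_b, base_scale,
                lo=0.1, hi=10.0):
    """Map relative finger/controller movement to a new model scale.

    Moving the two points apart enlarges the model; moving them
    together shrinks it. The result is clamped to [lo, hi].
    """
    d0 = distance(start_a, start_b)
    d1 = distance(cur_a, cur_b)
    if d0 == 0:
        return base_scale   # degenerate gesture: leave scale unchanged
    return min(hi, max(lo, base_scale * d1 / d0))
```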
  • The interface contains a number of small windows equal to the number of invitees plus one.
  • Besides a common window for communicating with everyone, there is a separate window for each individual invitee.
  • The organizer can actively communicate with the invitees by text, voice, and video.
  • After the sandbox instance officially starts, each round shows a prominent confirmation prompt so that the invitees can confirm the start; only after everyone has confirmed does the next step proceed.
  • The confirmation status of all invitees is displayed in the organizer's interface. If someone has not confirmed, the organizer queries that undecided invitee individually.
  • The system randomly assigns an order to the invitees; within this sandbox instance, each person's order then never changes.
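The per-round confirmation gate above can be sketched as a small tracker (hypothetical names; the patent describes only the behaviour):

```python
class RoundConfirmation:
    """Tracks per-round start confirmations from invitees."""
    def __init__(self, invitees):
        self.pending = set(invitees)   # those who have not yet confirmed

    def confirm(self, invitee):
        self.pending.discard(invitee)

    @property
    def all_confirmed(self):
        # Only when everyone has confirmed may the next step proceed.
        return not self.pending

    def unconfirmed(self):
        """Whom the organizer still needs to query individually."""
        return sorted(self.pending)
```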
  • Each invitee's operation time in each round is limited. Before the countdown ends, the invitee may perform either a single effective operation or a continuous sequence of effective operations. After a single operation, the turn automatically passes to the next operator. During a continuous operation, the operator may end the operation before the countdown expires to pass the turn, or wait for the countdown to pass it automatically. If the current round has no further operators, the sandbox instance enters the node state of that round: a separate window appears on each operator's screen asking the operator to supplement the current round's operations with an explanation.
  • The supplementary explanation may take the form of text input, voice, or video.
  • The supplementary explanation is itself time-limited: when its countdown ends, whatever the operator has entered is submitted; the operator may also decline to give a supplementary explanation.
  • The organizer's interface can hide everything else and display only the sandbox instance, and the organizer can take screenshots from any angle before the countdown ends. When the countdown is over, the current round of operations officially ends, and the next round officially begins, repeating within the current sandbox instance.
  • Each operator is prompted to select a favorite area or type of model in the current sandbox instance, and a window appears on the screen for a supplementary explanation of why it is liked. This operation has a countdown; when the countdown ends, the final choice is recorded.
  • The operator may decline to supplement; the supplementary form may be text, voice, or video.
  • Each operator is then prompted to select the model of greatest concern, with a window on the screen for a supplementary explanation of why. This operation has a countdown, and the final selection is recorded when it ends.
  • The operator may decline to supplement; the supplementary form may be text, voice, or video.
  • After selecting the model of greatest concern, each operator is prompted to select the most disliked model or sandbox area, with a window on the screen for explaining why. This operation has a countdown; the final selection is recorded when the countdown ends; the operator may decline to supplement; the supplementary form may be text, voice, or video.
  • After the most-disliked selection is recorded, each operator chooses a self-image, and a window appears on the screen for explaining why this is chosen as the self-image.
  • The supplementary forms may be text, voice, or video.
  • The organizer's interface can hide everything else and display only the sandbox instance, and the organizer can take screenshots from any angle before the countdown ends.
  • After the self-image operation ends, the instance enters the stage of arranging the current sandbox. Following the earlier operator order, each person has a short adjustment period with its own countdown. In this arranging stage the operator may perform any operation on the sandbox instance; the number of arranging rounds is specified by the organizer in the instance's preparation interface.
  • This is the first multi-user, online, networked 3D virtual sandbox that can be operated simultaneously from different places.
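The round flow described above (an operator order that is randomized once and then fixed, turns passing automatically, and a new round beginning when the last operator finishes) can be sketched as follows; names and structure are illustrative assumptions:

```python
import random

class TurnManager:
    """Fixed random operator order with round advancement."""
    def __init__(self, invitees, turn_seconds, seed=None):
        self.order = list(invitees)
        random.Random(seed).shuffle(self.order)  # randomized once, then fixed
        self.turn_seconds = turn_seconds         # per-turn countdown length
        self.round_no = 1
        self.idx = 0

    @property
    def current(self):
        return self.order[self.idx]

    def advance(self):
        """Pass the turn: after a one-shot operation, an early finish,
        or when the countdown expires. Returns True if a new round began."""
        self.idx += 1
        if self.idx == len(self.order):
            # No follow-up operator: the round reaches its node state,
            # operators give supplementary explanations, then the next
            # round officially begins with the same fixed order.
            self.idx = 0
            self.round_no += 1
            return True
        return False
```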
  • Figure 1 is a flow chart showing the operation of the present invention.
  • A simulated sandbox system includes a lens device, a controller, a digital processor, a display device, a physical sandbox mark, and a physical sand-cabinet mark.
  • The lens device includes mobile phones, tablets, notebook computers, smart glasses, Bluetooth cameras, and other devices that can capture a real scene.
  • The controller includes motion-capture handles and other handheld devices that can map user behavior through sensors. The lens device and the controller are connected to the digital processor and the display device, and the real scene is captured through the lens device.
  • The digital processor, associated with the lens device and the controller, acquires and parses the real-scene data captured by the lens device, digitizes it, and presents it on the display device.
  • Through the Internet, a local area network, or other networking methods, the digitized data is exchanged interactively with other devices on which the present invention is installed. The physical sandbox mark addresses the case where, for many reasons, a real sand tray cannot be placed in the real environment: the virtual sand-tray mapping is started from some kind of mark.
  • The mark may be a picture, a piece of plastic, a piece of cloth, or any other object that an embodiment of the present invention can recognize as a mark, and the physical sandbox mark exchanges optical information with the lens device. The physical sand-cabinet mark likewise addresses the case where, for many reasons, a real sand cabinet cannot be placed in the real environment: the virtual sand-cabinet mapping is opened from some kind of mark.
  • This mark, too, may be a picture, a piece of plastic, a piece of cloth, or any other object that an embodiment of the present invention can recognize as a mark.
  • The physical sand-cabinet mark is connected to the lens device for optical information transfer.
  • "Connection" here may be a detachable connection or an integral connection; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium; and it may be internal communication between two elements.
  • The specific meaning of these terms in the present invention can be understood case by case by those skilled in the art. Further, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
  • In a fully virtual environment the system can simulate the real natural environment, giving the operator visual, auditory, tactile, gustatory, olfactory, and kinesthetic perception, so that the operator can have every perception a human being can have.
  • The operator has a sense of presence as in a real environment, and the feedback and responses produced when interacting with the virtual environment give a sense of operation and interactivity.
  • The virtual environment has characteristics of its own that require no operator participation: autonomy and diversity. All information in the virtual sandbox is digitized.
  • Virtual information, that is, sand-tray information (images, sounds, tastes, touch, and so on) that is difficult to experience within a given span of time and space in the real world, is superimposed onto the real sand tray and perceived by the human senses, achieving a sensory experience that transcends reality.
  • The real environment and virtual objects are superimposed in real time on the same sandbox.
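Marker-based anchoring, as with the physical sandbox and sand-cabinet marks, amounts to mapping virtual-sandbox local coordinates through the detected mark's pose. A simplified 2-D sketch; a real AR pipeline would recover a full 6-DoF pose from the camera image, and all names here are assumptions:

```python
import math

def anchor_transform(marker_pos, marker_angle, marker_scale):
    """Return a function mapping virtual-sandbox coordinates (local to
    the mark) into real-world coordinates, given the mark's detected
    2-D pose. marker_angle is in radians."""
    c, s = math.cos(marker_angle), math.sin(marker_angle)

    def to_world(p):
        x, y = p
        # scale, rotate, then translate to the marker's position
        wx = marker_scale * (c * x - s * y) + marker_pos[0]
        wy = marker_scale * (s * x + c * y) + marker_pos[1]
        return wx, wy

    return to_world
```

With this transform, every virtual model placed relative to the mark stays registered to it as the mark's detected pose is updated each frame.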

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
PCT/CN2017/117551 2017-12-20 2017-12-20 一种仿真沙盘系统 WO2019119314A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2019560437A JP7009508B2 (ja) 2017-12-20 2017-12-20 模擬箱庭システム
GB1907569.6A GB2571853A (en) 2017-12-20 2017-12-20 Simulated sandbox system
PCT/CN2017/117551 WO2019119314A1 (zh) 2017-12-20 2017-12-20 一种仿真沙盘系统
KR1020197023916A KR102463112B1 (ko) 2017-12-20 2017-12-20 시뮬레이션 모래상자 시스템

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/117551 WO2019119314A1 (zh) 2017-12-20 2017-12-20 一种仿真沙盘系统

Publications (1)

Publication Number Publication Date
WO2019119314A1 true WO2019119314A1 (zh) 2019-06-27

Family

ID=66992916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117551 WO2019119314A1 (zh) 2017-12-20 2017-12-20 一种仿真沙盘系统

Country Status (4)

Country Link
JP (1) JP7009508B2 (ja)
KR (1) KR102463112B1 (ja)
GB (1) GB2571853A (ja)
WO (1) WO2019119314A1 (ja)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230368794A1 (en) * 2022-05-13 2023-11-16 Sony Interactive Entertainment Inc. Vocal recording and re-creation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102654953A (zh) * 2011-05-20 2012-09-05 上海华博信息服务有限公司 一种基于vr交互模式的沙盘系统及其应用
CN106682329A (zh) * 2016-12-30 2017-05-17 中南大学 虚拟沙盘系统及其数据处理方法
CN107029424A (zh) * 2017-05-10 2017-08-11 北京派希教育科技有限公司 一种用于增强现实的积木搭建系统及方法
CN107239294A (zh) * 2017-07-11 2017-10-10 王子南 一种虚拟沙盘的创建操作方法及其应用
CN107450721A (zh) * 2017-06-28 2017-12-08 丝路视觉科技股份有限公司 一种vr互动方法和系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040037608A (ko) * 2002-10-29 2004-05-07 한국전자통신연구원 정신질환 치료를 위한 가상현실 게임 운영시스템 및 그 방법
JP5350427B2 (ja) * 2011-03-31 2013-11-27 株式会社コナミデジタルエンタテインメント 画像処理装置、画像処理装置の制御方法、及びプログラム
JP5819686B2 (ja) * 2011-09-14 2015-11-24 株式会社バンダイナムコエンターテインメント プログラム及び電子機器
KR20140108436A (ko) * 2013-02-27 2014-09-11 주식회사 멀틱스 증강현실을 이용한 소셜네트워크형 운동 게임 시스템 및 방법


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GONG, WENFEI: "Design and Research for Virtual Reality Sandplay Therapy Based on HTC VIVE", PHILOSOPHY AND HUMANITIES SCIENCES, CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 December 2016 (2016-12-15), pages 18 - 19, ISSN: 1674-0246 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110916687A (zh) * 2019-11-07 2020-03-27 苏志强 一种虚拟沙盘心理分析处理方法及储存介质、系统
CN111724881A (zh) * 2020-06-19 2020-09-29 中国科学院自动化研究所 一种心理沙盘分析方法和系统
CN111724881B (zh) * 2020-06-19 2024-02-23 中国科学院自动化研究所 一种心理沙盘分析方法和系统
CN111973201A (zh) * 2020-07-29 2020-11-24 北京塞欧思科技有限公司 基于眼动交互的多维虚拟心理沙盘智能分析方法及装置
CN112419838A (zh) * 2020-12-07 2021-02-26 肖永忠 一种心理健康教育教具
CN113870680A (zh) * 2021-10-13 2021-12-31 九江学院 一种经济管理用模拟沙盘结构
CN114120047A (zh) * 2022-01-26 2022-03-01 中国科学院自动化研究所 基于视觉分析的物体联结性的判断方法、系统和设备
CN114120047B (zh) * 2022-01-26 2022-04-08 中国科学院自动化研究所 基于视觉分析的物体联结性的判断方法、系统和设备

Also Published As

Publication number Publication date
GB2571853A (en) 2019-09-11
JP2020508822A (ja) 2020-03-26
GB201907569D0 (en) 2019-07-10
JP7009508B2 (ja) 2022-01-25
KR20200097637A (ko) 2020-08-19
KR102463112B1 (ko) 2022-11-04

Similar Documents

Publication Publication Date Title
WO2019119314A1 (zh) 一种仿真沙盘系统
US10846520B2 (en) Simulated sandtray system
JP6902683B2 (ja) 仮想ロボットのインタラクション方法、装置、記憶媒体及び電子機器
KR101918262B1 (ko) 혼합 현실 서비스 제공 방법 및 시스템
US10699461B2 (en) Telepresence of multiple users in interactive virtual space
KR101505060B1 (ko) 가상 현실 연동 서비스 제공 시스템 및 그 방법
US9244533B2 (en) Camera navigation for presentations
CN104170318B (zh) 使用交互化身的通信
JP7408792B2 (ja) シーンのインタラクション方法及び装置、電子機器並びにコンピュータプログラム
WO2022227408A1 (zh) 一种虚拟现实交互方法、设备及系统
JP2022549853A (ja) 共有空間内の個々の視認
US11334165B1 (en) Augmented reality glasses images in midair having a feel when touched
CN111643899A (zh) 一种虚拟物品显示方法、装置、电子设备和存储介质
TW201351202A (zh) 虛擬互動溝通方法
EP3864575A1 (en) Systems and methods for virtual and augmented reality
Sénécal et al. Modelling life through time: cultural heritage case studies
Rouanet et al. A robotic game to evaluate interfaces used to show and teach visual objects to a robot in real world condition
Lin et al. CATtalk: An IoT-based interactive art development platform
WO2023232103A1 (zh) 一种观影互动方法、装置及计算机可读存储介质
KR20200028830A (ko) 실시간 cg 영상 방송 서비스 시스템
KR20220023005A (ko) 체험 교구를 사용하는 실감형 인터렉티브 에듀테인먼트 시스템
Zhang et al. Interaction Design for Room Escape Virtual Reality Games
US20230218984A1 (en) Methods and systems for interactive gaming platform scene generation utilizing captured visual data and artificial intelligence-generated environment
Parker Theater as virtual reality
US20230393648A1 (en) System for multi-user collaboration within a virtual reality environment

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 201907569

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20171220

ENP Entry into the national phase

Ref document number: 2019560437

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17935496

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17935496

Country of ref document: EP

Kind code of ref document: A1