CN116954349A - MR metaverse remote psychological consultation AI interactive immersion system - Google Patents

MR metaverse remote psychological consultation AI interactive immersion system

Info

Publication number
CN116954349A
CN116954349A (application CN202210393068.5A)
Authority
CN
China
Prior art keywords
real
face
consultation
time
technology
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210393068.5A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingshi Beijing Technology Co ltd
Original Assignee
Xinanyi Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinanyi Beijing Technology Co ltd filed Critical Xinanyi Beijing Technology Co ltd
Priority to CN202210393068.5A priority Critical patent/CN116954349A/en
Publication of CN116954349A publication Critical patent/CN116954349A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of online remote psychological consultation, and in particular to an MR metaverse remote psychological consultation AI interactive immersion system comprising the following steps. S10: set up the remote dual-end consulting-room environment. S20: the visitor enters the consulting room; using face recognition and AR face-swapping technology, as soon as both parties' cameras start working, the visitor's facial position is automatically locked and tracked, and a face-swap command is automatically triggered by the real-time face-swapping method. S30: after the consultant enters the consulting room, the camera systems of both parties set up in S105 capture the face, locate the facial landmarks, acquire full-body seated images, and matte them in real time into the MR three-dimensional virtual space, so that the cloud figure is effectively composited with the virtual background. The system solves sound and real-time picture transmission with 5G real-time communication and, using MR green-screen real-time matting, places the real picture in a holographic space with a virtual three-dimensional sense of immersion, creating a face-to-face communication effect between consultant and visitor.

Description

MR metaverse remote psychological consultation AI interactive immersion system
Technical Field
The application relates to the technical field of online remote psychological consultation, and in particular to an MR metaverse remote psychological consultation AI interactive immersion system.
Background
Psychological consultation has traditionally been conducted as in-person, face-to-face communication. It is therefore strongly limited by geography and physical space and cannot serve society efficiently.
In recent years, indirect interpersonal communication over networks or mobile phones has appeared on the market, with online consultation carried out through WeChat video, Tencent video conferencing, and similar tools. However, only facial expressions can be seen through the small window of a mobile phone; the other party's full body and limb movements are not visible, so much valuable information in the exchange between the two parties is missed, and nuanced cues such as the visitor's speech, expression, demeanor, and posture cannot be conveyed accurately. The loss of this fine-grained information seriously affects the consultation, and such a shallow, language-only channel cannot achieve the true 1:1 restoration pursued by this application or create the effect of a real face-to-face consultation.
Therefore, while online psychological consultation by mobile-phone video solves the problem of distance, it cannot truly restore life-size figures or an immersive mode of communication, and so falls short of what psychological consultation requires.
Disclosure of Invention
Aiming at the defects and shortcomings of the prior art, the application provides an MR metaverse remote psychological consultation AI interactive immersion system which, through MR real-time matting technology, captures 1:1 full-body images of the consultant and the visitor and projects them to the other side in real time, creating a stereoscopic, face-to-face immersive communication effect; it effectively achieves remote real-time consultation while protecting the visitor's privacy through face-swapping and voice-changing technology.
The MR metaverse remote psychological consultation AI interactive immersion system provided by the application comprises the following steps (a high-level orchestration sketch follows this list):
s10: set up the remote dual-end consulting-room environment through the following sub-steps:
s101: build the image consulting rooms of both parties, and enclose each party's seat within the green screen inside its image consulting room;
s102: build the microphone system: install microphones in both consulting rooms to pick up both parties' speech;
s103: build the lighting system: install lights above the seats in both consulting rooms to produce soft back lighting;
s104: build the projection system: install a projector and curtain walls in each consulting room;
s105: build the imaging system: install a camera in front of the seat of S101;
s20: the visitor enters the consulting room; using face recognition and AR face-swapping technology, as soon as both parties' cameras start working, the visitor's facial position is automatically locked and tracked, and a face-swap command is automatically triggered by the real-time face-swapping method;
s30: after the consultant enters the consulting room, the camera systems of both parties set up in S105 capture the face, locate the facial landmarks, acquire full-body seated images, and matte them in real time into the MR three-dimensional virtual space, so that the cloud figure is effectively composited with the virtual background;
s40: using AI speech recognition and voice conversion built on intelligent speech and deep-learning technology, when the visitor starts speaking in the consulting room, the room's microphone collects the speech signal in real time, performs speech recognition, and migrates the voice to a target timbre while closely preserving the original speaker's tone and rhythm;
s50: using MR cloud real-time rendering, transmit the file to be rendered to the cloud through a 5G communication module, receive the rendered picture file returned by the cloud, and pass it to the remote-rendering auxiliary module for processing and display;
s60: using the projection system of S104 together with bullet-time real-time shooting and modeling synthesis, project the image of the other party onto the wall opposite the seat;
s70: using 5G live-broadcast technology, transmit each party's image and sound data synchronously to the projector in the other party's consulting room and project it onto that room's curtain wall, so that the two 3D figures consult and communicate face to face in real time in the MR three-dimensional virtual space.
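The step sequence above is essentially a per-session pipeline. The following minimal Python sketch shows one way such an orchestration loop could be organized; every function name here is hypothetical and stands in for the technology named in the corresponding step, not for any module defined in this application.

```python
# Minimal orchestration sketch of steps S20-S70 (all function names hypothetical).

def capture_frame(camera_id):          # S105 imaging system
    return {"camera": camera_id, "pixels": None}

def swap_face(frame):                  # S20 face recognition + AR face swap
    frame["face_swapped"] = True
    return frame

def matte_into_mr_space(frame):        # S30 green-screen matting into the MR space
    frame["matted"] = True
    return frame

def convert_voice(audio_chunk):        # S40 speech recognition + voice conversion
    return {"audio": audio_chunk, "timbre": "target"}

def cloud_render(frame):               # S50 MR cloud real-time rendering over 5G
    frame["rendered"] = True
    return frame

def project_to_wall(frame, audio):     # S60/S70 projection + 5G live transport
    print("projecting", frame, "with", audio)

def run_session_tick(camera_id, audio_chunk):
    frame = capture_frame(camera_id)
    frame = swap_face(frame)
    frame = matte_into_mr_space(frame)
    frame = cloud_render(frame)
    audio = convert_voice(audio_chunk)
    project_to_wall(frame, audio)

if __name__ == "__main__":
    run_session_tick(camera_id=0, audio_chunk=b"\x00\x01")
```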
Further, a green screen is arranged on one side of the image consulting room; the green screen is L-shaped, and a seat is placed in front of it; a microphone, a speaker, and a camera are arranged in front of the seat; and a projector is mounted on the upper part of the other side of the image consulting room.
Further, the microphone, speaker, camera, and projector are each connected to the computer control system by wire.
Further, the seat is enclosed within the space covered by the green screen.
The application has the following beneficial effects. The MR metaverse remote psychological consultation AI interactive immersion system solves sound and real-time picture transmission with 5G real-time communication, places the real picture in a holographic space with a virtual three-dimensional sense of immersion through MR green-screen real-time matting, and creates a face-to-face communication effect between consultant and visitor. The people at the two ends of the video link do not need to wear glasses or a helmet; the conversation feels like sitting and chatting across from each other, with full personal detail, making the exchange more direct, more concrete, and more intimate, deeply restoring all of both parties' information and achieving the effect of face-to-face consultation.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of this specification, illustrate the application and together with the description serve to explain it:
FIG. 1 is a topological flow diagram of the present application;
fig. 2 is a configuration diagram of a consulting room in the present application.
Detailed Description
The present application will now be described in detail with reference to the drawings and specific embodiments; the exemplary embodiments and their description are intended only to illustrate the application and are not to be construed as limiting it.
As shown in FIG. 1, the MR metaverse remote psychological consultation AI interactive immersion system of this embodiment comprises the following steps:
s10: set up the remote dual-end consulting-room environment through the following sub-steps:
s101: build the image consulting rooms of both parties, and enclose each party's seat within the green screen inside its image consulting room;
s102: build the microphone system: install microphones in both consulting rooms to pick up both parties' speech;
s103: build the lighting system: install lights above the seats in both consulting rooms to produce soft back lighting;
s104: build the projection system: install a projector and curtain walls in each consulting room;
s105: build the imaging system: install a camera in front of the seat of S101;
s20: the visitor enters the consulting room; using face recognition and AR face-swapping technology, as soon as both parties' cameras start working, the visitor's facial position is automatically locked and tracked, and a face-swap command is automatically triggered by the real-time face-swapping method;
s30: after the consultant enters the consulting room, the camera systems of both parties set up in S105 capture the face, locate the facial landmarks, acquire full-body seated images, and matte them in real time into the MR three-dimensional virtual space, so that the cloud figure is effectively composited with the virtual background;
s40: using AI speech recognition and voice conversion built on intelligent speech and deep-learning technology, when the visitor starts speaking in the consulting room, the room's microphone collects the speech signal in real time, performs speech recognition, and migrates the voice to a target timbre while closely preserving the original speaker's tone and rhythm;
s50: using MR cloud real-time rendering, transmit the file to be rendered to the cloud through a 5G communication module, receive the rendered picture file returned by the cloud, and pass it to the remote-rendering auxiliary module for processing and display;
s60: using the projection system of S104 together with bullet-time real-time shooting and modeling synthesis, project the image of the other party onto the wall opposite the seat;
s70: using 5G live-broadcast technology, transmit each party's image and sound data synchronously to the projector in the other party's consulting room and project it onto that room's curtain wall, so that the two 3D figures consult and communicate face to face in real time in the MR three-dimensional virtual space.
Further, a green screen is arranged on one side of the image consulting room; the green screen is L-shaped, and a seat is placed in front of it; a microphone, a speaker, and a camera are arranged in front of the seat; and a projector is mounted on the upper part of the other side of the image consulting room.
Further, the microphone, speaker, camera, and projector are each connected to the computer control system by wire.
Further, the seat is enclosed within the space covered by the green screen.
The main components of the system of the application are as follows:
(1) The projector displays the picture transmitted from the far end; the camera acquires the sitting-posture picture of the other party, which is matted in real time into the MR three-dimensional virtual space;
(2) The speaker reproduces the far-end sound at the near end, and the microphone collects both parties' spoken communication;
(3) The lighting equipment and the computer rendering system, which faithfully restore the full-body pictures of both parties;
(4) The green-screen space: the green screen is L-shaped or U-shaped so that it conveniently encloses the consultant, and both parties are seated entirely within the area covered by their green screens.
(5) The lighting, which is used to produce soft, reflected illumination. The camera operates at 120 Hz and is fitted with a filter that blocks NIR light. For each captured image, the face is detected and the facial landmarks are located simultaneously. The 2D positions of the eyes, mouth, and ears are determined as weighted combinations of adjacent landmarks, and their 3D positions are then recovered by triangulation (a minimal sketch of this landmark-and-triangulation step follows).
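As an illustration of the landmark step just described, the sketch below (Python with OpenCV and NumPy, not code from this application) computes a 2D eye position as a weighted combination of neighbouring landmarks in each camera view and then triangulates a 3D position from two calibrated views; the projection matrices, landmark indices, and coordinates are placeholder assumptions.

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices of two calibrated cameras; in a real
# setup these come from the calibration step described later in the text.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

def weighted_landmark(landmarks_2d, indices, weights):
    """2D point formed as a weighted combination of adjacent facial landmarks."""
    pts = np.asarray([landmarks_2d[i] for i in indices], dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    return (pts * w[:, None]).sum(axis=0) / w.sum()

def triangulate(pt_left, pt_right):
    """Recover a 3D point from matched 2D observations in the two views."""
    pts4d = cv2.triangulatePoints(
        P_left, P_right,
        np.asarray(pt_left, dtype=np.float64).reshape(2, 1),
        np.asarray(pt_right, dtype=np.float64).reshape(2, 1))
    return (pts4d[:3] / pts4d[3]).ravel()

# Toy landmark observations in each view (placeholder values and indices).
left_view = {36: (210.0, 180.0), 39: (240.0, 182.0)}
right_view = {36: (190.0, 181.0), 39: (220.0, 183.0)}

eye_left = weighted_landmark(left_view, [36, 39], [0.5, 0.5])
eye_right = weighted_landmark(right_view, [36, 39], [0.5, 0.5])
print("triangulated eye position:", triangulate(eye_left, eye_right))
```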
(6) The audio transmission techniques, namely beamforming, reverberation reduction, WebRTC transmission, talker/listener-docked virtual audio synthesis, binaural crosstalk cancellation with crossover combining, and amplitude panning (a small panning sketch follows). Compared with conventional videoconferencing systems, accurate tracking of talkers and listeners is a key factor in achieving realism in the shared space.
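Of the audio techniques listed, amplitude panning is the simplest to illustrate. The sketch below (Python/NumPy, not from this application) applies a constant-power pan law to place a mono voice signal between two loudspeakers; in practice the pan position would follow the tracked position of the remote talker.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """
    Constant-power amplitude panning of a mono signal.

    pan: 0.0 = hard left, 0.5 = centre, 1.0 = hard right.
    Returns an (n_samples, 2) stereo array.
    """
    theta = pan * np.pi / 2.0            # map [0, 1] to [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=1)

if __name__ == "__main__":
    sr = 48000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    voice = 0.1 * np.sin(2 * np.pi * 220 * t)    # placeholder for a voice signal
    stereo = constant_power_pan(voice, pan=0.7)  # talker slightly to the right
    print(stereo.shape)                          # (48000, 2)
```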
The constituent technologies involved in the application are as follows:
(1) MR real-time matting system: this technology is a cloud-processed real-time high-definition matting method. The video image/three-dimensional model is preprocessed in the (Y, U, V) color space; the preprocessed video image/model, in a unified format, is fed as a real-time input source into the cloud directing platform; the information of every pixel of each frame of the input source is scanned; the foreground image is matted out of the real-time input source; and, based on the matting result, the foreground and background images are fused by computation in real time, achieving effective compositing of the cloud figure with the virtual background (a single-frame sketch follows).
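A minimal single-frame version of the (Y, U, V)-based keying described above might look like the following (Python with OpenCV/NumPy). The key colour and threshold are illustrative assumptions, and the cloud-processing pipeline described in the text is not reproduced here.

```python
import cv2
import numpy as np

def chroma_key_yuv(frame_bgr, background_bgr, key_bgr=(64, 177, 0), threshold=60.0):
    """Key out a green screen by U/V distance, then composite over a background."""
    frame_yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    key_patch = np.uint8([[key_bgr]])                       # 1x1 image of the key colour
    key_yuv = cv2.cvtColor(key_patch, cv2.COLOR_BGR2YUV)[0, 0].astype(np.float32)

    # Chrominance distance only: ignoring Y lets shadows on the screen key out too.
    du = frame_yuv[:, :, 1] - key_yuv[1]
    dv = frame_yuv[:, :, 2] - key_yuv[2]
    dist = np.sqrt(du * du + dv * dv)

    alpha = np.clip(dist / threshold, 0.0, 1.0)[:, :, None]  # 0 = screen, 1 = foreground
    background = cv2.resize(background_bgr, (frame_bgr.shape[1], frame_bgr.shape[0]))
    composite = (alpha * frame_bgr.astype(np.float32)
                 + (1.0 - alpha) * background.astype(np.float32))
    return composite.astype(np.uint8)

if __name__ == "__main__":
    cam = np.zeros((720, 1280, 3), np.uint8)
    cam[:] = (64, 177, 0)                       # stand-in frame: bare green screen
    virtual_room = np.zeros((720, 1280, 3), np.uint8)
    virtual_room[:] = (60, 40, 30)              # stand-in virtual background
    cv2.imwrite("composite.png", chroma_key_yuv(cam, virtual_room))
```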
(2) 5G live-broadcast technology: the live-broadcast management cloud platform is connected to the user terminal through a second 5G base station; the platform uses 5G edge computing to offload local traffic and 5G network slicing to ensure that files uploaded from the live-broadcast site are transmitted without interference.
(3) Bullet-time real-time shooting and modeling synthesis technology: bullet-time shooting with an array of multiple cameras yields video and a real-time three-dimensional model; binocular calibration of the cameras produces a calibration module holding each camera's intrinsic and extrinsic parameters (a calibration sketch follows); 3D modeling of the photographed object or human body yields a 3D model, to which texture maps and albedo are restored; and a bullet-time special-effect generation module performs brightness adjustment, correction, and anti-shake processing on the synthesized images and stitches them together. This bullet-time shooting system is freer and more flexible and can generate more striking bullet-time effects.
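The binocular calibration mentioned above is a standard stereo-calibration problem. The sketch below uses OpenCV's chessboard workflow and is not the calibration module of this application; the board size, square size, and image file paths are assumptions, and corresponding left/right images with the board visible in all of them are assumed.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners of the calibration chessboard (assumed)
SQUARE_SIZE = 0.025     # chessboard square edge length in metres (assumed)

def collect_corners(image_paths):
    """Detect chessboard corners in every image; return object/image points."""
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        ok, corners = cv2.findChessboardCorners(gray, BOARD)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    return obj_pts, img_pts, size

left_obj, left_img, size = collect_corners(sorted(glob.glob("calib/left_*.png")))
_, right_img, _ = collect_corners(sorted(glob.glob("calib/right_*.png")))

# Per-camera intrinsics first, then the stereo extrinsics (R, T) between the pair.
_, K1, d1, _, _ = cv2.calibrateCamera(left_obj, left_img, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(left_obj, right_img, size, None, None)
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    left_obj, left_img, right_img, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("stereo reprojection error:", ret)
```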
(4) Face recognition and AR face-swapping technology: as soon as the visitor enters the consulting room and the camera starts working, the system automatically locks and tracks the visitor's facial position, and a face-swap command is automatically triggered by the real-time face-swapping method; face-swapped video is detected with a multi-task learning model, and for each query the face is detected and the modified region is localized (a minimal detect-and-trigger sketch follows). AI face-swapping thus protects the visitor's personal privacy.
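A minimal sketch of the "lock, track, and trigger a face-swap command" behaviour (Python/OpenCV): the face detector is a stock Haar cascade rather than the recognition model described above, and `trigger_face_swap` is a hypothetical callback, not an API defined in this application.

```python
import cv2

def trigger_face_swap(face_box):
    # Hypothetical hook: in the described system this would start the
    # real-time AR face swap on the locked face region.
    print("face-swap command triggered for region", face_box)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)               # consulting-room camera
swap_active = False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0 and not swap_active:
        trigger_face_swap(tuple(faces[0]))   # lock onto the first detected face
        swap_active = True
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```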
(5) AI speech recognition and voice conversion technology: relying on intelligent speech and deep-learning technology, when the visitor enters the consulting room and starts speaking, the room's microphone collects the speech signal in real time, performs speech recognition, and accurately migrates the voice to a target timbre while closely preserving the original speaker's tone and rhythm (a simplified sketch follows). This overcomes the shortcomings of traditional voice changing, achieving high recognizability, naturalness, and fluency; because tone and rhythm are retained, the converted voice keeps more nuance and personality. In this way the visitor's personal privacy is effectively protected.
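Full neural voice conversion is beyond a short example, but the privacy-preserving idea of changing timbre while keeping rhythm can be caricatured with a simple pitch shift (Python with librosa and soundfile). This is a stand-in, not the deep-learning conversion the text describes, and the file paths are placeholders.

```python
import librosa
import soundfile as sf

def disguise_voice(in_path, out_path, semitones=4.0):
    """Shift the pitch of a recording while keeping its timing and rhythm intact."""
    audio, sr = librosa.load(in_path, sr=None, mono=True)
    shifted = librosa.effects.pitch_shift(audio, sr=sr, n_steps=semitones)
    sf.write(out_path, shifted, sr)

if __name__ == "__main__":
    # Placeholder paths for the consulting-room microphone recording.
    disguise_voice("visitor_utterance.wav", "visitor_disguised.wav", semitones=3.0)
```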
(6) MR cloud real-time rendering technology: for the MR real-time composited video picture, the file to be rendered is sent to the cloud through the 5G communication module, the rendered picture file returned by the cloud is received, and it is passed to the remote-rendering auxiliary module for processing and display (a client-side sketch follows). Thanks to the high bandwidth of the 5G network, remote rendering provides a better holographic three-dimensional display of both consultation parties' pictures while greatly reducing the computational load on the local host.
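The cloud-rendering round trip described above reduces to "upload scene data, receive a rendered frame". A sketch of such a client (Python with requests and OpenCV): the endpoint URL, payload format, and file names are assumptions, not anything specified in this application.

```python
import cv2
import numpy as np
import requests

RENDER_ENDPOINT = "https://example.invalid/mr/render"   # hypothetical cloud service

def render_remotely(scene_path, timeout=2.0):
    """Upload a scene file to the cloud renderer and decode the returned frame."""
    with open(scene_path, "rb") as f:
        resp = requests.post(RENDER_ENDPOINT, files={"scene": f}, timeout=timeout)
    resp.raise_for_status()
    buf = np.frombuffer(resp.content, dtype=np.uint8)
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

if __name__ == "__main__":
    frame = render_remotely("consult_scene.bin")     # placeholder scene file
    if frame is not None:
        cv2.imwrite("rendered_frame.png", frame)     # hand off to the display module
```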
In the application, identically configured consulting rooms are established at both ends; each room is fitted with a green screen, lighting, a projector, a camera, and a 5G network, and the accompanying control system integrates the MR real-time matting system, the bullet-time real-time shooting and modeling synthesis system, the face recognition and AR face-swapping system, the audio transmission system, and so on.
In use, the visitor and the counselor each enter their own consulting room. The face recognition and AR face-swapping system starts, controls the room's camera, automatically locks and tracks the visitor's facial position, and automatically triggers a face-swap command through the real-time face-swapping method, so that AI face-swapping protects the visitor's personal privacy. The visitor and the counselor therefore do not meet with their real faces, which facilitates psychological consultation and unimpeded communication.
Next, both parties sit on the seats in their consulting rooms and are enclosed by the green screens. The MR real-time matting system in each room's control system starts working in real time; the image data of the visitor and the counselor are sent in real time over the 5G network to the control system of the other party's room; the foreground and background images are fused on the basis of the matting result and composited in real time, so that the cloud figures are effectively merged with the virtual background; and each control system drives its projector to project onto the room's curtain wall, constructing the 3D immersive interaction setting.
Then the two parties communicate. Sound is collected with the microphone and reproduced with the speaker in each consulting room: the speaker plays the far-end sound at the near end, while the microphone collects both parties' speech and the AI speech recognition and voice conversion technology is applied. When the visitor starts speaking, the room's microphone collects the speech signal in real time, performs speech recognition, accurately migrates the voice to a target timbre, and closely preserves the original speaker's tone and rhythm. This overcomes the shortcomings of traditional voice changing, achieving high recognizability, naturalness, and fluency while keeping the converted voice nuanced and personal, and the visitor's privacy is effectively protected. In effect, the consultation proceeds without the original face or the original voice, which psychologically protects the visitor's privacy and makes it easier for the visitor to open up.
At the same time, the bullet-time real-time shooting and modeling synthesis technology is used: bullet-time shooting with the multi-camera array yields video and a real-time three-dimensional model; the photographed object or person is modeled in 3D, and texture maps and albedo are restored to the model; the bullet-time special-effect generation module performs brightness adjustment, correction, and anti-shake processing on the synthesized images and stitches them together, further improving picture quality.
In this way the consulting rooms at the two ends of the network are brought together into a single consulting place, forming an MR metaverse immersive psychological consultation venue with no sense of distance.
The foregoing description is only of the preferred embodiments of the application, and all changes and modifications that come within the meaning and range of equivalency of the features and concepts described herein are therefore intended to be embraced therein.

Claims (4)

  1. An MR metaverse remote psychological consultation AI interactive immersion system, characterized by comprising the following steps:
    s10: setting up the remote dual-end consulting-room environment through the following sub-steps:
    s101: building the green screens of both parties for real-time matting, and enclosing each party's seat within the green screen inside its image consulting room;
    s102: building the microphone system: installing microphones in both consulting rooms to pick up both parties' speech;
    s103: building the lighting system: installing lights above the seats in both consulting rooms to produce soft back lighting;
    s104: building the projection system: installing a projector and curtain walls in each consulting room;
    s105: building the imaging system: installing a camera in front of the seat of S101;
    s20: the visitor enters the consulting room; using face recognition and AR face-swapping technology, as soon as both parties' cameras start working, the visitor's facial position is automatically locked and tracked, and a face-swap command is automatically triggered by the real-time face-swapping method;
    s30: after the consultant enters the consulting room, the camera systems of both parties set up in S105 capture the face, locate the facial landmarks, acquire full-body seated images, and matte them in real time into the MR three-dimensional virtual space, so that the cloud figure is effectively composited with the virtual background;
    s40: using AI speech recognition and voice conversion built on intelligent speech and deep-learning technology, when the visitor starts speaking in the consulting room, the room's microphone collects the speech signal in real time, performs speech recognition, and migrates the voice to a target timbre while closely preserving the original speaker's tone and rhythm;
    s50: using MR cloud real-time rendering, transmitting the file to be rendered to the cloud through a 5G communication module, receiving the rendered picture file returned by the cloud, and passing it to the remote-rendering auxiliary module for processing and display;
    s60: using the projection system of S104 together with bullet-time real-time shooting and modeling synthesis, projecting the image of the other party onto the wall opposite the seat;
    s70: using 5G live-broadcast technology, transmitting each party's image and sound data synchronously to the projector in the other party's consulting room and projecting it onto that room's curtain wall, so that the two 3D figures consult and communicate face to face in real time in the MR three-dimensional virtual space.
  2. The MR metaverse remote psychological consultation AI interactive immersion system of claim 1, wherein: a green screen is arranged on one side of the image consulting room, the green screen is L-shaped, and a seat is placed in front of it; a microphone, a speaker, and a camera are arranged in front of the seat; and a projector is mounted on the upper part of the other side of the image consulting room.
  3. The MR metaverse remote psychological consultation AI interactive immersion system of claim 2, wherein: the microphone, speaker, camera, and projector are each connected to the computer control system by wire.
  4. The MR metaverse remote psychological consultation AI interactive immersion system of claim 3, wherein: the seat is enclosed within the space covered by the green screen.
CN202210393068.5A 2022-04-15 2022-04-15 MR metaverse remote psychological consultation AI interactive immersion system Pending CN116954349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210393068.5A CN116954349A (en) 2022-04-15 MR metaverse remote psychological consultation AI interactive immersion system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210393068.5A CN116954349A (en) 2022-04-15 MR metaverse remote psychological consultation AI interactive immersion system

Publications (1)

Publication Number Publication Date
CN116954349A (en) 2023-10-27

Family

ID=88453425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210393068.5A Pending CN116954349A (en) MR metaverse remote psychological consultation AI interactive immersion system

Country Status (1)

Country Link
CN (1) CN116954349A (en)

Similar Documents

Publication Publication Date Title
US10880582B2 (en) Three-dimensional telepresence system
CN102084650B (en) Telepresence system, method and video capture device
US8289367B2 (en) Conferencing and stage display of distributed conference participants
US6583808B2 (en) Method and system for stereo videoconferencing
CN101534413B (en) System, method and apparatus for remote representation
TW297985B (en)
US20070182812A1 (en) Panoramic image-based virtual reality/telepresence audio-visual system and method
CN106210703A (en) The utilization of VR environment bust shot camera lens and display packing and system
EP0970584A1 (en) Videoconference system
de Bruijn Application of wave field synthesis in videoconferencing
EP2352290A1 (en) Method and apparatus for matching audio and video signals during a videoconference
CN110333837B (en) Conference system, communication method and device
US9661273B2 (en) Video conference display method and device
CN110324553B (en) Live-action window system based on video communication
CN110324554B (en) Video communication apparatus and method
CN114827517A (en) Projection video conference system and video projection method
Breiteneder et al. TELEPORT—an augmented reality teleconferencing environment
WO2014175876A1 (en) Social television telepresence system and method
JP4501037B2 (en) COMMUNICATION CONTROL SYSTEM, COMMUNICATION DEVICE, AND COMMUNICATION METHOD
CN117041608A (en) Data processing method and storage medium for linking on-line exhibition and off-line exhibition
CN116954349A (en) MR metaverse remote psychological consultation AI interactive immersion system
JPH08256316A (en) Communication conference system
CN110324556B (en) Video communication apparatus and method
JP2002027419A (en) Image terminal device and communication system using the same
CN112565720A (en) 3D projection system based on holographic technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240409

Address after: No. 9-1F-0901, 1st Floor, Building 9, Yihuayuan, 200 meters east of Xiaoyangfang, Yizhuang Town, Daxing District, Beijing, 100026

Applicant after: Xingshi (Beijing) Technology Co.,Ltd.

Country or region after: China

Address before: No. 9-1F-0907, 1st Floor, Building 9, Yihuayuan, 200 meters east of Xiaoyangfang, Yizhuang Town, Daxing District, Beijing, 100023

Applicant before: Xinanyi (Beijing) Technology Co.,Ltd.

Country or region before: China